mode 100644 index 0000000000000000000000000000000000000000..078771d6223031696290b388212e093948d8a3ae --- /dev/null +++ b/data/2309.02165.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:44d8724d7e62e8fb24df8f9933a022bc97d89379198a3e145adbcfc62c3ff975 +size 775542 diff --git a/data/2309.02685.png b/data/2309.02685.png new file mode 100644 index 0000000000000000000000000000000000000000..4fbd90623e6552c27aa646cee116f2e0992f3416 --- /dev/null +++ b/data/2309.02685.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4957945f1ad5a0dc5e11abe530ba57484d3e0391c7fbb46e6aa15734750925b5 +size 739471 diff --git a/data/2309.03185.png b/data/2309.03185.png new file mode 100644 index 0000000000000000000000000000000000000000..9e5ec4fd957e39f583cd0dd7ab173c996b2ef55c --- /dev/null +++ b/data/2309.03185.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:458ae13e99afe1a6672a981432f6c5fa5453d5a902c7269f1027f55081615b65 +size 1344613 diff --git a/data/2309.03542.png b/data/2309.03542.png new file mode 100644 index 0000000000000000000000000000000000000000..c4fe6dd9c0b3f835b22f53b725d356bbe66a6782 --- /dev/null +++ b/data/2309.03542.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7403b8b0bf56308fe85c09b4489ec2889768fe3dbdae94ab9051fe8d2dde1c8e +size 540559 diff --git a/data/2309.03895.png b/data/2309.03895.png new file mode 100644 index 0000000000000000000000000000000000000000..be6c687d0499846bade23461e37d015220d8de92 --- /dev/null +++ b/data/2309.03895.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7632cbaf84d8e7f12237358183b63eb9ecc396568627ae6bbee7efb1cf96aceb +size 1586091 diff --git a/data/2309.04228.png b/data/2309.04228.png new file mode 100644 index 0000000000000000000000000000000000000000..0c36ee4a524569f5e86324239e80f022ade43c07 --- /dev/null +++ b/data/2309.04228.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d212eb161bc362c63621fb7cf463d50c46e799a134c4803be9b3ad9aea7fd60 +size 1532633 diff --git a/data/2309.04437.png b/data/2309.04437.png new file mode 100644 index 0000000000000000000000000000000000000000..63093f889647159fed41a4b1ed8215c84cad7189 --- /dev/null +++ b/data/2309.04437.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ae6d08ee442411a087e28dc40b90eb2c864560e7374c58b7f025830a90450b48 +size 795661 diff --git a/data/2309.04506.png b/data/2309.04506.png new file mode 100644 index 0000000000000000000000000000000000000000..2d94c1cf1d2261c21d83964bf83edd486f4965da --- /dev/null +++ b/data/2309.04506.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:484ef776f8195ccebdbc0f0ad9695175bf08394bc5165c14e415b48577ae14a0 +size 746965 diff --git a/data/2309.05073.png b/data/2309.05073.png new file mode 100644 index 0000000000000000000000000000000000000000..e4a173909e9842127ec8fe03ca2b4a3aa8d04b63 --- /dev/null +++ b/data/2309.05073.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d218e580816abd7396affa705fc17578e87bbaa955226c1ea2c15fa0f5903060 +size 1506405 diff --git a/data/2309.05203.png b/data/2309.05203.png new file mode 100644 index 0000000000000000000000000000000000000000..896ea714e61309bf861ed8f0a32235412a865b0d --- /dev/null +++ b/data/2309.05203.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9c6cf9d35a1b0ee4bd927f7dd481d49ea5a42677c583a61ab2c339a3823dc8c3 +size 823995 diff --git a/data/2309.05950.png b/data/2309.05950.png new file 
mode 100644 index 0000000000000000000000000000000000000000..bf02bfbd2b6089d6d3868b68eab5fb32b54718e8 --- /dev/null +++ b/data/2309.05950.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3fa4d452d27a34c9f61d8a082980d417ba698704973b29f53aca2d015e21e48a +size 817153 diff --git a/data/2309.06255v3.png b/data/2309.06255v3.png new file mode 100644 index 0000000000000000000000000000000000000000..9b8a03c3d9e840d45c82c64a472a13be708a83d6 --- /dev/null +++ b/data/2309.06255v3.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:efbf1d46bb9bd1a300567558679c3cf3e3b32d0ef45e0777c987a11d4bce7fe4 +size 777363 diff --git a/data/2309.07439.png b/data/2309.07439.png new file mode 100644 index 0000000000000000000000000000000000000000..7db4ead3861b888f9087b8c68e099c9b54fac7d4 --- /dev/null +++ b/data/2309.07439.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5cb0c9a4916500947c806ad07be3c80ef78f48ec800bcf7af6910703559b372d +size 713506 diff --git a/data/2309.07849v3.png b/data/2309.07849v3.png new file mode 100644 index 0000000000000000000000000000000000000000..26ba53a5a5730f471b13fb10e32c77675afd6a1d --- /dev/null +++ b/data/2309.07849v3.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b0f1d78706db49920bf81a328260c1cfd50e50d176f89a387feef00d45636a1a +size 1109217 diff --git a/data/2309.07906.png b/data/2309.07906.png new file mode 100644 index 0000000000000000000000000000000000000000..9a557dae75c0d60dc8f4b23de52ef52c358f806f --- /dev/null +++ b/data/2309.07906.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3f536e0ce49f4b55bbbf57b3e1e990c019502df36c77236b96d4eae369abe887 +size 801733 diff --git a/data/2309.09818.png b/data/2309.09818.png new file mode 100644 index 0000000000000000000000000000000000000000..eacccecae582b6e8f4888f05df25a80e459bbb75 --- /dev/null +++ b/data/2309.09818.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:62e2881c6445fe0a5de0603bb1f8d501152fbe19a9c5521a987e1b4339a41e5f +size 1376920 diff --git a/data/2309.10058.png b/data/2309.10058.png new file mode 100644 index 0000000000000000000000000000000000000000..cb8f8a11d9544effb5e0bacdb89cb1634ea06b2e --- /dev/null +++ b/data/2309.10058.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:df03aee6a039ad96e865727298d96558b253f4c83b0be9bb47d219c0d4717f2e +size 573337 diff --git a/data/2309.10314.png b/data/2309.10314.png new file mode 100644 index 0000000000000000000000000000000000000000..3ecd6dc152f54ca8a74ecf439aa1d3431338dc93 --- /dev/null +++ b/data/2309.10314.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e26beb221714a04ecb3657ff24257534b128128a5a307f6a639d2cb740c6a11c +size 936524 diff --git a/data/2309.10649.png b/data/2309.10649.png new file mode 100644 index 0000000000000000000000000000000000000000..a5d71a4fbde65114bdbef371d74814535955f4ee --- /dev/null +++ b/data/2309.10649.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:269756ec0e82ba13bdabf45f72cdbd6a3199fe6045722f2cc607655bce27201e +size 453545 diff --git a/data/2309.10911.png b/data/2309.10911.png new file mode 100644 index 0000000000000000000000000000000000000000..62712c83db2eed52ee117afbf9099d87c62b442d --- /dev/null +++ b/data/2309.10911.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0fbb26467255e275aa8222fcf8a676a633a2694c8294f314ff6edd57940cd0ab +size 923980 diff --git a/data/2309.11281.png b/data/2309.11281.png 
new file mode 100644 index 0000000000000000000000000000000000000000..bf70fe9d496e81138f602bfe7d881662cdffb603 --- /dev/null +++ b/data/2309.11281.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:32d74955c1fc90b50d7a175e20bb5ffb29c118fcd5040078c6a8390f66b79cd3 +size 1007430 diff --git a/data/2309.11497.png b/data/2309.11497.png new file mode 100644 index 0000000000000000000000000000000000000000..def74ad1dae3537a2d9ec740980a7d0e37c5d1ed --- /dev/null +++ b/data/2309.11497.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:14c6703beaadd5776784c8421a04322379ba17dd63f560a3babdb777ea465ca0 +size 1588810 diff --git a/data/2309.11523.png b/data/2309.11523.png new file mode 100644 index 0000000000000000000000000000000000000000..01a430a4c3e9243c6b39172184295d0fdf99497c --- /dev/null +++ b/data/2309.11523.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6d48d86f1af6e146fac3b718a04281fd4b028f930c8395d22569db852cdd8e3b +size 746785 diff --git a/data/2309.11718.png b/data/2309.11718.png new file mode 100644 index 0000000000000000000000000000000000000000..da009c51961739543ae647cbc8c945272778e1ce --- /dev/null +++ b/data/2309.11718.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:22b7a672891d93f9edd2bc41f2276dbdaa338fc7cf5200025210459fdf0647e2 +size 381341 diff --git a/data/2309.11804.png b/data/2309.11804.png new file mode 100644 index 0000000000000000000000000000000000000000..937bd675ec272eaa3cb238a406a3266ca778b3eb --- /dev/null +++ b/data/2309.11804.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6aa1b8f98dc1de56dff54697af1489ea60a1f9c93fd85f4248609be576f3b22 +size 428365 diff --git a/data/2309.12378.png b/data/2309.12378.png new file mode 100644 index 0000000000000000000000000000000000000000..53493c69473e7fb68e44a597ca39aa739fd9f07c --- /dev/null +++ b/data/2309.12378.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d12d34425ced626f41cb519c68343c979b8e13c149d7817a3ecaa35c3387a63 +size 762644 diff --git a/data/2309.12790.png b/data/2309.12790.png new file mode 100644 index 0000000000000000000000000000000000000000..b82ed73714b2854ae1661d38c267c5023e4684d0 --- /dev/null +++ b/data/2309.12790.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c96c309bf3a31d6bfa862b9dc8f88bf0d904362b31310b278f2bad8ef9a3d8c +size 933065 diff --git a/data/2309.13006.png b/data/2309.13006.png new file mode 100644 index 0000000000000000000000000000000000000000..3c9aa8c67c94e5ccc54b734dbd6c87551ba6d35c --- /dev/null +++ b/data/2309.13006.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6310501814cb75c53a81795c9bd9bfdacdcf41bdf1145a6722cec6b91a881c5d +size 452319 diff --git a/data/2309.13101.png b/data/2309.13101.png new file mode 100644 index 0000000000000000000000000000000000000000..0dfb3f8e5de5420fa2daa49ea6dfc81a8e35f706 --- /dev/null +++ b/data/2309.13101.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9154cc4e46f4b1d12f0b1c28ca26d91dbe8c259b760792eae5caa3fb9a142ceb +size 1003069 diff --git a/data/2309.13524.png b/data/2309.13524.png new file mode 100644 index 0000000000000000000000000000000000000000..56c80d7c7f7e39ee268c03d25e13b5acaa5f741d --- /dev/null +++ b/data/2309.13524.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:43ff3ccaf8a9c872e0263415f5cdfdc38c571bbc3d5bace347fb375cff6e6842 +size 553104 diff --git a/data/2309.13596v2.png 
b/data/2309.13596v2.png new file mode 100644 index 0000000000000000000000000000000000000000..14c49e06ea48795624c6da6d7cba14d5ce5888b4 --- /dev/null +++ b/data/2309.13596v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:21c7917bce19f468e54d3a197b9dc942eeb121848a3b2ebe0e687b8f4fbc5d6f +size 1050154 diff --git a/data/2309.13855.png b/data/2309.13855.png new file mode 100644 index 0000000000000000000000000000000000000000..746a41a83b495b5709544587d22c65e07cae71aa --- /dev/null +++ b/data/2309.13855.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:26bdc8a5dc24f0b8488e01ef0fab56783309f9c5760fc3da380b6ea3cb0793cb +size 669995 diff --git a/data/2309.13925.png b/data/2309.13925.png new file mode 100644 index 0000000000000000000000000000000000000000..444ef5e1757131e0a06374308401ea64ca0958db --- /dev/null +++ b/data/2309.13925.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8f8dc6cbbe70aa5638fc6d9a5946616df3b10ea2530a8e3f8d7849c8e4e25416 +size 915284 diff --git a/data/2309.14611.png b/data/2309.14611.png new file mode 100644 index 0000000000000000000000000000000000000000..5eb927fa1c158eef21c3e97b7c7f05e27de8dc54 --- /dev/null +++ b/data/2309.14611.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e06441d141a24df8d177b7b51c1a484ed857d6bfce31852d8ce1ac8c5876988a +size 798162 diff --git a/data/2309.14786.png b/data/2309.14786.png new file mode 100644 index 0000000000000000000000000000000000000000..af2643ec6ce7c07332497d585e1575b2efcf44de --- /dev/null +++ b/data/2309.14786.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a12c246a190d387df206de60535b132e504e05db595b40881d3ed93b3b7f0a4a +size 932208 diff --git a/data/2309.14949v1.png b/data/2309.14949v1.png new file mode 100644 index 0000000000000000000000000000000000000000..49e7442b22e008829a8542b4870832de6bde09ff --- /dev/null +++ b/data/2309.14949v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d08b0833bc1c1b14e6a40ca19bf46b4905de6d939dfae4f336d9916d2500bfb6 +size 528889 diff --git a/data/2309.15729.png b/data/2309.15729.png new file mode 100644 index 0000000000000000000000000000000000000000..90f3514be84fa95385dd584bbb92a79e1b30fa6b --- /dev/null +++ b/data/2309.15729.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b0cfb47682f8ef13552170df04a9768bfb6003f1db701977b07edb099d167cfb +size 615183 diff --git a/data/2309.15785.png b/data/2309.15785.png new file mode 100644 index 0000000000000000000000000000000000000000..bdb538b1852a7ee0304325d642e061357ef6fc2c --- /dev/null +++ b/data/2309.15785.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3c864ca8f79311875e6739a099cf3f96762e2952a69730062c8f0de24e98ba3e +size 625504 diff --git a/data/2309.16397.png b/data/2309.16397.png new file mode 100644 index 0000000000000000000000000000000000000000..1d8b4113e4cbab1eb94c9c0206737eded812c641 --- /dev/null +++ b/data/2309.16397.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4806caa0c891c8bc82c18f435ce2518722aba2c11782b045083fa2a93345e938 +size 636157 diff --git a/data/2309.16421.png b/data/2309.16421.png new file mode 100644 index 0000000000000000000000000000000000000000..747c5fc30d667e4976d5d6a091905caa0fb1dd33 --- /dev/null +++ b/data/2309.16421.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a83d94e943afbecd1c686b9f063a766293028098576ab1ea423094664db75067 +size 864913 diff --git 
a/data/2309.16496.png b/data/2309.16496.png new file mode 100644 index 0000000000000000000000000000000000000000..17531c3696513112c6eb3c3758043bbc4b7c37a8 --- /dev/null +++ b/data/2309.16496.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c8aa61a5ca866751360f8d4d0798764f749a1aa692132d86d067623d2d1b5674 +size 1652067 diff --git a/data/2309.16585.png b/data/2309.16585.png new file mode 100644 index 0000000000000000000000000000000000000000..f51d5f775486c7a917b0e28f6653ecc638571e19 --- /dev/null +++ b/data/2309.16585.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c2fb98f9a993e16a8666dcb18284c41eebb4ea82c3d20627245d96039f0eb0cc +size 1285874 diff --git a/data/2309.16650.png b/data/2309.16650.png new file mode 100644 index 0000000000000000000000000000000000000000..fc408114dd9283a6c27a3a8bb4e6541324251c18 --- /dev/null +++ b/data/2309.16650.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dd98c8b9df3ab5ede7fb1c219d03265848a340f4d79cd37e1e3776f921fc93a3 +size 1256065 diff --git a/data/2309.16975.png b/data/2309.16975.png new file mode 100644 index 0000000000000000000000000000000000000000..a2f8ae4a6f86cdcb909a10b4d106a2ef6a233489 --- /dev/null +++ b/data/2309.16975.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5553ea637d46ffa6444317a2d2dd50f7eda64e158287437e4f1a679c55b8569d +size 690978 diff --git a/data/2309.16992.png b/data/2309.16992.png new file mode 100644 index 0000000000000000000000000000000000000000..2d3a52606dd0cae1d406ee5dce4b7a5d38135453 --- /dev/null +++ b/data/2309.16992.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2bc156bb47bbc0b1b95b90a609d457e93e76ad97fdf4ae711e26e4eb5a576295 +size 436007 diff --git a/data/2310.00031.png b/data/2310.00031.png new file mode 100644 index 0000000000000000000000000000000000000000..5955748755069f42b20dd953de1fcd35d1f5d473 --- /dev/null +++ b/data/2310.00031.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a91868198bd5d4017b6a747bb00d58a863281a34af2a1fdf6306096a18d7e65d +size 799207 diff --git a/data/2310.00132.png b/data/2310.00132.png new file mode 100644 index 0000000000000000000000000000000000000000..c732f20e358e0acca7da5f03a78051362b827248 --- /dev/null +++ b/data/2310.00132.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:79f0e870067c86bdb8bdf449fa627b8cd496587cd8d8d5cef8e748987d0bd265 +size 820847 diff --git a/data/2310.00258.png b/data/2310.00258.png new file mode 100644 index 0000000000000000000000000000000000000000..0cce6a960d4e7af1f6e436df07c7a2ae77bc40f8 --- /dev/null +++ b/data/2310.00258.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d651be6ce7efaa7eee62c58754562d35f5b04611d1162ad362c24e1230179974 +size 732239 diff --git a/data/2310.00582.png b/data/2310.00582.png new file mode 100644 index 0000000000000000000000000000000000000000..f9cb991158a58169118b008e71dcb90503c3a017 --- /dev/null +++ b/data/2310.00582.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:80aa56c5c01640e25104d24bd53dacaefe9402258a1c5fb7485bd6387bd840dc +size 762653 diff --git a/data/2310.00816.png b/data/2310.00816.png new file mode 100644 index 0000000000000000000000000000000000000000..287f1bd078a0f258680f88bbdf786b893e03b9ea --- /dev/null +++ b/data/2310.00816.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:30195646fa9a6964bb8804a5806898518e910d4521ae3eb51161e407ff22896e +size 751793 diff --git 
a/data/2310.00944.png b/data/2310.00944.png new file mode 100644 index 0000000000000000000000000000000000000000..14c2cdfba33323333cf52c3d56324f01d76b6474 --- /dev/null +++ b/data/2310.00944.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7cfa6a8e4e95ef9746b3677a0822abe98e40ffc63da8bfb19fbc70183176b2c4 +size 1072597 diff --git a/data/2310.01218.png b/data/2310.01218.png new file mode 100644 index 0000000000000000000000000000000000000000..e4ae805998fff1b60edd3d1e19508a2a4a69c620 --- /dev/null +++ b/data/2310.01218.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d87703e6fe7331b1fa83d497ea13a4ce6e6d8b70708aa20b9795305224c98106 +size 678696 diff --git a/data/2310.01406.png b/data/2310.01406.png new file mode 100644 index 0000000000000000000000000000000000000000..63d6e6698e0feedc89f72c117a29606cd0fd2ffb --- /dev/null +++ b/data/2310.01406.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:47bddeaea67fcebc1d4426c56798e8bbe49f8c67ab9c1ec53915ff727d76b2ce +size 1259976 diff --git a/data/2310.01407.png b/data/2310.01407.png new file mode 100644 index 0000000000000000000000000000000000000000..d91abe7dbc5a69bad5e41a3b3e6cba0f27b893ef --- /dev/null +++ b/data/2310.01407.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:16a3ffd325ab6153866276c751d75e4668b32a2803c767e617504db63282cdd5 +size 1282912 diff --git a/data/2310.02110.png b/data/2310.02110.png new file mode 100644 index 0000000000000000000000000000000000000000..5570e60c4c7fa4096527fe82883227a342c946ca --- /dev/null +++ b/data/2310.02110.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f85bd5cfdc1d307b7675e52516820a66f05ebf0c38f826d8c93c1c183e25e9aa +size 807820 diff --git a/data/2310.03744.png b/data/2310.03744.png new file mode 100644 index 0000000000000000000000000000000000000000..dafe7c4ee06f6892b543d9230a7043de3e6ba7ab --- /dev/null +++ b/data/2310.03744.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:51b2016c6b7abbfbe81f7bada4a3ae6235da7f0af89f36a867762eec37627953 +size 778349 diff --git a/data/2310.03827.png b/data/2310.03827.png new file mode 100644 index 0000000000000000000000000000000000000000..e93f2e53f56e73fe5395d1d7e4c0d7a7883b2f50 --- /dev/null +++ b/data/2310.03827.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6faaa545d5402d4e7d64eec395f38e09b9bda4fc08b9a6ae6da4a103842a9cf5 +size 776502 diff --git a/data/2310.03898.png b/data/2310.03898.png new file mode 100644 index 0000000000000000000000000000000000000000..5f77696851a2abd722c41dbb927dd5bc5222bddd --- /dev/null +++ b/data/2310.03898.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a3550231a58051b9f53c00268aa737117af9b0bc8676f1f1c1080b0f2c372a45 +size 627121 diff --git a/data/2310.03978.png b/data/2310.03978.png new file mode 100644 index 0000000000000000000000000000000000000000..0e54590956a9b80769adedcf3fdde763567da75c --- /dev/null +++ b/data/2310.03978.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:49d1a917293cf395b75ea5b322e3869d94e96110f705f0ccad961b3b2a900398 +size 552421 diff --git a/data/2310.04041.png b/data/2310.04041.png new file mode 100644 index 0000000000000000000000000000000000000000..883de5d6b69307230aeebae201f35d8d4b9e1a43 --- /dev/null +++ b/data/2310.04041.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:348c18c83255e15f623f631c32e7e70eea560e8f18467b6dfebf6dbdd1474e67 +size 1162241 diff --git 
a/data/2310.05364.png b/data/2310.05364.png new file mode 100644 index 0000000000000000000000000000000000000000..515772527e227720dc780326ec37c64f020a7e7a --- /dev/null +++ b/data/2310.05364.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3ab1a55d025a1bce84c6c7810a98ac5adf591c31b7a77ab1a4d1de284647c832 +size 888568 diff --git a/data/2310.05370.png b/data/2310.05370.png new file mode 100644 index 0000000000000000000000000000000000000000..3fd088e71a27a1c4d5025ecf921ed8ceedca5d7b --- /dev/null +++ b/data/2310.05370.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:647db9dca0854416841d8701213715e15d7513186bf80544402d7277dd9431a3 +size 961031 diff --git a/data/2310.06282.png b/data/2310.06282.png new file mode 100644 index 0000000000000000000000000000000000000000..9645f2280df5a4ad7c35794b2908962b9483d492 --- /dev/null +++ b/data/2310.06282.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dea2fbc9731b722c4240749e44419261de296c77c49640b29852d146d362fd44 +size 750255 diff --git a/data/2310.06627.png b/data/2310.06627.png new file mode 100644 index 0000000000000000000000000000000000000000..1ba438a45802ce802f32dc532bd2cb5868ba81aa --- /dev/null +++ b/data/2310.06627.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f9252fa5849916048c57c20d572ea5ff6b00065684bdb09cdabc84392cd61425 +size 730017 diff --git a/data/2310.07997.png b/data/2310.07997.png new file mode 100644 index 0000000000000000000000000000000000000000..68e50a179a625c1fcd5b6192afb82397432e9ad5 --- /dev/null +++ b/data/2310.07997.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a51590b3fb85dd5ca9fb64dd39773067763584d62c871f4798a1662a343e667c +size 926832 diff --git a/data/2310.08092.png b/data/2310.08092.png new file mode 100644 index 0000000000000000000000000000000000000000..56637f629cc6002bf7274a5c8fed332fe0285bb5 --- /dev/null +++ b/data/2310.08092.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e429db545590d19d2c44ff7714357cbd5a227d52dd9dc6dcafc61de0dd183128 +size 642652 diff --git a/data/2310.08129.png b/data/2310.08129.png new file mode 100644 index 0000000000000000000000000000000000000000..015f9b8a4dee1a245efe9c60355ba542662abf16 --- /dev/null +++ b/data/2310.08129.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc9dd2c855c37c8b3b06ee6892165934a95238f170c6751885e408c7a6cdfb8e +size 979468 diff --git a/data/2310.08230.png b/data/2310.08230.png new file mode 100644 index 0000000000000000000000000000000000000000..c70e19eb0dc22f25abf619d6f16c956eb63fb2ee --- /dev/null +++ b/data/2310.08230.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad56a93d9604f4f893950ad0a87121adb5b1d0b9e67aea63509375219397f25b +size 735169 diff --git a/data/2310.08255.png b/data/2310.08255.png new file mode 100644 index 0000000000000000000000000000000000000000..ea93f3f57689a2908bbcc042d278fd5b94719ed0 --- /dev/null +++ b/data/2310.08255.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8525786dd8f270e26e3ea252f1e2e1e97341cb95719fc5c6c48b13ede4ee02e3 +size 798581 diff --git a/data/2310.08332.png b/data/2310.08332.png new file mode 100644 index 0000000000000000000000000000000000000000..eb90fc4147e0fa55acc6d88304b2e48860733627 --- /dev/null +++ b/data/2310.08332.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:48318230f5d600b65c19708591c58421dc0a692c71c61f9eee166217f7fd88c8 +size 1271325 diff --git 
a/data/2310.08332v1.png b/data/2310.08332v1.png new file mode 100644 index 0000000000000000000000000000000000000000..eb90fc4147e0fa55acc6d88304b2e48860733627 --- /dev/null +++ b/data/2310.08332v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:48318230f5d600b65c19708591c58421dc0a692c71c61f9eee166217f7fd88c8 +size 1271325 diff --git a/data/2310.08337.png b/data/2310.08337.png new file mode 100644 index 0000000000000000000000000000000000000000..7225f902dfef5657e334e7c61e233972eb78ce2f --- /dev/null +++ b/data/2310.08337.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6df2e5a14fd8439d20f1325da8bb495c14e084ec04ae380b1c4c44124cf14811 +size 801611 diff --git a/data/2310.08370.png b/data/2310.08370.png new file mode 100644 index 0000000000000000000000000000000000000000..cca197be7a8d4cbb364adf236d0e44103b5a2b48 --- /dev/null +++ b/data/2310.08370.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e5fe4bc6aa378a74a1d457037fef514e782ccf61b6c9fdfca78f09f4af0e7d71 +size 771077 diff --git a/data/2310.08471.png b/data/2310.08471.png new file mode 100644 index 0000000000000000000000000000000000000000..b673d112e802a7282647efba32e1ac87acb3778f --- /dev/null +++ b/data/2310.08471.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4ee812694314201524b176fed15e3c54c8fb77d4ea0779fcc975de13b1964b31 +size 956870 diff --git a/data/2310.08528.png b/data/2310.08528.png new file mode 100644 index 0000000000000000000000000000000000000000..09a1cda60034f2e19754e457e8011b25e647531c --- /dev/null +++ b/data/2310.08528.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bfd228a6323737607f87318abbab141560eaa4298fe4209d3a8ed56c805462cc +size 956943 diff --git a/data/2310.08529v3.png b/data/2310.08529v3.png new file mode 100644 index 0000000000000000000000000000000000000000..b9dc2e0550f34b1de21d8a4b54b154cbd78e90c8 --- /dev/null +++ b/data/2310.08529v3.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3167a5db86217a1d2196c07aa7cfb653d8b1ed2f606bd5ee562b586dd6a1253 +size 877104 diff --git a/data/2310.08854.png b/data/2310.08854.png new file mode 100644 index 0000000000000000000000000000000000000000..87c60cb0fec05b8d19229da6b64819b4be040f3e --- /dev/null +++ b/data/2310.08854.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:120b9b76b0ad2601a0e0aa65071418274d611e3f6f2af65aaa46d3e9c5b61010 +size 580436 diff --git a/data/2310.08873.png b/data/2310.08873.png new file mode 100644 index 0000000000000000000000000000000000000000..0f3cee11f96ba38990cc7ac54ebcaa50b5fa568a --- /dev/null +++ b/data/2310.08873.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1acf7cf7c0c617613b44529ae4999632cfa106501eae511f1fa5449943a801f7 +size 1059456 diff --git a/data/2310.08956.png b/data/2310.08956.png new file mode 100644 index 0000000000000000000000000000000000000000..5c0bd2942f8aebb3c535b681f4f0e27bc5acb1ce --- /dev/null +++ b/data/2310.08956.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7cf9593ca61340a7c2e486fa71c1f3856ad2e89688bdaf18c0ea9f8dd794a8fe +size 790389 diff --git a/data/2310.09276.png b/data/2310.09276.png new file mode 100644 index 0000000000000000000000000000000000000000..39629a632d8f747baf55b6a672f66aaf657c74a7 --- /dev/null +++ b/data/2310.09276.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e3be6cf54e8300cf549ac373c18aaebe169ace2e20b87de7ca945684fc989f8 +size 912356 
diff --git a/data/2310.09469.png b/data/2310.09469.png new file mode 100644 index 0000000000000000000000000000000000000000..3bd2d242ff6e870fe6e1450a9e1a6578511a4047 --- /dev/null +++ b/data/2310.09469.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c138da7a7d24e7a2cf0635c03c28538e35078405982cc508efad7aa08a2830d5 +size 651933 diff --git a/data/2310.09528.png b/data/2310.09528.png new file mode 100644 index 0000000000000000000000000000000000000000..f61eb330ea24ecdbcebe44fbd30f3dc8b914c3a7 --- /dev/null +++ b/data/2310.09528.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:804f9b77f81e7bff2704d46e414df548e1cd462c72c95bce9e70c18054194fa3 +size 528753 diff --git a/data/2310.10343.png b/data/2310.10343.png new file mode 100644 index 0000000000000000000000000000000000000000..025b709e01efb53eb811ed1fd788c404bdcae752 --- /dev/null +++ b/data/2310.10343.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8aa69d48620c704eddd32db6a79ca2fcbb43388b7687ecb04baa2bc33cc68ac9 +size 942807 diff --git a/data/2310.10404.png b/data/2310.10404.png new file mode 100644 index 0000000000000000000000000000000000000000..6084956823cffe0cbb1143b88fc0d76188728932 --- /dev/null +++ b/data/2310.10404.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ee9d7524537e5d00d283f4b82c10026ee6f223342e5301806b7828914d948ae3 +size 843772 diff --git a/data/2310.10413.png b/data/2310.10413.png new file mode 100644 index 0000000000000000000000000000000000000000..b3e8830e2e55d05efe7c60348f9059574e19355e --- /dev/null +++ b/data/2310.10413.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:58903d712a0b5193e9e6c1841ce920e540312ad1eb6bedd7820acea746868be4 +size 541491 diff --git a/data/2310.10624.png b/data/2310.10624.png new file mode 100644 index 0000000000000000000000000000000000000000..43258771d2d313d2f58da312f55391118ff6ae14 --- /dev/null +++ b/data/2310.10624.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:10981dcbaa8475efd526372c6155d01c2de21be049b5205dd36d24229c86cb84 +size 2116076 diff --git a/data/2310.10700.png b/data/2310.10700.png new file mode 100644 index 0000000000000000000000000000000000000000..717552513cfd4c78a56b113358c28bde97ab9a0c --- /dev/null +++ b/data/2310.10700.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9b2b53b0b6ded9c65d166891421604353f10f1b9010ca0e9b0db3fe4364f9fcd +size 786299 diff --git a/data/2310.10769.png b/data/2310.10769.png new file mode 100644 index 0000000000000000000000000000000000000000..5f34cb464b4192739ba84d99511558d4662f7ed3 --- /dev/null +++ b/data/2310.10769.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:68cfc658a75529849c10f40959d5e7d220ef17dcdf21a714a57841a4e3963228 +size 1741541 diff --git a/data/2310.11440.png b/data/2310.11440.png new file mode 100644 index 0000000000000000000000000000000000000000..354d4e12e31b1f2e7aabe126ae793cb9330475f4 --- /dev/null +++ b/data/2310.11440.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4005c85abc3e97276779d40f3ea97ec9046ff1f144b9a55019faacf4235fe2a1 +size 788944 diff --git a/data/2310.11448.png b/data/2310.11448.png new file mode 100644 index 0000000000000000000000000000000000000000..095c24d6c431c51ccb4f96217a698e7c5c24ce09 --- /dev/null +++ b/data/2310.11448.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2e23ec81eccda5813be717e87b9df252848f743de0d2fe2afd785ff2e6297a04 +size 1129269 
diff --git a/data/2310.11696.png b/data/2310.11696.png new file mode 100644 index 0000000000000000000000000000000000000000..bfe785612e4f9c11939e3c700820e2cc295222ca --- /dev/null +++ b/data/2310.11696.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b49dd9be8f4fdd85fd14062c16fcc0e49d7c41860909cd5a42f6c105985d9386 +size 911722 diff --git a/data/2310.12153.png b/data/2310.12153.png new file mode 100644 index 0000000000000000000000000000000000000000..0d56168510437f0a258ebba252032b755025ce7b --- /dev/null +++ b/data/2310.12153.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9f6799a372205331fda08c1370e7c8cfe1c63b2ea9decd400444625375e5f345 +size 758113 diff --git a/data/2310.12790.png b/data/2310.12790.png new file mode 100644 index 0000000000000000000000000000000000000000..c60438d6a12717300b3f7682dbb7855fc4452004 --- /dev/null +++ b/data/2310.12790.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:172ebf0b07cc05be2d4bbedaee9c0af10d76ef9332f582fd857d7d1d06f4c3e0 +size 761440 diff --git a/data/2310.12877v4.png b/data/2310.12877v4.png new file mode 100644 index 0000000000000000000000000000000000000000..e3a9c3db657c63cc2fefbc206781d640be203ce5 --- /dev/null +++ b/data/2310.12877v4.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:639a5cfc56a5bcba42c3c9b223e7d70e4aa288e8d9869e67e062b5d760fd55ea +size 804604 diff --git a/data/2310.12982.png b/data/2310.12982.png new file mode 100644 index 0000000000000000000000000000000000000000..6e0e116c09aad29e479ade6ad0499af51aa1970f --- /dev/null +++ b/data/2310.12982.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:da368031d73d7d425536bfd108b36e2f848f681ad947780ac7f53887e3846423 +size 958919 diff --git a/data/2310.13772.png b/data/2310.13772.png new file mode 100644 index 0000000000000000000000000000000000000000..1452cddcc46c7582a92e1eae6e349dd302d90478 --- /dev/null +++ b/data/2310.13772.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:58233c1786189d48ac1636ff6db1d36060961e98df9d5c186e43028eed2f492e +size 917196 diff --git a/data/2310.14172.png b/data/2310.14172.png new file mode 100644 index 0000000000000000000000000000000000000000..b5b3e6766add77ee16ef4c2b60e53249ae26397c --- /dev/null +++ b/data/2310.14172.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2dccb1396f653cc1456cdddd90f78a2327fd7c2b32cc3c08e200855d947283c8 +size 449454 diff --git a/data/2310.14566.png b/data/2310.14566.png new file mode 100644 index 0000000000000000000000000000000000000000..bd34a4186879ba2545391b666e41c0b125f634c4 --- /dev/null +++ b/data/2310.14566.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:87b2a55a9d0522c19a7eb0db5ef14a3c79de0448f0c296624f269206942235c4 +size 780994 diff --git a/data/2310.14695.png b/data/2310.14695.png new file mode 100644 index 0000000000000000000000000000000000000000..14e84703d156730068eb407a549454fb159982f5 --- /dev/null +++ b/data/2310.14695.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dfc8c5376b751c4afefec24f4b02c25650bcf5e5aa4315bd34b43bcdcf7eaaca +size 884467 diff --git a/data/2310.14729.png b/data/2310.14729.png new file mode 100644 index 0000000000000000000000000000000000000000..f16080db8673da8b9e869ca0a1c194b010049d9d --- /dev/null +++ b/data/2310.14729.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:22ac0b9d52ecf864e571f27a676319c0febcf5153065bca740c3dbb0ddba6a88 +size 
816208 diff --git a/data/2310.15008.png b/data/2310.15008.png new file mode 100644 index 0000000000000000000000000000000000000000..7623cf2dc01540024b605a4bad38b7bde927ecb1 --- /dev/null +++ b/data/2310.15008.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:89fdb52d1ef44ed02c9164a17225356b159984cf1d21ea996ab29e22289308a2 +size 1582399 diff --git a/data/2310.16667.png b/data/2310.16667.png new file mode 100644 index 0000000000000000000000000000000000000000..a42d30149760d89f28728500c6370d0ddb04692d --- /dev/null +++ b/data/2310.16667.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b62bb90b5884fd03b03270bcf339a205b41787fd82147aef0bd2308e82453791 +size 591081 diff --git a/data/2310.16825.png b/data/2310.16825.png new file mode 100644 index 0000000000000000000000000000000000000000..7119507f6777e323fe35a523d2aa763498928685 --- /dev/null +++ b/data/2310.16825.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1ad4bd4995dda855845a201c8166f17d50afc16b6761de23ee0387813e59905a +size 553443 diff --git a/data/2310.16861.png b/data/2310.16861.png new file mode 100644 index 0000000000000000000000000000000000000000..a144cf4559c51b5d929f50aeb7c465766faec06a --- /dev/null +++ b/data/2310.16861.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:82888acd2a9be54ca45a2318b7f9eb970f7ab1416df9bfd4aee8325d82a2a2ab +size 779235 diff --git a/data/2310.17154.png b/data/2310.17154.png new file mode 100644 index 0000000000000000000000000000000000000000..d3aa5d25b72bc0dda6bc45fc4e67bfd945bd1fcf --- /dev/null +++ b/data/2310.17154.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6495d8ba675293db492039da23ce87ed541ce82bf29b402a41089eaf7b85bfa2 +size 456903 diff --git a/data/2310.17504.png b/data/2310.17504.png new file mode 100644 index 0000000000000000000000000000000000000000..1ad1a6903dedfc607d1f7d7164c8c6ed1982310a --- /dev/null +++ b/data/2310.17504.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:418585743bf007e4df94484c94d0cf10a7d085ed7d78a4e32c3fb655d6082258 +size 796040 diff --git a/data/2310.17569.png b/data/2310.17569.png new file mode 100644 index 0000000000000000000000000000000000000000..85617f2432500f8c563368066948cf0ca9f0da48 --- /dev/null +++ b/data/2310.17569.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:42f608d3d4d2f60f5d860c0fcd6e26598e38485e851fc7356ba80bb90370bbf4 +size 812907 diff --git a/data/2310.17994.png b/data/2310.17994.png new file mode 100644 index 0000000000000000000000000000000000000000..77e5dbfb1b23e41c52a04b4260b0d7428ccc68c1 --- /dev/null +++ b/data/2310.17994.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f8526b34f65050b6b1fc4ff99b9a0d9a6a6bdce3321fd9b3b449bc1eabec0bee +size 814331 diff --git a/data/2310.18285.png b/data/2310.18285.png new file mode 100644 index 0000000000000000000000000000000000000000..c3a866f4c96375fc1e42a0aa22d287a5e85b31c0 --- /dev/null +++ b/data/2310.18285.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d399f4bc7c8a19ee9aa467a53f8103a5673aaf0273e9eb8d3321ccaea5c7bde +size 800602 diff --git a/data/2310.18698.png b/data/2310.18698.png new file mode 100644 index 0000000000000000000000000000000000000000..0c281e2c4a7e0ed5f8ef401fbb898bb8e80a0439 --- /dev/null +++ b/data/2310.18698.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:acf2c2247887af3d6cb5ed78e4f34756c5dc5389cea10bca1d01be0060adf816 +size 
735340 diff --git a/data/2310.18709.png b/data/2310.18709.png new file mode 100644 index 0000000000000000000000000000000000000000..90db34a2cec401f7572d1ba32a6e0b0f68b7c18b --- /dev/null +++ b/data/2310.18709.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:106518e73e1a1f54ab0635dc2c06fad88b72e3bdceb9c3406fdee1cac13f9269 +size 747380 diff --git a/data/2310.19512.png b/data/2310.19512.png new file mode 100644 index 0000000000000000000000000000000000000000..cca6c340c922add507790b32775b99de2c749df5 --- /dev/null +++ b/data/2310.19512.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:047fed67a6f19ff7b47bbdccfa8f5bce600f02b505dda6fa58a68e10392eac9b +size 918116 diff --git a/data/2310.19654.png b/data/2310.19654.png new file mode 100644 index 0000000000000000000000000000000000000000..3748e2a83171695f6b985556d5edc7deb12db875 --- /dev/null +++ b/data/2310.19654.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5b2b3556d7f620cf574ca14fa1ead6242f1ed42ee25c80970067019546f98ee1 +size 793438 diff --git a/data/2310.19721.png b/data/2310.19721.png new file mode 100644 index 0000000000000000000000000000000000000000..d98e6ff547102431c22995919b297f2702461804 --- /dev/null +++ b/data/2310.19721.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c84fb2820938b6fbe2464e6d52ea95b3041dc6f1a73b94a74f3de20fc34372cf +size 816748 diff --git a/data/2310.20685.png b/data/2310.20685.png new file mode 100644 index 0000000000000000000000000000000000000000..e6467b2bb5e550c3f612449240bf6e53b57dc877 --- /dev/null +++ b/data/2310.20685.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ea3a2d26125199faa5d82f423b2e66b8b2ab6ca451f09b1feae5c0c46ed84666 +size 534441 diff --git a/data/2311.00399.png b/data/2311.00399.png new file mode 100644 index 0000000000000000000000000000000000000000..5d237009a32aab85346faec9c528ec3e2cb1e723 --- /dev/null +++ b/data/2311.00399.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e0c178abeb3437cbba17c789f905762fbc09afa4201b2cf3a86d4a541fc7040f +size 833358 diff --git a/data/2311.00618.png b/data/2311.00618.png new file mode 100644 index 0000000000000000000000000000000000000000..1099fb4b8fd63832243807d48b8ab8a3b358c94c --- /dev/null +++ b/data/2311.00618.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e8a74c7757bbf9453225636084fa56bf31cafab02b31ad61d6ab8870be9618cd +size 975766 diff --git a/data/2311.01357.png b/data/2311.01357.png new file mode 100644 index 0000000000000000000000000000000000000000..51e737af114f7e7e67a71ae38deecd5b78f62473 --- /dev/null +++ b/data/2311.01357.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:26d481a3cf52d1eb75f89d6f8e0e2d4d368ecf4581f622301a2357947aa75064 +size 821073 diff --git a/data/2311.01734.png b/data/2311.01734.png new file mode 100644 index 0000000000000000000000000000000000000000..5cdcac58b3a9476202ebe2fc07244028472e4793 --- /dev/null +++ b/data/2311.01734.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eb57b9002501b2f7eda41197f78f83123818d0718f6eb07afb5158ca822632eb +size 682133 diff --git a/data/2311.02072.png b/data/2311.02072.png new file mode 100644 index 0000000000000000000000000000000000000000..ab082bd8d7b629cb4551d37f9abc304562b69a19 --- /dev/null +++ b/data/2311.02072.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f44196a47a7bb46588fe5f97b574e520f1c95753c6d617660a95b5bb862a0f35 +size 
1160079 diff --git a/data/2311.02077.png b/data/2311.02077.png new file mode 100644 index 0000000000000000000000000000000000000000..ffa4a69c97c55b32dde87e20299b2714b24fd976 --- /dev/null +++ b/data/2311.02077.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ecd6fe58828ce006b4689b3e87313fa733a5ad376e63818a82791eb97dd36ccf +size 610191 diff --git a/data/2311.02633.png b/data/2311.02633.png new file mode 100644 index 0000000000000000000000000000000000000000..c16cd8230e5fa5a5abf61e9d211819b5d86a10b0 --- /dev/null +++ b/data/2311.02633.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:968aa3e80a79f818e046ac2e31f968c7981b638ef3f57971b9bc1205b96f5b39 +size 715166 diff --git a/data/2311.02995.png b/data/2311.02995.png new file mode 100644 index 0000000000000000000000000000000000000000..2daf3f18c5c9bc0ffbc542f87a62d889dfa559d5 --- /dev/null +++ b/data/2311.02995.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:753037e0d6844510f8fc3bf561d2da408b8e0e54da86774f73a5eeefd7a02bf6 +size 970716 diff --git a/data/2311.03149.png b/data/2311.03149.png new file mode 100644 index 0000000000000000000000000000000000000000..4953fc51d55d2e9039b0388bf912129bc468ec2a --- /dev/null +++ b/data/2311.03149.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:51eb15b786fb07252be910a257d65a629bc6106982b1c0f4cdf30674838a9df8 +size 845855 diff --git a/data/2311.03356v1.png b/data/2311.03356v1.png new file mode 100644 index 0000000000000000000000000000000000000000..ba75c8488cea5325731ce2f5febe4bda9d6f987f --- /dev/null +++ b/data/2311.03356v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f653e9a4fcdb3ea759b23a064ee5c742ec301c10526e4808f99a27596e4a9e4 +size 1189438 diff --git a/data/2311.03524.png b/data/2311.03524.png new file mode 100644 index 0000000000000000000000000000000000000000..3aeaf7a455869573438985bce41f9848e9aa5269 --- /dev/null +++ b/data/2311.03524.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e3e3fa9b798dce4dae59ae26d29a20c4d174dd1abe53af53ae895f74fabeaf11 +size 707178 diff --git a/data/2311.04246.png b/data/2311.04246.png new file mode 100644 index 0000000000000000000000000000000000000000..324cacecfbefa97bbcb92a70b8ea60772cf4e9de --- /dev/null +++ b/data/2311.04246.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e9f1a6a97860c2da5803416f834a9710efc112a1b9f880a57ed0dc45a8176783 +size 1158204 diff --git a/data/2311.04257.png b/data/2311.04257.png new file mode 100644 index 0000000000000000000000000000000000000000..1c2b63d3c457a7edaa88af391c446fd8887577fe --- /dev/null +++ b/data/2311.04257.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:32d1b0a652217815550c3bba852e0b222ef804d541e39d3304c4c4bec65ab5ef +size 753234 diff --git a/data/2311.04391.png b/data/2311.04391.png new file mode 100644 index 0000000000000000000000000000000000000000..c6f4d18e2d2eac724e406e79dc159f79d2b41359 --- /dev/null +++ b/data/2311.04391.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aa17aeefc6c99ee068ac5156a4f1f8ade8190e30625ae88bfcfed3439dcdd997 +size 612134 diff --git a/data/2311.05304v2.png b/data/2311.05304v2.png new file mode 100644 index 0000000000000000000000000000000000000000..ab53723ec527db6851b550b6fb5375485a0af71d --- /dev/null +++ b/data/2311.05304v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:142847a03475509206f06d575a78aafee7332ea345654d1237c163da4b6677e1 +size 694190 diff --git a/data/2311.05698.png b/data/2311.05698.png new file mode 100644 index 0000000000000000000000000000000000000000..bbd0caac0203203d1807e0b833003d428d703cc1 --- /dev/null +++ b/data/2311.05698.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dedc5318a70051b1558ae020d4b5eaf5b3234810b8dcd2f0c8e8079ed3652b2a +size 865238 diff --git a/data/2311.06067.png b/data/2311.06067.png new file mode 100644 index 0000000000000000000000000000000000000000..2241e7ded0f2f6f05115bc5f03579557ae721f25 --- /dev/null +++ b/data/2311.06067.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:666cac6ce307e61571a3ee4f4fd51d609d55f20b74a821e527aed5419c24fbdb +size 743169 diff --git a/data/2311.06242.png b/data/2311.06242.png new file mode 100644 index 0000000000000000000000000000000000000000..b484d945940bd4affa36d7a5a566271936fe8daa --- /dev/null +++ b/data/2311.06242.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:27cd33cb8c4292d34a137b86a5b5afb62f3dec601216a325ed7e0e1e0411ee5f +size 758646 diff --git a/data/2311.06322.png b/data/2311.06322.png new file mode 100644 index 0000000000000000000000000000000000000000..f5da5d380659f8d37698bfc61647b22c45dc01bc --- /dev/null +++ b/data/2311.06322.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8fdfd5db27f6bcc0562cf28c0f1a65c4bad06bca502e22f0fd33ca4127b01035 +size 739319 diff --git a/data/2311.06607.png b/data/2311.06607.png new file mode 100644 index 0000000000000000000000000000000000000000..eb3ef7a61fb6cee6afaee8e24a9b34c4d05d79a5 --- /dev/null +++ b/data/2311.06607.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f8bfcbd5699a28eaf7f15d67be59a9bec8b9a062e4ff7214aeba59db03aa5dba +size 718934 diff --git a/data/2311.06612.png b/data/2311.06612.png new file mode 100644 index 0000000000000000000000000000000000000000..95ec0b45c6ffe0c22081c594497e88298e8ce491 --- /dev/null +++ b/data/2311.06612.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:104e0acce1ea39984a6877544c671e710f3b3b50da0352092125efa8086ad936 +size 775232 diff --git a/data/2311.06783.png b/data/2311.06783.png new file mode 100644 index 0000000000000000000000000000000000000000..55cbaa5a62a49359170e1a2a7296ec649a187c56 --- /dev/null +++ b/data/2311.06783.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:39308cd8d48817100449e582f8891be7d47fa9d081edfeccc92b2bcf15a0c120 +size 941716 diff --git a/data/2311.07042.png b/data/2311.07042.png new file mode 100644 index 0000000000000000000000000000000000000000..58d86e278eaf219ba7d14b5f346fc8cae7954ec4 --- /dev/null +++ b/data/2311.07042.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8a858c5781719a55163dee03b9427ef643b18f48ecec1166dcfed70a0a3069fe +size 808471 diff --git a/data/2311.07044.png b/data/2311.07044.png new file mode 100644 index 0000000000000000000000000000000000000000..99017b5191d16969c528f97d7381d5d634a5a40e --- /dev/null +++ b/data/2311.07044.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e8ef12ae1cbad21019c0f9522face2aca765a560bbdc7d679d515b921519594d +size 947752 diff --git a/data/2311.07113.png b/data/2311.07113.png new file mode 100644 index 0000000000000000000000000000000000000000..cf45da59d16da66993c1c1578014861429374350 --- /dev/null +++ b/data/2311.07113.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:3b8d4ce04057d78e79ca1a8fd72501e32a5ad314323466ecd586aa94c9999a0c +size 938294 diff --git a/data/2311.07630.png b/data/2311.07630.png new file mode 100644 index 0000000000000000000000000000000000000000..bc872e9d094615be11072f2f9a6a3d932371793e --- /dev/null +++ b/data/2311.07630.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:872ddb5c2354760d713d4ac658984dbdbf74f425b0c55bd37936d3feb763e182 +size 432039 diff --git a/data/2311.07634.png b/data/2311.07634.png new file mode 100644 index 0000000000000000000000000000000000000000..2ece80d24345b3b8db4fa44ee96c9afa62772690 --- /dev/null +++ b/data/2311.07634.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:62065b91aa87d1f7802904798f8a8989722298e7ee37d6a8da4ea0d928502e69 +size 730696 diff --git a/data/2311.08046.png b/data/2311.08046.png new file mode 100644 index 0000000000000000000000000000000000000000..11688a004319e3289cecf6f5fa5d94550138db38 --- /dev/null +++ b/data/2311.08046.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1d10b23a8d34f30595fbd70e5157a988636552dbaf7640bd8d98404dd4b73bd5 +size 1037000 diff --git a/data/2311.08359.png b/data/2311.08359.png new file mode 100644 index 0000000000000000000000000000000000000000..35b24702b77722f6ca4d089fc8cfaeb48ed27503 --- /dev/null +++ b/data/2311.08359.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:01ebe206e89efeb79284b8ecd1f35c99c9d36b7eb57149a62a5fa3773997d6b7 +size 948707 diff --git a/data/2311.09104.png b/data/2311.09104.png new file mode 100644 index 0000000000000000000000000000000000000000..035f45ec9059a7462cf8758876dc0199154a326c --- /dev/null +++ b/data/2311.09104.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23ae63b1c671182cb5143b36553e1f6602199f7140878cf71a9b20b63d708e24 +size 892900 diff --git a/data/2311.09257.png b/data/2311.09257.png new file mode 100644 index 0000000000000000000000000000000000000000..1bb312c4f338cd387c59eb23d9fbecad421b824d --- /dev/null +++ b/data/2311.09257.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e720e26e70b0f5902c9bfbe00490e1b861e91f09ab4725bf74e88d3339d71840 +size 768873 diff --git a/data/2311.09543.png b/data/2311.09543.png new file mode 100644 index 0000000000000000000000000000000000000000..d4815962d5e4b8656ab5e98e880d8fb81ee918be --- /dev/null +++ b/data/2311.09543.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7af370b57a1582a7e2c301cbc1530112fb0b23c6796e0594b565a071b6e66ceb +size 536236 diff --git a/data/2311.09571.png b/data/2311.09571.png new file mode 100644 index 0000000000000000000000000000000000000000..a3af381036482924cf8c879ccd16cbbf5f9abeda --- /dev/null +++ b/data/2311.09571.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:70cd1707121cccead224151cdd2f6d2799958490888ca3aa59eb69c428b35449 +size 882725 diff --git a/data/2311.10081.png b/data/2311.10081.png new file mode 100644 index 0000000000000000000000000000000000000000..0e8a191f1ed0d29533ee71d9f922885e60e8d3a9 --- /dev/null +++ b/data/2311.10081.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5da315ba331ce76d93464e980bf6667c4dcf103be74c2f1e2f4843706722350f +size 839787 diff --git a/data/2311.10089.png b/data/2311.10089.png new file mode 100644 index 0000000000000000000000000000000000000000..ccb00eec5edc3ba1a2b872a8203eca59165f6392 --- /dev/null +++ b/data/2311.10089.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:9f1f412e20b82eaf2e074bd80aac19c1fbfbc47d834a098365f9bd38b1ee9d2b +size 1135107 diff --git a/data/2311.10111.png b/data/2311.10111.png new file mode 100644 index 0000000000000000000000000000000000000000..e257829c94bfde7f39bb6ad453c9942190c0f7ea --- /dev/null +++ b/data/2311.10111.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eba604dde6c8e1de3a43ed857a2bd77a37869ee1c5901ab04ff98a367d28c947 +size 811942 diff --git a/data/2311.10329.png b/data/2311.10329.png new file mode 100644 index 0000000000000000000000000000000000000000..88d59eeab3d0b5fd64d032336a6aa19a0fbdccb1 --- /dev/null +++ b/data/2311.10329.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c3ad09ff8eb146581e8ca1acf0ca57d1aa8c5755e5e21d764d0b632978b614ee +size 1201430 diff --git a/data/2311.10339.png b/data/2311.10339.png new file mode 100644 index 0000000000000000000000000000000000000000..d7629e6ef51810cf169e4f39bea6a65c910feb9d --- /dev/null +++ b/data/2311.10339.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8586cbf3e65b6fc80f1d94e56a8d0f3bea041c44b4f829ad92df5e5b2a3fbf29 +size 855045 diff --git a/data/2311.10356.png b/data/2311.10356.png new file mode 100644 index 0000000000000000000000000000000000000000..2999859e27ba6be502069ce03b8eaeb6a98f2676 --- /dev/null +++ b/data/2311.10356.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d0f477815376d08ad267d09eab88a9b0e9ed44a2277fc2bb88dec25d0037020e +size 849668 diff --git a/data/2311.10382.png b/data/2311.10382.png new file mode 100644 index 0000000000000000000000000000000000000000..75548c5293278c894b4720b3c5583a2655faa35a --- /dev/null +++ b/data/2311.10382.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dec0814a3801af3a4d70decd4ba5b182d6b6919d874c51d68dcd93add6ae1699 +size 1253608 diff --git a/data/2311.10529.png b/data/2311.10529.png new file mode 100644 index 0000000000000000000000000000000000000000..02dc214849e989feeaf009da35258129c1d76b46 --- /dev/null +++ b/data/2311.10529.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bbe00577e368a846ae541c8e405231d99225e51c161a908b4688109dba530d57 +size 940354 diff --git a/data/2311.10605.png b/data/2311.10605.png new file mode 100644 index 0000000000000000000000000000000000000000..cbfd5b70537f0432b9bd5d5995df9226562939e1 --- /dev/null +++ b/data/2311.10605.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:445cc5a78ba1724f43ab32969044c34cf267f07b6d34e8db8dd0cefe1f21fc90 +size 772427 diff --git a/data/2311.10696.png b/data/2311.10696.png new file mode 100644 index 0000000000000000000000000000000000000000..3eed93789cba0904f78e9c19ea4afe2147b5cf6b --- /dev/null +++ b/data/2311.10696.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d03d806a28fc9236f1fd27a6eccd1166ec54b3c72f9c84adef038d1ea5deaf3c +size 743067 diff --git a/data/2311.10707.png b/data/2311.10707.png new file mode 100644 index 0000000000000000000000000000000000000000..ec2eb374cd9d9b4582cb41e76905a9f7c047f986 --- /dev/null +++ b/data/2311.10707.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:62b5a39b99c217d065099e9627c0bd9d72e3ec9242d18ba422ce58258499e42b +size 820405 diff --git a/data/2311.10802.png b/data/2311.10802.png new file mode 100644 index 0000000000000000000000000000000000000000..dd653b1a9eaa42def96aef6b33001e4bc2bf32c4 --- /dev/null +++ b/data/2311.10802.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:706afacfcfa05e0702ccf5024514a1a5acec74a0f393090a4d999e9d2b21e50f +size 780096 diff --git a/data/2311.10950.png b/data/2311.10950.png new file mode 100644 index 0000000000000000000000000000000000000000..1492c0f7f677ba1c2a9fca712de08321d7354d77 --- /dev/null +++ b/data/2311.10950.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:713b91bad9d7d1cb519b1465f6063a94489c07da86b6b08f576827294b5ec02d +size 1078658 diff --git a/data/2311.10959.png b/data/2311.10959.png new file mode 100644 index 0000000000000000000000000000000000000000..b095a09f8d422cb419546d875a5309341e790cad --- /dev/null +++ b/data/2311.10959.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:12d1e47ec3f2b023cd277539daabf94cba762941e38b5815527597c46e784ac4 +size 831642 diff --git a/data/2311.10982.png b/data/2311.10982.png new file mode 100644 index 0000000000000000000000000000000000000000..f9b40a5d4200ade02cc682994dc1fa8d729bf8b6 --- /dev/null +++ b/data/2311.10982.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:30cef1d9a9f11ce770758c98ce433cf9c5ebadef157683a7571e3eeef8a7c3a0 +size 2318781 diff --git a/data/2311.10983.png b/data/2311.10983.png new file mode 100644 index 0000000000000000000000000000000000000000..8eeea8d0bd9609929dca00ecb9d753cb0ed9d817 --- /dev/null +++ b/data/2311.10983.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:acbdb05742ecde88714527cc7399b1a463a555b2bf583eee69757caca4dbb8c1 +size 808000 diff --git a/data/2311.11013.png b/data/2311.11013.png new file mode 100644 index 0000000000000000000000000000000000000000..22db186f613e5764a6aa74776009aebec620c487 --- /dev/null +++ b/data/2311.11013.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:139f31f32bfeb17a0c48b230d33a094e3f3f23030d34b9e6858c9dac91d956ae +size 981217 diff --git a/data/2311.11016.png b/data/2311.11016.png new file mode 100644 index 0000000000000000000000000000000000000000..6f23e498a0a3f691a64010db479ac62a0e6e67e3 --- /dev/null +++ b/data/2311.11016.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca7076e83c30cbe1e6104e66d873c70bd9afb4711d854e5835257b5b06bf7ba6 +size 1107623 diff --git a/data/2311.11106.png b/data/2311.11106.png new file mode 100644 index 0000000000000000000000000000000000000000..8af7d172b67072d3b7b5033537a9518f1e26720d --- /dev/null +++ b/data/2311.11106.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0150a15ef8ab9da0254f543fac7541def412ac2bc8392f61e7ab20df7909d581 +size 796075 diff --git a/data/2311.11125.png b/data/2311.11125.png new file mode 100644 index 0000000000000000000000000000000000000000..998890d77d6809f08c2676e01dbd9154bfe93cc9 --- /dev/null +++ b/data/2311.11125.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:46336e1a3bb6bb278da9421ae58712c1ce79a7c2915aa8276d36d577311ff730 +size 956643 diff --git a/data/2311.11178.png b/data/2311.11178.png new file mode 100644 index 0000000000000000000000000000000000000000..245a6b66666da4d8567b43a15779e551d098764a --- /dev/null +++ b/data/2311.11178.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e94e6e6a6a409fb439ac50a088a6d820f3a24f3761aa1d7a9772e3fcbb09aa3b +size 820257 diff --git a/data/2311.11278v1.png b/data/2311.11278v1.png new file mode 100644 index 0000000000000000000000000000000000000000..b42b4c6602ad4428542693c3e0b8072a12f7937e --- /dev/null +++ b/data/2311.11278v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 
+oid sha256:4c7cfadce8af56ed9892a66687ae803bfa21ac6e86c712ec739ad1d7f2e5d169 +size 674513 diff --git a/data/2311.11284.png b/data/2311.11284.png new file mode 100644 index 0000000000000000000000000000000000000000..e47875512e4f4d150621b592b93493d4e883f005 --- /dev/null +++ b/data/2311.11284.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c10ac2bfa3ba57c7086fa08ef4f461c09ea88979482620fe3c3aa337e393288d +size 1548818 diff --git a/data/2311.11417.png b/data/2311.11417.png new file mode 100644 index 0000000000000000000000000000000000000000..44a0d1b654d95ea9e7af190267543690975eb7d7 --- /dev/null +++ b/data/2311.11417.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a3f5e9f65bbedeb9c643001cec6d4b1304bde3b05be9bc8488199f2adaecf87e +size 884818 diff --git a/data/2311.11600.png b/data/2311.11600.png new file mode 100644 index 0000000000000000000000000000000000000000..c4f5763fed4f368ae9a8eca4706729d49bb6698b --- /dev/null +++ b/data/2311.11600.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f8e4a17070aa6cc40d2ae24ae935083501177785729efd2204f7e61d5487c96b +size 798375 diff --git a/data/2311.11666.png b/data/2311.11666.png new file mode 100644 index 0000000000000000000000000000000000000000..1425335dcf3a2417f06211655c78c9e21dfe2490 --- /dev/null +++ b/data/2311.11666.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3b48a8a96515ba712932685cf42d3b6734080f90d46853e3b6308e84218d1605 +size 1194964 diff --git a/data/2311.11700.png b/data/2311.11700.png new file mode 100644 index 0000000000000000000000000000000000000000..c73e00f4d6c6e0f0801e16ab7ece546b17c39b29 --- /dev/null +++ b/data/2311.11700.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f6660581c2f3b57cf5f4bde1ada57ac2c20c4a94da5dce3d87ab2a8257778904 +size 1116325 diff --git a/data/2311.11837v1.png b/data/2311.11837v1.png new file mode 100644 index 0000000000000000000000000000000000000000..f99db09ad7a8acbd5e3855dcfe766d24b49cb6bb --- /dev/null +++ b/data/2311.11837v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3b3198808086b2e63da8f63c437338b5bc40adcc37e78f25ab05fc9db800b007 +size 692533 diff --git a/data/2311.11845.png b/data/2311.11845.png new file mode 100644 index 0000000000000000000000000000000000000000..905d5f63c7084e1950aefca50c86f27ed93f72c2 --- /dev/null +++ b/data/2311.11845.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:39b7369ebbe1068dd63265fa600ce5e81dad5a6d823cbb4f10f18f77a0fc2f6a +size 1331618 diff --git a/data/2311.11860.png b/data/2311.11860.png new file mode 100644 index 0000000000000000000000000000000000000000..09d4db690520643a8c4aab633e96a21ce775d74e --- /dev/null +++ b/data/2311.11860.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4022002eeda5e355ab481f61f96ddec934dd6741f970bf65e746f66412cea4ce +size 878375 diff --git a/data/2311.11863.png b/data/2311.11863.png new file mode 100644 index 0000000000000000000000000000000000000000..5af91b670ed23f25bc25e2cfa0f400569d14085a --- /dev/null +++ b/data/2311.11863.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d305dc61fdd548db774a602814f0ab9107854c9638be1610beb5c4dd2ff0e5b2 +size 1044230 diff --git a/data/2311.11908v3.png b/data/2311.11908v3.png new file mode 100644 index 0000000000000000000000000000000000000000..1a233a26eeb726982c05463e301c3ea6c36075e8 --- /dev/null +++ b/data/2311.11908v3.png @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:63391823cad81ec236d1531076ad0266047169cc4ce4b219a54c15b372771ace +size 503223 diff --git a/data/2311.11995.png b/data/2311.11995.png new file mode 100644 index 0000000000000000000000000000000000000000..32657cf58b0fb4410c99ac554c7229e6ca36a6e0 --- /dev/null +++ b/data/2311.11995.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e9937ed9818263e8271ea881aa69ed088e884bba77ec1ba2759a5773e176f881 +size 588063 diff --git a/data/2311.12062.png b/data/2311.12062.png new file mode 100644 index 0000000000000000000000000000000000000000..f8031a864f0240061019eb16b5c7cfb2e9e42b30 --- /dev/null +++ b/data/2311.12062.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:734c1f0acb93876c333cbd95788445531ff4464099bb88cd3e30343ade3cd807 +size 739610 diff --git a/data/2311.12075.png b/data/2311.12075.png new file mode 100644 index 0000000000000000000000000000000000000000..3282be605f846ca7cfd71b77720f1cb57f4a68d1 --- /dev/null +++ b/data/2311.12075.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:592af8dd56094382e5e0a2157392958ec3587ba9cd1b79d5c15a96f83aa08e2b +size 822072 diff --git a/data/2311.12079.png b/data/2311.12079.png new file mode 100644 index 0000000000000000000000000000000000000000..4f9aa547dec93fcd545f97d379ae43675b1c5570 --- /dev/null +++ b/data/2311.12079.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:77b41d58a67ff478f3c3d0c66e3310a515e43618d4fe6eb527dad24237b77624 +size 990258 diff --git a/data/2311.12194.png b/data/2311.12194.png new file mode 100644 index 0000000000000000000000000000000000000000..900a87e5dffc572fdff5420e0d5778d8df613a5e --- /dev/null +++ b/data/2311.12194.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:205723946ca4626ffb7c50820b250bb49e7231758ffdc272dc10a694021d892c +size 924011 diff --git a/data/2311.12198.png b/data/2311.12198.png new file mode 100644 index 0000000000000000000000000000000000000000..029f3b36218a51d750c4b8fb00324c114caf51e7 --- /dev/null +++ b/data/2311.12198.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:521baea4cb0ab216c551bf6e860a4d29da93d0e18934a1dd047ea408d8995f4d +size 1344426 diff --git a/data/2311.12291.png b/data/2311.12291.png new file mode 100644 index 0000000000000000000000000000000000000000..d3ba0412beb72659bf312af501f3650bf76f8248 --- /dev/null +++ b/data/2311.12291.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:74d62bfdd523cc95de539635100448bcae8cdb854f7d895c95df03327eeb0360 +size 753514 diff --git a/data/2311.12342.png b/data/2311.12342.png new file mode 100644 index 0000000000000000000000000000000000000000..04bb960efc3bb27c266810967773966a0a80e5fa --- /dev/null +++ b/data/2311.12342.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:41622774a32a3aedaa06b68ccd891e3a6f7043462eac2e4a3dd2ddc1ec79c486 +size 425121 diff --git a/data/2311.12386.png b/data/2311.12386.png new file mode 100644 index 0000000000000000000000000000000000000000..0ec40196cac61b89feb640484b6bce2ba44996f1 --- /dev/null +++ b/data/2311.12386.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f8c9ec2191df92cdea27affc98bcf54c249d5b10a142c56dd453c69d94848f73 +size 1223688 diff --git a/data/2311.12588.png b/data/2311.12588.png new file mode 100644 index 0000000000000000000000000000000000000000..16bf4fd04a060fa01eb460e5501c993682048557 --- /dev/null +++ b/data/2311.12588.png @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:15a89bf7c7a4d2bead2a4923cf5a307ef7b6f390f16bef5b60abb0365bb15fcb +size 840757 diff --git a/data/2311.12754.png b/data/2311.12754.png new file mode 100644 index 0000000000000000000000000000000000000000..72b8d30441792d01ffedddcc2798405a66dd49f4 --- /dev/null +++ b/data/2311.12754.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2727dd80861bb4e0561d22531ea35bfa1e190f049000ef8487440dcbf952ca0 +size 961615 diff --git a/data/2311.12796.png b/data/2311.12796.png new file mode 100644 index 0000000000000000000000000000000000000000..5caf81bec4e906640b9919d02cb6eeb2aa91f75f --- /dev/null +++ b/data/2311.12796.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c27fb4016cd8cd6336e106b5c52025f364397002ea2d11e9857ca2e7a60cf6b +size 684959 diff --git a/data/2311.12886.png b/data/2311.12886.png new file mode 100644 index 0000000000000000000000000000000000000000..f29eef78428542abc985b5e753af969b26e92902 --- /dev/null +++ b/data/2311.12886.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eb48e8b4ff794b9440580c2d08b716ba37c9a599850cb39ea1bcf80e518f9960 +size 782151 diff --git a/data/2311.12905.png b/data/2311.12905.png new file mode 100644 index 0000000000000000000000000000000000000000..a39af2c0478b10625494c06ccb36454df946c9e4 --- /dev/null +++ b/data/2311.12905.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f57294ce644e046f9ce78dc4cadf85c346affde5a9321c197ebe33fa03337724 +size 723110 diff --git a/data/2311.12908.png b/data/2311.12908.png new file mode 100644 index 0000000000000000000000000000000000000000..e2d18b2a2ec748e56ad266337bbc7060e549b126 --- /dev/null +++ b/data/2311.12908.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e679074d19160f14a1f6583ed33f2db7ec55c49d2c7270bd1c2184d8d44645fb +size 784514 diff --git a/data/2311.12956.png b/data/2311.12956.png new file mode 100644 index 0000000000000000000000000000000000000000..78e0f9d02703135d29354c0f0b037098450f84cb --- /dev/null +++ b/data/2311.12956.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c3e5d18ea22d133c7692dac30e045fab31ea0c7d13aae668c015676f62d6918 +size 837678 diff --git a/data/2311.13099.png b/data/2311.13099.png new file mode 100644 index 0000000000000000000000000000000000000000..a2249c93afcd69a3d05da21b447258f525149d66 --- /dev/null +++ b/data/2311.13099.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd153f03003b7a00eca9ec0902e0d5b2480cd5f812accd3febfbd69dc8e909e3 +size 996582 diff --git a/data/2311.13120.png b/data/2311.13120.png new file mode 100644 index 0000000000000000000000000000000000000000..5f294ef3afc15caeddbb6b5c884e99318711cd53 --- /dev/null +++ b/data/2311.13120.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:53d7e6f8dfb029c52dde8475539b227a06d1dad922836beb43f542f13f00551d +size 849317 diff --git a/data/2311.13127v3.png b/data/2311.13127v3.png new file mode 100644 index 0000000000000000000000000000000000000000..3a39caac4157421e66503f1c1e180b76df324196 --- /dev/null +++ b/data/2311.13127v3.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e53e80adc95c8eb0021964d8a856013e5e26a1183fa5b03ecb9fb25b2281902 +size 951009 diff --git a/data/2311.13187v1.png b/data/2311.13187v1.png new file mode 100644 index 0000000000000000000000000000000000000000..106e1ad5e86c915db123915d7af1b378c4511df1 --- /dev/null +++ b/data/2311.13187v1.png @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:7fb24db1180e79dc421d84706f4e3a2223db45794d2b552f0034d01c45d4c0e0 +size 1026881 diff --git a/data/2311.13231.png b/data/2311.13231.png new file mode 100644 index 0000000000000000000000000000000000000000..85ee3f1f87c3f7d712c267f07166a0452786d188 --- /dev/null +++ b/data/2311.13231.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:10fef49be48401b13697352878b0b06f3256bdfd37f82168d055b8e7ac7e4a2f +size 790174 diff --git a/data/2311.13250v2.png b/data/2311.13250v2.png new file mode 100644 index 0000000000000000000000000000000000000000..c720e174e8cb5d890021986fa39ff97425bf0781 --- /dev/null +++ b/data/2311.13250v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a49752fedbcf4383c9c686a08ccb6382571a7f223ce329bc936c1c6afd2619a0 +size 798422 diff --git a/data/2311.13535.png b/data/2311.13535.png new file mode 100644 index 0000000000000000000000000000000000000000..f8e1a2261bf4142d7acab2988513a8d002721b56 --- /dev/null +++ b/data/2311.13535.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9ed426fbfa48f02c3755ea8da1618a260d50c07a2102401046d938650df9c73f +size 992107 diff --git a/data/2311.13601.png b/data/2311.13601.png new file mode 100644 index 0000000000000000000000000000000000000000..0303eb363e0de950df1c8d1b6c9b75958307f890 --- /dev/null +++ b/data/2311.13601.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a8f540edc33aede1ad4360e733ec3489dc96ef854d27bc635937602a63f451e7 +size 1572493 diff --git a/data/2311.13602.png b/data/2311.13602.png new file mode 100644 index 0000000000000000000000000000000000000000..970823b7a8f795da39cf8c3ea32544680b00e329 --- /dev/null +++ b/data/2311.13602.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:02b608e73a7e5c8000cc6550d06a8ed2a9987e67dd66a595a189328ba0adb715 +size 967824 diff --git a/data/2311.13608.png b/data/2311.13608.png new file mode 100644 index 0000000000000000000000000000000000000000..668af696d93a88888cc1a3809fd998b5354f81b0 --- /dev/null +++ b/data/2311.13608.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:959698d3bfd64fd02ca91ecb3fd7b6fdd0c68df7730a4426c9d33987262dd295 +size 738272 diff --git a/data/2311.13612.png b/data/2311.13612.png new file mode 100644 index 0000000000000000000000000000000000000000..84adeaca410a7ff3bdbe895d816813b47f2da056 --- /dev/null +++ b/data/2311.13612.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bee17c008d318d334aa671483310d5c08feb63961a7ea43b8cc8e99ddde62e34 +size 839101 diff --git a/data/2311.13613.png b/data/2311.13613.png new file mode 100644 index 0000000000000000000000000000000000000000..219440cea792afaaf7f60800d36a20805a4ebd53 --- /dev/null +++ b/data/2311.13613.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:48a4d32d65e5e27000a6cde943687cb1ac0782f13840b1c227c6dc44433fea82 +size 823242 diff --git a/data/2311.13614.png b/data/2311.13614.png new file mode 100644 index 0000000000000000000000000000000000000000..007d4c9efd2ae2cad53d27b3c6d163d4ae91bf7c --- /dev/null +++ b/data/2311.13614.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:71140602981b11fdd68e86e3c8c599ec0329931a140cf75e52a4310ce92f9985 +size 859219 diff --git a/data/2311.13657.png b/data/2311.13657.png new file mode 100644 index 0000000000000000000000000000000000000000..7d000fa0c4d5fba2247e78118dab686b9c4ee82f --- /dev/null +++ b/data/2311.13657.png @@ -0,0 +1,3 
@@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5df7daaa78a43b7f0a85a2a6e3d3ff2d79574890af9518a1603ca6845859f251 +size 812693 diff --git a/data/2311.13681.png b/data/2311.13681.png new file mode 100644 index 0000000000000000000000000000000000000000..2f83032f5a0bdd54eaa960cece100d88b8696af9 --- /dev/null +++ b/data/2311.13681.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:17d556ea1d0c6dfb7ef071f83ec0470425b941d9da701cedf6905af98d78e7fa +size 1271077 diff --git a/data/2311.13793.png b/data/2311.13793.png new file mode 100644 index 0000000000000000000000000000000000000000..d2303291b83b12eb2f037c97a7eb0c6b7f89fe2d --- /dev/null +++ b/data/2311.13793.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7f0a6579a12f9266289ae025219ff9c7fbd7d5d84be58325f477810a19da375c +size 909886 diff --git a/data/2311.13831.png b/data/2311.13831.png new file mode 100644 index 0000000000000000000000000000000000000000..43eee5d31136bf17bf64c41a54eef33dc854c6f5 --- /dev/null +++ b/data/2311.13831.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b777a9d4d1b79d82cedce560a239a75ddb31a0b35e89aa677589bea744447017 +size 1466553 diff --git a/data/2311.14097.png b/data/2311.14097.png new file mode 100644 index 0000000000000000000000000000000000000000..352370d8f8ab839b435fe42f11965cfe1dd0603b --- /dev/null +++ b/data/2311.14097.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e6cb22316db5d855b30f828ed335f15f9dd82e3f072a4f3d91454bdf15a59fa +size 756644 diff --git a/data/2311.14155.png b/data/2311.14155.png new file mode 100644 index 0000000000000000000000000000000000000000..c1a44844a19226d0760705d6f6cc728ca07f8d34 --- /dev/null +++ b/data/2311.14155.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e9adeb27e11ff3958d45a1be2f83eec4f57561de2a4d709bdb5ff93a9329a102 +size 1204362 diff --git a/data/2311.14402.png b/data/2311.14402.png new file mode 100644 index 0000000000000000000000000000000000000000..e876392560b95286939cd9401f383211a6863ef8 --- /dev/null +++ b/data/2311.14402.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd2f21173ab8491f41c56eb194507efba91d6e860205938433261d8534384d95 +size 715944 diff --git a/data/2311.14405.png b/data/2311.14405.png new file mode 100644 index 0000000000000000000000000000000000000000..eb7769ad904f922e29ae8b96bb37012cbb33235f --- /dev/null +++ b/data/2311.14405.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:13c1a9864fb96285f941ad840627642d5e39640539ccf28b00b1a2987d3363a6 +size 692395 diff --git a/data/2311.14521.png b/data/2311.14521.png new file mode 100644 index 0000000000000000000000000000000000000000..5d4d43ce886128618163aa53e05244d467ea2551 --- /dev/null +++ b/data/2311.14521.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:49e5cf2c2e5efc093ef4a6bbfd2f815e5381df94dde0242d6da47e9d34193b1c +size 1578069 diff --git a/data/2311.14749.png b/data/2311.14749.png new file mode 100644 index 0000000000000000000000000000000000000000..118c83cf7a73fb149f247b17d813d85570076086 --- /dev/null +++ b/data/2311.14749.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ccccef7e36b692c8cd918c21615df6e8db3a4f0637b9b4856fea1745735f719e +size 946346 diff --git a/data/2311.14757.png b/data/2311.14757.png new file mode 100644 index 0000000000000000000000000000000000000000..7490b57b26004e757d4ab45617943b407d7dd604 --- /dev/null +++ b/data/2311.14757.png @@ -0,0 +1,3 
@@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9c0abaede382a1b4ae38ada61b199bbd07a346cf5fbc87f8c0e32dee843c1eff +size 1271145 diff --git a/data/2311.14758.png b/data/2311.14758.png new file mode 100644 index 0000000000000000000000000000000000000000..2958876277cc122b9a746eaf2e916cd88d1b0829 --- /dev/null +++ b/data/2311.14758.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:70c3eb9ad48d9749b2783626d206eed0d84203eff4ab24326bd22f4764930636 +size 855025 diff --git a/data/2311.14760.png b/data/2311.14760.png new file mode 100644 index 0000000000000000000000000000000000000000..b3f74ef02270e6b85d9a5fd60ecdde603ca1d15c --- /dev/null +++ b/data/2311.14760.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c09a3ba8c620e85d889df9db59cce8a44a0e913bf0146d7adb4474f589604661 +size 1109844 diff --git a/data/2311.14897.png b/data/2311.14897.png new file mode 100644 index 0000000000000000000000000000000000000000..fa0083567b769758e151764e4d038bfbe69efb58 --- /dev/null +++ b/data/2311.14897.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7cb72e3197f113a79c70459ffa0f8535bea4ae942afe5739da6f2c0cc0cad89f +size 765342 diff --git a/data/2311.14960.png b/data/2311.14960.png new file mode 100644 index 0000000000000000000000000000000000000000..9b1e52dc11af691cbbc5a86c482cf59154e1a12b --- /dev/null +++ b/data/2311.14960.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a5b6373e94895775689fc6733e4cbdd8e0dc5dc6753fa09b4974448dc468964a +size 832287 diff --git a/data/2311.15011.png b/data/2311.15011.png new file mode 100644 index 0000000000000000000000000000000000000000..c8b481a0cab688571075f3c2ffa999114bc946a5 --- /dev/null +++ b/data/2311.15011.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e0166ffca83c79b14bcc2a902e87b3adc1a71a2d0f34721aa4a9fcf8b8c7eb5 +size 838963 diff --git a/data/2311.15206.png b/data/2311.15206.png new file mode 100644 index 0000000000000000000000000000000000000000..9706f5c0c8182cccca77420d06fada718a4d9d28 --- /dev/null +++ b/data/2311.15206.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:97a32fcb0b5d5ebe14b6679158c1a4e1d251b457f265824f8585c5908c7b83f3 +size 929056 diff --git a/data/2311.15243.png b/data/2311.15243.png new file mode 100644 index 0000000000000000000000000000000000000000..ea6da3da350aff4c0884ecf3c925019bf9b1b5e0 --- /dev/null +++ b/data/2311.15243.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6243f4b73603ed4001f95eca1295d256e77595f686d688089f93b2440d12b026 +size 975771 diff --git a/data/2311.15260.png b/data/2311.15260.png new file mode 100644 index 0000000000000000000000000000000000000000..2ddc42ff849868ef0321703656e21ddebbde1828 --- /dev/null +++ b/data/2311.15260.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:79d436addfadcd9be7f015408a09663bc812c4c7c4e8a195e5d25903720abb85 +size 1721102 diff --git a/data/2311.15264.png b/data/2311.15264.png new file mode 100644 index 0000000000000000000000000000000000000000..638ce28d49a6d66dbfe47d705d802c36af753527 --- /dev/null +++ b/data/2311.15264.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8ac23d35ba0deffa2516eaaca20fa5c7838cfe7acf644caa4881089197b832bb +size 731535 diff --git a/data/2311.15383.png b/data/2311.15383.png new file mode 100644 index 0000000000000000000000000000000000000000..a4cfc391c127ab8caf931f3d172ff6f2a2e9f8ca --- /dev/null +++ b/data/2311.15383.png @@ -0,0 +1,3 
@@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b402248fb20f4738673893b63eaff08281dbf72a252b975c41e21f9855f9816c +size 884710 diff --git a/data/2311.15421.png b/data/2311.15421.png new file mode 100644 index 0000000000000000000000000000000000000000..f0d71ce6603e7d79d2881c0ee4cd7005e69a69e1 --- /dev/null +++ b/data/2311.15421.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7f2c6a2bcabb98b0d0f305c055fd3a0b0b73c3095fafeaeefef9e1e7879d5317 +size 865983 diff --git a/data/2311.15435.png b/data/2311.15435.png new file mode 100644 index 0000000000000000000000000000000000000000..810df4513e3b89434b4056aea1c6db62b6aa3d6c --- /dev/null +++ b/data/2311.15435.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d8b4e6fc69b305d982022769941486f29436c8222559629e1d80257a2808b9cb +size 1029822 diff --git a/data/2311.15475.png b/data/2311.15475.png new file mode 100644 index 0000000000000000000000000000000000000000..1629a2dd632427607c8223939cb4affc2fb55417 --- /dev/null +++ b/data/2311.15475.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:46f75bfeaf6909638bc8a3e3b0c7ca843760bfa306a38009affcf61186fdc737 +size 982304 diff --git a/data/2311.15529v1.png b/data/2311.15529v1.png new file mode 100644 index 0000000000000000000000000000000000000000..9537725b2b1fe4a2cfd444c72c04bc2a2a3ee473 --- /dev/null +++ b/data/2311.15529v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6cff5816af654bebedfe47bbf4c1a042c0f5dd2be5ed86cfe55066026e117229 +size 708653 diff --git a/data/2311.15537.png b/data/2311.15537.png new file mode 100644 index 0000000000000000000000000000000000000000..aed451a30069f880cd187e02abbb083c36ba3b0c --- /dev/null +++ b/data/2311.15537.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:173d8b0a38af52b0cdce22994f72c3aa53bfae4c6766f529def7a5ef6709da23 +size 751187 diff --git a/data/2311.15596.png b/data/2311.15596.png new file mode 100644 index 0000000000000000000000000000000000000000..ec57b0b35f1dc0fc40757aea547c58658b1c3c49 --- /dev/null +++ b/data/2311.15596.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cc8ff1b9961f25a1a5231b3f7bfbb571534e7e82e2f2bda694f9bf10575d7ed9 +size 741579 diff --git a/data/2311.15599.png b/data/2311.15599.png new file mode 100644 index 0000000000000000000000000000000000000000..f57f6957b3a30968cddd9327f53fcda1a0f29ac6 --- /dev/null +++ b/data/2311.15599.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:34be9f001ca223df305b7b1d957699320e724f6daf267dc1b971d3d2a471613f +size 812380 diff --git a/data/2311.15619.png b/data/2311.15619.png new file mode 100644 index 0000000000000000000000000000000000000000..dca19ecf99d6198ce7c53e9aec979c4c87a2decd --- /dev/null +++ b/data/2311.15619.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7bb91f6b9b37c5b1c67aafacaaab4edda5a0354dcd6fddc160a21d6bc056bf90 +size 822129 diff --git a/data/2311.15637.png b/data/2311.15637.png new file mode 100644 index 0000000000000000000000000000000000000000..9694745fd88270171ebf7ca8b714f1f8fc169232 --- /dev/null +++ b/data/2311.15637.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cf88c9dfa027c22a46bb9c117e31f3ab6a4af3c385a64ea90e5ac040ee435a7e +size 1263795 diff --git a/data/2311.15672.png b/data/2311.15672.png new file mode 100644 index 0000000000000000000000000000000000000000..ac1045f25dd0877166ab8cdb2297b00b7f1c04e1 --- /dev/null +++ b/data/2311.15672.png @@ -0,0 
+1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:341cea411eebf2e4e9cafec07f549ff5bbf47646d2528725978e07eeefd98349 +size 970882 diff --git a/data/2311.15707.png b/data/2311.15707.png new file mode 100644 index 0000000000000000000000000000000000000000..fe49aa875e67e551b4161c75c3ded01a73fb1a7d --- /dev/null +++ b/data/2311.15707.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ba01b6c22f68efe4351f766b38efbe6c6d34a978ac4bae4a973af87e702b6750 +size 1494771 diff --git a/data/2311.15744.png b/data/2311.15744.png new file mode 100644 index 0000000000000000000000000000000000000000..52cbafd268b068259b8b72dec9d1ca0ad96489b6 --- /dev/null +++ b/data/2311.15744.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9564d1a74df40a38f12d3bc1590b70fe6e42a5a3ac5ecf741942a4aac2d28d51 +size 1376290 diff --git a/data/2311.15773.png b/data/2311.15773.png new file mode 100644 index 0000000000000000000000000000000000000000..e38cfb0bca09e4d86d3cb02e12f331988e98b9fa --- /dev/null +++ b/data/2311.15773.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c27f55052415d7bb526b16b29b2175e997a659535a20a6532df9de07f4e46081 +size 1387541 diff --git a/data/2311.15803.png b/data/2311.15803.png new file mode 100644 index 0000000000000000000000000000000000000000..0b6073c1a5deaa3443dda9d5cea0ecd32c61e601 --- /dev/null +++ b/data/2311.15803.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b3baa63e032e7cc1a00bd7ece29f8971670f7fd3f89af92a79933892101694c +size 715691 diff --git a/data/2311.15826.png b/data/2311.15826.png new file mode 100644 index 0000000000000000000000000000000000000000..68575c865951aa898b28706146471868ba37799e --- /dev/null +++ b/data/2311.15826.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5a22bdba498eaa98ca336147e55b32f6b060f941b9e5c59502b5b31ba8548b84 +size 996872 diff --git a/data/2311.15841.png b/data/2311.15841.png new file mode 100644 index 0000000000000000000000000000000000000000..46a812a25a7698aa2f0f76a2f78d4f52ca38e681 --- /dev/null +++ b/data/2311.15841.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:51d759121e5b0369d95a5cb375bf5d0211caad8222f537d14a57854cd5a902c7 +size 1812845 diff --git a/data/2311.15851.png b/data/2311.15851.png new file mode 100644 index 0000000000000000000000000000000000000000..51665cbc1ffa8460ebda6a01fd92ef4bf9a0d6b8 --- /dev/null +++ b/data/2311.15851.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:99127edc3a2ccf0cb64d39325f580323b7f6269c9c2ee3292035571839dc814f +size 893982 diff --git a/data/2311.15855.png b/data/2311.15855.png new file mode 100644 index 0000000000000000000000000000000000000000..e781cb27e30e68718ce2dad0c233bbd0dc4755f7 --- /dev/null +++ b/data/2311.15855.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:82be15ffa0bcb39f6038396e744f4d9e47e50c1ba70ca4f15b29a499fb15010c +size 992812 diff --git a/data/2311.15879v2.png b/data/2311.15879v2.png new file mode 100644 index 0000000000000000000000000000000000000000..3c64991c93b91f9f5c2a2d06c86abb1dcbb6433e --- /dev/null +++ b/data/2311.15879v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:68ae195a665e7e50bb1ff8fdad914f34594097d2122b277a63d7e32cd61034f4 +size 825573 diff --git a/data/2311.15937.png b/data/2311.15937.png new file mode 100644 index 0000000000000000000000000000000000000000..33149d146449b5cad2c54a0208143efd9e91a9ed --- /dev/null +++ b/data/2311.15937.png @@ 
-0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:63f3805134682eac6e8cf6546cff8a90afb17b3c4a4ee59a85f74ec071965955 +size 876611 diff --git a/data/2311.15939.png b/data/2311.15939.png new file mode 100644 index 0000000000000000000000000000000000000000..d2e3eda68e556cc9fd50b5ee588d265dd5184f60 --- /dev/null +++ b/data/2311.15939.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:491e6269bd85af33d8a7692319b23f51f6c8cd06cc522c71c57b892e17746ce0 +size 881623 diff --git a/data/2311.15977.png b/data/2311.15977.png new file mode 100644 index 0000000000000000000000000000000000000000..3abc92bc9509a1b153a8306489b604e2588290c5 --- /dev/null +++ b/data/2311.15977.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:73decb25670d127ef4d63672ed999fa43d7212f0e64d3caafa2a714b50811ad7 +size 877378 diff --git a/data/2311.15980.png b/data/2311.15980.png new file mode 100644 index 0000000000000000000000000000000000000000..90fa4f754b0d28a2e50f604a390247aa4429188d --- /dev/null +++ b/data/2311.15980.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6140704b62d825362f125ba2d601add4c4b2011394e09b39cae30292c9ddcfcb +size 712378 diff --git a/data/2311.16037.png b/data/2311.16037.png new file mode 100644 index 0000000000000000000000000000000000000000..cc7d2e2770bf0ca73ee89abf3d6b7f8fb9bb9ba6 --- /dev/null +++ b/data/2311.16037.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d911ec6bff786b89f851c36f49ab45c1edcebc9f556c1f55fb2c32d4833850ff +size 1028544 diff --git a/data/2311.16081.png b/data/2311.16081.png new file mode 100644 index 0000000000000000000000000000000000000000..405a9addeac69b6bf524f659a76dcdcc1a8d00a6 --- /dev/null +++ b/data/2311.16081.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1addb675bbc76125e6593f97fdc9a2dfc405db977642b4e64eae644239d982de +size 815634 diff --git a/data/2311.16090.png b/data/2311.16090.png new file mode 100644 index 0000000000000000000000000000000000000000..79f7095b227016641b4b27b00c2d18fc8fe97f33 --- /dev/null +++ b/data/2311.16090.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1e3930d6de30d66f96c59dc431e534dbe697e38adab95135dfbb9f8c30b8ef5b +size 1434139 diff --git a/data/2311.16096.png b/data/2311.16096.png new file mode 100644 index 0000000000000000000000000000000000000000..923bce5ecda1047874eb5ea660e49fda491ddb89 --- /dev/null +++ b/data/2311.16096.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f28ac7457e8b968bf078e0358a6a8064b70159133470e57bbc85545958364ee8 +size 952423 diff --git a/data/2311.16097v2.png b/data/2311.16097v2.png new file mode 100644 index 0000000000000000000000000000000000000000..4a6b525541eff3591f02dd29b170f70ea0527748 --- /dev/null +++ b/data/2311.16097v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:35aab2a597674fc65c6a2d2b6959cf0126294e1b404cf11c592095794253918c +size 1057773 diff --git a/data/2311.16099.png b/data/2311.16099.png new file mode 100644 index 0000000000000000000000000000000000000000..7451c9778c7bb9d75b25b8836855e4a6274ea1d2 --- /dev/null +++ b/data/2311.16099.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:daf85bd11a20185b6442a3fe9cfeb48c2eeb1b44c004fbe02e455bc66b787559 +size 1055938 diff --git a/data/2311.16117.png b/data/2311.16117.png new file mode 100644 index 0000000000000000000000000000000000000000..dccfda4901a3d721aec9e0582308b5a6927f8779 --- /dev/null +++ 
b/data/2311.16117.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1fc7311e0f7766fdc10d800ebf94fa136694b94a8c0fd4d4abe7f4c0ba7ea589 +size 924463 diff --git a/data/2311.16194.png b/data/2311.16194.png new file mode 100644 index 0000000000000000000000000000000000000000..c61f0bf781f7e84f23fac701b4ce6406d125c51b --- /dev/null +++ b/data/2311.16194.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c66242b6fba484260ad9de7867456dd64461b808ad0a7e6a529f544b3bb2334c +size 794299 diff --git a/data/2311.16304.png b/data/2311.16304.png new file mode 100644 index 0000000000000000000000000000000000000000..eee1a869d99fe9c24d80dc71e757f8f480d2167b --- /dev/null +++ b/data/2311.16304.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce3399c38d14eb326ac4b41d34252760bda538ef68637e3a41ceebabc5985de6 +size 785862 diff --git a/data/2311.16420.png b/data/2311.16420.png new file mode 100644 index 0000000000000000000000000000000000000000..75f0489680e3b0235b69f1357e5628ca6df20c1d --- /dev/null +++ b/data/2311.16420.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:79d8466a365ea7ae46991d4e46571def7a40f4e972c2179dacac59a2bff93aae +size 996774 diff --git a/data/2311.16432.png b/data/2311.16432.png new file mode 100644 index 0000000000000000000000000000000000000000..96e91d39ae5a7bd71cf7f1591b5a6c47aab67f59 --- /dev/null +++ b/data/2311.16432.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d3e6e2fcc633ae25501b22f002ed468302352070e553c65f82a6ea42dea457d2 +size 1395617 diff --git a/data/2311.16464.png b/data/2311.16464.png new file mode 100644 index 0000000000000000000000000000000000000000..379ff6335336ebe7bfbfa283a44b076393feb5f6 --- /dev/null +++ b/data/2311.16464.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e0119558cc12ed4b5d4b770bc35743dbec13ccb1d6639ca72a333c31ba4afcd +size 796889 diff --git a/data/2311.16473.png b/data/2311.16473.png new file mode 100644 index 0000000000000000000000000000000000000000..1ca9c5a8b66e4914bccf61c8ee016df34a2225fc --- /dev/null +++ b/data/2311.16473.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3b41a7ebac7498d0d9c0d07588320c3dbc013cdbc5d1b44bee946c710a574eca +size 923405 diff --git a/data/2311.16491.png b/data/2311.16491.png new file mode 100644 index 0000000000000000000000000000000000000000..a070ea82fef398c4d19efd417f22cb4a369765b0 --- /dev/null +++ b/data/2311.16491.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:987634972f9d67af31de36068311c942ca4a3c33d35de68149e4f511c94b87a1 +size 803230 diff --git a/data/2311.16493.png b/data/2311.16493.png new file mode 100644 index 0000000000000000000000000000000000000000..6275fa980cb6174bf3056a4fd3497816992cb944 --- /dev/null +++ b/data/2311.16493.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d610641a8b8869240b1ccf976720018a093c7b6eace7df143381ba5c55c7c9a +size 992301 diff --git a/data/2311.16494.png b/data/2311.16494.png new file mode 100644 index 0000000000000000000000000000000000000000..e4b4217a8fce179b212014ae55e422de16012bf4 --- /dev/null +++ b/data/2311.16494.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9fd7ec86945d72891aec356c2a4f774fdc2e9a5617a488f2243269e6b875cbfc +size 790857 diff --git a/data/2311.16495.png b/data/2311.16495.png new file mode 100644 index 0000000000000000000000000000000000000000..793830c775c6d26e23432703bac90cbbee450986 --- /dev/null +++ 
b/data/2311.16495.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6957b5ce751e8eb7d072b7361bd88f90188c60a289b5fe861772f249a35c5be2 +size 946153 diff --git a/data/2311.16498.png b/data/2311.16498.png new file mode 100644 index 0000000000000000000000000000000000000000..96daba439ce7affee42cbcb6a9d2f15667588628 --- /dev/null +++ b/data/2311.16498.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:466e47aebdfee9a6e358dd4c7eb147d0c3b74086e232e61e33a638df06b14c25 +size 1126728 diff --git a/data/2311.16500.png b/data/2311.16500.png new file mode 100644 index 0000000000000000000000000000000000000000..4b9cd36c8aeda4ae1c3fb27d5c6c73bc15fae685 --- /dev/null +++ b/data/2311.16500.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:212a74512d51102170a687cdc4a4ac2ab3f0b90b359991ae51a846d7f19c87c6 +size 437468 diff --git a/data/2311.16502.png b/data/2311.16502.png new file mode 100644 index 0000000000000000000000000000000000000000..0c90163f337891b201291ec3f78549cf2121adea --- /dev/null +++ b/data/2311.16502.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0710f50a793118eb329939a279fe4a6b1ac8c2923baffdd0560fd9413f30fb58 +size 820347 diff --git a/data/2311.16503.png b/data/2311.16503.png new file mode 100644 index 0000000000000000000000000000000000000000..931357bbc8c5e29598246f1fe4fb1354220aa012 --- /dev/null +++ b/data/2311.16503.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:007892de13a1d1e8beef0ce07f0da9158d5b35a1e90bcf9f9544dbc2f89ace06 +size 828976 diff --git a/data/2311.16510.png b/data/2311.16510.png new file mode 100644 index 0000000000000000000000000000000000000000..7b3f82eac228259bb6dc44be1e0c5682887e926e --- /dev/null +++ b/data/2311.16510.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1acaa8080ed591be66729fee6d03c0d1042a1f620c4260226127b19a52ade773 +size 720887 diff --git a/data/2311.16512.png b/data/2311.16512.png new file mode 100644 index 0000000000000000000000000000000000000000..3deea526ccf3851de7d735bb21f2d38305ea118f --- /dev/null +++ b/data/2311.16512.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:47d2bcd8b44762c6ef82bd6c6bf99e5e4b11f88fac796ed3e4de61ba56121362 +size 1747609 diff --git a/data/2311.16516.png b/data/2311.16516.png new file mode 100644 index 0000000000000000000000000000000000000000..49dbe47d71d6f746e1e2badbf3348fdd1c163193 --- /dev/null +++ b/data/2311.16516.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:617c9cb079292feb1a78f05638aab1d4785fbfcc182f476f772abb97604b4a08 +size 1933536 diff --git a/data/2311.16518.png b/data/2311.16518.png new file mode 100644 index 0000000000000000000000000000000000000000..02bd9835ea1755b24c83fd1a5eae730597242ba9 --- /dev/null +++ b/data/2311.16518.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:100255f98493e0473418d04bca405620a3c4e01bbf063e80554c7055af6f6e7b +size 818491 diff --git a/data/2311.16682.png b/data/2311.16682.png new file mode 100644 index 0000000000000000000000000000000000000000..d07504a166d560fe7ff7bb57374f2441f7815c5e --- /dev/null +++ b/data/2311.16682.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:76a3d37e828b17a001c94e69fa46a48a9bbf8e04b1d121d48b84dc959ac659c2 +size 746667 diff --git a/data/2311.16703.png b/data/2311.16703.png new file mode 100644 index 0000000000000000000000000000000000000000..da63d3edae64865b4ec0a45f25615fd3465163e9 --- /dev/null +++ 
b/data/2311.16703.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d0b716f6f379d0c45e2f916c856ca3e8364698773d6e1e4017b52a283a082279 +size 795660 diff --git a/data/2311.16707.png b/data/2311.16707.png new file mode 100644 index 0000000000000000000000000000000000000000..fe0ec959b4ab07ca60642a0336a39371ec1cb398 --- /dev/null +++ b/data/2311.16707.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bf1bedb5b30982c6d5ce21a12467a4bd7fe9c62195dde44013b6d15e068dfb25 +size 881941 diff --git a/data/2311.16711.png b/data/2311.16711.png new file mode 100644 index 0000000000000000000000000000000000000000..65b40c07c8599213418acd7de0b00dfae08f4d57 --- /dev/null +++ b/data/2311.16711.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f62b5183ce95c67fd1419a9bfa9f6c1666e3e6f9b689d5f68e49609e0cafddc8 +size 1101365 diff --git a/data/2311.16714v1.png b/data/2311.16714v1.png new file mode 100644 index 0000000000000000000000000000000000000000..e3462453da3b065bbb74fbb7310468f5755289c5 --- /dev/null +++ b/data/2311.16714v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cde982b568eb7741f3a1d13cdff67a1a45a119435d698064bf3c2f858e04b1db +size 1173322 diff --git a/data/2311.16728.png b/data/2311.16728.png new file mode 100644 index 0000000000000000000000000000000000000000..1205ec434286c58e826b11df11d93ce607ac15d0 --- /dev/null +++ b/data/2311.16728.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6084f876bd6c27a74f4cbe89a2decc3c754ab0ec2af7c28f0617aa670618628c +size 1228932 diff --git a/data/2311.16739.png b/data/2311.16739.png new file mode 100644 index 0000000000000000000000000000000000000000..b9bc76a09004b8a62b2f8887acccf8d9b802f11d --- /dev/null +++ b/data/2311.16739.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f9af2baaf884d93fa4e49e02cd4f0a6b75ef163c7a5af547a8f4a025a20cb5f4 +size 954033 diff --git a/data/2311.16813.png b/data/2311.16813.png new file mode 100644 index 0000000000000000000000000000000000000000..694fb37c2de345f851cb92692f37b3ff475723e0 --- /dev/null +++ b/data/2311.16813.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:66c47f7bd235c04c46d6745451f63826c0b5691a92753485a7feda87e04f4895 +size 1430293 diff --git a/data/2311.16833.png b/data/2311.16833.png new file mode 100644 index 0000000000000000000000000000000000000000..6b617deed6b250f9edc78e7e6de295eedaff9b72 --- /dev/null +++ b/data/2311.16833.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61ef24cd5376c446a63f881d927032d0b2d6487be5c1493f26779afd76ed1b00 +size 690193 diff --git a/data/2311.16845.png b/data/2311.16845.png new file mode 100644 index 0000000000000000000000000000000000000000..e458ffdd63cebefabaefad817a68a8e1d94b10ce --- /dev/null +++ b/data/2311.16845.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f815553a78a5e4527236a41bfe338c8ed9e1ddbfd50c1e5f458ac2bf54acb8aa +size 961691 diff --git a/data/2311.16854.png b/data/2311.16854.png new file mode 100644 index 0000000000000000000000000000000000000000..55cd941732f7bdb7b699df7d1455e6bef6cd0f46 --- /dev/null +++ b/data/2311.16854.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:912cebc2a4aa6d5510906536412944b68c02ce0cd8270ea6a75524ec41430631 +size 1024129 diff --git a/data/2311.16918v1.png b/data/2311.16918v1.png new file mode 100644 index 0000000000000000000000000000000000000000..de282b9c514ccd8794e88e9f72ad13fe3c589d32 --- 
/dev/null +++ b/data/2311.16918v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:77d702df0b60a48eaa236ea5c1bef19684a12ecd6e7149fe1a7a36d438721566 +size 1388572 diff --git a/data/2311.16922.png b/data/2311.16922.png new file mode 100644 index 0000000000000000000000000000000000000000..342e9738cb7a1cbe68200229e1f466a3f2b90301 --- /dev/null +++ b/data/2311.16922.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8bf0fd13085acf3aa4b2ab72b60ac4680e9199493701f6318f3898d437f92ccf +size 815604 diff --git a/data/2311.16926.png b/data/2311.16926.png new file mode 100644 index 0000000000000000000000000000000000000000..bcb0ef414fbf97f5e3a6f06b8d6fa6b2fae4cae8 --- /dev/null +++ b/data/2311.16926.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e437d0dd91108d02b07bfb684e6c8fd5e8b9174e9a4aeeedbf5b82946ea3f3ac +size 831407 diff --git a/data/2311.16961v1.png b/data/2311.16961v1.png new file mode 100644 index 0000000000000000000000000000000000000000..2a868e0afa242481e6b422685cf31359e4e247a0 --- /dev/null +++ b/data/2311.16961v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:73e467338cdd793e934291efcd5038706e2016260c4b2fe437427a6bc1c20de7 +size 778647 diff --git a/data/2311.16973.png b/data/2311.16973.png new file mode 100644 index 0000000000000000000000000000000000000000..114f6a332911d6170eb0a57940ea69052632431b --- /dev/null +++ b/data/2311.16973.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4ebe74c97a001d128b98a6c8eb2052bddf49bb2aa76e818641ac4e28cac84faa +size 2293464 diff --git a/data/2311.17002.png b/data/2311.17002.png new file mode 100644 index 0000000000000000000000000000000000000000..a685c367e4498306b1f4736447e29ce9a2b7681b --- /dev/null +++ b/data/2311.17002.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d56dede525dd18e0c94e7e0cf201c21c789ee705fdd779b762cae541562540ef +size 2186739 diff --git a/data/2311.17005.png b/data/2311.17005.png new file mode 100644 index 0000000000000000000000000000000000000000..c6881a69aebefc93510409d516be25b1ee5e1d34 --- /dev/null +++ b/data/2311.17005.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:795f363aa3741a431d25df7f70f99b35d65042967e1122ad214b5b04cef4b237 +size 924062 diff --git a/data/2311.17009.png b/data/2311.17009.png new file mode 100644 index 0000000000000000000000000000000000000000..e990bd3c6bad6dd82198b163484b02608cf5ab49 --- /dev/null +++ b/data/2311.17009.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:563b1af064b566a23871cd653588c3bf0b5b2fd87390788110f642638661a7f0 +size 1480668 diff --git a/data/2311.17024.png b/data/2311.17024.png new file mode 100644 index 0000000000000000000000000000000000000000..ad9230a9b93fbf2e8b528a640183aa32289f7f50 --- /dev/null +++ b/data/2311.17024.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9bcb5d8ee0afc2328551c1c58a3a3befb0f699b58a341084abf4088a12984100 +size 967939 diff --git a/data/2311.17034.png b/data/2311.17034.png new file mode 100644 index 0000000000000000000000000000000000000000..b7b76b4a3551ba5834195ba307ffc71a0a6d5ef5 --- /dev/null +++ b/data/2311.17034.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:68704574f4157393d906bebc366f8849b35d2e0c39325bd2853cfe3c600c7d34 +size 1025923 diff --git a/data/2311.17048.png b/data/2311.17048.png new file mode 100644 index 
0000000000000000000000000000000000000000..bc4832883788561b9aa69e8c3707d8ebf7ff8f8b --- /dev/null +++ b/data/2311.17048.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0e926bfe84e9abac81e04e248a09b98803e1906d42dc3fc18b1fa335de9a7d6c +size 994047 diff --git a/data/2311.17049.png b/data/2311.17049.png new file mode 100644 index 0000000000000000000000000000000000000000..1018e8bc6f907e88aa893463f66e6673187de6b2 --- /dev/null +++ b/data/2311.17049.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:232286df1bf55a5b063e18681b1d7c0b04fce231c74c0b637bb529bb832167b6 +size 710753 diff --git a/data/2311.17060v1.png b/data/2311.17060v1.png new file mode 100644 index 0000000000000000000000000000000000000000..7931ce455ca5004345c1d9084b19bc33f4da3ee6 --- /dev/null +++ b/data/2311.17060v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:284431a5eadc1065b651bfb0cd1420ab4dd53b80d2557de84dfbc84fddb1a6af +size 1063143 diff --git a/data/2311.17061.png b/data/2311.17061.png new file mode 100644 index 0000000000000000000000000000000000000000..60de1a4d86a84c5193fab83d5e75da17536c23c0 --- /dev/null +++ b/data/2311.17061.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cddc7a9620b47147076bf746546b4229a84c58c7b65832f4933a69ac9aae6988 +size 1448044 diff --git a/data/2311.17076.png b/data/2311.17076.png new file mode 100644 index 0000000000000000000000000000000000000000..0518f541b94076db9ebd305825cf5a8c9dbdbfc3 --- /dev/null +++ b/data/2311.17076.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:25ffcfca868336d68e248ffd0681b07868666bf5422ea7868360f7f5ee2a4ec7 +size 798040 diff --git a/data/2311.17082.png b/data/2311.17082.png new file mode 100644 index 0000000000000000000000000000000000000000..85a3cbd6bf2592395a69b95005c883a550a28d52 --- /dev/null +++ b/data/2311.17082.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eb3f65029490336bfa6e1d0f3450a026bbdb69b9296b172e08ecdad208de5092 +size 896994 diff --git a/data/2311.17083.png b/data/2311.17083.png new file mode 100644 index 0000000000000000000000000000000000000000..b83e4241dc1eaa04a8964411cf6be26316791263 --- /dev/null +++ b/data/2311.17083.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ee99ea8d082ff3dc95ccfb90037abe52dba1f5663cf4baa9e2e897f4ba7be767 +size 1032570 diff --git a/data/2311.17089.png b/data/2311.17089.png new file mode 100644 index 0000000000000000000000000000000000000000..029337693176123def2c61537a162f99e054b68c --- /dev/null +++ b/data/2311.17089.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8801fe65dc5445f31801489e37664822d93966578e594f167a6b1c57f0a0497b +size 1622276 diff --git a/data/2311.17094.png b/data/2311.17094.png new file mode 100644 index 0000000000000000000000000000000000000000..93ce5f52d9448cee6ffbdccfebee5b79a7d8bd61 --- /dev/null +++ b/data/2311.17094.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:42dce0c2104c26a1c033f9f45cbc26325e7c1c0e0421302b8f1966b9b2d8c709 +size 817981 diff --git a/data/2311.17095v1.png b/data/2311.17095v1.png new file mode 100644 index 0000000000000000000000000000000000000000..80c993c978d7a87c60f7a292b2bb83858d96a7c1 --- /dev/null +++ b/data/2311.17095v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eaa3545ee6d17d5234cd3a19b7715b2447c89649f35f4e2be83fff3aaf4c1e09 +size 1101502 diff --git a/data/2311.17112.png b/data/2311.17112.png new file mode 
100644 index 0000000000000000000000000000000000000000..2c036873bf4bdc2b339646f316ba9bf18598a517 --- /dev/null +++ b/data/2311.17112.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6efe048c8bd28e7470cf0d266d55c18926dac7b11f9218cc826df3b700f67b8f +size 758863 diff --git a/data/2311.17113.png b/data/2311.17113.png new file mode 100644 index 0000000000000000000000000000000000000000..7d821c91bddae36988984eac845fc94179e23ff4 --- /dev/null +++ b/data/2311.17113.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:719e3674b70ff0758ea38db3f459d41869b8c7931bf912466d55b7a0946dc1e0 +size 802727 diff --git a/data/2311.17117.png b/data/2311.17117.png new file mode 100644 index 0000000000000000000000000000000000000000..f9058d6fe238062845be983bfdffa5ccfe1a802d --- /dev/null +++ b/data/2311.17117.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c1c7f5d7fd9e6c829b83cbe3b09694ce4c6fb02cb0b35c04c1846e1ee671ad87 +size 1210859 diff --git a/data/2311.17119.png b/data/2311.17119.png new file mode 100644 index 0000000000000000000000000000000000000000..d4aa0196580158d624fb674a3dbe620197cf6e6f --- /dev/null +++ b/data/2311.17119.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b9deeedf35d12487ad769fd6bfce6cdf6710533395c23a336f4eb0c62bd32c5e +size 762257 diff --git a/data/2311.17123.png b/data/2311.17123.png new file mode 100644 index 0000000000000000000000000000000000000000..1858346e8decb171ef74a81188746fea5451ae96 --- /dev/null +++ b/data/2311.17123.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61bad4eceb89d0bf1c326a903c95c1ab9964f5dd04d7365ee0f2d51e6607298e +size 1080056 diff --git a/data/2311.17132.png b/data/2311.17132.png new file mode 100644 index 0000000000000000000000000000000000000000..2e4649977b2b49cedd9f1f11f269e91d5f9b67c2 --- /dev/null +++ b/data/2311.17132.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd30a261475e483992abf0ab2e88c21efb460a6651b0be352303c968648839f2 +size 781484 diff --git a/data/2311.17138.png b/data/2311.17138.png new file mode 100644 index 0000000000000000000000000000000000000000..00735dbbb4f7b20e18eb55a1cdff501d8fc9f7bc --- /dev/null +++ b/data/2311.17138.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce993222708f586e974fa6cd54787efbb83ee2edca1b26074b40f66e1252a352 +size 1449456 diff --git a/data/2311.17216.png b/data/2311.17216.png new file mode 100644 index 0000000000000000000000000000000000000000..5a388baaef1476ddeba78bb005ecbcd07f5d1579 --- /dev/null +++ b/data/2311.17216.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1538f466e8039a84ad514400d0c7e5605fdde8f414bfb6c19f28eb7086b4348a +size 785856 diff --git a/data/2311.17241.png b/data/2311.17241.png new file mode 100644 index 0000000000000000000000000000000000000000..a32c73939dac9e7815ab887cd0c08565f3a2d662 --- /dev/null +++ b/data/2311.17241.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f87dbda05fd7eb9f6bbc4b1adad223b3290968bb49eb59bc089f0f9a84b8de2 +size 797262 diff --git a/data/2311.17261.png b/data/2311.17261.png new file mode 100644 index 0000000000000000000000000000000000000000..330ca52ff9ee6dba7960a20093911849fd533652 --- /dev/null +++ b/data/2311.17261.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f98430b6e5a965355d1dc293e28c3ab477112d8828a192c345e2da5dff5a0f5a +size 1131250 diff --git a/data/2311.17286.png b/data/2311.17286.png new file mode 
100644 index 0000000000000000000000000000000000000000..8e65e25a7c8fd4bc1e4bc8a85e327059d20bad7b --- /dev/null +++ b/data/2311.17286.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:651ac0c10c4f641d980bc9eb21b10e671976f82e76f4b7dd9094b4467bb5ccfa +size 758859 diff --git a/data/2311.17315.png b/data/2311.17315.png new file mode 100644 index 0000000000000000000000000000000000000000..16783516d3bf4614e217fc5a3bf540fb24b0f98f --- /dev/null +++ b/data/2311.17315.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0ea954920fa500b797545d6e8596bc1a87ddf032aba13ccef0fd2a04fe7dba43 +size 769967 diff --git a/data/2311.17320.png b/data/2311.17320.png new file mode 100644 index 0000000000000000000000000000000000000000..381051a9cb65d8fb4d83cc3ea364fe201e616091 --- /dev/null +++ b/data/2311.17320.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aa09ce74ec44d96821323b836c4d958bdb43217dd2ea35ad099bd46481fa9cf2 +size 739016 diff --git a/data/2311.17352.png b/data/2311.17352.png new file mode 100644 index 0000000000000000000000000000000000000000..9197da53e58386e6cbf31279344dc5d408b3a82a --- /dev/null +++ b/data/2311.17352.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a4c4976601aea7e4a54a173b5c2e3f1bc95a03fee6d2830a1764af60774d0849 +size 834179 diff --git a/data/2311.17389.png b/data/2311.17389.png new file mode 100644 index 0000000000000000000000000000000000000000..fd3692e0b421ec2e0d78d06ff49dc3001e446b75 --- /dev/null +++ b/data/2311.17389.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:16ea4bc05af775a7a32d5b0b1063f0a18ce6051f0769c1aba07551dfd5387763 +size 808414 diff --git a/data/2311.17396.png b/data/2311.17396.png new file mode 100644 index 0000000000000000000000000000000000000000..a99eae5609dfd5d71f9a83ae158b1df54a9631bc --- /dev/null +++ b/data/2311.17396.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3d2c88c3935686242bd95d2f27e9bea03c83c634cdd45108e12deda7dd8591e +size 779886 diff --git a/data/2311.17435.png b/data/2311.17435.png new file mode 100644 index 0000000000000000000000000000000000000000..4f2ee4e11a2220a81f1abe16206fa21ea0292a35 --- /dev/null +++ b/data/2311.17435.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a0d8510cb99aaa9b09ddd0eb9bdd759ef5006927f0523e702fdcce007c11375d +size 1150192 diff --git a/data/2311.17456.png b/data/2311.17456.png new file mode 100644 index 0000000000000000000000000000000000000000..9d96d23d12a95d3a7be61da46c89c45aafc9aea6 --- /dev/null +++ b/data/2311.17456.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0535adbbd020b656ae450ac6e14e636968f8b1040e234e5d77e9eb0c0335358d +size 1106191 diff --git a/data/2311.17461v1.png b/data/2311.17461v1.png new file mode 100644 index 0000000000000000000000000000000000000000..325254006203464b13453be7c07aeb4d76ec60fa --- /dev/null +++ b/data/2311.17461v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eca9497bb1f2b3cdde8d0bc07d7eef16b1984fff9476cad77ef221b93f21e431 +size 1920383 diff --git a/data/2311.17516.png b/data/2311.17516.png new file mode 100644 index 0000000000000000000000000000000000000000..e0673002f3a478b62c939ebb6e71d777d466fcd7 --- /dev/null +++ b/data/2311.17516.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4816307f16a5db6e94cbbd99cd2ebbfe978b0f802fc9b868e58003eeaf8a2679 +size 1243354 diff --git a/data/2311.17518v2.png b/data/2311.17518v2.png new 
file mode 100644 index 0000000000000000000000000000000000000000..f3d939edcf423769b798f6e1c7c9e29d93c55112 --- /dev/null +++ b/data/2311.17518v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1ce170c4573ded42ff0a0be781c8d1acaf4dec85dce488d37c6e183f5017ba43 +size 802655 diff --git a/data/2311.17532.png b/data/2311.17532.png new file mode 100644 index 0000000000000000000000000000000000000000..595f2f869e924e1b00c62e74f76d286050d302a9 --- /dev/null +++ b/data/2311.17532.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2f9cc44252ac97d206587b512fecbb19b12721c0d86caeaae9895e3677d96a1e +size 761729 diff --git a/data/2311.17590v2.png b/data/2311.17590v2.png new file mode 100644 index 0000000000000000000000000000000000000000..0f8ab79729292c898ebc507dd838dc06cd36e941 --- /dev/null +++ b/data/2311.17590v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:15d003f9f919c9716023355881282f3c997ce87a71937ff81fcceb06692e0469 +size 1106846 diff --git a/data/2311.17597.png b/data/2311.17597.png new file mode 100644 index 0000000000000000000000000000000000000000..c141541bc6439efb85c0536de27608f409a834b4 --- /dev/null +++ b/data/2311.17597.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2b558e528e06eb0d2e24517286759df46f87de69b40e7ff7eb3b81bc8d58c176 +size 814512 diff --git a/data/2311.17663.png b/data/2311.17663.png new file mode 100644 index 0000000000000000000000000000000000000000..d4787dd776337dd69fedf34b7e5e2679a8a9424f --- /dev/null +++ b/data/2311.17663.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3a53ee96118240c6d3986c8d6d8eb5cd61f85450e7292196122cb52bce1f3c7c +size 1044891 diff --git a/data/2311.17737.png b/data/2311.17737.png new file mode 100644 index 0000000000000000000000000000000000000000..cdd28fe2742889fabbd5d049bbd5a0664b8aec39 --- /dev/null +++ b/data/2311.17737.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:64ac32cc2fedeb191794258fd4ffe8d8b8d2d37cd071130a1518cefcc6201f59 +size 1186819 diff --git a/data/2311.17754.png b/data/2311.17754.png new file mode 100644 index 0000000000000000000000000000000000000000..5bcd335856f09a1864008ad41a85d8f026fd479d --- /dev/null +++ b/data/2311.17754.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6594585d37beaf9573ef17397b86bedbe8a68e5c1a8b0735f28b8cc53accf1e8 +size 1066087 diff --git a/data/2311.17776v1.png b/data/2311.17776v1.png new file mode 100644 index 0000000000000000000000000000000000000000..2f7e3f1356a64579caa3109e0d65b0abe8dfce58 --- /dev/null +++ b/data/2311.17776v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:587669008685e5c67002eec4e7dc4bd1336ab64642a3a2d9a67079d75c36ff95 +size 820215 diff --git a/data/2311.17833.png b/data/2311.17833.png new file mode 100644 index 0000000000000000000000000000000000000000..f23062d4ed68cc1e9071be18b16f8581f9b35468 --- /dev/null +++ b/data/2311.17833.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:888e82b71ab8b38cc48a0676773b2d8a561812a65add0491b6331b2f8846403f +size 786844 diff --git a/data/2311.17857v1.png b/data/2311.17857v1.png new file mode 100644 index 0000000000000000000000000000000000000000..cfe9b8f1c30f284b3dbe8f4a5d2514ad9d44063e --- /dev/null +++ b/data/2311.17857v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:833d123fb1db42d18e4983996114652a0225883b25d7fda98873b4b1752ecb2e +size 1093349 diff --git a/data/2311.17901.png 
b/data/2311.17901.png new file mode 100644 index 0000000000000000000000000000000000000000..8e20715b2b3b4882d6fd1a130fbe63100e39e076 --- /dev/null +++ b/data/2311.17901.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:69b4fde45060ca732c4913cd9c0b55646d5ce91df1aee360e920dd959e09b584 +size 1356454 diff --git a/data/2311.17910v1.png b/data/2311.17910v1.png new file mode 100644 index 0000000000000000000000000000000000000000..c0a03c12d24789fac25dfe7186c81592cf4b6853 --- /dev/null +++ b/data/2311.17910v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f9994e9c2dc8bb0bbca26272368eb7a09d83726db534bd2cc680b79f59d8c9d6 +size 1114245 diff --git a/data/2311.17911.png b/data/2311.17911.png new file mode 100644 index 0000000000000000000000000000000000000000..53c5a8d0dd50f2b992bf571b92f5e440c3e67d63 --- /dev/null +++ b/data/2311.17911.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a712d98237446c38f180d047942adb88603573d2a892914ca5d18be351a7ceb +size 914876 diff --git a/data/2311.17917.png b/data/2311.17917.png new file mode 100644 index 0000000000000000000000000000000000000000..ce48fe5231dc4e9ae16464691b8f1d0d0857781f --- /dev/null +++ b/data/2311.17917.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9abf81fd4df9aeb32f365667c45bb35379f03c3ffef82144c53d027e76772ac2 +size 1016856 diff --git a/data/2311.17918.png b/data/2311.17918.png new file mode 100644 index 0000000000000000000000000000000000000000..5858fb8eff255836e53b03ea0c6cbce9313c48b5 --- /dev/null +++ b/data/2311.17918.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2e6fe71dfc9fc5371affd5cb87f15c23c0a55c2dc7e528abf91002ee37aa47ab +size 909785 diff --git a/data/2311.17919.png b/data/2311.17919.png new file mode 100644 index 0000000000000000000000000000000000000000..4f89e56b99f4337897af88e0838bd4007ee6fc73 --- /dev/null +++ b/data/2311.17919.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:370d170f7f71e45b633d9612dbe1f5bbc4eaf69b4649501844044ad644076532 +size 1501971 diff --git a/data/2311.17922.png b/data/2311.17922.png new file mode 100644 index 0000000000000000000000000000000000000000..9810797aa990aa630c6f749e69d891e80b4b3813 --- /dev/null +++ b/data/2311.17922.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0c648f64ad4c73480abe3cd974ec8c3568c796899c2632152f06f5d37246fc46 +size 785796 diff --git a/data/2311.17938.png b/data/2311.17938.png new file mode 100644 index 0000000000000000000000000000000000000000..f03bf473366081dc91a283f8437d02c9f72cd612 --- /dev/null +++ b/data/2311.17938.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8e5fc621b34960197a286f65fd1261e2fb0c8f3891c95c061636006125ecd0e9 +size 874662 diff --git a/data/2311.17948.png b/data/2311.17948.png new file mode 100644 index 0000000000000000000000000000000000000000..769115857187e4a2d0e27bc95391c8b20b6000e5 --- /dev/null +++ b/data/2311.17948.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:45147d401c2475d2f38c4b318ea1e2b0761fc3b62ee9ba4199aa519b70ea7c3e +size 1041300 diff --git a/data/2311.17950.png b/data/2311.17950.png new file mode 100644 index 0000000000000000000000000000000000000000..a7dd0cde410c74f2f96780f05fcf0e6bc0c7bef4 --- /dev/null +++ b/data/2311.17950.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8dfc3d0d14a82c4538b7dbac91c09077537a26f177ca9ed6e2e5e8747824c379 +size 1439994 diff --git 
a/data/2311.17951.png b/data/2311.17951.png new file mode 100644 index 0000000000000000000000000000000000000000..37b43d5d311267c2078fe41bcb3f5cedd1739273 --- /dev/null +++ b/data/2311.17951.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:945f9d88eb75bb5ccd408c3266414c213a373ca7fd3b6ba9cede3cd8676403e7 +size 837874 diff --git a/data/2311.17977.png b/data/2311.17977.png new file mode 100644 index 0000000000000000000000000000000000000000..976fadd0616975492ffd38942ca689d205c38bf9 --- /dev/null +++ b/data/2311.17977.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5ef24b65fb73fa43d797104733e017341c7dacb062ad58b1e1559c84d6a3b7a4 +size 1041960 diff --git a/data/2311.17982.png b/data/2311.17982.png new file mode 100644 index 0000000000000000000000000000000000000000..96da8627b383561673dddf77b7da91380d3d10ce --- /dev/null +++ b/data/2311.17982.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:45820908b73261353ca31e35b029c23272b93150d913ed9743c41d4e476c0006 +size 775747 diff --git a/data/2311.17984.png b/data/2311.17984.png new file mode 100644 index 0000000000000000000000000000000000000000..6c824fd9ef5f0d9697c11b908458ef28027adb25 --- /dev/null +++ b/data/2311.17984.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dfa50302f37eb29884ac412757a27896c04e8609ba3e02dd03f76a1490c0edd5 +size 1225337 diff --git a/data/2311.18113.png b/data/2311.18113.png new file mode 100644 index 0000000000000000000000000000000000000000..c47f74e483b5a9c32c65f1f164dc7f0c4d91fe4c --- /dev/null +++ b/data/2311.18113.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bfab0b33e141006edb8645d9ba5cda2d21167275bcca1ebe80bb1cd38cb791d0 +size 875992 diff --git a/data/2311.18129.png b/data/2311.18129.png new file mode 100644 index 0000000000000000000000000000000000000000..59a7b9b6447cab8b9fb1dae021b59345b1aeafff --- /dev/null +++ b/data/2311.18129.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ff6c4b0328ddbca58af7830f1645a8e247f0733d04e7e149a63ce96984aa60d7 +size 784849 diff --git a/data/2311.18168.png b/data/2311.18168.png new file mode 100644 index 0000000000000000000000000000000000000000..bfc679a051f5f87d7b317412c149838f64977787 --- /dev/null +++ b/data/2311.18168.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d522fbcc1a7b29d6449de7373ac947f373afd30ad184931bf463308d5eaa5bad +size 764164 diff --git a/data/2311.18231.png b/data/2311.18231.png new file mode 100644 index 0000000000000000000000000000000000000000..8cde51c8ddbda643011db346e693e04ca9a0eefa --- /dev/null +++ b/data/2311.18231.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f7db5f9836484be2d036be9cb866acdf89131546396edd1662f9284d929c6b59 +size 824089 diff --git a/data/2311.18259.png b/data/2311.18259.png new file mode 100644 index 0000000000000000000000000000000000000000..bf330eefb17a72d0957b1e0c89899d55b43add0b --- /dev/null +++ b/data/2311.18259.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:38a2a2a1bc74790da15578208ce5332334343ae608817ffa7f5b823812c81ff4 +size 750718 diff --git a/data/2311.18287.png b/data/2311.18287.png new file mode 100644 index 0000000000000000000000000000000000000000..1dda6e7f15659e770088ca425736ce2f9d3b55c8 --- /dev/null +++ b/data/2311.18287.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:33b10557559a30486eec9688bc545ed7f4474065cddd1320fb9dfddc0220e2f1 +size 1177544 diff --git 
a/data/2311.18303.png b/data/2311.18303.png new file mode 100644 index 0000000000000000000000000000000000000000..7a24da6226b1cbd79b57f4da5863066ee55aff22 --- /dev/null +++ b/data/2311.18303.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5819e5ccba43aed0e91c5d3454ed130d2f91e1a86f212e0452460cfd01bf4a03 +size 914568 diff --git a/data/2311.18331v1.png b/data/2311.18331v1.png new file mode 100644 index 0000000000000000000000000000000000000000..af5698fd186fe7b2456770c5edc0e2af4ed463df --- /dev/null +++ b/data/2311.18331v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fb52f46b06e4a8b4a0aff11d223aac19f1ba769d8193b306d7d57e699c605792 +size 771734 diff --git a/data/2311.18363.png b/data/2311.18363.png new file mode 100644 index 0000000000000000000000000000000000000000..85338aeacbb93c2776e6859adf9724c9b7fadc28 --- /dev/null +++ b/data/2311.18363.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:164dbdae97e3262456d5db419424b87dde25a30661fc77e3bdc83b680e409ba0 +size 733247 diff --git a/data/2311.18387v1.png b/data/2311.18387v1.png new file mode 100644 index 0000000000000000000000000000000000000000..8583c449222c76ebb2642a65e8b5c5b0f6e5afe5 --- /dev/null +++ b/data/2311.18387v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:53243a258dc40623ae2064e5c7cde40c4dbf5e23fc6f2299f3817e027f1fd1b5 +size 813673 diff --git a/data/2311.18405.png b/data/2311.18405.png new file mode 100644 index 0000000000000000000000000000000000000000..587effda39bc7f84f99da1c7f4aa1af0029708b2 --- /dev/null +++ b/data/2311.18405.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:774699c48c2f553f6976a8d52c5325d7b2319d5672b9ed6d54f434df860276ec +size 1072437 diff --git a/data/2311.18445v1.png b/data/2311.18445v1.png new file mode 100644 index 0000000000000000000000000000000000000000..dc4ac44be7aa52b55ed2fd78b66ac2b5ab4466e1 --- /dev/null +++ b/data/2311.18445v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b5c14b278627cb20ead46ceaddcef72d7411236b60b15ae4d04adf5dc11cf754 +size 794788 diff --git a/data/2311.18448v1.png b/data/2311.18448v1.png new file mode 100644 index 0000000000000000000000000000000000000000..c96e29bf306ca111304bb658ab597252d11ea511 --- /dev/null +++ b/data/2311.18448v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b714853a79f79b3ae4bb3450ea15e3237b91824df5f9829f3a9a8a1f5a99d96a +size 1460166 diff --git a/data/2311.18482.png b/data/2311.18482.png new file mode 100644 index 0000000000000000000000000000000000000000..6f6a6bb1c8acfc467fc2fd42cd933f6102b449f4 --- /dev/null +++ b/data/2311.18482.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:867e55fd20ce4c9f33886d5ba0df22e669c1b8f88fed34c9be7fab9bcbd2fa36 +size 1657150 diff --git a/data/2311.18605.png b/data/2311.18605.png new file mode 100644 index 0000000000000000000000000000000000000000..cb79e9c52a5308238245f9eb1ad88433f1e033dc --- /dev/null +++ b/data/2311.18605.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6fcd54ddd59a616c2309b4f94911d0b29d84dbc646c02e385f3784a5374aa417 +size 803211 diff --git a/data/2311.18608.png b/data/2311.18608.png new file mode 100644 index 0000000000000000000000000000000000000000..04c8634bc715c90a23a25e4d540b647125350917 --- /dev/null +++ b/data/2311.18608.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:79c9a4ccd027206d866f1a7c548d474a0972becf215f7f50d7e4a9b00afc9695 
+size 1951649 diff --git a/data/2311.18618.png b/data/2311.18618.png new file mode 100644 index 0000000000000000000000000000000000000000..a3d714c09fbf0ad7d702e1fdf58895f710c1b450 --- /dev/null +++ b/data/2311.18618.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d059f6ff9aba8f9c0c8d0b15c4c018fbe625a3369ab58389d9201d23b7053409 +size 494090 diff --git a/data/2311.18635.png b/data/2311.18635.png new file mode 100644 index 0000000000000000000000000000000000000000..715cf1fb8e461e1427ed1fc13a26ede73965213c --- /dev/null +++ b/data/2311.18635.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:80e975ab738b40b36cd036f8fd3da1e0f009337e6228f6770c51aad10711c7dc +size 898790 diff --git a/data/2311.18649.png b/data/2311.18649.png new file mode 100644 index 0000000000000000000000000000000000000000..1b6bd5d45438ce0438aa84fac03c26e6107bf64a --- /dev/null +++ b/data/2311.18649.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9a09d0e86d2950cc48b6943c129d335f134866716766b8a9126160553f757e73 +size 772437 diff --git a/data/2311.18651.png b/data/2311.18651.png new file mode 100644 index 0000000000000000000000000000000000000000..3ce74e17cc38b251b116d2db7d6acae1344bb289 --- /dev/null +++ b/data/2311.18651.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4b317a99ee5463c72932e2cf926a0388d41ad0a079a0808b0634fb49aed1964e +size 1018697 diff --git a/data/2311.18695v1.png b/data/2311.18695v1.png new file mode 100644 index 0000000000000000000000000000000000000000..5630346d1068396365daf379e28ee64db359c4bf --- /dev/null +++ b/data/2311.18695v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc551a415a899c6c8c0b8785a100258ecd840fad89a0fff1b32819d394d703e7 +size 802146 diff --git a/data/2311.18729.png b/data/2311.18729.png new file mode 100644 index 0000000000000000000000000000000000000000..1d0beadbd07c7ce24bdf9df8aa8b079900bdbee2 --- /dev/null +++ b/data/2311.18729.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ccc273bceb58f19147ded8e10b2a8f3cd122c828aba22824ffb9026355402fe2 +size 1754148 diff --git a/data/2311.18775.png b/data/2311.18775.png new file mode 100644 index 0000000000000000000000000000000000000000..96d0283b6eba336b163ee1049aade4f35cfbf439 --- /dev/null +++ b/data/2311.18775.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2cdff722297b97b86384c23a5185c6c85dfc04623523f9a3edfa8e91f7da9f3d +size 752823 diff --git a/data/2311.18803.png b/data/2311.18803.png new file mode 100644 index 0000000000000000000000000000000000000000..e5dc3ad0c9b463290df0b7ace42714583239cd67 --- /dev/null +++ b/data/2311.18803.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0207db36f19234eb9af78cd0b7f0a8811d1ad3dd1efff3c266b070615dfa4dee +size 787596 diff --git a/data/2311.18822.png b/data/2311.18822.png new file mode 100644 index 0000000000000000000000000000000000000000..28db1309eb91c1ff6a79cc92b24735e682042933 --- /dev/null +++ b/data/2311.18822.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f1dd5a1fe1b4a60856200837355eb5654a774b47d03f4353ef8ea4ec5fa377a +size 2188978 diff --git a/data/2311.18828.png b/data/2311.18828.png new file mode 100644 index 0000000000000000000000000000000000000000..2f97cc2fac320281ecf8e7492ac92225552910a0 --- /dev/null +++ b/data/2311.18828.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:a98a078b8aa9e1eedae030c66294eea0d18432336f99fef998dedfc5d9494cea +size 1832233 diff --git a/data/2311.18829.png b/data/2311.18829.png new file mode 100644 index 0000000000000000000000000000000000000000..a433b2aacf65fccdbe4c4d2e9d028bbe10a0729a --- /dev/null +++ b/data/2311.18829.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:06be8774292afe76ad721b94a708bf09b080e09c3f4623231b49157c6bf76269 +size 1849397 diff --git a/data/2311.18830.png b/data/2311.18830.png new file mode 100644 index 0000000000000000000000000000000000000000..b4d75f34228bafb6ef296a3fd1c4fc2ed5258ce9 --- /dev/null +++ b/data/2311.18830.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:868257f12e0aa680ae948d97dad4a072da98650e68fb04d7127b9da4dc90b3a2 +size 1653674 diff --git a/data/2311.18832.png b/data/2311.18832.png new file mode 100644 index 0000000000000000000000000000000000000000..ee8dacbb585533bc76848a57d82da32a98342d75 --- /dev/null +++ b/data/2311.18832.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:098753c25832e4c057d7a3c7122bb2cbdd1bd6f2381e255dd21c0d53927de01e +size 1450102 diff --git a/data/2311.18836.png b/data/2311.18836.png new file mode 100644 index 0000000000000000000000000000000000000000..f1cac2d8744466c453fea47670eafeb60209a030 --- /dev/null +++ b/data/2311.18836.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:31b034853860b83c7f34e7f02b0465ac0e27b9ca6be587e860c5ad230b39849c +size 741375 diff --git a/data/2311.18840.png b/data/2311.18840.png new file mode 100644 index 0000000000000000000000000000000000000000..ba81d6eb543cbdfe670cd396b32e15912716d492 --- /dev/null +++ b/data/2311.18840.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2154c31932dfd54a5fe825d39562a3fad729cd08f23a0a22bc1368bf59e942de +size 911811 diff --git a/data/2312.00057.png b/data/2312.00057.png new file mode 100644 index 0000000000000000000000000000000000000000..18f0a5a66b6e2fcb3fe226069e053afc7237e382 --- /dev/null +++ b/data/2312.00057.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ae1fdc2c29b93bfb15d28ef9473f3add4e31398c21ab861fa5b1f07bdfe0f37a +size 842009 diff --git a/data/2312.00063.png b/data/2312.00063.png new file mode 100644 index 0000000000000000000000000000000000000000..a5fde6baa6d53697d55d9fb1ec348a16fe72773d --- /dev/null +++ b/data/2312.00063.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6596fef2125ed4dd4ec55ff3add2fa7990ca4189b6ebdacf94321d6cf058314c +size 942147 diff --git a/data/2312.00065.png b/data/2312.00065.png new file mode 100644 index 0000000000000000000000000000000000000000..02a6b986af996745b35bdae75e5e85a7904814d3 --- /dev/null +++ b/data/2312.00065.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e67be15ce874a41ac49f587bd3aff58dacb5cf9f3ca834595039113929e3e35a +size 1059240 diff --git a/data/2312.00068.png b/data/2312.00068.png new file mode 100644 index 0000000000000000000000000000000000000000..b2a426a5258a0eb648e42b4794596da96a2cc655 --- /dev/null +++ b/data/2312.00068.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3972ef1b1982dbb2f5dd5ecfb0a6e0e5f9347a207355b659be8f76cecd4b83a8 +size 609926 diff --git a/data/2312.00075.png b/data/2312.00075.png new file mode 100644 index 0000000000000000000000000000000000000000..5e52981d923428896a8b483ecb1b7a2b6611232c --- /dev/null +++ b/data/2312.00075.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:21edee43dd38c876f05a41ada1f28bb42457b6c6531b4f3b608c8e507fadc245 +size 1279564 diff --git a/data/2312.00081.png b/data/2312.00081.png new file mode 100644 index 0000000000000000000000000000000000000000..ed675be18399ac9957b4960bfc54783f4a89345e --- /dev/null +++ b/data/2312.00081.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d267160510c12d8bc4ac950d06139c301eb259282061b8d6d35883d898c4f1b5 +size 846034 diff --git a/data/2312.00084.png b/data/2312.00084.png new file mode 100644 index 0000000000000000000000000000000000000000..aec0c119092ba29011a902dc47f12500f0ef37aa --- /dev/null +++ b/data/2312.00084.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:64953cdd61bafaa52ee7224edc2f139365af5fc6064a1c6c1ec0f5a006916e32 +size 890688 diff --git a/data/2312.00093.png b/data/2312.00093.png new file mode 100644 index 0000000000000000000000000000000000000000..bed46d1c4867c7057255130a8b8861026b8fadcf --- /dev/null +++ b/data/2312.00093.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a29f0e3ca23bfab0e174bca066668254ccb4534339aca67973414f5ace56e07 +size 879355 diff --git a/data/2312.00094.png b/data/2312.00094.png new file mode 100644 index 0000000000000000000000000000000000000000..7498a98461c522d5bb728070aa5036fa6ec11c0e --- /dev/null +++ b/data/2312.00094.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f721507e6f3da4cbb564d8c1c752ba85613a16b89b3a14fc6ca0d9b4d5cd13e +size 1153342 diff --git a/data/2312.00096.png b/data/2312.00096.png new file mode 100644 index 0000000000000000000000000000000000000000..c408e710652365be39eeab5419f92ac7bd03bb37 --- /dev/null +++ b/data/2312.00096.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2b2e2c6b9bcf6c470b57fc067a5504a4bbc5b3fbef0dfc8c53f9b1b59a312dc8 +size 823109 diff --git a/data/2312.00109.png b/data/2312.00109.png new file mode 100644 index 0000000000000000000000000000000000000000..c79846a5b2b52d4252d58cdb9aed5c8aca7ec312 --- /dev/null +++ b/data/2312.00109.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:89aff8f2af23085aeb095ef51411d0d7ac82a5d23835e83fd7a17f8a8410be04 +size 1175312 diff --git a/data/2312.00210.png b/data/2312.00210.png new file mode 100644 index 0000000000000000000000000000000000000000..90a3f68e378f150f321a3fb86d1cd0925388ba2c --- /dev/null +++ b/data/2312.00210.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e513237c1cac4d50e75507e46a751c0bf5a9c06a97582895cdd66ec1b5be3a48 +size 1182281 diff --git a/data/2312.00311.png b/data/2312.00311.png new file mode 100644 index 0000000000000000000000000000000000000000..ed0a4cc71edbe9f71790306940deb5b0046aa057 --- /dev/null +++ b/data/2312.00311.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a79bb7dffe40bb8aa4cde68a50bbfa5a004327ff9311599e09e77ef275fd779e +size 908507 diff --git a/data/2312.00362.png b/data/2312.00362.png new file mode 100644 index 0000000000000000000000000000000000000000..9dde63a1ea56879ef9888187de492aa7739430f2 --- /dev/null +++ b/data/2312.00362.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:22d426884255b2e4529c24a0ec5f23758ff5f1aff6bea7f63d2e847ecf491dab +size 708961 diff --git a/data/2312.00375.png b/data/2312.00375.png new file mode 100644 index 0000000000000000000000000000000000000000..767fd6e99a26fcdcfb31d032038ee8051e718ac8 --- /dev/null +++ b/data/2312.00375.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:5cf98e9d5d831c0eb0f9e85ea5b428505eca766661f0c5a6f60014c84b171aa0 +size 1181642 diff --git a/data/2312.00404v1.png b/data/2312.00404v1.png new file mode 100644 index 0000000000000000000000000000000000000000..9629d6c4d6cdc9f77f70f8e127c0a00f4b43554d --- /dev/null +++ b/data/2312.00404v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e00f3694b9d297f4181005e6e3b394e75d2d960789281c4612d5a0f55a5f37e0 +size 580496 diff --git a/data/2312.00598.png b/data/2312.00598.png new file mode 100644 index 0000000000000000000000000000000000000000..fd9065ccb055165b9c308c1153b5421f9ca87265 --- /dev/null +++ b/data/2312.00598.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1cc9ea84577663ce3d759d0291c097ed9865d493cd135921bfd655f8ff36efab +size 975953 diff --git a/data/2312.00600.png b/data/2312.00600.png new file mode 100644 index 0000000000000000000000000000000000000000..63d50974202ee73f6629ffe867405cfa938983c7 --- /dev/null +++ b/data/2312.00600.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:58f8546a934350bd2b6df385af28a6fee082b1168f60a8356b83d896bddefba4 +size 793253 diff --git a/data/2312.00633.png b/data/2312.00633.png new file mode 100644 index 0000000000000000000000000000000000000000..641a00687d8cf2a54dc8fc4d3085355107bf0658 --- /dev/null +++ b/data/2312.00633.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9c1844399ae988adebbf4b4211c885c171db273f1af536301c740b6cd4572ccf +size 835651 diff --git a/data/2312.00648.png b/data/2312.00648.png new file mode 100644 index 0000000000000000000000000000000000000000..37799173981e7fb625e3548e2101b0bc54111cbb --- /dev/null +++ b/data/2312.00648.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b0e834f2df776c0d6eeceedaef0c7511068e8890cda99a432a4bc0440f26061a +size 1156000 diff --git a/data/2312.00690v2.png b/data/2312.00690v2.png new file mode 100644 index 0000000000000000000000000000000000000000..a3dfe902d5496f4cb8b55eab2c771c769ca5669a --- /dev/null +++ b/data/2312.00690v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8426700736ec9aa2b4a34aa4db275b85a2b76dd4d0b61214d00aeadcd97ed499 +size 989844 diff --git a/data/2312.00703.png b/data/2312.00703.png new file mode 100644 index 0000000000000000000000000000000000000000..48407b3b365f69873ce4b73559b5308bcd51e218 --- /dev/null +++ b/data/2312.00703.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d85ade34888f0b8f27bb92cd842f66d90841ddcdfa9b2f388a065b3bdc202225 +size 761802 diff --git a/data/2312.00739.png b/data/2312.00739.png new file mode 100644 index 0000000000000000000000000000000000000000..c87089aca2ca31c06a53613cddb29d825165994d --- /dev/null +++ b/data/2312.00739.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8912fb7b508040aed7e490f1d12bdaa1abf6cca8cdafcef3036cc38b9e8ec4e7 +size 1271006 diff --git a/data/2312.00777.png b/data/2312.00777.png new file mode 100644 index 0000000000000000000000000000000000000000..84ff74c9acc66f5f42aa6834da7c75352d70504e --- /dev/null +++ b/data/2312.00777.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd2e1cb3c1b0477ff9f7fcb6473cc18c5be849a10ac9ec39896a07d3bf941249 +size 1621328 diff --git a/data/2312.00778.png b/data/2312.00778.png new file mode 100644 index 0000000000000000000000000000000000000000..fa02acf2471292c728c5b9bbaa2053ed12f65894 --- /dev/null +++ b/data/2312.00778.png @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:7deb2ee7c9b7d11a760f0aac65e81db642cfc5a0fca8107f899c5a998a7d78c9 +size 1078326 diff --git a/data/2312.00784.png b/data/2312.00784.png new file mode 100644 index 0000000000000000000000000000000000000000..4469f63c7d888cd6bb9eef035867b35ffb806850 --- /dev/null +++ b/data/2312.00784.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:895dc9698ecaf2b32075c4c4cb84aba90fc11c9de15c362bbceb9b8e14eb0b64 +size 846748 diff --git a/data/2312.00785.png b/data/2312.00785.png new file mode 100644 index 0000000000000000000000000000000000000000..db8b3a78314d031130b2d0a1b0c45fabdf4d8876 --- /dev/null +++ b/data/2312.00785.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a98d730626894492308768828f208f2385c91d1da8e583cffee381ac7c157596 +size 759978 diff --git a/data/2312.00786.png b/data/2312.00786.png new file mode 100644 index 0000000000000000000000000000000000000000..4af7b2f4016d75a529b6ccdb9d8194504e99d354 --- /dev/null +++ b/data/2312.00786.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c5ede1f3e3efa406d33aa21b6a36fb7f22c15bef105ff785447b593c87b28447 +size 1078477 diff --git a/data/2312.00825.png b/data/2312.00825.png new file mode 100644 index 0000000000000000000000000000000000000000..d211fd4cf4aef2a2591e0febb52c2b24110e6703 --- /dev/null +++ b/data/2312.00825.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb7ead96eacc3fe7b351952a5dedbe7c92304f68304bf6cb278f203ac9d038af +size 756210 diff --git a/data/2312.00834.png b/data/2312.00834.png new file mode 100644 index 0000000000000000000000000000000000000000..178446afe956dfd3f768b38a2e6fd802849ad505 --- /dev/null +++ b/data/2312.00834.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f86ed26071d4def5ed197ec1a418cd9d1b1fa30fd411243f8e8d77d81296c1ac +size 818215 diff --git a/data/2312.00845.png b/data/2312.00845.png new file mode 100644 index 0000000000000000000000000000000000000000..b80514e11d7cdd3022ce9731c7675eb37b36e7f7 --- /dev/null +++ b/data/2312.00845.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6dbd08d10a810865228af836c69057021084de9a3cf4febd5660561cecf8e7ed +size 1663952 diff --git a/data/2312.00849.png b/data/2312.00849.png new file mode 100644 index 0000000000000000000000000000000000000000..403898922e78d7c5651cfc9d9f22a6099f54cb26 --- /dev/null +++ b/data/2312.00849.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d3b0283cfdc507af3b5375cbcff80316a5ea9a9c3920ad1d6016e2211f9824ee +size 772901 diff --git a/data/2312.00852.png b/data/2312.00852.png new file mode 100644 index 0000000000000000000000000000000000000000..dccdb4aa8de75bf5e215bcdc6961e9dca6e87437 --- /dev/null +++ b/data/2312.00852.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b8e52c386b621f3640a07116176866b59f01a660d59840f4a20f0819815a481f +size 1508336 diff --git a/data/2312.00853.png b/data/2312.00853.png new file mode 100644 index 0000000000000000000000000000000000000000..24a1c54e5822f0a2076e9b4598561f458c4961a4 --- /dev/null +++ b/data/2312.00853.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9f83e9d47a5986f70edbb9b61bfc4d6050fedc498a4c3302fe90406826168cc3 +size 1747414 diff --git a/data/2312.00858.png b/data/2312.00858.png new file mode 100644 index 0000000000000000000000000000000000000000..c2b440e1e223564f7f2d4c13f98acbc0744da9ed --- /dev/null +++ b/data/2312.00858.png @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:73a560e23a608449b3a0f6aef59eb2b5d135b36eec4a45daf8cebb1effadebd9 +size 1847171 diff --git a/data/2312.00863.png b/data/2312.00863.png new file mode 100644 index 0000000000000000000000000000000000000000..8c9ebe7951de97f8adf0fde82b853627abdac1d6 --- /dev/null +++ b/data/2312.00863.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:234f37dfd4f462f7c217a93f067f3d292d1cbb3a612a2fa71872e5e770e88bec +size 801302 diff --git a/data/2312.00869.png b/data/2312.00869.png new file mode 100644 index 0000000000000000000000000000000000000000..7929c01c70c87f1ef2b0fbd8a64b4bed4692fab6 --- /dev/null +++ b/data/2312.00869.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:236cffd0e865711ea819657b377acfc1e1abbd22c2d61ef64bf804441a2b6a54 +size 809153 diff --git a/data/2312.00878.png b/data/2312.00878.png new file mode 100644 index 0000000000000000000000000000000000000000..68222efea52463c7247a565e786d1312b3991001 --- /dev/null +++ b/data/2312.00878.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6f978a6f6fe9651a2b8989f78acd9a862f86eac99d7e4d6c3051183308e821c +size 1517583 diff --git a/data/2312.00968.png b/data/2312.00968.png new file mode 100644 index 0000000000000000000000000000000000000000..49527b4481e7a0be6d53abec7861da548aefd934 --- /dev/null +++ b/data/2312.00968.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:01ec022b689cf40a7a73d2923404147956466f251a5023954e2e864806c5bc98 +size 776201 diff --git a/data/2312.01017.png b/data/2312.01017.png new file mode 100644 index 0000000000000000000000000000000000000000..2467284bd338badd2f6bbe8919b1eff23f2bffee --- /dev/null +++ b/data/2312.01017.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7702dd0e6ebe1d6d368840b5020d2a33a9b352a6f81e4548d40d95be493ec8ef +size 776115 diff --git a/data/2312.01068.png b/data/2312.01068.png new file mode 100644 index 0000000000000000000000000000000000000000..ecff90d4cf3623e0ca157680ded6febff79adb2a --- /dev/null +++ b/data/2312.01068.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a1610e121cc3f480610425435a5058cdad5cb93d99f6475098b4cac7a4fe7a3 +size 944379 diff --git a/data/2312.01099.png b/data/2312.01099.png new file mode 100644 index 0000000000000000000000000000000000000000..d1dbef93d9091c69733428a41c18945555874c95 --- /dev/null +++ b/data/2312.01099.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1c09a8c1805cbbd48cb6f6367a3be7f9e3492ea132cc25b51c32f359ae47add +size 954757 diff --git a/data/2312.01196.png b/data/2312.01196.png new file mode 100644 index 0000000000000000000000000000000000000000..309d715c833cb48e9f17fe356fb152e3c6027942 --- /dev/null +++ b/data/2312.01196.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c0b899b042393ed96ca9acc688521543939d9cafe8ab3fac4ca84699b940f18 +size 979661 diff --git a/data/2312.01215.png b/data/2312.01215.png new file mode 100644 index 0000000000000000000000000000000000000000..26122d8708801960e003455fe80a81e5b8baa8e0 --- /dev/null +++ b/data/2312.01215.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2c756733bbb385b418f384eecf47943ae9e434d596a9445b48f96879d75368ba +size 910241 diff --git a/data/2312.01220.png b/data/2312.01220.png new file mode 100644 index 0000000000000000000000000000000000000000..f21433f31ce94aa4f05c3e1d90d8d32de9b492cf --- /dev/null +++ b/data/2312.01220.png @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:3330f883e67fb7eaaa6b461409a408307f3e394e462ee36b8cbe314f8fa80652 +size 822825 diff --git a/data/2312.01280.png b/data/2312.01280.png new file mode 100644 index 0000000000000000000000000000000000000000..13a3fd18d44958586e26d1a08f798b3b3d5aec5d --- /dev/null +++ b/data/2312.01280.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6006fceeb427d608ca98170f32ef18974aa81e6ff1b2b256b420fc3c05784df5 +size 1076769 diff --git a/data/2312.01305.png b/data/2312.01305.png new file mode 100644 index 0000000000000000000000000000000000000000..69f37d188bc5ece7e1c93599693b7c9ad3674b7d --- /dev/null +++ b/data/2312.01305.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:813f9c4a2e6c36a52b6e4cbbeb1554c3263afae1349fc8036267b23eaf31b385 +size 859995 diff --git a/data/2312.01381.png b/data/2312.01381.png new file mode 100644 index 0000000000000000000000000000000000000000..4108f546ba0ebd43ea40b509bba3ceb64cb0467e --- /dev/null +++ b/data/2312.01381.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4933fcef344c4a9a8248a36301ace943572ad2195414376571944379b2e29d40 +size 741168 diff --git a/data/2312.01407.png b/data/2312.01407.png new file mode 100644 index 0000000000000000000000000000000000000000..416a3abfb7bf21506d6e688f08123275804a2863 --- /dev/null +++ b/data/2312.01407.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a89e08c155fd805517c9ac7d57eebf63a8848f806ed808a4c6830970c1431d4 +size 1131907 diff --git a/data/2312.01409.png b/data/2312.01409.png new file mode 100644 index 0000000000000000000000000000000000000000..905cb80f154419415745c0441b13faf54dd1147d --- /dev/null +++ b/data/2312.01409.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:293b53e99c0ee3f268347a883cf1ee40f588c451e9a36f1105ed22e78f5fe3e2 +size 1061636 diff --git a/data/2312.01531.png b/data/2312.01531.png new file mode 100644 index 0000000000000000000000000000000000000000..6b7e0721af7211e4c87344dea4f909a002c4b0f0 --- /dev/null +++ b/data/2312.01531.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b2402c2cd9358fd71ef31996ef93c1ff3fa46991b6ddd2f7599c8641d6e0f0a1 +size 844164 diff --git a/data/2312.01564.png b/data/2312.01564.png new file mode 100644 index 0000000000000000000000000000000000000000..a012268d3f0acd82dfb9e119d88f5911fd648e70 --- /dev/null +++ b/data/2312.01564.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ab7511e2b1d6a5f710a4088c3931e6e30b34580099d8368f600322ed1d30b24c +size 810194 diff --git a/data/2312.01571.png b/data/2312.01571.png new file mode 100644 index 0000000000000000000000000000000000000000..8f5fae2a1a1636c472d7fe2af531d7d1a2ff96aa --- /dev/null +++ b/data/2312.01571.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:12142b429e20c868717fd37ecf25cdd108e1808df972c462213821d99ea37fab +size 773527 diff --git a/data/2312.01616.png b/data/2312.01616.png new file mode 100644 index 0000000000000000000000000000000000000000..0ef799b354eb765664312b0e72b89b65adc00b45 --- /dev/null +++ b/data/2312.01616.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2124e38f19d1f5981d666ef983cac1fb8c3b1025bb6cceb57d403087409c8f64 +size 683358 diff --git a/data/2312.01623.png b/data/2312.01623.png new file mode 100644 index 0000000000000000000000000000000000000000..61e76c111b25373a34b70d12deeaba3c7ba73ba2 --- /dev/null +++ b/data/2312.01623.png @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:542f7e4a1f470ae732851b5b574d2bf2dd9ff70e540dfe9932bfede5b747f403 +size 854221 diff --git a/data/2312.01663.png b/data/2312.01663.png new file mode 100644 index 0000000000000000000000000000000000000000..963b717da940e0adf86e9e4c020681ce6d7e1b68 --- /dev/null +++ b/data/2312.01663.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4fdd0ca0b43489b8c541fa13e8b78ce233c63774422af3f06355754a63f6dded +size 1322397 diff --git a/data/2312.01689.png b/data/2312.01689.png new file mode 100644 index 0000000000000000000000000000000000000000..758481cc1bd5ebe9bc7b31e5e1258a59aa1b062d --- /dev/null +++ b/data/2312.01689.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5c92fa9ab4d382e139c6b30a94b06351b49837a6202eb31b449bfb1b7317ffff +size 704808 diff --git a/data/2312.01696v1.png b/data/2312.01696v1.png new file mode 100644 index 0000000000000000000000000000000000000000..f426d9e000c4d960b842a6fa400c7aabe7b515c7 --- /dev/null +++ b/data/2312.01696v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6effa0002849e3c374480b431ac3943e8d3734d0fefdc57614b4ef358532fffe +size 760720 diff --git a/data/2312.01711v2.png b/data/2312.01711v2.png new file mode 100644 index 0000000000000000000000000000000000000000..73eff04e46a7c93dab5469fe368a6a082c2bd6f6 --- /dev/null +++ b/data/2312.01711v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6d73ffd231b18c74282f005f3a0eeeb98efaf485fa84e41ec4788a64fe78610 +size 941598 diff --git a/data/2312.01725.png b/data/2312.01725.png new file mode 100644 index 0000000000000000000000000000000000000000..6f057a8026bc2050fc63a4d24b8af5099e39e2ba --- /dev/null +++ b/data/2312.01725.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc6bd40853513544a2a9bc4a34d77a272935eff7affd9270c9a06b6fdfb54efa +size 1585558 diff --git a/data/2312.01746v1.png b/data/2312.01746v1.png new file mode 100644 index 0000000000000000000000000000000000000000..d5d9123d32bb9536bd366578bc58de60f72ec139 --- /dev/null +++ b/data/2312.01746v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b57ff94b26e7f3bd1632e03440075ee239019f5854ae4fd5044fd89b45a2cd1c +size 588949 diff --git a/data/2312.01831v2.png b/data/2312.01831v2.png new file mode 100644 index 0000000000000000000000000000000000000000..1514316069dbf55785e146df30dfc8f03c29070b --- /dev/null +++ b/data/2312.01831v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:43fcf3254374b43af6b9ae6c88f0e47394ac8e5eab9b9784dd18ae8016d5c7bd +size 792184 diff --git a/data/2312.01897.png b/data/2312.01897.png new file mode 100644 index 0000000000000000000000000000000000000000..841086cbd135ff4b0235d8221d7f406ca4c081d4 --- /dev/null +++ b/data/2312.01897.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b65c3b1f658747ef02113bfe27efaa527a59592bd270c3811f8d17b23c756bb +size 753853 diff --git a/data/2312.01919.png b/data/2312.01919.png new file mode 100644 index 0000000000000000000000000000000000000000..1e29d2433879ee6f7041bc35fa7152cf5a3767bf --- /dev/null +++ b/data/2312.01919.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:48880683e34c0af80816b1e14297d6e91d8bacf527942b91581cfceed18812c3 +size 800721 diff --git a/data/2312.01964.png b/data/2312.01964.png new file mode 100644 index 0000000000000000000000000000000000000000..dc5b43072b043ed3ff736dc608e0852897d9a3dc --- /dev/null +++ b/data/2312.01964.png @@ 
-0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9ec13da83439468603ce4cac859faba2b78efc67259b9c9722a9b1113cd3da18 +size 773246 diff --git a/data/2312.01985.png b/data/2312.01985.png new file mode 100644 index 0000000000000000000000000000000000000000..3d3fb771d1fc4f6d80d338c2a6b6705b0fa56d3e --- /dev/null +++ b/data/2312.01985.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:70fa7e56d26f292c91bfb10771c1b6e8d033500ee379622ddbe1e6e122371de9 +size 899326 diff --git a/data/2312.01987.png b/data/2312.01987.png new file mode 100644 index 0000000000000000000000000000000000000000..262f77b284ca7a23cf33101bdc205a7ba4278a0a --- /dev/null +++ b/data/2312.01987.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aaa63d2632d2dfe11b9499f80a934997b29d49533b4278f4263d5c8aa4c7af05 +size 835292 diff --git a/data/2312.01998.png b/data/2312.01998.png new file mode 100644 index 0000000000000000000000000000000000000000..86ba2056f80bf76e8be69eca3d5686e57579c647 --- /dev/null +++ b/data/2312.01998.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:00fab8d78529f8d4352c3ad4d826b2de54411533ebdd32dbcf483ff8cd50cc8d +size 765773 diff --git a/data/2312.02010.png b/data/2312.02010.png new file mode 100644 index 0000000000000000000000000000000000000000..fd4c6f7d14a524a10a40a5a8b5d2c3375a9afa14 --- /dev/null +++ b/data/2312.02010.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b4188f4aeac059cfbd43aa20478733ab6f8bb698151867b520b7e7469afcbb4a +size 919582 diff --git a/data/2312.02051.png b/data/2312.02051.png new file mode 100644 index 0000000000000000000000000000000000000000..205bbee672b0cf9f9691c7590b993dc6c059febe --- /dev/null +++ b/data/2312.02051.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:51474a5a6e63d3f98b32bbababe4edafc84f27117e0d7d553059c0e3452c5ea5 +size 875256 diff --git a/data/2312.02069.png b/data/2312.02069.png new file mode 100644 index 0000000000000000000000000000000000000000..887702e46cea6aaf4643b516d1be609e0eeecf9f --- /dev/null +++ b/data/2312.02069.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bedd6d003c634dde680f251a34866b5dac81ce8a003af9d843a78aa8f551998d +size 982457 diff --git a/data/2312.02087.png b/data/2312.02087.png new file mode 100644 index 0000000000000000000000000000000000000000..6c56b6c259fa0e6ada96223bd5509c34fb8d2d8e --- /dev/null +++ b/data/2312.02087.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f42e928ff1401ea7b900fa264a5a5400332ab3afe37bf642279f3f82f26a231 +size 1437071 diff --git a/data/2312.02109v1.png b/data/2312.02109v1.png new file mode 100644 index 0000000000000000000000000000000000000000..870435ae025b7f736c90e9a830301be9c7b6f3d1 --- /dev/null +++ b/data/2312.02109v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1d783d7858c2c0a9ebd9b7697be4125cdb5467d4ca2fabaf725c48b1c7ba9ca3 +size 1501598 diff --git a/data/2312.02126.png b/data/2312.02126.png new file mode 100644 index 0000000000000000000000000000000000000000..1fcec80a7565fff8dd18a80b32e7fc9ca3435b6d --- /dev/null +++ b/data/2312.02126.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d973ca06cd5dd3ff160b137bbbcdfb72549a0d4f44e6aeac76cc8c6390af3f8f +size 1248208 diff --git a/data/2312.02133v1.png b/data/2312.02133v1.png new file mode 100644 index 0000000000000000000000000000000000000000..a9e0a2e73a04a97abb4915c73ace2fa61fb55cbe --- /dev/null +++ 
b/data/2312.02133v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1381034ee0c476a95f5e83d9523db87b00c7883aaab530e8a42b359521dc9830 +size 1319925 diff --git a/data/2312.02134.png b/data/2312.02134.png new file mode 100644 index 0000000000000000000000000000000000000000..89d558201c494b10c6e2f6d14f21adb5db973426 --- /dev/null +++ b/data/2312.02134.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3528ecfeaae45530b2a6adce3275d0544c8febe2983b7d704ec8d558e513ff3b +size 824045 diff --git a/data/2312.02136.png b/data/2312.02136.png new file mode 100644 index 0000000000000000000000000000000000000000..f95871f1fb67b91634a21b564e0cb0e7da8b97ad --- /dev/null +++ b/data/2312.02136.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9fd3b4b9c5f0fce669cffa2b674187df46008b6a671005b97825c31d1586b409 +size 870085 diff --git a/data/2312.02137.png b/data/2312.02137.png new file mode 100644 index 0000000000000000000000000000000000000000..85e0b5195035f8bb1b7c5072073f7051f9a16999 --- /dev/null +++ b/data/2312.02137.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a004a9818019821b423ed2029e78cb017b8acf561ff3979bef08b5c077c339ee +size 1014047 diff --git a/data/2312.02145.png b/data/2312.02145.png new file mode 100644 index 0000000000000000000000000000000000000000..5f8074915ca23ec1af8053f736a0668261dadc2b --- /dev/null +++ b/data/2312.02145.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b0f1f376402b2777a4e2e68a9c254fe9311a747e7adcdf4b172fbe26c7a49596 +size 1878367 diff --git a/data/2312.02149.png b/data/2312.02149.png new file mode 100644 index 0000000000000000000000000000000000000000..5c22878c0b050ad3024526a667dfbfb80df21b3c --- /dev/null +++ b/data/2312.02149.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b5779b7686a6279f085135c98bc8b13ac5c2645500f32bd3d02ff9f69889887f +size 1679362 diff --git a/data/2312.02150.png b/data/2312.02150.png new file mode 100644 index 0000000000000000000000000000000000000000..fda3f9ba15c7cde49fe458288db5e959524c7e23 --- /dev/null +++ b/data/2312.02150.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f253f252d813386e879158a06a085efb413174117a4a294594294e35afe06e88 +size 1092003 diff --git a/data/2312.02152.png b/data/2312.02152.png new file mode 100644 index 0000000000000000000000000000000000000000..5bdfd5da6529615cbeb7a6022d1c04b09d298b1b --- /dev/null +++ b/data/2312.02152.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:919c04b2b9d1ff5f69e90639247788516c2a09ef3f4bb30dc3cd13676cad2ecd +size 1215212 diff --git a/data/2312.02153v1.png b/data/2312.02153v1.png new file mode 100644 index 0000000000000000000000000000000000000000..288e8ad08054b88b0a4de909a60b442772cdcd60 --- /dev/null +++ b/data/2312.02153v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ff95889bb162290cba6ce4c59787191f00adf110efd3c3fe0256f721785df38e +size 953820 diff --git a/data/2312.02155.png b/data/2312.02155.png new file mode 100644 index 0000000000000000000000000000000000000000..43ba9857005625dc16686b62d3c33af4eecf0e9f --- /dev/null +++ b/data/2312.02155.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e75f8c72c9277a9a4f6b6d63eaafbefcc6b850689f76a26cbecb1fde9f7d0aea +size 1105496 diff --git a/data/2312.02158.png b/data/2312.02158.png new file mode 100644 index 0000000000000000000000000000000000000000..6f77f0cf78906c40f859ada0ee26fecb088cb60b --- 
/dev/null +++ b/data/2312.02158.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:77ec61e8bab2158ae1723b225758edc613255d5907b48edfb4ead49ce0c2ed31 +size 763369 diff --git a/data/2312.02190.png b/data/2312.02190.png new file mode 100644 index 0000000000000000000000000000000000000000..84632c8215ea740e71120f4fb0589b6877f01fce --- /dev/null +++ b/data/2312.02190.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0de4dea6f50c3d92e36bb8af627844f4d406f68d6fdefa5f28beb38ffb90deae +size 1210062 diff --git a/data/2312.02196.png b/data/2312.02196.png new file mode 100644 index 0000000000000000000000000000000000000000..391f74d5b8480efdb9871bd0c3e73463aa0177ad --- /dev/null +++ b/data/2312.02196.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:48f00cbfee8df6ed4157b70b6888d4cd050331636d7fa97da92f036928cb5143 +size 841402 diff --git a/data/2312.02209.png b/data/2312.02209.png new file mode 100644 index 0000000000000000000000000000000000000000..e70ca7dcb35e76a96fb66093b8f00391bc92b909 --- /dev/null +++ b/data/2312.02209.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:84eeedc6dd8320b2be31dc58ad89449f22905c6f8cf756a636dc8cb327fabebc +size 749765 diff --git a/data/2312.02214.png b/data/2312.02214.png new file mode 100644 index 0000000000000000000000000000000000000000..286426adc8f3af084651d14f6c5f4780b586bfac --- /dev/null +++ b/data/2312.02214.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8788739c07f36051903052f85a289fb72a574c325e8e4e4814e9b064bf4a58d8 +size 856216 diff --git a/data/2312.02221.png b/data/2312.02221.png new file mode 100644 index 0000000000000000000000000000000000000000..63471a1150bb3751fdc04fa15eab48c5afca3352 --- /dev/null +++ b/data/2312.02221.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:340d38c9aaaff5044f25943c415be4071ae41eb45ed4ce773e0327073313b323 +size 766949 diff --git a/data/2312.02228.png b/data/2312.02228.png new file mode 100644 index 0000000000000000000000000000000000000000..318e5ff24adb213d6115961102a871bd96df0760 --- /dev/null +++ b/data/2312.02228.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a2571848a55acefa65799914badfe0904bbd3a07a6fb0eb852488c232f176c49 +size 1420352 diff --git a/data/2312.02232.png b/data/2312.02232.png new file mode 100644 index 0000000000000000000000000000000000000000..8af95dfe3219863e49e09b828429fe320a214d45 --- /dev/null +++ b/data/2312.02232.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:427b0261bc86bea6a25bcc1a1074b611d11c5f7ee288bb1bdb3eadd1236885b3 +size 837518 diff --git a/data/2312.02238.png b/data/2312.02238.png new file mode 100644 index 0000000000000000000000000000000000000000..67032fff0bea3ecc2dc95c8e9b024768a3ebaf29 --- /dev/null +++ b/data/2312.02238.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2bc596869f332604a20210bc47c34942bebb7c290e23ec7e032776f0d66312c1 +size 1998573 diff --git a/data/2312.02244.png b/data/2312.02244.png new file mode 100644 index 0000000000000000000000000000000000000000..e41931a636049df591218d88a8fa40c732f8198b --- /dev/null +++ b/data/2312.02244.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:44229c7601219e6868eec8eb7c5602ec007e1b00cc284ea9c725cec1b14d5cce +size 801704 diff --git a/data/2312.02284.png b/data/2312.02284.png new file mode 100644 index 0000000000000000000000000000000000000000..3e673ee570ef8cbaaa01c3035d4362c212a534bc --- 
/dev/null +++ b/data/2312.02284.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fcab87eea3effdc41325844a88e4966a3cdfb0b81a15490687629e6183110293 +size 1520624 diff --git a/data/2312.02432.png b/data/2312.02432.png new file mode 100644 index 0000000000000000000000000000000000000000..7361a0321b01613aabd82b515c091c21c7236b8f --- /dev/null +++ b/data/2312.02432.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:058867820e92eba75f212ad11803d7f5f19d2317e48d75f092b98b66c0740e17 +size 1218882 diff --git a/data/2312.02434.png b/data/2312.02434.png new file mode 100644 index 0000000000000000000000000000000000000000..98537f0d5b80f869cdcdcc337d3b1658d3d7211e --- /dev/null +++ b/data/2312.02434.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:da956cded6bea77a57e51cc102fab1f26ac393b3109c4bd21aa7088ff4e9fc39 +size 761065 diff --git a/data/2312.02439.png b/data/2312.02439.png new file mode 100644 index 0000000000000000000000000000000000000000..a539ab02340fae95a8fc3da9c5d47d32c2cb3617 --- /dev/null +++ b/data/2312.02439.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bce144d763ef3725f291bececb0f9e4c697cfd442b6fa24cebd8e343a1f31ed3 +size 759384 diff --git a/data/2312.02470v1.png b/data/2312.02470v1.png new file mode 100644 index 0000000000000000000000000000000000000000..1710fb7f7114fd197b0f176a7e83dd7383f4b94f --- /dev/null +++ b/data/2312.02470v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d80d7ab5a74d53e43a04c720d09e95fe7e911e60ad3f4687c5137659d718bbfc +size 566027 diff --git a/data/2312.02480.png b/data/2312.02480.png new file mode 100644 index 0000000000000000000000000000000000000000..12afeeedac315e6c872c97b2248628371c057c61 --- /dev/null +++ b/data/2312.02480.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8aec4a50c52edff3c2d8d6d7855d6eabfb13d6935f7a0577095a3e44263e70f1 +size 799443 diff --git a/data/2312.02512v2.png b/data/2312.02512v2.png new file mode 100644 index 0000000000000000000000000000000000000000..ea33adfacad8b70bfcc450230b67643b5b8c0167 --- /dev/null +++ b/data/2312.02512v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e9f160fcc3d55935ceee183d5e13a2bd393d5bc1b6d199a8657948ac88bd5e22 +size 874310 diff --git a/data/2312.02520v2.png b/data/2312.02520v2.png new file mode 100644 index 0000000000000000000000000000000000000000..94157d015c8be57b18f1ae3fdeddccd0830fb4a8 --- /dev/null +++ b/data/2312.02520v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:79c48708a5ef36acba90219d4c339d8c1d70d472c198070b6dfc15a648f3c3cf +size 823382 diff --git a/data/2312.02528.png b/data/2312.02528.png new file mode 100644 index 0000000000000000000000000000000000000000..c02785845a11b30000901c247bf844c5353ff68c --- /dev/null +++ b/data/2312.02528.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dfd30c00ef2a0aa151336523212ae1e31754f4c1ac870e54a6eaa6ba6aa602a7 +size 775717 diff --git a/data/2312.02567.png b/data/2312.02567.png new file mode 100644 index 0000000000000000000000000000000000000000..582ffb3c50738b10f1cae1a685304d774d2ef483 --- /dev/null +++ b/data/2312.02567.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:805c1fea4ddb763d9fd0919033325ff1fa32fa992eec97fb5ca792ccd875ee41 +size 830636 diff --git a/data/2312.02696.png b/data/2312.02696.png new file mode 100644 index 
0000000000000000000000000000000000000000..781e2408f1f99195f3c00625c58bcd8cd3dcbecd --- /dev/null +++ b/data/2312.02696.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3fef4cee58b93986b39fb04e3637f1e4f22c6202a2f3b88b4aab7e771d4b3ba1 +size 719428 diff --git a/data/2312.02702.png b/data/2312.02702.png new file mode 100644 index 0000000000000000000000000000000000000000..1c23b571a0fe5f67501382670a5fb85ab0056481 --- /dev/null +++ b/data/2312.02702.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b08ed4b914f9915d94d94541b78bd754901e922e8eaa95dd565a7af80b2bbe0 +size 979060 diff --git a/data/2312.02719.png b/data/2312.02719.png new file mode 100644 index 0000000000000000000000000000000000000000..3a096fc7bcd90f75e8e409bfc06894234028bf57 --- /dev/null +++ b/data/2312.02719.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:05ab1120ff6065691c555f35cc1a7fe78f550ee3c7fd711c648f5bf1c343bb00 +size 960599 diff --git a/data/2312.02753.png b/data/2312.02753.png new file mode 100644 index 0000000000000000000000000000000000000000..c9ef43b7325e58cb0be42757b4f3bf202ce87dbd --- /dev/null +++ b/data/2312.02753.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ae76d4fba006e9180679ff00586bf942d71022b513da9d5f41e727cba802a6ea +size 691550 diff --git a/data/2312.02813.png b/data/2312.02813.png new file mode 100644 index 0000000000000000000000000000000000000000..79ada9374a044d04150c6a102c3e377419e06d85 --- /dev/null +++ b/data/2312.02813.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0274eb9968740622c97fab6be95f1511903a091b2725b8d3757863de29d9c44d +size 1143235 diff --git a/data/2312.02914.png b/data/2312.02914.png new file mode 100644 index 0000000000000000000000000000000000000000..781d9ce83e51752fd5fdd4984288335b50d60c27 --- /dev/null +++ b/data/2312.02914.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b0454255fd8bf6db2bc8b0688971980c9eebc8026997cd93c6a691424c1cd260 +size 737412 diff --git a/data/2312.02918v2.png b/data/2312.02918v2.png new file mode 100644 index 0000000000000000000000000000000000000000..0ea67dc531e299902433543478870f59c1218349 --- /dev/null +++ b/data/2312.02918v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4466a6e6439884c29e2e2d3a401f84f89e78cfb2cf7391329f564da4bd3c9067 +size 1897877 diff --git a/data/2312.02963.png b/data/2312.02963.png new file mode 100644 index 0000000000000000000000000000000000000000..84726f6c75fbc3738889aab56b745489f04995a9 --- /dev/null +++ b/data/2312.02963.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9fc5292a37b5e0b21e224f298c6d786dde3bb830ceb49c50123f23d417e8f028 +size 1926517 diff --git a/data/2312.02970.png b/data/2312.02970.png new file mode 100644 index 0000000000000000000000000000000000000000..eb5457a29133272cb6ad69b38d754001892c5e6d --- /dev/null +++ b/data/2312.02970.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:16291fc26c851b5f77a9caa0457b82cdffd6252c22ad81db4834913346be3cf3 +size 1750724 diff --git a/data/2312.02974.png b/data/2312.02974.png new file mode 100644 index 0000000000000000000000000000000000000000..f381200ff59484e0d2a2e986bfd4a1c600217a5c --- /dev/null +++ b/data/2312.02974.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d4b69ac53e3db3eb337058a7af725347e2ef089efa457330efa81d11183ea4f1 +size 1067036 diff --git a/data/2312.02976.png b/data/2312.02976.png new file mode 100644 
index 0000000000000000000000000000000000000000..0a1c07096c85449b61ecbf64c654883a178d4fd4 --- /dev/null +++ b/data/2312.02976.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:871e0933dfa081d1dd9a908c0770a6f1f21fc35bb585ce7db78acd1fe5fb99d0 +size 971029 diff --git a/data/2312.02980.png b/data/2312.02980.png new file mode 100644 index 0000000000000000000000000000000000000000..42a454ef8b42a207b5046ba516d12f086da219b4 --- /dev/null +++ b/data/2312.02980.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:de16ea9f3c21581dae6beee15c230c76222b0033e1b5982bcf112e30138d4103 +size 898480 diff --git a/data/2312.02981v1.png b/data/2312.02981v1.png new file mode 100644 index 0000000000000000000000000000000000000000..e98095420c346c3e476a060c61b421eb895ccdb9 --- /dev/null +++ b/data/2312.02981v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e4ba11ff835f4bba20eb752dce7fb329401ce6e52b0ae8a75244e6db9eb41b14 +size 1361081 diff --git a/data/2312.03025.png b/data/2312.03025.png new file mode 100644 index 0000000000000000000000000000000000000000..e58f70b91fd0c0c03d27c10a907997ff912fef84 --- /dev/null +++ b/data/2312.03025.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7de8c5e8fcbe7ddd8210cac5e1a58ee70e91f2071bd28a875d1d3c203b7f214c +size 786860 diff --git a/data/2312.03029.png b/data/2312.03029.png new file mode 100644 index 0000000000000000000000000000000000000000..23a1541eb3c45abd6a2dab423e7445b88468ffa2 --- /dev/null +++ b/data/2312.03029.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7facf7f34ba38686c2edd7023dd2ed3d3d055b79fdbe10673729773a9dc21c40 +size 1229662 diff --git a/data/2312.03031.png b/data/2312.03031.png new file mode 100644 index 0000000000000000000000000000000000000000..0c52e6991325afa3a1fcb446d670b8cbb210557d --- /dev/null +++ b/data/2312.03031.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2a4d47cdd3d812d31caa571a96e948252545d69ddbbcdd52dfce7e96ffb5a6d1 +size 757400 diff --git a/data/2312.03033.png b/data/2312.03033.png new file mode 100644 index 0000000000000000000000000000000000000000..abe90178d31f5644d0c324f1e59f360b8f31ba64 --- /dev/null +++ b/data/2312.03033.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:77bf8d72348ecb208c4ba51fa580827e7a8374ed826c4c34b1e350243a8451e7 +size 837962 diff --git a/data/2312.03045.png b/data/2312.03045.png new file mode 100644 index 0000000000000000000000000000000000000000..4df1bb5adf28bf9f6d0787ec50fc56b0fcfd52cd --- /dev/null +++ b/data/2312.03045.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:275d1e492e55200c3e99a603af24f97d56c988b2dd04e6d9752786fa5ac93513 +size 877458 diff --git a/data/2312.03050.png b/data/2312.03050.png new file mode 100644 index 0000000000000000000000000000000000000000..a532871d227f730fdf769aa113fa808209588b91 --- /dev/null +++ b/data/2312.03050.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cc56ca5cc22a22cce6465c9d38051c57dd294d1fd89f2cc1a1acde5905844419 +size 852543 diff --git a/data/2312.03052.png b/data/2312.03052.png new file mode 100644 index 0000000000000000000000000000000000000000..c673de90015a245c9e2ef7bc1507036fc86a4455 --- /dev/null +++ b/data/2312.03052.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1745ac926540fe257b1a9f8622dcb585b1ef5bb554aa468fe2b61a04594832ac +size 785859 diff --git a/data/2312.03102.png b/data/2312.03102.png new file mode 
100644 index 0000000000000000000000000000000000000000..820044f2b55b92ad484d4d3d3809a0aac2eccb1d --- /dev/null +++ b/data/2312.03102.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e6ea7b81bea9cd0b82aa6fb796af35bd743034ee8ce4efb5ce4d416c5cb86d74 +size 907502 diff --git a/data/2312.03160.png b/data/2312.03160.png new file mode 100644 index 0000000000000000000000000000000000000000..cbde744aec1e7f721c5bb7d26c570b27d8142944 --- /dev/null +++ b/data/2312.03160.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:87727c882d304b10b3055f605e5695aea65b71ff8b2dc75e74ccaca807a5a237 +size 1037939 diff --git a/data/2312.03203.png b/data/2312.03203.png new file mode 100644 index 0000000000000000000000000000000000000000..c6a783e59afd8981ff22cbb0fb91ee5fcec28450 --- /dev/null +++ b/data/2312.03203.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bbf0e700ad0a9f68ad1908a88b8e1f4a1e7a6c769699446474fa9c50f6582672 +size 1444226 diff --git a/data/2312.03209.png b/data/2312.03209.png new file mode 100644 index 0000000000000000000000000000000000000000..97c83a980e2620448e0840dbd5adc32ae4f02f47 --- /dev/null +++ b/data/2312.03209.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3b52f1403824980ef169b1bfa308953ac3c586b6e1a81b1172312bfe34550fb4 +size 1367387 diff --git a/data/2312.03391.png b/data/2312.03391.png new file mode 100644 index 0000000000000000000000000000000000000000..fa6b9105b78bb0f5b23e320a70a5f36d85dbf629 --- /dev/null +++ b/data/2312.03391.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:faeefdfbf3bf24faf50b9dca466e7ae2924a8efa203a586aa7847476be89a414 +size 793749 diff --git a/data/2312.03420.png b/data/2312.03420.png new file mode 100644 index 0000000000000000000000000000000000000000..6be7fc79525599745761a1fca0012ce22c10359e --- /dev/null +++ b/data/2312.03420.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8d014316baf0baded9627b54e5fc01e5a26f627fbb270c8e69c231312343e2fd +size 790668 diff --git a/data/2312.03431.png b/data/2312.03431.png new file mode 100644 index 0000000000000000000000000000000000000000..ebe7def27b601a9e395c529228b763052c60aab8 --- /dev/null +++ b/data/2312.03431.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca2952969744fcd479b3db6215e220d87a59d9e7ac05a68dcce12679b6d9efea +size 1488998 diff --git a/data/2312.03441v4.png b/data/2312.03441v4.png new file mode 100644 index 0000000000000000000000000000000000000000..10fe89cc7beda98b25df9fac58f3c2c1124f68c9 --- /dev/null +++ b/data/2312.03441v4.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:efdd685f407f48cdd03f83fa0231f11b60cdbadcbc60ee4b7a320fa461334764 +size 885760 diff --git a/data/2312.03442.png b/data/2312.03442.png new file mode 100644 index 0000000000000000000000000000000000000000..34055c8486640cca9b53cc4d1c6704c4001d3fa4 --- /dev/null +++ b/data/2312.03442.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a4f535f2754cf27b839521073da244e879a8caaf932a3b6c28f71b711195a53c +size 1139924 diff --git a/data/2312.03461.png b/data/2312.03461.png new file mode 100644 index 0000000000000000000000000000000000000000..018126d16c04ba9b01e66f52f2a16238b0f1e96c --- /dev/null +++ b/data/2312.03461.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83f75e8d21430f3d399f6f8779aebf8d6c3cb311777c39749c18f7ad82806d42 +size 1139288 diff --git a/data/2312.03502.png b/data/2312.03502.png new 
file mode 100644 index 0000000000000000000000000000000000000000..a6ca60cb9e20a04298c8954039948b9eb69393c5 --- /dev/null +++ b/data/2312.03502.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cf0b5d2069bfd4bd46cc9f10bdb9694ef926290e4923e4797359368201fc5fff +size 899157 diff --git a/data/2312.03526.png b/data/2312.03526.png new file mode 100644 index 0000000000000000000000000000000000000000..86ae9ce6078c4df944bebab7fb55e87f8123fb20 --- /dev/null +++ b/data/2312.03526.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:897b8ce7f8b62bb72983e2faa481748bee663621b02d1424b39e2a95f0f232ca +size 783827 diff --git a/data/2312.03585.png b/data/2312.03585.png new file mode 100644 index 0000000000000000000000000000000000000000..55b2f7389364c4736853e48fc6140a23383bdbb2 --- /dev/null +++ b/data/2312.03585.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f015f0c22beea09f71550e08de10deb572fc56030d2a0f15f084921f0fe4a65a +size 984395 diff --git a/data/2312.03596.png b/data/2312.03596.png new file mode 100644 index 0000000000000000000000000000000000000000..f698c83c5b900cd47ccc6e7a296ad394d56a4659 --- /dev/null +++ b/data/2312.03596.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:66a4b4d17e58377101e2e49ef37daf72020f7c0e077f4dc470b62e27dec3b03a +size 990808 diff --git a/data/2312.03611.png b/data/2312.03611.png new file mode 100644 index 0000000000000000000000000000000000000000..9be77825b606773297241459a9a863672c0fb7f0 --- /dev/null +++ b/data/2312.03611.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bfb5f646606b06fb3a5766f0f14ca4cda9e0caf656032b2a3ecd0651cb43942f +size 1183478 diff --git a/data/2312.03626.png b/data/2312.03626.png new file mode 100644 index 0000000000000000000000000000000000000000..cfe3ac44e257edb2c160d30cdf78c97386517859 --- /dev/null +++ b/data/2312.03626.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:92caafe9144ed158f25b398ae252a080a425eb57db9803660a140e9f66191db4 +size 1691853 diff --git a/data/2312.03628.png b/data/2312.03628.png new file mode 100644 index 0000000000000000000000000000000000000000..872d88ba8ced99dc26bde6a8d3617c046438da76 --- /dev/null +++ b/data/2312.03628.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c8d489fc14a46051d013087b5a993b6e7a4eda411a301891e60fcf6956fd365a +size 977077 diff --git a/data/2312.03678.png b/data/2312.03678.png new file mode 100644 index 0000000000000000000000000000000000000000..cd50ccf989d3c4cd61185ff09bbdfd997cfd8ccf --- /dev/null +++ b/data/2312.03678.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:600a58a85d0bb0ef9d8f5adc2d6bee0cd86949d35b26d95c49afe7e3d7e50c92 +size 857997 diff --git a/data/2312.03700.png b/data/2312.03700.png new file mode 100644 index 0000000000000000000000000000000000000000..b39f8fb22268b2e2a977a1069b97ae445db28fe5 --- /dev/null +++ b/data/2312.03700.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0ac9eeb849f5ad0f9c9a35d86f39d5bfd45a0ba5ad5bc9bda1602a8d1135e613 +size 723084 diff --git a/data/2312.03703.png b/data/2312.03703.png new file mode 100644 index 0000000000000000000000000000000000000000..e26279705fdac54f95d984e5c785edeed8dbdb44 --- /dev/null +++ b/data/2312.03703.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3bad3c759c2598616775733850fba888c0e0c42b81ff358e2a89baf0f45643a3 +size 840258 diff --git a/data/2312.03704.png b/data/2312.03704.png new 
file mode 100644 index 0000000000000000000000000000000000000000..57c6399d92e10c74dfe653ec4dd48269cf1dcec1 --- /dev/null +++ b/data/2312.03704.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6818279580fa9aaea697a8410dd541dc1f21af137a71d0183e011894b031ed27 +size 1105209 diff --git a/data/2312.03732.png b/data/2312.03732.png new file mode 100644 index 0000000000000000000000000000000000000000..767c08c6f38c4c18be68e22e521ccd9f2b890443 --- /dev/null +++ b/data/2312.03732.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eefe245e679c438b78cc9957f8d778642890d41360e6aee22371035819a83bd1 +size 573889 diff --git a/data/2312.03767.png b/data/2312.03767.png new file mode 100644 index 0000000000000000000000000000000000000000..5713685977293f579d7e1795b87aa4627304f75c --- /dev/null +++ b/data/2312.03767.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce34dd36d373d8558736937fa7d8a77817d561a4371f354a7756dc8e874c78db +size 734365 diff --git a/data/2312.03777.png b/data/2312.03777.png new file mode 100644 index 0000000000000000000000000000000000000000..6a31901471483dc07c5d4fcefe46e74c73e0f8cb --- /dev/null +++ b/data/2312.03777.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:207a47a4e7eb97ae1f44545f837000b7ac835962e8882bdc77542c98d32e90ac +size 902355 diff --git a/data/2312.03799.png b/data/2312.03799.png new file mode 100644 index 0000000000000000000000000000000000000000..9ee72293422bbbedf377ce1cb63d1caeeacbe665 --- /dev/null +++ b/data/2312.03799.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cc6ff376bac7752f878a1c63b41c2f02c55010fa59a784b2058130d783047e47 +size 1152158 diff --git a/data/2312.03806.png b/data/2312.03806.png new file mode 100644 index 0000000000000000000000000000000000000000..7bd3f2fef24ab544dbb5242ef3327d4f70436222 --- /dev/null +++ b/data/2312.03806.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f48f71aca9bec9915d8189db9cabe1d0a4ccad64b4a397290d3967e9309ba30 +size 1175104 diff --git a/data/2312.03816.png b/data/2312.03816.png new file mode 100644 index 0000000000000000000000000000000000000000..3249e7d4c21f03d7d9cffce6d8260492e183296d --- /dev/null +++ b/data/2312.03816.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3f86c6258d052157ae94c2d96c684c05e6c96135962c6f911eb2758356419e61 +size 1904886 diff --git a/data/2312.03818.png b/data/2312.03818.png new file mode 100644 index 0000000000000000000000000000000000000000..e8e9f65e154ee93bffee7ae5c1797b04b1145bda --- /dev/null +++ b/data/2312.03818.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad6a3cd73fc08683d08ac0323469943b63b89c07738d2da5404d6648a497fe3d +size 1078099 diff --git a/data/2312.03884.png b/data/2312.03884.png new file mode 100644 index 0000000000000000000000000000000000000000..82e9eae09e84178c0260107a4052dc0d55e284cb --- /dev/null +++ b/data/2312.03884.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:54d90edf0445d123076529ca33b51df1c7fe978fbf3895da793508200b46309f +size 2692654 diff --git a/data/2312.04016.png b/data/2312.04016.png new file mode 100644 index 0000000000000000000000000000000000000000..aaa6388695d747a4dc739664f7ecbb991bd6b653 --- /dev/null +++ b/data/2312.04016.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca67df31a90f787208c83906d0b4ad62a0cf3f61f08b84131929cbf52d8536ef +size 889059 diff --git a/data/2312.04043.png b/data/2312.04043.png 
new file mode 100644 index 0000000000000000000000000000000000000000..b575ba6a8307e6c709bbe4f42cc5fbab4153dbef --- /dev/null +++ b/data/2312.04043.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:14a622c3a2623b3e8e28ae568eed94c63006ffde30a05f05c70194930384dbf7 +size 857839 diff --git a/data/2312.04076.png b/data/2312.04076.png new file mode 100644 index 0000000000000000000000000000000000000000..83f89999893f185d77f0718224e43d101aabd722 --- /dev/null +++ b/data/2312.04076.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3fac497063a959ada26a5a85742a4cbaf7f8a0081700f2c090580ae264b78c75 +size 744088 diff --git a/data/2312.04089.png b/data/2312.04089.png new file mode 100644 index 0000000000000000000000000000000000000000..b9cc0d3fa5ef2244ad473387352605fcf7a86fae --- /dev/null +++ b/data/2312.04089.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b9e38142aecd7f12d5f6324154abc2bf30c1c7f8590e4ee598bbdb707b8c3af5 +size 879988 diff --git a/data/2312.04117.png b/data/2312.04117.png new file mode 100644 index 0000000000000000000000000000000000000000..7174a399032640138f2418b819d43e209a5835a0 --- /dev/null +++ b/data/2312.04117.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d211468fd0f95f106014e51239b376b754e32b0a4b12169a3a6f8e70d351af3 +size 816719 diff --git a/data/2312.04248.png b/data/2312.04248.png new file mode 100644 index 0000000000000000000000000000000000000000..13ac5ae0353fa533ccb415675b37a93c920f94ac --- /dev/null +++ b/data/2312.04248.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:efecbbd6c49eb19afa77ede96e830dd670a3f77ea0c3a361099e0e7ed90bec7b +size 920755 diff --git a/data/2312.04265.png b/data/2312.04265.png new file mode 100644 index 0000000000000000000000000000000000000000..65c6cff3c880f1adc14469a8040d1d23fac9962e --- /dev/null +++ b/data/2312.04265.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c25625888ac519c688442ade5e42c630afbf1d080ad765961f4ed2ca4b0194f4 +size 885106 diff --git a/data/2312.04302.png b/data/2312.04302.png new file mode 100644 index 0000000000000000000000000000000000000000..ca3c3e48470ec8a153c38dd02838ee6d1a999c71 --- /dev/null +++ b/data/2312.04302.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:759354b57c6beb7f5326349da183bf637d6dd8750003c617e8ee7477612bd4a0 +size 853956 diff --git a/data/2312.04328.png b/data/2312.04328.png new file mode 100644 index 0000000000000000000000000000000000000000..609fc69c30968182849e365f9a2147a9a5dfc26e --- /dev/null +++ b/data/2312.04328.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1554dc82f7de17e9b5cbc6f60d7e0ac78431b8d8259f1efd0422ae227cdc0810 +size 964203 diff --git a/data/2312.04334.png b/data/2312.04334.png new file mode 100644 index 0000000000000000000000000000000000000000..a7674881a55bd2b66cf9d40a50c6e3f9b869a1de --- /dev/null +++ b/data/2312.04334.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:852d946c19e3d398943a26e8180d08321558f606c022ef48689028a7bdb13039 +size 854280 diff --git a/data/2312.04364v1.png b/data/2312.04364v1.png new file mode 100644 index 0000000000000000000000000000000000000000..1e91bdbad2f36f28faa13a3933c610cd2d074e53 --- /dev/null +++ b/data/2312.04364v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a9f145e407a5046853a41e6c9d30c14efc09732842b3bf2538b5598fb7934614 +size 1603151 diff --git a/data/2312.04372v2.png 
b/data/2312.04372v2.png new file mode 100644 index 0000000000000000000000000000000000000000..1fbd518ca93ebfdc2f16efb715274f689941c667 --- /dev/null +++ b/data/2312.04372v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a8cd95204322ec1871bc4e996339962d814b24d14df2fe3eb696693a04cc0707 +size 728386 diff --git a/data/2312.04410.png b/data/2312.04410.png new file mode 100644 index 0000000000000000000000000000000000000000..c9912d85b1c1fed75bdad667cdab4742e9c44e56 --- /dev/null +++ b/data/2312.04410.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cb18350861f501abfe691d6c9b6a68d71da919cd448fd593a1f79274685cdd2c +size 1504420 diff --git a/data/2312.04433.png b/data/2312.04433.png new file mode 100644 index 0000000000000000000000000000000000000000..4ed6b05c4c50585160449c2b4b9547cec5392427 --- /dev/null +++ b/data/2312.04433.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2df07cf7a694c6762c945eb11ed2dee4230b34d663a88b00f7a4aaf37b593e2b +size 1443628 diff --git a/data/2312.04461.png b/data/2312.04461.png new file mode 100644 index 0000000000000000000000000000000000000000..271f7464439f2952115c0e09cea925010d85b830 --- /dev/null +++ b/data/2312.04461.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:311316cd42d33e26c725624bd43e31b1fbc9bc16dd53470380be31e2c791f5d9 +size 1572929 diff --git a/data/2312.04466.png b/data/2312.04466.png new file mode 100644 index 0000000000000000000000000000000000000000..56dcff267623ab50e621054b1fdbc1d504013fc0 --- /dev/null +++ b/data/2312.04466.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b362e4cf8030db77ee7513e3cef3d11b5e5839ef8cc9daf58dcd3e5bbd1ed982 +size 866688 diff --git a/data/2312.04483.png b/data/2312.04483.png new file mode 100644 index 0000000000000000000000000000000000000000..7177268b61e666ac217ce54b74b422e6414e50fb --- /dev/null +++ b/data/2312.04483.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:63c2e788f0d93f3c1565914855f75fe1faa00c94fb51bde7794dbeff885ea5ad +size 1305717 diff --git a/data/2312.04519.png b/data/2312.04519.png new file mode 100644 index 0000000000000000000000000000000000000000..c70dcfd1ba0d940bd83da768c2349d40c783fb87 --- /dev/null +++ b/data/2312.04519.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e44d88c8919a01f15ed14e40e2338eaec45dea593f7798be2711fda6383310b9 +size 840739 diff --git a/data/2312.04521.png b/data/2312.04521.png new file mode 100644 index 0000000000000000000000000000000000000000..33991c455c4374a9465ee595b477d424abbce1fa --- /dev/null +++ b/data/2312.04521.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4be6576a9dd7a59446a3a9fe5c72d7a6ce3d009d3ec692d5e86e0f4d0db45d23 +size 734387 diff --git a/data/2312.04524.png b/data/2312.04524.png new file mode 100644 index 0000000000000000000000000000000000000000..599e4302155638a517ce9c2753178575fc2f9ee9 --- /dev/null +++ b/data/2312.04524.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c7185ad7eea66aed94d046165bd6016d53a8228c8e10b1519cba58d8bed9fe5d +size 1542338 diff --git a/data/2312.04529.png b/data/2312.04529.png new file mode 100644 index 0000000000000000000000000000000000000000..9ae99242ecd93e0c3244b8391285f74a9e43f67e --- /dev/null +++ b/data/2312.04529.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:37c86d5b730c8a09aa382c29e6ee1857f8b5654119e9b551bb16aa75e580f7b2 +size 845984 diff --git 
a/data/2312.04534.png b/data/2312.04534.png new file mode 100644 index 0000000000000000000000000000000000000000..983d5d84081a9f872b08d378a369b77699d368ec --- /dev/null +++ b/data/2312.04534.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:594732207f592909071fd76056f340ef9dac9aedfc5a402d5b01d8c5b4ab52ae +size 1410676 diff --git a/data/2312.04547.png b/data/2312.04547.png new file mode 100644 index 0000000000000000000000000000000000000000..603ba6dab97ed505d00954c6183733f5b0035a8f --- /dev/null +++ b/data/2312.04547.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:52ee35e212fc357b0099b55dea79ddfdc5600badf4ce6a897d91e828ec838c58 +size 1643069 diff --git a/data/2312.04548.png b/data/2312.04548.png new file mode 100644 index 0000000000000000000000000000000000000000..d81f547a738821816a3ac68793e65a6bb7962999 --- /dev/null +++ b/data/2312.04548.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e5f8127ce1861e5d13d642856e0d72948532f1cb07978cc3e4e4ccbe6c1333a3 +size 1180204 diff --git a/data/2312.04551.png b/data/2312.04551.png new file mode 100644 index 0000000000000000000000000000000000000000..5508faa3a61e8bcabaa281806ecb3227e879d4bf --- /dev/null +++ b/data/2312.04551.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e0068639bfd79bfb266b02c0aa3edf2c06febc961a30c88d10938f33a431ec6 +size 1039755 diff --git a/data/2312.04552.png b/data/2312.04552.png new file mode 100644 index 0000000000000000000000000000000000000000..d4bb034eb69fd45c2a8d3505340d02b710fd5845 --- /dev/null +++ b/data/2312.04552.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cb556f99a3eb0e05f1f97bf3ccd20afaca2d09d222161c1c0061e5551a3d0119 +size 1026837 diff --git a/data/2312.04553.png b/data/2312.04553.png new file mode 100644 index 0000000000000000000000000000000000000000..7b76dc63f025eff6d6121a7a8f1d27ce2c3308b3 --- /dev/null +++ b/data/2312.04553.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a709d1b63de810a4542df43c1af606d783359b4f9b13c56c6ada520719e6c07d +size 1004990 diff --git a/data/2312.04554v1.png b/data/2312.04554v1.png new file mode 100644 index 0000000000000000000000000000000000000000..5edf5ff74818aba208748dcfe2bef91430030c18 --- /dev/null +++ b/data/2312.04554v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d9d9c2738b0d181a436ac967146715121fea77866fe07b74d264751cb1ce444 +size 1150845 diff --git a/data/2312.04557.png b/data/2312.04557.png new file mode 100644 index 0000000000000000000000000000000000000000..6bbcb4c287e1d57f7a8f20c57fc3bb616e026874 --- /dev/null +++ b/data/2312.04557.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d47f4a5f2ebc2eed7d4c8f473ebeed93c60d4ddd231397f7b9a231e59513889 +size 1847905 diff --git a/data/2312.04560.png b/data/2312.04560.png new file mode 100644 index 0000000000000000000000000000000000000000..c3c5b91542dbd05d210080133eabda68b463d020 --- /dev/null +++ b/data/2312.04560.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:af8b8736407e1c0377ab3ca65935240f0b4e651f79404c8d9f4f236fe28fbe7d +size 1111499 diff --git a/data/2312.04563.png b/data/2312.04563.png new file mode 100644 index 0000000000000000000000000000000000000000..67cfabacb21e906ef3c1dcb1c32750dc95ace238 --- /dev/null +++ b/data/2312.04563.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2a1c79a8277195fc3a739b782651c083f6425432fa4f2d930f3a55bcac2842ce +size 1475472 
diff --git a/data/2312.04565v1.png b/data/2312.04565v1.png new file mode 100644 index 0000000000000000000000000000000000000000..80b5645b0eedfe39a4314e7563d23b11e72e1f09 --- /dev/null +++ b/data/2312.04565v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b60a4eecfe3797e9ce5e2e1334c10712ac8af843c3aaad9ceef7ced935f6c735 +size 955924 diff --git a/data/2312.04567.png b/data/2312.04567.png new file mode 100644 index 0000000000000000000000000000000000000000..2c3a2c3c1b6a03f934ffbb86cdc1130cd60c8898 --- /dev/null +++ b/data/2312.04567.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:68c0efd96383f6b9e8d1a697602b706af0feb5fe5653b0801cf668635fc55050 +size 733415 diff --git a/data/2312.04651.png b/data/2312.04651.png new file mode 100644 index 0000000000000000000000000000000000000000..6ba91821ccd05e392ea5c3cf3d5335b50a9202d3 --- /dev/null +++ b/data/2312.04651.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fc61679d0628557cd5b633558943b5db5d65235fcd87388bf4c6e9068b9b126f +size 1145938 diff --git a/data/2312.04655.png b/data/2312.04655.png new file mode 100644 index 0000000000000000000000000000000000000000..c1e48637746cb7bd7f4c4dd851a5672dcf4f8e4a --- /dev/null +++ b/data/2312.04655.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d65c3953130f836fe9246276c20ecdea440b8818f74b4546e4ed97574bde078c +size 759695 diff --git a/data/2312.04670v1.png b/data/2312.04670v1.png new file mode 100644 index 0000000000000000000000000000000000000000..2147ea52c150fae664d6a59906478514898ee90a --- /dev/null +++ b/data/2312.04670v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5a911bf4c1587147c6dccbb746b46b4e9e707b95113499330ca78525d7e4fc27 +size 734252 diff --git a/data/2312.04746.png b/data/2312.04746.png new file mode 100644 index 0000000000000000000000000000000000000000..87106ea2387293e2fd1053dca82ce61133e053d9 --- /dev/null +++ b/data/2312.04746.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:02c8a698e5b0ddcfbb2f0e2d83a58fba631e0f3a6edbdd4e1d4fd471013dceb9 +size 1037173 diff --git a/data/2312.04802.png b/data/2312.04802.png new file mode 100644 index 0000000000000000000000000000000000000000..86935d8a472691daa28003e5a586cc928304365d --- /dev/null +++ b/data/2312.04802.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:96f398bb383c330d2ea0d2d991a2ad0b7bd1736da46bbecb9d78bf30cbea1355 +size 577473 diff --git a/data/2312.04803.png b/data/2312.04803.png new file mode 100644 index 0000000000000000000000000000000000000000..47b915e4ba8da787253a56bcec8e190e3135bfd5 --- /dev/null +++ b/data/2312.04803.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4e09bb2503852fbb61eef862cb38ce0110050355987c9889eebe0bf9aacb3a07 +size 1075505 diff --git a/data/2312.04819.png b/data/2312.04819.png new file mode 100644 index 0000000000000000000000000000000000000000..580dce831c9e0ec84ee9081540f12c10ee09da73 --- /dev/null +++ b/data/2312.04819.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4ea0623fc6a7adac9763b6d3aef012246b5517e2241ee3faaebbb378b09da8be +size 622529 diff --git a/data/2312.04884.png b/data/2312.04884.png new file mode 100644 index 0000000000000000000000000000000000000000..3aee8a673cbc729d17447e4ddac06ec020c322a2 --- /dev/null +++ b/data/2312.04884.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7888a306b5be754316b15ce5f4ab7aa722ea420e0120f51dcdfee85e3d5ae477 
+size 910309 diff --git a/data/2312.04913.png b/data/2312.04913.png new file mode 100644 index 0000000000000000000000000000000000000000..58ffe9b15062af245f91b0346bd2fb911277534f --- /dev/null +++ b/data/2312.04913.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:843cc272b355f480fd7ca9699b46b788764d4f7c8e7a28bc126b26f8ab422f47 +size 846847 diff --git a/data/2312.04962.png b/data/2312.04962.png new file mode 100644 index 0000000000000000000000000000000000000000..c357e5ec4d95edb03e6c9f9cba49f2c419d97ded --- /dev/null +++ b/data/2312.04962.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d694bcb7e795f3425e6a4e14dc9befcc2567843d3484ed292c9e9fa1f97c85f +size 1006842 diff --git a/data/2312.04963.png b/data/2312.04963.png new file mode 100644 index 0000000000000000000000000000000000000000..a8de2ead545b69f0d1a3c88a513fea84c6002da6 --- /dev/null +++ b/data/2312.04963.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:264ff9f81f26879de9c9a18882d8dc691c5addd96fe219b87b907a9e8a9a7436 +size 987593 diff --git a/data/2312.04964.png b/data/2312.04964.png new file mode 100644 index 0000000000000000000000000000000000000000..84092362f19c9bace8b9aebad2569b30bfaacf20 --- /dev/null +++ b/data/2312.04964.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a664933feb54627355193bdda41cb4e3167385043fe21f194faf27f81e3a067b +size 805865 diff --git a/data/2312.04965.png b/data/2312.04965.png new file mode 100644 index 0000000000000000000000000000000000000000..86aeef42f6499f0037bbd7f87e3c07a943b85232 --- /dev/null +++ b/data/2312.04965.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2f3e2a34dcf78e223a36af8f6669550b531fa0ef8183299dc9967f82e12e7b59 +size 1663956 diff --git a/data/2312.05006.png b/data/2312.05006.png new file mode 100644 index 0000000000000000000000000000000000000000..2b57a8ff634b38f49fa9327aeb6a53a93f113154 --- /dev/null +++ b/data/2312.05006.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:740f2e5b8b329b80c83a14e843387df201a6c9a7e2332381bc1e570b80deab2a +size 766444 diff --git a/data/2312.05039.png b/data/2312.05039.png new file mode 100644 index 0000000000000000000000000000000000000000..85a27ccdf5559dd11c9658cdd2cc77c594a8d3ac --- /dev/null +++ b/data/2312.05039.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e397ff0aa42069ac71c43b59db09242083cca754dae868152acb7a1014dfa7d1 +size 1413145 diff --git a/data/2312.05208.png b/data/2312.05208.png new file mode 100644 index 0000000000000000000000000000000000000000..59c7b1af494076418de345490066890620707b29 --- /dev/null +++ b/data/2312.05208.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f48329a01e7046707396c340dd2cf14af8b5009f55d2811c2ef544926b93dd7 +size 1457028 diff --git a/data/2312.05210.png b/data/2312.05210.png new file mode 100644 index 0000000000000000000000000000000000000000..479efc366825156fe4947cc877244dbf29b0fd34 --- /dev/null +++ b/data/2312.05210.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2efcabbb7958dee1ede81c42959768ca8672b842149edab24117153acf5b7180 +size 869930 diff --git a/data/2312.05239.png b/data/2312.05239.png new file mode 100644 index 0000000000000000000000000000000000000000..376dd8b7ac284c91ae5503e46cf318e6c6071bc2 --- /dev/null +++ b/data/2312.05239.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:20634937ba3f5dc1d370ac73d99ce0d451950044ab8589892089811a0c3ab7a0 
+size 2307881 diff --git a/data/2312.05247.png b/data/2312.05247.png new file mode 100644 index 0000000000000000000000000000000000000000..6707a6d022daf065cbf114ceede10a4a35d57223 --- /dev/null +++ b/data/2312.05247.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:03ec1b123da9635823d5e7dccb2e10fe4e7e3ee9b2be1f3f7740d0a124270a31 +size 790200 diff --git a/data/2312.05251.png b/data/2312.05251.png new file mode 100644 index 0000000000000000000000000000000000000000..13a5a4bb3422eeee27aa32918e2abc1cc85f62a4 --- /dev/null +++ b/data/2312.05251.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e5c1a0891d6c6119cd20bcea53821a69fda4e1c8cbc1838522218ab750827f72 +size 1685618 diff --git a/data/2312.05264.png b/data/2312.05264.png new file mode 100644 index 0000000000000000000000000000000000000000..5fc2edc167be479f341b6cdf92a4912f4656f4b1 --- /dev/null +++ b/data/2312.05264.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:085e270478f7860af329a3caf0f1499fa0a26444c33527a07e663ab9c0e5e3ae +size 847621 diff --git a/data/2312.05278v2.png b/data/2312.05278v2.png new file mode 100644 index 0000000000000000000000000000000000000000..08a54f1989eb942358271adce67c6839572cf8c8 --- /dev/null +++ b/data/2312.05278v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3466ee3dace9c7c5aaec4ba8971fb670d156c2ed8bb6b81a77808d5fa0bb7189 +size 759313 diff --git a/data/2312.05291.png b/data/2312.05291.png new file mode 100644 index 0000000000000000000000000000000000000000..a857abf7875708b22e8d585b86c91263a76f1d6f --- /dev/null +++ b/data/2312.05291.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f50513d6dab6adbbd18a3908ea48ed783cc6d7640dacb62d2f6901d6cde2e3b1 +size 941190 diff --git a/data/2312.05387.png b/data/2312.05387.png new file mode 100644 index 0000000000000000000000000000000000000000..d8852623f4edeb24c449276aa19652cb4d348095 --- /dev/null +++ b/data/2312.05387.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f004a9af42d2ef90898b1499a59ff0e1597fb241cdd1c1e2d067f41b2c30c94d +size 750742 diff --git a/data/2312.05390.png b/data/2312.05390.png new file mode 100644 index 0000000000000000000000000000000000000000..dbe11a7beff872d979160a90b46504267a79e587 --- /dev/null +++ b/data/2312.05390.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:663d43f4d7edc31b7c3b57f8742265fe0db718aa310e80836cc807f64c7a3671 +size 1738015 diff --git a/data/2312.05634.png b/data/2312.05634.png new file mode 100644 index 0000000000000000000000000000000000000000..f8f8375e9d032f7eb48ec5219a9f2d7d2bd00716 --- /dev/null +++ b/data/2312.05634.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ef5f5934d59e35087dc410459d19af2b1328f803016a77c56e677535d2b42e8a +size 887436 diff --git a/data/2312.05664.png b/data/2312.05664.png new file mode 100644 index 0000000000000000000000000000000000000000..775429c34026dda0b55a56e2b44198f590ccfdb3 --- /dev/null +++ b/data/2312.05664.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:95988998bbebff7079088abb108d9d9785db476bb76669ff3b940970a15999d1 +size 982047 diff --git a/data/2312.05716.png b/data/2312.05716.png new file mode 100644 index 0000000000000000000000000000000000000000..502bc1d80944770de0c905acf6ca019ec772d567 --- /dev/null +++ b/data/2312.05716.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:15126b54f8815b22e00784d3c5c0c7be47c6f378a23fee66b4efb68605edbe88 +size 714579 diff --git a/data/2312.05752.png b/data/2312.05752.png new file mode 100644 index 0000000000000000000000000000000000000000..28a488ad9289c718bcd546ef5828a94ca9aaf8a0 --- /dev/null +++ b/data/2312.05752.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e44107990d9dc5e654a15f2f91b36eb6656e3dd5382e9bd299bf625135624dd0 +size 776787 diff --git a/data/2312.05849.png b/data/2312.05849.png new file mode 100644 index 0000000000000000000000000000000000000000..4a9728010c4a5260df9e9faf9702dd6879d1d357 --- /dev/null +++ b/data/2312.05849.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:096517b3dea7b419e24cb4c229f402a5d95316a58128d29bab613e8a3f4476e9 +size 1062670 diff --git a/data/2312.05856.png b/data/2312.05856.png new file mode 100644 index 0000000000000000000000000000000000000000..f8373c476575f45b32f5a776c71d1a54e225633e --- /dev/null +++ b/data/2312.05856.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1480b24529cde00eaa16ab138978e38570805a84667472bd4bc6d170b3897dd9 +size 1269421 diff --git a/data/2312.05889.png b/data/2312.05889.png new file mode 100644 index 0000000000000000000000000000000000000000..aa2e30ac3a6494f645a8b730a9b054e0249ebb2d --- /dev/null +++ b/data/2312.05889.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e6d52616e20196364f0e6dd3dd0a121f684ac28f557b8c2acd4abf3251d8d857 +size 1009177 diff --git a/data/2312.05923.png b/data/2312.05923.png new file mode 100644 index 0000000000000000000000000000000000000000..e24cb4162fd60e9bc7e459315d7fc226a739fdd6 --- /dev/null +++ b/data/2312.05923.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:703c65bceda2d6d75da1a744c8a330b846346be323395f275b27fa15ce751ba6 +size 953198 diff --git a/data/2312.05941.png b/data/2312.05941.png new file mode 100644 index 0000000000000000000000000000000000000000..de1f90c06b1bbac470d60bb0c625c52187fab69b --- /dev/null +++ b/data/2312.05941.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23e166fcea3305b498ea1163b14ffe378ef53f585fd794baa5a05209d11f0308 +size 909846 diff --git a/data/2312.05995.png b/data/2312.05995.png new file mode 100644 index 0000000000000000000000000000000000000000..b8496d18152a93a2024f7192027ea4b302131ffb --- /dev/null +++ b/data/2312.05995.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:166cd28fe79a5b0071fb0e3d81dc22db4978d6a2c47bd0603d5aaf26a0ba8fdc +size 749438 diff --git a/data/2312.06038.png b/data/2312.06038.png new file mode 100644 index 0000000000000000000000000000000000000000..68d552ed022d65512f4126aa1d3a7f81cc57c167 --- /dev/null +++ b/data/2312.06038.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:62eb8ee2e24e013abd2058fa1e96607ec3574f6b790871585d9e28e3159b07fa +size 1326245 diff --git a/data/2312.06059v1.png b/data/2312.06059v1.png new file mode 100644 index 0000000000000000000000000000000000000000..ddcd1cf96d8f2f39c11c5c24a83010d56a316aca --- /dev/null +++ b/data/2312.06059v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ef26bf31f0fac1200cca067b7c2c2cd0e7cdec81d3b858a3d69dc522270aa9f9 +size 1388805 diff --git a/data/2312.06112.png b/data/2312.06112.png new file mode 100644 index 0000000000000000000000000000000000000000..b19694ab90d77a6f1cdb87d28c32fe0a694a1e0c --- /dev/null +++ b/data/2312.06112.png @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:1e85dfc303440a9cdceeba052974ef3f8e1840d5a8bb8ab3a6942a18484d3619 +size 1019640 diff --git a/data/2312.06184.png b/data/2312.06184.png new file mode 100644 index 0000000000000000000000000000000000000000..8fb20dbac469fa3717110c6e9f5521abe4b5d51d --- /dev/null +++ b/data/2312.06184.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6aa5de05b8ace878aa818ff1c6b9440683a1c4dcf082b1ac4637ac626f801bfd +size 749243 diff --git a/data/2312.06230.png b/data/2312.06230.png new file mode 100644 index 0000000000000000000000000000000000000000..2af36f098bd41e0baac6cc21d60a07682eeff2fe --- /dev/null +++ b/data/2312.06230.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bf78a936e9a2f6fb73278754ff8d8c9f7253de0dee6aa0b35183b6299c1568fe +size 531530 diff --git a/data/2312.06354.png b/data/2312.06354.png new file mode 100644 index 0000000000000000000000000000000000000000..7840952f34dc30001ba20eb44c33352e1aacca40 --- /dev/null +++ b/data/2312.06354.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9c125bbec9d6dfeb9ffc55c66d2b86328e07c58704e743f5596c5f9557d89491 +size 1254977 diff --git a/data/2312.06358.png b/data/2312.06358.png new file mode 100644 index 0000000000000000000000000000000000000000..d2085df023c0be94d21fe8c6464102aa6b2e4f04 --- /dev/null +++ b/data/2312.06358.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7f384f21c2ca5b8d16b4a174c4211545d4e8d9a9fd581b5f587f3d32d2776e3 +size 1035736 diff --git a/data/2312.06420.png b/data/2312.06420.png new file mode 100644 index 0000000000000000000000000000000000000000..b42943db5e90234b40d82c6beca561642579acdf --- /dev/null +++ b/data/2312.06420.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6198d7d4ac2221e3a5b62bd9cdb242b3dc9f8aa974e5253c6dd5cc9c851ec97 +size 910976 diff --git a/data/2312.06439.png b/data/2312.06439.png new file mode 100644 index 0000000000000000000000000000000000000000..a096001d9deb4f0a83070c9350e4b0a66863fed4 --- /dev/null +++ b/data/2312.06439.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0eb253dd83dfc59996a0d3912d1477c6e370c9ba7fc70469efa9934b00b5d8f9 +size 911408 diff --git a/data/2312.06462.png b/data/2312.06462.png new file mode 100644 index 0000000000000000000000000000000000000000..81c3c8c7a9012f0a77df8faa575d514384b2bde6 --- /dev/null +++ b/data/2312.06462.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:52ccc68b754c9eb2c42e4de9923ae65eefe9f95746e650ae8e5852e5feb93487 +size 730023 diff --git a/data/2312.06505.png b/data/2312.06505.png new file mode 100644 index 0000000000000000000000000000000000000000..7b7d799b3ab24b7c8c397bfcafeab950cabad106 --- /dev/null +++ b/data/2312.06505.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:75a4907cd669da34681d35e88ee8296eb2aed5b851bf06bf74fbdcbec43dbb77 +size 947463 diff --git a/data/2312.06553.png b/data/2312.06553.png new file mode 100644 index 0000000000000000000000000000000000000000..34d18af1a5e6fa3efcd871ab949cfa7fcd9ff389 --- /dev/null +++ b/data/2312.06553.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b1295f88e59f102333207b0ad3befbf2b969166139c59ca2fa49eebe7d88a78e +size 505654 diff --git a/data/2312.06640.png b/data/2312.06640.png new file mode 100644 index 0000000000000000000000000000000000000000..4cc617fb5b0c9ef8dca1a6d7bc404c3d6a0ea831 --- /dev/null +++ b/data/2312.06640.png @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:6436552419bd553e043285d94fc8fddb9e84a20fa2a47619059a0e425c14ada1 +size 2138288 diff --git a/data/2312.06655.png b/data/2312.06655.png new file mode 100644 index 0000000000000000000000000000000000000000..5b393198e1dbd936bef8faa6d7f3ffc72bd5fae2 --- /dev/null +++ b/data/2312.06655.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b816dbafccf1343f6c44a8abc93d3fe9dc3b73ade732b50ef8a60731ea7111d1 +size 1661520 diff --git a/data/2312.06663.png b/data/2312.06663.png new file mode 100644 index 0000000000000000000000000000000000000000..e2cd22e2d4e6a369ed22ddba9521f0527e374d80 --- /dev/null +++ b/data/2312.06663.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a506e3d8834c08e4a540fee0abd3daf0746bd52aa8cad896cfea07a08dc4d09f +size 1214761 diff --git a/data/2312.06685.png b/data/2312.06685.png new file mode 100644 index 0000000000000000000000000000000000000000..60a1fa80c3588f2027aa8434e6fa824cf2e83899 --- /dev/null +++ b/data/2312.06685.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e8950b0112c8eef5ee122c134f93bd40f4d633cb0ccd8897422914929473dbd +size 995161 diff --git a/data/2312.06704.png b/data/2312.06704.png new file mode 100644 index 0000000000000000000000000000000000000000..aaf31fc31a6882433d344f858387809b8e3c5a08 --- /dev/null +++ b/data/2312.06704.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5c643958f4710765bc93dfef1d8fa8a8ae2aeccefe217245a109cc87164cab26 +size 1304823 diff --git a/data/2312.06709.png b/data/2312.06709.png new file mode 100644 index 0000000000000000000000000000000000000000..e325313be40b102446eec674de732c0ef895f80f --- /dev/null +++ b/data/2312.06709.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fb9e131953853352844c20c9f486b1856475a814c97edf8d8d6ebedf1d26440f +size 1072001 diff --git a/data/2312.06713.png b/data/2312.06713.png new file mode 100644 index 0000000000000000000000000000000000000000..f642564e96bcee22650913cdfe4773c0169b264d --- /dev/null +++ b/data/2312.06713.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5441f103e5d1f554757917176c2af921a206967a0335c49664e5d5b81a29e69a +size 1009500 diff --git a/data/2312.06716.png b/data/2312.06716.png new file mode 100644 index 0000000000000000000000000000000000000000..a7af156490a1006bcfa834856013b63dbe0adca2 --- /dev/null +++ b/data/2312.06716.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1e4210d85ecb278a74496ddc70821822fed1515dacd6afaf7ded24be324164d8 +size 1093376 diff --git a/data/2312.06725.png b/data/2312.06725.png new file mode 100644 index 0000000000000000000000000000000000000000..e58d2256d8bb715c15dd34307152752130c7e923 --- /dev/null +++ b/data/2312.06725.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad62c259aef9700dabdb859cb00861a6b77f648e8dc7a4b0ca24ca45f472e283 +size 878957 diff --git a/data/2312.06733.png b/data/2312.06733.png new file mode 100644 index 0000000000000000000000000000000000000000..8ec401be4d53dbcf60967a6e92c428d26256840d --- /dev/null +++ b/data/2312.06733.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2cb993f0696fde6071bf7c5c9e3d8d1ba301933d8d6d6e3839a9dcf7255b37e8 +size 1005911 diff --git a/data/2312.06734.png b/data/2312.06734.png new file mode 100644 index 0000000000000000000000000000000000000000..937628b76257c73ada5e0ad1f53e5b2e40bea0c8 --- /dev/null +++ b/data/2312.06734.png @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:68a37adee27759c3abcd474a554f9b6618ac0fee4f9a5eb2a3fb5a034f411b9c +size 790621 diff --git a/data/2312.06739.png b/data/2312.06739.png new file mode 100644 index 0000000000000000000000000000000000000000..d388e9b56fca1988ac8b171d084764899715782e --- /dev/null +++ b/data/2312.06739.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a5823b662d01190c6ec2a994dd1222a434f392736629e238ea12cb490401ff82 +size 1644214 diff --git a/data/2312.06740.png b/data/2312.06740.png new file mode 100644 index 0000000000000000000000000000000000000000..e6dcfeb01a22b3db4f7fa8022d605e25257f48af --- /dev/null +++ b/data/2312.06740.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f5125e155dc733482223d0f4792c3e9c9505b96d425c8d51261673e4e7754db +size 996958 diff --git a/data/2312.06741.png b/data/2312.06741.png new file mode 100644 index 0000000000000000000000000000000000000000..ab2b4f3ae095fd8a62cac26818fb81ca4aafaf3b --- /dev/null +++ b/data/2312.06741.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cc14778e84479d9207ded27c515ef3945572931aa1c30b3c3a330f9503e7de90 +size 1251510 diff --git a/data/2312.06742.png b/data/2312.06742.png new file mode 100644 index 0000000000000000000000000000000000000000..55f723b1c7213da629e45ac24ae9a7a2999a4379 --- /dev/null +++ b/data/2312.06742.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:54b96259527657c85b64db7b4fff7c745e8b7d3a73706d8d767656ea6c550c1b +size 737259 diff --git a/data/2312.06874.png b/data/2312.06874.png new file mode 100644 index 0000000000000000000000000000000000000000..e5bddd5748dac2b9926c69466a2507f7fdb82212 --- /dev/null +++ b/data/2312.06874.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fe10bc461b54260e938e4aaf9436b2200205116e0f6ce6f9bb20af68ab2fa80b +size 497314 diff --git a/data/2312.06886.png b/data/2312.06886.png new file mode 100644 index 0000000000000000000000000000000000000000..828cde9d0d0ce77710a88a93ea0277088f1f886f --- /dev/null +++ b/data/2312.06886.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad2f5b46c8b059e63f712ab3ee40145ad05d87a3f5425c44f41858531e29ace0 +size 1610120 diff --git a/data/2312.06968.png b/data/2312.06968.png new file mode 100644 index 0000000000000000000000000000000000000000..68d6689c95023b51a7845f63094873d8d6300b1c --- /dev/null +++ b/data/2312.06968.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:22efc040ff8f9b48a25ca52358e6da3c933697acb3e18f17b33de1bb66be8bc4 +size 830191 diff --git a/data/2312.07061.png b/data/2312.07061.png new file mode 100644 index 0000000000000000000000000000000000000000..d2a0a965d7f1164e81bea3e8751b5cf54ef301a0 --- /dev/null +++ b/data/2312.07061.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f804a1495f7d7b06f7f9573c94a32279df886c10fb03bc0894cfa7c09e9f935 +size 729150 diff --git a/data/2312.07063.png b/data/2312.07063.png new file mode 100644 index 0000000000000000000000000000000000000000..85f5d7b200c4495c437ff66684c9451600cdf005 --- /dev/null +++ b/data/2312.07063.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61274e8ae3af41423d0af13b869e382373bd2ca7a94bcf89e29a8ea708a51f41 +size 923949 diff --git a/data/2312.07067.png b/data/2312.07067.png new file mode 100644 index 0000000000000000000000000000000000000000..b600866093e5089123922012ed5b620f8c5c069c --- /dev/null +++ b/data/2312.07067.png @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:e425b86bf3c3e58292b70734f051853779ab1b177310acf461115339f8d96d8a +size 743465 diff --git a/data/2312.07246.png b/data/2312.07246.png new file mode 100644 index 0000000000000000000000000000000000000000..30fee99cc0ccb8d1c0f256ee463a4c26572206aa --- /dev/null +++ b/data/2312.07246.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ccf6baee91a57af73b42c706149048730ac2b596d1d54f275d6c5874dae5ca0b +size 1045223 diff --git a/data/2312.07322.png b/data/2312.07322.png new file mode 100644 index 0000000000000000000000000000000000000000..49b311796df95ed001061b2baa2a95ccea18f0ce --- /dev/null +++ b/data/2312.07322.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fe7eb125e47e4138d74e3738ee8af50374875bef7d7985a424c641e94c3385a0 +size 1181810 diff --git a/data/2312.07330.png b/data/2312.07330.png new file mode 100644 index 0000000000000000000000000000000000000000..b66d145bab2976b9239ecc7414e879e3155f5172 --- /dev/null +++ b/data/2312.07330.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:51812e1bc25c33a1c6d80ea01430c1ae312a4c96e8400639ac1948aebfa13bea +size 979584 diff --git a/data/2312.07378v1.png b/data/2312.07378v1.png new file mode 100644 index 0000000000000000000000000000000000000000..9e198603eacebae1dc53aeac16a231e3b124109b --- /dev/null +++ b/data/2312.07378v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fa3e0bc0ac8bee3fb648547d38664bc626d313426d2c0396143503cbd765f8d7 +size 873524 diff --git a/data/2312.07395.png b/data/2312.07395.png new file mode 100644 index 0000000000000000000000000000000000000000..ba84559d6365ac328d07c657526846fc1a23cce9 --- /dev/null +++ b/data/2312.07395.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:53191a47438f9fdb7ee473b601e46819497453913ad458ce46b9a819da784946 +size 771640 diff --git a/data/2312.07409.png b/data/2312.07409.png new file mode 100644 index 0000000000000000000000000000000000000000..119cdf7b2cefb58c5c14d037171944c05b3227ed --- /dev/null +++ b/data/2312.07409.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2d52fdb41b871a85e6ccd05c0cba8db41f9c5f726d4b1377d65b6dd2f032021 +size 1550677 diff --git a/data/2312.07423.png b/data/2312.07423.png new file mode 100644 index 0000000000000000000000000000000000000000..f4329940c7f670b2f65609b6e4dfd3a0a91566c0 --- /dev/null +++ b/data/2312.07423.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4bdfd72341afd11c27827d851fb4ae5ef51dd746de229f62af19e512dcf2d68a +size 797719 diff --git a/data/2312.07472.png b/data/2312.07472.png new file mode 100644 index 0000000000000000000000000000000000000000..6e37b735def4de22cbc0a0563cb6880070622f5f --- /dev/null +++ b/data/2312.07472.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8ef13bd13eaa66c74131bb0998fc84b31acb848729d0f7b92930dfbf4316b9a4 +size 1046177 diff --git a/data/2312.07488.png b/data/2312.07488.png new file mode 100644 index 0000000000000000000000000000000000000000..7102dcc7e48e9e0dba30a83e126038c0b85005ee --- /dev/null +++ b/data/2312.07488.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a3b77e234d3f183b895f1b86f54f301c061039378a40719af835154a7f400f3 +size 1007960 diff --git a/data/2312.07504.png b/data/2312.07504.png new file mode 100644 index 0000000000000000000000000000000000000000..6565796545d5aa0bcacb71ff623dc783966d3c4b --- /dev/null +++ b/data/2312.07504.png @@ -0,0 
+1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4dd768230b1b90ca6e43bea61ba32aac39021b9e188913428bad507716502742 +size 1493329 diff --git a/data/2312.07509.png b/data/2312.07509.png new file mode 100644 index 0000000000000000000000000000000000000000..aae161919ca6570b6e06895be6884423615151c0 --- /dev/null +++ b/data/2312.07509.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3064bf71546d7d1ca5f84b31b7ffc698a7ff26b657b27fff63e416122184fc21 +size 1211302 diff --git a/data/2312.07526.png b/data/2312.07526.png new file mode 100644 index 0000000000000000000000000000000000000000..5a7543882d933bf1d4077812b8f7935c2faee5f7 --- /dev/null +++ b/data/2312.07526.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:748ccf410c6e99cd7693eaa030c67013ff5e0edcab936da6a3c91ec4c5726bed +size 719912 diff --git a/data/2312.07530.png b/data/2312.07530.png new file mode 100644 index 0000000000000000000000000000000000000000..d3d70193cb4c8fadb068a4b6cf20d4a6f8b24df0 --- /dev/null +++ b/data/2312.07530.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c30e6bb10c50a843f9d9374e736eb373a3f099ba52cfba45de375f51ef3e7c0f +size 435337 diff --git a/data/2312.07531.png b/data/2312.07531.png new file mode 100644 index 0000000000000000000000000000000000000000..6aee0380664b6d2ec36c76f61e3dbcf97ec2f2fd --- /dev/null +++ b/data/2312.07531.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:66fa89e16c1e0b3443ad492229a213ae8c9e2ad8d6b39ed2ba53a2813fc1182c +size 1221205 diff --git a/data/2312.07533.png b/data/2312.07533.png new file mode 100644 index 0000000000000000000000000000000000000000..bd1055231359b0a2a973790f0bf0c9f49b34c3ec --- /dev/null +++ b/data/2312.07533.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:72fddc6a5ca17be6c998cb70eb22c6f460496543b63ad5b9a73a7d7074f7a78e +size 785542 diff --git a/data/2312.07536.png b/data/2312.07536.png new file mode 100644 index 0000000000000000000000000000000000000000..ca0bed8b60b465d63ccf23cb2c6f7fee91b67402 --- /dev/null +++ b/data/2312.07536.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5917c55dcb32612192a0daedf4b9a0988e3ce9c6ba762cd75ef1f7968f1dfc29 +size 1501435 diff --git a/data/2312.07538.png b/data/2312.07538.png new file mode 100644 index 0000000000000000000000000000000000000000..5f422f6283c14297f26f220d1903f23a4b6c1892 --- /dev/null +++ b/data/2312.07538.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3513beb88721b457d42c8712c679b9436e1a42a2b4120fe8723f3fe9e6a494c +size 797645 diff --git a/data/2312.07661.png b/data/2312.07661.png new file mode 100644 index 0000000000000000000000000000000000000000..a7a95ea85acc0da5958449a4c6aa063b56f81196 --- /dev/null +++ b/data/2312.07661.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2cf37e3ebb0106f40bc7cd70b8c13192fca43a6cc503fe8027fb073accf990cb +size 1365816 diff --git a/data/2312.07804.png b/data/2312.07804.png new file mode 100644 index 0000000000000000000000000000000000000000..3a25d8592680fe7842a749f69a6fab69f5376434 --- /dev/null +++ b/data/2312.07804.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3180361f44a8e0cc63b93d07cb4da7cce026f38531b2df2cb95c0ab7cc25673f +size 688270 diff --git a/data/2312.07856.png b/data/2312.07856.png new file mode 100644 index 0000000000000000000000000000000000000000..a30d67c4d097b6e74832ac9cb78b2afaab2332ae --- /dev/null +++ b/data/2312.07856.png @@ -0,0 
+1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c4ca1a494a83b7d05a880b67d00dd43c624d13bcd3954e51ab45d2a78b88ce4 +size 782056 diff --git a/data/2312.07865.png b/data/2312.07865.png new file mode 100644 index 0000000000000000000000000000000000000000..2761cdced213a93a8415fae7fe9ca98d00951c1c --- /dev/null +++ b/data/2312.07865.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:70bf557062a8260347ce4f53fcf2af0b1d8cc01d172b1e9aa665c543e071614e +size 1151039 diff --git a/data/2312.07920.png b/data/2312.07920.png new file mode 100644 index 0000000000000000000000000000000000000000..2fec5d2cb8aa03b8535bc6077311ef20f5eeb548 --- /dev/null +++ b/data/2312.07920.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:21b51426181d17739aac5f7abd96b60e97a9c9f8e00f45706a9d30246a325546 +size 1299003 diff --git a/data/2312.07937.png b/data/2312.07937.png new file mode 100644 index 0000000000000000000000000000000000000000..390ac83982a2d751e7e3a277414852d99a74d29e --- /dev/null +++ b/data/2312.07937.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:361c75c3861aeaf29795e06cbcd6b2e02070e53617a1a59d7b864041880f70e5 +size 784847 diff --git a/data/2312.08007.png b/data/2312.08007.png new file mode 100644 index 0000000000000000000000000000000000000000..cc37a36384bb71c765e7bf568932b573d547cdfc --- /dev/null +++ b/data/2312.08007.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3cea7dde5a7839da3559b5e9a5719066abc143954b17f08d63bcb889797e7ca1 +size 981203 diff --git a/data/2312.08071v1.png b/data/2312.08071v1.png new file mode 100644 index 0000000000000000000000000000000000000000..578961e514dabccca0e0d34a69aebc14466022b8 --- /dev/null +++ b/data/2312.08071v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b3daaf223e149f175a13809c79894d15cc3b321e0c0b46cbe2716de6430b4cd3 +size 1201854 diff --git a/data/2312.08128.png b/data/2312.08128.png new file mode 100644 index 0000000000000000000000000000000000000000..8beb9c9953f863218bb3f3dede2e61a5c4e3fdbd --- /dev/null +++ b/data/2312.08128.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e8b51ce0ddfe462da2abc75c56b29c41782e91665a3cff218956699a4f1779bf +size 1030025 diff --git a/data/2312.08338.png b/data/2312.08338.png new file mode 100644 index 0000000000000000000000000000000000000000..e5b3d315259fead1d853152aef836d8104e5ca73 --- /dev/null +++ b/data/2312.08338.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eec4b0852566b5b552fb0772543adcc3ac71291b2d45291a317fedca21fc0dd0 +size 1601060 diff --git a/data/2312.08344.png b/data/2312.08344.png new file mode 100644 index 0000000000000000000000000000000000000000..558c7cde9484f5f780c2f2ee132956533af83da3 --- /dev/null +++ b/data/2312.08344.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3203cd680a69734a981d4bf92aa59145de17d1e75e6b155e7ed487853103a3c0 +size 871217 diff --git a/data/2312.08366v1.png b/data/2312.08366v1.png new file mode 100644 index 0000000000000000000000000000000000000000..4fa085a40721f8cc8d11bf9ea223b6bf78cdb4f1 --- /dev/null +++ b/data/2312.08366v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b6f09c4a7834d5cc87eb71c59a3fb750afe3ae00df8cfe5699d4eb9d7dec1bc +size 1009067 diff --git a/data/2312.08371.png b/data/2312.08371.png new file mode 100644 index 0000000000000000000000000000000000000000..6fffd75562cd5cecb99ae6781bf6a3526e067758 --- /dev/null +++ 
b/data/2312.08371.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2afac90345f4d8c3684a2e01dd79d3f484983906e4c08c7cedcf5fe029d2f888 +size 730476 diff --git a/data/2312.08459.png b/data/2312.08459.png new file mode 100644 index 0000000000000000000000000000000000000000..80521ada96e186876dadf4f94c0596d49777a321 --- /dev/null +++ b/data/2312.08459.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:833d868c33d445f18a01fe938930e7e9168ccb3357f7fd4a6132c4122fe904c9 +size 1042310 diff --git a/data/2312.08568.png b/data/2312.08568.png new file mode 100644 index 0000000000000000000000000000000000000000..6ce407a38d255c9096d125c7b1b347737113a4b1 --- /dev/null +++ b/data/2312.08568.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ee69837a3a459385e7807d12529f2a8043aa9a9b5fd444dddcd5bafbdb376c1f +size 1194157 diff --git a/data/2312.08578.png b/data/2312.08578.png new file mode 100644 index 0000000000000000000000000000000000000000..76c17bcebd7e4cf4c628ba69b65b4246936f9ca7 --- /dev/null +++ b/data/2312.08578.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca73c4be233eeb1e7a80d0cb9d4718df707e581cadf7b95cf7b8201f04d94d7c +size 755057 diff --git a/data/2312.08591.png b/data/2312.08591.png new file mode 100644 index 0000000000000000000000000000000000000000..4c73500fc8eaebb04e711d0e9bbe663e41e11b29 --- /dev/null +++ b/data/2312.08591.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c41f4a9e691c8262960ba8751fedc848e27808a83b881e8701e0ccc8d27667a1 +size 851154 diff --git a/data/2312.08606.png b/data/2312.08606.png new file mode 100644 index 0000000000000000000000000000000000000000..293fde77e9ba09120643fb7f0d7c07f051476742 --- /dev/null +++ b/data/2312.08606.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eead679219bb598a34fa954b1c0daa5150c313da848fca521bf9b19239a5fba6 +size 841860 diff --git a/data/2312.08631.png b/data/2312.08631.png new file mode 100644 index 0000000000000000000000000000000000000000..610912cf911d532d452b8ef779034d8104d37e7c --- /dev/null +++ b/data/2312.08631.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:048c80bbb998a0f03912fcfdd2e447bed5bc63a0baa30b8e5546927c28d067ca +size 695935 diff --git a/data/2312.08869.png b/data/2312.08869.png new file mode 100644 index 0000000000000000000000000000000000000000..63656b0c696e752b37f45a28d5ae9bd845236c9b --- /dev/null +++ b/data/2312.08869.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c96a3fc7768388cbcef16c5f7dc17acc4c39a4245bda0998335961c9e9cc9d01 +size 1145514 diff --git a/data/2312.08870.png b/data/2312.08870.png new file mode 100644 index 0000000000000000000000000000000000000000..02f9f0891b147bf85182519189fca491db1c4494 --- /dev/null +++ b/data/2312.08870.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fe120f2c593e7cf27866ce54e303f8650e8f920a9cd9a36cd67afbef01c0fcd9 +size 898102 diff --git a/data/2312.08875.png b/data/2312.08875.png new file mode 100644 index 0000000000000000000000000000000000000000..3d811951487eacab75174373ea6950fa249f02a1 --- /dev/null +++ b/data/2312.08875.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c4cf27889feef448530f82a5907e43332ab7a0d0bd840f97076dd9bebbfcb84d +size 843705 diff --git a/data/2312.08878.png b/data/2312.08878.png new file mode 100644 index 0000000000000000000000000000000000000000..9db1b9a331a7e5a12dc60d2e761962f030ebef48 --- /dev/null +++ 
b/data/2312.08878.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0aca02915c80447013b9bd1dd9d90c5e83a2822497bfb42931bbf9c8589d50df +size 795214 diff --git a/data/2312.08883.png b/data/2312.08883.png new file mode 100644 index 0000000000000000000000000000000000000000..acf3a52d40a96833ff1ad1b2803809614f86c086 --- /dev/null +++ b/data/2312.08883.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6cd4ad6215a1fad96183da54e9f4e454b7761d916f70d1a4f7fb0b3e28de17fa +size 1172787 diff --git a/data/2312.08885.png b/data/2312.08885.png new file mode 100644 index 0000000000000000000000000000000000000000..8729bc8fe8ebe65a982e5c38641de69b8d0850e3 --- /dev/null +++ b/data/2312.08885.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:37266f99a9bd11859d8ac7af9b52f799178f4f5a69250f1f208947be95181eeb +size 1876832 diff --git a/data/2312.08886.png b/data/2312.08886.png new file mode 100644 index 0000000000000000000000000000000000000000..b4878110cc144f02311bc01d2073e3074c8359e4 --- /dev/null +++ b/data/2312.08886.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4fc0f291035638e29a208b98df193e7f079ced769840b83da5da0a6894987c63 +size 1483831 diff --git a/data/2312.08912.png b/data/2312.08912.png new file mode 100644 index 0000000000000000000000000000000000000000..455740b3a0e7e92a34abe152d673a81ce7e7e5be --- /dev/null +++ b/data/2312.08912.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2798b87daaee9821406b21fb2371a3323b29744fd64df8a7497e4263c178dd95 +size 612181 diff --git a/data/2312.08914.png b/data/2312.08914.png new file mode 100644 index 0000000000000000000000000000000000000000..47a0b732b45c1df3eec8d1f87826bc3e3d6ee4fe --- /dev/null +++ b/data/2312.08914.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1d8ea6a01053880af123431c4d014e874ca56f0281b9dc285d73191191c46a15 +size 765656 diff --git a/data/2312.08963.png b/data/2312.08963.png new file mode 100644 index 0000000000000000000000000000000000000000..7eb9acd761d9307b5390388d69829a22aff14d44 --- /dev/null +++ b/data/2312.08963.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b2a684d71c119b252d97d60e10450aebb8934c02eb3668cf19905096f0320f32 +size 942714 diff --git a/data/2312.08985v3.png b/data/2312.08985v3.png new file mode 100644 index 0000000000000000000000000000000000000000..23339cbe8bdd9f9754b622fb4b81268142dbe270 --- /dev/null +++ b/data/2312.08985v3.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:69a4a9fdf0c5bd8d29ce002bbee33f6dd44d0c7f6fab123819e5406d72d0dfad +size 1037434 diff --git a/data/2312.09008.png b/data/2312.09008.png new file mode 100644 index 0000000000000000000000000000000000000000..566753be28dc600e3d7e6d2108530cecb188336c --- /dev/null +++ b/data/2312.09008.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:84c545754ed1a85c90a1fad1474711335ce8151d9dcbebf3f1c9b87468a2e864 +size 876686 diff --git a/data/2312.09056v1.png b/data/2312.09056v1.png new file mode 100644 index 0000000000000000000000000000000000000000..f787a72507b273b2a39c80f1dce849e834c0467c --- /dev/null +++ b/data/2312.09056v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9636323b0b9fce5cf5db554836ea803073a1870e2f856ea0b5fb5372ad24328f +size 871090 diff --git a/data/2312.09067.png b/data/2312.09067.png new file mode 100644 index 0000000000000000000000000000000000000000..6a8ed0b61c18bf58de44b7413c20bbf22f0dd0e6 --- 
/dev/null +++ b/data/2312.09067.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9fc58adb96f4cb3d169377639bf9a9615a11cfea617a87c1fd386b1be4b027dd +size 1630868 diff --git a/data/2312.09069.png b/data/2312.09069.png new file mode 100644 index 0000000000000000000000000000000000000000..6e068c52792483898421eb7f97bbb607aad0376e --- /dev/null +++ b/data/2312.09069.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a12bc3fbcb5ae02291c55789b3de55266e6e718f40ec53bbc18a790d8c1bae00 +size 836838 diff --git a/data/2312.09138.png b/data/2312.09138.png new file mode 100644 index 0000000000000000000000000000000000000000..74f5c981d94197410ffff5c99f7e96e63cc80cff --- /dev/null +++ b/data/2312.09138.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aeea06d1319d9b5992e1242cdf7e43e7ff78815bd8f43bd901f8403c4c7743da +size 890569 diff --git a/data/2312.09147.png b/data/2312.09147.png new file mode 100644 index 0000000000000000000000000000000000000000..659e009cab0e67249066175936521ee556d0e55f --- /dev/null +++ b/data/2312.09147.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:51a6fc98ad55412920a82a7d06224a9bdcf367113eb9b1c464a689784a8bd1f8 +size 759740 diff --git a/data/2312.09158.png b/data/2312.09158.png new file mode 100644 index 0000000000000000000000000000000000000000..e12e03e4a35b3ad6073a49aa59ac4fad524ea9b7 --- /dev/null +++ b/data/2312.09158.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:841f537bbc2ca81fcf9368c57d9eb4727579a238f2173578d859b6d8744a35d5 +size 780841 diff --git a/data/2312.09168v2.png b/data/2312.09168v2.png new file mode 100644 index 0000000000000000000000000000000000000000..59fa1148530839e0eed318b9d59454667a2979b0 --- /dev/null +++ b/data/2312.09168v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:edb8b94c68302776b89d8e172dc7096d18d1de7dd5fbb1e121a554103f546d11 +size 1886088 diff --git a/data/2312.09181.png b/data/2312.09181.png new file mode 100644 index 0000000000000000000000000000000000000000..8deae2322fb40e7b42cb8331cee748ca22e77307 --- /dev/null +++ b/data/2312.09181.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d04b9ab4e76d5a7c9ef4da2998827581823b09c954abf40ec3bc9babeaafbff +size 414165 diff --git a/data/2312.09222.png b/data/2312.09222.png new file mode 100644 index 0000000000000000000000000000000000000000..cb0671f48013a2d39820b25cf3878e86d4beb8b9 --- /dev/null +++ b/data/2312.09222.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d9ade971f7ab74e18904aa977a3fe449435d5d581d2d6423b827ebec1810a8aa +size 833842 diff --git a/data/2312.09228.png b/data/2312.09228.png new file mode 100644 index 0000000000000000000000000000000000000000..36755dc46a89e1d7ebda697b9f1cbc699f22f234 --- /dev/null +++ b/data/2312.09228.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:af7b315591d815771439a141ed77ea0cce1485f9a079d02a31e3995baf96e6b8 +size 878960 diff --git a/data/2312.09237.png b/data/2312.09237.png new file mode 100644 index 0000000000000000000000000000000000000000..36be2a6f7378642ff2fa44cc1f5e443fcad45731 --- /dev/null +++ b/data/2312.09237.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:569d8e688c81ca9610278a97ee74d2a78458e7f7d56b721d69d0e62c7232e23a +size 830139 diff --git a/data/2312.09238.png b/data/2312.09238.png new file mode 100644 index 0000000000000000000000000000000000000000..76d7a0de7f9dc43cc210a6af6f7c9a764ab52655 
--- /dev/null +++ b/data/2312.09238.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4ad6def5653d7be762d53d91e804711f5c3a0a420d53008ffc96f0a24c2c77f1 +size 731876 diff --git a/data/2312.09249.png b/data/2312.09249.png new file mode 100644 index 0000000000000000000000000000000000000000..febfd474c30d55df6787f0c6161c7cc852af5b89 --- /dev/null +++ b/data/2312.09249.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8e43389bbd1b09df59b6d891b4502f0613ba3debf7d3024c5c78a9ec3b78e8c3 +size 1022588 diff --git a/data/2312.09250.png b/data/2312.09250.png new file mode 100644 index 0000000000000000000000000000000000000000..5b5f67e4c1b017e244503bf13c3906ee2a409dd3 --- /dev/null +++ b/data/2312.09250.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fb19bfaf9c11e0850890ed619bb4601cbea2e2ab987bb96f323bfdbdbf413dd1 +size 1134186 diff --git a/data/2312.09337.png b/data/2312.09337.png new file mode 100644 index 0000000000000000000000000000000000000000..6dfba5146817e62e92cdd434ba5265a8db0bed3c --- /dev/null +++ b/data/2312.09337.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cb7d490f5da4a87518523f7fb81415f00ae075d575e143cedbc2d39d6ed4511d +size 803309 diff --git a/data/2312.09523.png b/data/2312.09523.png new file mode 100644 index 0000000000000000000000000000000000000000..688eb27c27f8fb6ea54e10a95496e0fce96dee40 --- /dev/null +++ b/data/2312.09523.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dcb686722afd942419befb049b552c115eb04e0bf01e344d4e60511a4234b4d1 +size 1180330 diff --git a/data/2312.09558.png b/data/2312.09558.png new file mode 100644 index 0000000000000000000000000000000000000000..e34df6e8a2cf855819d2559b578d834a778a032e --- /dev/null +++ b/data/2312.09558.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fc3d8dcf0ff47024c39bcba5479dd2effd6e022ab737f982fe581ee1871058ad +size 777115 diff --git a/data/2312.09570.png b/data/2312.09570.png new file mode 100644 index 0000000000000000000000000000000000000000..9db1ddf300ea9b27d1fa442173d15218d6b2a546 --- /dev/null +++ b/data/2312.09570.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a6a6801a134d4a3bda73533cc1f128c62956aa95a7c9e8ea77f9b262a24a2177 +size 735480 diff --git a/data/2312.09625.png b/data/2312.09625.png new file mode 100644 index 0000000000000000000000000000000000000000..e042ecc283bbb2984b9cebcc04754f3a0fc9b972 --- /dev/null +++ b/data/2312.09625.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aed9e93f6c25b6b38944516b8dd87a787f05896fe05a9647520bb5e13e645e85 +size 823798 diff --git a/data/2312.09788.png b/data/2312.09788.png new file mode 100644 index 0000000000000000000000000000000000000000..821ef2934f7a3e089e8ae10a00d7347071847e48 --- /dev/null +++ b/data/2312.09788.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:11811e458ce83dc24a724e41a9b6634601a73f12b84106df362733e36f6f4fb5 +size 729795 diff --git a/data/2312.09866.png b/data/2312.09866.png new file mode 100644 index 0000000000000000000000000000000000000000..998f94ce1138f63c554a89ce70a41bb55f06c832 --- /dev/null +++ b/data/2312.09866.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:886aee10b373b6907eb7995e128b9e8f0e57d47808e2f60e8e6a1060a9598a8b +size 1173585 diff --git a/data/2312.09913.png b/data/2312.09913.png new file mode 100644 index 0000000000000000000000000000000000000000..74ef15c61611fc9ef01776a0b39e6e8aa8d4097a 
--- /dev/null +++ b/data/2312.09913.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2a9fa02cb40f27e169b9389fd126e4b9a15179a57442ed4fbeebee90c22b19d7 +size 994981 diff --git a/data/2312.09925.png b/data/2312.09925.png new file mode 100644 index 0000000000000000000000000000000000000000..d21e03843b42d8d0188e2bfe8fb8ab2d1d6d55ea --- /dev/null +++ b/data/2312.09925.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:99818608a6b326b612988260301e7f8ed15aa31f93bf36234fce1bb14a02ef08 +size 836789 diff --git a/data/2312.10032.png b/data/2312.10032.png new file mode 100644 index 0000000000000000000000000000000000000000..3171b1f96ffd324120691096d80461ffa5c0dc9c --- /dev/null +++ b/data/2312.10032.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:926841c9b5b71fe247ee055655313f69a6de345a4719dc48376f5f16551dbafb +size 808732 diff --git a/data/2312.10035.png b/data/2312.10035.png new file mode 100644 index 0000000000000000000000000000000000000000..506af439ba972277aaf993ef2e791103d33f044c --- /dev/null +++ b/data/2312.10035.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7b0a124add00a0b8278d92adef214120ea77df76f402a15ecc343f4c387a4606 +size 834818 diff --git a/data/2312.10103.png b/data/2312.10103.png new file mode 100644 index 0000000000000000000000000000000000000000..892fa34e843b8e651fff49ecb9c01554d68ec93e --- /dev/null +++ b/data/2312.10103.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:de9538efb22adb443b803185cdaa094691191545ec448ede0d6b209b544950da +size 1069730 diff --git a/data/2312.10113.png b/data/2312.10113.png new file mode 100644 index 0000000000000000000000000000000000000000..ed0872e84e9e65492879a350465c64b061bbb005 --- /dev/null +++ b/data/2312.10113.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5b9eb565164f620790b62a2c0f6d3a4656c38f7e5397d9104734e7515df3d726 +size 1234194 diff --git a/data/2312.10115.png b/data/2312.10115.png new file mode 100644 index 0000000000000000000000000000000000000000..a0c98ff6d44157fb7f9e196c4502ddbeea0eee5c --- /dev/null +++ b/data/2312.10115.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:37f78236fb5938da23b96f2871771a2263fa6d54e0244eb1a2d665e92ab0108a +size 880663 diff --git a/data/2312.10118.png b/data/2312.10118.png new file mode 100644 index 0000000000000000000000000000000000000000..6020ccf78715492575c140a0e58a5e5e24653b3c --- /dev/null +++ b/data/2312.10118.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:017e6ce85195be16a7811a57547df2d0ee2fbe051c3bf9db531e61cf3d1b7619 +size 1005189 diff --git a/data/2312.10136.png b/data/2312.10136.png new file mode 100644 index 0000000000000000000000000000000000000000..204e2b78d8283fbf42b1c6a2ba9a7f7f5f3126bf --- /dev/null +++ b/data/2312.10136.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cf3c427b3c8b2012d502d8b5276fb52d327649bc322434b7949c62d456a88c05 +size 741426 diff --git a/data/2312.10144.png b/data/2312.10144.png new file mode 100644 index 0000000000000000000000000000000000000000..e350f7a541c08b18024b7a554806b4cbede74ad7 --- /dev/null +++ b/data/2312.10144.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e91a89de8fe10ee083cee43031b3a3c2f6b57d7d8f54250eb200aa3dcc81d296 +size 693198 diff --git a/data/2312.10240.png b/data/2312.10240.png new file mode 100644 index 0000000000000000000000000000000000000000..646d567e291bceaec064c2982d5df8931d3f8de8 
--- /dev/null +++ b/data/2312.10240.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f92c99d223ed33b80f6c27d97dfc181b131af39a94086a6e65f9f71f9de97387 +size 730464 diff --git a/data/2312.10305.png b/data/2312.10305.png new file mode 100644 index 0000000000000000000000000000000000000000..31d2552d959412fe16d14717c91e2bb3b78a8c60 --- /dev/null +++ b/data/2312.10305.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8dd984596beff5823f4a44752154e979e2c20dfdb911a73cb9cb29221dac08d2 +size 820451 diff --git a/data/2312.10461.png b/data/2312.10461.png new file mode 100644 index 0000000000000000000000000000000000000000..e188354a5b13f29479e2007561c75921fe9ddc54 --- /dev/null +++ b/data/2312.10461.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:14c979f14507e09dc75ccb58832cced47d9bc966cf5535e0d240c3e7bc07d36c +size 1087694 diff --git a/data/2312.10531.png b/data/2312.10531.png new file mode 100644 index 0000000000000000000000000000000000000000..9c7a56d687e3d109fedfdc92253d83742247ea17 --- /dev/null +++ b/data/2312.10531.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ef4d19586443a55b140534a92a23d7921fdde7b49cb63bf7e2350063129e1196 +size 707963 diff --git a/data/2312.10540.png b/data/2312.10540.png new file mode 100644 index 0000000000000000000000000000000000000000..3732cc2c77307e3d19438d6aa38c04eff37e8014 --- /dev/null +++ b/data/2312.10540.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b9e4811c1bda08fff1b3448d307969afdb9a634ee82b41af8937421b579b8d68 +size 636230 diff --git a/data/2312.10634.png b/data/2312.10634.png new file mode 100644 index 0000000000000000000000000000000000000000..528a93f0563e1c637745aaa87803c14297c0453e --- /dev/null +++ b/data/2312.10634.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:24ab6700071d97c0a7db51b10468b5b87dd394a162974307d7fb6ee25fe0718f +size 787925 diff --git a/data/2312.10656.png b/data/2312.10656.png new file mode 100644 index 0000000000000000000000000000000000000000..dc9467d4bd7b20a606be45bc530bcb55c4ea6a2c --- /dev/null +++ b/data/2312.10656.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4b703ac83f99689605fcc226b1fe2451be58c12adbbbfed25f7df98dc8eaa888 +size 1105098 diff --git a/data/2312.10671.png b/data/2312.10671.png new file mode 100644 index 0000000000000000000000000000000000000000..069be0670f72a343566e0c8166035edc186356ce --- /dev/null +++ b/data/2312.10671.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ff1b70f2cb9b149befc01dae6214acc11701821cfc24cbb447a8b0ce9aa98fc4 +size 906424 diff --git a/data/2312.10835.png b/data/2312.10835.png new file mode 100644 index 0000000000000000000000000000000000000000..751c0402c0a602252e5afb8c95cb39b5e48a5cc1 --- /dev/null +++ b/data/2312.10835.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d0316d2dcfbd60b801338d0de0823f1cc27568fff5f957e2f876147640e6853a +size 949098 diff --git a/data/2312.10908.png b/data/2312.10908.png new file mode 100644 index 0000000000000000000000000000000000000000..c4fe5aa5c2d7203adc33bb9c77cffa8558436927 --- /dev/null +++ b/data/2312.10908.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:44198ddc7db4caa1410bf6cb94d33e3f042a6cfa5f3f8a6fe645642f60c8fe78 +size 1064016 diff --git a/data/2312.10998v1.png b/data/2312.10998v1.png new file mode 100644 index 
0000000000000000000000000000000000000000..a45cea97f6ad257538ddf6557fb874b58a6da0ca --- /dev/null +++ b/data/2312.10998v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:607b7ec81e3881e3a8d27346761caeaa299dfbdbe47c16f5384760bb8ffc6050 +size 767398 diff --git a/data/2312.11269.png b/data/2312.11269.png new file mode 100644 index 0000000000000000000000000000000000000000..259e33264ad1208abf16f0180cf529425b73d9b9 --- /dev/null +++ b/data/2312.11269.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4b9863bb569e250d0cb76050e164f161adfa29b9655ed4d6fe3816813e010b34 +size 859195 diff --git a/data/2312.11360v1.png b/data/2312.11360v1.png new file mode 100644 index 0000000000000000000000000000000000000000..1bb196b37e8933132e6db2250698c971ed438b6d --- /dev/null +++ b/data/2312.11360v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7db5724246130b30c9b88ad4b384f5ffff5cf13171eb1ca9de93f98ef46acbe +size 1277556 diff --git a/data/2312.11392.png b/data/2312.11392.png new file mode 100644 index 0000000000000000000000000000000000000000..66c3f46cc1679150a4a20f186df09ce2ed8b1db7 --- /dev/null +++ b/data/2312.11392.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:753f7bb58aa7992c4da727e4dfd503667dbd99df35ecfb3b614308630415a435 +size 1560717 diff --git a/data/2312.11461.png b/data/2312.11461.png new file mode 100644 index 0000000000000000000000000000000000000000..51a42c11f8eab4a03841307bdf2618dbb317e0dd --- /dev/null +++ b/data/2312.11461.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c098d682db068ecac88c797d5bf2c3c71197682cd33a4d8a6e224cb37630438 +size 1088532 diff --git a/data/2312.11557.png b/data/2312.11557.png new file mode 100644 index 0000000000000000000000000000000000000000..53872ccb62b52210223442e1f1f61574043b7f1f --- /dev/null +++ b/data/2312.11557.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0034a7d7a1f1807f08f14ac1f7f77f104b9e43e1f477427e599caa91771cceca +size 1038389 diff --git a/data/2312.11598.png b/data/2312.11598.png new file mode 100644 index 0000000000000000000000000000000000000000..05d0edaaf475ab0f02a827700694c97d4fada6df --- /dev/null +++ b/data/2312.11598.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:49db083d0699d85853227d13a59b5a69fd25e85bb10e301063329cd19231707f +size 772342 diff --git a/data/2312.11666.png b/data/2312.11666.png new file mode 100644 index 0000000000000000000000000000000000000000..204e3117d77425d7b8e2b89f790c3f2448157ae2 --- /dev/null +++ b/data/2312.11666.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9ec97726c8338c3e32c50367b69d9d2198cf833f375d9809d380f7ef69209186 +size 1501465 diff --git a/data/2312.11782.png b/data/2312.11782.png new file mode 100644 index 0000000000000000000000000000000000000000..484f9468dbd15b70572e4175a3c1d4d90360c787 --- /dev/null +++ b/data/2312.11782.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:580d4196b56eb47e3c6d7873a453f22e0daf4e89ef5f2e799fcf65403e5a0cde +size 947809 diff --git a/data/2312.11894.png b/data/2312.11894.png new file mode 100644 index 0000000000000000000000000000000000000000..851e2e1559473954fc7fcccce83dc2a6d7f055ce --- /dev/null +++ b/data/2312.11894.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:04ae4b245278111adc425cfc19e68bd5972d942ee44f576d40866576bd2fba1a +size 815424 diff --git a/data/2312.11911.png b/data/2312.11911.png new file mode 
100644 index 0000000000000000000000000000000000000000..bc3be3e74408a05fe47e489e73e384e56ca3ded2 --- /dev/null +++ b/data/2312.11911.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d1fc1f99ad78a145f8370856bd3ae50a9b5338425906010d5923f4b9c7e9d84 +size 1285762 diff --git a/data/2312.11972.png b/data/2312.11972.png new file mode 100644 index 0000000000000000000000000000000000000000..18208795d01610171fb22a030adbea7cb6ad00ca --- /dev/null +++ b/data/2312.11972.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dbd1db9ec038af45e1e831ebe33b2c38acd47ad2e07d6d09cd0d2d476c3e4f04 +size 813050 diff --git a/data/2312.11994v1.png b/data/2312.11994v1.png new file mode 100644 index 0000000000000000000000000000000000000000..da148723b61764948759df0d28e0920ea1dcbabf --- /dev/null +++ b/data/2312.11994v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f402f018a3a62b1e42b3eeb186c16a48bf673678dc5ecd5e57052d9f666134d4 +size 787182 diff --git a/data/2312.12142.png b/data/2312.12142.png new file mode 100644 index 0000000000000000000000000000000000000000..eb9b09fb2c5007562ac26316dc9e8ca39f0d8c1e --- /dev/null +++ b/data/2312.12142.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0fc892f2123c16e03c00a021ba2d55f19fc3bd6bf21130783da53fdf5b6a1eaa +size 984874 diff --git a/data/2312.12198.png b/data/2312.12198.png new file mode 100644 index 0000000000000000000000000000000000000000..aff3368dc22ce8deb4f16cf883aae72c4b3592e2 --- /dev/null +++ b/data/2312.12198.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a2d886069f1e3117650cdd80eac9686ac1d4b4f130c2dc449be6b7740468a1b1 +size 980628 diff --git a/data/2312.12274.png b/data/2312.12274.png new file mode 100644 index 0000000000000000000000000000000000000000..cdeaa1104233d2bb3d0da358011abef842fde104 --- /dev/null +++ b/data/2312.12274.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a8b8348de1eb05753ee9c47b1ab7f4303f34d28663af0a32939faee4e93efcbb +size 1277982 diff --git a/data/2312.12337.png b/data/2312.12337.png new file mode 100644 index 0000000000000000000000000000000000000000..01d48f660505bfe50a3c793f398201ca94da4be5 --- /dev/null +++ b/data/2312.12337.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6ff1c91052ee770d0ff3b816dd75792679047ceeb1e8d6899d18cb42f3d252b2 +size 958907 diff --git a/data/2312.12416.png b/data/2312.12416.png new file mode 100644 index 0000000000000000000000000000000000000000..2a6c7796e95ab2c89854b3f34dc6ffcff9a50dd4 --- /dev/null +++ b/data/2312.12416.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b535cc0bdd98ffc38a2368e0c8ac1eaba9678b867584dd490c8e46659f5dafed +size 969284 diff --git a/data/2312.12418v1.png b/data/2312.12418v1.png new file mode 100644 index 0000000000000000000000000000000000000000..5e3245ff0926a96fd359158fb33b22b5e2e35348 --- /dev/null +++ b/data/2312.12418v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:72bbcef1c39d88de141b3a519b7b8f19857573b02acf76c89abd68833a22b3a3 +size 1125395 diff --git a/data/2312.12423.png b/data/2312.12423.png new file mode 100644 index 0000000000000000000000000000000000000000..4129520b6ad299bf16664a8af24ccd9831b6b329 --- /dev/null +++ b/data/2312.12423.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1ad7b928ae6e7f7dfa09b025345752b8673b7ed950d2c74b05f8f49e876ee9fa +size 796073 diff --git a/data/2312.12463.png b/data/2312.12463.png new 
file mode 100644 index 0000000000000000000000000000000000000000..dc55983d6dde876351368089df5a797d275260d6 --- /dev/null +++ b/data/2312.12463.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a1a17a014e15b73339bbc44114d6df518836f8b3615dce4b4c6806a5328627f +size 774814 diff --git a/data/2312.12468.png b/data/2312.12468.png new file mode 100644 index 0000000000000000000000000000000000000000..01b67af7fa5f4f515455eeaa9a3690270b54f627 --- /dev/null +++ b/data/2312.12468.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9b7ecd9b9f1fd534ab181027dbe38efc68a3915a45f8cefef4b11febffcd7469 +size 789984 diff --git a/data/2312.12470.png b/data/2312.12470.png new file mode 100644 index 0000000000000000000000000000000000000000..0229015efc992aa9fc3c693b480344fd1742db94 --- /dev/null +++ b/data/2312.12470.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2ac40bea06d7725b1a939d1d2f5ebef856cb15b18ef3f40de2df4a1a7c81a28f +size 1035867 diff --git a/data/2312.12471.png b/data/2312.12471.png new file mode 100644 index 0000000000000000000000000000000000000000..ce53af90c079c35eaf9504a2cbca0701f3e820f4 --- /dev/null +++ b/data/2312.12471.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:275c455b35c0c8748e3a20c187932b7b1708ecd005a80f81bac00b01c88406ed +size 1492185 diff --git a/data/2312.12478.png b/data/2312.12478.png new file mode 100644 index 0000000000000000000000000000000000000000..d34682bb9aa41c378620da796d6d699f130fde8e --- /dev/null +++ b/data/2312.12478.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:756efda3fb21a4477ec0c96205718224aa70b33b153fe7d5e78b3f3709b833c8 +size 800038 diff --git a/data/2312.12480.png b/data/2312.12480.png new file mode 100644 index 0000000000000000000000000000000000000000..57b3ead8e4890b9ad64c600247f698f7dbde66e3 --- /dev/null +++ b/data/2312.12480.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:11d9f6d3fc9fce88fafe81c2e75dcc5d4880ee3b7bb143a407929899ceefbe88 +size 876003 diff --git a/data/2312.12490.png b/data/2312.12490.png new file mode 100644 index 0000000000000000000000000000000000000000..4be88e3b0b68e7dc51b3ecb9ee635d8215b400d6 --- /dev/null +++ b/data/2312.12490.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e2cb1c9a231294e6209356b92b3dc3244f9d150f77b6ef6328a72228955dbc8 +size 806481 diff --git a/data/2312.12722.png b/data/2312.12722.png new file mode 100644 index 0000000000000000000000000000000000000000..f58d333d88e8ec16e4865ce164c7755ebc84db28 --- /dev/null +++ b/data/2312.12722.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:932065263c384800a8bad2d6ef2bce63acd1743a039622cf81c981eaf055ace5 +size 780673 diff --git a/data/2312.12730.png b/data/2312.12730.png new file mode 100644 index 0000000000000000000000000000000000000000..ec7555ebffa9664adb95d5fa92112c7b74a92bb0 --- /dev/null +++ b/data/2312.12730.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e59ae5b0af3049e172959cccd99d173d52d222b53dafb6f74cc0e627348dc5d2 +size 873706 diff --git a/data/2312.12743.png b/data/2312.12743.png new file mode 100644 index 0000000000000000000000000000000000000000..663cb770df1a8233ff20a159fbd7bf8f8c054d7d --- /dev/null +++ b/data/2312.12743.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f7b69bab8e87f8ef9eebc5fb9ececd7cbc79f120cc9c88780fd43b15c1041e9c +size 569121 diff --git a/data/2312.12870.png b/data/2312.12870.png new 
file mode 100644 index 0000000000000000000000000000000000000000..938a8d35641a23e7b845c874d0ae11c49131bb59 --- /dev/null +++ b/data/2312.12870.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6503ce30bfba3cb85a852cb73310ae9c4a237c8ce7c1d01bc6f5bfb76efed8b0 +size 913621 diff --git a/data/2312.13016.png b/data/2312.13016.png new file mode 100644 index 0000000000000000000000000000000000000000..3f0e26e1b2b3d62b97bb54fd1fbaef278fd18a33 --- /dev/null +++ b/data/2312.13016.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0de0d3f58b872f09bb4ba67b3fc17f6f20fdb8e894bba088c7723b6cd2fd9268 +size 1355985 diff --git a/data/2312.13066.png b/data/2312.13066.png new file mode 100644 index 0000000000000000000000000000000000000000..0db0f01ce56771d63a27642dc30edd407696001c --- /dev/null +++ b/data/2312.13066.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0573dba6a187a417fe0fb0e45c98bd490e8c0f7617ddbeff10197df1c8ca9ed7 +size 880934 diff --git a/data/2312.13091v2.png b/data/2312.13091v2.png new file mode 100644 index 0000000000000000000000000000000000000000..c802e7d254f889e07022aaf50e71d88024c9bc9e --- /dev/null +++ b/data/2312.13091v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bda7d0a702645d49e21f9a1747b70a5f6c142321ef92baa33096692233c46729 +size 1093983 diff --git a/data/2312.13102.png b/data/2312.13102.png new file mode 100644 index 0000000000000000000000000000000000000000..c1a4c6879a8c66e445578c5a2035d78e6c095c89 --- /dev/null +++ b/data/2312.13102.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a78b374bf4f866506eddf82c485ecd95bf38325efe1e70aa76f88f112c219306 +size 1002429 diff --git a/data/2312.13108.png b/data/2312.13108.png new file mode 100644 index 0000000000000000000000000000000000000000..b38a4c40712ad1d97c479aa51dab5c6fc98e8611 --- /dev/null +++ b/data/2312.13108.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:16999d0ad918854d139f9197835f1c397d669fd211ff2a8457b99096e2dd7176 +size 645580 diff --git a/data/2312.13150.png b/data/2312.13150.png new file mode 100644 index 0000000000000000000000000000000000000000..cbb8e02c289779e2cae4e71366f105fa7e227f6f --- /dev/null +++ b/data/2312.13150.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:05aa3ef98eff25228fe2b1855194dd3f42e925ad25851042b4181c9f67c567b7 +size 856402 diff --git a/data/2312.13216.png b/data/2312.13216.png new file mode 100644 index 0000000000000000000000000000000000000000..64bcb82be580df5a2095732b6236c61b7386909b --- /dev/null +++ b/data/2312.13216.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dbbd8e7dc715365c307d90f2bf5fd606478ba803e0fd151261b9de79ee4b3e16 +size 877196 diff --git a/data/2312.13286.png b/data/2312.13286.png new file mode 100644 index 0000000000000000000000000000000000000000..307338c489847284a93180e236728b9912a76fa4 --- /dev/null +++ b/data/2312.13286.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c3968f85b78f07bc64972a4f609ae83e7326c048ba7670bcdd96b6c3f5e0b143 +size 767451 diff --git a/data/2312.13313.png b/data/2312.13313.png new file mode 100644 index 0000000000000000000000000000000000000000..74d89bc9e620c1dcee648280bdb0c28f668451e2 --- /dev/null +++ b/data/2312.13313.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:52eb3b865532fcd0aa9a45772c4244943181ba7a856f2e172824701687d12859 +size 811914 diff --git a/data/2312.13314.png b/data/2312.13314.png 
new file mode 100644 index 0000000000000000000000000000000000000000..9874f493cfb89b2d0652afb83dba47f7de20a829 --- /dev/null +++ b/data/2312.13314.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a5d3c6d1f3dc44f2626691c4d918f004f6790d79a5093ee371d070f20f863f32 +size 782660 diff --git a/data/2312.13319.png b/data/2312.13319.png new file mode 100644 index 0000000000000000000000000000000000000000..0230ce1d4dadc18290ec88e38deddb7942e8dfc1 --- /dev/null +++ b/data/2312.13319.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ee1963e4c4db6ac2897c30f9d0ac55078ed28d699ee52ea145c4f994e345d63e +size 877508 diff --git a/data/2312.13328.png b/data/2312.13328.png new file mode 100644 index 0000000000000000000000000000000000000000..c4da4d988cf086fbc36328b0751c8f0d174d37c3 --- /dev/null +++ b/data/2312.13328.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:71cc4c9aedb99586769de82ac7655ccd1d20bfd659944f84afaab14ca3b796cd +size 962769 diff --git a/data/2312.13396v1.png b/data/2312.13396v1.png new file mode 100644 index 0000000000000000000000000000000000000000..c748837ef322d83a256d5b401a9ea68b72c0f632 --- /dev/null +++ b/data/2312.13396v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:70c8fd9f48a6e0ff0ce2f128deb5631c22bc2c5c61ad6537ba05e0a31f434d39 +size 779031 diff --git a/data/2312.13500.png b/data/2312.13500.png new file mode 100644 index 0000000000000000000000000000000000000000..b5a85678a1d7f136af4f03233d6a0a2aaff944ac --- /dev/null +++ b/data/2312.13500.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:434001b39b83f8097259a04f32b046755e21eedb2b17b546800aca4379004477 +size 645114 diff --git a/data/2312.13746v1.png b/data/2312.13746v1.png new file mode 100644 index 0000000000000000000000000000000000000000..f922dee145ffed6f2fcb9b81e3b1d0983c499163 --- /dev/null +++ b/data/2312.13746v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83e24ee5eda1ecd3f01a26075e81ba78f8a995736255e5e70c3ca374ee80efe9 +size 1072151 diff --git a/data/2312.13763.png b/data/2312.13763.png new file mode 100644 index 0000000000000000000000000000000000000000..308bb17f62c20c0b25da38273228f7942837c775 --- /dev/null +++ b/data/2312.13763.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fad5800cc007a9fce179dce3432f576ba06c7f97657e2bf5dae1dd6b82fe4804 +size 1370041 diff --git a/data/2312.13834.png b/data/2312.13834.png new file mode 100644 index 0000000000000000000000000000000000000000..548bafc7edf42cab0c534dc3f6c7801f9e4f54c9 --- /dev/null +++ b/data/2312.13834.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ffb9fb56b83896f266cf2b14fdc0dbafde6b16a10881d052e8665148bb707fa7 +size 1652588 diff --git a/data/2312.13913.png b/data/2312.13913.png new file mode 100644 index 0000000000000000000000000000000000000000..11d0f768b47042dd44b0856507e09a6f908c2f52 --- /dev/null +++ b/data/2312.13913.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2603beef5f927b254a2aaaeb1d78a6cafbe7b3d0132bb70b990b4d4c14be111d +size 1472909 diff --git a/data/2312.13964.png b/data/2312.13964.png new file mode 100644 index 0000000000000000000000000000000000000000..cae06720588bd59f2c7a8b79a381047da7e69cbe --- /dev/null +++ b/data/2312.13964.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f039c241c2f9a37f552b285b433ee9841aee39368cefd8f12e10197ecfd5bf0 +size 1410995 diff --git a/data/2312.13980v1.png 
b/data/2312.13980v1.png new file mode 100644 index 0000000000000000000000000000000000000000..91305f7d34ecd385de467c9898a77e7851d4eebd --- /dev/null +++ b/data/2312.13980v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6284422f46e2538f91e5e87ab83245987e833286661f9975765ac299f5452b50 +size 794519 diff --git a/data/2312.14124.png b/data/2312.14124.png new file mode 100644 index 0000000000000000000000000000000000000000..b0a22ef0cbbefaf7fc2e33452375a883dfdafeac --- /dev/null +++ b/data/2312.14124.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:36cdef68f0e81801da1d07bcb5e52175d551b8a35a7756c9de9abb79fafeb6a4 +size 835317 diff --git a/data/2312.14132v1.png b/data/2312.14132v1.png new file mode 100644 index 0000000000000000000000000000000000000000..9e89b056437af4f4777491cd141eb0e58d1e9127 --- /dev/null +++ b/data/2312.14132v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f06ac484c5200e3be4fa1979326d3f3b28967ac99c651118cdf35359c212fbb +size 1319140 diff --git a/data/2312.14135.png b/data/2312.14135.png new file mode 100644 index 0000000000000000000000000000000000000000..ed188025210260e082c50f1dce67a50b3435168f --- /dev/null +++ b/data/2312.14135.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8734cdd83ab96a131b0b50f15f5fbed2339a36874e4be336177d6ee6e57ef2e1 +size 857628 diff --git a/data/2312.14198.png b/data/2312.14198.png new file mode 100644 index 0000000000000000000000000000000000000000..ce9781ec432f54bc1b50c8ef831f794bfae7b371 --- /dev/null +++ b/data/2312.14198.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:21be7c9ea989bd4cebed27a0c81bd0916eca6f4817319aeb8956ff66f71a409d +size 716322 diff --git a/data/2312.14233.png b/data/2312.14233.png new file mode 100644 index 0000000000000000000000000000000000000000..93494f599ee180ec150592530d299b8b1df2d958 --- /dev/null +++ b/data/2312.14233.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d888f286d825d37a14c53ad0f95e1ec6ce9a6a2fd3ed70409980bf1dd74feba5 +size 1047351 diff --git a/data/2312.14235.png b/data/2312.14235.png new file mode 100644 index 0000000000000000000000000000000000000000..6ba4dc08dfe92a8c50425f94a5e274a4338d56f7 --- /dev/null +++ b/data/2312.14235.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e574257e3d45d37e608d164374d45f7cf67b549e3f7f3018679d9257e530a13c +size 1107186 diff --git a/data/2312.14238.png b/data/2312.14238.png new file mode 100644 index 0000000000000000000000000000000000000000..468c53262763d24e51d560b0c2c9669b3c347d41 --- /dev/null +++ b/data/2312.14238.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad7f69198269dc4bfaef91491ca5af9f7adfb0b2ef16f8ae6e0727f8e1a362bc +size 702870 diff --git a/data/2312.14239.png b/data/2312.14239.png new file mode 100644 index 0000000000000000000000000000000000000000..cff2d2181ba9996fa21c98d9025e100bb3ab8d63 --- /dev/null +++ b/data/2312.14239.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f5b544e1d520cc3b1b87f9a127e228c3d9c81926dfec6a397abd82a94bf4e7de +size 860772 diff --git a/data/2312.14440.png b/data/2312.14440.png new file mode 100644 index 0000000000000000000000000000000000000000..446824b9236223d9ea540e5c1bbf8b3e40d7e4bd --- /dev/null +++ b/data/2312.14440.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7461d7b38a2ba26621779f266d5c97bc920dc351506fb7eb1aaa12b54db6a12f +size 832008 diff --git 
a/data/2312.14494.png b/data/2312.14494.png new file mode 100644 index 0000000000000000000000000000000000000000..b7cd82f06a2c7ff6a14a10f4ba7718231184537d --- /dev/null +++ b/data/2312.14494.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8a415fe156f646fc573c0b66af7aa190f6f233a871a84d83ae8ba53d381744f8 +size 803791 diff --git a/data/2312.14667v1.png b/data/2312.14667v1.png new file mode 100644 index 0000000000000000000000000000000000000000..5fc9102e33bca69d427930dda90fafc5d6093089 --- /dev/null +++ b/data/2312.14667v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:de46879ee6d446527b38f0db94ee880d980d2721c0b0d3297dc2adb68789e3dc +size 906346 diff --git a/data/2312.14937.png b/data/2312.14937.png new file mode 100644 index 0000000000000000000000000000000000000000..3433634711c8325ab26939dd0c8509bfd80cf33c --- /dev/null +++ b/data/2312.14937.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:46e13c8e4999e066c1e67ccfbd74aff3215211ac4f47154156c6c95a656996eb +size 1052713 diff --git a/data/2312.14985.png b/data/2312.14985.png new file mode 100644 index 0000000000000000000000000000000000000000..4d33833b01085c4af5b5789645cd3a674aebfe80 --- /dev/null +++ b/data/2312.14985.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:99cb84de14f129cd0fb4cc3aff11163bc49963a68005d028f3341d1e3a7890f1 +size 1244096 diff --git a/data/2312.14988v1.png b/data/2312.14988v1.png new file mode 100644 index 0000000000000000000000000000000000000000..fd3bb597a1711151f6a4ebda3dca7d2375a832ed --- /dev/null +++ b/data/2312.14988v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d115f0d162d7f3bd0c58c433c41025ddec5c88744d7d5561e10d269f64487307 +size 673030 diff --git a/data/2312.15010.png b/data/2312.15010.png new file mode 100644 index 0000000000000000000000000000000000000000..ab2faf759d91b93b3d89339fb207515476d1e210 --- /dev/null +++ b/data/2312.15010.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3fb31c3541ea27a34989134ebc1b7acf1447d688ba7368b6273b2eaf998d1a1f +size 943937 diff --git a/data/2312.15297.png b/data/2312.15297.png new file mode 100644 index 0000000000000000000000000000000000000000..8a0a9cc75ac75a9b73a1b76e438b2390e153175c --- /dev/null +++ b/data/2312.15297.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:95b3376bd21f8312d3188d0af4f5d5eeb74f2afb1986753048fcc11e58b77910 +size 697743 diff --git a/data/2312.15406.png b/data/2312.15406.png new file mode 100644 index 0000000000000000000000000000000000000000..d07c9839b5c54e9dc41c28cf1cc301aae202b08a --- /dev/null +++ b/data/2312.15406.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c06af77ae46f0893344ec7f76f45c9f17f72e5635083d0ec9af426ecb844da10 +size 871502 diff --git a/data/2312.15540.png b/data/2312.15540.png new file mode 100644 index 0000000000000000000000000000000000000000..bc4b806b64fe37adc74532867eaf454dbe82686c --- /dev/null +++ b/data/2312.15540.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:881bcc82190f29bc394876bbc5d524cf0452cad304d683172504d985fb686519 +size 1432633 diff --git a/data/2312.15770.png b/data/2312.15770.png new file mode 100644 index 0000000000000000000000000000000000000000..4dd303c891de66dd83a018b460dbee31ebb1e4f7 --- /dev/null +++ b/data/2312.15770.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1ec704c7a12393d605e4701e65ba03b8449da5ac3aad6747a147d7baa1e0f45 +size 1602246 
diff --git a/data/2312.15895.png b/data/2312.15895.png new file mode 100644 index 0000000000000000000000000000000000000000..b86f11ee3bd59074aa926db56f3cdbef1c022344 --- /dev/null +++ b/data/2312.15895.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:71ba129b3bf1adf9eed99a50ad6e12c10fc986127a2847355605220d293e73b4 +size 983185 diff --git a/data/2312.15905.png b/data/2312.15905.png new file mode 100644 index 0000000000000000000000000000000000000000..b29708ae9ed91fd4cca2b49501647fd5af6fb5b1 --- /dev/null +++ b/data/2312.15905.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b4e9ccd030733e82d15b30d3643f7f994c725da6264605b2f9ad283097104761 +size 1551104 diff --git a/data/2312.16051.png b/data/2312.16051.png new file mode 100644 index 0000000000000000000000000000000000000000..0c6a0be1cea005c252edc3106ca23ea2726d710a --- /dev/null +++ b/data/2312.16051.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd8fb93e106db0525a5531b31980faa12099a0a7e1a18a9ac79dff4047e15eab +size 980134 diff --git a/data/2312.16084.png b/data/2312.16084.png new file mode 100644 index 0000000000000000000000000000000000000000..c73568e89d5e186092018c52402bb915dda8b9cd --- /dev/null +++ b/data/2312.16084.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7476b171cb8ff5e44d1eaa31582d25436af27d9257da79def242652e987aeed9 +size 1316949 diff --git a/data/2312.16145.png b/data/2312.16145.png new file mode 100644 index 0000000000000000000000000000000000000000..87932ebc60a1dbc6b63aeda79a1f1656d09f5d18 --- /dev/null +++ b/data/2312.16145.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ae3ebdec418dd83a7b0b899a3d0ce19475b7ed2e085321cdf2566773c70f3a91 +size 1317657 diff --git a/data/2312.16170v1.png b/data/2312.16170v1.png new file mode 100644 index 0000000000000000000000000000000000000000..87ea8e8bfbf4d3329ee0dcef050c77efaf27a419 --- /dev/null +++ b/data/2312.16170v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:09ca52cd453ccd35eab1c6dfa2994073a489b397c66d3d38f95d0cecec7d2490 +size 1344275 diff --git a/data/2312.16217.png b/data/2312.16217.png new file mode 100644 index 0000000000000000000000000000000000000000..39ad4bb312f352511fb064e425890b5b2632bdeb --- /dev/null +++ b/data/2312.16217.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6971887e7028432916b958cd2542d0119de329d1192e6e3284adc1bdc3cf1c56 +size 811283 diff --git a/data/2312.16222.png b/data/2312.16222.png new file mode 100644 index 0000000000000000000000000000000000000000..1ef65cac8997e10b6d4bd2c5f93c841eaa6d9ba2 --- /dev/null +++ b/data/2312.16222.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d69c7bf1509d693f9930b73d4df10288cf17ef340d0883a1e8c63915383ad005 +size 943418 diff --git a/data/2312.16245.png b/data/2312.16245.png new file mode 100644 index 0000000000000000000000000000000000000000..dbade6ce39ccecfd198be0768cf3df5b8c8f2e02 --- /dev/null +++ b/data/2312.16245.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:beced2c1cb1c596ac1232756e70478d83309438a91f8557f364a3a1951084f4a +size 883554 diff --git a/data/2312.16256.png b/data/2312.16256.png new file mode 100644 index 0000000000000000000000000000000000000000..be3a98483eab35785fd74092e3e1bb5206e95e79 --- /dev/null +++ b/data/2312.16256.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:931dccf48372ed07fc953422ed3b0c7fb4830dda25097cf0895daa78890fd02b +size 
1963250 diff --git a/data/2312.16272.png b/data/2312.16272.png new file mode 100644 index 0000000000000000000000000000000000000000..2984f9265e642e8d709b284f521006298c0f8d0f --- /dev/null +++ b/data/2312.16272.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:19886d600118345e142791c56011cf6a17b10d31beb3e73ba3c56e5b372dd41c +size 1572347 diff --git a/data/2312.16279.png b/data/2312.16279.png new file mode 100644 index 0000000000000000000000000000000000000000..e3774bce4b96a62d60f6aae18dea7705c9414700 --- /dev/null +++ b/data/2312.16279.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:374efc7c806eb4b6fa8dec882c742c1e16946860f875ba11f21acd29e1a330ba +size 820599 diff --git a/data/2312.16476.png b/data/2312.16476.png new file mode 100644 index 0000000000000000000000000000000000000000..43c77c7c53c47a36ab09df29320a59918b519f44 --- /dev/null +++ b/data/2312.16476.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2e29e0e0cdbfbee752ac9116c17598fe4f2bfc6387df9a216ce9787b8e8bd778 +size 753213 diff --git a/data/2312.16519.png b/data/2312.16519.png new file mode 100644 index 0000000000000000000000000000000000000000..a1eeb612078e58a9b58ddf6331d430d46c59d600 --- /dev/null +++ b/data/2312.16519.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e7fe581aebc288f6521ca7641896649f5792305ae3971f5a1d8fa4d05fb3020 +size 994438 diff --git a/data/2312.16649.png b/data/2312.16649.png new file mode 100644 index 0000000000000000000000000000000000000000..4230b81470f5b672f01732e0d4d29778fd40bd1f --- /dev/null +++ b/data/2312.16649.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e13b138d4bb2a420152a53ffe7774b227f573d8a950152bd702c36583148e2ec +size 798601 diff --git a/data/2312.16794.png b/data/2312.16794.png new file mode 100644 index 0000000000000000000000000000000000000000..99e67544104b7c843b9a28bd105f3f9d3edb7f63 --- /dev/null +++ b/data/2312.16794.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9a580586338eb034ca7bcdd650777f4b2579c719cf5b77a6a7fce2ebdebe469a +size 1286682 diff --git a/data/2312.16812.png b/data/2312.16812.png new file mode 100644 index 0000000000000000000000000000000000000000..8a31df4058a75ff57d58feff3475ff379954e228 --- /dev/null +++ b/data/2312.16812.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd4fb86b5c29e29c5b43a624ebef5a596ebb8fa042d06e3b5f73da6a26bbae61 +size 1043196 diff --git a/data/2312.16837.png b/data/2312.16837.png new file mode 100644 index 0000000000000000000000000000000000000000..ab5b2d311440d4cef94ddb9e9cdcc6dcaf10a015 --- /dev/null +++ b/data/2312.16837.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9398def6b93b5186d2c4eedcf488a886c368906ee61e02341d64b523851813fe +size 1364884 diff --git a/data/2312.16933v1.png b/data/2312.16933v1.png new file mode 100644 index 0000000000000000000000000000000000000000..3b4a3bd3f30feb13669db135550bdf09d93db7a2 --- /dev/null +++ b/data/2312.16933v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:551f1ce9e29a87ac48b974f8d7de411a39d2e949cda1d8c3a62558ede74a8013 +size 556040 diff --git a/data/2312.16943.png b/data/2312.16943.png new file mode 100644 index 0000000000000000000000000000000000000000..f7722a2ef372275b750cad9ee3cd897a3a004396 --- /dev/null +++ b/data/2312.16943.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bbcd9ad8b4c4cb83f05be74c640fbfaedbfd39c945939d7c5ac37aaa867a9aaa 
+size 78674 diff --git a/data/2312.17133.png b/data/2312.17133.png new file mode 100644 index 0000000000000000000000000000000000000000..e4a3ad26b2e64b63b1bcad584c1eeed23ce447a9 --- /dev/null +++ b/data/2312.17133.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ea1c3a28580ae7fe1181fa52472878be1b4043b3cf41767e7d466782adc54923 +size 712686 diff --git a/data/2312.17161.png b/data/2312.17161.png new file mode 100644 index 0000000000000000000000000000000000000000..66ae1b771c67b58ef46aba890270d1e757b4bf0d --- /dev/null +++ b/data/2312.17161.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a61493844b88f50cc5eb62dec1b495f857c958d56da2154c663ed941f5135026 +size 1558862 diff --git a/data/2312.17172.png b/data/2312.17172.png new file mode 100644 index 0000000000000000000000000000000000000000..5e08e4be1f8e7406c3c09389ec9f8a7a1079dc24 --- /dev/null +++ b/data/2312.17172.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd9e3c4492ed4a7618349e0606230b8b98ebcd6e82a1318bff5eb04edb2604ec +size 1396882 diff --git a/data/2312.17205.png b/data/2312.17205.png new file mode 100644 index 0000000000000000000000000000000000000000..a449b38bb87d21eadc5fa30fecdb884ec60b1e72 --- /dev/null +++ b/data/2312.17205.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9bc5abd3941f338eb12bad208b5cd3b9209fa3c0ccecb50898b7199a5580c48e +size 895849 diff --git a/data/2312.17243.png b/data/2312.17243.png new file mode 100644 index 0000000000000000000000000000000000000000..10d7b23fafcca8c2e6f54c9f10263be6e1241196 --- /dev/null +++ b/data/2312.17243.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f124cacf752f5575f8b6abd531bc688dcf531d63e919c072b1dec41e4bda734d +size 1175138 diff --git a/data/2312.17247.png b/data/2312.17247.png new file mode 100644 index 0000000000000000000000000000000000000000..e804bacaf854615fb68a19f6393a52175048eaee --- /dev/null +++ b/data/2312.17247.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e47e07391a05454a3cc38d42b1fceec0031ad28db7700b2d2ef4c022a41895ab +size 1318816 diff --git a/data/2312.17269.png b/data/2312.17269.png new file mode 100644 index 0000000000000000000000000000000000000000..e52f23e7ded6c772d5e5f7807cc36bbc89160483 --- /dev/null +++ b/data/2312.17269.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:045a642bc91222ec148ad0247fd647f189838a1591ace86ca166e75fe0c733ea +size 774060 diff --git a/data/2312.17334.png b/data/2312.17334.png new file mode 100644 index 0000000000000000000000000000000000000000..162dd31793237b34609acda60dde719e4f20838a --- /dev/null +++ b/data/2312.17334.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cdfc5c64bcfa9a7d8cc07ca08514fc147b9fef2916c71874b0e2cd00c6d3480b +size 784816 diff --git a/data/2312.17648.png b/data/2312.17648.png new file mode 100644 index 0000000000000000000000000000000000000000..808b80c45378ba4b817474135cc866d66dd18f94 --- /dev/null +++ b/data/2312.17648.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:25963edb08b972303432ca2eb9f337383853d36c39bbe831e476c11adca210b6 +size 428225 diff --git a/data/2312.17655.png b/data/2312.17655.png new file mode 100644 index 0000000000000000000000000000000000000000..1490dc24cfc734174be14936e40733916dc5a682 --- /dev/null +++ b/data/2312.17655.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0599d4fe522627516566b442d99b6819df3b5512d7b3a5e1015fdaea3fd934da 
+size 769519 diff --git a/data/2312.17681.png b/data/2312.17681.png new file mode 100644 index 0000000000000000000000000000000000000000..05fe97687214bb0c2b63eae1711b9c068d4f9585 --- /dev/null +++ b/data/2312.17681.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a585a5f4703f7c002ad1f4ff0f8eb175402399457cae7829cf33c6bd6b74bda +size 1439822 diff --git a/data/2312.17686.png b/data/2312.17686.png new file mode 100644 index 0000000000000000000000000000000000000000..621a91b2a62361f81c3b7d6a42dfdd79c60728c0 --- /dev/null +++ b/data/2312.17686.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:09d5fa27d50a74ef1fe578fde77c94d0e5c6c7618ce4ec479a9dce6d98e1e637 +size 795549 diff --git a/data/2312.17742.png b/data/2312.17742.png new file mode 100644 index 0000000000000000000000000000000000000000..0d07a1316cf036732000f5509ca15c235e6866fb --- /dev/null +++ b/data/2312.17742.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:85e7dd53fe791c679a7fa169931677d5465b62e513d9c7b350f3bce982f1138b +size 715185 diff --git a/data/2401.00027.png b/data/2401.00027.png new file mode 100644 index 0000000000000000000000000000000000000000..48e8104aaeb1fda759aade776e32ad83bd301483 --- /dev/null +++ b/data/2401.00027.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d2752f24e87f474c2ba018b0167e45a7563ca3635bfaac0d98e8351883c2998d +size 728159 diff --git a/data/2401.00028.png b/data/2401.00028.png new file mode 100644 index 0000000000000000000000000000000000000000..5d98a4add0014c69b8040e578807dba609ea53d0 --- /dev/null +++ b/data/2401.00028.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4878969fbac81f0cf332f020ab35778676235acb20f472691b881661e8771709 +size 663638 diff --git a/data/2401.00029.png b/data/2401.00029.png new file mode 100644 index 0000000000000000000000000000000000000000..98033d7e707fddd8b9714a59f92ffb386044d20d --- /dev/null +++ b/data/2401.00029.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fa4253132bc2fbf4120df4eaf5692867f6e5e47cb01b1d4ae42bd4fd3d7de726 +size 779355 diff --git a/data/2401.00094.png b/data/2401.00094.png new file mode 100644 index 0000000000000000000000000000000000000000..6e6b937afef04d07827f40ec92b47b01c939ed1d --- /dev/null +++ b/data/2401.00094.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cc5b01cbea75c54b5c2802a7e7a60ef7f4fdeb9ecd92cbf8d9a3ce8ea3c08c71 +size 887363 diff --git a/data/2401.00374.png b/data/2401.00374.png new file mode 100644 index 0000000000000000000000000000000000000000..ad8ac9e1cabf77c651712e0da2ad711b4dbca7e6 --- /dev/null +++ b/data/2401.00374.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:49739205f4b536aa7d0bc1984303c00d634bdab8a2a97d2025daea662e6637d4 +size 841845 diff --git a/data/2401.00789.png b/data/2401.00789.png new file mode 100644 index 0000000000000000000000000000000000000000..c2d726afabdffd557179405e728e7f0bd1e9a2ee --- /dev/null +++ b/data/2401.00789.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2f7c6c6fb66a12a6a7dbd38350b408485531c1ee88dcae8fe1f09161c0eae1fa +size 890721 diff --git a/data/2401.00847.png b/data/2401.00847.png new file mode 100644 index 0000000000000000000000000000000000000000..1f612fffd4a69df14aa868b20e2b73709279b9e2 --- /dev/null +++ b/data/2401.00847.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6fcdf220892f07eef88b627d2bd83017618ad55f96e69b50d1d5ff8687ef83d5 
+size 1022849 diff --git a/data/2401.00889.png b/data/2401.00889.png new file mode 100644 index 0000000000000000000000000000000000000000..80bc56381904e3235c168927517821bcd0c4a4ed --- /dev/null +++ b/data/2401.00889.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:340e0cd7354bc3a5546608ca12a0fd90a54f5bdc37254b12fa01dfe57dc22bd7 +size 1175037 diff --git a/data/2401.00901.png b/data/2401.00901.png new file mode 100644 index 0000000000000000000000000000000000000000..270d4e5faec3799469fe2254c699d5296518c03d --- /dev/null +++ b/data/2401.00901.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eaf4b7f6f93dca0389cac5de371188d5ccef539bd12219200ebf6346aac641f4 +size 728882 diff --git a/data/2401.00909.png b/data/2401.00909.png new file mode 100644 index 0000000000000000000000000000000000000000..5f0a1561610a6bd9e775c9c8ad18373ee33d4361 --- /dev/null +++ b/data/2401.00909.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b5f43da77054d82301f0c8a6639f5b390077a40012bdeda8ca586513fa966cc2 +size 812827 diff --git a/data/2401.00929.png b/data/2401.00929.png new file mode 100644 index 0000000000000000000000000000000000000000..d4a457a55703e3d932e4154e2c26c8a6909ddeec --- /dev/null +++ b/data/2401.00929.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:01273c5ff0b937af25c0b5d056298007c9838d76907058465169507e98554a24 +size 933182 diff --git a/data/2401.00979.png b/data/2401.00979.png new file mode 100644 index 0000000000000000000000000000000000000000..8c2946d68d462151aa5549acfcba00dd0bdf31e6 --- /dev/null +++ b/data/2401.00979.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8b18d71970687bf471e792b645aa76f31b5e4a2a695c02cac30c70c297aabccb +size 857181 diff --git a/data/2401.00988v1.png b/data/2401.00988v1.png new file mode 100644 index 0000000000000000000000000000000000000000..3c244296850cb25f24af33116c561cca085b4256 --- /dev/null +++ b/data/2401.00988v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f490dacdf975b8ecdb0bda0dd78bd9a6a176dce76afe4c29c6dfa1535930e05f +size 1010035 diff --git a/data/2401.01042.png b/data/2401.01042.png new file mode 100644 index 0000000000000000000000000000000000000000..6c1f1ac9d04c82da22fd0c4d73cd2c708b53fa7c --- /dev/null +++ b/data/2401.01042.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1c542510d5e2aa300b0cab79acd8532655d11644ecb2d5dc2c645c3ef9aea2d2 +size 429866 diff --git a/data/2401.01173.png b/data/2401.01173.png new file mode 100644 index 0000000000000000000000000000000000000000..3c30715b4a6458f1c4bec0d7a8d9939b340ad146 --- /dev/null +++ b/data/2401.01173.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:522e5166d569e6adadb21e96664fdaac39107a7a55af30fdf7c7f7036fd93b07 +size 813493 diff --git a/data/2401.01207.png b/data/2401.01207.png new file mode 100644 index 0000000000000000000000000000000000000000..e164b60fdbc8e444fd702b251570dee0c1cb0b34 --- /dev/null +++ b/data/2401.01207.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1d7b27ae2c1ec683f6a4d241d7526eb22a3caa2662a8c8af42315675bd6ff281 +size 990178 diff --git a/data/2401.01448.png b/data/2401.01448.png new file mode 100644 index 0000000000000000000000000000000000000000..85f916d9e74e0228aeb0e618ae0385d383792fc2 --- /dev/null +++ b/data/2401.01448.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:78c4549584e7d2e1acb850913fa208686057f44e89dde6f492f59e66d025db5a +size 920149 diff --git a/data/2401.01482.png b/data/2401.01482.png new file mode 100644 index 0000000000000000000000000000000000000000..c956fc169b1a5b563e616216cd300d7bfcfab27c --- /dev/null +++ b/data/2401.01482.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a64eda71cff7d72ec67094d26ee73f8342e66535ca5584b00b09524379befa5 +size 874609 diff --git a/data/2401.01543.png b/data/2401.01543.png new file mode 100644 index 0000000000000000000000000000000000000000..f35f947f178cc2c876b68fc92e2eb289cbf78d7e --- /dev/null +++ b/data/2401.01543.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:71327ae777f0d4099b982174c07d951adc73f3bc06db83149d4963b579304e29 +size 836141 diff --git a/data/2401.01578.png b/data/2401.01578.png new file mode 100644 index 0000000000000000000000000000000000000000..b63e7740e73b8fa1356e4065c715909e440ee883 --- /dev/null +++ b/data/2401.01578.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:31796a77f03e2e4b7a6a9a51448c83017045ccb144162bea8e49cb2c7589bfb9 +size 840350 diff --git a/data/2401.01647.png b/data/2401.01647.png new file mode 100644 index 0000000000000000000000000000000000000000..9b5c54337e5e4e559a29dc880d02a2a1ba5040af --- /dev/null +++ b/data/2401.01647.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5a5d1b5dc10508913da0b2719106bd9a4cadba98eae3d1109c50bf86292fa9dc +size 1486311 diff --git a/data/2401.01702.png b/data/2401.01702.png new file mode 100644 index 0000000000000000000000000000000000000000..18b16dd0a77d3c8117294404c3469483fed4bb32 --- /dev/null +++ b/data/2401.01702.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:59693c75cb14a6a9f9197913686255e53aa1e8a2ea858af192566d1ee7487cfc +size 1476634 diff --git a/data/2401.01823.png b/data/2401.01823.png new file mode 100644 index 0000000000000000000000000000000000000000..ef5a168f1d385e537cf0fc56c27cdf47e7441c6a --- /dev/null +++ b/data/2401.01823.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d307a2910b52e0ba65a6bcdaec5b5bb112357cd15820b4b3e8048e7e68d331ac +size 887477 diff --git a/data/2401.01862.png b/data/2401.01862.png new file mode 100644 index 0000000000000000000000000000000000000000..662754877e28b8625bfdc3e8c8a1e99434c8d158 --- /dev/null +++ b/data/2401.01862.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:64a0f421ecec68fa4c1af18ebe59138f30176155c0acd2fc0c89d64485d7a5b2 +size 804849 diff --git a/data/2401.01885.png b/data/2401.01885.png new file mode 100644 index 0000000000000000000000000000000000000000..5c4e22f80aab9bd087c67f343a0ebb19314810b3 --- /dev/null +++ b/data/2401.01885.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6df2bc28a5b3b801aff0960260eb8fed7a80e5589fac1d2dc94281c399eba8f9 +size 909551 diff --git a/data/2401.01887v1.png b/data/2401.01887v1.png new file mode 100644 index 0000000000000000000000000000000000000000..c3dfc8d961dee4f1a4bfb08137f17dc687050f4b --- /dev/null +++ b/data/2401.01887v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1561dbedb19fb8fc35d4ca8d2bdf734934e315052d3f9120fbd9bbe0688947e6 +size 839922 diff --git a/data/2401.01952.png b/data/2401.01952.png new file mode 100644 index 0000000000000000000000000000000000000000..b9f42ac865a4c34f972a3f7d5882163e20a5662a --- /dev/null +++ b/data/2401.01952.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 
+oid sha256:3c4e1bc750c07da48fa4aa50d9cb47348aa8cd6602be5f8f5b2358aa02d7a157 +size 1039054 diff --git a/data/2401.02317.png b/data/2401.02317.png new file mode 100644 index 0000000000000000000000000000000000000000..689b6bedfb6e51e7f7577de03ce0a9d2844abc41 --- /dev/null +++ b/data/2401.02317.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:89cd109d2458513633aea5ae4d53a5348e09eee59ed4a6cb5e53aa5576080326 +size 593257 diff --git a/data/2401.02400.png b/data/2401.02400.png new file mode 100644 index 0000000000000000000000000000000000000000..9542bd7ec5a1db4ad2e778942b9e2644e1594ea8 --- /dev/null +++ b/data/2401.02400.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:41892e0c91ed72be868cca8b04ec4c892ef4665a45f125274ae489ab170ade9f +size 1036223 diff --git a/data/2401.02411.png b/data/2401.02411.png new file mode 100644 index 0000000000000000000000000000000000000000..3812643999bcac38ce911a87ead03a786bfab8d4 --- /dev/null +++ b/data/2401.02411.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d233525bf511f428fe559a8a69afb4ffe3ec8816cbf32c59c358ddc79d51a790 +size 1640208 diff --git a/data/2401.02416.png b/data/2401.02416.png new file mode 100644 index 0000000000000000000000000000000000000000..6981702ebd570be309ac93307a805ad2f96c679a --- /dev/null +++ b/data/2401.02416.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:adc51f2dbfed4e676bdb69351ab4000542ce73bc3891e208c49b9acb3f28450e +size 835782 diff --git a/data/2401.02436.png b/data/2401.02436.png new file mode 100644 index 0000000000000000000000000000000000000000..bb641bc31d8d3ed4631c2b580c2f5c24879b897e --- /dev/null +++ b/data/2401.02436.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:00a9973d3767353c4a7cd84bfa05ac16f3f7e936a5e67dc06b472aa8dda2213a +size 872146 diff --git a/data/2401.02460.png b/data/2401.02460.png new file mode 100644 index 0000000000000000000000000000000000000000..552961d53e0af3e973a334ec5a105593ac777a57 --- /dev/null +++ b/data/2401.02460.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:04ae828d2af7a171e0fccda0ccd2def34387c96a3f14b04780afda79bf8c373a +size 816566 diff --git a/data/2401.02847.png b/data/2401.02847.png new file mode 100644 index 0000000000000000000000000000000000000000..8db844fe80c62ee8f168f26af914f7f3ab5a2590 --- /dev/null +++ b/data/2401.02847.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2b5b4aaa477c690ebfa5ac0ea0a70970fc73662bc7c7e5b191785922d063c9a2 +size 1357023 diff --git a/data/2401.02937.png b/data/2401.02937.png new file mode 100644 index 0000000000000000000000000000000000000000..0fd18f978a0cb8f48b78512ee29051960d79f2b2 --- /dev/null +++ b/data/2401.02937.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:58dd306fcbef32e5b2387c5252c53ec8e274f067cc335093110882e06189de0e +size 811775 diff --git a/data/2401.03043v1.png b/data/2401.03043v1.png new file mode 100644 index 0000000000000000000000000000000000000000..e54961a478697cfa7cc812b3e426696fe73bc944 --- /dev/null +++ b/data/2401.03043v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f266355469d12cd6fd76a811ad07f7a5d22df330b332f7982b0a68fd35a3b32d +size 903733 diff --git a/data/2401.03411.png b/data/2401.03411.png new file mode 100644 index 0000000000000000000000000000000000000000..9813bc086e43e6cc6f8f5ffbb033cb1bcde61b6c --- /dev/null +++ b/data/2401.03411.png @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:0a85febf9d7cde9e9185d3ad02b1644d3f079b3d4fdce4623d9a94b7ac36ebfb +size 704670 diff --git a/data/2401.03707.png b/data/2401.03707.png new file mode 100644 index 0000000000000000000000000000000000000000..43ceb1cab202aca1f87d4abc2bf01a29eddb2367 --- /dev/null +++ b/data/2401.03707.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b12682b26fb96fb21239b6f48c83322b0546073229e283da25ff861d165a5bd9 +size 1035205 diff --git a/data/2401.03785.png b/data/2401.03785.png new file mode 100644 index 0000000000000000000000000000000000000000..a6d97f44dd0533588352a602382ac80adf41058f --- /dev/null +++ b/data/2401.03785.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6e4c80f621fe30ab64c06dc9dd9e29191a7bc3ef093e7e1ef64db10b830525bb +size 803180 diff --git a/data/2401.03989.png b/data/2401.03989.png new file mode 100644 index 0000000000000000000000000000000000000000..09fe5f47b2c78e58c5883a1ba412a7c5df2ebbf0 --- /dev/null +++ b/data/2401.03989.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e807d9fab9459e3aea172306a4fdaa5a6bdf520a56e3e91a352eda345c59cf54 +size 1503278 diff --git a/data/2401.04071v1.png b/data/2401.04071v1.png new file mode 100644 index 0000000000000000000000000000000000000000..679799e1be6ee41aa9dd3edad3a365b1cf7721bc --- /dev/null +++ b/data/2401.04071v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a780aeeb46804c705c9bb112cbe2775d335d1efcdc0c687ef576e045097c1e1 +size 818792 diff --git a/data/2401.04092.png b/data/2401.04092.png new file mode 100644 index 0000000000000000000000000000000000000000..ecda61dcf4a2d05d1ba16928dc504958f0b0bbdc --- /dev/null +++ b/data/2401.04092.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:757c87f2bc9aa88192945a5cbfaa8284f2e182bf658ff3ba56abafdde7c33e70 +size 1227496 diff --git a/data/2401.04105.png b/data/2401.04105.png new file mode 100644 index 0000000000000000000000000000000000000000..7cd5a2b2b487c65fbb0cf9eb73fba8ab7386bc28 --- /dev/null +++ b/data/2401.04105.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8fbd0708f4014b6d4a586bcb07229d7ba9b31ee044a79450469d7bf53e66677c +size 771878 diff --git a/data/2401.04244.png b/data/2401.04244.png new file mode 100644 index 0000000000000000000000000000000000000000..90833391c5495923525d24935b14ae91ac8fcd77 --- /dev/null +++ b/data/2401.04244.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2306c7d99b4734896e659daf1a3d3f3f7c9d16cac67e9883c028934fe5069cba +size 733234 diff --git a/data/2401.04350v3.png b/data/2401.04350v3.png new file mode 100644 index 0000000000000000000000000000000000000000..f299d587dfc371ce17c2cbcb7a8670a275902d88 --- /dev/null +++ b/data/2401.04350v3.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d846f1951b00a001f71dee4bfeb7e3b5e1de0434fccbcd0b8b6e6f10cb063fbf +size 729404 diff --git a/data/2401.04390.png b/data/2401.04390.png new file mode 100644 index 0000000000000000000000000000000000000000..3c6cb387c2c5ffa9cf39ab0f3df74477940de933 --- /dev/null +++ b/data/2401.04390.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8df750026b0fd7b874341ffadf41ade069dc9dfc76ffe1333f16f951bf33dec7 +size 819428 diff --git a/data/2401.04394.png b/data/2401.04394.png new file mode 100644 index 0000000000000000000000000000000000000000..38b5d9b5e13c60c231b5f188cd4973b2f086b768 --- /dev/null +++ b/data/2401.04394.png @@ -0,0 +1,3 
@@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c247ed6056698818ae029efa657aba0d40ceb25009ca113b4b5d6658c190a254 +size 855937 diff --git a/data/2401.04608.png b/data/2401.04608.png new file mode 100644 index 0000000000000000000000000000000000000000..4d271bfda6d651b1d24c4ed631712c8fe34689c0 --- /dev/null +++ b/data/2401.04608.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c9a04d2c349f07b12f7360d828de6a21d32fb56eed16d1c6af6f6c94207e5928 +size 2022399 diff --git a/data/2401.04716.png b/data/2401.04716.png new file mode 100644 index 0000000000000000000000000000000000000000..1a5e2fe1f6036be1f88550834f1e69cc64f7fd9f --- /dev/null +++ b/data/2401.04716.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:82655e1822d73b1c77e1c9b5e25f39096028451cf73235ed5651e8d158d5d446 +size 938309 diff --git a/data/2401.04727.png b/data/2401.04727.png new file mode 100644 index 0000000000000000000000000000000000000000..7f6e349bd2b45d5420d8a0cc465f45664b332a5e --- /dev/null +++ b/data/2401.04727.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f65b59f4ade6a6c312e3b7a9035da9ddfa92b7e705a0f77fbff49db0a3a88a8d +size 635734 diff --git a/data/2401.04728.png b/data/2401.04728.png new file mode 100644 index 0000000000000000000000000000000000000000..0b6a0e543af49fcac0cdc48135b2f5e8bf881bcd --- /dev/null +++ b/data/2401.04728.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:91ce57643abcfadb478e44f89e7cec855a81244807b616fad072e7b88c1f5fd6 +size 911127 diff --git a/data/2401.04747.png b/data/2401.04747.png new file mode 100644 index 0000000000000000000000000000000000000000..79fcf41d4bc2a88d96a8ef1a6d7c7e1b5605bc96 --- /dev/null +++ b/data/2401.04747.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a91c2610e2c639d1f711fc4694ef4e7f30ec8faf1d91757ad0b6e258fec9f06a +size 880807 diff --git a/data/2401.04921.png b/data/2401.04921.png new file mode 100644 index 0000000000000000000000000000000000000000..8ef9d47f8a4e6932fd2f6ecbc86c60bb479369c0 --- /dev/null +++ b/data/2401.04921.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d45d767b60f539f801f67d99e1bb796714c171fcdcadc6225f7c8ddbc182f97b +size 860652 diff --git a/data/2401.04928.png b/data/2401.04928.png new file mode 100644 index 0000000000000000000000000000000000000000..dcc9497755aa75a1983037673bd9bc0b70e7933f --- /dev/null +++ b/data/2401.04928.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b278d1905e1b798b4290b1b12e8391afb9305a663f9a35f6b300d1c73bf9e164 +size 731266 diff --git a/data/2401.05011.png b/data/2401.05011.png new file mode 100644 index 0000000000000000000000000000000000000000..9493bda0e35e0d1ebcb81652ef81451fa72c8bab --- /dev/null +++ b/data/2401.05011.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc0512bb26ed4bf5b7adcad7187825d2ff849b8d216d54b8f571109222c9b07e +size 785842 diff --git a/data/2401.05224.png b/data/2401.05224.png new file mode 100644 index 0000000000000000000000000000000000000000..0f43b24c449704049e367cb82c489f60dab206b7 --- /dev/null +++ b/data/2401.05224.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e76d7af6a71e97a376014a623aef8ed88be554b5ada5d71497891c27163dae8 +size 828073 diff --git a/data/2401.05334.png b/data/2401.05334.png new file mode 100644 index 0000000000000000000000000000000000000000..a5362d8ec5cb954e3b6587adc376f6dee5edc939 --- /dev/null +++ b/data/2401.05334.png @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:c41fbdb1458801e921e751b88d42fa28270d0dc000a1618f3930e442b0a72837 +size 963502 diff --git a/data/2401.05335v1.png b/data/2401.05335v1.png new file mode 100644 index 0000000000000000000000000000000000000000..9a6522c75d87fec71b6a8388b8c041f3265aa9bf --- /dev/null +++ b/data/2401.05335v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23c04bef9560e17d95bebd18d4700fc18be6ecef82becd2bc54592c03bbe5b49 +size 1393699 diff --git a/data/2401.05577.png b/data/2401.05577.png new file mode 100644 index 0000000000000000000000000000000000000000..e7b2fd27089be34a49dade8ec31b4f18171a463c --- /dev/null +++ b/data/2401.05577.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:280004aec26a7e7326f1b7a320bdce21ffaec34b68074013996530009e8ba027 +size 716597 diff --git a/data/2401.06056.png b/data/2401.06056.png new file mode 100644 index 0000000000000000000000000000000000000000..b4617915dd2d14cc476b5bbbdd1a524579a16f0a --- /dev/null +++ b/data/2401.06056.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eeabc43cb9c3cd4549f00cefb2c161519023e2e29e2c0e20b067198978f02d77 +size 1208186 diff --git a/data/2401.06116v1.png b/data/2401.06116v1.png new file mode 100644 index 0000000000000000000000000000000000000000..c345ee23603024e84d7e75f700929387655c3ac8 --- /dev/null +++ b/data/2401.06116v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:822e7e2626e4180cec1bcd3ec1f7a7a709139028e834ca2b99e9b891d9ad2889 +size 864799 diff --git a/data/2401.06129.png b/data/2401.06129.png new file mode 100644 index 0000000000000000000000000000000000000000..577f13edba8f3e2a22c50beb4f6db4072261aec1 --- /dev/null +++ b/data/2401.06129.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cc6af3d1077349818fb69f90ab7f4346713e6f503e97853bb64dbcd0313119e6 +size 871229 diff --git a/data/2401.06146.png b/data/2401.06146.png new file mode 100644 index 0000000000000000000000000000000000000000..bae0a0d9f967a5e97562e9d0ded9ebe5a546d6ad --- /dev/null +++ b/data/2401.06146.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a0245e867d84a445ef43d9e49d9b967732f8969b19e2c4cb5903ed59e473fb4 +size 918438 diff --git a/data/2401.06197.png b/data/2401.06197.png new file mode 100644 index 0000000000000000000000000000000000000000..0a242989018d88ff5a013587f08abbea079ede2e --- /dev/null +++ b/data/2401.06197.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:921f2e0b7fdbd4d9063af66cd68a7396701678083f3453c9bdafa5215a88658a +size 687871 diff --git a/data/2401.06209.png b/data/2401.06209.png new file mode 100644 index 0000000000000000000000000000000000000000..aec4e8a0d49add7b975ee4a03278ffe6bc193e1e --- /dev/null +++ b/data/2401.06209.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:63e225ebef21caeb3fae73c33c58ba5bf92454f0c75af63f1553a51796cca353 +size 1255172 diff --git a/data/2401.06312.png b/data/2401.06312.png new file mode 100644 index 0000000000000000000000000000000000000000..a3e4178120ddbc161acb0976b9febb98535327e9 --- /dev/null +++ b/data/2401.06312.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5dd9b3b4968427fbe3963dea30942d7f21d44d702b3d553686839c7895823530 +size 757490 diff --git a/data/2401.06395.png b/data/2401.06395.png new file mode 100644 index 0000000000000000000000000000000000000000..684a1baaa32cd010b7dac098fa613dffe7e4b3af --- /dev/null +++ b/data/2401.06395.png @@ 
-0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:88ac5e51e7cca4e7ddd16fc7e0062f5e8b6b1c161c6969470522535ffac16885 +size 733464 diff --git a/data/2401.06578.png b/data/2401.06578.png new file mode 100644 index 0000000000000000000000000000000000000000..9d614ed3c016bbd0adbca56e320f70bfba0eccfb --- /dev/null +++ b/data/2401.06578.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8e81930a9fb1968e99779d0896a59a037b60d93366e38f2cbdba154ddfc53b3a +size 834226 diff --git a/data/2401.06614.png b/data/2401.06614.png new file mode 100644 index 0000000000000000000000000000000000000000..fef3b494f53e0584e2e788edba216e19cf75f69a --- /dev/null +++ b/data/2401.06614.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8cd23e704c736f554c3186241ead50d64ca93811de44fead9abc7e67dd3fc28b +size 864676 diff --git a/data/2401.07114.png b/data/2401.07114.png new file mode 100644 index 0000000000000000000000000000000000000000..75554f2f61891b3de136dbffb50db0b86413302a --- /dev/null +++ b/data/2401.07114.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1fb16e0e7cbcd276c16d592efd231385edd2816a4e94613b56f134ac65b69dd5 +size 669966 diff --git a/data/2401.07402.png b/data/2401.07402.png new file mode 100644 index 0000000000000000000000000000000000000000..2a101d22ef7d257a304b5144618e30301fa5c664 --- /dev/null +++ b/data/2401.07402.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2b2433218b522ce3cfbae637ca1cae048d93c36f10976a00ef3e67b2bc08f405 +size 756564 diff --git a/data/2401.07745.png b/data/2401.07745.png new file mode 100644 index 0000000000000000000000000000000000000000..fa6b360df9fe2c0cb3d148a873bde85ff4437032 --- /dev/null +++ b/data/2401.07745.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b23dcac426c8341b1203d7c913807910456cc21e1a275cb32fe8c5562e138a59 +size 1125139 diff --git a/data/2401.07770.png b/data/2401.07770.png new file mode 100644 index 0000000000000000000000000000000000000000..b14227a426f2525e53e8c92b6de627a42dcef78e --- /dev/null +++ b/data/2401.07770.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b51195ac95038f6a83b2178d5f99db70153bbefdab6658fc8cc6afa72f61006 +size 1059808 diff --git a/data/2401.08036.png b/data/2401.08036.png new file mode 100644 index 0000000000000000000000000000000000000000..caf79e0a2c9ea1c523455feb5eae303362903292 --- /dev/null +++ b/data/2401.08036.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c723fb8c01cff55d89d7010d9cb5fba257c7455fef0fcbe552cd5a82028c1238 +size 1127313 diff --git a/data/2401.08053.png b/data/2401.08053.png new file mode 100644 index 0000000000000000000000000000000000000000..b782c5f68c286aef58d625a82ab88f89178ed87f --- /dev/null +++ b/data/2401.08053.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:46123107be2ec2878b13a6ea3f56375f24336fd9be383e671cb6deb1f0f84390 +size 1780408 diff --git a/data/2401.08209.png b/data/2401.08209.png new file mode 100644 index 0000000000000000000000000000000000000000..19bba8c95153f3ed1f63bc84de24d835e2902ec2 --- /dev/null +++ b/data/2401.08209.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:078016fa1b280f9ba66889e2bc2faed56a110f28c7f1199dbbe0f26dc763a56d +size 1183237 diff --git a/data/2401.08399.png b/data/2401.08399.png new file mode 100644 index 0000000000000000000000000000000000000000..5c46adad9576bea3b6cbf57303149afdd2b39b34 --- /dev/null +++ b/data/2401.08399.png @@ 
-0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7dffd7cff5a4f1b55fd80a9d5cf21ef5c93cf18e1da2d4feb31fb7f3b6eb7a27 +size 989882 diff --git a/data/2401.08407.png b/data/2401.08407.png new file mode 100644 index 0000000000000000000000000000000000000000..4a8bc0292f9d59f8b9146d35b5ac434fbbe16ad2 --- /dev/null +++ b/data/2401.08407.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c475cda3fa8e16f47c07e1f54879a28c9915e229be3a0ca481d365e87f565627 +size 752913 diff --git a/data/2401.08570.png b/data/2401.08570.png new file mode 100644 index 0000000000000000000000000000000000000000..c0caa610895456b93fbeb1d2baeb52194abec740 --- /dev/null +++ b/data/2401.08570.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7596a57ceeb81c098c258fc19fb5ac359f0f9a5806ac796d69e046f8f39efaf3 +size 1175492 diff --git a/data/2401.08577.png b/data/2401.08577.png new file mode 100644 index 0000000000000000000000000000000000000000..6a9975f4c41ebdc205e548a3090b268a0bbd384f --- /dev/null +++ b/data/2401.08577.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:08512afa67eaa4cbdc5b309b95db5c298b08138d894f41e98472df9f49829c7b +size 950887 diff --git a/data/2401.08739.png b/data/2401.08739.png new file mode 100644 index 0000000000000000000000000000000000000000..9dec34c3f9813ed0d458786cd016d6e8b5791bd7 --- /dev/null +++ b/data/2401.08739.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:929a4e1d1392dfe0ef1c2dd15b85c303ee528c6025158c2f88987e5d2be36a72 +size 988872 diff --git a/data/2401.08741.png b/data/2401.08741.png new file mode 100644 index 0000000000000000000000000000000000000000..26330870c441659d44046888343dcdaab49ad823 --- /dev/null +++ b/data/2401.08741.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:789febaa4073fe231fd205da03d15f76c3dd5b5d66d68e8c8d9e733025411438 +size 1239093 diff --git a/data/2401.08937.png b/data/2401.08937.png new file mode 100644 index 0000000000000000000000000000000000000000..16cdefba048a1ad96b52a809f9c569b095c639ce --- /dev/null +++ b/data/2401.08937.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f4ac52e1328e8f5d4e915145c0e201bbc7930b6f205c1e3b996c2ea67330933c +size 728721 diff --git a/data/2401.09340.png b/data/2401.09340.png new file mode 100644 index 0000000000000000000000000000000000000000..20d43a8e61ae28067a00cab7e627aba8b7739fbc --- /dev/null +++ b/data/2401.09340.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2a2220522a18f0f8d97f5f40b9143717f6d860a515b1467f0b40cde4178d6388 +size 1606647 diff --git a/data/2401.09414.png b/data/2401.09414.png new file mode 100644 index 0000000000000000000000000000000000000000..bfd179924ed63ea503c53c57b13283c3b9f97269 --- /dev/null +++ b/data/2401.09414.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bec2c523301f7094b7a5af9296ad168aeceffab79b62def55c627efb686132f6 +size 812441 diff --git a/data/2401.09419.png b/data/2401.09419.png new file mode 100644 index 0000000000000000000000000000000000000000..54e7c36401aac48188d381205d09a7937d57735b --- /dev/null +++ b/data/2401.09419.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a29f2c5f5b57aa3d9deaa56aeba7e2c5200da2609f6e0ed7785bc6a8ea6db42b +size 1288034 diff --git a/data/2401.09603.png b/data/2401.09603.png new file mode 100644 index 0000000000000000000000000000000000000000..b8a6cc5fc036857e8e3f31fc0f25312c949e9790 --- /dev/null +++ b/data/2401.09603.png @@ 
-0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:29096c22c3843141aedca76cfb1fd99256112f8b26c97c3b30ccd3a26b6045b2 +size 715788 diff --git a/data/2401.09721.png b/data/2401.09721.png new file mode 100644 index 0000000000000000000000000000000000000000..a6afcf0b7db3960177dee4b4fe6d0cf509cc3120 --- /dev/null +++ b/data/2401.09721.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:993369553fcf7d690b013e751b897015b8fb3d68746e3b97ef99be1d8ede641e +size 833193 diff --git a/data/2401.09740.png b/data/2401.09740.png new file mode 100644 index 0000000000000000000000000000000000000000..fdd0607bd1af4250fe4093fe27e411085d6230ce --- /dev/null +++ b/data/2401.09740.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:211f92d9fe1992693e00839b87483ddf13535890228dea4e0363c3f498525c35 +size 837382 diff --git a/data/2401.10005v1.png b/data/2401.10005v1.png new file mode 100644 index 0000000000000000000000000000000000000000..df853e04af6c8c30a8d994af3e0bfbe9d0d08c3d --- /dev/null +++ b/data/2401.10005v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e033554f8eebc5981bba116f42a6965770684d1e044f2edc831f108f84dbfac0 +size 736675 diff --git a/data/2401.10171.png b/data/2401.10171.png new file mode 100644 index 0000000000000000000000000000000000000000..7cd0fded9a1436e051c5500e2fdf64cfb050875e --- /dev/null +++ b/data/2401.10171.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:289aa01aad2379f10de4a322be692f86a0e17bc60fab951e89ead6766c326eed +size 791005 diff --git a/data/2401.10217.png b/data/2401.10217.png new file mode 100644 index 0000000000000000000000000000000000000000..b9b3b8d6be65b5d98fe546765315eb85cf72ab8a --- /dev/null +++ b/data/2401.10217.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cda1d11b382f09403100461e40e53bd5daf2242122dabd2aaca6899bee92a827 +size 884142 diff --git a/data/2401.10219.png b/data/2401.10219.png new file mode 100644 index 0000000000000000000000000000000000000000..c7b86ad44333b1098511815f0a5273ecfeae98c9 --- /dev/null +++ b/data/2401.10219.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:88529bc9c227d6f3e49d739708685438d0e553dbfcfdfe8fe570ff7d35a5f99b +size 1227394 diff --git a/data/2401.10224.png b/data/2401.10224.png new file mode 100644 index 0000000000000000000000000000000000000000..eff23619f997c1dc2297135748da8b1447b40e48 --- /dev/null +++ b/data/2401.10224.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:95f03313b03c13ff3f977fc5ea205a0f9978e1878c3b2646e40f6ea2e64a924b +size 1341837 diff --git a/data/2401.10226.png b/data/2401.10226.png new file mode 100644 index 0000000000000000000000000000000000000000..01480db4fc092e2554cca0c94fa3eb068c63fea5 --- /dev/null +++ b/data/2401.10226.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3b3489362dd50f08fea136fdfa2cff90db554d3a07e825d614ab136c4ebff054 +size 816910 diff --git a/data/2401.10229.png b/data/2401.10229.png new file mode 100644 index 0000000000000000000000000000000000000000..6015a81275c495524294df9533a048a79710d989 --- /dev/null +++ b/data/2401.10229.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:65b4a03e3befa92f005b121ccbcbc14edbf0dc7cfc6ff36be9c57b8d26e8cbc9 +size 1523189 diff --git a/data/2401.10786.png b/data/2401.10786.png new file mode 100644 index 0000000000000000000000000000000000000000..c4ba09db0dffe1463b86999c14313b33fddec15e --- /dev/null +++ 
b/data/2401.10786.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0fa4c2e6eded224d23d46173fbffe791ef4d14e50332586bbc77eafba537e122 +size 1228784 diff --git a/data/2401.10831.png b/data/2401.10831.png new file mode 100644 index 0000000000000000000000000000000000000000..9f0c4c5e279f155406739ccf90aa69d5efc4fef0 --- /dev/null +++ b/data/2401.10831.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83f5d8d6f7548a0cf6c0d01d36303ac2fdba77b8fbe681eded382bc0587c6242 +size 1209953 diff --git a/data/2401.10891.png b/data/2401.10891.png new file mode 100644 index 0000000000000000000000000000000000000000..d92e5efe7b80905372104c5096f2a20d1902fc5f --- /dev/null +++ b/data/2401.10891.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:98062c92e46a7336af1b69d517d28edbeb9db50e59406977cf785d15e98c51fb +size 1123313 diff --git a/data/2401.11078.png b/data/2401.11078.png new file mode 100644 index 0000000000000000000000000000000000000000..5b595776b5051a916d37f5d74e220e8aba6ca36a --- /dev/null +++ b/data/2401.11078.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:157abcfc95d68c735388633424f86ecdf7a8de366a6becd1a68d49ab07e81aef +size 1099696 diff --git a/data/2401.11085.png b/data/2401.11085.png new file mode 100644 index 0000000000000000000000000000000000000000..16a051d25cbc41d157c641fd303336b115b551fb --- /dev/null +++ b/data/2401.11085.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c911ec76297291740ef3359b9b0c7dcfd5489a4cefbabc215143759b4dbe9758 +size 963114 diff --git a/data/2401.11140.png b/data/2401.11140.png new file mode 100644 index 0000000000000000000000000000000000000000..fe73e2fa22ff18b36e37b534b7eb87600bccc81a --- /dev/null +++ b/data/2401.11140.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:53be447c2d550708db78689e447388ae326c840c46bc1b192c256c07121a6840 +size 718470 diff --git a/data/2401.11704v1.png b/data/2401.11704v1.png new file mode 100644 index 0000000000000000000000000000000000000000..f4629f862812ae33161f3607d90db411dffbcc35 --- /dev/null +++ b/data/2401.11704v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3c6c0a7a35039960d35427466637f6277e2f4fb199eaa44e11b420aa8848f79b +size 1088290 diff --git a/data/2401.11739.png b/data/2401.11739.png new file mode 100644 index 0000000000000000000000000000000000000000..78d9c8c8cb01b7248c37ee79ecd0d721a602afd6 --- /dev/null +++ b/data/2401.11739.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:661de136278a2d37303e859e0af75a7a225c2edce56cbf0e9228f6df9eb04e21 +size 1252400 diff --git a/data/2401.12168.png b/data/2401.12168.png new file mode 100644 index 0000000000000000000000000000000000000000..acdf92eeeceefe5bfc460d1a4529375be933119a --- /dev/null +++ b/data/2401.12168.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f229533369e91fb947c602686edfe529b32146d0aa5b5065947516a3682fc515 +size 796664 diff --git a/data/2401.12175v2.png b/data/2401.12175v2.png new file mode 100644 index 0000000000000000000000000000000000000000..4b4bd807b238ce012ab307864e3732dcba29da4f --- /dev/null +++ b/data/2401.12175v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5767bd5eb029974eb049528272893b7d05ae920012b360e5fbc095460ec0e3f5 +size 558927 diff --git a/data/2401.12425.png b/data/2401.12425.png new file mode 100644 index 0000000000000000000000000000000000000000..5753d5c5950404a76f695918de6243c176c94169 --- 
/dev/null +++ b/data/2401.12425.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bd219bfa4f3f4dc76cf056d8f0d3d66790c52a5cdc1411ce8ae375a508ec5a8c +size 894570 diff --git a/data/2401.12592.png b/data/2401.12592.png new file mode 100644 index 0000000000000000000000000000000000000000..db4ad74d35810190b25b9ca5a65fff1413cbd7f6 --- /dev/null +++ b/data/2401.12592.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cec8738e62a3107bcc325fba40953e4fb8c90a08a746cfdbc75cbbdbde5725de +size 1277824 diff --git a/data/2401.12694.png b/data/2401.12694.png new file mode 100644 index 0000000000000000000000000000000000000000..990ab5d29ec9a2065e9fd6fec1af40038b1618b9 --- /dev/null +++ b/data/2401.12694.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6e1673b5a82b0c509adef30b2787c7f31d398c0f665efb6ca2e91a3093673770 +size 885134 diff --git a/data/2401.12979.png b/data/2401.12979.png new file mode 100644 index 0000000000000000000000000000000000000000..c827fbb3f3d8bc85e4dfdc6ae2dc9d7e37bcf125 --- /dev/null +++ b/data/2401.12979.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8d59b459c480c94343415a60d5ff068e36e16487b560b9a30177e58653c330aa +size 824474 diff --git a/data/2401.13082.png b/data/2401.13082.png new file mode 100644 index 0000000000000000000000000000000000000000..9d06ba194ea413e7dccb1f824553a5254859b947 --- /dev/null +++ b/data/2401.13082.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b099030339554494a289b371f7b3c0c936819483770ab43a610fc2f7e5a76fa7 +size 1096129 diff --git a/data/2401.13296.png b/data/2401.13296.png new file mode 100644 index 0000000000000000000000000000000000000000..1bc1ba97e77b2dac4507cf68fae73abbff734225 --- /dev/null +++ b/data/2401.13296.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fe97505a63331d6b40903c07b8252161d8f93ec7534353dc19d2a16341671712 +size 729693 diff --git a/data/2401.13627.png b/data/2401.13627.png new file mode 100644 index 0000000000000000000000000000000000000000..99b327e1e9c3a8cf965a5b4766172c79c9103044 --- /dev/null +++ b/data/2401.13627.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:957e4070812052e370a5b9aec8d80b9aec214ba80a1bc31b7b5783b3c908c72c +size 1955132 diff --git a/data/2401.13650.png b/data/2401.13650.png new file mode 100644 index 0000000000000000000000000000000000000000..cef050028c8b1b41bd8d457a0187c3e6a9ec086d --- /dev/null +++ b/data/2401.13650.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:90b3199808d27f3b6d1f30dcabec2d35539a5585928b939eb5bd3831988f4961 +size 644402 diff --git a/data/2401.13856.png b/data/2401.13856.png new file mode 100644 index 0000000000000000000000000000000000000000..99f919790c7f0d3f336a4983502d50e25571597a --- /dev/null +++ b/data/2401.13856.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d36569b5ea90d07852526fe1893a930f36c9ddb4106025804c80b743f3e354dc +size 624492 diff --git a/data/2401.14349.png b/data/2401.14349.png new file mode 100644 index 0000000000000000000000000000000000000000..b5093f8bff7f4cdd9a1af621059cfdc62b28c044 --- /dev/null +++ b/data/2401.14349.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:38bb78f0f3134ab5923d1c4d98f57ab9ea9f1ee7da1165f62857b82f2d82c934 +size 885741 diff --git a/data/2401.14391.png b/data/2401.14391.png new file mode 100644 index 0000000000000000000000000000000000000000..8e9647aed3e3f8ee2aaf806b135941da56debbad --- 
/dev/null +++ b/data/2401.14391.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fe498bc72e44ffa847e4519f598aa9ec065cf9d17cb60adf80d294efe6bf0778 +size 930308 diff --git a/data/2401.14398.png b/data/2401.14398.png new file mode 100644 index 0000000000000000000000000000000000000000..64d36ace3c48acb34af6627fce8922e12ef66b84 --- /dev/null +++ b/data/2401.14398.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1ab56d354691a1f2ba31aebf622bb9b3c22348894da8124e2c2e4c6aa7b1ce0e +size 782283 diff --git a/data/2401.14405.png b/data/2401.14405.png new file mode 100644 index 0000000000000000000000000000000000000000..f6c21a2c806adf0e50451f7a6c5589aeaa3ef4d5 --- /dev/null +++ b/data/2401.14405.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a8b9abdc17ea853174d0ae54f104fa60ebe9f9e067aa20edf92157766fc8eb8b +size 771985 diff --git a/data/2401.14718.png b/data/2401.14718.png new file mode 100644 index 0000000000000000000000000000000000000000..5da9c5b89fb9e9d121692923d8d834553ded6fb7 --- /dev/null +++ b/data/2401.14718.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e7f43890038971956d201527247fc853afb8f1cee1d9c4e21930fdf9c6b0a8d +size 783385 diff --git a/data/2401.14758.png b/data/2401.14758.png new file mode 100644 index 0000000000000000000000000000000000000000..aa9e9d580739fece74624e499ecf19b5d0e14b4c --- /dev/null +++ b/data/2401.14758.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cbf63d8889f4bff3451896661ccb19b84c9e79df659c1cd360d19e0ab001ee46 +size 618246 diff --git a/data/2401.15261.png b/data/2401.15261.png new file mode 100644 index 0000000000000000000000000000000000000000..f4a118b09818175d03adaf74f7699fc20d4d49dc --- /dev/null +++ b/data/2401.15261.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e33381416ad81c350c1ddf1fda9d35503792219b34eace9a7b32e5d1c98c0107 +size 959958 diff --git a/data/2401.15859.png b/data/2401.15859.png new file mode 100644 index 0000000000000000000000000000000000000000..fcfb9d1ab324094bbe0cbbb841327f04316b7393 --- /dev/null +++ b/data/2401.15859.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:275aa4a19f0f8a2210c3918647c93e0fc48ced5f468f5cece6259bae597313cb +size 1597803 diff --git a/data/2401.16001.png b/data/2401.16001.png new file mode 100644 index 0000000000000000000000000000000000000000..9f1e2c606c63dc7967c84262c77aa41741d5774a --- /dev/null +++ b/data/2401.16001.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2db9ffd0b81939f49c17900f2fff7d7c2c2598f68f04528714eba255b03d47d0 +size 776107 diff --git a/data/2401.16287.png b/data/2401.16287.png new file mode 100644 index 0000000000000000000000000000000000000000..68b1b6e3a05dc7288946c694733dfbe861f93a17 --- /dev/null +++ b/data/2401.16287.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:873b04140969d3edb995259715c4e4829cbbd141a50d1e8a618efddd1f8e7c1a +size 585338 diff --git a/data/2401.16456.png b/data/2401.16456.png new file mode 100644 index 0000000000000000000000000000000000000000..225dfdfe78723b7c8b8c521d710ef3290e89198c --- /dev/null +++ b/data/2401.16456.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:05abb5b6545f8bbd5e15c0c00abc69b21f326cc525249838584c219be2ba6bcd +size 786279 diff --git a/data/2401.16741v1.png b/data/2401.16741v1.png new file mode 100644 index 0000000000000000000000000000000000000000..caf24b2b71aff903f887f0834b5306c41306b9a9 --- 
/dev/null +++ b/data/2401.16741v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6599ce277b915224a343ddf77fc73c6bb8e4c482563bf87327e453dcbd78e769 +size 1118955 diff --git a/data/2401.17270.png b/data/2401.17270.png new file mode 100644 index 0000000000000000000000000000000000000000..8cbb8dc135c627064116caa83a94a7930ebbadfe --- /dev/null +++ b/data/2401.17270.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:97d847ec6fb19e67e05a1792fc8ad39a5bcd523bfad1d62de44e7882adf72341 +size 643867 diff --git a/data/2401.17603.png b/data/2401.17603.png new file mode 100644 index 0000000000000000000000000000000000000000..b0a6687cb48bd3b87f5815c394b7b9b95faa1605 --- /dev/null +++ b/data/2401.17603.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3a21f6cf4ab6dddfd43103ae5afcd4075b2ba80b2c8ee20914b35b445e3abf03 +size 548681 diff --git a/data/2401.17879.png b/data/2401.17879.png new file mode 100644 index 0000000000000000000000000000000000000000..5eff8d056059747ba533aca4ec59df57495275b2 --- /dev/null +++ b/data/2401.17879.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a6ed12ecfa358e8b861cd9d4bdc17eef2ec017c2a35e8a29c865a55c31ba5ae2 +size 796925 diff --git a/data/2401.18084.png b/data/2401.18084.png new file mode 100644 index 0000000000000000000000000000000000000000..bf8890a99677da233a57d3f4266d6896290ca4e3 --- /dev/null +++ b/data/2401.18084.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6458187aece50257b4ecdc25e53a25a9332bff24af0fcb95752a4157b4781de1 +size 844033 diff --git a/data/2402.00627.png b/data/2402.00627.png new file mode 100644 index 0000000000000000000000000000000000000000..a0f068560d9b94da77cbcb602e8474eefa1a21c8 --- /dev/null +++ b/data/2402.00627.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c97ed6ce49b467bc7bc1817e8d8b521d26ca42df82607e3d5db39169d8217807 +size 1547416 diff --git a/data/2402.00863.png b/data/2402.00863.png new file mode 100644 index 0000000000000000000000000000000000000000..8cea2e0e6b93526d201c51ea122860f804c044aa --- /dev/null +++ b/data/2402.00863.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:745d32a96f889499c8ded047d6337dd275477fda5541317c768dc92b434adc42 +size 1744660 diff --git a/data/2402.02045.png b/data/2402.02045.png new file mode 100644 index 0000000000000000000000000000000000000000..32e01a7a0778876421c5875646a0d090a8ed3040 --- /dev/null +++ b/data/2402.02045.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:230d2ce733eebc4bdf9ff8f67f6dbdc52abdf6377d8801ed2f547d12281fcf40 +size 700142 diff --git a/data/2402.02235.png b/data/2402.02235.png new file mode 100644 index 0000000000000000000000000000000000000000..d6b6d425497e7dd4dea2833faf9050f5598d697a --- /dev/null +++ b/data/2402.02235.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8e46fc98e4dedbbd6894647aec81489fc735d513db4aea4cf3ebb5690adfda1a +size 899345 diff --git a/data/2402.02352.png b/data/2402.02352.png new file mode 100644 index 0000000000000000000000000000000000000000..5677eceb914e2c1acf551083e01a0dcea39cfa5c --- /dev/null +++ b/data/2402.02352.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8a0e90dfcbec70514ede7a8edc2865b73b5bdd77a830b2263496c6cb1f09b801 +size 813255 diff --git a/data/2402.02583.png b/data/2402.02583.png new file mode 100644 index 0000000000000000000000000000000000000000..25e8d88359b5fca279adf60fe888dafa852665d8 --- 
/dev/null +++ b/data/2402.02583.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:12568a7d266f48e53b5cc7f442918c1a1409bd68012670080493dab7cb9b4633 +size 1423652 diff --git a/data/2402.02887.png b/data/2402.02887.png new file mode 100644 index 0000000000000000000000000000000000000000..73f04736cf334008c3a665cb70036fde39de7f31 --- /dev/null +++ b/data/2402.02887.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:89119141e293d23d8435cdbd88558f4d9206d2b28312a5b274912c0b9da60c6a +size 789054 diff --git a/data/2402.03161.png b/data/2402.03161.png new file mode 100644 index 0000000000000000000000000000000000000000..a0d3e22692ce1011e32ff8041ba3825d315676f5 --- /dev/null +++ b/data/2402.03161.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:60305f2acec267a603f0544dd8c123667fee3dc6aff658d829d58f95d68e78f6 +size 929392 diff --git a/data/2402.03290.png b/data/2402.03290.png new file mode 100644 index 0000000000000000000000000000000000000000..14554885ab44389386d74f15810f9261a15e2cb6 --- /dev/null +++ b/data/2402.03290.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:24ac92bcc400998e9059a975684608b4f685ad8b086e9e7553864f9c91f0de79 +size 1392797 diff --git a/data/2402.03312.png b/data/2402.03312.png new file mode 100644 index 0000000000000000000000000000000000000000..eca1b0eea92da664cb01cd8ae9254cd5f70baf21 --- /dev/null +++ b/data/2402.03312.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e2b3ffcf48972c26a92ee81440e76ceb685cb453ad0ed91aec89c4fbc9ce4065 +size 823065 diff --git a/data/2402.03587.png b/data/2402.03587.png new file mode 100644 index 0000000000000000000000000000000000000000..244a3f2b07bde9c21685e7775e5dea69263d4a9f --- /dev/null +++ b/data/2402.03587.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a69b616fb991e015c28907143b9920e4252bfc32cf89e35e4be4e0a40842769c +size 671568 diff --git a/data/2402.03908.png b/data/2402.03908.png new file mode 100644 index 0000000000000000000000000000000000000000..80a7f7eeb072e0aa8c5c422d1e09973ca740d960 --- /dev/null +++ b/data/2402.03908.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:488d5e4034150d9e01bd2bdbf91004646d19c4d65524a4e1f09dda402146adf5 +size 1151383 diff --git a/data/2402.04356.png b/data/2402.04356.png new file mode 100644 index 0000000000000000000000000000000000000000..6bb6fda1856046daf5275ae22bd830136364aef0 --- /dev/null +++ b/data/2402.04356.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:864705de567464deb3e5e96b04f441257723c510adf0ab7d3f7ba15583c44c79 +size 720769 diff --git a/data/2402.04476.png b/data/2402.04476.png new file mode 100644 index 0000000000000000000000000000000000000000..ad4d988f6fb09014c9f37bec5eec1f6edffe3be2 --- /dev/null +++ b/data/2402.04476.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:35df9ade521a03e72773a711fd96f8b1d065279474c3793e22f06188c8d95450 +size 774432 diff --git a/data/2402.04754.png b/data/2402.04754.png new file mode 100644 index 0000000000000000000000000000000000000000..0f35a683f44dd3d6933f812aed317226b7a4a3d1 --- /dev/null +++ b/data/2402.04754.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f58b38e709ac20069472055d1e3842e32b3279d66168d51c9118c417fb3369e8 +size 613350 diff --git a/data/2402.05235.png b/data/2402.05235.png new file mode 100644 index 0000000000000000000000000000000000000000..32f8628a0e1dec45983da871b0a470a7b576857a --- 
/dev/null +++ b/data/2402.05235.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4e3bfb86172deec9d7aa4335a021f5c1df4ef4778e50e088f83fad47af6a9db9 +size 1270593 diff --git a/data/2402.05408.png b/data/2402.05408.png new file mode 100644 index 0000000000000000000000000000000000000000..43c79d5eda6f35cd99213f2fea30f604523d6c90 --- /dev/null +++ b/data/2402.05408.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1caed3ee6d922388915b880853ca9e8f1fa188eb7b9528c075c111e5a0a85a19 +size 1518142 diff --git a/data/2402.05472.png b/data/2402.05472.png new file mode 100644 index 0000000000000000000000000000000000000000..de48114e087c6241bd6d50f91a5d20d4fdb5b16d --- /dev/null +++ b/data/2402.05472.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4ab059da421e0a06b6b4f5f37e54aa29e9f0806fe4bfcd98fc6c64616a14c875 +size 1032084 diff --git a/data/2402.05746.png b/data/2402.05746.png new file mode 100644 index 0000000000000000000000000000000000000000..96b10a5c84c7db247713e1b77f08af7f28227310 --- /dev/null +++ b/data/2402.05746.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5b66e00047488704fd44c2f22ea6185d103e57bdefe2808666f810b01b6f8ff4 +size 917396 diff --git a/data/2402.05917v1.png b/data/2402.05917v1.png new file mode 100644 index 0000000000000000000000000000000000000000..1941b2115e7ef217e0776c9fd12a7e91b29de1eb --- /dev/null +++ b/data/2402.05917v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:43bb499791b5ea465738f9a3732e5a36dd288eb5248c31dc558767b9b382a823 +size 950554 diff --git a/data/2402.05932.png b/data/2402.05932.png new file mode 100644 index 0000000000000000000000000000000000000000..91f4167d7c6cb2463025122c4dbe27b5bd1f1138 --- /dev/null +++ b/data/2402.05932.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f846d3d300a34501eeed8453c654e707b055f1f097cd363fd39341067151b18f +size 851612 diff --git a/data/2402.05937.png b/data/2402.05937.png new file mode 100644 index 0000000000000000000000000000000000000000..cd83f6d13329b8eb75c4d4011a11afba033ff434 --- /dev/null +++ b/data/2402.05937.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:54c4ac89afca4963ea5098581be74ead5d9b46348e41c43cb85363421a86eb25 +size 1016813 diff --git a/data/2402.06106.png b/data/2402.06106.png new file mode 100644 index 0000000000000000000000000000000000000000..770363593e033f92b8d02f842d9542dd86f0b9bc --- /dev/null +++ b/data/2402.06106.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d948f449053f422e824a01773d9da88c4e6dca6c98c5ac5ecdeb6e17602738d +size 811784 diff --git a/data/2402.06117.png b/data/2402.06117.png new file mode 100644 index 0000000000000000000000000000000000000000..313dfac3f6efa13d030d537692ed77b70b9422af --- /dev/null +++ b/data/2402.06117.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:661494304173a30b243a27846db27a9fdfa7f9c2b019b65cbafe34ff198aab59 +size 777057 diff --git a/data/2402.06136.png b/data/2402.06136.png new file mode 100644 index 0000000000000000000000000000000000000000..cf150c6dd85749c5531ca7d30f4b2a4828bd0676 --- /dev/null +++ b/data/2402.06136.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd1dcc056747a7cc33238a1630bf6a2b053f03598832bb4fc81eb4fc7f593ce8 +size 426283 diff --git a/data/2402.06559.png b/data/2402.06559.png new file mode 100644 index 
0000000000000000000000000000000000000000..cb8f2de445d3cb94514227dc86a307cc39ccdc13 --- /dev/null +++ b/data/2402.06559.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6541192739d5b1498a3a28a5dad5bb3b2d36bb4658286a28f6e32c054218d6ea +size 795573 diff --git a/data/2402.06659.png b/data/2402.06659.png new file mode 100644 index 0000000000000000000000000000000000000000..2eb99547c600f27c2feed967f0854071757a9d87 --- /dev/null +++ b/data/2402.06659.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b5f0357fcf50eea6d6dadba26162b8462ecc1cfc795b886501bde4a9b8644438 +size 672257 diff --git a/data/2402.07183.png b/data/2402.07183.png new file mode 100644 index 0000000000000000000000000000000000000000..25a9cd0efe361653aef6d11c25bc91fc456e243c --- /dev/null +++ b/data/2402.07183.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:26609f31dc8fb2dd74b5e67358ddb7f9c9f342c31fdc3fb4d9c39e094ce62409 +size 666707 diff --git a/data/2402.07220.png b/data/2402.07220.png new file mode 100644 index 0000000000000000000000000000000000000000..639aad9bfd84dee3c4bb9d4864acafcf0e5f04c7 --- /dev/null +++ b/data/2402.07220.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:128af62e90f641e94e77ba0ae8cea9e79692aa4ec969853dcc9ed6a3d75a88bb +size 1197627 diff --git a/data/2402.07635.png b/data/2402.07635.png new file mode 100644 index 0000000000000000000000000000000000000000..255c1e54b2030a76e4da88dc2954a23556279d5d --- /dev/null +++ b/data/2402.07635.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:037ba1e89468a738b725ef2969ad4d0a585035e6a43560931d0053b4a7675ca4 +size 884430 diff --git a/data/2402.07739v1.png b/data/2402.07739v1.png new file mode 100644 index 0000000000000000000000000000000000000000..0b05b78121533089037d6c469f12a7d77caa1b8b --- /dev/null +++ b/data/2402.07739v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f779bd35d6dcf314bef5b38007ed04e650341264cf3153d8f35cb2b75cd570e2 +size 729472 diff --git a/data/2402.08359.png b/data/2402.08359.png new file mode 100644 index 0000000000000000000000000000000000000000..2fbfed97662b3358475152fd0f3531da603b01ab --- /dev/null +++ b/data/2402.08359.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a5e084f46e9fc3773a6bb2e47e730a561d26b95eef9bf7e4d90d6fb66c702658 +size 796503 diff --git a/data/2402.08622.png b/data/2402.08622.png new file mode 100644 index 0000000000000000000000000000000000000000..bab036ab751858bde10a0e8c1efbb1fbea7e077d --- /dev/null +++ b/data/2402.08622.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:69237e71a70cc9a643ed4caf39c9dddc574a6a6d8faba633edb9682d7ba8880e +size 799367 diff --git a/data/2402.08654.png b/data/2402.08654.png new file mode 100644 index 0000000000000000000000000000000000000000..d514d467208a572ee37f43e0ab3ade4ce3188497 --- /dev/null +++ b/data/2402.08654.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f214b73c63c03778ced3d1065f7999436b204ee8a3d548ed1b06fc2f9e207594 +size 1042139 diff --git a/data/2402.08657.png b/data/2402.08657.png new file mode 100644 index 0000000000000000000000000000000000000000..c1e09cba4106db3246b12d2e6bcbbf038f6776d8 --- /dev/null +++ b/data/2402.08657.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b54a669e7a87365b37d2f2c9d902a7bbe90ee4301ba3b489684d501063afb5c8 +size 933143 diff --git a/data/2402.08714.png b/data/2402.08714.png new file mode 100644 
index 0000000000000000000000000000000000000000..80b19087dc5cd4e6ea8ebbfbfb5ed97b305b7cd7 --- /dev/null +++ b/data/2402.08714.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bbe74170415d2ec35db7ea6a08b6a8d583f2fc389793e885b4edd664bf90416b +size 1332747 diff --git a/data/2402.08876.png b/data/2402.08876.png new file mode 100644 index 0000000000000000000000000000000000000000..f43a53531e5263deedb8203f9472d66f5a691b0f --- /dev/null +++ b/data/2402.08876.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b972515536ce8b3cff53e6ae57217f1ab87485cea2163512726b7560a40171ec +size 952539 diff --git a/data/2402.08919v1.png b/data/2402.08919v1.png new file mode 100644 index 0000000000000000000000000000000000000000..988c41c223ee8a97007aaab8ecb1f7b7ef40459a --- /dev/null +++ b/data/2402.08919v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5886d33b84b61e11401dfc48864a6a7c907dacdb778379561b0cf52f5f701a5b +size 793620 diff --git a/data/2402.08922.png b/data/2402.08922.png new file mode 100644 index 0000000000000000000000000000000000000000..b0f239f5e813f2af21776e0f0dae4201d2c7f4a9 --- /dev/null +++ b/data/2402.08922.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a70f2973f0c432e4f5601e27e49d8d3ecf77cb928c99ab8e9497c7575558299c +size 774646 diff --git a/data/2402.09181.png b/data/2402.09181.png new file mode 100644 index 0000000000000000000000000000000000000000..a724b8a92927e4722ddd1892ca946fbcd16dcd0d --- /dev/null +++ b/data/2402.09181.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a9d0d95869578b1924c5709a1ec833982815880440ffc12b9a8f658332a85ee3 +size 758271 diff --git a/data/2402.09812.png b/data/2402.09812.png new file mode 100644 index 0000000000000000000000000000000000000000..194704b48cb570c5c9c82be586b1071d2dac62a2 --- /dev/null +++ b/data/2402.09812.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c8dce069b25fd7c8262a2b9beb26ec9c5df98c0042fa8fc63e54b81c43d49b4 +size 1857294 diff --git a/data/2402.09944.png b/data/2402.09944.png new file mode 100644 index 0000000000000000000000000000000000000000..aef4a2bf0c2ca8c71953309452b0fcc8199ea4b6 --- /dev/null +++ b/data/2402.09944.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:849b2726ecdd4227b21d5a1b9288d9a87c12fee33a6670581d3acba9761a1923 +size 1296290 diff --git a/data/2402.10099.png b/data/2402.10099.png new file mode 100644 index 0000000000000000000000000000000000000000..0d67cd48bba6e647118ca4a85eb5deb96276000b --- /dev/null +++ b/data/2402.10099.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e4bf6ae5b8f039b90945ce476287e6c5cd1f576938ec4ba241f9cc7c4ead616d +size 799822 diff --git a/data/2402.10128.png b/data/2402.10128.png new file mode 100644 index 0000000000000000000000000000000000000000..87dd20e9aadcaea10637a36ca7d77461539bc0b3 --- /dev/null +++ b/data/2402.10128.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3cd0111612df4af0ccb02a502195ac172cfc97a4b9ec6cd6230849f0a8039d18 +size 882490 diff --git a/data/2402.10401.png b/data/2402.10401.png new file mode 100644 index 0000000000000000000000000000000000000000..1085f2996ca1f8b4d78c2e18b4be7d7e36203d98 --- /dev/null +++ b/data/2402.10401.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:646747bae229ac2c83d6484d044f3c3971660a7e97234ae56caa0a327abdf65e +size 856134 diff --git a/data/2402.10636.png b/data/2402.10636.png new file mode 
100644 index 0000000000000000000000000000000000000000..0a8957c3c98f62ed0d1560cdff24c64e16e12274 --- /dev/null +++ b/data/2402.10636.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:743b05111eb7267c8b9edec30de16fb140b135ba027ce90c0573cb82e4be43cf +size 877293 diff --git a/data/2402.11874.png b/data/2402.11874.png new file mode 100644 index 0000000000000000000000000000000000000000..abfa87a0146b987e6fb79f9452f6c4130dfff785 --- /dev/null +++ b/data/2402.11874.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:70e4dd33f2f96e3d3fc556fe3b950dec3d6f47491262d7798c9c7119c7e984c2 +size 1027796 diff --git a/data/2402.12259.png b/data/2402.12259.png new file mode 100644 index 0000000000000000000000000000000000000000..b4ab60010a414b447475f9abfebf9bd254ed3960 --- /dev/null +++ b/data/2402.12259.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6055acbfc6ea32042a02e3ff5ba1cddaca86d8ccbb3d5d7d0ba30ab5c48d860e +size 854468 diff --git a/data/2402.12712.png b/data/2402.12712.png new file mode 100644 index 0000000000000000000000000000000000000000..5848da99a3ecc42730ac2936e95b4a6b3a40905c --- /dev/null +++ b/data/2402.12712.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e8e11ea8f9e86a0801a60c7e8f7ba6d7f23892321a59680fd31a599ea2e2052e +size 409184 diff --git a/data/2402.13250.png b/data/2402.13250.png new file mode 100644 index 0000000000000000000000000000000000000000..5dbd097f81fe2601f680c9fc6b2e72c91ef3d28c --- /dev/null +++ b/data/2402.13250.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d0eb74f0d253d5f9817bed6ad159019de898cfd62ee3623c54d8585ca54bb71 +size 797725 diff --git a/data/2402.14000.png b/data/2402.14000.png new file mode 100644 index 0000000000000000000000000000000000000000..47e2f60c5d49e804bb867ad45d704af4f19eba36 --- /dev/null +++ b/data/2402.14000.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1aca1e72fdcb75e43b440e494090175effe5816010e4a683ff1e655fb013f863 +size 381175 diff --git a/data/2402.14000v1.png b/data/2402.14000v1.png new file mode 100644 index 0000000000000000000000000000000000000000..9bb4ec77c3ba5413d04e5591abc3ba63df4f78cc --- /dev/null +++ b/data/2402.14000v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f04c50943062fbb5f4486775baa38149cb816ea73aeeb26a56d44747fc9fff6c +size 1482295 diff --git a/data/2402.14371v2.png b/data/2402.14371v2.png new file mode 100644 index 0000000000000000000000000000000000000000..9b612e78779d2502287fd1d24d6c3cc2a476f428 --- /dev/null +++ b/data/2402.14371v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:53d45b78f61219943003ee668e6f20898889bceec95ed2fa70dfb5ae2f614cd7 +size 898347 diff --git a/data/2402.14654.png b/data/2402.14654.png new file mode 100644 index 0000000000000000000000000000000000000000..a07892bb9228177364ac2bb040716ff241e4d8b4 --- /dev/null +++ b/data/2402.14654.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a9a503b7d0dbd7136836449e0cd110c3784dfe8c96737d85d210a81f749c70c8 +size 752285 diff --git a/data/2402.14795.png b/data/2402.14795.png new file mode 100644 index 0000000000000000000000000000000000000000..44e7cad6a87a54aefce674e93bb00b03e03e07ee --- /dev/null +++ b/data/2402.14795.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c2c91a1b08085e4d71a883cb61b088b96dedc223e460d83171ddf0ba53a7229c +size 921687 diff --git a/data/2402.14797.png b/data/2402.14797.png new 
file mode 100644 index 0000000000000000000000000000000000000000..4403d90e2cc1affe95a3cad80290a2d5a38ec0c4 --- /dev/null +++ b/data/2402.14797.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f8588fe332e97362bae61f84e631a89d5bdef621b9b73d9e42989a0b10345c34 +size 1521492 diff --git a/data/2402.15017.png b/data/2402.15017.png new file mode 100644 index 0000000000000000000000000000000000000000..a8b8d7726adcda0880e06778eeafbb961e3909bd --- /dev/null +++ b/data/2402.15017.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2397b20814b9c77721a3444b903e3b8640212a26cf292d075a9e86602df53654 +size 663626 diff --git a/data/2402.15509.png b/data/2402.15509.png new file mode 100644 index 0000000000000000000000000000000000000000..f2be2ffdd2fa693c1d6d5c14badae24261d951e9 --- /dev/null +++ b/data/2402.15509.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:316a9cf9d8c2e72efcc26e86438a7beaaafd858c6c96f742570303dc236d1f52 +size 905212 diff --git a/data/2402.15584.png b/data/2402.15584.png new file mode 100644 index 0000000000000000000000000000000000000000..3ce52ad1e20d47cc760424b3c6862cfaae60aa41 --- /dev/null +++ b/data/2402.15584.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca1ad787d2d7e00f045deb9c5731456bad2a8058372ea1eaa8398c63d6bbdb18 +size 714197 diff --git a/data/2402.15865.png b/data/2402.15865.png new file mode 100644 index 0000000000000000000000000000000000000000..e095566711cd712fb8b83e751d79379ac813f4b8 --- /dev/null +++ b/data/2402.15865.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a0d2e6ad2cf605e336563a3f7a7313675faf09b14c43dc459d02d5f559bfbb57 +size 811546 diff --git a/data/2402.16174.png b/data/2402.16174.png new file mode 100644 index 0000000000000000000000000000000000000000..be607581e67e9f9ff9e474af871897f79421795d --- /dev/null +++ b/data/2402.16174.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:42f3ac49188fc952e57e5053b79e229feacbe1a4ffad440cc7a47ac98e8a8b71 +size 922076 diff --git a/data/2402.16407.png b/data/2402.16407.png new file mode 100644 index 0000000000000000000000000000000000000000..5d23339cd97fbb4162bfe9c746a3f3259c43ff96 --- /dev/null +++ b/data/2402.16407.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e765de8a83e8a7479b9eec50eb59dd711222e7522286b8ccc89ee6c01f6afe8 +size 848999 diff --git a/data/2402.16594.png b/data/2402.16594.png new file mode 100644 index 0000000000000000000000000000000000000000..63cfae6a5dff537ffc0069f8cc9136f73a171683 --- /dev/null +++ b/data/2402.16594.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c705ccaa532a4aa094af456a71e7ded227f637eb5cbbd72ec3aea4710fb2699 +size 711929 diff --git a/data/2402.16846.png b/data/2402.16846.png new file mode 100644 index 0000000000000000000000000000000000000000..34c030221bc2f7120bdb2b3ec1222fb83ee39e27 --- /dev/null +++ b/data/2402.16846.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83e519bd80339fc29c9bfd86b08b73d07584e1cc52bce9c5ea6ff257b1eac15d +size 1561500 diff --git a/data/2402.17062.png b/data/2402.17062.png new file mode 100644 index 0000000000000000000000000000000000000000..b235bee8f99df919b864437f5308cf144305d173 --- /dev/null +++ b/data/2402.17062.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b2d2691e1495879fae6184f83ee7a49914113a5cb15ea17f83d67d37443f420b +size 772447 diff --git a/data/2402.17065.png b/data/2402.17065.png new 
file mode 100644 index 0000000000000000000000000000000000000000..9566b0c6369b04a1b899d472ce0214f2aaef7402 --- /dev/null +++ b/data/2402.17065.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:171d6ed5937fba8fbd990964727a0d85e9fa24edfce7f23a5bb1ee4cc16721b5 +size 1335307 diff --git a/data/2402.17171.png b/data/2402.17171.png new file mode 100644 index 0000000000000000000000000000000000000000..015409765c7ac4e8f93d60638d967070e845fe49 --- /dev/null +++ b/data/2402.17171.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:349f6c2386000d318a01e02ecabe0ccb5b6b95b64f95d0a1688349db13bf1805 +size 1421382 diff --git a/data/2402.17172.png b/data/2402.17172.png new file mode 100644 index 0000000000000000000000000000000000000000..bfd4db4aa4ae24f4bdaff6cc29af44216c97d4a8 --- /dev/null +++ b/data/2402.17172.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8bcb4491e01d2897b458a70d87d254700dba931f4ec2b55157d78235d93185a7 +size 749945 diff --git a/data/2402.17200.png b/data/2402.17200.png new file mode 100644 index 0000000000000000000000000000000000000000..3924e61f5e9a8013808245221af9ec093a676008 --- /dev/null +++ b/data/2402.17200.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:54614e8b9c51dc27766c4803499d5b0fa9b8e02c89ec5f8eb9db8fa6353b4ea3 +size 863889 diff --git a/data/2402.17210.png b/data/2402.17210.png new file mode 100644 index 0000000000000000000000000000000000000000..8a8a258d2eabf174c2d5d8379432cb7ccde584f0 --- /dev/null +++ b/data/2402.17210.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8526317511a4af953218ca9c3312d9756f91411127c466a4506be7ef1078422a +size 772827 diff --git a/data/2402.17228.png b/data/2402.17228.png new file mode 100644 index 0000000000000000000000000000000000000000..a97b7ff6cbb4817b5725653198cdf37645cd7a18 --- /dev/null +++ b/data/2402.17228.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c75b1843e5ce4c15071dba7d106ca4ca76573af491c4222d85ab66509c96cbd4 +size 854802 diff --git a/data/2402.17229v1.png b/data/2402.17229v1.png new file mode 100644 index 0000000000000000000000000000000000000000..f6d9d571497b54e928c91f0cb070d91282b83779 --- /dev/null +++ b/data/2402.17229v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:241985d201f0e87fd37eba641cbced5210cf0f38966f1c5d498a573c9c4a6b0a +size 763399 diff --git a/data/2402.17251.png b/data/2402.17251.png new file mode 100644 index 0000000000000000000000000000000000000000..e4d8777040dd5465c3ad67b9917b1516d5f7c67e --- /dev/null +++ b/data/2402.17251.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4cc272150ca836f79f9016e1ed82a85b4cb672294a14d542f8045532736dea5f +size 731477 diff --git a/data/2402.17275.png b/data/2402.17275.png new file mode 100644 index 0000000000000000000000000000000000000000..92dfd3442cecefb1890bd1658d4de684ddb318ed --- /dev/null +++ b/data/2402.17275.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8b7e872a20dbe3c68657d474817c29f9f7eaf824d0067e05f19beba4b85db593 +size 1564069 diff --git a/data/2402.17292v1.png b/data/2402.17292v1.png new file mode 100644 index 0000000000000000000000000000000000000000..d7ec091b768c7cfd524e75a384061779a84ec664 --- /dev/null +++ b/data/2402.17292v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:af37ffd82fe6cece350d716c065fa51b4136eac2fee3075f31f93e60bd415ae4 +size 756560 diff --git a/data/2402.17300v1.png 
b/data/2402.17300v1.png new file mode 100644 index 0000000000000000000000000000000000000000..49b163fa26c28def2150f3a40b2e9dbe0ced8097 --- /dev/null +++ b/data/2402.17300v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f85e35fbfb1c57ef6e68760d2e5cf7e930d58e546090809f6fb5ae8feaac8fc +size 868882 diff --git a/data/2402.17323.png b/data/2402.17323.png new file mode 100644 index 0000000000000000000000000000000000000000..7ce433d4a3437c498ea49c20f95d203e0de5194c --- /dev/null +++ b/data/2402.17323.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d21a717aacfc09fd23310374a221c00443fc062479f72b7eb76dcce1fd0d835f +size 946737 diff --git a/data/2402.17351.png b/data/2402.17351.png new file mode 100644 index 0000000000000000000000000000000000000000..6460000abf4db9da9748d26c524ebcac7a03c066 --- /dev/null +++ b/data/2402.17351.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd7aafaf5491abe14fb68202a61e360354c0b27d11fd3ddd283bc2caa7ec3f79 +size 858236 diff --git a/data/2402.17364v1.png b/data/2402.17364v1.png new file mode 100644 index 0000000000000000000000000000000000000000..bc35ffc0a15f8d65faceb16f961393b42545d6bf --- /dev/null +++ b/data/2402.17364v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b52388e15f9e90f3b898ada9656adccc7b07cc9a6dfa73abc3de13d4cd5f7ef +size 825718 diff --git a/data/2402.17376.png b/data/2402.17376.png new file mode 100644 index 0000000000000000000000000000000000000000..5d5e0c7aaf1391fe1913f1213a8bdecb936749b7 --- /dev/null +++ b/data/2402.17376.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8a0713fc28f2bf3004c2ff2e19139cfbe03d9f70477946cd2735ac0c5841460f +size 823522 diff --git a/data/2402.17414v1.png b/data/2402.17414v1.png new file mode 100644 index 0000000000000000000000000000000000000000..832989fa837ecdaa5e448694b309046eb7b0f8f1 --- /dev/null +++ b/data/2402.17414v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:71ee053623a8bb3f864da00b5d644559ebb9fd92ffcaa4cd4c24b159d671d43b +size 744203 diff --git a/data/2402.17417.png b/data/2402.17417.png new file mode 100644 index 0000000000000000000000000000000000000000..5fc23bf58b6b568e1eb30af1a3ed0eed8f6cc7b7 --- /dev/null +++ b/data/2402.17417.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:95fdaf0f1b9e0d363d197a24b2f50bf9236a9e50e52c1bbc4201c56247cf2a14 +size 705347 diff --git a/data/2402.17427.png b/data/2402.17427.png new file mode 100644 index 0000000000000000000000000000000000000000..5d4bd4fe486bebde4c35580b80c8ab2608d62493 --- /dev/null +++ b/data/2402.17427.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1fc00dbc5ec670681f84014659ae0fa2253d1e33497252bdd8828cf22b3ea9ef +size 1483570 diff --git a/data/2402.17464.png b/data/2402.17464.png new file mode 100644 index 0000000000000000000000000000000000000000..7b69f5245d4a501afa57c46f4ae386c607b4ce78 --- /dev/null +++ b/data/2402.17464.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7029d04455bc11a3a55762db910d92934e3258104fd001325a41af03a98b91f8 +size 830247 diff --git a/data/2402.17483.png b/data/2402.17483.png new file mode 100644 index 0000000000000000000000000000000000000000..a110a7deb5ce4023f4ad44a21f469949d66581de --- /dev/null +++ b/data/2402.17483.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c44515cb56d8f3977932ec3dd8bfb2f48c345bf9bce66798cfee18ab4958ea9e +size 940494 diff --git 
a/data/2402.17562v1.png b/data/2402.17562v1.png new file mode 100644 index 0000000000000000000000000000000000000000..f48f4faebf89129eafb97c4fe2c71463eb439ce5 --- /dev/null +++ b/data/2402.17562v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f6ba5bbe0f338fb00d5da5d2a508e231bf17991d5062249f82bf5b9dc0b97ab7 +size 814694 diff --git a/data/2402.17563v1.png b/data/2402.17563v1.png new file mode 100644 index 0000000000000000000000000000000000000000..d429cb302651a964ec06557dfe3270d71331a1f3 --- /dev/null +++ b/data/2402.17563v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e4c4c748bbc77844676b13ab3cd2555f122e531d7c05733d0bf232a0c81d7742 +size 792113 diff --git a/data/2402.17587.png b/data/2402.17587.png new file mode 100644 index 0000000000000000000000000000000000000000..c4126af8469c7d4333a26204966825275466ea81 --- /dev/null +++ b/data/2402.17587.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8daaef65857ede11e7f57fb83e24132d830b8af5628b696932096333427b712a +size 1288913 diff --git a/data/2402.17614.png b/data/2402.17614.png new file mode 100644 index 0000000000000000000000000000000000000000..be61f8a1c9943b7c6dc1f976ab932e14830b5d9d --- /dev/null +++ b/data/2402.17614.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b1276c1ef64cd4f0b04cbaa21a8d64f8be71e0ff5068a6b938fbdde9161ecebd +size 757265 diff --git a/data/2402.17664.png b/data/2402.17664.png new file mode 100644 index 0000000000000000000000000000000000000000..02d45bc1b366eac5f46bc8e900083f0e4aaf62fe --- /dev/null +++ b/data/2402.17664.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:84d1879fe5034fa2cbeb22a68c314fe23c1d51030cd1c8a5c6f1542123884aa1 +size 1009968 diff --git a/data/2402.17678.png b/data/2402.17678.png new file mode 100644 index 0000000000000000000000000000000000000000..6c0c5123ad32deec75e8f2c0ecd08cdc9e383233 --- /dev/null +++ b/data/2402.17678.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6095580750363a81b8130172c922d27550631c690658e9b03c6f8c847345c9de +size 774322 diff --git a/data/2402.17723.png b/data/2402.17723.png new file mode 100644 index 0000000000000000000000000000000000000000..1ca331592382eef7628a8c2de4efddbd04c7e56b --- /dev/null +++ b/data/2402.17723.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1989179501517437c4e4bf35d4f940c54be07448b09e6c7b473dd558905575b +size 783706 diff --git a/data/2402.17726.png b/data/2402.17726.png new file mode 100644 index 0000000000000000000000000000000000000000..26256119622e09c723c5f3fd9a4b21c9f042ae73 --- /dev/null +++ b/data/2402.17726.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5fdeab1e76882ddf5772f212abf161469c78346f12baf06d38291bbd984bd268 +size 995963 diff --git a/data/2402.17729.png b/data/2402.17729.png new file mode 100644 index 0000000000000000000000000000000000000000..8d8c06ca6b9a2c19034723baa6a47ccad399cadc --- /dev/null +++ b/data/2402.17729.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:33ce533bff7f5f7dad54ab40ec3494a3ad4f54f04906cf8ad80eee31582ec268 +size 784741 diff --git a/data/2402.17862v1.png b/data/2402.17862v1.png new file mode 100644 index 0000000000000000000000000000000000000000..b27fcd57aacc4ffe2c0dafd290cc4b0ebc5ac11f --- /dev/null +++ b/data/2402.17862v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:574736722d955a2fa838026e09549049f0e938287ac1af165f65d0e2650bf3be +size 
785253 diff --git a/data/2402.17951.png b/data/2402.17951.png new file mode 100644 index 0000000000000000000000000000000000000000..6b66c2305d8971ac1379e55676932f7f50de671c --- /dev/null +++ b/data/2402.17951.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aa0c7c78425d06ebc37238ea90b765f9ff5d67323fbb924f721f7774944d44c9 +size 862401 diff --git a/data/2402.18078.png b/data/2402.18078.png new file mode 100644 index 0000000000000000000000000000000000000000..9ece4f418dd277946112c81b5ce4e851f4158e78 --- /dev/null +++ b/data/2402.18078.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:704fcb149faf8b450fb485589e862a5f09740925eb86800bc15952a059cad14a +size 1026430 diff --git a/data/2402.18091.png b/data/2402.18091.png new file mode 100644 index 0000000000000000000000000000000000000000..1db8a0c63895e62e81ed3e7b5af47c8eaf07bc8a --- /dev/null +++ b/data/2402.18091.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:59abb366ff0fe865954bfbd3cd10b717e2915f4245c6c0ac1e96cd4e3e5c8644 +size 1004124 diff --git a/data/2402.18102.png b/data/2402.18102.png new file mode 100644 index 0000000000000000000000000000000000000000..3596d72bc94611955b726842fefab14d2fb8154a --- /dev/null +++ b/data/2402.18102.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4690e1ee312aa0b8818894cdb2300cd4e5d90fc4b38559569a9f8fead5dc21d4 +size 990169 diff --git a/data/2402.18115.png b/data/2402.18115.png new file mode 100644 index 0000000000000000000000000000000000000000..63a9ff1c3958981bce762f684a5f522dbce62bf4 --- /dev/null +++ b/data/2402.18115.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:590790322351ab68cb45ecf23e8e6328df9bdbccc2d03b4e3701ccd98859c1cd +size 1181422 diff --git a/data/2402.18133.png b/data/2402.18133.png new file mode 100644 index 0000000000000000000000000000000000000000..9787f208738e894084436924069d4084f2af423c --- /dev/null +++ b/data/2402.18133.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a7d0d033c49369c7bcc1f606ad9a84216e78e0276ed0e98be0b96a3a4035203d +size 803701 diff --git a/data/2402.18146.png b/data/2402.18146.png new file mode 100644 index 0000000000000000000000000000000000000000..f189be3db3783a3393fa5399a04ecbd91aa0ce25 --- /dev/null +++ b/data/2402.18146.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1c062739941a0244e5410f8b42a904137b23fc70180f0a9af951faef652ba2a0 +size 926434 diff --git a/data/2402.18152.png b/data/2402.18152.png new file mode 100644 index 0000000000000000000000000000000000000000..5bc06bf41462aade21c89b2577f3482a627601ad --- /dev/null +++ b/data/2402.18152.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:de506dbc7b92e38880057b817c9b0eda3b1bf3145df3ec58befb9fc5a7883a27 +size 680282 diff --git a/data/2402.18162.png b/data/2402.18162.png new file mode 100644 index 0000000000000000000000000000000000000000..2ee43ef3d982baa5064f9e97cea5a3833114a512 --- /dev/null +++ b/data/2402.18162.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5cbbe2af9d47359ac180df2316b307d7e4d82a1f2f4065a94702da890bf4750b +size 506471 diff --git a/data/2402.18192v1.png b/data/2402.18192v1.png new file mode 100644 index 0000000000000000000000000000000000000000..ffcb28f2d8361aa3b53211fb9892363e887711ac --- /dev/null +++ b/data/2402.18192v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3bb096fb4418e5e9aa5df304c71b4a3b0cdf52417055c90ead3ec7ebf432347 
+size 2002404 diff --git a/data/2402.18206.png b/data/2402.18206.png new file mode 100644 index 0000000000000000000000000000000000000000..fcd5298a407a15fdb44ac155c4ab5a25622aedb7 --- /dev/null +++ b/data/2402.18206.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:47d217cb72f422ddb7e822e967458e1e3ed24cb5b9b6d485ac68879133ce4e72 +size 1015201 diff --git a/data/2402.18277.png b/data/2402.18277.png new file mode 100644 index 0000000000000000000000000000000000000000..c6259296f1fa078b78a019cdd217e5ab319d7609 --- /dev/null +++ b/data/2402.18277.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eab0f0a942d64c53c145929b36f7fe79cd7122e643de3c6b4a5962ec2c6491d7 +size 844557 diff --git a/data/2402.18330.png b/data/2402.18330.png new file mode 100644 index 0000000000000000000000000000000000000000..1244cc22624ab2ca97bf74220108cf7ac7854b18 --- /dev/null +++ b/data/2402.18330.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d8266c99382f2cc8429c08a99adaf45ebfa7e22906cb9004a27755dd155ffdd +size 766951 diff --git a/data/2402.18372.png b/data/2402.18372.png new file mode 100644 index 0000000000000000000000000000000000000000..e648ac4706c504fc299ddd8d75f261b639801eb7 --- /dev/null +++ b/data/2402.18372.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c7a607862076dfe431d98bcf892f49fb0d49a560434b823271f03db68957124a +size 744002 diff --git a/data/2402.18447.png b/data/2402.18447.png new file mode 100644 index 0000000000000000000000000000000000000000..3250fc11c1b446a40a2b33d2a68eda48ec99f4dd --- /dev/null +++ b/data/2402.18447.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f0628456d35534098e35f0a9ba4afb738a7e81fe5f213ae85a125af382f937a9 +size 758607 diff --git a/data/2402.18467.png b/data/2402.18467.png new file mode 100644 index 0000000000000000000000000000000000000000..05d5597621d32fa43cb64d1ef346fc05c88ed283 --- /dev/null +++ b/data/2402.18467.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:20a9d66c6030adff7074bd42cba7de3d0411c84a6840d74c0a780ba594f5cd61 +size 846743 diff --git a/data/2402.18490.png b/data/2402.18490.png new file mode 100644 index 0000000000000000000000000000000000000000..54c2049d1abd61181de04f51b0d498de0b3adbdd --- /dev/null +++ b/data/2402.18490.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:54910a25958ee147f7c84a0f406f78304a489f8cf57ee7d0be7861eed6e44380 +size 708494 diff --git a/data/2402.18528.png b/data/2402.18528.png new file mode 100644 index 0000000000000000000000000000000000000000..ce2e08f85e6a40ee1e966d525fac18bb9891d39e --- /dev/null +++ b/data/2402.18528.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:33bdaf927554b129fc449b612c54efdea379d733d71fd27a89cefa8807ba2e4c +size 746357 diff --git a/data/2402.18573.png b/data/2402.18573.png new file mode 100644 index 0000000000000000000000000000000000000000..e15fb5e467e68057cc9747e50f818b08b17104fa --- /dev/null +++ b/data/2402.18573.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61479d410d74c329de5fa95cdf8f4779c49fd9a80bea89a33add815a71c719d6 +size 1179643 diff --git a/data/2402.18771v2.png b/data/2402.18771v2.png new file mode 100644 index 0000000000000000000000000000000000000000..d5e76581b57b275efaa118bb2a0b6ede4fdda517 --- /dev/null +++ b/data/2402.18771v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:3973b284b18fb38cef84ad1d05a7e4eb52748f4639bb082bb5b5a7914779bc6f +size 1027501 diff --git a/data/2402.18786.png b/data/2402.18786.png new file mode 100644 index 0000000000000000000000000000000000000000..bf6ce41f9b7c0edd26fb0d06543a560c0f34675c --- /dev/null +++ b/data/2402.18786.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5e955436319c675cc8f1bd3efc6866015d6621e89540101c965a508d9f1d7026 +size 805549 diff --git a/data/2402.18817.png b/data/2402.18817.png new file mode 100644 index 0000000000000000000000000000000000000000..b66fafdb90780f482d5c3b3bf7adaf86785020ff --- /dev/null +++ b/data/2402.18817.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0857e0467ee00aa6a49c4dca6ea896c31bd0eeb5e656c08fbfbb080d5aeced01 +size 831678 diff --git a/data/2402.18842.png b/data/2402.18842.png new file mode 100644 index 0000000000000000000000000000000000000000..bb09a665e08a438659407260bde58d989e10aa82 --- /dev/null +++ b/data/2402.18842.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d123aec2e18cbf4f68ed33d01b5ed99c22184921e0cd133bed28c93b8be6f551 +size 944606 diff --git a/data/2402.18848.png b/data/2402.18848.png new file mode 100644 index 0000000000000000000000000000000000000000..d969a9ee96ac0179d3249a6893f0d0c6ce8dbee1 --- /dev/null +++ b/data/2402.18848.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2610021e0d0f7f0829231e5f26b698204cb41e533539d437ee04b61c8f24ce64 +size 1554068 diff --git a/data/2402.18853.png b/data/2402.18853.png new file mode 100644 index 0000000000000000000000000000000000000000..cd5a8b237d0e92fd23ad069c8b9fd08f4e86c8e2 --- /dev/null +++ b/data/2402.18853.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:114e0b6a4423f847eecc51b9fae58077eb181151a8dfee58662317c79049d20b +size 770673 diff --git a/data/2402.18862.png b/data/2402.18862.png new file mode 100644 index 0000000000000000000000000000000000000000..d668dbf9a7ec95fe02b553a0e9bfada38670ad1e --- /dev/null +++ b/data/2402.18862.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1151a82e47968c4356be26afb6c788bf512c1e6a972a5073ac72aae2764851a8 +size 893365 diff --git a/data/2402.18919.png b/data/2402.18919.png new file mode 100644 index 0000000000000000000000000000000000000000..46369a6845fdab25bb11543562db343064793ced --- /dev/null +++ b/data/2402.18919.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d342b2f607096b2816807fdf3a77f609faae5611cacc45785d3f13fb090e2e34 +size 689094 diff --git a/data/2402.18920.png b/data/2402.18920.png new file mode 100644 index 0000000000000000000000000000000000000000..dadb7c228d7e884e770e1a97c3ea10ebe6f6f1c7 --- /dev/null +++ b/data/2402.18920.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f31144f72a4f37ce9fdbd3f7f898a371b1bf5973281f40390190ea6d1cbc833 +size 902198 diff --git a/data/2402.18929.png b/data/2402.18929.png new file mode 100644 index 0000000000000000000000000000000000000000..d3af6d281806739fb44026f946cd11adaf93787b --- /dev/null +++ b/data/2402.18929.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fa625d5aaf07a3e52161a325d36262df797978474382b6f04b3290d36c7e63f9 +size 1210868 diff --git a/data/2402.18933.png b/data/2402.18933.png new file mode 100644 index 0000000000000000000000000000000000000000..5a5feb812e0796ce697786d04254469167a90057 --- /dev/null +++ b/data/2402.18933.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:0155af1fc4e15b0543f2190855db2cbc55033ca9a3fafda28a7dc8c2775bc8f8 +size 806417 diff --git a/data/2402.18934.png b/data/2402.18934.png new file mode 100644 index 0000000000000000000000000000000000000000..d698bc1edc7d2478d0d344ff4c5049309abfb24f --- /dev/null +++ b/data/2402.18934.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7fcafd8114875bcf75fd5b69407f1b0376927c57637807d371ff341a9401f217 +size 1143324 diff --git a/data/2402.18956.png b/data/2402.18956.png new file mode 100644 index 0000000000000000000000000000000000000000..fbf88b853d8f32be5058dcfc992900d9bd2c58d6 --- /dev/null +++ b/data/2402.18956.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce7a8a18c1e38dea310e50f7e3fd482ffc70b4aa640347f9ab5886bf365e3286 +size 746435 diff --git a/data/2402.18969.png b/data/2402.18969.png new file mode 100644 index 0000000000000000000000000000000000000000..4394c1732a9afb6feba9ec10016f89b18a114134 --- /dev/null +++ b/data/2402.18969.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:71d1af9281c241f97efe91e50b711081196b7d916a23d36c531aa5bea0b8db87 +size 884623 diff --git a/data/2402.18975v1.png b/data/2402.18975v1.png new file mode 100644 index 0000000000000000000000000000000000000000..5c9c26c0ea1ff53f7b37fbf1a9064cf414f9cd3d --- /dev/null +++ b/data/2402.18975v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:37e1a45cb520cd1ef5bf0f614083b5052c366c649236c3e8837d65cecbc98583 +size 793804 diff --git a/data/2402.19002.png b/data/2402.19002.png new file mode 100644 index 0000000000000000000000000000000000000000..013bc69d61b9cbb179c07afab05eb06243b934f2 --- /dev/null +++ b/data/2402.19002.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b648ae6cc6391cf8742c4252ccbd697f2cd88bd76b03ac48a71db958960f9cd +size 737345 diff --git a/data/2402.19014.png b/data/2402.19014.png new file mode 100644 index 0000000000000000000000000000000000000000..9bc8991163b6fc64ecbebf4934cf65fc4d60d58f --- /dev/null +++ b/data/2402.19014.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:82c0d7bd089ba66f5e7b11c6760e8754e3aa896735cceb5b9283466038652be5 +size 739460 diff --git a/data/2402.19082.png b/data/2402.19082.png new file mode 100644 index 0000000000000000000000000000000000000000..0c202ed9c37ffcb2cfac1da47b3d0fc3e18b9d38 --- /dev/null +++ b/data/2402.19082.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:953125a2fd25b149b228f85a637fc566056a5821f152a4b46a203e42c81190f9 +size 905600 diff --git a/data/2402.19122.png b/data/2402.19122.png new file mode 100644 index 0000000000000000000000000000000000000000..2e8ddc36b7cd54922f8460eafe885fea7363d381 --- /dev/null +++ b/data/2402.19122.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:25b055ef45f64b511eb9d0cdc7627a026458c59965b0c0412e11c47336cdb499 +size 806806 diff --git a/data/2402.19144.png b/data/2402.19144.png new file mode 100644 index 0000000000000000000000000000000000000000..c434fd0cedd7bd2a0e3f1fe665903af67cc266b9 --- /dev/null +++ b/data/2402.19144.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7f114f068a5b196cb904a41a6c8148899e8ac38aec3fb4cd03553660ccf94645 +size 823498 diff --git a/data/2402.19161v1.png b/data/2402.19161v1.png new file mode 100644 index 0000000000000000000000000000000000000000..5592a1eb924ada3fae5f458a4ffa909f1608e76f --- /dev/null +++ b/data/2402.19161v1.png @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:5b11e54d979d9117a4c77af98865456d9b99aa6d5e3c61c2f12192a141f23732 +size 836249 diff --git a/data/2402.19231.png b/data/2402.19231.png new file mode 100644 index 0000000000000000000000000000000000000000..20c473f007bfbf5b609e9b68bd99661dff9ba220 --- /dev/null +++ b/data/2402.19231.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c1adfe7f07509515a7c84a326d8e3393636e5d16486086ef88ccf830c0e88d9 +size 758709 diff --git a/data/2402.19270.png b/data/2402.19270.png new file mode 100644 index 0000000000000000000000000000000000000000..511137edba7df37fd4255ed49a4fcd16128151e1 --- /dev/null +++ b/data/2402.19270.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c88e4affac8b255df7c64d8269a214017195e0eb885eb6fb2033b87f82382273 +size 772684 diff --git a/data/2402.19276.png b/data/2402.19276.png new file mode 100644 index 0000000000000000000000000000000000000000..a2746de2ff645a4c781760a22138112016fc07e4 --- /dev/null +++ b/data/2402.19276.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:97b53ed2ff30951daeb2426144e0309a6150d1d17a1bace3da180455b3369d6e +size 764987 diff --git a/data/2402.19286.png b/data/2402.19286.png new file mode 100644 index 0000000000000000000000000000000000000000..0373a2d2f9ac2df01b0d9714faaf32e6a67c2b61 --- /dev/null +++ b/data/2402.19286.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:da0f246f66fb255ac45c4349d8e2ae529cf8ce8d02a9fb8791e1a17bca3b0de7 +size 766324 diff --git a/data/2402.19289v2.png b/data/2402.19289v2.png new file mode 100644 index 0000000000000000000000000000000000000000..83ddd005182bd2b7b0651771b36284f65e1dc5c6 --- /dev/null +++ b/data/2402.19289v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:47f6e7179ec0be249f4cdcf6d09f7c6ad7ba732cbb87190da893d8840800915d +size 949767 diff --git a/data/2402.19298.png b/data/2402.19298.png new file mode 100644 index 0000000000000000000000000000000000000000..cdeaf47049a0bdd23a87ed3dc0b6dc75fd4785ed --- /dev/null +++ b/data/2402.19298.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2b628c40ee005521b9916cebd4abf69b3903988fbbcc7d3bb0f3ec31c7122481 +size 893561 diff --git a/data/2402.19302.png b/data/2402.19302.png new file mode 100644 index 0000000000000000000000000000000000000000..bd371e7c3711eba9c6807b94aefc8845d27ce7c6 --- /dev/null +++ b/data/2402.19302.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2611649122a4428a383ea7048913757967a2e1344171ef049ce1413cb0ed9110 +size 870998 diff --git a/data/2402.19326.png b/data/2402.19326.png new file mode 100644 index 0000000000000000000000000000000000000000..041605edf2f2f694d8bdcd6de670f1a5837a6342 --- /dev/null +++ b/data/2402.19326.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d4111bddea53e9f2bae7c0a097b4bc8052db564605128b5dac666cae30409f33 +size 876890 diff --git a/data/2402.19387.png b/data/2402.19387.png new file mode 100644 index 0000000000000000000000000000000000000000..506e41ed21cf04058fd5f2705573d540bee246eb --- /dev/null +++ b/data/2402.19387.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2ddc67b8d67ea73d5d8f7ae9df88a158aba9593bbfd29c2e4ff17e626f8a14fa +size 786549 diff --git a/data/2402.19422.png b/data/2402.19422.png new file mode 100644 index 0000000000000000000000000000000000000000..1ed69870e232c000825f39835199aa83b5e21c9a --- /dev/null +++ b/data/2402.19422.png @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:2a2c450f750d15d9da4e6cb4bba923e8551555a746680ef61448e5e447db7a64 +size 764852 diff --git a/data/2402.19463.png b/data/2402.19463.png new file mode 100644 index 0000000000000000000000000000000000000000..3d51fcde15d57f90b8d7aa8a418e32df56aea1db --- /dev/null +++ b/data/2402.19463.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0c29a94b8551331780fe06880ad8363c3f69f8da801d9a78c990b54284781cfd +size 965488 diff --git a/data/2402.19470.png b/data/2402.19470.png new file mode 100644 index 0000000000000000000000000000000000000000..585172e32551be98839835b2693ea8d030729ac0 --- /dev/null +++ b/data/2402.19470.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f64560106a84ea7389d319c7d0ec5d718320c5261f469e2b17f60315b1675644 +size 764321 diff --git a/data/2402.19479.png b/data/2402.19479.png new file mode 100644 index 0000000000000000000000000000000000000000..da5ff81adefde8f097d4114ce18eeaa5a37c47af --- /dev/null +++ b/data/2402.19479.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:90b9552a1edb0fa2d642257a2e597fc4c33d04e48d2754fee42d7fce8477d702 +size 1493571 diff --git a/data/2402.19481.png b/data/2402.19481.png new file mode 100644 index 0000000000000000000000000000000000000000..32922184eb0db6c355e63e4415ab614c7198dc7d --- /dev/null +++ b/data/2402.19481.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6023b14c39d35b04c528bc37482fa79e3d2e2a2e9f718a6b4d10d01a9420bb4e +size 1500081 diff --git a/data/2403.00041.png b/data/2403.00041.png new file mode 100644 index 0000000000000000000000000000000000000000..2ecd8e338d37aaeecd8f05cc2a85936f6d044f03 --- /dev/null +++ b/data/2403.00041.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4351794c235050d5ac8fb0d0c03cd6454d09e590105ad0139f4ddc44c6bbb448 +size 769008 diff --git a/data/2403.00272.png b/data/2403.00272.png new file mode 100644 index 0000000000000000000000000000000000000000..37d7073d1db6ee1da93ea1c7b9ad18d438a90208 --- /dev/null +++ b/data/2403.00272.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:91acc3538afc1083029abbc75a4c3e733c5d458d15da64cd6a96307f35455a3a +size 789746 diff --git a/data/2403.00274.png b/data/2403.00274.png new file mode 100644 index 0000000000000000000000000000000000000000..f507536d087aeceace8c1a33961c060d637c6b33 --- /dev/null +++ b/data/2403.00274.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7524ac5607f8d0804701bb6eb207958467d21065bda0274965e90d03e45e271f +size 841431 diff --git a/data/2403.00303.png b/data/2403.00303.png new file mode 100644 index 0000000000000000000000000000000000000000..879a17f171e17749ba725fb0911a56e148b09e50 --- /dev/null +++ b/data/2403.00303.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4df5a85a2e07e8c13fcadce8e15f26de05427a69188bc65d70427ff6192fe850 +size 777101 diff --git a/data/2403.00372.png b/data/2403.00372.png new file mode 100644 index 0000000000000000000000000000000000000000..08e157f61ab40dd9f792aa22784af30de33abd2d --- /dev/null +++ b/data/2403.00372.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0d2d7bdba177c5dd65a3ac0a485640ddd88dd968552cb2611f3ab989dd38d0b2 +size 784276 diff --git a/data/2403.00436.png b/data/2403.00436.png new file mode 100644 index 0000000000000000000000000000000000000000..ab150d909e33300643771ac24c3ba7a1d2bd70f7 --- /dev/null +++ b/data/2403.00436.png @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:392e77dd88d9df1787f2e13a8a8d737da251c84162b41dc85ec5512199818ae1 +size 783718 diff --git a/data/2403.00459.png b/data/2403.00459.png new file mode 100644 index 0000000000000000000000000000000000000000..56c3b5671214673e2367db3ded36fef509489f54 --- /dev/null +++ b/data/2403.00459.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d19ef43c935969df6337e27f820e5c807d6256160ceab514d125f0e84de9e1b +size 1362066 diff --git a/data/2403.00483.png b/data/2403.00483.png new file mode 100644 index 0000000000000000000000000000000000000000..7b44bb3b79d11c22359bc486f1b9958eb9f14c57 --- /dev/null +++ b/data/2403.00483.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e47f65b9caf1259fba5cb87792e843555e7104ca232ea4c6ec434165f423baae +size 938841 diff --git a/data/2403.00486.png b/data/2403.00486.png new file mode 100644 index 0000000000000000000000000000000000000000..775ccdaea1ad1d2d72e81e5d3c68d2786d6b6b77 --- /dev/null +++ b/data/2403.00486.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:758c029a1d78009b421e89586ea76117b0d77783435521f6ca5e0f1d85ae33c0 +size 873269 diff --git a/data/2403.00543.png b/data/2403.00543.png new file mode 100644 index 0000000000000000000000000000000000000000..f57e1149e178d4512b460c765e191d6004847439 --- /dev/null +++ b/data/2403.00543.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e19249cff1788c7f72c39269ed50d65a81e6d382982201a1d6dadcbc4978594 +size 735004 diff --git a/data/2403.00567.png b/data/2403.00567.png new file mode 100644 index 0000000000000000000000000000000000000000..72e79acc75f5d8a0d7de6e0c5150e9196c1b27ea --- /dev/null +++ b/data/2403.00567.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:36872decb9e4dac5833c5d23a662e207cc1eb936cf5a364d9f104545fbb4fb74 +size 812291 diff --git a/data/2403.00592.png b/data/2403.00592.png new file mode 100644 index 0000000000000000000000000000000000000000..28d185801f34d0b24eb136c2b72771d3e1b905ec --- /dev/null +++ b/data/2403.00592.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ac64d40ea2c8cd101cecbc4ce2b917b15c0711dd3416970d2246f1278a0399fc +size 801764 diff --git a/data/2403.00644.png b/data/2403.00644.png new file mode 100644 index 0000000000000000000000000000000000000000..73d00c1d8618afb309085b1933b1f17b1241a78e --- /dev/null +++ b/data/2403.00644.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:47daefdf77862921d9b8228eeece69f34e81a889d199f89ecb8ebf8ca64d80c9 +size 1344539 diff --git a/data/2403.00691.png b/data/2403.00691.png new file mode 100644 index 0000000000000000000000000000000000000000..b3291901183d6f59017e59e7a117458fbe9f3936 --- /dev/null +++ b/data/2403.00691.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:19acc554ff0c9867c32685275ac263dc751f416137a805f35ecbb59f62978ed0 +size 843060 diff --git a/data/2403.00712.png b/data/2403.00712.png new file mode 100644 index 0000000000000000000000000000000000000000..d2acccdcc12dac17289ebe51e35d3b6515d6d018 --- /dev/null +++ b/data/2403.00712.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1731ac081f0c013941e95cddbc6eb9669da5c7767e0e3e4e7dd28d9921fbd049 +size 1641850 diff --git a/data/2403.00939.png b/data/2403.00939.png new file mode 100644 index 0000000000000000000000000000000000000000..37a46dc35411865a12d9b9df04feb3438cad1905 --- /dev/null +++ b/data/2403.00939.png @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:e2997fc5a5358329cebc790abb31ee71526e95ce989fb348160cb2ca0df91acd +size 1152357 diff --git a/data/2403.01053.png b/data/2403.01053.png new file mode 100644 index 0000000000000000000000000000000000000000..eda3c913da9c40636bec13d532fb0a1fbb4e8d05 --- /dev/null +++ b/data/2403.01053.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d2589a34d475b18c695ec2469c0a3ff3ffd511dbd0e6afea20605e6885a3002 +size 883020 diff --git a/data/2403.01105.png b/data/2403.01105.png new file mode 100644 index 0000000000000000000000000000000000000000..e675debc756aa6323dbdc990007185588fb8c67a --- /dev/null +++ b/data/2403.01105.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6cb6a3dbe51bcfd7879daa101224ae3ec7d90a851a6a993963d3b7805de656be +size 832420 diff --git a/data/2403.01124.png b/data/2403.01124.png new file mode 100644 index 0000000000000000000000000000000000000000..c7988c0825193b02848fa58619d5221d109c9366 --- /dev/null +++ b/data/2403.01124.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:509ca210d7f1f340081be3fc1d04729fd5f09e3de306c62d25dea87825f99d41 +size 803797 diff --git a/data/2403.01226.png b/data/2403.01226.png new file mode 100644 index 0000000000000000000000000000000000000000..3f0bbcb939a7b98d588fd0e64e3c593eac6c9d60 --- /dev/null +++ b/data/2403.01226.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:642666e6a5ec4fca62fd8659629d1042f3c738666ab6a15957080f8474a2784a +size 755119 diff --git a/data/2403.01231.png b/data/2403.01231.png new file mode 100644 index 0000000000000000000000000000000000000000..5b1cf33784ac0de09a89781d22a146b979233c1c --- /dev/null +++ b/data/2403.01231.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ab3c431079690f6752e26e1636c53fef80df20ad41a66f561709e43e29ab4286 +size 1182786 diff --git a/data/2403.01238.png b/data/2403.01238.png new file mode 100644 index 0000000000000000000000000000000000000000..1dd9d0b2d433aa02a4dfd46bba38258516a5e6cd --- /dev/null +++ b/data/2403.01238.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c1f89f3882e2c3adab80bfcd009d8a283022f1033cf08d5ff2ddba098b6507e +size 739082 diff --git a/data/2403.01300.png b/data/2403.01300.png new file mode 100644 index 0000000000000000000000000000000000000000..5f23a250db32987cb81dfd280f74e29197b4a556 --- /dev/null +++ b/data/2403.01300.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1456b82b2a7f148173190024ae301795d8b0494d7794b76e809d3abab944228f +size 839200 diff --git a/data/2403.01316.png b/data/2403.01316.png new file mode 100644 index 0000000000000000000000000000000000000000..f4ecbf1ba710c5a05e02f63b3b42d57c09c6e9e8 --- /dev/null +++ b/data/2403.01316.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f998045dc2f06d44ec5143dfc470e1263460881840716da26f80bbde05bd5d2a +size 1446372 diff --git a/data/2403.01325.png b/data/2403.01325.png new file mode 100644 index 0000000000000000000000000000000000000000..8c9d80c081a3d9d6e8c9585fa26311f9c781709d --- /dev/null +++ b/data/2403.01325.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dd0e76187b2206bf259baf74aed33120118a86bd19d81cf85af9819b3e43b5bf +size 884465 diff --git a/data/2403.01414.png b/data/2403.01414.png new file mode 100644 index 0000000000000000000000000000000000000000..247cd251e8a4434afd1c584a5e756bc75c6eff09 --- /dev/null +++ b/data/2403.01414.png @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:4289de597e5a9650cb1fe9d500b1591bfe8abe9728f370714627260f11b7c328 +size 803982 diff --git a/data/2403.01427.png b/data/2403.01427.png new file mode 100644 index 0000000000000000000000000000000000000000..58d35361f20b1f62c87c576022eef9d0f64b4a77 --- /dev/null +++ b/data/2403.01427.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:385ad4f1484785601259563b6b21983f62547d9cff88956284eb26b12dd19503 +size 836692 diff --git a/data/2403.01431.png b/data/2403.01431.png new file mode 100644 index 0000000000000000000000000000000000000000..8f41d4d709ff6592ba47152d503fc11703e82f50 --- /dev/null +++ b/data/2403.01431.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:774195309235c223d495826c13db86c285d934f1f903233f43ed5a1bb8bc86ff +size 630322 diff --git a/data/2403.01439.png b/data/2403.01439.png new file mode 100644 index 0000000000000000000000000000000000000000..e771f487f5f4e5b5bc1333f448e393368954952b --- /dev/null +++ b/data/2403.01439.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1134a0f940575f1d41c7e33a1b831839c5c5f76ed6e2d7abe4cbee795584088 +size 739283 diff --git a/data/2403.01444.png b/data/2403.01444.png new file mode 100644 index 0000000000000000000000000000000000000000..005b4675e96818949db05f4eb4d8ec4d8277f920 --- /dev/null +++ b/data/2403.01444.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3a88e6e3f968e93efa5f72e0265fd4b2a81d26937ca14631ba31595c9fe03a38 +size 1111063 diff --git a/data/2403.01482.png b/data/2403.01482.png new file mode 100644 index 0000000000000000000000000000000000000000..958dc5e3c2076cd73fb0a3a889c836633dc3f0b2 --- /dev/null +++ b/data/2403.01482.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3c37344ff9b8cdf566ecf7db4c3a36ac0bf8a30302dc8330363ca63b2866e517 +size 913246 diff --git a/data/2403.01517.png b/data/2403.01517.png new file mode 100644 index 0000000000000000000000000000000000000000..bf3b8f228ef3768e66aa0690a0bc367f6d28802c --- /dev/null +++ b/data/2403.01517.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dc55108bef876b106aeddd36cbbb1825355da24bc515bff3917119529bcef613 +size 962465 diff --git a/data/2403.01598.png b/data/2403.01598.png new file mode 100644 index 0000000000000000000000000000000000000000..c7186af0a4b79d07e003964c37cf1065fa841c34 --- /dev/null +++ b/data/2403.01598.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd08f47977c120bac9e1440af2463749688a0e916545fd6a1673c1924f97dada +size 1145091 diff --git a/data/2403.01619.png b/data/2403.01619.png new file mode 100644 index 0000000000000000000000000000000000000000..f1c01d05febbd6ca927650b7a4e3d14583eef7dd --- /dev/null +++ b/data/2403.01619.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a462c115c57fbb681e7a14ef2e246d2ef09c70a39710b75f6713c4a26ee5435d +size 766936 diff --git a/data/2403.01693.png b/data/2403.01693.png new file mode 100644 index 0000000000000000000000000000000000000000..9dbe65fe5e58dbe58f1c49c24096c6c3f98d7fc3 --- /dev/null +++ b/data/2403.01693.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b6e0df67343a6b3da21d72ee46a6ab1c4d5ddc74d4be68a17d5e313d1d293373 +size 800884 diff --git a/data/2403.01753.png b/data/2403.01753.png new file mode 100644 index 0000000000000000000000000000000000000000..a15b7bbc6874996688525dd0937a1d2aa092258c --- /dev/null +++ b/data/2403.01753.png @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:5467dd5cb7bcb6a44559e81b7cb3bd154d2314ae8f37d243e525a2051f7f45cf +size 755681 diff --git a/data/2403.01773.png b/data/2403.01773.png new file mode 100644 index 0000000000000000000000000000000000000000..07c98580879b24033a75507514df06c9fdf135f5 --- /dev/null +++ b/data/2403.01773.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:92a054c8b959e659ea77ac2b8c24cfe7f200cfef94661e83d18628eb205ff4e5 +size 760437 diff --git a/data/2403.01781v1.png b/data/2403.01781v1.png new file mode 100644 index 0000000000000000000000000000000000000000..89bd82223c93cf09b20796734632586bec3c404f --- /dev/null +++ b/data/2403.01781v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6a3d06106104fe4029f68ce564b602fb68dd474513fbc2f4ab09e93b2b5f2c89 +size 796637 diff --git a/data/2403.01795.png b/data/2403.01795.png new file mode 100644 index 0000000000000000000000000000000000000000..282123c9ca228f581841f172612f015818e3eb4c --- /dev/null +++ b/data/2403.01795.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:242b543985c9ac8ed477bb7255201e913409c373fad53b8b47bdced2f542dfc7 +size 660207 diff --git a/data/2403.01807.png b/data/2403.01807.png new file mode 100644 index 0000000000000000000000000000000000000000..0574f8b19a8be9c436bd133f586d7d0eea5fc1ba --- /dev/null +++ b/data/2403.01807.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9a2228b1c0fe790de8daf2ac1423e0c09defe1267bc3a240acb98ed0f7d3677c +size 1493010 diff --git a/data/2403.01813.png b/data/2403.01813.png new file mode 100644 index 0000000000000000000000000000000000000000..95096161dccf872ac544611dabd7860a260230b9 --- /dev/null +++ b/data/2403.01813.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f65e70fc095b2162e1f03e793a333afef937e2cbb6b8c5dfa2c3bfd62c10217 +size 666763 diff --git a/data/2403.01818.png b/data/2403.01818.png new file mode 100644 index 0000000000000000000000000000000000000000..075d4607859dae97f3d439801c2cb15fac7d937f --- /dev/null +++ b/data/2403.01818.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7866484a5ce0550548928e3251bf3fa7898d188a45b7bfb85a36d21f6b1399ca +size 835036 diff --git a/data/2403.01849.png b/data/2403.01849.png new file mode 100644 index 0000000000000000000000000000000000000000..0fd1dda0bbffb058e89dcf004a789f689ff84378 --- /dev/null +++ b/data/2403.01849.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:92ee952dcd6f1dc6f4e07431b91b884a93a2e55db9db4de501cd5b1a15838e16 +size 789143 diff --git a/data/2403.01852.png b/data/2403.01852.png new file mode 100644 index 0000000000000000000000000000000000000000..39d37e505a5a1f24f7c97b873ead3de5cdd49916 --- /dev/null +++ b/data/2403.01852.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a7eeddd7f902c7dc825ad89b05c8339a493002ca3ac44081c491f8e3118c0ee0 +size 1219747 diff --git a/data/2403.01901.png b/data/2403.01901.png new file mode 100644 index 0000000000000000000000000000000000000000..161e8ad9c5991a5106029b8b3e8c7bf11ec1fcba --- /dev/null +++ b/data/2403.01901.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6a7848ef425466a60a059a04450124b4d0f75f27af8780d183fbbea9ebe2a9ff +size 1395924 diff --git a/data/2403.01944.png b/data/2403.01944.png new file mode 100644 index 0000000000000000000000000000000000000000..f2b3734238f49c74932e277ef995da37033686a3 --- /dev/null +++ b/data/2403.01944.png @@ -0,0 
+1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:66bb2778a39c0e93a07153b0b8da79bb0e1dbc34000289d91904ebe85daf3bb3 +size 969636 diff --git a/data/2403.01968v1.png b/data/2403.01968v1.png new file mode 100644 index 0000000000000000000000000000000000000000..1afcb48dec2becc5395d5ba1df003e89e63b4a56 --- /dev/null +++ b/data/2403.01968v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7800f6e03c5c73473f48489ec7e8faab22340b0b053eeebf48dd424a34e9e00e +size 869212 diff --git a/data/2403.02041.png b/data/2403.02041.png new file mode 100644 index 0000000000000000000000000000000000000000..fc836eb52bf535e89496352626109b4ec374686b --- /dev/null +++ b/data/2403.02041.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3493a3c99c94d616ec8a837edb3cac398c1bc531572929f012fc850719dc4d77 +size 682441 diff --git a/data/2403.02075.png b/data/2403.02075.png new file mode 100644 index 0000000000000000000000000000000000000000..a9eb8492876db2f5365f0e93c3bf3d23492db402 --- /dev/null +++ b/data/2403.02075.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:28f1950bca2a2af37010578e2f774082852ab9fbaaf8d1daa18369614c561df1 +size 889404 diff --git a/data/2403.02090.png b/data/2403.02090.png new file mode 100644 index 0000000000000000000000000000000000000000..4fe76c2a4eb426fc57ec6a3df7bba257c48a3ae9 --- /dev/null +++ b/data/2403.02090.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f9454fa19cc30f113dd2b0a961c9b9ce363583e4497d71f74ddf8b348348ed20 +size 1006671 diff --git a/data/2403.02138.png b/data/2403.02138.png new file mode 100644 index 0000000000000000000000000000000000000000..fe4472a0de96c2b7b39d1ee4483cf408790ca0cd --- /dev/null +++ b/data/2403.02138.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c405920891b95ce1a16c9ff6697d0205c152c15c4c7f7d46dc8a2f3987c51c03 +size 808345 diff --git a/data/2403.02241.png b/data/2403.02241.png new file mode 100644 index 0000000000000000000000000000000000000000..ebd3fa1ab09f130e954eee58e1d13b8f4795a631 --- /dev/null +++ b/data/2403.02241.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7f28c7cf8ab9c7e75f215dd6b9272138e9387919e4b5fb2b86f90b2db87ac293 +size 709158 diff --git a/data/2403.02249.png b/data/2403.02249.png new file mode 100644 index 0000000000000000000000000000000000000000..dbd41a07e8a9b83634986cd269976b00320c1d4e --- /dev/null +++ b/data/2403.02249.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f549068b7e54609de9ec7f27268bba393d1734cba3c5d25db3736d83f5987538 +size 707651 diff --git a/data/2403.02265v1.png b/data/2403.02265v1.png new file mode 100644 index 0000000000000000000000000000000000000000..0c34adb20ab614411e09ec694b76b86012358d49 --- /dev/null +++ b/data/2403.02265v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3836f4e0e82d643034c7df28bca300ed948136a37221353e41f2a6e91066aff +size 989086 diff --git a/data/2403.02330v1.png b/data/2403.02330v1.png new file mode 100644 index 0000000000000000000000000000000000000000..dd6507efc72990c05fb71f573178a200a5058e42 --- /dev/null +++ b/data/2403.02330v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ea21952fd673cd095db08cbed55cef0ecf76402ff5c5dc98637083ae8752ddb0 +size 1164448 diff --git a/data/2403.02561.png b/data/2403.02561.png new file mode 100644 index 0000000000000000000000000000000000000000..c447770d284debafd64efe9812b958b7fed79a99 --- /dev/null +++ 
b/data/2403.02561.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4d11ff74b988e1bab4bfeabec8fc14b5c47e5a661eec75582d8081d83d16d999 +size 1143281 diff --git a/data/2403.02601.png b/data/2403.02601.png new file mode 100644 index 0000000000000000000000000000000000000000..dd2941d41abfbc0b512fca2c20e78b029e81f7bc --- /dev/null +++ b/data/2403.02601.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c5849e57a6b3bf9362ac8bf2311cd2cee8ddfdb7171e4d5fb78e1cd74e326865 +size 983244 diff --git a/data/2403.02611.png b/data/2403.02611.png new file mode 100644 index 0000000000000000000000000000000000000000..936b71784e74558ce569fc636e0771afe7516eb5 --- /dev/null +++ b/data/2403.02611.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bfc7be6f5bfa28cdbf2d3bc9d69b763e5a77a346c5da2fbbfb8c4ae27fce519e +size 742868 diff --git a/data/2403.02626.png b/data/2403.02626.png new file mode 100644 index 0000000000000000000000000000000000000000..b79e2d585060cc0ed551eca83bdc1b9c7583508f --- /dev/null +++ b/data/2403.02626.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f45e37191a3b64c4211fed7671d49633dc777fedc3466698cf880bfb86c7d1c +size 1057703 diff --git a/data/2403.02628.png b/data/2403.02628.png new file mode 100644 index 0000000000000000000000000000000000000000..23d2595295d530fc8790099e5e33aaa8a036955d --- /dev/null +++ b/data/2403.02628.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83c719fd2a10b2fb0b37713209794bdc235955817d9d66a12f0a3320bfbf0a28 +size 807750 diff --git a/data/2403.02640.png b/data/2403.02640.png new file mode 100644 index 0000000000000000000000000000000000000000..faeab253376f585e7fc763638a107c678326baec --- /dev/null +++ b/data/2403.02640.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:55166b0a6560d12a049ec87fc788f98cfd45d4d9b7b59bfc1e91d2dbe3fa6c4c +size 1176498 diff --git a/data/2403.02649.png b/data/2403.02649.png new file mode 100644 index 0000000000000000000000000000000000000000..e647ae4ba819c637756e9c32c0123fbd00bb7cc1 --- /dev/null +++ b/data/2403.02649.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:02c0f0e62066b1115c831f3480367e87b0b72e8573f4f0e9a66ca9528d382d7c +size 891313 diff --git a/data/2403.02746.png b/data/2403.02746.png new file mode 100644 index 0000000000000000000000000000000000000000..eba1935ea751928cefc3206bb875904a73c239fb --- /dev/null +++ b/data/2403.02746.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b84bc0305c765e2548760f314004be67346bda3082364111f50185c6bd7ae30 +size 966753 diff --git a/data/2403.02753.png b/data/2403.02753.png new file mode 100644 index 0000000000000000000000000000000000000000..e3c0d9b2e0616cb73904543694ea22d65e63b57d --- /dev/null +++ b/data/2403.02753.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d00b39c927a742dfe2daab8a4a88c80012e15f96b75c76b556d802f9515bdb64 +size 808739 diff --git a/data/2403.02767.png b/data/2403.02767.png new file mode 100644 index 0000000000000000000000000000000000000000..031a09f9100b7308a73e051f5e02a3472e622603 --- /dev/null +++ b/data/2403.02767.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2192f938e1a249d59d09fa3f0206f81812cf8ed30194f90eb1d1e3c21fec5c9d +size 756191 diff --git a/data/2403.02769.png b/data/2403.02769.png new file mode 100644 index 0000000000000000000000000000000000000000..51088a349914e1ae49ce27bf81b2e1934a1fd380 --- /dev/null +++ 
b/data/2403.02769.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:55f19e8d2ae9d5b4ee94390c92f259adcaf0e32fe46338ec4d484311ec0272f4 +size 940393 diff --git a/data/2403.02781v3.png b/data/2403.02781v3.png new file mode 100644 index 0000000000000000000000000000000000000000..426c1669d2314e80621245ed311b6388bffbcf0d --- /dev/null +++ b/data/2403.02781v3.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f40b74c4c5908c4e67733b065f31a17ab6a7b370d1357973e9d6abf72d2d7e5b +size 782938 diff --git a/data/2403.02782.png b/data/2403.02782.png new file mode 100644 index 0000000000000000000000000000000000000000..70dd7de4009da208b3725d653e14f5891ea4c88c --- /dev/null +++ b/data/2403.02782.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2ec2f76a1514f69dcdab7b6086cc21d6ccdacad6d83cc03ccdb8d475ff6f5b26 +size 768506 diff --git a/data/2403.02886.png b/data/2403.02886.png new file mode 100644 index 0000000000000000000000000000000000000000..c1fa338021d17826f89f338d40b18c259f3c63c9 --- /dev/null +++ b/data/2403.02886.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d0754fa8c61bbde0deb88badc162da5e1d0c35881a41f615b19438bde14238ea +size 938864 diff --git a/data/2403.02899.png b/data/2403.02899.png new file mode 100644 index 0000000000000000000000000000000000000000..2fdb48c21e36c6d4023c9ce3ddf686a9d40df386 --- /dev/null +++ b/data/2403.02899.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d23d7a38d004e4fbbdc6f3215858fbfb4bb4cc4595509eb0ef04fdfbf7d0bed4 +size 784117 diff --git a/data/2403.02969.png b/data/2403.02969.png new file mode 100644 index 0000000000000000000000000000000000000000..9c16c7457076fb3f8e8bfdf96c0e070119dd0502 --- /dev/null +++ b/data/2403.02969.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2cc04469b1625ed7fc28c1b8dbae67e6203baa577b3238925087a06c848c32be +size 915230 diff --git a/data/2403.02981.png b/data/2403.02981.png new file mode 100644 index 0000000000000000000000000000000000000000..64e548066838736c51b97315bb45a251e740d6dd --- /dev/null +++ b/data/2403.02981.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:895fd7942854597182a3a27879430cc942fb030d13ae77f4bd71cbdb94a15dc6 +size 984129 diff --git a/data/2403.02991.png b/data/2403.02991.png new file mode 100644 index 0000000000000000000000000000000000000000..77d692188766eb0de9c90b4933f32d2e9a67907d --- /dev/null +++ b/data/2403.02991.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4576aa6a147edf4030d6b3eb0a132f5f162d19b9f9ce6a963680556ec7c85bbc +size 760138 diff --git a/data/2403.03037.png b/data/2403.03037.png new file mode 100644 index 0000000000000000000000000000000000000000..d2eeab810757a3d8f44173adedfdfda36066ab70 --- /dev/null +++ b/data/2403.03037.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7837991fd12a676a45ce5eb3dc62913266b382d265c02c2ca660ba2ab2ee97bc +size 864260 diff --git a/data/2403.03063v1.png b/data/2403.03063v1.png new file mode 100644 index 0000000000000000000000000000000000000000..fe7873ffa766cdfa3610472943609ab0f6462bc3 --- /dev/null +++ b/data/2403.03063v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:40cb8d6ccbbf94f6f3f4698fc38f838735510ce30413d96b2accd4b14d00ca66 +size 949768 diff --git a/data/2403.03077.png b/data/2403.03077.png new file mode 100644 index 0000000000000000000000000000000000000000..7cf5d25cfd233ce2dd8ce7097edc4b12e1fb4b01 --- 
/dev/null +++ b/data/2403.03077.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:19df89c36d1542840afbeb6e1d15a86591c549e01dd5b1e729122740ce321ebc +size 819646 diff --git a/data/2403.03122v1.png b/data/2403.03122v1.png new file mode 100644 index 0000000000000000000000000000000000000000..3db0f51b3589239f345e26f61aa741193a1f284c --- /dev/null +++ b/data/2403.03122v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0d0c6b2d8abc5c8e9977d2379012a5eb7e49034f5bca93e16e54dc7d6a6c7e0f +size 913256 diff --git a/data/2403.03170.png b/data/2403.03170.png new file mode 100644 index 0000000000000000000000000000000000000000..9cbaec2f978fcf643c5dbb6b1aca0381a523ca3f --- /dev/null +++ b/data/2403.03170.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f0c7a9e01926d7c3f0f77fc9f07cfcecfb61eb2394631a2064e5f29ef15dbdf4 +size 752112 diff --git a/data/2403.03221.png b/data/2403.03221.png new file mode 100644 index 0000000000000000000000000000000000000000..7dabadb11e3344d1ef11e41f131ea90a9da28172 --- /dev/null +++ b/data/2403.03221.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c29f0a681f81e5b1647aff573be72c2d5416aa2bf0325cfa24f609b2afbf351f +size 734587 diff --git a/data/2403.03346.png b/data/2403.03346.png new file mode 100644 index 0000000000000000000000000000000000000000..6a79e57f004fcaad23e76b1d6c6eb8f1cc1f934f --- /dev/null +++ b/data/2403.03346.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c8db4dd0856e969e93b3c38728eca31b9a445ce3e6ec337fd3243a8e6d1ea8dc +size 783126 diff --git a/data/2403.03370.png b/data/2403.03370.png new file mode 100644 index 0000000000000000000000000000000000000000..c7b91fdd6b0a45b8cf8a70317c604ef204d2bb3a --- /dev/null +++ b/data/2403.03370.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8ff758980b0719f9644fb9e27b35289a591b59f1fb54613d8f7fd4ea9ac2d791 +size 863162 diff --git a/data/2403.03421.png b/data/2403.03421.png new file mode 100644 index 0000000000000000000000000000000000000000..d73439ebce4e9443bd01f77ab01afceeed92e96f --- /dev/null +++ b/data/2403.03421.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4565caca5f2b0c48db6b36813c50d6e6f66bf0d1b78287bd55a1b717c042d064 +size 821183 diff --git a/data/2403.03431.png b/data/2403.03431.png new file mode 100644 index 0000000000000000000000000000000000000000..448d4597e971d0cf8935975dd9da376abab445a3 --- /dev/null +++ b/data/2403.03431.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:25f5dee5b52ce896ca690513854aedc29ff6f136c7499218ef6338bc9cb6a708 +size 872030 diff --git a/data/2403.03447.png b/data/2403.03447.png new file mode 100644 index 0000000000000000000000000000000000000000..bacc43013b141814e3eba9002f9fc0384530a41b --- /dev/null +++ b/data/2403.03447.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6121976ed47420fd396bd6a5a7874878a962f63497a7a5a631bf092442765434 +size 976828 diff --git a/data/2403.03477.png b/data/2403.03477.png new file mode 100644 index 0000000000000000000000000000000000000000..b4a968ad7e903c81421965696b2fe211f414d163 --- /dev/null +++ b/data/2403.03477.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:49faea6d277a53b5337a81fab68cd0d96c1073b9780336b8a0ca755953840368 +size 1174447 diff --git a/data/2403.03485.png b/data/2403.03485.png new file mode 100644 index 0000000000000000000000000000000000000000..e58e397a5dd9ab7e2e454a8624b671dd2b8a7bbb 
--- /dev/null +++ b/data/2403.03485.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9420ece57d99ea002c965aea637279b1599b1e6819ec79c0596cecd61136f903 +size 819544 diff --git a/data/2403.03532.png b/data/2403.03532.png new file mode 100644 index 0000000000000000000000000000000000000000..92d0e3190b13071b3b89bc979863861cedc5bab3 --- /dev/null +++ b/data/2403.03532.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:52cebda9c63b9952e8831af79a42314a0fb97d1c36eaefcc537de6f736e74123 +size 741317 diff --git a/data/2403.03561.png b/data/2403.03561.png new file mode 100644 index 0000000000000000000000000000000000000000..9c6bd9c7d72b3ac1d4c1a5fc5b8ba2964a9bb949 --- /dev/null +++ b/data/2403.03561.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e97346600daa3ffd2512167d235533ba160ba6f463a2d54e78f1dde980fd615 +size 937183 diff --git a/data/2403.03608.png b/data/2403.03608.png new file mode 100644 index 0000000000000000000000000000000000000000..618c93567e4a9505c34527d140b0128d56dcf1e8 --- /dev/null +++ b/data/2403.03608.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fb4d7039eed4b34f7b543b2ddf8f9a814c12b201743f02f9ca34e6c819e2d447 +size 780719 diff --git a/data/2403.03662v1.png b/data/2403.03662v1.png new file mode 100644 index 0000000000000000000000000000000000000000..5aadf512561e90de6c2e64cf5456ec4d13b7ddc0 --- /dev/null +++ b/data/2403.03662v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:601a57019e24c3b942e57b9da51a3f5113c76786872a7c4cf9d7d7cf3b264671 +size 794251 diff --git a/data/2403.03715.png b/data/2403.03715.png new file mode 100644 index 0000000000000000000000000000000000000000..c25785c2420d8d4c53103c1e2cd2cd3640cd0910 --- /dev/null +++ b/data/2403.03715.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:11f02be001253293bdd167c5217b527b074ae9900885c1e5d054a5e0b88ce49d +size 813030 diff --git a/data/2403.03736.png b/data/2403.03736.png new file mode 100644 index 0000000000000000000000000000000000000000..9da7f26b6edb6df41c8badc462349930281ee37c --- /dev/null +++ b/data/2403.03736.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c09f9604000c6f04271216d21d41cc22e5464f2c360bf1a4b73c403752a47b95 +size 1128539 diff --git a/data/2403.03739.png b/data/2403.03739.png new file mode 100644 index 0000000000000000000000000000000000000000..54b59d32ba829258294a340f37f9abf6d3267505 --- /dev/null +++ b/data/2403.03739.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:29f0d98a199106cabec74865cef5cc9c5b1c3f4ad60aa8e76c632946e28472d7 +size 802678 diff --git a/data/2403.03740.png b/data/2403.03740.png new file mode 100644 index 0000000000000000000000000000000000000000..dcc30318ff7e10cfdbb1154016028c52fa734c0d --- /dev/null +++ b/data/2403.03740.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4e78398ce0c038fd9e65332f05cbc96c6b72b865222a1df65c81612a2d61334e +size 961104 diff --git a/data/2403.03881.png b/data/2403.03881.png new file mode 100644 index 0000000000000000000000000000000000000000..88fe724dc3c5bb910aa3cc464b6f505f074ea14a --- /dev/null +++ b/data/2403.03881.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d47bcac936e1a6d7dac47c44684ccbfbad2f52d9db39af3215be5b99cfd15d25 +size 418505 diff --git a/data/2403.03890v1.png b/data/2403.03890v1.png new file mode 100644 index 
0000000000000000000000000000000000000000..4f30f3ff19b26c18901553cf35783c834c81941f --- /dev/null +++ b/data/2403.03890v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e2764ab35a8be49a6d9dc7888103b8a2c41ccc40616a2e8d2701a9d35ce15d54 +size 835406 diff --git a/data/2403.03896v1.png b/data/2403.03896v1.png new file mode 100644 index 0000000000000000000000000000000000000000..9f43498fc0837fad0ee3621be43d2ba948ab2de3 --- /dev/null +++ b/data/2403.03896v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:52159ff1f8348ac06070dd1721d5c2ca32bd07c1d9310f632d1da3654542235b +size 1191995 diff --git a/data/2403.04149.png b/data/2403.04149.png new file mode 100644 index 0000000000000000000000000000000000000000..e8ff716dfc9b12400698418d71dc4c5c722554d1 --- /dev/null +++ b/data/2403.04149.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1d7c51efd9678552995afb70ab7c31cd93a0c4c5da3914831a23cd947c0a7484 +size 863319 diff --git a/data/2403.04198.png b/data/2403.04198.png new file mode 100644 index 0000000000000000000000000000000000000000..b1e21ddc9d252e7e59695e263f15f697c0ea8e24 --- /dev/null +++ b/data/2403.04198.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:648415995c49329b6af613d352c43c5c5c00ff3d6954abaa7bf41b1ce80bf010 +size 785229 diff --git a/data/2403.04245.png b/data/2403.04245.png new file mode 100644 index 0000000000000000000000000000000000000000..0e413d74ed5fcde9b963155d862479bbcf7d0761 --- /dev/null +++ b/data/2403.04245.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0e98eae78bad3942093d76db60ff96330edc518f6b5e57b889f99309833ac0cc +size 776687 diff --git a/data/2403.04258.png b/data/2403.04258.png new file mode 100644 index 0000000000000000000000000000000000000000..a9074ed5d345830c6386b763fb98640154daa314 --- /dev/null +++ b/data/2403.04258.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d598e7509566d4d6449af6286e96ca62fecf76acd894142f62efed68f34b3c94 +size 928824 diff --git a/data/2403.04272v1.png b/data/2403.04272v1.png new file mode 100644 index 0000000000000000000000000000000000000000..7632b8355bc3f0c946d3541b52d6e1b3932fc621 --- /dev/null +++ b/data/2403.04272v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d7f69f0851f3f39084de7dd5252017a815fe4d745fd0f564bf15c2c6ba01751 +size 721109 diff --git a/data/2403.04290.png b/data/2403.04290.png new file mode 100644 index 0000000000000000000000000000000000000000..da04b67108d2bb1e26a3d4c012f3b0226baeb141 --- /dev/null +++ b/data/2403.04290.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:47ddf0c5d3e263e2d4a77bf888c848fb7703d2592c0945c20ee612cb0b7173d4 +size 807473 diff --git a/data/2403.04303.png b/data/2403.04303.png new file mode 100644 index 0000000000000000000000000000000000000000..5dbdd6efea5d10ef7cf54ace44675b1d8cc04068 --- /dev/null +++ b/data/2403.04303.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b4f1d7241296fea60fd6ae1ba53c00b6f64b150fed87272771b734dfc4340c81 +size 664259 diff --git a/data/2403.04321.png b/data/2403.04321.png new file mode 100644 index 0000000000000000000000000000000000000000..b0b667d6f9630b7cbe547cd5c0881415661ddb75 --- /dev/null +++ b/data/2403.04321.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0185a23aa366d0f1f365f7ac6a434ab002429f7edfe744848670410165f9ebb7 +size 981815 diff --git a/data/2403.04368v1.png b/data/2403.04368v1.png new file 
mode 100644 index 0000000000000000000000000000000000000000..2a93f6f17a7b592b4f6e661433d26bce1b3f8fc2 --- /dev/null +++ b/data/2403.04368v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ebc7a3d5517a54fc29deef72aec22e361207110781409302f4acfc623f50480d +size 933152 diff --git a/data/2403.04381.png b/data/2403.04381.png new file mode 100644 index 0000000000000000000000000000000000000000..82ebe19fde3fad7f84d1e09140c4e9e38944c721 --- /dev/null +++ b/data/2403.04381.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c0700a72da3bcd6bee7d847849c49e96833b92d74496a9aa5177e77393f5a9bf +size 826824 diff --git a/data/2403.04492.png b/data/2403.04492.png new file mode 100644 index 0000000000000000000000000000000000000000..4a415a45f8725f5ae3927124316fb08fb7725ee8 --- /dev/null +++ b/data/2403.04492.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eaadb14a494f274941522cfe080f72d0370a940ca9d2df8e41621201e06d2df9 +size 753857 diff --git a/data/2403.04547.png b/data/2403.04547.png new file mode 100644 index 0000000000000000000000000000000000000000..f32285732ae205ee33b523a597bbfe32a3e89139 --- /dev/null +++ b/data/2403.04547.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e5cd7fba277319b51a963761cdee6b81c3f8620f1fc850d83dbe85b64f188709 +size 630485 diff --git a/data/2403.04583.png b/data/2403.04583.png new file mode 100644 index 0000000000000000000000000000000000000000..5f82d846265a5bd861f1eaac52fcebcbd6d72377 --- /dev/null +++ b/data/2403.04583.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8451b793d94115df72b86596d22f674a2466728fad2f178b041f6cf6b2d5b507 +size 696506 diff --git a/data/2403.04700.png b/data/2403.04700.png new file mode 100644 index 0000000000000000000000000000000000000000..6a24d2a137d4aea8ad040d476194f4f9719869d1 --- /dev/null +++ b/data/2403.04700.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e96058e56a3a2e27b8076e4bed16c052fb85fc8b1ad9c7996309ff11d67d3b03 +size 715667 diff --git a/data/2403.04765.png b/data/2403.04765.png new file mode 100644 index 0000000000000000000000000000000000000000..8a45cfe0384956236313d24a90f8e6c48360f94c --- /dev/null +++ b/data/2403.04765.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad0f783889296c39a9b91d9385e61245c78307798508e71c2505bb8ebdb6e060 +size 734149 diff --git a/data/2403.05005.png b/data/2403.05005.png new file mode 100644 index 0000000000000000000000000000000000000000..7010a150d8b8afe01b2faa4e10240d939a3560f3 --- /dev/null +++ b/data/2403.05005.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d79a9b92148231218c26e6dea7021764863a5e046554cfa530951db6e0ca20f3 +size 1151071 diff --git a/data/2403.05061.png b/data/2403.05061.png new file mode 100644 index 0000000000000000000000000000000000000000..264aec6027bb6413c413aceac25e3c6b29b84f61 --- /dev/null +++ b/data/2403.05061.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d23422f80de64d3ad9b9b52810fdf49fbced29604cf9e0cdbbfc6c0c5d11d31d +size 774503 diff --git a/data/2403.05086.png b/data/2403.05086.png new file mode 100644 index 0000000000000000000000000000000000000000..04a5f12388c4698cbd931f913b3333b6f8189cb2 --- /dev/null +++ b/data/2403.05086.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fc63ddbf4b031d85ee2e59a0c33c44523d5d18b1b5febe54ad41dda1e09b54fd +size 867670 diff --git a/data/2403.05087.png b/data/2403.05087.png new file 
mode 100644 index 0000000000000000000000000000000000000000..0b9b36645f362e166f7694fcfb6e27293125454f --- /dev/null +++ b/data/2403.05087.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f0ee7dfbc7c7bff080d762982f7cb91e7b4bb8507ecd8946dfad3acea14d6374 +size 964478 diff --git a/data/2403.05094.png b/data/2403.05094.png new file mode 100644 index 0000000000000000000000000000000000000000..74352148751810248f883933bd053f74f1b7d75a --- /dev/null +++ b/data/2403.05094.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b8d326cca2307010ff3792d88bdcc9301d525facb8cc2a831586e0a33071da1c +size 1367725 diff --git a/data/2403.05105.png b/data/2403.05105.png new file mode 100644 index 0000000000000000000000000000000000000000..146da161783019d67956caf9530fc213b06fc60f --- /dev/null +++ b/data/2403.05105.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f4c39b71974fc8be8549dfc018740a519dacb6e93e9fd86d63e7624f6f2f6001 +size 897125 diff --git a/data/2403.05239.png b/data/2403.05239.png new file mode 100644 index 0000000000000000000000000000000000000000..bcc7bb42c46ea42844edaee42e5703c25b14d674 --- /dev/null +++ b/data/2403.05239.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c831d139ad0cf3bb0729f1cae65479907765383da0525b283a99c0d04265cfd3 +size 783371 diff --git a/data/2403.05247.png b/data/2403.05247.png new file mode 100644 index 0000000000000000000000000000000000000000..bac49fc59b00011ac397473d185e7e209eafa6ec --- /dev/null +++ b/data/2403.05247.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d7098dabefc57e2e02a22e71a4f85d1ea079c95f42d451202546f4977b875888 +size 1031664 diff --git a/data/2403.05369.png b/data/2403.05369.png new file mode 100644 index 0000000000000000000000000000000000000000..ef22647b20119ccbae0a1a2e728ec6bafc8d8ce2 --- /dev/null +++ b/data/2403.05369.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fbb09176f0c6d0bf91dccd8d69b8c85806d244e79505cacb4320442b1bd39648 +size 1085398 diff --git a/data/2403.05419.png b/data/2403.05419.png new file mode 100644 index 0000000000000000000000000000000000000000..a75852ba6b8f6fe93531c59d940cacc5e4f6d8e1 --- /dev/null +++ b/data/2403.05419.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a81b3f3014f6d65e9fe1343a2922f5ace85cc8faa47059c54e25f46a3f682d4 +size 798806 diff --git a/data/2403.05817.png b/data/2403.05817.png new file mode 100644 index 0000000000000000000000000000000000000000..4ebe706a3ba6ef769b1d1d352015269ce68152b6 --- /dev/null +++ b/data/2403.05817.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4e77b0b40b5d9a38eb6aab9f37150c1765ef5296b19261d247c1db024dc78cdc +size 740154 diff --git a/data/2403.05842.png b/data/2403.05842.png new file mode 100644 index 0000000000000000000000000000000000000000..1d44f2d4dd10ea6685c6dbe3500c23d9596af67e --- /dev/null +++ b/data/2403.05842.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:339d5caef5e13613e3767c85e93c720571b1ef1bbabedb5682c4ff77cb102db1 +size 733725 diff --git a/data/2403.05854.png b/data/2403.05854.png new file mode 100644 index 0000000000000000000000000000000000000000..0e0d799bd09f01912a144527f16a266c55e7b230 --- /dev/null +++ b/data/2403.05854.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9b0852a18f59ee52ebf12c333d5bdf8b2d3545077299868fe2e127fe1e4f28a9 +size 780885 diff --git a/data/2403.05890.png b/data/2403.05890.png new file 
mode 100644 index 0000000000000000000000000000000000000000..bedb7d3b1cb549a1e48cf8f67265630ce0e2c857 --- /dev/null +++ b/data/2403.05890.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:900e705383f45cdc77fe8ade8c50f97f7843910427323b8440d588ddabdc396b +size 730475 diff --git a/data/2403.05897.png b/data/2403.05897.png new file mode 100644 index 0000000000000000000000000000000000000000..8c190afe25a63a6c9dfcba3945542d7437c798bd --- /dev/null +++ b/data/2403.05897.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:295f57df385ca548e172949dcffc11f45f1a03e800de396cc6bb551c0a1fca9d +size 992278 diff --git a/data/2403.05924v1.png b/data/2403.05924v1.png new file mode 100644 index 0000000000000000000000000000000000000000..aa25a7aaecaad5fd8e1bebce11b81201e7c6f01c --- /dev/null +++ b/data/2403.05924v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:685e2aa2440f9ad24081ccc0bfa3c0b6974da0e7a56f8bdfbd1046c182526a61 +size 852345 diff --git a/data/2403.05963.png b/data/2403.05963.png new file mode 100644 index 0000000000000000000000000000000000000000..6dcf14069af5b2d39c55ed60cfe51d183473f5b8 --- /dev/null +++ b/data/2403.05963.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4b8dad266b209860de08740caf1d32290867f1777b2d5d43110bde13604ced8e +size 843203 diff --git a/data/2403.06092.png b/data/2403.06092.png new file mode 100644 index 0000000000000000000000000000000000000000..24a8c29dec5213c233802447b9ac1be161e7120d --- /dev/null +++ b/data/2403.06092.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:40dfb344d318fb3f90779531f21acc218fd4f1fdebc34e45861211877b1a210e +size 947455 diff --git a/data/2403.06093.png b/data/2403.06093.png new file mode 100644 index 0000000000000000000000000000000000000000..281b69ca4c60fa625aeeff000ca61a2c3684380f --- /dev/null +++ b/data/2403.06093.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2a5267d47ed8d3917747323b96e4efd5774d5f8fd05bc27abd1858bee2523761 +size 1223107 diff --git a/data/2403.06102.png b/data/2403.06102.png new file mode 100644 index 0000000000000000000000000000000000000000..97c3de2a5b8301014f75d1cb27d9a741a9eef9f0 --- /dev/null +++ b/data/2403.06102.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:759d3acd326a7453fdebc85f9500b91db3d9a57cf129fd8c6acab57e7e96a99d +size 778678 diff --git a/data/2403.06122.png b/data/2403.06122.png new file mode 100644 index 0000000000000000000000000000000000000000..160c2cbc9a40e5eb5c452b1dc9e9e382a7c4e4f8 --- /dev/null +++ b/data/2403.06122.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc15104e3709ce6d055d78cdc45a76fa5a9b10319461acd82e5169f42fe9f7fb +size 881690 diff --git a/data/2403.06135.png b/data/2403.06135.png new file mode 100644 index 0000000000000000000000000000000000000000..c098b8d1d997a473335c17fc1dd684a785e7bda5 --- /dev/null +++ b/data/2403.06135.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:06c249d6109831df37bd83aad81ac12a508fbb810c714899db10d8e8f8913739 +size 1208493 diff --git a/data/2403.06205.png b/data/2403.06205.png new file mode 100644 index 0000000000000000000000000000000000000000..a87016c531220b8abf5b07c704d732e4b3dcf6af --- /dev/null +++ b/data/2403.06205.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ffd181ac617b1d900cd2dd7df094649cbc69120948e5d9a12a0afd8a8e667309 +size 1099445 diff --git a/data/2403.06213.png b/data/2403.06213.png new 
file mode 100644 index 0000000000000000000000000000000000000000..d2957e44e176c99dc97275af957dbbac26bfe005 --- /dev/null +++ b/data/2403.06213.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9df875b69d88e4a2d246d8843210bf9f7e670244262ff923d3b9349475ee6847 +size 740868 diff --git a/data/2403.06225.png b/data/2403.06225.png new file mode 100644 index 0000000000000000000000000000000000000000..7e5881aa1c556f2f6674c9a73858cd2a5311b3bd --- /dev/null +++ b/data/2403.06225.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4887ba52d9f216299f0caba796fe6198eb8e553e58e8da267f8346875cb2944e +size 904153 diff --git a/data/2403.06247.png b/data/2403.06247.png new file mode 100644 index 0000000000000000000000000000000000000000..b275b8b97a9437bab6073944a24b6cd7b1743bed --- /dev/null +++ b/data/2403.06247.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3df9c4354d7217ce399971e47853c089cc7b751e85197bef90d1b38a7a4d9f4 +size 737482 diff --git a/data/2403.06258.png b/data/2403.06258.png new file mode 100644 index 0000000000000000000000000000000000000000..a74384f86aa9935be67d4a0ccd6c3e3e3eecab59 --- /dev/null +++ b/data/2403.06258.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:98713d9aa47af688fab37c4ae4ec637816e22aafa874de68974ccad096f80e7c +size 997683 diff --git a/data/2403.06375.png b/data/2403.06375.png new file mode 100644 index 0000000000000000000000000000000000000000..34426774ce1983e3157033df70e0c131aae07058 --- /dev/null +++ b/data/2403.06375.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2848d23c8395aecd76a753b233d394a9e8d97c86fed9e728c35490bb069f98f5 +size 910580 diff --git a/data/2403.06392.png b/data/2403.06392.png new file mode 100644 index 0000000000000000000000000000000000000000..fb2bff95ef6238d7ccf9bee50cab32eaeb1fe2e0 --- /dev/null +++ b/data/2403.06392.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:59e8bbfbfdd23e439a58e2f4713a80ece455a19d4aec2ee9c7925b99742299d3 +size 687907 diff --git a/data/2403.06403.png b/data/2403.06403.png new file mode 100644 index 0000000000000000000000000000000000000000..b1b49f01bdf3e24d98f0c73c2532a6ad8bdae9f5 --- /dev/null +++ b/data/2403.06403.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:df337a46a06ac0eb1225eb68230a635bf1f5f4b1322dc7e01ee9a55af9a61f93 +size 399538 diff --git a/data/2403.06452.png b/data/2403.06452.png new file mode 100644 index 0000000000000000000000000000000000000000..f3b9a7903d98e5c52e83bb99aa08d50b55105764 --- /dev/null +++ b/data/2403.06452.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:935483793fc94360276d12142d2ac0d124c75c8e096791b8d5e57d10f7d61840 +size 1831225 diff --git a/data/2403.06462v2.png b/data/2403.06462v2.png new file mode 100644 index 0000000000000000000000000000000000000000..9be212ad686ecf50cfc4f16eb1e6993667ad7e47 --- /dev/null +++ b/data/2403.06462v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d4473715fc73a1dafcb7c6d97e55a45fa4e368593f6560b58ccb2f39e562606 +size 933657 diff --git a/data/2403.06495.png b/data/2403.06495.png new file mode 100644 index 0000000000000000000000000000000000000000..fc92989eef0e5b9bc1ec5a532344613300641c2a --- /dev/null +++ b/data/2403.06495.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e06d4e3aef9ea45325ecd67becebdc97c9a6c9cae85f020e6974e932efce3a80 +size 826786 diff --git a/data/2403.06592v1.png 
b/data/2403.06592v1.png new file mode 100644 index 0000000000000000000000000000000000000000..812422b05e4b4de2e3ca1f510a6a415abd107c2e --- /dev/null +++ b/data/2403.06592v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:63810d080f5ec5402508a9502ca8b477d77dd16cc1002b8469ab467851db94df +size 777498 diff --git a/data/2403.06606.png b/data/2403.06606.png new file mode 100644 index 0000000000000000000000000000000000000000..f1f0925a80243022b7a4048656b7e1cd413a8a4c --- /dev/null +++ b/data/2403.06606.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1068ecd2d3df89264c2b2375f6b04ab160ffc80a5abf9b4a8b615aa75c5e89c0 +size 842077 diff --git a/data/2403.06668.png b/data/2403.06668.png new file mode 100644 index 0000000000000000000000000000000000000000..7b91e3244e3e3de6e90303bc89380bf3ac9fe6b4 --- /dev/null +++ b/data/2403.06668.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9375dcf8bf520a2a9cf915ad19ddb135bd6f2a77f82e444f355d8b0e12b68ad5 +size 763984 diff --git a/data/2403.06758.png b/data/2403.06758.png new file mode 100644 index 0000000000000000000000000000000000000000..1fc1b2cd6c746798dc4e73484d6c46036fc40d1a --- /dev/null +++ b/data/2403.06758.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e76bec9e21c41f88a40cf684377828c14832aeb7d5328ce3db715d19e02aa333 +size 1104723 diff --git a/data/2403.06793.png b/data/2403.06793.png new file mode 100644 index 0000000000000000000000000000000000000000..c33d8bd63b1734f4b6c8a0accc89b5ac5ef4df1a --- /dev/null +++ b/data/2403.06793.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b58d639637fe2d28113438deb716723a41b1e4cb377ab52e3ecd99ff85644758 +size 784453 diff --git a/data/2403.06846.png b/data/2403.06846.png new file mode 100644 index 0000000000000000000000000000000000000000..fb5a1813f0ed95bdc3b7be0358f75823584d2062 --- /dev/null +++ b/data/2403.06846.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:19443a69d067a07ca4b42bd32e8c9ea59fb0bc46da34ff4e8bec405086935fd2 +size 884753 diff --git a/data/2403.06862.png b/data/2403.06862.png new file mode 100644 index 0000000000000000000000000000000000000000..2c323f9d1f5a671e4ea9a238a8329bd721e10c01 --- /dev/null +++ b/data/2403.06862.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1ab26f68199b7ff1ed4b5481ffbf6dc9c3e9298a3c9712df6a55bfa5a36a6e61 +size 1111134 diff --git a/data/2403.06908v1.png b/data/2403.06908v1.png new file mode 100644 index 0000000000000000000000000000000000000000..15f049a863ff975dbd6c0fc6ce36decc9a3c18d0 --- /dev/null +++ b/data/2403.06908v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:57f196db290706e7bb19148d3ed3f8bc037ce8012f62ebb75db2fbda2e9ac4e0 +size 1532823 diff --git a/data/2403.06912.png b/data/2403.06912.png new file mode 100644 index 0000000000000000000000000000000000000000..31fa5ff2bcd09cb0cfcd0360ee394fc8fa46e62f --- /dev/null +++ b/data/2403.06912.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:994f68a35d5c6310937c2ebd7456c417e5fc7585cc3fdc7d7540ee424a240cbc +size 1492969 diff --git a/data/2403.06946.png b/data/2403.06946.png new file mode 100644 index 0000000000000000000000000000000000000000..2699051789fdc8784964c72e0807fa94725d3e08 --- /dev/null +++ b/data/2403.06946.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f7632e0bf684ed3aaee3389f21445c8b53c2d9b01b6009a6484d7281c3f292b +size 821431 diff --git 
a/data/2403.06951.png b/data/2403.06951.png new file mode 100644 index 0000000000000000000000000000000000000000..9b421f7d490b0a4c0c99960c81f721a41971673c --- /dev/null +++ b/data/2403.06951.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1242e19f94ed3577664f61cb259d889e5ea3d401d86c79c024361669a3ed0379 +size 1627253 diff --git a/data/2403.06973.png b/data/2403.06973.png new file mode 100644 index 0000000000000000000000000000000000000000..de84c2d2ff3454e12fb217434f8783cfde67f5ff --- /dev/null +++ b/data/2403.06973.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cb2cc24a73c4fd0c2d5ad63a59457cb610da35c7c0529b996db944b770def60f +size 898908 diff --git a/data/2403.06974.png b/data/2403.06974.png new file mode 100644 index 0000000000000000000000000000000000000000..f93f333933e6b2e2ffb614331cba440bb7bc7061 --- /dev/null +++ b/data/2403.06974.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f7055039811283ba94de968b8461b83ba143f356df92ca496c1b332f1836862c +size 987332 diff --git a/data/2403.07203.png b/data/2403.07203.png new file mode 100644 index 0000000000000000000000000000000000000000..3f199ddc1ace325d4acd4d3e1c74c8330583afe7 --- /dev/null +++ b/data/2403.07203.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4627efca801bcf60a090fe1d3300c50153a5415dbf76c022137440a25c249bd7 +size 907234 diff --git a/data/2403.07214.png b/data/2403.07214.png new file mode 100644 index 0000000000000000000000000000000000000000..b7ec9df7672e3a7b8ed51b293b75a656b02e3588 --- /dev/null +++ b/data/2403.07214.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b4827444a4d27503b8b1bd8fcfa304d2197374437b380068dd2725acd313ff0 +size 907487 diff --git a/data/2403.07222v2.png b/data/2403.07222v2.png new file mode 100644 index 0000000000000000000000000000000000000000..3accccfb5e3cf6d58033a52c2e000cbb8941a8aa --- /dev/null +++ b/data/2403.07222v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:417a57bd19ec3bbb9da4fab1405aeb3725f4ecf2327da26443c2af8e4a1df75c +size 985086 diff --git a/data/2403.07234.png b/data/2403.07234.png new file mode 100644 index 0000000000000000000000000000000000000000..b3bf99af5ed8cb75b84449a566ccf4e9257ebd69 --- /dev/null +++ b/data/2403.07234.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd5dd252370d21350999ad740a9f1b1849dffb1695ebe86c04f5e68c7cf54cd9 +size 1739157 diff --git a/data/2403.07241.png b/data/2403.07241.png new file mode 100644 index 0000000000000000000000000000000000000000..f708f05554d89cf3b4e0b6b0745c2e12caee944d --- /dev/null +++ b/data/2403.07241.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:09d591167a39fe0e80f9202ec84163af4ef4698ab8087e26877d43fcf664889c +size 812242 diff --git a/data/2403.07244.png b/data/2403.07244.png new file mode 100644 index 0000000000000000000000000000000000000000..cbb6809932b1b15f9d39f9474c30da7fbba940cf --- /dev/null +++ b/data/2403.07244.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83608ff8a7ecf639211b0f0ccc77085884f67a56042d97b5617029fdc97a5039 +size 796223 diff --git a/data/2403.07246.png b/data/2403.07246.png new file mode 100644 index 0000000000000000000000000000000000000000..41463a233295c3b3699f0722141f7e2b690ca694 --- /dev/null +++ b/data/2403.07246.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cbe826e0c10355687180c287a5d8af888a8e11529d46b4881d4c2e593dafb57e +size 1032247 diff 
--git a/data/2403.07277v1.png b/data/2403.07277v1.png new file mode 100644 index 0000000000000000000000000000000000000000..a3344a7ef8e459c72913efc1f3cda517a9d00528 --- /dev/null +++ b/data/2403.07277v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4dd0629e845ea2b084a1ff486691961df2dbe11cb2a12e5fbfbabaedb632b952 +size 809307 diff --git a/data/2403.07346v1.png b/data/2403.07346v1.png new file mode 100644 index 0000000000000000000000000000000000000000..8fcc3289c48fe8fbcdaef2f3011e595359746bbf --- /dev/null +++ b/data/2403.07346v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0353f0b531a7955804278b052ff7fb8410c7da8a3b4f91a43214d371a0f070ec +size 845293 diff --git a/data/2403.07347.png b/data/2403.07347.png new file mode 100644 index 0000000000000000000000000000000000000000..7e3be9939e235fd87c37328a6ecbdd1cc32aec70 --- /dev/null +++ b/data/2403.07347.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e4e6eec690a3a53eba9ab03fd0a5f349cc83a435fb8740620e7c3aebec8cc61 +size 927479 diff --git a/data/2403.07359v4.png b/data/2403.07359v4.png new file mode 100644 index 0000000000000000000000000000000000000000..d8c944a89cc339889ad463439a5d81d515eab2fc --- /dev/null +++ b/data/2403.07359v4.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f75313970da5d8e3e1c16290f7cd13d2e683a26c924343c7d69608e3ebfd3097 +size 872780 diff --git a/data/2403.07369.png b/data/2403.07369.png new file mode 100644 index 0000000000000000000000000000000000000000..da77d779ddb970fe0211f93a7e27134660a999bb --- /dev/null +++ b/data/2403.07369.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e51b7274057202d4bd214f297d88cb294f465aed1f2bcca65c19c444b7597d2c +size 599012 diff --git a/data/2403.07392.png b/data/2403.07392.png new file mode 100644 index 0000000000000000000000000000000000000000..ea012dfba1dd591a95b3eae20e04363fde80b70b --- /dev/null +++ b/data/2403.07392.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:034dd54ef9d4eb6aa3a450214451aa0ab91e8f889e31e87c70fc0f0caeffcb02 +size 746459 diff --git a/data/2403.07432.png b/data/2403.07432.png new file mode 100644 index 0000000000000000000000000000000000000000..da75626b40e155995a91b33591a11d5394070906 --- /dev/null +++ b/data/2403.07432.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d7d01fb4952eee8c397a2bc6da0d689f0fd075d31bc919d5412305a9f0d17dc +size 903700 diff --git a/data/2403.07518v1.png b/data/2403.07518v1.png new file mode 100644 index 0000000000000000000000000000000000000000..ffb4338b9418c61994bc1bc912e4da8eb465dd42 --- /dev/null +++ b/data/2403.07518v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e0283d6b8089a5c11869a535d1ebb6390333b3b9c6e4d374d513fe9f46d10d92 +size 743367 diff --git a/data/2403.07532.png b/data/2403.07532.png new file mode 100644 index 0000000000000000000000000000000000000000..3762f82b9c1c728e01c53a22cc29ab42c8d8cb44 --- /dev/null +++ b/data/2403.07532.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f8b623c64818a9a81801f59765034e791495e861ef973ef3e9accea509fa2624 +size 868257 diff --git a/data/2403.07535.png b/data/2403.07535.png new file mode 100644 index 0000000000000000000000000000000000000000..01467b2e4cef394a7c920dc7db99fd7b504ac5c2 --- /dev/null +++ b/data/2403.07535.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:e465a026b2c87626308d831f63f2b34257a9a1d66bedaa27add804dd19930c26 +size 1343988 diff --git a/data/2403.07560v2.png b/data/2403.07560v2.png new file mode 100644 index 0000000000000000000000000000000000000000..def2118372aa3ad1f2dfce31f7a8e8e0225ae9b8 --- /dev/null +++ b/data/2403.07560v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f441e11cdaa4884cb2e1fc49466515ffd5a5f9bef87a96b4bb023cc5f599a793 +size 781042 diff --git a/data/2403.07589.png b/data/2403.07589.png new file mode 100644 index 0000000000000000000000000000000000000000..c5bf8ce107fc2f5a1fee90d70ad1edd8e2800d3a --- /dev/null +++ b/data/2403.07589.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e6de729c99dc6812a97c188b980e310f7d2d73e008aa78f502cf8201e476bc81 +size 813895 diff --git a/data/2403.07592v1.png b/data/2403.07592v1.png new file mode 100644 index 0000000000000000000000000000000000000000..88e80d5abc6c63c0c96d839abc2531e392a6d645 --- /dev/null +++ b/data/2403.07592v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:88eab8021e2ac5f70f5262d7d53a863e8480d93295711e93a5582b132d253efe +size 944527 diff --git a/data/2403.07630.png b/data/2403.07630.png new file mode 100644 index 0000000000000000000000000000000000000000..d9433cc64d0c3634faf4575c3a6fc637ee587d51 --- /dev/null +++ b/data/2403.07630.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:293ca20aaaf6a76efe4c3996cefb726e6dacb48cc536f8d1facce032d4ef688c +size 875861 diff --git a/data/2403.07636v2.png b/data/2403.07636v2.png new file mode 100644 index 0000000000000000000000000000000000000000..3d59e0305d3279910d3c8d1103ea8a6f7eaaa6ab --- /dev/null +++ b/data/2403.07636v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:44544f325b7a970f503d2b3da30d0ef319e4f2d28a2554d36e97470ad2557997 +size 813583 diff --git a/data/2403.07684.png b/data/2403.07684.png new file mode 100644 index 0000000000000000000000000000000000000000..a25f67a2936595c9b44f2d54dbd490b19ba2a533 --- /dev/null +++ b/data/2403.07684.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:41a83e9fc472b01321ca8a74dbe4b1a46a03a1a5a7200c3e19fd2634aca75e3a +size 856113 diff --git a/data/2403.07692.png b/data/2403.07692.png new file mode 100644 index 0000000000000000000000000000000000000000..92f53be8de5d97a05991f247e5f31df92614dada --- /dev/null +++ b/data/2403.07692.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e3ca73fe89d41a436f8f8ff67d5c6562d3b7382b46b4119bc079d3f77003dcd +size 722192 diff --git a/data/2403.07700.png b/data/2403.07700.png new file mode 100644 index 0000000000000000000000000000000000000000..86d5c7edcf1ae4b1b50381163cd536553cbcb6cf --- /dev/null +++ b/data/2403.07700.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e5b6fe5340543071d48390026e05caa00b155a0faf76bfce72a379ec4ca9aeec +size 752400 diff --git a/data/2403.07705.png b/data/2403.07705.png new file mode 100644 index 0000000000000000000000000000000000000000..82dfd94ff5a6fbd37801a7e0c5b85da414775c04 --- /dev/null +++ b/data/2403.07705.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:69003a0cef9b8a78de3ed962e8d0e550ebfb9d0c14642eb17268f9dcbace88f3 +size 802587 diff --git a/data/2403.07719.png b/data/2403.07719.png new file mode 100644 index 0000000000000000000000000000000000000000..581336a35870737905acba486a5e6b693583d1b7 --- /dev/null +++ b/data/2403.07719.png @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:4dbeeadc525455fbabcfebfb9dbd538ea17ae5dbee858368bb756f39fbd347a3 +size 747572 diff --git a/data/2403.07773.png b/data/2403.07773.png new file mode 100644 index 0000000000000000000000000000000000000000..bfbe879e39033533e6c4841c1c7be2e4f381877d --- /dev/null +++ b/data/2403.07773.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:687375aff11f27a091362b75601939ecd8f95b0872fd0c87a364dd89da965a91 +size 1231645 diff --git a/data/2403.07874.png b/data/2403.07874.png new file mode 100644 index 0000000000000000000000000000000000000000..a95d16fb3d57851428286d6c176cbbf13f17fd66 --- /dev/null +++ b/data/2403.07874.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2dd3fd234cc616ff9b23dd49724acbd9e84644f8b2374419b5c54416e550d635 +size 801981 diff --git a/data/2403.07939.png b/data/2403.07939.png new file mode 100644 index 0000000000000000000000000000000000000000..a235fd27886107a182978dda122d793a0a2ec3ce --- /dev/null +++ b/data/2403.07939.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a41bdbeeabe0550fcd8e8f16dbf8dcf39ab23b4eae53c0bbb13093c9070c6550 +size 820681 diff --git a/data/2403.08019.png b/data/2403.08019.png new file mode 100644 index 0000000000000000000000000000000000000000..1144a770b7a38cb93521fa933e4c61e9d470333d --- /dev/null +++ b/data/2403.08019.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:481e7f0f8e7018fc1b47f96bca2eb2e6619dc3ac6ae883b650a667bd5ea0c462 +size 851620 diff --git a/data/2403.08032.png b/data/2403.08032.png new file mode 100644 index 0000000000000000000000000000000000000000..262084e2bea85b795aca7653eaf7692994cae22d --- /dev/null +++ b/data/2403.08032.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3179cbad48eda39aaf60cab6c7f51577b3ce05c63a85036a72034eb91db91906 +size 436170 diff --git a/data/2403.08161.png b/data/2403.08161.png new file mode 100644 index 0000000000000000000000000000000000000000..2e0438003355ed657ef69d4347e7f4bee68df27a --- /dev/null +++ b/data/2403.08161.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cb2d26cf831a37bddc6035291da90f4b7ffc41547a3a24542f9c1c245dc363c8 +size 809909 diff --git a/data/2403.08182.png b/data/2403.08182.png new file mode 100644 index 0000000000000000000000000000000000000000..5e0fbb37977a5c8e84c65a0f1bb057ada2d6f389 --- /dev/null +++ b/data/2403.08182.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0bffab2457072d2cc5f2a5d997acd02b87a0c51135f2c4ad2b9b42423b992758 +size 1523824 diff --git a/data/2403.08262.png b/data/2403.08262.png new file mode 100644 index 0000000000000000000000000000000000000000..151893f49bc51f1caf9dc357ec38e4767d1b84b7 --- /dev/null +++ b/data/2403.08262.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fabb7909bb38a8c71cb887df04af1dcfe58707eadf3bff9415e81bde20191fc1 +size 851484 diff --git a/data/2403.08381.png b/data/2403.08381.png new file mode 100644 index 0000000000000000000000000000000000000000..7c5971d02f9787c87572e42cc1ec2e545d926ded --- /dev/null +++ b/data/2403.08381.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:29917b17a843e9107767ababce2bce6fc10e776604fe2392f51892437b3935d1 +size 1258869 diff --git a/data/2403.08426.png b/data/2403.08426.png new file mode 100644 index 0000000000000000000000000000000000000000..c4d60ea8bbf838f00ceb546d91c71f6607678bc9 --- /dev/null +++ b/data/2403.08426.png @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:f00265097ce59569d0cc5b77290a6d518441d83c60a863c2c144118cbd0ccd09 +size 871531 diff --git a/data/2403.08436.png b/data/2403.08436.png new file mode 100644 index 0000000000000000000000000000000000000000..1d465d96d64b69c592eeb5574e58e040ac7443f3 --- /dev/null +++ b/data/2403.08436.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7b5498c1a0224b7248ce413ffbcc04b600a45246a027d6490ec2be269aad641b +size 1496795 diff --git a/data/2403.08506.png b/data/2403.08506.png new file mode 100644 index 0000000000000000000000000000000000000000..58f79907ef3028616189cb6f1d6e499827d26382 --- /dev/null +++ b/data/2403.08506.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c1e8ab2b024cbdee58c268133e95fe11876f0469add034c46786116bb72b676 +size 805422 diff --git a/data/2403.08568.png b/data/2403.08568.png new file mode 100644 index 0000000000000000000000000000000000000000..abf2d53fdfe54aeaa0dd09b7df1b6ec60693e409 --- /dev/null +++ b/data/2403.08568.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1ee577a92ead1f1626d36369d798536ec78cae44c1b31863b815215d406b094 +size 780123 diff --git a/data/2403.08629.png b/data/2403.08629.png new file mode 100644 index 0000000000000000000000000000000000000000..d6af0a90764ca578ab371cef7ce3dc45dd71846b --- /dev/null +++ b/data/2403.08629.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6f75a16b5f381889ff466ccebb83468f534aba9e8871d937c3064d54646312d6 +size 1132398 diff --git a/data/2403.08639.png b/data/2403.08639.png new file mode 100644 index 0000000000000000000000000000000000000000..ec4ae49723d5a631c5292cd003f5f787bfb43622 --- /dev/null +++ b/data/2403.08639.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c499d0e9e2a261a73ab72e69c7f2b393be90ae10fd41f12a920349e57253221d +size 762779 diff --git a/data/2403.08748.png b/data/2403.08748.png new file mode 100644 index 0000000000000000000000000000000000000000..e5c3695be0702bddfbe3035cc3ada61b55b2a232 --- /dev/null +++ b/data/2403.08748.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c81233e4f6dd48afa9051de5a92e421c47a6a9a6d05d1cf3fc8cbb755cc3740 +size 1084025 diff --git a/data/2403.08768.png b/data/2403.08768.png new file mode 100644 index 0000000000000000000000000000000000000000..f47ab0e07e381d9963df03c6f2bdea8c3b8725ce --- /dev/null +++ b/data/2403.08768.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dff6bec92c5f10fdbaa1732a20d9255fba272061799f7bf089fb207c012d36b2 +size 1031089 diff --git a/data/2403.08770.png b/data/2403.08770.png new file mode 100644 index 0000000000000000000000000000000000000000..9f2e8faa4eb909a48808a00bb4f2b5a30e890a7d --- /dev/null +++ b/data/2403.08770.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ec56c1dc5cc90d5e886b89e7ee6b92f94355add69a191c5c500d30d6e7e452cc +size 712014 diff --git a/data/2403.08848.png b/data/2403.08848.png new file mode 100644 index 0000000000000000000000000000000000000000..07592239078798e8e1d77548fbd2f91131a227e8 --- /dev/null +++ b/data/2403.08848.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a9c7dba84b3e78cf4a6489b2b2b98033ad4eeb33f4d8cf6c96ebb1b8954fdc08 +size 999457 diff --git a/data/2403.08919.png b/data/2403.08919.png new file mode 100644 index 0000000000000000000000000000000000000000..d8e3cb0837f3c0549a4a0b5f44a74b0ba0bfd32b --- /dev/null +++ b/data/2403.08919.png @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:893c933c48bcdc5b20913a9ec08c3f5595916dd375d33d98598e84d92b12abac +size 782609 diff --git a/data/2403.09050.png b/data/2403.09050.png new file mode 100644 index 0000000000000000000000000000000000000000..4409e8e2f586927fd08def753d8333518517bbd9 --- /dev/null +++ b/data/2403.09050.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a6fe363df35fbf65370a145ef2d4e1a537659251cacc05ad805fb61e96e4cc57 +size 855114 diff --git a/data/2403.09093v1.png b/data/2403.09093v1.png new file mode 100644 index 0000000000000000000000000000000000000000..6a8ba29af4a49ce63b8b703c83dd1015a00aef36 --- /dev/null +++ b/data/2403.09093v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:004cef22d316160cba34e11d17f5c33cfb55b401e93d3bfd4fd4da24490157bd +size 1183583 diff --git a/data/2403.09101.png b/data/2403.09101.png new file mode 100644 index 0000000000000000000000000000000000000000..8ef0d5d475fa45fa6581a6dfda018869cf2d30c1 --- /dev/null +++ b/data/2403.09101.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:34e642619a54d2b3dcc91fdd50a8dcfd980f11e5ddecf226713ad7e923f3668f +size 792732 diff --git a/data/2403.09107.png b/data/2403.09107.png new file mode 100644 index 0000000000000000000000000000000000000000..51de668b39eae8f68b007bd7b6f99876b748a384 --- /dev/null +++ b/data/2403.09107.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1926da50c5ef359599370ff73f6cfcccfef216d94de0e4c63a09e87f5fdca758 +size 809013 diff --git a/data/2403.09124.png b/data/2403.09124.png new file mode 100644 index 0000000000000000000000000000000000000000..1555a5ed16ff3e7fb78acec233335bb232562d14 --- /dev/null +++ b/data/2403.09124.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9506940d6026ba2750d26937f54cbc7d0837b9b12bf5c977d6443c6f679c75a2 +size 972764 diff --git a/data/2403.09140.png b/data/2403.09140.png new file mode 100644 index 0000000000000000000000000000000000000000..0a5762147195be96fee78703985b2c68103a0252 --- /dev/null +++ b/data/2403.09140.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:704bf5bc7e423798f0ffec886a8a4338beead9e1ae67b3a04653966cc11ecd37 +size 869878 diff --git a/data/2403.09230.png b/data/2403.09230.png new file mode 100644 index 0000000000000000000000000000000000000000..be2a7ff51ac9d01304892cd058e6fbd7c4f7e9e7 --- /dev/null +++ b/data/2403.09230.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:282b292ff91afa8da1a812f415410c47dba89ce5afb3356a78f9216ae45a9820 +size 1194020 diff --git a/data/2403.09344.png b/data/2403.09344.png new file mode 100644 index 0000000000000000000000000000000000000000..cb77d6a44aceccfc9123d6a40ae774b5b445be0e --- /dev/null +++ b/data/2403.09344.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c4e406ca6a3809627aa5f7623452e38ed800f6d72150d03c82ccc5fd30b85985 +size 911840 diff --git a/data/2403.09359.png b/data/2403.09359.png new file mode 100644 index 0000000000000000000000000000000000000000..6023c515bd503a58d82e53e04a486f239f94c8fd --- /dev/null +++ b/data/2403.09359.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c095fbbbd637e37b616f14cc9d5f5c6a3920bb8f6ed5d7aecab4ccb0899937df +size 917948 diff --git a/data/2403.09439.png b/data/2403.09439.png new file mode 100644 index 0000000000000000000000000000000000000000..314fc6dbc34e4f2ca804d535430394a0877beb0e --- /dev/null +++ b/data/2403.09439.png @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:f4b26642c5243a7b96e551ba9323d5367ce560f4bbf29932d10ed8d27b70246a +size 961827 diff --git a/data/2403.09480.png b/data/2403.09480.png new file mode 100644 index 0000000000000000000000000000000000000000..9883a2a24f61bf43cb73a3b7129e84424b5125f1 --- /dev/null +++ b/data/2403.09480.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ed6e7cff3079f806136a82ba45269a8b608230dd52a7138a1e1c4643af5c3205 +size 817717 diff --git a/data/2403.09486.png b/data/2403.09486.png new file mode 100644 index 0000000000000000000000000000000000000000..cb87be1f7c7f05bb14333cdedc16e4cc1b42301e --- /dev/null +++ b/data/2403.09486.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0042cbbd6c17b6c27907340a85b4e236142ac1e526216e3110c5b287a85f1722 +size 494545 diff --git a/data/2403.09623.png b/data/2403.09623.png new file mode 100644 index 0000000000000000000000000000000000000000..a4d71844796537ad7e310ebed4f6b83dd5ff16fa --- /dev/null +++ b/data/2403.09623.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e9428549ccd116cb4aca59f79a46550a35f3e34d21b2c4804a3df9e542b3d129 +size 1360424 diff --git a/data/2403.09630.png b/data/2403.09630.png new file mode 100644 index 0000000000000000000000000000000000000000..6573ec48a19c32cfcdecf7cc497381d03a7417e9 --- /dev/null +++ b/data/2403.09630.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:08269287673854a8e6a62256e2e3194a531a72d022205471dfbfb5f3150da21b +size 1045250 diff --git a/data/2403.09632.png b/data/2403.09632.png new file mode 100644 index 0000000000000000000000000000000000000000..ac2063128a5411a8a5e476c3f8cc20e95b7e1890 --- /dev/null +++ b/data/2403.09632.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a361296dac1ea973795c8b7f28ec023dfc8cad6a4c5509224c11a4f935136564 +size 1187315 diff --git a/data/2403.09634.png b/data/2403.09634.png new file mode 100644 index 0000000000000000000000000000000000000000..7e0765db57aa61003862253d910b945e9ec7846f --- /dev/null +++ b/data/2403.09634.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6cc3602fd7911460b239dd0a9093b7c7193dc2e9e93879234b631f1f93c13733 +size 769417 diff --git a/data/2403.09639.png b/data/2403.09639.png new file mode 100644 index 0000000000000000000000000000000000000000..69c0e122c747cee82458fb152d469015970a9728 --- /dev/null +++ b/data/2403.09639.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:340fa96c315b37b684141f20d704c667858093161d352f9ebbc47889470a8084 +size 911873 diff --git a/data/2403.09731.png b/data/2403.09731.png new file mode 100644 index 0000000000000000000000000000000000000000..aafb50a9b3cdaea2a65d5e99dd7aab2c6b9bb4c7 --- /dev/null +++ b/data/2403.09731.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cdac9adc76062be934f379efbcb2420982aabec57e24cfa3b38cbb0658aead3a +size 584537 diff --git a/data/2403.09914.png b/data/2403.09914.png new file mode 100644 index 0000000000000000000000000000000000000000..0af0605afa0c19b54b1ee1c1dc02b152dff29a08 --- /dev/null +++ b/data/2403.09914.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:68d95c0cf4271d274e0e2038905bbd53e5efbf76270b0e42f545225a6aaa323c +size 964647 diff --git a/data/2403.10030.png b/data/2403.10030.png new file mode 100644 index 0000000000000000000000000000000000000000..1baa8e5551d1c2ccda74af5407feb73ad00e93b4 --- /dev/null +++ b/data/2403.10030.png @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:1fd03b187fbf98b88f5b696c2e07aa211b2675b11a2a75e7f479b7d95422880b +size 786975 diff --git a/data/2403.10052.png b/data/2403.10052.png new file mode 100644 index 0000000000000000000000000000000000000000..f43b3e7425f12c1deb42444ccf727fd6cd94278f --- /dev/null +++ b/data/2403.10052.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dfa0bfcf912b1b0c268edb7dfc9d56a96d4d5a6e5827b2a27960922c3f60614b +size 718323 diff --git a/data/2403.10064.png b/data/2403.10064.png new file mode 100644 index 0000000000000000000000000000000000000000..32ee84022e2ca435bd6a54e84ed9141aa92c6b66 --- /dev/null +++ b/data/2403.10064.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b5bc55347d1667c34975b96799f638fe8a201759f92ece26fb97fc5137222b0 +size 960488 diff --git a/data/2403.10066.png b/data/2403.10066.png new file mode 100644 index 0000000000000000000000000000000000000000..7ec814ce7b0f10e81138bfd0e27ee9c2ff910a5a --- /dev/null +++ b/data/2403.10066.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8558f717fc1f9a26e850ac991f5aaf84be85c2e2056a8eb4f04242ec9da275dd +size 850454 diff --git a/data/2403.10071.png b/data/2403.10071.png new file mode 100644 index 0000000000000000000000000000000000000000..3613216a79eebb274c7116814fd4594ee47cdcf7 --- /dev/null +++ b/data/2403.10071.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:00c03121fd33cc0fd6651e2115ef65fbd6900bea5490b043af21859bb57f1254 +size 847113 diff --git a/data/2403.10073.png b/data/2403.10073.png new file mode 100644 index 0000000000000000000000000000000000000000..6a53f789bdaea7c069a93537e30554524895a870 --- /dev/null +++ b/data/2403.10073.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e188788cd45f28571019608655cb71e715a79bd6417f16f0e0d6c4be04d3a4fa +size 724971 diff --git a/data/2403.10097.png b/data/2403.10097.png new file mode 100644 index 0000000000000000000000000000000000000000..a1404256db5ec93a9bfaa7d7fb39af2e18b234a2 --- /dev/null +++ b/data/2403.10097.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:419f11b156a4d4f0764963747cc1a14861b75dce383ff85df020c7f98217c3b4 +size 802619 diff --git a/data/2403.10099.png b/data/2403.10099.png new file mode 100644 index 0000000000000000000000000000000000000000..9931d4560267a33b26b30c86cd0ffbee0f41ff44 --- /dev/null +++ b/data/2403.10099.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:48a2d54a12486a7797636d674edc2248670d896ebe4732ad007931d477b8b174 +size 926717 diff --git a/data/2403.10103.png b/data/2403.10103.png new file mode 100644 index 0000000000000000000000000000000000000000..2606588f97f090c48900e9a1b793ad2b1c519ec3 --- /dev/null +++ b/data/2403.10103.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d20ca2ee5f7aaa5c22479d8148a248385702bd1a96fab1f5474e8788fe003e7f +size 1105154 diff --git a/data/2403.10145.png b/data/2403.10145.png new file mode 100644 index 0000000000000000000000000000000000000000..b0477942bc752ecdc5fa10a53e27445cae161a19 --- /dev/null +++ b/data/2403.10145.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:26e52b96ef17dcaa933b2ba5ef02adc339eeac2affd937823601744fdc6894f1 +size 842021 diff --git a/data/2403.10191.png b/data/2403.10191.png new file mode 100644 index 0000000000000000000000000000000000000000..92f89d9817d7d241099fc16fb8680f8ca9a3f821 --- /dev/null +++ b/data/2403.10191.png @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:867955ecb997cfe5c9d4cf9fb0d1719ef4c20e7566460e8168eea22d92d9f6ee +size 796118 diff --git a/data/2403.10254.png b/data/2403.10254.png new file mode 100644 index 0000000000000000000000000000000000000000..cef7b642d150d6cf9769d601dfb05cdd7e6e51b1 --- /dev/null +++ b/data/2403.10254.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e49aad093b770fcdbb8beb60472cd1fa12fc3aa0f3e0343384b8da8181ee331e +size 846489 diff --git a/data/2403.10255.png b/data/2403.10255.png new file mode 100644 index 0000000000000000000000000000000000000000..5f1e9f5fe3e82ea95f523e2a8b8cfde82f7962e1 --- /dev/null +++ b/data/2403.10255.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:73343b6e3ced52aeaa7b52cc943f152c1b3b52793be06bf65b829fdec33df517 +size 1451991 diff --git a/data/2403.10255v1.png b/data/2403.10255v1.png new file mode 100644 index 0000000000000000000000000000000000000000..5f1e9f5fe3e82ea95f523e2a8b8cfde82f7962e1 --- /dev/null +++ b/data/2403.10255v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:73343b6e3ced52aeaa7b52cc943f152c1b3b52793be06bf65b829fdec33df517 +size 1451991 diff --git a/data/2403.10335.png b/data/2403.10335.png new file mode 100644 index 0000000000000000000000000000000000000000..624110b33bbb44abfe45df63850633af00fb1c03 --- /dev/null +++ b/data/2403.10335.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1f3975d198b4027b6a0617ab8823b81e7847b19da2a7f1de1be99720e04605b +size 853911 diff --git a/data/2403.10357.png b/data/2403.10357.png new file mode 100644 index 0000000000000000000000000000000000000000..9afe6b5b3f89f6c89090583cf357e3e06339dff0 --- /dev/null +++ b/data/2403.10357.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8636376e1be030ad9bd513e6de555e5685158f96059b59d3f76942835686642f +size 974125 diff --git a/data/2403.10362.png b/data/2403.10362.png new file mode 100644 index 0000000000000000000000000000000000000000..1815ec0a82280780871d13b9981881da45f12c0b --- /dev/null +++ b/data/2403.10362.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f11dbd208c01dcd8381e77d84e737345dec16326a21b875aa7f0d4efd9bc84a +size 815670 diff --git a/data/2403.10391.png b/data/2403.10391.png new file mode 100644 index 0000000000000000000000000000000000000000..8e1ae958926bbb461d7140369e7fd8117a934c49 --- /dev/null +++ b/data/2403.10391.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f13e97b81727cf821ef6006c3f9ab7e42edad563c0cf15d8338f46325264b726 +size 789401 diff --git a/data/2403.10518.png b/data/2403.10518.png new file mode 100644 index 0000000000000000000000000000000000000000..d709ec9cfece04a7c71cff2db12b534d94b7d278 --- /dev/null +++ b/data/2403.10518.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a54a11bbf6a6a13cacf371abe0ded7d4e4cd5c9d8884cd6105459478a9e06c36 +size 679405 diff --git a/data/2403.10519.png b/data/2403.10519.png new file mode 100644 index 0000000000000000000000000000000000000000..99cd6c79783154f138340e99b058eb03da117df8 --- /dev/null +++ b/data/2403.10519.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ebe1265574f9ce5509507f312dfe1671343e37aa299fdacd2941022e6d50815b +size 759238 diff --git a/data/2403.10574.png b/data/2403.10574.png new file mode 100644 index 0000000000000000000000000000000000000000..db5fbc01939a20e7d45eec83956abeea2eb7fef6 --- /dev/null +++ b/data/2403.10574.png @@ -0,0 +1,3 
@@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6fed16d1e46c1c5becc671737c5fe1aba8a658434c2a2ff27e670672f7f5e962 +size 688110 diff --git a/data/2403.10615.png b/data/2403.10615.png new file mode 100644 index 0000000000000000000000000000000000000000..fe5dfa4e7d40c75cf3f42749b4bd447cd953a7dc --- /dev/null +++ b/data/2403.10615.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5ea136b89ee5fed2b468e9e21e7a9c7dfadfe17fd2a2f962d8ce6b97fee77ff6 +size 1495518 diff --git a/data/2403.10701.png b/data/2403.10701.png new file mode 100644 index 0000000000000000000000000000000000000000..8a56909a69eab5a505796f876d4b5d0df22ac787 --- /dev/null +++ b/data/2403.10701.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:39987f4ef3b61e1f23db64affc3138978f63407197f071dd95f90dc6eb414a83 +size 2675135 diff --git a/data/2403.10799.png b/data/2403.10799.png new file mode 100644 index 0000000000000000000000000000000000000000..c0ac52570d56ee717b32a7b724224adfdda88873 --- /dev/null +++ b/data/2403.10799.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:60b4958fd8b2cab35a23fe35c9fd5cc0d13acd2edc5497e9e5713fd4cc3e9d9b +size 443433 diff --git a/data/2403.10815.png b/data/2403.10815.png new file mode 100644 index 0000000000000000000000000000000000000000..51172edc614e2db921204423eaa62ca91e997b02 --- /dev/null +++ b/data/2403.10815.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5af1412b701d8fb8fa8b19876bde403d0dc6313d5667baf231f0af5d600250de +size 828428 diff --git a/data/2403.10897.png b/data/2403.10897.png new file mode 100644 index 0000000000000000000000000000000000000000..a2a6d0fc71fcbb13b7ba080e6f73e2fa398316a1 --- /dev/null +++ b/data/2403.10897.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f72fb42ccada456c781581c4ce13630dfeff6ef56c16e9502ad23cbc56131711 +size 734647 diff --git a/data/2403.10988.png b/data/2403.10988.png new file mode 100644 index 0000000000000000000000000000000000000000..9737427808cafd287b8398aec725ba53aeda8184 --- /dev/null +++ b/data/2403.10988.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:43c27083420cbc779681cfca56c7b37960b07a996423bd671a57cb5de6aa3878 +size 1150370 diff --git a/data/2403.11074.png b/data/2403.11074.png new file mode 100644 index 0000000000000000000000000000000000000000..76d00e73eb3acf57d19059792356b3465b9ca754 --- /dev/null +++ b/data/2403.11074.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:97688e2c7a2feaf4fe48101f97bdd4ff50e86171462354884569987863b0c74b +size 922123 diff --git a/data/2403.11113.png b/data/2403.11113.png new file mode 100644 index 0000000000000000000000000000000000000000..56250613f3274ec5e0df2fc9dfa224a9eb614640 --- /dev/null +++ b/data/2403.11113.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fa4075295af910016d26a0507b7cb54e87ec44b6bf0b36b0454eaf43208e66e9 +size 735169 diff --git a/data/2403.11157.png b/data/2403.11157.png new file mode 100644 index 0000000000000000000000000000000000000000..6d3f29c5c2dad81bc6f1ea079fa3bc2264787699 --- /dev/null +++ b/data/2403.11157.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e3f3000485f5cb05e781e551485af7a2b60fc1b61177f29a54d78a599ecb0b54 +size 264228 diff --git a/data/2403.11162.png b/data/2403.11162.png new file mode 100644 index 0000000000000000000000000000000000000000..84313aae962047160b55d176e25a6537e71dab7f --- /dev/null +++ b/data/2403.11162.png @@ -0,0 +1,3 
@@ +version https://git-lfs.github.com/spec/v1 +oid sha256:29d10c750303ef978ea04939bc941ec043ed42eddd70099c253f506c334d1428 +size 853492 diff --git a/data/2403.11184.png b/data/2403.11184.png new file mode 100644 index 0000000000000000000000000000000000000000..54d1a7f4b37743ae2c1265bac2205fc4fc42305b --- /dev/null +++ b/data/2403.11184.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:68c8d363d88a5effb3a9bd454b08a24641b1fc4e7a30aff4fcbca3cd15078e59 +size 790334 diff --git a/data/2403.11186.png b/data/2403.11186.png new file mode 100644 index 0000000000000000000000000000000000000000..e343a770e8b01dd74fec80e450e920b5944dcd05 --- /dev/null +++ b/data/2403.11186.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:848f80a11f8c6643658168ef99ec8530ecd741f727e26c89cc64b22f2fbfe5a4 +size 962832 diff --git a/data/2403.11193.png b/data/2403.11193.png new file mode 100644 index 0000000000000000000000000000000000000000..cbedb6cd7c287202d421b21169c0eee429cad5f6 --- /dev/null +++ b/data/2403.11193.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f69c9a6d34609f3099709228712f7b5165c9967b25f5275d95743c18ecb3de74 +size 788360 diff --git a/data/2403.11222.png b/data/2403.11222.png new file mode 100644 index 0000000000000000000000000000000000000000..d4bcbf40d8a29d1bb01caa5ceac9e7f7a7ffcd6d --- /dev/null +++ b/data/2403.11222.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:baf83aaf7d1339ba21c125f9a39b919ea247a901ba6618f3aca2e973c2f61699 +size 821935 diff --git a/data/2403.11234.png b/data/2403.11234.png new file mode 100644 index 0000000000000000000000000000000000000000..c43a659e90a06d3cf85f264063b747dbdba74935 --- /dev/null +++ b/data/2403.11234.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f471134b7e2c5ec4981f31158e11d1490a351353f3f4ebc92d3527c9c17ca6e +size 776467 diff --git a/data/2403.11256.png b/data/2403.11256.png new file mode 100644 index 0000000000000000000000000000000000000000..ec3b575af7a4ccb6ef0f9c4f9fea0f86ca2ce956 --- /dev/null +++ b/data/2403.11256.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c886f8754c2b8f55a4a8d1993c9a5eaf9520a2562652fd684c1e1f4f16eaac20 +size 745399 diff --git a/data/2403.11270.png b/data/2403.11270.png new file mode 100644 index 0000000000000000000000000000000000000000..75efdf60244817d4898bbdbb4139baba16736cb1 --- /dev/null +++ b/data/2403.11270.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7b691ab21654c3cd93ad5287a92a470f28efd336acbddb0ea430855d93f42efe +size 848912 diff --git a/data/2403.11284v1.png b/data/2403.11284v1.png new file mode 100644 index 0000000000000000000000000000000000000000..1c0b9e47574cb568320d57b1e69978ec3d3e253d --- /dev/null +++ b/data/2403.11284v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:abb22166340fd5898bc945f011ab8904cf0e184c2da1c17a8554ce2c66c8ef9e +size 980697 diff --git a/data/2403.11310.png b/data/2403.11310.png new file mode 100644 index 0000000000000000000000000000000000000000..e4bf0ac7010b3bc5ef6fdbffa376aae65eddc5bf --- /dev/null +++ b/data/2403.11310.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aa568e739eb9e86f8756bb30465512106088960b31082bc40ff04f9cfc13d10e +size 727016 diff --git a/data/2403.11380.png b/data/2403.11380.png new file mode 100644 index 0000000000000000000000000000000000000000..33369523227ca5a8c42aa512ebb1de9c88b6aff1 --- /dev/null +++ b/data/2403.11380.png @@ -0,0 
+1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:386a9a1b18a71b601ebeb7dc5f204aa8a84a68ed060fe48bc76cc1912e7e9427 +size 709175 diff --git a/data/2403.11397.png b/data/2403.11397.png new file mode 100644 index 0000000000000000000000000000000000000000..8ed7be0f68e169b75fdadaf34458b15d2323f174 --- /dev/null +++ b/data/2403.11397.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f322183a110597b9a1a029ea5e30ad451a13605d189b83a6df0bb68ec0aa3f4 +size 1014913 diff --git a/data/2403.11448.png b/data/2403.11448.png new file mode 100644 index 0000000000000000000000000000000000000000..b28e961c5cca844fde59aadcb95da3a2b8d86cdb --- /dev/null +++ b/data/2403.11448.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d99a18a0c2f51573f957be28599e962d6a1605ac9ef6c52803f990f6f956caa4 +size 768970 diff --git a/data/2403.11463v2.png b/data/2403.11463v2.png new file mode 100644 index 0000000000000000000000000000000000000000..b8852ab2a8da2a3b7109a22370c197712f46145b --- /dev/null +++ b/data/2403.11463v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dae2546044f26f7f3eb16b9bcef25dd73c20491e6f620bf651d754a8b742cb43 +size 994083 diff --git a/data/2403.11492.png b/data/2403.11492.png new file mode 100644 index 0000000000000000000000000000000000000000..95e932d2720acbc00077192621194db3e7878175 --- /dev/null +++ b/data/2403.11492.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1e33d6c78816767389adbbaee00e2a1bbe9baae7ff6dd8f7f75058fa423a4031 +size 810140 diff --git a/data/2403.11496.png b/data/2403.11496.png new file mode 100644 index 0000000000000000000000000000000000000000..b378c9025428430b169eac3550ce19232b38f005 --- /dev/null +++ b/data/2403.11496.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8e725ca8e81174167315f92b504fe893c6a993bbdbafd2dcb3fe6c52e299fc3d +size 1563007 diff --git a/data/2403.11510.png b/data/2403.11510.png new file mode 100644 index 0000000000000000000000000000000000000000..ab33d5d0ea52116f2330eb85c1afd491999e2130 --- /dev/null +++ b/data/2403.11510.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3cc11dafa31ac70692585d6c992ee20518cc390d64d52e3baf95986951e9fd74 +size 880108 diff --git a/data/2403.11529.png b/data/2403.11529.png new file mode 100644 index 0000000000000000000000000000000000000000..90ee77f32acef7110b43791514df5d20b00135cb --- /dev/null +++ b/data/2403.11529.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8f5165170341002c836cd47b7554f86c7480e1364f59fbf99b5d89960c4752e9 +size 783167 diff --git a/data/2403.11530.png b/data/2403.11530.png new file mode 100644 index 0000000000000000000000000000000000000000..bef60dabb5abbc764840182ee598eebfef0c30b7 --- /dev/null +++ b/data/2403.11530.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:281f699dd98dc3671b8ed9fe98ba5b0be94ecbb5aadcc1f59732cc2b097d00df +size 745609 diff --git a/data/2403.11549.png b/data/2403.11549.png new file mode 100644 index 0000000000000000000000000000000000000000..35c5b0f7c78f9fdc7dfabf317e70ce34870ed4ad --- /dev/null +++ b/data/2403.11549.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:81bf90c10fa8754b2653f5ebc4c492755059f9b70fef1466a17bdb282d94e31f +size 765558 diff --git a/data/2403.11674.png b/data/2403.11674.png new file mode 100644 index 0000000000000000000000000000000000000000..01bd404426e78e66ab42247ad36ef1c7d2d89864 --- /dev/null +++ b/data/2403.11674.png @@ 
-0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ece7118443b487ab12511e0fd81fa94bd6d50a6813b6d3c36703d1d223bcd27e +size 957384 diff --git a/data/2403.11708v2.png b/data/2403.11708v2.png new file mode 100644 index 0000000000000000000000000000000000000000..8606d61fb87fc8504710d0ccd7d49ea2e2330709 --- /dev/null +++ b/data/2403.11708v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e6656add5c8c3786cd46c245238bf643ea1ef94bcd4cb834226d725961e751e1 +size 830948 diff --git a/data/2403.11812.png b/data/2403.11812.png new file mode 100644 index 0000000000000000000000000000000000000000..75326cf4f3fddc09fe2ce4bbbf2c8ba883bcbfab --- /dev/null +++ b/data/2403.11812.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:97e37d553c27a42bf07a9119107196e4fa77e8466a784a87909047ad87d3facd +size 1007121 diff --git a/data/2403.11882.png b/data/2403.11882.png new file mode 100644 index 0000000000000000000000000000000000000000..844a4b274a11bd4c5d65b4dd8ee1ebd074aa0bb9 --- /dev/null +++ b/data/2403.11882.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d058d4aee670875e4cc96185c5ac5f1faeca4c11451e61904639d3239b6d2676 +size 802756 diff --git a/data/2403.12011.png b/data/2403.12011.png new file mode 100644 index 0000000000000000000000000000000000000000..8e6a645d6c9c0ff1b776baa46463693a9114260d --- /dev/null +++ b/data/2403.12011.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e8adf2d303a1f05c5517b5ed8072da3fc0c87e16eab15704220477c564e777df +size 1843521 diff --git a/data/2403.12015.png b/data/2403.12015.png new file mode 100644 index 0000000000000000000000000000000000000000..d27e30e1e88cdcfa4759fd6ba3de6d22d809bbc4 --- /dev/null +++ b/data/2403.12015.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:be3845aa44beb5a4ee428d97057f9dd4596a57d6a9d67b3f01762cf1e6d4532d +size 1515024 diff --git a/data/2403.12030v1.png b/data/2403.12030v1.png new file mode 100644 index 0000000000000000000000000000000000000000..ff1f052bc6ee2d937303c124e02f5e0ac2a5ed8b --- /dev/null +++ b/data/2403.12030v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b7f23ad2147d8938620e569e703e0385a57f7b76b862798ea23f05f518ce8e7f +size 742775 diff --git a/data/2403.12033.png b/data/2403.12033.png new file mode 100644 index 0000000000000000000000000000000000000000..c0305681d8ac7850f14f30d274f958f4e1f46749 --- /dev/null +++ b/data/2403.12033.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:036a32bd36dc3464e3d228739628ecd4000fdbc95dd6e82924bc380eac47dcd4 +size 978369 diff --git a/data/2403.12202.png b/data/2403.12202.png new file mode 100644 index 0000000000000000000000000000000000000000..3257e1a44684bfdebfd27b330ffd44a93cf19a18 --- /dev/null +++ b/data/2403.12202.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a71551284338db698eb918fde7f6be978ce5d1eb9668b8e308f79b61cb0867db +size 818748 diff --git a/data/2403.12236.png b/data/2403.12236.png new file mode 100644 index 0000000000000000000000000000000000000000..cb04e9e3a6d78bb1492cd0838384f5a3ae22f93a --- /dev/null +++ b/data/2403.12236.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:350f6cd4061b2a16e1faa8e8435efbbc7f71c7bb2a4e2dfc5753d53f95959811 +size 794027 diff --git a/data/2403.12350.png b/data/2403.12350.png new file mode 100644 index 0000000000000000000000000000000000000000..ee49add2b6e6c3ce2fb47fe6e984e36c9c83efe9 --- /dev/null +++ 
b/data/2403.12350.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:47e8ac3f99b205c5dceda42ed397a87f0ca3bc52f9272ed2653e0e8f23d2ff51 +size 787201 diff --git a/data/2403.12457.png b/data/2403.12457.png new file mode 100644 index 0000000000000000000000000000000000000000..f7eb0cd03671259de053d50ccbe7a96ad4dcb2a0 --- /dev/null +++ b/data/2403.12457.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:920ac3812e11ac5083780a6f64092c9cf01f666a43b649c44139a0c7508692ad +size 864882 diff --git a/data/2403.12473.png b/data/2403.12473.png new file mode 100644 index 0000000000000000000000000000000000000000..97f17b0aa1d9786417d6ccb4ab42a2c6eedf30af --- /dev/null +++ b/data/2403.12473.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:335571adcf22a88decfe8e0a688987db1d8a5670bc3fe4e2dd0bed613d5a7831 +size 879156 diff --git a/data/2403.12494.png b/data/2403.12494.png new file mode 100644 index 0000000000000000000000000000000000000000..5cb15e87917ebe692ae2ceae227a5375b3fa9a92 --- /dev/null +++ b/data/2403.12494.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3dcfde6759e2dfa7a4d98f1a0a55c210b7a0429b1db4248a334e13612c1ae702 +size 1158291 diff --git a/data/2403.12505.png b/data/2403.12505.png new file mode 100644 index 0000000000000000000000000000000000000000..e23e8234094ef8eab94f7c22070d62abfa6f2e55 --- /dev/null +++ b/data/2403.12505.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b543f7d716ab56b356b5c6c31480fc04ce386baf41f27e54a51ee30dfe7e5697 +size 831766 diff --git a/data/2403.12534.png b/data/2403.12534.png new file mode 100644 index 0000000000000000000000000000000000000000..119adc0cf33395750d944f3a3219b18e14f3e53e --- /dev/null +++ b/data/2403.12534.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:357bd591669d328562c6317e4cb67917dded51321cee72428b3de77608e35ca3 +size 822530 diff --git a/data/2403.12570.png b/data/2403.12570.png new file mode 100644 index 0000000000000000000000000000000000000000..f30f9a4ff95463fded7f413b5c32c2e9f9a80f61 --- /dev/null +++ b/data/2403.12570.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9fb81a1ddd0860b2ce880bdc175b0ddb0fa0a8d92e1d97178da32bbe725eb6ae +size 816963 diff --git a/data/2403.12580.png b/data/2403.12580.png new file mode 100644 index 0000000000000000000000000000000000000000..08bc05b7d1778cc85771a2b57c68006ae8f35c60 --- /dev/null +++ b/data/2403.12580.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8b5310bc112e0c98fb6884c472d1c56c60d62ec42e9827382a869dc01dc133ca +size 785961 diff --git a/data/2403.12710.png b/data/2403.12710.png new file mode 100644 index 0000000000000000000000000000000000000000..43bfbf2032c445cfc086a0f44ea0be66081448c6 --- /dev/null +++ b/data/2403.12710.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0d91c24db71a90a686a3da39c21fc5ff7215b24f623928cadc34d2c82b778dfc +size 988385 diff --git a/data/2403.12722.png b/data/2403.12722.png new file mode 100644 index 0000000000000000000000000000000000000000..7e4d3e99d6d712b1b22ada645a6ce3b32682b839 --- /dev/null +++ b/data/2403.12722.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:84f8c86cc1b86db0daf529a90026d126812cf5e096969378bf6bb6c082d56545 +size 1146546 diff --git a/data/2403.12728.png b/data/2403.12728.png new file mode 100644 index 0000000000000000000000000000000000000000..1b55383bb0fad957999f0fd68239b50a3f31942b --- /dev/null +++ 
b/data/2403.12728.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ff681d2bf840bd7d4db83a82da08b058739dfb20a47aaca689889be58f76faec +size 881568 diff --git a/data/2403.12760.png b/data/2403.12760.png new file mode 100644 index 0000000000000000000000000000000000000000..91739efcc1724cba96f7f59a4e047a5a4e573cc9 --- /dev/null +++ b/data/2403.12760.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cedabbdde346dbb06fda8981d8311774ac13c01b95ef09647272a67da5fe6b59 +size 1657274 diff --git a/data/2403.12777.png b/data/2403.12777.png new file mode 100644 index 0000000000000000000000000000000000000000..dee330485d22dd1136f19920203f681a87d42b63 --- /dev/null +++ b/data/2403.12777.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:111751caae3d7b9bd8cf94d7e6d5aef10b142ed61e40d49efee66fc8f939a0a4 +size 910393 diff --git a/data/2403.12821.png b/data/2403.12821.png new file mode 100644 index 0000000000000000000000000000000000000000..ea5f96643472298edf8ed2472f93f81a3e7c6039 --- /dev/null +++ b/data/2403.12821.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:102d2bee1ca1225778a87c77204f56cc634e32af68de0c9d2f843e5898a12731 +size 787226 diff --git a/data/2403.12835.png b/data/2403.12835.png new file mode 100644 index 0000000000000000000000000000000000000000..6651369e3ec9508aafa2cd989450287daf82c99d --- /dev/null +++ b/data/2403.12835.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1899e24dc5ece934f0a750a020affe25d07269612d751f9a40043e46a3f039a4 +size 1038725 diff --git a/data/2403.12870.png b/data/2403.12870.png new file mode 100644 index 0000000000000000000000000000000000000000..3b7a6d7a6c84d556d9a22b5c3c29b0873d18dde4 --- /dev/null +++ b/data/2403.12870.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:96de7a4276262aee109ab27f8919ed5b19fdc2a4485df2f9b21c19c429eb69f8 +size 966868 diff --git a/data/2403.12933.png b/data/2403.12933.png new file mode 100644 index 0000000000000000000000000000000000000000..03a73c80448e317e94464063f0c0afffad47a3b6 --- /dev/null +++ b/data/2403.12933.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0219fa43defb874dc8cb8b91824fc777b5792162939bd8702c05b647397d8a73 +size 1077571 diff --git a/data/2403.12962.png b/data/2403.12962.png new file mode 100644 index 0000000000000000000000000000000000000000..19e9bfb38de52b3093ae666ab65bae80539d49f2 --- /dev/null +++ b/data/2403.12962.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:875fb6be64df074730671825cc8c8e6b4e2a20162279e2e6b2ecde65805b51c7 +size 1336381 diff --git a/data/2403.13171.png b/data/2403.13171.png new file mode 100644 index 0000000000000000000000000000000000000000..a832ce27fe3280dd9cd5e6cd0b3f4bb060b76258 --- /dev/null +++ b/data/2403.13171.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:79ea654450d2a0890fce6ed434725e3ba50989ba44bae04f1163defcf0d1127c +size 1129962 diff --git a/data/2403.13261.png b/data/2403.13261.png new file mode 100644 index 0000000000000000000000000000000000000000..9b79ce69daa0a3ded474e8fdae554886f0dc499a --- /dev/null +++ b/data/2403.13261.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b74ed0e4c156519bb6c1f75d2c276fbf6bc3311e42331719170ff7f3e4dedca5 +size 753736 diff --git a/data/2403.13263.png b/data/2403.13263.png new file mode 100644 index 0000000000000000000000000000000000000000..0d02eeb28dbc5ce8b52b53f0247b11247cc2f0a9 --- /dev/null +++ 
b/data/2403.13263.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:41b25153b23c0ad1a9104d525971b92efd4f0544651ff0f36bf6806aff215c47 +size 918650 diff --git a/data/2403.13293.png b/data/2403.13293.png new file mode 100644 index 0000000000000000000000000000000000000000..f5a20c4136e4db8ae222addd72b08e6a78d4cbda --- /dev/null +++ b/data/2403.13293.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c05b9ef2ff3020e10575762432f764c49abd26ae8f337a4807b971874ebfcb8 +size 792602 diff --git a/data/2403.13304.png b/data/2403.13304.png new file mode 100644 index 0000000000000000000000000000000000000000..1f78bde1719dd20b6116adc8dd0d9ccc141f71e0 --- /dev/null +++ b/data/2403.13304.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3f954e304c529c0cd1092bb9a8c351536a1d4d6866c7cad85496c58099d0fd81 +size 723766 diff --git a/data/2403.13347.png b/data/2403.13347.png new file mode 100644 index 0000000000000000000000000000000000000000..ed057bb977aaa43f342c4b8379121d4ed8eeffe1 --- /dev/null +++ b/data/2403.13347.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:64b07cb8445af4586a5f5eb4c2187423febded30e66d1d501a792d4f427c883a +size 670420 diff --git a/data/2403.13351v1.png b/data/2403.13351v1.png new file mode 100644 index 0000000000000000000000000000000000000000..ccee1be89e68be5652c050e26191b78a962f1595 --- /dev/null +++ b/data/2403.13351v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c05e20ffbcd113a958e9ae8d3be7db5c4a83d59a2595749c2f3d1982bbfa07e8 +size 755255 diff --git a/data/2403.13417.png b/data/2403.13417.png new file mode 100644 index 0000000000000000000000000000000000000000..69d4e248b6bdee56ba941441396f98c9af790299 --- /dev/null +++ b/data/2403.13417.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:06cade51c1e0374270be86758a37970b0639d61750369acc81733de5302c3df9 +size 823683 diff --git a/data/2403.13470v1.png b/data/2403.13470v1.png new file mode 100644 index 0000000000000000000000000000000000000000..94edd612ddc0cb97e31a1885d253d21d21f750e6 --- /dev/null +++ b/data/2403.13470v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:77cea61211b2500d055f64b3a76ff9a2a43c9d3522b96a6289e04946161ba935 +size 1347978 diff --git a/data/2403.13512.png b/data/2403.13512.png new file mode 100644 index 0000000000000000000000000000000000000000..06884c5f8c6b3f4abbde8ebf7120682fc09d2e24 --- /dev/null +++ b/data/2403.13512.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:91bb4641fe7f56069ff19b5c5f14cbf3d107fca4c5606676af30c13f70f0ac51 +size 905294 diff --git a/data/2403.13548.png b/data/2403.13548.png new file mode 100644 index 0000000000000000000000000000000000000000..001374ed5263e3dff25179b19c8ded14f8620f6f --- /dev/null +++ b/data/2403.13548.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:15f5af80a1d0edc1320551f631919cfc5dcb73d9a899ea6c27f506c893d57300 +size 776306 diff --git a/data/2403.13647.png b/data/2403.13647.png new file mode 100644 index 0000000000000000000000000000000000000000..ddcaa5b8e9e4983995daea9f42dc31de76667dce --- /dev/null +++ b/data/2403.13647.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d8aee2677a7a45345e513be0bcdecd92e7a73f118bae40e2f93271c04419610b +size 970180 diff --git a/data/2403.13667.png b/data/2403.13667.png new file mode 100644 index 0000000000000000000000000000000000000000..7f489327045e6d17954aa3a812138aa31a205b44 --- 
/dev/null +++ b/data/2403.13667.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c6b903dcd9be9a7c21ec0b36ebe84c982b618d93697df608241aeb57eb53de7f +size 922895 diff --git a/data/2403.13683.png b/data/2403.13683.png new file mode 100644 index 0000000000000000000000000000000000000000..17193b5219e09254fb1da5b2cacab68f8369c3e5 --- /dev/null +++ b/data/2403.13683.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:76b9240d08b7fc8900607b8f12995b890d86584a6767ac8c734ab6d08ca05dfe +size 772038 diff --git a/data/2403.13870.png b/data/2403.13870.png new file mode 100644 index 0000000000000000000000000000000000000000..04dccddfd690058d33640f407bc93a3c28cad676 --- /dev/null +++ b/data/2403.13870.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7ef750b58b425126f7c788ceda168bbca8cc4f6477ff3cc26e18122e3777c169 +size 807052 diff --git a/data/2403.14003.png b/data/2403.14003.png new file mode 100644 index 0000000000000000000000000000000000000000..3fdaf11e84857420665eb838429eeff27e4497c9 --- /dev/null +++ b/data/2403.14003.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b5ae9e2cbedbace27492aac8dc2cc9ec4ea68eeb871ed8fb181ada9611698b2d +size 790015 diff --git a/data/2403.14082.png b/data/2403.14082.png new file mode 100644 index 0000000000000000000000000000000000000000..744a6c9fa4764ab70dfe8de14245ead4b21aea2d --- /dev/null +++ b/data/2403.14082.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f45066cd14e6662acb7e67889123622ad0889197bba86d9a4ac12ae7daba8c1 +size 762705 diff --git a/data/2403.14101.png b/data/2403.14101.png new file mode 100644 index 0000000000000000000000000000000000000000..37cbb4a4156aac489a1ea15425c166f992fa1d00 --- /dev/null +++ b/data/2403.14101.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2f3449d9199b469c0552e71524993a1881a5ba045e6dfe45ec8ab3e6809dcea4 +size 820684 diff --git a/data/2403.14118.png b/data/2403.14118.png new file mode 100644 index 0000000000000000000000000000000000000000..84cecfe01e321a17f3fa628b5a934cc0f011c644 --- /dev/null +++ b/data/2403.14118.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9a8571c3e8cd80d561e0beeaed4409ad4c5bd39ba256e165258411bd6faf3591 +size 916487 diff --git a/data/2403.14158v1.png b/data/2403.14158v1.png new file mode 100644 index 0000000000000000000000000000000000000000..b56c8ddb8481dbecb74b22a5b63d74990406c0b3 --- /dev/null +++ b/data/2403.14158v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5e51c94b4b2966c1f6de0d14ff67f5e3b8c3f13b340e72849d99248031342ce8 +size 955073 diff --git a/data/2403.14186.png b/data/2403.14186.png new file mode 100644 index 0000000000000000000000000000000000000000..7a66d608b342c7ef0d10eb41597f14b4d8fe627d --- /dev/null +++ b/data/2403.14186.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9eeb7cbbb691c169dcea4053227a22cbd506ebec580f3c6a09f427813922c396 +size 1148213 diff --git a/data/2403.14198v1.png b/data/2403.14198v1.png new file mode 100644 index 0000000000000000000000000000000000000000..88b1009c117c02d144d003d4b6ee39d444f9e099 --- /dev/null +++ b/data/2403.14198v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b6b82f801f833f574f3b43470716f20b0b680ccb953b1fa95483d36d28ac37b7 +size 948734 diff --git a/data/2403.14291v1.png b/data/2403.14291v1.png new file mode 100644 index 
0000000000000000000000000000000000000000..d15543fb960420b25d956551f2bb62fae45709cc --- /dev/null +++ b/data/2403.14291v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd3d2e4ab115cdf07871feef15199ccbbef9f2d3fd2207a8b88c0c552f6e6010 +size 1305862 diff --git a/data/2403.14302.png b/data/2403.14302.png new file mode 100644 index 0000000000000000000000000000000000000000..ba62263c2f92c468a48c3df7da3cbd8d67cf1700 --- /dev/null +++ b/data/2403.14302.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b2c63df02abd912118bbb9e3f27e8c09e4257f4a4685cde67bcd915cd68a3c70 +size 707158 diff --git a/data/2403.14333.png b/data/2403.14333.png new file mode 100644 index 0000000000000000000000000000000000000000..7bc4d15e8521943d952f45e5f50e52e977dcf9e0 --- /dev/null +++ b/data/2403.14333.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:09e4b81ac4ec975c6798c3ebb4816d1aed14c6a6f0927fff23cc181b016f519a +size 798103 diff --git a/data/2403.14366.png b/data/2403.14366.png new file mode 100644 index 0000000000000000000000000000000000000000..5b0906038e5f515bb631288d820d1c7a5aa731c0 --- /dev/null +++ b/data/2403.14366.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0a3df41edb2ed4abd00d9c8bdd4e7877a9b2bdcbe2fe3447ea9e0632fd949720 +size 919559 diff --git a/data/2403.14418.png b/data/2403.14418.png new file mode 100644 index 0000000000000000000000000000000000000000..81b269f70ceaecaa3f69151bcdc5c0483e1e3431 --- /dev/null +++ b/data/2403.14418.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bedc76f9b2af8ded5d55192a4df8093f97eff317ec05535448e594b3725884df +size 956109 diff --git a/data/2403.14430.png b/data/2403.14430.png new file mode 100644 index 0000000000000000000000000000000000000000..b137d797d9d4a49137829935cba5c4233b68d4d4 --- /dev/null +++ b/data/2403.14430.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aa06502047722e955faf0188b1cc461fc8a6b259c2b6c8e99fe9b49a16b4eb52 +size 909600 diff --git a/data/2403.14442.png b/data/2403.14442.png new file mode 100644 index 0000000000000000000000000000000000000000..f6df64a50346bf2475e793425a5e35e9c7163c42 --- /dev/null +++ b/data/2403.14442.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:37e7388b41ea9bc6737e029aeef77862357eff193af3b6c3a146335884d81ad1 +size 759161 diff --git a/data/2403.14497.png b/data/2403.14497.png new file mode 100644 index 0000000000000000000000000000000000000000..83f8d2fbe0780089756f635ee0f951812ec16dda --- /dev/null +++ b/data/2403.14497.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8692a04f18668758a4b824f06e6fa1ae3d2aa893576bbf0fe3af0b467118412b +size 743834 diff --git a/data/2403.14513.png b/data/2403.14513.png new file mode 100644 index 0000000000000000000000000000000000000000..892124b1281ba4e12be560b6b6ae5c6b2519d35a --- /dev/null +++ b/data/2403.14513.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:db96bbf1d6ef9ad2006b4cad12429bf7f5439c9c8572107a3ef9b593682c7d67 +size 1072203 diff --git a/data/2403.14552.png b/data/2403.14552.png new file mode 100644 index 0000000000000000000000000000000000000000..2c32fe8231550a5bda6815be573fc558cb61b0cd --- /dev/null +++ b/data/2403.14552.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1afb324a0794d8a43be6182a2966a195a106e83d36dae8d1f40621a5f440ea8c +size 785055 diff --git a/data/2403.14608.png b/data/2403.14608.png new file mode 100644 index 
0000000000000000000000000000000000000000..9ac4def2697329aba35733047fb8ee702f804777 --- /dev/null +++ b/data/2403.14608.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:54dcc38c12d28f7cbac52435bf387cbc4cb97c48339b5b734573a0ff3da663cb +size 907209 diff --git a/data/2403.14729.png b/data/2403.14729.png new file mode 100644 index 0000000000000000000000000000000000000000..8536f6dda3e2f779162817112a883f73de24b858 --- /dev/null +++ b/data/2403.14729.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1f6510c4fa5f5cd668c8a7ba97583cec967eacd5362119b72ec420f93f0aaab +size 758438 diff --git a/data/2403.14737.png b/data/2403.14737.png new file mode 100644 index 0000000000000000000000000000000000000000..74d5e73ec4bd621bf709bbec7b6e04fa83454879 --- /dev/null +++ b/data/2403.14737.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e8e297a15fe19c9bfe7dd64f41e7888a4a47f34e94598e3537eebd3b2a858002 +size 808073 diff --git a/data/2403.14852.png b/data/2403.14852.png new file mode 100644 index 0000000000000000000000000000000000000000..912e569350842f5d2e0810922da4e3dd62d9e79f --- /dev/null +++ b/data/2403.14852.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:516f0ea5a2b98de09c1b2b454b964616086818b01e5cc5302aa18a448b0a9a8b +size 730888 diff --git a/data/2403.14870.png b/data/2403.14870.png new file mode 100644 index 0000000000000000000000000000000000000000..0613d1f9d864c9d1b98cd21c3ef13a1e5d64f38e --- /dev/null +++ b/data/2403.14870.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e2f64f6ea3509b84f21c4cf881470d582fd8b4cd9ec037144b45a4f0aaf8660d +size 797805 diff --git a/data/2403.14886.png b/data/2403.14886.png new file mode 100644 index 0000000000000000000000000000000000000000..1ccc70ae24f4ab65e84e2b9902b94205b688a35e --- /dev/null +++ b/data/2403.14886.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:af76cba7da23bf919ac5dc32a5192f3ed4858cac73ebe71e46aec0677edd5c1f +size 905079 diff --git a/data/2403.15008.png b/data/2403.15008.png new file mode 100644 index 0000000000000000000000000000000000000000..1506d4fa52538a4fd896e221702ae9e2a1bc7a4b --- /dev/null +++ b/data/2403.15008.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b220c4d7d2472d7884ae3780b6dd5d84ddb1d100f717c0a32173a13704e6c7f3 +size 892578 diff --git a/data/2403.15009v1.png b/data/2403.15009v1.png new file mode 100644 index 0000000000000000000000000000000000000000..cfe0c794f78658014e1d23abe8ad7b3a400c3cb4 --- /dev/null +++ b/data/2403.15009v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e6fd7ff8f391ce1e961383bc1e469e75d7475594388a1f5cc6789cfcde2bbefb +size 429254 diff --git a/data/2403.15019.png b/data/2403.15019.png new file mode 100644 index 0000000000000000000000000000000000000000..a120a5b546e97ced6d3d033c36143399cc65be7f --- /dev/null +++ b/data/2403.15019.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e673133fc37a141a8d46d121797b2dc8bfa139e7d4b7191d24d31f3d1b8571d3 +size 856455 diff --git a/data/2403.15132.png b/data/2403.15132.png new file mode 100644 index 0000000000000000000000000000000000000000..1084a8e6a133e17cefd50844aa1ea7e534c41f99 --- /dev/null +++ b/data/2403.15132.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e61e5247563f549fcc19186dcd5b85da1befb9d78155e6bdfce59f874311a239 +size 801056 diff --git a/data/2403.15139.png b/data/2403.15139.png new file mode 100644 
index 0000000000000000000000000000000000000000..63f66382f0842711205fe718f28b6c145d01366c --- /dev/null +++ b/data/2403.15139.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:06bbf8b59d8cf34cbac93db4e6e1b19fa004e62c18f509b4c5e728bae3881b4e +size 861985 diff --git a/data/2403.15173.png b/data/2403.15173.png new file mode 100644 index 0000000000000000000000000000000000000000..f5a4bc6890d838295c995e3fcf0259423e7c4bbe --- /dev/null +++ b/data/2403.15173.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd120af707619c14af719d0e893279aa6c9f977ea98f9960dd118ea663eaa725 +size 826119 diff --git a/data/2403.15192.png b/data/2403.15192.png new file mode 100644 index 0000000000000000000000000000000000000000..645f515f82384fb7841b06603d56efab324c8cb4 --- /dev/null +++ b/data/2403.15192.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a8f7faf79712f29f3c1a53181e9c0f7305e61fdaf35d2a8b3e694ef34981c2d8 +size 753688 diff --git a/data/2403.15194.png b/data/2403.15194.png new file mode 100644 index 0000000000000000000000000000000000000000..e94415a16dbfd37badd33ed62df9fb4aa3e99da0 --- /dev/null +++ b/data/2403.15194.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:00a391b6a838129650f745a7513861372e778301ecb8171d07a9c4f12ba940db +size 839814 diff --git a/data/2403.15227.png b/data/2403.15227.png new file mode 100644 index 0000000000000000000000000000000000000000..5d2d13cd0a803d64ab49ed3e736b8d1ea77a3ff1 --- /dev/null +++ b/data/2403.15227.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d7c27b6ed8e6f763ce42b9bf4d6611837e59e4a8b18f3266cc14164a2d8ba25 +size 892533 diff --git a/data/2403.15234.png b/data/2403.15234.png new file mode 100644 index 0000000000000000000000000000000000000000..eac08dad5581dfcbdf67d0b7eda4dcc5001a43bc --- /dev/null +++ b/data/2403.15234.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd4708c6cb38efd7782950d67930a6d4d228915f38423f8247eafd1354689230 +size 866452 diff --git a/data/2403.15241.png b/data/2403.15241.png new file mode 100644 index 0000000000000000000000000000000000000000..1ae5bccd591b6329a3897ca1cbd7a8c6b14acfe5 --- /dev/null +++ b/data/2403.15241.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2768e8601329f1afb578013f3c7c0ab0f40734428f1b04f341b018b667b1cb58 +size 785907 diff --git a/data/2403.15330.png b/data/2403.15330.png new file mode 100644 index 0000000000000000000000000000000000000000..b33f6fccd29d86759a61640d40b629389ad6673f --- /dev/null +++ b/data/2403.15330.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:26eae9e5d5f55ab9caaded60a1ec8a184067b40b8b51f29a84d8e01656dc59ca +size 769723 diff --git a/data/2403.15389.png b/data/2403.15389.png new file mode 100644 index 0000000000000000000000000000000000000000..c119c43b3e746b6d124f90c3f016a03b4c178bd6 --- /dev/null +++ b/data/2403.15389.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:938ccacca60345ef0d3915459ca6e2d61789d2aefdd8c00b210219cd0b15bca5 +size 810035 diff --git a/data/2403.15605.png b/data/2403.15605.png new file mode 100644 index 0000000000000000000000000000000000000000..07574bb83f9fb5be0921c0c83e2cb1afe47665b2 --- /dev/null +++ b/data/2403.15605.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bce49f1163b41c0106da9d0f78ef3e084e64e7fdf6faec4920f7f46969024c2d +size 795574 diff --git a/data/2403.15664.png b/data/2403.15664.png new file mode 100644 
index 0000000000000000000000000000000000000000..5a1b2229c2e80dce0ca3c33d1bfbba83454d19a0 --- /dev/null +++ b/data/2403.15664.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f8e826641582bd0123f818196ad11313e17b1d5d3fe9fbee15c9d9444f40044b +size 889915 diff --git a/data/2403.15679.png b/data/2403.15679.png new file mode 100644 index 0000000000000000000000000000000000000000..6ab112236d182a5ed37909cf8eae1b8e55c8f469 --- /dev/null +++ b/data/2403.15679.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f82ff42be8430c5afcc49d245075ca4c8691a0c13a1ffb0d4b96d46b54ae6b4c +size 1089753 diff --git a/data/2403.15681.png b/data/2403.15681.png new file mode 100644 index 0000000000000000000000000000000000000000..6fa191730e6cced2d438efbd95a7a129c336f21f --- /dev/null +++ b/data/2403.15681.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eb5d91468ecbfa26adc95d91cc83cb96164ca97b3aaa2449773424d9607343d8 +size 820256 diff --git a/data/2403.15760.png b/data/2403.15760.png new file mode 100644 index 0000000000000000000000000000000000000000..d89431ea6b06a52fc7268a0efec9658fd58fa053 --- /dev/null +++ b/data/2403.15760.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8d81fa35ff5d3f2d88b2ecebf076973b97b21671b5ad67e9672a061eb69bb5f2 +size 836181 diff --git a/data/2403.15789.png b/data/2403.15789.png new file mode 100644 index 0000000000000000000000000000000000000000..50b6fcb66193158f6dc1b7af5f178750099cec7a --- /dev/null +++ b/data/2403.15789.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6688865112276ad3d16cb72a245387bc75f74fe35eeb439e1479fc82c8c69b9 +size 707764 diff --git a/data/2403.15835.png b/data/2403.15835.png new file mode 100644 index 0000000000000000000000000000000000000000..cfa034cee7572c9dbf2df341142e79a2d077412c --- /dev/null +++ b/data/2403.15835.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aeac3b5bf6ce63c48b2fc68b5271fe8f9fe4bf92632d0ffb29b312aeb11d9051 +size 776594 diff --git a/data/2403.15891.png b/data/2403.15891.png new file mode 100644 index 0000000000000000000000000000000000000000..04c91e06bf3f28a6e578cc30aa204c6ccf463b01 --- /dev/null +++ b/data/2403.15891.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9da926362203948ac25a4ed3dd5d65bb936ed7d69ca43b265fdc67d18e2df037 +size 773565 diff --git a/data/2403.16002.png b/data/2403.16002.png new file mode 100644 index 0000000000000000000000000000000000000000..152a517b4f26ccd54f060547166ec5149036c5f6 --- /dev/null +++ b/data/2403.16002.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e1430474d08c6274cfe6093822a47a0589b23b93fbd34510256417856b717d6a +size 717258 diff --git a/data/2403.16003.png b/data/2403.16003.png new file mode 100644 index 0000000000000000000000000000000000000000..8e002caa25de4f90196bef5f0c8dc0de9aab9e56 --- /dev/null +++ b/data/2403.16003.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0ed2847923b156f932765231d8fdb405dbdaa72e2a367d256e3f9e3855693480 +size 1000476 diff --git a/data/2403.16005.png b/data/2403.16005.png new file mode 100644 index 0000000000000000000000000000000000000000..c7a30c9c0a37bffe70d4aa2b2f149b307e7a9ed8 --- /dev/null +++ b/data/2403.16005.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:53dfd823e95cde0fc4b60d030fe0999eca2350363b6b8cfc92910e949ce994ed +size 808033 diff --git a/data/2403.16080.png b/data/2403.16080.png new file mode 100644 
index 0000000000000000000000000000000000000000..78b87fccd1774bc0c03fb04c1e79bb4ed744cfd6 --- /dev/null +++ b/data/2403.16080.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:265feb15b217ae38a4497f87e98cd829975bb72e519f4de2abf953afba82119e +size 1044830 diff --git a/data/2403.16124.png b/data/2403.16124.png new file mode 100644 index 0000000000000000000000000000000000000000..be215435914bb50b9b3124d77b53085cd290f34a --- /dev/null +++ b/data/2403.16124.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7396aaa3fd657028a385a36cd5e671b7ffef2c94339edb6bc352181df0f88d24 +size 769736 diff --git a/data/2403.16131.png b/data/2403.16131.png new file mode 100644 index 0000000000000000000000000000000000000000..0f2d4b8702a7c3477d5b984ad6d183b90a3be4f7 --- /dev/null +++ b/data/2403.16131.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2a32cc15a21452d631077287dae86ba0fd22e97ada527853c1b373981f7ea1b +size 825482 diff --git a/data/2403.16141.png b/data/2403.16141.png new file mode 100644 index 0000000000000000000000000000000000000000..2d648697c14f2fb7485c0f42a483a34a54babdfc --- /dev/null +++ b/data/2403.16141.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:842e79083c7e4cda4f18555a48605b5500e89d5cb51d0f270191a92660faf999 +size 1200592 diff --git a/data/2403.16143.png b/data/2403.16143.png new file mode 100644 index 0000000000000000000000000000000000000000..f8858fbb0736623e4d33f73a198509bac07f60b4 --- /dev/null +++ b/data/2403.16143.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b53de4d78be8203018faaee5f1f95d6c37704c1d1a65c0dbb1ab6459f0400ce5 +size 897084 diff --git a/data/2403.16162.png b/data/2403.16162.png new file mode 100644 index 0000000000000000000000000000000000000000..3fe7ff66ff3a311f7ef07d782f9404164787c36d --- /dev/null +++ b/data/2403.16162.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8c2f46b17b7af2385fc3c5134d12f92ac24fc00675c21f77ca369820232d7107 +size 1001571 diff --git a/data/2403.16182.png b/data/2403.16182.png new file mode 100644 index 0000000000000000000000000000000000000000..7cf22f64dd1c4a8bd0064a764e5fa2ae07ab607b --- /dev/null +++ b/data/2403.16182.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a9c49bfa9e7abf559a64237a6908c770476155176490b7d978ddc580945e7eea +size 825569 diff --git a/data/2403.16194.png b/data/2403.16194.png new file mode 100644 index 0000000000000000000000000000000000000000..62590f6598d7dec4c465cc7dee9243b772cb628b --- /dev/null +++ b/data/2403.16194.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0df670c96fdac812deb9bda5b6530cc1e38d0713a99a04e8107228acc27a0021 +size 837991 diff --git a/data/2403.16205.png b/data/2403.16205.png new file mode 100644 index 0000000000000000000000000000000000000000..63028a46dd94d6dd6fc48c86d4a3e5ae3cc08443 --- /dev/null +++ b/data/2403.16205.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e6d9d5ce745481c0d3eddbb7aecac8057e25466f3fce332b585a82d2d8488907 +size 866860 diff --git a/data/2403.16224.png b/data/2403.16224.png new file mode 100644 index 0000000000000000000000000000000000000000..7e6439b829a9ea9ece2072bf66e53b9610b887b9 --- /dev/null +++ b/data/2403.16224.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:74236bf78b28e7c383f8b47229c50ce3660b71308aca951e62f503911df260be +size 843475 diff --git a/data/2403.16258.png b/data/2403.16258.png new file mode 100644 
index 0000000000000000000000000000000000000000..8d4b5fef09040bc3fae686c7f2c87efdac62dd87 --- /dev/null +++ b/data/2403.16258.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6fd42c45de50d7dcde9555e315bd22c4be64697eecd0dd0b1c5a29e494ec50e7 +size 792563 diff --git a/data/2403.16368.png b/data/2403.16368.png new file mode 100644 index 0000000000000000000000000000000000000000..8a07b68b417aa2d2a0e54bde72c7e39418f774bb --- /dev/null +++ b/data/2403.16368.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8677b5cc0c82b1435e997c30d46eb219776c054075bef16296252b6a0237f87d +size 755966 diff --git a/data/2403.16370.png b/data/2403.16370.png new file mode 100644 index 0000000000000000000000000000000000000000..612b6edeb021f3641371d2f34da29a755d64d52c --- /dev/null +++ b/data/2403.16370.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ec90bcd8a8578a2190cb660bbfb2154478545e699e53b672b9e18ca4ddbe8890 +size 706678 diff --git a/data/2403.16376.png b/data/2403.16376.png new file mode 100644 index 0000000000000000000000000000000000000000..11884185b8d8a3b5a65ebde14e5270d4fc9eaf5d --- /dev/null +++ b/data/2403.16376.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2041d05308609ad9006ef64a4906b4b7d0755ecd64d7245e6491297542b88fa +size 864187 diff --git a/data/2403.16379.png b/data/2403.16379.png new file mode 100644 index 0000000000000000000000000000000000000000..7df2a22e321a8af579673c5e49a5e385f413f704 --- /dev/null +++ b/data/2403.16379.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4eb94b0ddc17c8e8bde06def52830a4383e1774ee04eab570bc3c13f08b89a15 +size 837721 diff --git a/data/2403.16385.png b/data/2403.16385.png new file mode 100644 index 0000000000000000000000000000000000000000..36464c3c961a9d108b001070336c844a9cf0ae95 --- /dev/null +++ b/data/2403.16385.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dba813d28043372c03c8a47bf0a440ab703be99ef1a233579c7e137b43609caf +size 798046 diff --git a/data/2403.16387.png b/data/2403.16387.png new file mode 100644 index 0000000000000000000000000000000000000000..2936c926accf7eaeb72d4b2329bf7d5111f03fd4 --- /dev/null +++ b/data/2403.16387.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:516c8823272ca66ad0d1283e4a38f0d26057018a250a760904fe14d823281382 +size 848444 diff --git a/data/2403.16398.png b/data/2403.16398.png new file mode 100644 index 0000000000000000000000000000000000000000..4decee77e7cb64c5d0041d8fe46621a24b35767a --- /dev/null +++ b/data/2403.16398.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:642ba2838ef3c9d0ddcdde8298ffd54ec390bd97d2dae79f64743e74cfd163c1 +size 781679 diff --git a/data/2403.16405.png b/data/2403.16405.png new file mode 100644 index 0000000000000000000000000000000000000000..1c133952d0122142b2933b09e9de7f49c4e847ba --- /dev/null +++ b/data/2403.16405.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ec8f472d311ddbf0fa917c3d53452c16e91270358f8917281d8e1552a3f28cd7 +size 855212 diff --git a/data/2403.16412.png b/data/2403.16412.png new file mode 100644 index 0000000000000000000000000000000000000000..3c64db8b743206326cd1e6160682912bce843a13 --- /dev/null +++ b/data/2403.16412.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:216abe125df3eb40278ba576f4b42f0bdc9b6974a35ca8c1d1b4261c672fe295 +size 838759 diff --git a/data/2403.16439v1.png b/data/2403.16439v1.png new file mode 100644 
index 0000000000000000000000000000000000000000..f3e65108100111f34ad030ea2d0b994790851537 --- /dev/null +++ b/data/2403.16439v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d7d26a14591db2bf54a60e7d935abc92504f4e724d83c4f97f29ef05496f2d6a +size 822561 diff --git a/data/2403.16440.png b/data/2403.16440.png new file mode 100644 index 0000000000000000000000000000000000000000..ef3a9e7983444c452bf6bf4ce518d51e95d1dabf --- /dev/null +++ b/data/2403.16440.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:692e82d98df4ddba32881f74cb98407b672730c62af8511ec83c34eca4c752a0 +size 703561 diff --git a/data/2403.16497.png b/data/2403.16497.png new file mode 100644 index 0000000000000000000000000000000000000000..f16eafae615e13bc6b5a84e6fa56a958311443b0 --- /dev/null +++ b/data/2403.16497.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:814f3bf7c3351ea938c2722abafc9468a78c8eb1670fe4771b064a5317ab20cc +size 461798 diff --git a/data/2403.16510.png b/data/2403.16510.png new file mode 100644 index 0000000000000000000000000000000000000000..dbf27b5c5fc757d998af3c4cff3a6e0f6ee4e085 --- /dev/null +++ b/data/2403.16510.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3de4a8907f0ab04b6e630448d94513e6de62f8579330119da9c8929421991b10 +size 1022001 diff --git a/data/2403.16605.png b/data/2403.16605.png new file mode 100644 index 0000000000000000000000000000000000000000..887b9f231d8700dddc93733ec929981b3684064a --- /dev/null +++ b/data/2403.16605.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0a75df7c0b76e1cc77555a321d9373318c7303f272da49345103645c064f461d +size 984297 diff --git a/data/2403.16643v1.png b/data/2403.16643v1.png new file mode 100644 index 0000000000000000000000000000000000000000..2cd33cfce10036e1a709aae5ccfd03641bd924e4 --- /dev/null +++ b/data/2403.16643v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1fc83ae72b580657e8fb880bd5668f26ba6b355c9cb58efc0a80a33daf207334 +size 1148226 diff --git a/data/2403.16646.png b/data/2403.16646.png new file mode 100644 index 0000000000000000000000000000000000000000..0f5b854849d8d74c7c209b4bb5fabf9a94cfab24 --- /dev/null +++ b/data/2403.16646.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:caeb07887f674acba361397a612b647dae895d1d3658687c9b298212d5667d81 +size 847769 diff --git a/data/2403.16788.png b/data/2403.16788.png new file mode 100644 index 0000000000000000000000000000000000000000..8ac35afc618d8f69ac3a90619102dd281f4670ea --- /dev/null +++ b/data/2403.16788.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e3c4568eee659b7de47db443b7affefe88f260a71863cbfad15d3f2394b64c4b +size 736651 diff --git a/data/2403.16897.png b/data/2403.16897.png new file mode 100644 index 0000000000000000000000000000000000000000..eb0b287613822ada0850d5c147d834fa7eeb9bd2 --- /dev/null +++ b/data/2403.16897.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d62668fa454614c7f7ec2c42dd325dc6decd767188f09004cc4d911137be209 +size 971939 diff --git a/data/2403.16937.png b/data/2403.16937.png new file mode 100644 index 0000000000000000000000000000000000000000..41ec214fc8983262803c2d2ef4bb13c33261c874 --- /dev/null +++ b/data/2403.16937.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:577b6721a9804f11c3e2cb9adb87bbb356c3cc0f77309a17a832ebb26cd40999 +size 770205 diff --git a/data/2403.16997.png b/data/2403.16997.png new file mode 
100644 index 0000000000000000000000000000000000000000..3c8ce82eaf9b151e6e04f98d606ff389983eae36 --- /dev/null +++ b/data/2403.16997.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a7d55b295efa5698d4ee509f6d1f82d1e0e18de7ea20ba41875ecbc815062541 +size 827848 diff --git a/data/2403.17000.png b/data/2403.17000.png new file mode 100644 index 0000000000000000000000000000000000000000..78358c987c669e8bdf0ead26b5b9df7dfa19d300 --- /dev/null +++ b/data/2403.17000.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8786b30693a90750a9c99ebaa1b8d1da2a8389b0b040466359fd77914a6d37e5 +size 1054951 diff --git a/data/2403.17001.png b/data/2403.17001.png new file mode 100644 index 0000000000000000000000000000000000000000..b949cf800334c3374ad009c554f390bdaa6cb9f3 --- /dev/null +++ b/data/2403.17001.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ef32a0db0a4b4274f71b6230cf036702aabd12197d2606fa6e957483f4b1db1b +size 1062377 diff --git a/data/2403.17004.png b/data/2403.17004.png new file mode 100644 index 0000000000000000000000000000000000000000..695db25b66fddfaff64be32695c5ea8aeb10d86f --- /dev/null +++ b/data/2403.17004.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:87583e425fcd8a0974e8b291ef983c49a8f010402046c7df49337a6d0173fc1a +size 757108 diff --git a/data/2403.17005v1.png b/data/2403.17005v1.png new file mode 100644 index 0000000000000000000000000000000000000000..6a2e040c9c9b2d2a73e3b96a17e68a97abd4ab72 --- /dev/null +++ b/data/2403.17005v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a05e60e6018122a47a107fd739bbe7c12334cdbc504827be5c58ef131f6803c +size 787558 diff --git a/data/2403.17006.png b/data/2403.17006.png new file mode 100644 index 0000000000000000000000000000000000000000..959c9b2f0296152ed5e3c23a0078401bcffb4345 --- /dev/null +++ b/data/2403.17006.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d3c191b1606f71016b5763cce3eace29bc4cdd60586ab387c84b726b947b09ee +size 927729 diff --git a/data/2403.17094.png b/data/2403.17094.png new file mode 100644 index 0000000000000000000000000000000000000000..d4fd44cc885b1c58029b90a42d232f223d6209ab --- /dev/null +++ b/data/2403.17094.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0e2daaf1e7b8d92f4d1e427a1f468e8c9b67af57929bbdeb88e880f906db20c9 +size 1199548 diff --git a/data/2403.17173.png b/data/2403.17173.png new file mode 100644 index 0000000000000000000000000000000000000000..0173c96f62c1c01bda147a94bc0f136cd8c9781e --- /dev/null +++ b/data/2403.17173.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3ca08b597d5517c8b455f5639ca7a06408b151a39016a6965f2ec3d5560c7f46 +size 723828 diff --git a/data/2403.17188.png b/data/2403.17188.png new file mode 100644 index 0000000000000000000000000000000000000000..63d0556ccf8a1016827645c7080bdcedcdb018c4 --- /dev/null +++ b/data/2403.17188.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c6fdd17a46a4c0ee7baf15845825cf6cd5e14795bc99406e915d8e53b4e1f31d +size 811014 diff --git a/data/2403.17301.png b/data/2403.17301.png new file mode 100644 index 0000000000000000000000000000000000000000..da3e63352b9ffa257260857e74afc53772a6cae8 --- /dev/null +++ b/data/2403.17301.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a9a0912a289946852dcd7b80ef477caef897dcc34ae45a4238fb57738e72fcf2 +size 1037644 diff --git a/data/2403.17334.png b/data/2403.17334.png new file 
mode 100644 index 0000000000000000000000000000000000000000..4a4379c8adaa50abdacf1427bc20f26fc3977d00 --- /dev/null +++ b/data/2403.17334.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8c3d627285d4fd60ae24b1e651ae16d0e5a6e708220e19953e21cbc26c083ab3 +size 1086798 diff --git a/data/2403.17360.png b/data/2403.17360.png new file mode 100644 index 0000000000000000000000000000000000000000..1ccbc4587a674e856a31090a388cb9ec877e624f --- /dev/null +++ b/data/2403.17360.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d36e81bdf975f0aa33b8cdacd814876856d675d93931b0f5bc0b95d43b304737 +size 1020953 diff --git a/data/2403.17373.png b/data/2403.17373.png new file mode 100644 index 0000000000000000000000000000000000000000..b926d25d82e9233ba83a01fabf7a0de8eb864e62 --- /dev/null +++ b/data/2403.17373.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:802129bb81876287627c1aaeba2e8d641caffacef4b974a5d96e301ffb0cb35c +size 751047 diff --git a/data/2403.17387.png b/data/2403.17387.png new file mode 100644 index 0000000000000000000000000000000000000000..07dea43ddfa5ea8b9298b414b8e9f4360ed112e1 --- /dev/null +++ b/data/2403.17387.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:337531236f0ae578a37cedd4ddad52952d8e805647c259fbd57b5859edb54bff +size 923988 diff --git a/data/2403.17409.png b/data/2403.17409.png new file mode 100644 index 0000000000000000000000000000000000000000..b61ae28ee0cc5579fb70f016260578223bc5af1c --- /dev/null +++ b/data/2403.17409.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f90818c2d9a8208b67f0ec5ff26bb279d0639c1a186ac30dd1d2db76ba67d5c3 +size 868116 diff --git a/data/2403.17420.png b/data/2403.17420.png new file mode 100644 index 0000000000000000000000000000000000000000..bb324177fac90468d3a040bfa463f9b678d39ee5 --- /dev/null +++ b/data/2403.17420.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a4ddc9b07481990ea7da039be8e2a77fcff2ee38592dc906c81502feef1310e2 +size 926963 diff --git a/data/2403.17422.png b/data/2403.17422.png new file mode 100644 index 0000000000000000000000000000000000000000..ba62a10692e2b87942c430ffca579d9fb142ddb1 --- /dev/null +++ b/data/2403.17422.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cff375cacbe6a14db1ff1cd2ae88fcbf52d47639e5b07bba132eac7e88cada78 +size 938597 diff --git a/data/2403.17460.png b/data/2403.17460.png new file mode 100644 index 0000000000000000000000000000000000000000..c2d19e2e802912d246a5ca58e2ef4c7f37ce654f --- /dev/null +++ b/data/2403.17460.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:618a157acbd2ad7c728aa7c56b19d4e785245bf79075fff8213fe3fbbfe76fad +size 877919 diff --git a/data/2403.17465.png b/data/2403.17465.png new file mode 100644 index 0000000000000000000000000000000000000000..66496ff17df5c7227486c56c34fe28594da554dd --- /dev/null +++ b/data/2403.17465.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:00bd99cdaf21c5d9b60d876f82e1dfce533ec638de7057c98f2a8397e53b20e2 +size 850419 diff --git a/data/2403.17496.png b/data/2403.17496.png new file mode 100644 index 0000000000000000000000000000000000000000..cf4e2b058a6445b754ce7b7d297bb34c12227a97 --- /dev/null +++ b/data/2403.17496.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c36b35b160065427c4763dc771e1b524b9b5c070e94ee44e641fe87f8d811f0e +size 1222429 diff --git a/data/2403.17502.png b/data/2403.17502.png new file 
mode 100644 index 0000000000000000000000000000000000000000..4281150e7ded74547081ae068c4a05d2ed149830 --- /dev/null +++ b/data/2403.17502.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5da429b3b99fc85ad9d3a4b892bb5861bee4a9e9d79deec90a37b194f0979f7c +size 741134 diff --git a/data/2403.17520.png b/data/2403.17520.png new file mode 100644 index 0000000000000000000000000000000000000000..555fb85ddc2fb8e4bcc6626e500ec4f172321cd6 --- /dev/null +++ b/data/2403.17520.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0ac705c993918d10c393d8265a5dcbdd5ba76f72ca68b802951e2d6e5a6898e7 +size 815998 diff --git a/data/2403.17537.png b/data/2403.17537.png new file mode 100644 index 0000000000000000000000000000000000000000..3e92a17204421c518cd1b3d2db91df3372cb07ec --- /dev/null +++ b/data/2403.17537.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f9666d98a6a32ffee638c159803076541e56926332cdd256609f20b327b3914 +size 972756 diff --git a/data/2403.17589.png b/data/2403.17589.png new file mode 100644 index 0000000000000000000000000000000000000000..3ee318526d998c5735b6a72609a597881744a4c8 --- /dev/null +++ b/data/2403.17589.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b130ed3c4b6d2c8759835659537390550bfeca6572373361e5e4d117a3ef760b +size 713231 diff --git a/data/2403.17601.png b/data/2403.17601.png new file mode 100644 index 0000000000000000000000000000000000000000..27f03a96293d2db662ad8df0104de894ec78801b --- /dev/null +++ b/data/2403.17601.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a7cac5fa1d060d8e030e2fc1a1b27bcbd8c06a3606342864a9a52b94adab4e35 +size 784010 diff --git a/data/2403.17610.png b/data/2403.17610.png new file mode 100644 index 0000000000000000000000000000000000000000..0576a504979caeb7e8939860f422ba07466ee512 --- /dev/null +++ b/data/2403.17610.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc93905751673450867c6464d7225783ca596707d1deb3a660af01d7593ce5a4 +size 923401 diff --git a/data/2403.17638v1.png b/data/2403.17638v1.png new file mode 100644 index 0000000000000000000000000000000000000000..5418f06f79f00339b447e0a496d20992143135e8 --- /dev/null +++ b/data/2403.17638v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5abef91131d36564aab77dc9c222c0564019e6483fe67b55588d8ed8841c4e13 +size 946656 diff --git a/data/2403.17709.png b/data/2403.17709.png new file mode 100644 index 0000000000000000000000000000000000000000..554f9262b8abe8c540172099b2a321f095b6af19 --- /dev/null +++ b/data/2403.17709.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8e55027a8aca6c2cc1c576f8f8cc6ba0f1c6c5d783cd471734482f98b40713fa +size 780492 diff --git a/data/2403.17719.png b/data/2403.17719.png new file mode 100644 index 0000000000000000000000000000000000000000..738f0fd394e0367841ed3f45bb8204b30f8357cd --- /dev/null +++ b/data/2403.17719.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5da9aa90b36998054c395a9a5a830123882ae057a62f227d185d3d5dd75e29f9 +size 806963 diff --git a/data/2403.17742.png b/data/2403.17742.png new file mode 100644 index 0000000000000000000000000000000000000000..2879e8956f3fe9c42494a231f5f5da98c5f0c3e5 --- /dev/null +++ b/data/2403.17742.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fcf57f2b65341cb2f985be12a30fa589713a192ae4a1b666d5c27b1c1dac5155 +size 812883 diff --git a/data/2403.17749.png b/data/2403.17749.png new 
file mode 100644 index 0000000000000000000000000000000000000000..5a4798bdcafc34a02d45f5a26c8d8c9fd8600810 --- /dev/null +++ b/data/2403.17749.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a41dd52076d9986029f38b067865aa7c3d03335c5645221ba7bc5d9599d8d156 +size 806368 diff --git a/data/2403.17761.png b/data/2403.17761.png new file mode 100644 index 0000000000000000000000000000000000000000..1defc6fbf583083f6bcc869ac9bd1d3b308c5493 --- /dev/null +++ b/data/2403.17761.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c21c6a7f808e97fa3daa3f6777d6bc1f90f43e9f0684d0e6a81be8bd58751c70 +size 1056153 diff --git a/data/2403.17782.png b/data/2403.17782.png new file mode 100644 index 0000000000000000000000000000000000000000..b0e35eea0281988285efe0c8b259ba819e05faf4 --- /dev/null +++ b/data/2403.17782.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d81b32a3ffd5ca40eb0d78502f0052dbbf66ec3b3b59025729471d51a0a7d279 +size 949343 diff --git a/data/2403.17801.png b/data/2403.17801.png new file mode 100644 index 0000000000000000000000000000000000000000..7686b4dddefc800ff50bee840a0bbd57a0884e07 --- /dev/null +++ b/data/2403.17801.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2413dcdef2c3e8d3f2fbaad117688469b18c00561a31f5ffae88c6a92864dc16 +size 890402 diff --git a/data/2403.17870.png b/data/2403.17870.png new file mode 100644 index 0000000000000000000000000000000000000000..3c20c930a0de6b80cca841e823f6f0da098ea643 --- /dev/null +++ b/data/2403.17870.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0137319d20957bf4cfb68465b6d1c5253515df9afcd197a890a26f62fcf0dfa2 +size 752804 diff --git a/data/2403.17879v1.png b/data/2403.17879v1.png new file mode 100644 index 0000000000000000000000000000000000000000..972e130813ae1f5440815151e530e762d180307e --- /dev/null +++ b/data/2403.17879v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5c885b646c40018c84ca3f87dfb051b499c5f43e70a83aa87fd053c85b82b8cb +size 701713 diff --git a/data/2403.17934.png b/data/2403.17934.png new file mode 100644 index 0000000000000000000000000000000000000000..8d006a87bd1e7758b840ed058d6c075ba0276361 --- /dev/null +++ b/data/2403.17934.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f1f4285e4d5d7d9e83b343b9039df4d68a2c8cd73350394abd2b095012f109af +size 883478 diff --git a/data/2403.17935.png b/data/2403.17935.png new file mode 100644 index 0000000000000000000000000000000000000000..6f52ca3684b83e603f1776cf7dd05eb57f1d3914 --- /dev/null +++ b/data/2403.17935.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:89fc5916a7612cabdb7342f0a6e1de5676c279636a6dce6e1eff44c10089817c +size 841175 diff --git a/data/2403.17936.png b/data/2403.17936.png new file mode 100644 index 0000000000000000000000000000000000000000..4587e273e1713271c268f42278b9d8d534270b07 --- /dev/null +++ b/data/2403.17936.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd8fdc3ce440fe02a8d929d06ee8b1e29c88763b28c86a979a5eab6062ab0814 +size 921097 diff --git a/data/2403.17998.png b/data/2403.17998.png new file mode 100644 index 0000000000000000000000000000000000000000..74d125af31cc0858e95a4b64a1d0cbc0b4d9710a --- /dev/null +++ b/data/2403.17998.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:75c4618da6d47e9d7657ab6a44df4e19a197b083aa8e854f14a24a6b0416b67e +size 939995 diff --git a/data/2403.18036.png b/data/2403.18036.png 
new file mode 100644 index 0000000000000000000000000000000000000000..37f7ba1c50159f3cfa06d8b8bbe83703e0cab410 --- /dev/null +++ b/data/2403.18036.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2bbd573ad3828d1ae30f3a67d222c05f8c561ce034e1be5eb863f463cfb66338 +size 1651488 diff --git a/data/2403.18092.png b/data/2403.18092.png new file mode 100644 index 0000000000000000000000000000000000000000..5a15f19ccc4c66567c3fc54c901d457b2d96670f --- /dev/null +++ b/data/2403.18092.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a62eae998c2168ff948be82d9162cb06f19c532553329a6b234ed36fe63e6722 +size 950739 diff --git a/data/2403.18144.png b/data/2403.18144.png new file mode 100644 index 0000000000000000000000000000000000000000..2c2ef96b14cc34319ba97d095b931ba2a375abfe --- /dev/null +++ b/data/2403.18144.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d331602540d382f4b79e2f9cb97ae35d14ea2b2ebc264aad2f3112833b3ccf5 +size 704218 diff --git a/data/2403.18186.png b/data/2403.18186.png new file mode 100644 index 0000000000000000000000000000000000000000..35a79b6df6af874400646c5d1c63ed2164897815 --- /dev/null +++ b/data/2403.18186.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:72b147188f4d59f8ae4a9a965fe8149b24cae1873390d9354050cfdd48828d0e +size 1138554 diff --git a/data/2403.18271.png b/data/2403.18271.png new file mode 100644 index 0000000000000000000000000000000000000000..c865a220f70fabf6bda18783ab5a70dd3a9e83b0 --- /dev/null +++ b/data/2403.18271.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:233362d4ff29b5ad52ed7ac7d37ef2300c57062a2c05efcdf4c6c6f427c0a2b5 +size 702266 diff --git a/data/2403.18293.png b/data/2403.18293.png new file mode 100644 index 0000000000000000000000000000000000000000..17e2a4c2d792add24689b7544bd9fe274b4ba0c9 --- /dev/null +++ b/data/2403.18293.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4920648a7b304c70d210aba1ad97f9fde576e3d66e0997f576d44e9973f03dd1 +size 721780 diff --git a/data/2403.18342.png b/data/2403.18342.png new file mode 100644 index 0000000000000000000000000000000000000000..3ec5b864188ffac7ea1a1c231fcf64166a1dafd0 --- /dev/null +++ b/data/2403.18342.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b662a2cad8ca0c614de65b6dd8c90b8c4ad9496ddd487f629e1cd03f8bc74804 +size 1383457 diff --git a/data/2403.18356.png b/data/2403.18356.png new file mode 100644 index 0000000000000000000000000000000000000000..c01efb27d4c0eee2dcf5adac16a3e0ab53cd4aa8 --- /dev/null +++ b/data/2403.18356.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dc784154b6933d07828864244032571eec6438a944f27b5b1e76106070844fe6 +size 1045733 diff --git a/data/2403.18360.png b/data/2403.18360.png new file mode 100644 index 0000000000000000000000000000000000000000..e2cdf72fb0c460ad93bac8a14ae8d3a890392c80 --- /dev/null +++ b/data/2403.18360.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ab10547286dd3e625a19255fa7b311c090684be3313e64c404a09bc1b1c8e872 +size 728484 diff --git a/data/2403.18383.png b/data/2403.18383.png new file mode 100644 index 0000000000000000000000000000000000000000..f9bf94580c82f4dc2a8c0ea045ddd3fec1a51cd2 --- /dev/null +++ b/data/2403.18383.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:67d84c72e5bfad23fa552e165e8b81cd63785927df25714a13ee5972a4b45ce7 +size 756326 diff --git a/data/2403.18442.png b/data/2403.18442.png 
new file mode 100644 index 0000000000000000000000000000000000000000..48c32916b26775c14f59d364cb61784c2577afb4 --- /dev/null +++ b/data/2403.18442.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eae694021cbe6816a2fd9eac74c8def97afd0622c29a48328326bf45d34faac5 +size 733771 diff --git a/data/2403.18447.png b/data/2403.18447.png new file mode 100644 index 0000000000000000000000000000000000000000..90e43e7500451114220bdb59fe78e62f73aa167b --- /dev/null +++ b/data/2403.18447.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8172f741c03ae5f76392f33d4b7093908390cfed6aa585b7403505e64c6ace3a +size 854766 diff --git a/data/2403.18452v1.png b/data/2403.18452v1.png new file mode 100644 index 0000000000000000000000000000000000000000..78cf552fd55f3e641b7980d1dc9d9d1bb8538046 --- /dev/null +++ b/data/2403.18452v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c17ad0d7acb7c54fe666621c0870ece344ebba899bf212d88a3e62d11bc6840 +size 878824 diff --git a/data/2403.18469.png b/data/2403.18469.png new file mode 100644 index 0000000000000000000000000000000000000000..0d95378d9c3652b504db421b21f6ebd3fbf487bf --- /dev/null +++ b/data/2403.18469.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:554602d9a604cae70efa0abbedcb837d505379267524ebaf8d938fbbd9544524 +size 1096359 diff --git a/data/2403.18548.png b/data/2403.18548.png new file mode 100644 index 0000000000000000000000000000000000000000..8c58106be155f51918179daef6593cd9a3637451 --- /dev/null +++ b/data/2403.18548.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ebd9d628bd323e6f3e5b0a4d3fce8e2dd10d833c4cbc28d1f3fd9b7c2ea74997 +size 985156 diff --git a/data/2403.18550.png b/data/2403.18550.png new file mode 100644 index 0000000000000000000000000000000000000000..728e4c31ace14daddd42e164df27c4073d7956b9 --- /dev/null +++ b/data/2403.18550.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f7d76cda1419e69af2f50bcea12559e68ed916fdc7b9a53992b787cd5d150743 +size 778380 diff --git a/data/2403.18551.png b/data/2403.18551.png new file mode 100644 index 0000000000000000000000000000000000000000..ee078a8d80c112e7fa66b4ef2376e549e817ccfd --- /dev/null +++ b/data/2403.18551.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:70b4bdeb56f63732b0d91e1ca83d0061290780160f904afd0be83dc0620c7e4b +size 1110117 diff --git a/data/2403.18554.png b/data/2403.18554.png new file mode 100644 index 0000000000000000000000000000000000000000..ddd54a456626aa062e2fd48146ab1f6e0fb0bfc3 --- /dev/null +++ b/data/2403.18554.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0a0f11f8591c0272342e397ba0a3622bfc2f6325ea3937eefb39f92260100586 +size 853330 diff --git a/data/2403.18575.png b/data/2403.18575.png new file mode 100644 index 0000000000000000000000000000000000000000..9f7358a9f2290c66fdadd38afa5e91cdaa6a354d --- /dev/null +++ b/data/2403.18575.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1d7b741c72c98b51bcc334fca358f1892b75b29bcb1cf0a812c1ba4c1ef5f010 +size 753540 diff --git a/data/2403.18708.png b/data/2403.18708.png new file mode 100644 index 0000000000000000000000000000000000000000..534e6240f2e551182d86e1d3572abc8f56abd586 --- /dev/null +++ b/data/2403.18708.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f6ff8cc67c82020ab121bee42e575429599b1dde94123c1de164fe7dfc3928f +size 731448 diff --git a/data/2403.18775.png 
b/data/2403.18775.png new file mode 100644 index 0000000000000000000000000000000000000000..f079a584fa41803f82fd4d31ac65ccdffa45e4b6 --- /dev/null +++ b/data/2403.18775.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:00b5f70fc43876b5a613278c8d570a76d0ac01c214ce4af16b6eb1c1610bfc3c +size 982815 diff --git a/data/2403.18791.png b/data/2403.18791.png new file mode 100644 index 0000000000000000000000000000000000000000..83ad75349567b24ea45ad687c49efcc6e1e2c8ad --- /dev/null +++ b/data/2403.18791.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a00d5a70f321eb338d63a72bcbbb6f3bcb1bff3af3d378861d6ecb175b92b32 +size 785366 diff --git a/data/2403.18807.png b/data/2403.18807.png new file mode 100644 index 0000000000000000000000000000000000000000..8252756e2e02de1f647a0c15e5e5ec937902e9c7 --- /dev/null +++ b/data/2403.18807.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1d777091dbd95e64ac6ea010f08c8d7f096029164c3b82d7927062ca288d5da2 +size 968908 diff --git a/data/2403.18913.png b/data/2403.18913.png new file mode 100644 index 0000000000000000000000000000000000000000..fd5fa339370cf43ede8f8b6e26e38ae5482d275f --- /dev/null +++ b/data/2403.18913.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:38e028cbea19f55e7fabb242f05762e02d62d713b90387a6c5cef085e7a85dd5 +size 766992 diff --git a/data/2403.18920.png b/data/2403.18920.png new file mode 100644 index 0000000000000000000000000000000000000000..5bee2f841d85146568bd2e59191429fbb75df464 --- /dev/null +++ b/data/2403.18920.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a3e20989e29fc37c0d5487aec0a912000d2e5922a2abbf49cfd5bb8906c77c64 +size 821584 diff --git a/data/2403.18922.png b/data/2403.18922.png new file mode 100644 index 0000000000000000000000000000000000000000..777514bc84952700d542eab2fb3f335df7bc0678 --- /dev/null +++ b/data/2403.18922.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a3813d6128c1f09a601486277fa3309f42b2098b32838b8eeb2de9b6274e12f4 +size 1112531 diff --git a/data/2403.18978.png b/data/2403.18978.png new file mode 100644 index 0000000000000000000000000000000000000000..cda5f16c5444dad0cd76fb1f467d1869bf351b3b --- /dev/null +++ b/data/2403.18978.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4521011a7d78e66f40d5c741d160e4439d9627186c40677ac83ffa223cb74537 +size 1599677 diff --git a/data/2403.19022.png b/data/2403.19022.png new file mode 100644 index 0000000000000000000000000000000000000000..26f5936f364ab87d4eb41ab8317a8923eb4a4b4a --- /dev/null +++ b/data/2403.19022.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d45114fac7007fa53a5acd057d99cd26868da1cfbf7d03d84b5bf2cb7a0d1784 +size 1058586 diff --git a/data/2403.19066.png b/data/2403.19066.png new file mode 100644 index 0000000000000000000000000000000000000000..31f1647e95adf69b1aca8772c3e3ae8cfb295c5e --- /dev/null +++ b/data/2403.19066.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8ddfa631848258736279d2ec0bfa171f0ef405ae6281d7b1e8267ba2dfbfb843 +size 1127031 diff --git a/data/2403.19067.png b/data/2403.19067.png new file mode 100644 index 0000000000000000000000000000000000000000..7f11755ffdccc0fdafa388d1ca64a375ca27d118 --- /dev/null +++ b/data/2403.19067.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c5d056b74849fb33b2a4298e2d0fbb58046f0f670ed22fe4c3f85b368f45d252 +size 810485 diff --git a/data/2403.19080.png 
b/data/2403.19080.png new file mode 100644 index 0000000000000000000000000000000000000000..4890054d41edc7dbbb09783e29d3176e0b499163 --- /dev/null +++ b/data/2403.19080.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1e86142b1ff7a8b8d1bf4212a431fe8034ff7c20a6f2b02e4fcc50b30f798870 +size 823725 diff --git a/data/2403.19104.png b/data/2403.19104.png new file mode 100644 index 0000000000000000000000000000000000000000..a65a55f0f9397440de26d3d253f9c5008355aeb8 --- /dev/null +++ b/data/2403.19104.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f6d3aa2c1a7a31abaded353dde5e4b25187f7118316323c802074d1c22eba0ca +size 789633 diff --git a/data/2403.19128.png b/data/2403.19128.png new file mode 100644 index 0000000000000000000000000000000000000000..689fe7f7e2a9fa917fe06efde8b69ab524d1441b --- /dev/null +++ b/data/2403.19128.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8284c15821142bf8b155f7a283ea220bf5a0b0ac4c2e3ce2d294ecce197336fe +size 783729 diff --git a/data/2403.19164.png b/data/2403.19164.png new file mode 100644 index 0000000000000000000000000000000000000000..c516676268ee5d00abd852541d67bfb3c353d737 --- /dev/null +++ b/data/2403.19164.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a309efcb64f8230d503f7148326b1f5395e333160913f74743829f7c6ed1e0d4 +size 1186110 diff --git a/data/2403.19205.png b/data/2403.19205.png new file mode 100644 index 0000000000000000000000000000000000000000..d33ce72971317acb019a0c0cde6caeb236c674f6 --- /dev/null +++ b/data/2403.19205.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:87506c83a3dff28e2664da3813904415eb94d0c727729c8ad14b6d3288e81b22 +size 714474 diff --git a/data/2403.19220.png b/data/2403.19220.png new file mode 100644 index 0000000000000000000000000000000000000000..17fd523e97c4a9a5bb1b88270fc72ac02fb6f93a --- /dev/null +++ b/data/2403.19220.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e535269e3cda60c1097bdaf8d877a0cd87aa18512b15137822cbdb7f79246884 +size 949001 diff --git a/data/2403.19225.png b/data/2403.19225.png new file mode 100644 index 0000000000000000000000000000000000000000..9ce3c465f697c754283d2425619e69f468d928d0 --- /dev/null +++ b/data/2403.19225.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:febab0ea5f2e4e70b38bd7aed21cf50aa138484eea40b08bff0e63ce07b447a6 +size 276479 diff --git a/data/2403.19232.png b/data/2403.19232.png new file mode 100644 index 0000000000000000000000000000000000000000..239eecd58cf37f3a50c0b9954e4faf446fd13a13 --- /dev/null +++ b/data/2403.19232.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e6a14f88ed54b2fc0cc4d534d50e4f9fcd69412415a0fd85fb3a90acd3db7ec8 +size 724714 diff --git a/data/2403.19235.png b/data/2403.19235.png new file mode 100644 index 0000000000000000000000000000000000000000..4ad564f2fd75017ddf859eab395a44eb43ee76e5 --- /dev/null +++ b/data/2403.19235.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c46e279240e5757e440f969aab92ac868e004e02633af4dc2620c359ababb25 +size 960729 diff --git a/data/2403.19242.png b/data/2403.19242.png new file mode 100644 index 0000000000000000000000000000000000000000..b5dc87320c3f9ec7a982c5698fd0944d381a983a --- /dev/null +++ b/data/2403.19242.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ed49bc63f52cb81966a7a330211d7d396e1ee9ac08c1077b18dd4c036d0bd3d2 +size 1109260 diff --git a/data/2403.19278v1.png 
b/data/2403.19278v1.png new file mode 100644 index 0000000000000000000000000000000000000000..c39f9168339f52a95b1d8f3f492de7b31f8a7dee --- /dev/null +++ b/data/2403.19278v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f63995e987738a4f3eeae8fe19fd87bf9194b6668c1e4d58dd1d5dc9f84c4278 +size 1004344 diff --git a/data/2403.19314.png b/data/2403.19314.png new file mode 100644 index 0000000000000000000000000000000000000000..533fc11032d1a26bc18059e7911d688dee254e3d --- /dev/null +++ b/data/2403.19314.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:64d3a4efeb3dd1654b431bd9b7702f9cef471e19c3baa0965750c02f145f09b6 +size 969071 diff --git a/data/2403.19326.png b/data/2403.19326.png new file mode 100644 index 0000000000000000000000000000000000000000..773a8c67d11568e9a479fcd57b76ebd6d6b42ce6 --- /dev/null +++ b/data/2403.19326.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:47c1ba769b09c1880ef4ec17180b7635cf08c926f27f694bfbf3814d738381a0 +size 813627 diff --git a/data/2403.19334.png b/data/2403.19334.png new file mode 100644 index 0000000000000000000000000000000000000000..fc68b4f384fe868396e50377e3ce572d8e280761 --- /dev/null +++ b/data/2403.19334.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b70bedf9d6cac8d9acaa5720e3af9f7f3d4ff18211d7da61cac671cbb3b37945 +size 816738 diff --git a/data/2403.19366.png b/data/2403.19366.png new file mode 100644 index 0000000000000000000000000000000000000000..b97614590f6859cf4bbd6f7d3747bf2910fbbdfa --- /dev/null +++ b/data/2403.19366.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:de50867062adc5001d4833ec22c553849539f6308bda6e6cc3c8875d31a78df4 +size 720654 diff --git a/data/2403.19412.png b/data/2403.19412.png new file mode 100644 index 0000000000000000000000000000000000000000..45475e7f7bf0bbe05ed59d51fd78d5348cc8c643 --- /dev/null +++ b/data/2403.19412.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fb426a14ee71659d59f560678da9ed84bc1e5b902b99254aa53694b3ff71c2a8 +size 765085 diff --git a/data/2403.19473.png b/data/2403.19473.png new file mode 100644 index 0000000000000000000000000000000000000000..b2447a7ced45601588e3a31500107ca939b4be25 --- /dev/null +++ b/data/2403.19473.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61fddfe7193a50c019394677c1d3402d70af96593a3cb224c7b8b1e156257978 +size 873345 diff --git a/data/2403.19474.png b/data/2403.19474.png new file mode 100644 index 0000000000000000000000000000000000000000..e7cbe51b54b529420ca5dae6fc407025b1c12c26 --- /dev/null +++ b/data/2403.19474.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b2d0c4c62367278a1111df8ba4e52642106e06ad1cca06ab080e4c4f02f65b8 +size 933699 diff --git a/data/2403.19490.png b/data/2403.19490.png new file mode 100644 index 0000000000000000000000000000000000000000..ebe41c4b3bd33806174310e54b65ce4f86226814 --- /dev/null +++ b/data/2403.19490.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5ae66773594b26e0218221a7142a07c8e874e12bfc64c055a1d899dc12459ae3 +size 813456 diff --git a/data/2403.19501.png b/data/2403.19501.png new file mode 100644 index 0000000000000000000000000000000000000000..bcbb2046a71111581036b2e0f38ba1f084c8cc18 --- /dev/null +++ b/data/2403.19501.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb5ec60c1ca304b444ea692083a773815544e1a65aeef2f3a1ad5aa80ab6a925 +size 1476772 diff --git 
a/data/2403.19517.png b/data/2403.19517.png new file mode 100644 index 0000000000000000000000000000000000000000..51ea40e1c622768ed7dcc0a420089ee2e43321ec --- /dev/null +++ b/data/2403.19517.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ff9c81d5d945442ebf52c432012214b8ea82c0baf914ab310f11af0a04303445 +size 1490122 diff --git a/data/2403.19527.png b/data/2403.19527.png new file mode 100644 index 0000000000000000000000000000000000000000..00d095a2bc684e4523ca477eec96103a9435b9df --- /dev/null +++ b/data/2403.19527.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:884f34a6144d5a17b7fcf3fdaf7ea3cc8323b5c7c0621d2e28e940c57a0db4b3 +size 940614 diff --git a/data/2403.19539.png b/data/2403.19539.png new file mode 100644 index 0000000000000000000000000000000000000000..1c1c93e5e3a8ce2b9f08a254d17b0958ed56c7b7 --- /dev/null +++ b/data/2403.19539.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1e110e4cf67d6c7a2f7ab9e5a04e5c38dc859012f70cc801cd36fb1cc7eacd6 +size 948086 diff --git a/data/2403.19600.png b/data/2403.19600.png new file mode 100644 index 0000000000000000000000000000000000000000..08502aae82d8f690d49d7f983862df6802849669 --- /dev/null +++ b/data/2403.19600.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:229b037b4f43a7869bc88ec73c7e6a44f05e750ccdf85c5e0678442c3ee11294 +size 887729 diff --git a/data/2403.19780.png b/data/2403.19780.png new file mode 100644 index 0000000000000000000000000000000000000000..0fbe261afa33ffc2b82f7b751c25e2ef0bc7f175 --- /dev/null +++ b/data/2403.19780.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a77d8e9d325279f38b6855aa16e022aa4d954afd46a71669512c830470521a34 +size 1058555 diff --git a/data/2403.19811.png b/data/2403.19811.png new file mode 100644 index 0000000000000000000000000000000000000000..224c08eb1946712b27f083b92e01f6c3c6c77dd7 --- /dev/null +++ b/data/2403.19811.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4642fb5b9ce362a6f80252862568a2809bb2b8863e8d781989e46a926b00063a +size 762915 diff --git a/data/2403.19898.png b/data/2403.19898.png new file mode 100644 index 0000000000000000000000000000000000000000..3bbfe8a5b893e20013bfa12b28fba23b877257d1 --- /dev/null +++ b/data/2403.19898.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b8e11fa6ca985bb666e1f0824596faec1dc9631e49966364535cd2e07b7a0ce8 +size 870581 diff --git a/data/2403.19904.png b/data/2403.19904.png new file mode 100644 index 0000000000000000000000000000000000000000..cc0aec7801c60c69b00c1a1978e3fbf3f693961e --- /dev/null +++ b/data/2403.19904.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aff1d1310f227b2042fdf40fd65f33a10004bad7300d2082d7f23ee46813a1d0 +size 884600 diff --git a/data/2403.19926.png b/data/2403.19926.png new file mode 100644 index 0000000000000000000000000000000000000000..c63ee30372af80e97144c18e81a44c8249a13082 --- /dev/null +++ b/data/2403.19926.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bed8aff421c75a204e95b84d734ee158b43b6a92e460a3c4ddf54bddbe91ccb8 +size 893218 diff --git a/data/2403.19944.png b/data/2403.19944.png new file mode 100644 index 0000000000000000000000000000000000000000..336bf885c627b256bf8e4c793a3ba772ce87d3f1 --- /dev/null +++ b/data/2403.19944.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c249eedf4a6220f773198a594a5e635af016847a1e17da1315e62acbbb5dd08e +size 744955 diff --git 
a/data/2403.19949.png b/data/2403.19949.png new file mode 100644 index 0000000000000000000000000000000000000000..a85b9a0a673dd5e091cc5823b839682ceb31cc28 --- /dev/null +++ b/data/2403.19949.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:84b555400f099d9f89314ec67d3b19014bb1dac891e3625852a6e8db41c22c9e +size 773229 diff --git a/data/2403.19964.png b/data/2403.19964.png new file mode 100644 index 0000000000000000000000000000000000000000..3d1a5380d682c0605b79459a183a3d11d7a97d2e --- /dev/null +++ b/data/2403.19964.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:62b34f900073af72ac8edfe353227d02a490e15278748a7e860880926fbca706 +size 1030976 diff --git a/data/2403.19967.png b/data/2403.19967.png new file mode 100644 index 0000000000000000000000000000000000000000..a4ab21bfa75d9c26c21b53c467aed61c80627b83 --- /dev/null +++ b/data/2403.19967.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:355340faa0ce6c8a60fcc150a1c683cb4657c2d082314bc06da27895cd339040 +size 749665 diff --git a/data/2403.19975.png b/data/2403.19975.png new file mode 100644 index 0000000000000000000000000000000000000000..6e64ba054ee4ac279c86afb6b35079c4c8ca8a06 --- /dev/null +++ b/data/2403.19975.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b9f12fb0a2828e92a33f945e5d2f81fc9d9871859b2c910169e3f4ad4f666b12 +size 877503 diff --git a/data/2403.19976.png b/data/2403.19976.png new file mode 100644 index 0000000000000000000000000000000000000000..982714682b8ae1821c76794684563332dd3e1b61 --- /dev/null +++ b/data/2403.19976.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:97ba9ed115d62a8fc63ad8186ecdf5f1197d8c6237460cd9f3c6d55e91399c2d +size 1152370 diff --git a/data/2403.19979.png b/data/2403.19979.png new file mode 100644 index 0000000000000000000000000000000000000000..402b34cae023ee0f307061ad0acfb0c7fd0ced40 --- /dev/null +++ b/data/2403.19979.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2879ab7e45d30aceb4a027e37ea117101c166c3c6f2823c6728a23cadc49f53 +size 820563 diff --git a/data/2403.20002.png b/data/2403.20002.png new file mode 100644 index 0000000000000000000000000000000000000000..e7b7500e0c8c96e7e3994255b69826bab515c3fc --- /dev/null +++ b/data/2403.20002.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:45e03ed32706423a9e6fe8667b36cafe70e3cd1ddfdfb91a97453f07ed61b4d0 +size 785232 diff --git a/data/2403.20018.png b/data/2403.20018.png new file mode 100644 index 0000000000000000000000000000000000000000..9fa4fd909e3c518854ca6dce67f9da675419a36f --- /dev/null +++ b/data/2403.20018.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0fe1a9fd1caf6a2a2ecab0d75ed8eac94bb0f87590bf5349ba5ced6a0b53ac54 +size 1123880 diff --git a/data/2403.20022.png b/data/2403.20022.png new file mode 100644 index 0000000000000000000000000000000000000000..1cac2cc5376e6f9b41cbafc9356b515b12dc799f --- /dev/null +++ b/data/2403.20022.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aab994af96601904311742ed4076c728c803d674b49ff2a30bd74deaf9f6b6d7 +size 1167356 diff --git a/data/2403.20031.png b/data/2403.20031.png new file mode 100644 index 0000000000000000000000000000000000000000..5f9273fb673ea827392d0cbc731bda9f76c1abc5 --- /dev/null +++ b/data/2403.20031.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bf7533afb5aff383438e0435a16b0770e9a49e3f6432498ca5dbe73d3dd61ecf +size 877448 diff --git 
a/data/2403.20126.png b/data/2403.20126.png new file mode 100644 index 0000000000000000000000000000000000000000..360dff667bdd202cfd4ed7fbb6d7537b7fdafe77 --- /dev/null +++ b/data/2403.20126.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3f5e4f4b9d3ebf4e126d367bb16f9e064faa62fa6e1305c285f983db89335b7 +size 821165 diff --git a/data/2403.20142.png b/data/2403.20142.png new file mode 100644 index 0000000000000000000000000000000000000000..49fa7ca073a2e5637fac7848721b83a14b1afd31 --- /dev/null +++ b/data/2403.20142.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:40d786057098d880b49debd1329a4b50adc476d761286acadd25f5dae9388380 +size 894123 diff --git a/data/2403.20225.png b/data/2403.20225.png new file mode 100644 index 0000000000000000000000000000000000000000..5269b5c56350f57f83e0f37a1f7305d1c00a9af7 --- /dev/null +++ b/data/2403.20225.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8cfa6fabbe3a1656e91771bc2a0eabcdba39176eed36f6e6bbf805db4e9ad3c1 +size 1360584 diff --git a/data/2403.20231.png b/data/2403.20231.png new file mode 100644 index 0000000000000000000000000000000000000000..756ed35d04bc522eff1946cadd695864eca5807d --- /dev/null +++ b/data/2403.20231.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a08c760e742b0774b1686c82252077ead8063e7a451b0dfb12075e48a7c0eaf5 +size 1296755 diff --git a/data/2403.20236.png b/data/2403.20236.png new file mode 100644 index 0000000000000000000000000000000000000000..b42c3310dd4fe8ae44fd52253d365f6c1ea0c1d3 --- /dev/null +++ b/data/2403.20236.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d4e6110257d32988187008ce71848bc6fccb2bc0b3710f943dec55b4c68a7eb +size 937132 diff --git a/data/2403.20249.png b/data/2403.20249.png new file mode 100644 index 0000000000000000000000000000000000000000..055b302bed8eeaa481fad0ba0ba4ed04ecbfe461 --- /dev/null +++ b/data/2403.20249.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd7aa82b289135e0fa938bd6cd3cecaa56c8b3695c0626525ca00bb20169a1f8 +size 1142293 diff --git a/data/2403.20254.png b/data/2403.20254.png new file mode 100644 index 0000000000000000000000000000000000000000..c2d45f2acae48553684e2d593c59a68e80662643 --- /dev/null +++ b/data/2403.20254.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a3a4bbf9c0026a40c8e4aa98c8152116717e0c2238b0f2b46712c25e242dcf1d +size 791818 diff --git a/data/2403.20317.png b/data/2403.20317.png new file mode 100644 index 0000000000000000000000000000000000000000..90e3de2928a92cb46f7266b3bc54d17fe89b5307 --- /dev/null +++ b/data/2403.20317.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:91a18a08982193f3e0dcaa78b325893e628c4eb545e0fd025eae38dcf0d98740 +size 849810 diff --git a/data/2403.20318.png b/data/2403.20318.png new file mode 100644 index 0000000000000000000000000000000000000000..0b9e042c70410372e7acd89734f513c093f56a57 --- /dev/null +++ b/data/2403.20318.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a34b30aeb8148a40bcb8e2ccf04cb5192fe93fe916ed706dadfae7c6c1f6831d +size 755071 diff --git a/data/2403.20320.png b/data/2403.20320.png new file mode 100644 index 0000000000000000000000000000000000000000..bb9d061e9687736ac1f14c24e995b27327d64d08 --- /dev/null +++ b/data/2403.20320.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c194b51d7e9d78e2705c27b81e6a2ab2559adfd37026f3debb11f7681361a22f +size 727013 diff --git 
a/data/2404.00095.png b/data/2404.00095.png new file mode 100644 index 0000000000000000000000000000000000000000..25a1c1d597af57d8d0fecaaaf74dd2a94883f29f --- /dev/null +++ b/data/2404.00095.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1a060013f5e2f871cc3a4b48394c236a7f67004c2b7e4fa8d47ecc6b71b2b97 +size 1090533 diff --git a/data/2404.00098.png b/data/2404.00098.png new file mode 100644 index 0000000000000000000000000000000000000000..675be14c739e8d72d088f557aa9bcb5b4556b27b --- /dev/null +++ b/data/2404.00098.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9486905e865172ca4b69ce74a07bca204cb634b8571ef5dc48cd5b2cae94a59c +size 910776 diff --git a/data/2404.00103.png b/data/2404.00103.png new file mode 100644 index 0000000000000000000000000000000000000000..e06f1ea6229319c03a3c6955631bc151439ee08b --- /dev/null +++ b/data/2404.00103.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3c3143567e6794be99a0313dd0740931ec0b61dd48f0a62c4f96a3f4964948ea +size 698471 diff --git a/data/2404.00130.png b/data/2404.00130.png new file mode 100644 index 0000000000000000000000000000000000000000..7717a3689032e19628e55b208e8322786babf6d8 --- /dev/null +++ b/data/2404.00130.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d2b9d23d41ff8a2ebb98d34d6f04b4db1e4475090385c2c3fda43fcc7d0a551 +size 1012914 diff --git a/data/2404.00149.png b/data/2404.00149.png new file mode 100644 index 0000000000000000000000000000000000000000..d1e2db84c53eed92913366b217b1e17414762939 --- /dev/null +++ b/data/2404.00149.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6196aff8a4c2fe56b5634d1c0ec110de85b6b01360f4dcec856def80ab99cbe7 +size 818111 diff --git a/data/2404.00168.png b/data/2404.00168.png new file mode 100644 index 0000000000000000000000000000000000000000..ec89801c3b21d5959928a81a313c1085689b23ba --- /dev/null +++ b/data/2404.00168.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3b9a17d7cbcfe75495cadf18a0f0327746f0a16a9d2b867e2d6a6a9af5a28e6f +size 1146421 diff --git a/data/2404.00228.png b/data/2404.00228.png new file mode 100644 index 0000000000000000000000000000000000000000..f57c4858182871128ba7af9c9272a1fcdcc9bbc7 --- /dev/null +++ b/data/2404.00228.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b63a95d7d0bfdc7473e0df8c7edc506e397cd81d352bc2b572492a5b0941f893 +size 780654 diff --git a/data/2404.00234v1.png b/data/2404.00234v1.png new file mode 100644 index 0000000000000000000000000000000000000000..6117dadd9a3023dab6627cd1d22cdac0eadf46f3 --- /dev/null +++ b/data/2404.00234v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b4435756fbfbb2c4443beb3d6fbe9b7e6361ecae453fe3bc14e7ccc0db3ba42f +size 805280 diff --git a/data/2404.00252.png b/data/2404.00252.png new file mode 100644 index 0000000000000000000000000000000000000000..1b6accd27f26cc5f5482ab385fa67274cf9629a7 --- /dev/null +++ b/data/2404.00252.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:383dbeaf214549cb3d463e459c2b0ef479ae50702252d18050db0195173ef3d0 +size 870777 diff --git a/data/2404.00254.png b/data/2404.00254.png new file mode 100644 index 0000000000000000000000000000000000000000..4db2036fec413b3bf8be443a0c1ce4e7588afe8d --- /dev/null +++ b/data/2404.00254.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d63e4d7233cea9b95390730f1428dbb322e3928e398bfa8bd32c359fe2890f1c +size 798484 diff 
--git a/data/2404.00262.png b/data/2404.00262.png new file mode 100644 index 0000000000000000000000000000000000000000..87995d726bb6d514cbee19c19744b02c1176d955 --- /dev/null +++ b/data/2404.00262.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7c0bc7f420c47172d53aaff49da21db73655196c5b292d6e5d1ce0d2b13c70e +size 880060 diff --git a/data/2404.00269.png b/data/2404.00269.png new file mode 100644 index 0000000000000000000000000000000000000000..e3f4e5e3d02ed8991ed52a3c8449d5df8fca5c68 --- /dev/null +++ b/data/2404.00269.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:92f0f7f41b210a7e449ff53fdca989f17987cf57f831f8f5b7172c34a950fee5 +size 1044964 diff --git a/data/2404.00292.png b/data/2404.00292.png new file mode 100644 index 0000000000000000000000000000000000000000..07c0cfc8ab712225ad941e3a4d6b6c3fd0d6915f --- /dev/null +++ b/data/2404.00292.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5803a399266b0f956634a09794157cd5eb0afcd86448a3dffe1a3da61d51d3cd +size 2019539 diff --git a/data/2404.00299.png b/data/2404.00299.png new file mode 100644 index 0000000000000000000000000000000000000000..69eb8cbc89baebca0011681ad6e3cb80c097aa7f --- /dev/null +++ b/data/2404.00299.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e0f4254775afc0ddb1d3a3383a85c91631b03d686a8a53a0916da6bf0808c71c +size 977195 diff --git a/data/2404.00301.png b/data/2404.00301.png new file mode 100644 index 0000000000000000000000000000000000000000..ee92f7b4fd3d5c027a0b364d0d26cc6cd70e3b8f --- /dev/null +++ b/data/2404.00301.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f6490b4f4530291ae449190a207a8f7290cfb87434660b6f88e321085522c9ad +size 1053525 diff --git a/data/2404.00312.png b/data/2404.00312.png new file mode 100644 index 0000000000000000000000000000000000000000..f4206997ce8143c74b5ec4192425eb9430073665 --- /dev/null +++ b/data/2404.00312.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:03ed7172494d61eef07d12496aee7de7f797ed721d5c10acaa0884dc68b22fe7 +size 818821 diff --git a/data/2404.00330.png b/data/2404.00330.png new file mode 100644 index 0000000000000000000000000000000000000000..597d9b028c792eeab6f36efde25bac2d35deef9d --- /dev/null +++ b/data/2404.00330.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f0b562551a2815cf27ab2b6d666f0285a1775cd1de34789bf0c76c90207a4690 +size 805235 diff --git a/data/2404.00368.png b/data/2404.00368.png new file mode 100644 index 0000000000000000000000000000000000000000..c0f250aeb65e105175c76e0959dd85dec5b9a8bc --- /dev/null +++ b/data/2404.00368.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a6e031c5cde5227cf64ee462a03101605a53ae40a04e9fa7063100dc3b47c737 +size 987608 diff --git a/data/2404.00385.png b/data/2404.00385.png new file mode 100644 index 0000000000000000000000000000000000000000..24e1ba12b577e8306c1678aa55f43079bbb84936 --- /dev/null +++ b/data/2404.00385.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:549ed52c07025d106aaa9c65f5bc46ac459b83f56fb76cf6c4221bc90928967f +size 752708 diff --git a/data/2404.00417.png b/data/2404.00417.png new file mode 100644 index 0000000000000000000000000000000000000000..849b15e47e9976b8bb8de72d25c9eb5e05fd77cd --- /dev/null +++ b/data/2404.00417.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:13ac3d219c427e8531899d9865f50b9588a3c522fca87e36e3254c5c265b2106 +size 807878 diff 
--git a/data/2404.00429.png b/data/2404.00429.png new file mode 100644 index 0000000000000000000000000000000000000000..aa3656526c0b139174d8867fb5f09107cbe774ae --- /dev/null +++ b/data/2404.00429.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b1774a6405e171db5135618d11e3de0542f1e47645c30c2d0094178d24963856 +size 872755 diff --git a/data/2404.00485.png b/data/2404.00485.png new file mode 100644 index 0000000000000000000000000000000000000000..6c47c4bce307f00cc756aa0f09c0860e77f9c953 --- /dev/null +++ b/data/2404.00485.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8ae5e4bbfb590c05c27f7582667fe73995a7f734264c230dc1be42fc74319440 +size 936134 diff --git a/data/2404.00521.png b/data/2404.00521.png new file mode 100644 index 0000000000000000000000000000000000000000..b6846a0a2a3d9e528c050d40fefeee1377cf7157 --- /dev/null +++ b/data/2404.00521.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:44ad98444e01574e20977f52c7af639f762256e6596fa29bf567163c71ed04e4 +size 864405 diff --git a/data/2404.00524.png b/data/2404.00524.png new file mode 100644 index 0000000000000000000000000000000000000000..a268e5dde48b6174e3d2070cf6efe22c55675126 --- /dev/null +++ b/data/2404.00524.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3ff74021406172f4776fe69bd0b029b2a52c88821898def2992ca64d71c38521 +size 957701 diff --git a/data/2404.00532.png b/data/2404.00532.png new file mode 100644 index 0000000000000000000000000000000000000000..90d5d7b8bc106fb52f3bb0beab333c674e759769 --- /dev/null +++ b/data/2404.00532.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:169e56cafe84cdb3c1de2fffd1170c723dc8c5b2137477fd3390a53a42dd8ca0 +size 769964 diff --git a/data/2404.00546.png b/data/2404.00546.png new file mode 100644 index 0000000000000000000000000000000000000000..de8e6aa66f98571ecfe3080b7d511a14b244b8b9 --- /dev/null +++ b/data/2404.00546.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:17f5d45fd6b5e70bc7a20081ad7ce4e6f7fb2599b4d41038fdc72fc2c1007bd8 +size 718736 diff --git a/data/2404.00562.png b/data/2404.00562.png new file mode 100644 index 0000000000000000000000000000000000000000..a1f95b8507e246eb568a58f358e824d58de6a470 --- /dev/null +++ b/data/2404.00562.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9f91ee1441caee6501e144225a1c4ace24c0a8aa5df176905bd9b5070c618a16 +size 836814 diff --git a/data/2404.00563.png b/data/2404.00563.png new file mode 100644 index 0000000000000000000000000000000000000000..99adc0d48fe66899a685b61e82d973eef7dc01b0 --- /dev/null +++ b/data/2404.00563.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:13d7413e588973b248c298d9e382266ec8a96436e8524d0dd3e761057b8fc665 +size 727532 diff --git a/data/2404.00653.png b/data/2404.00653.png new file mode 100644 index 0000000000000000000000000000000000000000..a377d7c0e14785baa6290541fadf697a62f5bf5a --- /dev/null +++ b/data/2404.00653.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0bc6170994656ae72e74c44d1a06d7cdc7de7f6897547129bb32700d187be802 +size 865057 diff --git a/data/2404.00658.png b/data/2404.00658.png new file mode 100644 index 0000000000000000000000000000000000000000..5ba8e5e162ac29f10d86c09682b98c4e86117104 --- /dev/null +++ b/data/2404.00658.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d3d1a546d78cf023df8bf35bfc6d92a2e3e3286c9c7033441310f1e1d7fe9790 +size 760866 diff 
--git a/data/2404.00672v1.png b/data/2404.00672v1.png new file mode 100644 index 0000000000000000000000000000000000000000..f3ba35c0436f00826ea625ff25036f9c40df3e78 --- /dev/null +++ b/data/2404.00672v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:238dfc255ad131828f0ce14a1eabfda392f4321cb2dc74733a6bdfd9d338b3e8 +size 780971 diff --git a/data/2404.00676.png b/data/2404.00676.png new file mode 100644 index 0000000000000000000000000000000000000000..a46cb7fb0d77075a6e10fe682cc982deb0169c41 --- /dev/null +++ b/data/2404.00676.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:be8959be063514697ddc8b61866180648f6a506543e4eaa2ef60dd9a41c16380 +size 1102475 diff --git a/data/2404.00678.png b/data/2404.00678.png new file mode 100644 index 0000000000000000000000000000000000000000..a9b7317d8eb41a4d071764fa3e094a32913d382e --- /dev/null +++ b/data/2404.00678.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0ffad29e8910ea5d61bf0b78ea4ee21a5b4943ac6968612862b84306e89ca526 +size 982609 diff --git a/data/2404.00679.png b/data/2404.00679.png new file mode 100644 index 0000000000000000000000000000000000000000..ff6cfa3cdc9eb5b8ef9a87296a4f99ce13b4c868 --- /dev/null +++ b/data/2404.00679.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0088734f163928cf5eab5f2556bd84b665a439df4acbbd346f6204ede1f458f4 +size 752772 diff --git a/data/2404.00680.png b/data/2404.00680.png new file mode 100644 index 0000000000000000000000000000000000000000..16cc86f88f0c4c9686a3d2d6f9bcf7119c7bcf86 --- /dev/null +++ b/data/2404.00680.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:75d05b739562cc0582e2af4741e7fec1fe561554eda9140bd9e63a54da8db303 +size 905370 diff --git a/data/2404.00710.png b/data/2404.00710.png new file mode 100644 index 0000000000000000000000000000000000000000..fbe955c19db4b0da36c844599c43c33007548cbf --- /dev/null +++ b/data/2404.00710.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c407eab47e582a812b7441702d14150fd8fbbea23cb46160e0650df98f7aff1 +size 816528 diff --git a/data/2404.00741.png b/data/2404.00741.png new file mode 100644 index 0000000000000000000000000000000000000000..ed08bb1d6bea798bbe54901cf8b5129b8c30eea1 --- /dev/null +++ b/data/2404.00741.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:92736a90c2176fe269eb241b7b35eb582298b7ce1e0d28e44ff258402ae1a7dc +size 763329 diff --git a/data/2404.00742.png b/data/2404.00742.png new file mode 100644 index 0000000000000000000000000000000000000000..d906905e0b9b6b63bf3ebb58918807ae9e7d651c --- /dev/null +++ b/data/2404.00742.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a30b0b978f04fbb4ba4f9f0f72464e2e7f712784f3ebe9d30f4bf707e760e25d +size 767171 diff --git a/data/2404.00777.png b/data/2404.00777.png new file mode 100644 index 0000000000000000000000000000000000000000..0cd4a8eacbb1f0cf62eb3c713e79cab96101e146 --- /dev/null +++ b/data/2404.00777.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3025a0852a516f9c1e4eed3228130ef4ad670e952bb1db561b5a2d923c4aacc +size 862678 diff --git a/data/2404.00801.png b/data/2404.00801.png new file mode 100644 index 0000000000000000000000000000000000000000..55194c2d107163f27add4f1e327917e4fc852246 --- /dev/null +++ b/data/2404.00801.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f8848849e019da1a6b1fbdd21182b809306a74aea041370ae8388d51c133f3a5 +size 419351 
diff --git a/data/2404.00815.png b/data/2404.00815.png new file mode 100644 index 0000000000000000000000000000000000000000..1b32885ee488d009fb47dba971067b78f9863d96 --- /dev/null +++ b/data/2404.00815.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:67151aee15e7abe0bbcebd1857982edec33ab37ff435f77bb2b7e0083787c509 +size 1240587 diff --git a/data/2404.00834v1.png b/data/2404.00834v1.png new file mode 100644 index 0000000000000000000000000000000000000000..afcdc516213e772b5eeaeffba413602b646f7109 --- /dev/null +++ b/data/2404.00834v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4fa8eac92883c3837cca71ff2b22611c0e198edc9aa009c5bf43e0757b06978e +size 919539 diff --git a/data/2404.00842v1.png b/data/2404.00842v1.png new file mode 100644 index 0000000000000000000000000000000000000000..b517af177c05bd350015628a24fb8316fab749cb --- /dev/null +++ b/data/2404.00842v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f67265de4046e52d49af820ee41736154245bc926ce5b639f96b908b6e8ccc4 +size 851476 diff --git a/data/2404.00847.png b/data/2404.00847.png new file mode 100644 index 0000000000000000000000000000000000000000..441e4a29d0d978ee0743b842840f0a4e8d6d0367 --- /dev/null +++ b/data/2404.00847.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7125ab3af960bdcb219955577f7dbbdf4d91e380e7496727e54e355b3fb0a61a +size 712033 diff --git a/data/2404.00849.png b/data/2404.00849.png new file mode 100644 index 0000000000000000000000000000000000000000..949a42445c15a30baa6f49f39cbe14c1f6e7565a --- /dev/null +++ b/data/2404.00849.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:19e4498aabd70a247b481862defa1b65c8015a76ae19d25d889f1dc41a40b0a8 +size 775654 diff --git a/data/2404.00851.png b/data/2404.00851.png new file mode 100644 index 0000000000000000000000000000000000000000..f264cf25796445f4421b586a6766a8362887427f --- /dev/null +++ b/data/2404.00851.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f0767af5d3738c6ba43bdb2efbe3753ec68a2747443131c0e91de4898b8c2a62 +size 754327 diff --git a/data/2404.00857.png b/data/2404.00857.png new file mode 100644 index 0000000000000000000000000000000000000000..f12f08905f44ff5f29e18026990524de7259ca2d --- /dev/null +++ b/data/2404.00857.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9bb34a00fb6680849a95652be7dd27f5e9a030f62c08d134c12dd4fe0b5be97a +size 806093 diff --git a/data/2404.00874.png b/data/2404.00874.png new file mode 100644 index 0000000000000000000000000000000000000000..65c5953284975ecb3075e091c64f5574c6187d00 --- /dev/null +++ b/data/2404.00874.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f574aa1d50b94b5ff82e3e0f46c1cf633ba19f4e307be268c2d4d55ef70dbc38 +size 1165130 diff --git a/data/2404.00876.png b/data/2404.00876.png new file mode 100644 index 0000000000000000000000000000000000000000..979287bba7b40d898d06d6d5fa4d0680f60b8e94 --- /dev/null +++ b/data/2404.00876.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:50082d7b7e6fae84e1e4b12d87f06bcb4e7bdb29b5fadbc23d8ad3d65c242726 +size 871244 diff --git a/data/2404.00906.png b/data/2404.00906.png new file mode 100644 index 0000000000000000000000000000000000000000..6bbc2e895492a3693f04bb108c3dd0647df5adb2 --- /dev/null +++ b/data/2404.00906.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d2d766ba36b8307b5bfd922d55e6b22de971eb47343bfc54b17d96d2bdfc2a36 
+size 879693 diff --git a/data/2404.00909v1.png b/data/2404.00909v1.png new file mode 100644 index 0000000000000000000000000000000000000000..205f4db98180b33d7b4d6ade505e1c1eb2e496b5 --- /dev/null +++ b/data/2404.00909v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1fd4d744bea45141d13dcb350da591fff020d5631040e666baa090675fb6e766 +size 824646 diff --git a/data/2404.00913v1.png b/data/2404.00913v1.png new file mode 100644 index 0000000000000000000000000000000000000000..1fe149d7a918fc7575ec12819780e0fbc27a38b9 --- /dev/null +++ b/data/2404.00913v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:408751617c4c3a73fe10abf775c2bc0f3ece94da40d9fbc8449d32af1c510a87 +size 702424 diff --git a/data/2404.00915.png b/data/2404.00915.png new file mode 100644 index 0000000000000000000000000000000000000000..68d7dd0df363095e68565a90a561bf232fc035ba --- /dev/null +++ b/data/2404.00915.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8d5256dc23767e204ffdb683761f9d83cefbb147f4d009f309be17d107d008c8 +size 789472 diff --git a/data/2404.00922.png b/data/2404.00922.png new file mode 100644 index 0000000000000000000000000000000000000000..b85d20211c64b1e7f7d8d7b63ad730fd8acf888f --- /dev/null +++ b/data/2404.00922.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8e1d1f36ad529e90603e89c2e4580985f572479c1b56b5df10b3019ae5707c13 +size 1232189 diff --git a/data/2404.00925.png b/data/2404.00925.png new file mode 100644 index 0000000000000000000000000000000000000000..81e6ace07cb32c23e3bc6a0df13f17e86e6e7709 --- /dev/null +++ b/data/2404.00925.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c4e74289176f7efc6a76553dc7e67d54a4108aeb70ee04bbab82299c8d3b86a9 +size 808371 diff --git a/data/2404.00928.png b/data/2404.00928.png new file mode 100644 index 0000000000000000000000000000000000000000..da7f26d0bab570b638f2afc21f68aa35d8c6d403 --- /dev/null +++ b/data/2404.00928.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d12e2e7b08e606c750b3a86e857a4d9ee2c827379bfa7a0342cdff51992bbad6 +size 802516 diff --git a/data/2404.00931.png b/data/2404.00931.png new file mode 100644 index 0000000000000000000000000000000000000000..a39e1aa242f2021c14af513a2802242bab03c789 --- /dev/null +++ b/data/2404.00931.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f9e38593854bb2322fa8eb255ccc27c161289ca38decbaebc3602958b781c8ef +size 1052739 diff --git a/data/2404.00973.png b/data/2404.00973.png new file mode 100644 index 0000000000000000000000000000000000000000..4c7bd3c192e088d37b8f80d3e663734d3312a2cc --- /dev/null +++ b/data/2404.00973.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:84f737f2f9929a6c35fe21779ce3e3bf402d9de5bbbe06e54a166e9281dc80b7 +size 1046355 diff --git a/data/2404.00974v1.png b/data/2404.00974v1.png new file mode 100644 index 0000000000000000000000000000000000000000..afa0e6b8975d6188f0d54a0695137aa60343429f --- /dev/null +++ b/data/2404.00974v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:423609061d33eeff8f39129c5807cee31f516ed79e145d92737a76c5d10e2b79 +size 985373 diff --git a/data/2404.00979.png b/data/2404.00979.png new file mode 100644 index 0000000000000000000000000000000000000000..bc6e6dab2574c78867c4ab081a43280c2b25b9b9 --- /dev/null +++ b/data/2404.00979.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:e91d6d1272974a1e928d2ffde0112473a51ddc31a3a90ff61f232afea38643a1 +size 953089 diff --git a/data/2404.00989.png b/data/2404.00989.png new file mode 100644 index 0000000000000000000000000000000000000000..81e2f4fd5b2650692b91e1a06140244a4b597c5b --- /dev/null +++ b/data/2404.00989.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:69dc2056a95d82bf1b761f101293600035059e0808385d2ee5d68db67027a1c6 +size 1422027 diff --git a/data/2404.00992v1.png b/data/2404.00992v1.png new file mode 100644 index 0000000000000000000000000000000000000000..ff67f805816858bdb3f02ed8f4e68c4e7971fb84 --- /dev/null +++ b/data/2404.00992v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d51a5355e9bb857af2c3217f8a38a1afae5e170bdb120948f897810dc239d850 +size 1132279 diff --git a/data/2404.01050.png b/data/2404.01050.png new file mode 100644 index 0000000000000000000000000000000000000000..30fc668e4551f621f4b7242120c9a68492ef01a8 --- /dev/null +++ b/data/2404.01050.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dbb83c7a009c89d1021c1bf274979e3539541e47113ba7aff506b1bc435b2de7 +size 1436064 diff --git a/data/2404.01051.png b/data/2404.01051.png new file mode 100644 index 0000000000000000000000000000000000000000..da1a816c5d06a1cd93d64762323477a680c07af1 --- /dev/null +++ b/data/2404.01051.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:37ef432c38700e4130a0d273feb58ede3a857ff719666e4474a7134d64ffc31d +size 794922 diff --git a/data/2404.01089.png b/data/2404.01089.png new file mode 100644 index 0000000000000000000000000000000000000000..0937b7fa704d1b41a1a0e4bca7cb745b26561176 --- /dev/null +++ b/data/2404.01089.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:967543e6c7feecf5cbaf87bdc0ab3f24e0312305657a9522dd3ff9dcce9affef +size 1236949 diff --git a/data/2404.01120.png b/data/2404.01120.png new file mode 100644 index 0000000000000000000000000000000000000000..4a202b62b23045db86ef743b577ffc513091f53d --- /dev/null +++ b/data/2404.01120.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f98f7ce928af47862010ef9ae7d07602ec2765e8a6c8f0e1926d5969be089e4 +size 819255 diff --git a/data/2404.01123.png b/data/2404.01123.png new file mode 100644 index 0000000000000000000000000000000000000000..29518c905190552aae769245a23ab0cd4648860d --- /dev/null +++ b/data/2404.01123.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d88ba35eb82c8bb6a2e3d2583458c50cb5d35b0e82aa225a20c5ebedd0fe3ba +size 1958056 diff --git a/data/2404.01143.png b/data/2404.01143.png new file mode 100644 index 0000000000000000000000000000000000000000..884b7862c54607224067e9d9605cf3fd3f39e15a --- /dev/null +++ b/data/2404.01143.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4bf141ec58a690dfcfcc249c4e0dce1f13ec40b764ce4b4fab0486c2a5fc7666 +size 705492 diff --git a/data/2404.01156.png b/data/2404.01156.png new file mode 100644 index 0000000000000000000000000000000000000000..6394aa47c0a155bbff2cc8e071cdc28a08267fb3 --- /dev/null +++ b/data/2404.01156.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0be342a7318bca1557e9756ff175ecc4794d29b615b6fb36030204b46e5c3c77 +size 803534 diff --git a/data/2404.01179.png b/data/2404.01179.png new file mode 100644 index 0000000000000000000000000000000000000000..3903a4ef816ddf97137252ecf2912d6b7d114d86 --- /dev/null +++ b/data/2404.01179.png @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:58fc7265d0c8285a7f336f5ed6ab3e749d01bd643f8f7204483720957a384099 +size 670988 diff --git a/data/2404.01203.png b/data/2404.01203.png new file mode 100644 index 0000000000000000000000000000000000000000..2c279ea6ad780abfcf3d53913a51f9f39212f198 --- /dev/null +++ b/data/2404.01203.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7ab5fa3f24727bce92c17132d5502591602af51f3defbbc934543a9fee81bca +size 754795 diff --git a/data/2404.01225.png b/data/2404.01225.png new file mode 100644 index 0000000000000000000000000000000000000000..81a7c1b6a823e6d5b30ca0b8a01252c2a4f993a4 --- /dev/null +++ b/data/2404.01225.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:584ee83256511e0a2c70977798b8c7af6718bbe73f5af8f856350ccd272c1c88 +size 935009 diff --git a/data/2404.01243.png b/data/2404.01243.png new file mode 100644 index 0000000000000000000000000000000000000000..6d42edae109730a9f517e7e78657eff2f5f23d04 --- /dev/null +++ b/data/2404.01243.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d5169f33faa69454b4f16f24d6018a0c7f5ca243aefa0d39b6a84d7a0e1096a4 +size 1972607 diff --git a/data/2404.01260.png b/data/2404.01260.png new file mode 100644 index 0000000000000000000000000000000000000000..15c8af6354e1f875caabadf54e6c7236eedb9ec0 --- /dev/null +++ b/data/2404.01260.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e041927068d03e7685b9041b1041963c365896f9a21801470af27cbedfffe2e1 +size 836204 diff --git a/data/2404.01278.png b/data/2404.01278.png new file mode 100644 index 0000000000000000000000000000000000000000..7bd68829eeb7b1c1e33d7e0f3bee04c0b73b57a0 --- /dev/null +++ b/data/2404.01278.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7519f6d200a739167c28bc61a19bbdb2c517773f92abac6f1085871c754055f3 +size 801712 diff --git a/data/2404.01294.png b/data/2404.01294.png new file mode 100644 index 0000000000000000000000000000000000000000..ed9d2a74051336483836255dcacb90321557b389 --- /dev/null +++ b/data/2404.01294.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f91a883e069aa45a71ac6f5411644188ed94eb2c7286a7e90420f5bb08cb20e7 +size 1265889 diff --git a/data/2404.01297.png b/data/2404.01297.png new file mode 100644 index 0000000000000000000000000000000000000000..f886476e8f1977916af9843f5542ead3729c3936 --- /dev/null +++ b/data/2404.01297.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a4ade693a91766a3066d3d9d745899d476195d381176f79b59960db3024839e5 +size 805926 diff --git a/data/2404.01342.png b/data/2404.01342.png new file mode 100644 index 0000000000000000000000000000000000000000..eacee7223b724c2634f5809d403937d7841f1fd2 --- /dev/null +++ b/data/2404.01342.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c2e904a5f0b7bbfe8bed8dbdba4dc1bdd2af54fdc59b77d1e8543602b0d54ba2 +size 763385 diff --git a/data/2404.01351.png b/data/2404.01351.png new file mode 100644 index 0000000000000000000000000000000000000000..9b3a0a40c5b36563e321bb143a7d356a9d9f5ed0 --- /dev/null +++ b/data/2404.01351.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fc036c4374412d18f944d53707b48bf0f0bec525a61041d2a78f2543f7f28c92 +size 841868 diff --git a/data/2404.01409.png b/data/2404.01409.png new file mode 100644 index 0000000000000000000000000000000000000000..c0f9f9f4e00be074ccab3537f6608941f4e418e8 --- /dev/null +++ b/data/2404.01409.png @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:3fa5f1a8ce557b317be4361c51a1722d7def37fc686691c62f79aa6a65133806 +size 801227 diff --git a/data/2404.01415.png b/data/2404.01415.png new file mode 100644 index 0000000000000000000000000000000000000000..35af3b017d265da85f62672c45d01514f0a59084 --- /dev/null +++ b/data/2404.01415.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d18384314b39c7d876c5fd80fb449328f62f6b75c21cd7bef9ce786108d85fd1 +size 800471 diff --git a/data/2404.01424.png b/data/2404.01424.png new file mode 100644 index 0000000000000000000000000000000000000000..d79e1c5705855287a5ddaa5a8d5e53d5439a2cf3 --- /dev/null +++ b/data/2404.01424.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d5de6191e824f17658199aca7a9dfecec361651f3731d3db94feb4672e48060 +size 951396 diff --git a/data/2404.01440.png b/data/2404.01440.png new file mode 100644 index 0000000000000000000000000000000000000000..27a1cf3789dccfc7a77c91c0ab6478da5088ea0d --- /dev/null +++ b/data/2404.01440.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a515cecb68f3a5a44a8f9119c7cf25a871f6da777b81545f38f983c4c7434be +size 755770 diff --git a/data/2404.01464.png b/data/2404.01464.png new file mode 100644 index 0000000000000000000000000000000000000000..eb30897634855da366d3119018cd78bf4edebbd4 --- /dev/null +++ b/data/2404.01464.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23ffd496784d5cba24d1b25767628bd23e4621ec0995c0da05836c2d3bb861ae +size 792825 diff --git a/data/2404.01491.png b/data/2404.01491.png new file mode 100644 index 0000000000000000000000000000000000000000..c467ece6f33cf68639846e70020fa7cd3cceef1e --- /dev/null +++ b/data/2404.01491.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f48354bdb65d9a9e05405935204938746c6c8838624d14e6d4bacc80a97ff057 +size 987011 diff --git a/data/2404.01509.png b/data/2404.01509.png new file mode 100644 index 0000000000000000000000000000000000000000..117d68f9c2c6bac49c0f4982cda52ffbda58af4b --- /dev/null +++ b/data/2404.01509.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b8bc0d53552e97009e43595ca43bb483d02b59babc9dcd72257daf6a85a1107a +size 878802 diff --git a/data/2404.01518.png b/data/2404.01518.png new file mode 100644 index 0000000000000000000000000000000000000000..b0e6c150784d6b4ad8a77bc64da02521d1b1d106 --- /dev/null +++ b/data/2404.01518.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f51d9f102cd3d08d5aaa5fa9f78f4da91484cef6764e25e24dddbf4502f9f7f5 +size 722499 diff --git a/data/2404.01524.png b/data/2404.01524.png new file mode 100644 index 0000000000000000000000000000000000000000..fcd8b6817b6b11ada7f11bdc0c7fd8db212b7066 --- /dev/null +++ b/data/2404.01524.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fe7f8ef52b4a83749ae693feadc141d2668ea40c193cfe65aeb151355af1e5cf +size 830916 diff --git a/data/2404.01543.png b/data/2404.01543.png new file mode 100644 index 0000000000000000000000000000000000000000..dbe200c434d641d0898b08b66abdc54ed9eb2a7e --- /dev/null +++ b/data/2404.01543.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:32a9d3f2933d284fb0f65fd0efe3b199a276121be173ce33d75dd53ef2cee723 +size 896452 diff --git a/data/2404.01547v1.png b/data/2404.01547v1.png new file mode 100644 index 0000000000000000000000000000000000000000..53e4edddb54c94936fdab4d9bb9aab31e9323f52 --- /dev/null +++ b/data/2404.01547v1.png @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:891c11dbf7627634b2561be0f3a3c6e030678ba6f608589cf4395b5b2aff4774 +size 858340 diff --git a/data/2404.01591.png b/data/2404.01591.png new file mode 100644 index 0000000000000000000000000000000000000000..d3f39929adac27530a28c7f6b8282bac57e0739a --- /dev/null +++ b/data/2404.01591.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c97d76c441d7fb7381b83e45c2481c78e7f90f584201a8a543b656fad44ac07a +size 1005495 diff --git a/data/2404.01612.png b/data/2404.01612.png new file mode 100644 index 0000000000000000000000000000000000000000..75994904d6aaaffe8c57f14ae3532ad181dff347 --- /dev/null +++ b/data/2404.01612.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b7ca7ba647bf602e373b909a9ca2fdbd400f05520664eede58503e194aea5165 +size 801309 diff --git a/data/2404.01628.png b/data/2404.01628.png new file mode 100644 index 0000000000000000000000000000000000000000..fb4de2094bd9f8ebc6e8e1ada66fb73424837738 --- /dev/null +++ b/data/2404.01628.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:774763c7d9a138724cc5dbc2ebc4523076d6bdd2126b9cb9b7f312ecfc8fba2f +size 770561 diff --git a/data/2404.01636.png b/data/2404.01636.png new file mode 100644 index 0000000000000000000000000000000000000000..925c4e624f4174ea2a4db1b97d1ed178b691aad1 --- /dev/null +++ b/data/2404.01636.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3cbf08a83e86951c77b36bed8b74f82b86fc862e73c3f4a04a7f06e2ced4674e +size 1098749 diff --git a/data/2404.01686.png b/data/2404.01686.png new file mode 100644 index 0000000000000000000000000000000000000000..9227d63aa830e19a7b50454425a322a09a5d3c30 --- /dev/null +++ b/data/2404.01686.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d343e57162fb0b17368bdbdd2aec3d0d394b449ee1b9e851098df1b03c412aa +size 1339801 diff --git a/data/2404.01692.png b/data/2404.01692.png new file mode 100644 index 0000000000000000000000000000000000000000..2b1a248a547307e7ac84c120a67995523a503b77 --- /dev/null +++ b/data/2404.01692.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd8abaae671c52040ac7c44129ac015cede31dfc688c1aecbc633adb3b40fc62 +size 1033810 diff --git a/data/2404.01725.png b/data/2404.01725.png new file mode 100644 index 0000000000000000000000000000000000000000..d1141bada1b631198a9f61a302c3bea2774259f1 --- /dev/null +++ b/data/2404.01725.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2e7b5deaea8662260e4c288ebbc7de2340a0d595a2698a74d51e4e345960c5cf +size 716564 diff --git a/data/2404.01727v1.png b/data/2404.01727v1.png new file mode 100644 index 0000000000000000000000000000000000000000..b531b311bfe7e05d11ed9e326acab41950805f8a --- /dev/null +++ b/data/2404.01727v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b15be87a5afd4037c2856f6d011f7b465173cc73033d0eaea8412e10bf2444cc +size 821834 diff --git a/data/2404.01743.png b/data/2404.01743.png new file mode 100644 index 0000000000000000000000000000000000000000..6afc84d1ed6ae1dc1cb1f91efb2d1062b01c632b --- /dev/null +++ b/data/2404.01743.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:deb0bbd8de8474ce0e1ac8459d65d5a606025b9e836a55ec5fc0a9a64f48ae22 +size 777963 diff --git a/data/2404.01751v1.png b/data/2404.01751v1.png new file mode 100644 index 0000000000000000000000000000000000000000..65557729b7d2ed827cdf4c8a0e45547ea8574a61 --- /dev/null +++ b/data/2404.01751v1.png @@ 
-0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:317ac4f3fc205c1848b1e1a55232e34ffd0bf7ab4b315c14d5c926f1e93d40cf +size 770544 diff --git a/data/2404.01758.png b/data/2404.01758.png new file mode 100644 index 0000000000000000000000000000000000000000..0b5818f5bec271ec5bcf44ca73062b7ec04e75f4 --- /dev/null +++ b/data/2404.01758.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cda7ea2d2f5646fca9e713718821b7a6219b68c6d23693a3ca5793867ccd716c +size 740022 diff --git a/data/2404.01775.png b/data/2404.01775.png new file mode 100644 index 0000000000000000000000000000000000000000..24387665b65639c8add9f0aa0dd208d3486275f7 --- /dev/null +++ b/data/2404.01775.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d3aaec0c3434ca988a5d979279e65f7a90350b60757eedde520efea36c7cc4aa +size 746829 diff --git a/data/2404.01819.png b/data/2404.01819.png new file mode 100644 index 0000000000000000000000000000000000000000..482718fa52e5fff7ac2b53653dcfed94cdf00c46 --- /dev/null +++ b/data/2404.01819.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d71097cdc6cc1bb209f848569a12a3596ad3e12e9d3d7c7ca4b91317f64776d7 +size 794636 diff --git a/data/2404.01828.png b/data/2404.01828.png new file mode 100644 index 0000000000000000000000000000000000000000..573c4708fc479fb707efa38a0b87b781d78f07bc --- /dev/null +++ b/data/2404.01828.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:57a4fac3472112a33a678d10b3460bc08f95166f520d8d33bba85229b0e48672 +size 806430 diff --git a/data/2404.01862v1.png b/data/2404.01862v1.png new file mode 100644 index 0000000000000000000000000000000000000000..dc15440e7478f6b51401fa051f88da2c0aae58be --- /dev/null +++ b/data/2404.01862v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:beae1f76d3905c695deea0135986ca93bc5267737e69a3a1ad48b7b3355dd094 +size 1108366 diff --git a/data/2404.01882.png b/data/2404.01882.png new file mode 100644 index 0000000000000000000000000000000000000000..b3af3eb172d034c725c8a7a6ea02173347887fea --- /dev/null +++ b/data/2404.01882.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2743bc67cebc08a27042223078a7869734eea79e8ba606db504dedb69615ae89 +size 863694 diff --git a/data/2404.01925v1.png b/data/2404.01925v1.png new file mode 100644 index 0000000000000000000000000000000000000000..ef0f5ec9a1cd70d7882bdadd4797a1c5721d6e88 --- /dev/null +++ b/data/2404.01925v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f3a25320a425f7c68bfa35dd032ec13c44918b1a875640508d7c7459a32bec0 +size 816075 diff --git a/data/2404.01933.png b/data/2404.01933.png new file mode 100644 index 0000000000000000000000000000000000000000..6d15c63a000686f1367d073b1a12aa51ce6a50eb --- /dev/null +++ b/data/2404.01933.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4b4a10ef31c20ca43754c81373063cd47980d49b919842d0ca14d60c9f31ef6e +size 726834 diff --git a/data/2404.01941.png b/data/2404.01941.png new file mode 100644 index 0000000000000000000000000000000000000000..bfb52c0f344953fdc4e0866b50c70db6335f2bfd --- /dev/null +++ b/data/2404.01941.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:63cce8c5bac503af27555a091757874465a7fc69c523e1f7c575a566c31e578e +size 1157813 diff --git a/data/2404.01943.png b/data/2404.01943.png new file mode 100644 index 0000000000000000000000000000000000000000..878d572eb39543058f7b74d5dcb548c5182cb55b --- /dev/null +++ 
b/data/2404.01943.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:960ed13e4f76bd642ca62e9cad76fd1b28199a543dbac7ce44462f32f16fb33a +size 876784 diff --git a/data/2404.01945.png b/data/2404.01945.png new file mode 100644 index 0000000000000000000000000000000000000000..d4fbf4e5462a67f28ae262d370adb878adb3017f --- /dev/null +++ b/data/2404.01945.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aaf5c72dc2c761daee627889224e2a7c8f1ae60eac2db875fb11ccfb4c5b9b4c +size 755400 diff --git a/data/2404.01976.png b/data/2404.01976.png new file mode 100644 index 0000000000000000000000000000000000000000..43ffe263666b7b3c63e8729c0e5673c8eed38fec --- /dev/null +++ b/data/2404.01976.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c5caf20e5a4968c5fb707fc3cd75bf135f1cdd4257e3571a6da763e402821dae +size 861670 diff --git a/data/2404.01998.png b/data/2404.01998.png new file mode 100644 index 0000000000000000000000000000000000000000..9d5966437abe7647765efc0a6a774eb801209eab --- /dev/null +++ b/data/2404.01998.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7dd402133c2f3451f511d3edff5b4f6d8bb847218a46c33985a1bcbda01407c2 +size 1116291 diff --git a/data/2404.02041.png b/data/2404.02041.png new file mode 100644 index 0000000000000000000000000000000000000000..b20ab0ccd8ca971153fe7cd1253315d835731308 --- /dev/null +++ b/data/2404.02041.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4df2ac83ef1020e3ca87b4f2215321a2fd40837e2d139ef2f3cdb2a23480a895 +size 810943 diff --git a/data/2404.02072.png b/data/2404.02072.png new file mode 100644 index 0000000000000000000000000000000000000000..c9d5547793804db460626e4629bd54aa9c2712ed --- /dev/null +++ b/data/2404.02072.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b6d96d405088e08765ac6f4d1056fd802db99ee87cca9b945eb818836fc73460 +size 974445 diff --git a/data/2404.02117.png b/data/2404.02117.png new file mode 100644 index 0000000000000000000000000000000000000000..09e0accc0e758b524a211e65c50e6fc702ee1b16 --- /dev/null +++ b/data/2404.02117.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d7fe8825807f8f66d1c6ca18085799b480eb4c0a23cf371705735822cf69560e +size 765604 diff --git a/data/2404.02132.png b/data/2404.02132.png new file mode 100644 index 0000000000000000000000000000000000000000..943c06a6975649d24a26b5bd30babc71350e2c4e --- /dev/null +++ b/data/2404.02132.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:12abbd8fc57219c4a50ee4693ea7230cd304e1db2235f7975465e3a6dc1cb43b +size 866033 diff --git a/data/2404.02145.png b/data/2404.02145.png new file mode 100644 index 0000000000000000000000000000000000000000..fcf1056df901697c60f2d7c98ad1be84e3b880a6 --- /dev/null +++ b/data/2404.02145.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7b170f9a43c72d0edc6edb3a093cd10db5954dafe607af416ee02eff2c262794 +size 837604 diff --git a/data/2404.02152.png b/data/2404.02152.png new file mode 100644 index 0000000000000000000000000000000000000000..bf1498b08737c87d3832e4a292e50791f6ff98f7 --- /dev/null +++ b/data/2404.02152.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5b82cc8617879cf9a8eabf2ae5dc2692162d3b6e15848a9f300fa2a717e0cf66 +size 1147850 diff --git a/data/2404.02155.png b/data/2404.02155.png new file mode 100644 index 0000000000000000000000000000000000000000..ec152ca9edb475a0874eb1aaf700f4d437780c4c --- /dev/null +++ 
b/data/2404.02155.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4291108b5dc4c1e32292c4925ee90ae62ebadef927f7a2444100ca8c0cf55f46 +size 785473 diff --git a/data/2404.02176.png b/data/2404.02176.png new file mode 100644 index 0000000000000000000000000000000000000000..845f4f759b8697512d2fe4632e6dc76c2e81ed36 --- /dev/null +++ b/data/2404.02176.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d3a98959381b46a45e6f7def9d0776068e6b3b70b780c79679634332a954319 +size 777001 diff --git a/data/2404.02185.png b/data/2404.02185.png new file mode 100644 index 0000000000000000000000000000000000000000..271827212a6d5e896af4b5d67822da871c08a6f6 --- /dev/null +++ b/data/2404.02185.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c788e7f0f75bc80a0f00b51ebd86c4be45b635d50b8794e968b545aac29df017 +size 791119 diff --git a/data/2404.02189.png b/data/2404.02189.png new file mode 100644 index 0000000000000000000000000000000000000000..c4f71379ab1f1e6c3a75d29e48faf43cad4bb01c --- /dev/null +++ b/data/2404.02189.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f5b9c193a6662ff79c32fe675d65a45821bb05e7cd41bc821b6e187236102a92 +size 785845 diff --git a/data/2404.02227.png b/data/2404.02227.png new file mode 100644 index 0000000000000000000000000000000000000000..24d78d4951cef87b79799b8985139ccd8ade1ab9 --- /dev/null +++ b/data/2404.02227.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d2b0c69bfc2727bf041e9a82e7be7f088adab804bde7b645105e2eb858cd3fdb +size 594151 diff --git a/data/2404.02233.png b/data/2404.02233.png new file mode 100644 index 0000000000000000000000000000000000000000..4efe754daf25dbaa8bc3b15d0c965d04e17071e9 --- /dev/null +++ b/data/2404.02233.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1e92aac03634a9082a64daca1125226db2f6a11d16fac94fe7beaa7ef66fb91 +size 779232 diff --git a/data/2404.02242.png b/data/2404.02242.png new file mode 100644 index 0000000000000000000000000000000000000000..ecd319390d1010397c60986877f939ccd65590c9 --- /dev/null +++ b/data/2404.02242.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0843b1f4b3d1bd9570801d510652de66bff71fc48d11e5b57fb85865c9c1ce74 +size 794298 diff --git a/data/2404.02257.png b/data/2404.02257.png new file mode 100644 index 0000000000000000000000000000000000000000..df5218d51fe058c099b48ee33910aa3310075e05 --- /dev/null +++ b/data/2404.02257.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7cb00393d7024b586ddc21e80c013b20829cb8238ba2c24d84393ebc474bba45 +size 746295 diff --git a/data/2404.02285.png b/data/2404.02285.png new file mode 100644 index 0000000000000000000000000000000000000000..9991ad6396eb225269811c540e92ca61312a976a --- /dev/null +++ b/data/2404.02285.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5a3b0b1345f126a511628da607d59aec8b432029d758a825f34df4d6a23ba1d9 +size 717360 diff --git a/data/2404.02388.png b/data/2404.02388.png new file mode 100644 index 0000000000000000000000000000000000000000..902cc36c08c1b7d91ca7ad807c11dbbff459a3c2 --- /dev/null +++ b/data/2404.02388.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fa3127c244ce511b858a5f7fc573b249f58c5d577ebd69834cd275fbf7d6cf04 +size 812372 diff --git a/data/2404.02405.png b/data/2404.02405.png new file mode 100644 index 0000000000000000000000000000000000000000..b3c98e6b7f4b56425f4a437ec976367f6a41baaf --- /dev/null +++ 
b/data/2404.02405.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:026814c7e944f338f4114cfb5642037a3a429466e8a79e5db2f62e38b64a22c7 +size 771533 diff --git a/data/2404.02478.png b/data/2404.02478.png new file mode 100644 index 0000000000000000000000000000000000000000..e9d68b130ee6acd5c66d4020a0f2e97d2e0f5757 --- /dev/null +++ b/data/2404.02478.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9babf58b8096632cd070233d8a8af524ebf65fc2e4f582fba5bc242c26e1f1c1 +size 800011 diff --git a/data/2404.02585.png b/data/2404.02585.png new file mode 100644 index 0000000000000000000000000000000000000000..0b1601b9e07970d8dee692602229d3cc12ffd043 --- /dev/null +++ b/data/2404.02585.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:09213e9d88d579db17c5c054e45c0e0c24c2377dd1316c0b7b26ed5d2c813bd2 +size 799500 diff --git a/data/2404.02638.png b/data/2404.02638.png new file mode 100644 index 0000000000000000000000000000000000000000..b11abc60d0134e0f910e61c11434caf27ee2e0ae --- /dev/null +++ b/data/2404.02638.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b5c015f0a0bef97f7d618cb57865d25d294383d9266db97e6cd2c32a3d55031e +size 1074872 diff --git a/data/2404.02686.png b/data/2404.02686.png new file mode 100644 index 0000000000000000000000000000000000000000..da03a314fbb9f766170a200b4f6765d1950bed53 --- /dev/null +++ b/data/2404.02686.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a5ed781c0e77332d09d2408eb8bf3951b5d6f7b4903afd05a681736769ac0bc1 +size 772010 diff --git a/data/2404.02742.png b/data/2404.02742.png new file mode 100644 index 0000000000000000000000000000000000000000..cf1c2346afb3de598df96247ba0f6c9e04bca5e5 --- /dev/null +++ b/data/2404.02742.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4865ce915f5d09f661500a942ee3f18d89d85c80702266de4017870edd90fa28 +size 997114 diff --git a/data/2404.02755.png b/data/2404.02755.png new file mode 100644 index 0000000000000000000000000000000000000000..ef97d2991dde89792d7328691e1c9a6a11dca2d6 --- /dev/null +++ b/data/2404.02755.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:31b9bea25ed5346147231fddd07effca28f2a198d244ccf8f3305794d6d144d1 +size 775601 diff --git a/data/2404.02759.png b/data/2404.02759.png new file mode 100644 index 0000000000000000000000000000000000000000..bdb64c8440e1bf41576c98daa2716b379a7a2e61 --- /dev/null +++ b/data/2404.02759.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7699a6094fc3df099921c3a483ace8c48e626fcc5d0b428a3d5446e73aabfbae +size 794607 diff --git a/data/2404.02788.png b/data/2404.02788.png new file mode 100644 index 0000000000000000000000000000000000000000..fb6f6221e7250b35c13275c47a001720faddbd95 --- /dev/null +++ b/data/2404.02788.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:13c4e797f01670c26d7f557c6ed4c312cdebf9e1f4133436dc9e7ee79c1e3278 +size 1514475 diff --git a/data/2404.02790.png b/data/2404.02790.png new file mode 100644 index 0000000000000000000000000000000000000000..ff5ea9151a0d57d83efad0ffd5cbff2d017cb251 --- /dev/null +++ b/data/2404.02790.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8992986d9505e6bfbb92a73fdd2f2d48a741d298102fb0589b4c78377160fd91 +size 1060775 diff --git a/data/2404.02883.png b/data/2404.02883.png new file mode 100644 index 0000000000000000000000000000000000000000..4074ac09a07063a547cb6f7917e5d35f142f7c62 --- /dev/null +++ 
b/data/2404.02883.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:66e8fcd4ec8de3a88cd85f0aa5dfb02d24f13cd6c5d34eb31aa28c7ffd9e0c7e +size 726033 diff --git a/data/2404.02889.png b/data/2404.02889.png new file mode 100644 index 0000000000000000000000000000000000000000..5288ff2a01f3e5f9fa933a733e25cc4efbbce4f0 --- /dev/null +++ b/data/2404.02889.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61ac1badc5e5f4cfbc064355bc20bd38bb79b11fd1767ecf324c50e61dd73ae5 +size 870430 diff --git a/data/2404.02900.png b/data/2404.02900.png new file mode 100644 index 0000000000000000000000000000000000000000..4c5a5cd6f122ce1c9b89670b7f975d728955b1ed --- /dev/null +++ b/data/2404.02900.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:17a1706a27bcf32816f6a55271cc2bd7117918ad9b5845cc02b9b098b15f942a +size 802122 diff --git a/data/2404.03070.png b/data/2404.03070.png new file mode 100644 index 0000000000000000000000000000000000000000..24ef72818aebfc21ef9c778720a2f5c1e77aff6c --- /dev/null +++ b/data/2404.03070.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d21e4aa1fb69626380c5bae0a0a3e47f44f4770e641cd3ce5241c6881344fc60 +size 950504 diff --git a/data/2404.03138.png b/data/2404.03138.png new file mode 100644 index 0000000000000000000000000000000000000000..b4f4dc109ae6001fe8a0c12e26849c93f9fe0469 --- /dev/null +++ b/data/2404.03138.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ef472a7f5890be488afd4fe68855dbe9729537b8465ae40da941c1f983341037 +size 923859 diff --git a/data/2404.03159.png b/data/2404.03159.png new file mode 100644 index 0000000000000000000000000000000000000000..3dba2722df96a6a7ddf30d26602d0e6be733efe7 --- /dev/null +++ b/data/2404.03159.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:67b9c7939a592ee5e1647cc6e8ae5d62bde2b4a0b8b0d5275ec21ec99b52f9e9 +size 809635 diff --git a/data/2404.03181v1.png b/data/2404.03181v1.png new file mode 100644 index 0000000000000000000000000000000000000000..381c6256005a6d44d35ea47155b2e656a4375c4d --- /dev/null +++ b/data/2404.03181v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7300ce5b51deed3da81e7c5b0e70d0a3c2b58d0594e0a2424213a415fa368dba +size 755130 diff --git a/data/2404.03183.png b/data/2404.03183.png new file mode 100644 index 0000000000000000000000000000000000000000..62469a6f4965f5e80acdef496c6f7bf598944d43 --- /dev/null +++ b/data/2404.03183.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:390e493f3658a0aec331a7a00be4e9cb49e4bd29884039e459c81d31ed259329 +size 815916 diff --git a/data/2404.03242.png b/data/2404.03242.png new file mode 100644 index 0000000000000000000000000000000000000000..1369b76af03055f7afb2742dd295428e8c3c85e7 --- /dev/null +++ b/data/2404.03242.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:13ee8007e654743ee337db60a096f8232bf44d3492167be63264b65e7b27f166 +size 972436 diff --git a/data/2404.03296.png b/data/2404.03296.png new file mode 100644 index 0000000000000000000000000000000000000000..8323e830c81102d694456735d836bcd96a0a38ab --- /dev/null +++ b/data/2404.03296.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c0692049d00a0b51cdcab1c43760893906b2b8c0e858765dd59a6ad22b3e5e8 +size 770862 diff --git a/data/2404.03398.png b/data/2404.03398.png new file mode 100644 index 0000000000000000000000000000000000000000..e39023db99b9a8507da46edc12cb6d2f916cbd9e --- /dev/null 
+++ b/data/2404.03398.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:db718368d2cb63792a14da79c649a5d83ae127f7299dd822ba13835e171a5a7e +size 763672 diff --git a/data/2404.03477.png b/data/2404.03477.png new file mode 100644 index 0000000000000000000000000000000000000000..2dab93b4804aeec217131ffaaaa13c9a4b93e364 --- /dev/null +++ b/data/2404.03477.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3cc2d69e368622549e06578041bfabfe7292532edc495f2c5e5893fca7860b75 +size 949080 diff --git a/data/2404.03518.png b/data/2404.03518.png new file mode 100644 index 0000000000000000000000000000000000000000..81710b0b5bf59c10c6d4752f0e51a29630ad6862 --- /dev/null +++ b/data/2404.03518.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:904a6474d05176f7179d603b896875a3679e31d3db4b1efd6cbdf2909ca589ea +size 751732 diff --git a/data/2404.03566v1.png b/data/2404.03566v1.png new file mode 100644 index 0000000000000000000000000000000000000000..8f2552037610331c4b687bcd053cebc791374531 --- /dev/null +++ b/data/2404.03566v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b5b84699f015ae61ceac5f59caacb0e503e326af5c6c23a1cf849634d53cd84a +size 1021476 diff --git a/data/2404.03635.png b/data/2404.03635.png new file mode 100644 index 0000000000000000000000000000000000000000..6c5ccaa27ee68aa05e8063572f5e53b34798e50d --- /dev/null +++ b/data/2404.03635.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f17c4294154e5551805d55443930dac6da903d2b9b3990f17c4a212831e66186 +size 940451 diff --git a/data/2404.03645.png b/data/2404.03645.png new file mode 100644 index 0000000000000000000000000000000000000000..a87d93ee72c475ea0497dc9fa62536106748e869 --- /dev/null +++ b/data/2404.03645.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:71558c26dc7b98eb4de671baf00f1b9cb19f1880e4148ba4e109a3d50ea28cb7 +size 1113972 diff --git a/data/2404.03650v1.png b/data/2404.03650v1.png new file mode 100644 index 0000000000000000000000000000000000000000..a0300bd295ca7754bb6c1229a783c461c5770164 --- /dev/null +++ b/data/2404.03650v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c3f8ad150aa22cdccceebf6d15d2c17098199d65a41e6cf9b49d8daae98be871 +size 841977 diff --git a/data/2404.03652.png b/data/2404.03652.png new file mode 100644 index 0000000000000000000000000000000000000000..74be56530d265115309f5e572165e545021302a0 --- /dev/null +++ b/data/2404.03652.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:edb794b473ff6563b500c9fb3e7fee41b70c812832d108a88f505ee7a4368be4 +size 924726 diff --git a/data/2404.03656.png b/data/2404.03656.png new file mode 100644 index 0000000000000000000000000000000000000000..9c8d02344f52472cf39b49cdd6528102c12ea825 --- /dev/null +++ b/data/2404.03656.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3048a6b352ea20a64247c3d5b177470b7e0284928812a3b48658188c7cd6ca61 +size 1126177 diff --git a/data/2404.03658.png b/data/2404.03658.png new file mode 100644 index 0000000000000000000000000000000000000000..8923ff384ffd96720e29cc7ee2c4f89c349294cf --- /dev/null +++ b/data/2404.03658.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7857cfa589f928a9ada38b7996f749d707d8a207e86c234987ca58e32acffaa4 +size 1001044 diff --git a/data/2404.03778.png b/data/2404.03778.png new file mode 100644 index 0000000000000000000000000000000000000000..5bb46b9e9b540367f327311878ccc159798dafec 
--- /dev/null +++ b/data/2404.03778.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:331e61c6d46c5a02e48ca79b59af3f75321dbd5322a143cd29ebac26c3d517df +size 729346 diff --git a/data/2404.03789.png b/data/2404.03789.png new file mode 100644 index 0000000000000000000000000000000000000000..13b72a726df2e1c4afaf2ea31e796ad8fe0bc2b5 --- /dev/null +++ b/data/2404.03789.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:55244ab2451855dde4b31a300e6c513c4f4b800dca5da5c8f1e6b34f8030eca0 +size 773035 diff --git a/data/2404.03831.png b/data/2404.03831.png new file mode 100644 index 0000000000000000000000000000000000000000..c0c912c3d9ce36e0d253fe90d08998ff3c7a17a9 --- /dev/null +++ b/data/2404.03831.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f0ef6b83c27f017b1751cfc520e5cb7249d6eeda0592372fd103b1f54ac26704 +size 736731 diff --git a/data/2404.03913.png b/data/2404.03913.png new file mode 100644 index 0000000000000000000000000000000000000000..611dea06b0217f89962969f55d5bff9afe7eeb84 --- /dev/null +++ b/data/2404.03913.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7baf0b7ca4322f9babd34e89e46b04b2dd1f5f7b1dabca3ffc0f9de3297f5e66 +size 1323998 diff --git a/data/2404.03924.png b/data/2404.03924.png new file mode 100644 index 0000000000000000000000000000000000000000..30ebb827d15d221ad73244e3e128bd5169cbe17c --- /dev/null +++ b/data/2404.03924.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c46c2e7f25aabbbff355513030251a9fa68f7a11188b0be14cde0b69b1ee2e7d +size 811672 diff --git a/data/2404.03925.png b/data/2404.03925.png new file mode 100644 index 0000000000000000000000000000000000000000..dd5ae70d38e56bc40b050556ae32a5572b4c8f0e --- /dev/null +++ b/data/2404.03925.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a2204466a69753e72639ff0a420c27ff263125e5cf42ac1ad3ca7d3444e0b036 +size 718759 diff --git a/data/2404.03999.png b/data/2404.03999.png new file mode 100644 index 0000000000000000000000000000000000000000..740b5b7cc830f7612418887ff86f312db76b5351 --- /dev/null +++ b/data/2404.03999.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:30727316b591cceec605a63e6954a48ff7e8c71d5fbc68fb71080f7daca01def +size 635389 diff --git a/data/2404.04050.png b/data/2404.04050.png new file mode 100644 index 0000000000000000000000000000000000000000..a0a42b20a82fc4e3b4bb45eb5f46fed1aaebdb3a --- /dev/null +++ b/data/2404.04050.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a01ee8c36560078f7edf8b96a023a6a97f32d7c6359a55b9cb965dfe347d26a4 +size 668949 diff --git a/data/2404.04072.png b/data/2404.04072.png new file mode 100644 index 0000000000000000000000000000000000000000..cc1f5375317030b811f88c940ffa624e9d64b7d6 --- /dev/null +++ b/data/2404.04072.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b0edc90fdff28b3a4450258811fd4ffe1a018cb867d357b21ccf7ab3f9783a5a +size 662412 diff --git a/data/2404.04095.png b/data/2404.04095.png new file mode 100644 index 0000000000000000000000000000000000000000..6d26ae467c929bf5d448c45ec7d01b5ce22dd3a2 --- /dev/null +++ b/data/2404.04095.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5e3deafbfd6d66abe0d7026043a1a7c9b3aa7185a005631094131bd9c54084a4 +size 803977 diff --git a/data/2404.04104.png b/data/2404.04104.png new file mode 100644 index 0000000000000000000000000000000000000000..4258e9ff99c1223cc6973dc83919c8c552b119e3 --- 
/dev/null +++ b/data/2404.04104.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c81f029480604a67f639e2e10f548e3fb3733e36aeb8204e9d7d91c9f351e1bc +size 1067745 diff --git a/data/2404.04231.png b/data/2404.04231.png new file mode 100644 index 0000000000000000000000000000000000000000..d31d6249afa054a55dd64e4e208dc8ffdfd0ad0e --- /dev/null +++ b/data/2404.04231.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:da4c3b24f478188dfcdec8c21dc355ce89a726e9037424fa941bfb662dc51716 +size 785716 diff --git a/data/2404.04242.png b/data/2404.04242.png new file mode 100644 index 0000000000000000000000000000000000000000..865b07a2799c97f440043a7ac1609c88599302ad --- /dev/null +++ b/data/2404.04242.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:183aefd00e6ff5cfa4cd8eda16f5d5fb5b786e7ff62faf55795a620153fe7873 +size 851978 diff --git a/data/2404.04318.png b/data/2404.04318.png new file mode 100644 index 0000000000000000000000000000000000000000..be3fc0e9cc176effe211da7b5a839d050cee5b13 --- /dev/null +++ b/data/2404.04318.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4e6220b268d25cb718e40fc310f3b35222b6de68b11e2d4749e1d4908f5dcc66 +size 970965 diff --git a/data/2404.04319.png b/data/2404.04319.png new file mode 100644 index 0000000000000000000000000000000000000000..3eedbfa407d15879a8f24727e13fd414584a48f4 --- /dev/null +++ b/data/2404.04319.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4e41a20f1459f5bcf8b0e8d023ce5c6faf15265bf55ecc9dea561806c5b7d7bb +size 1514832 diff --git a/data/2404.04346.png b/data/2404.04346.png new file mode 100644 index 0000000000000000000000000000000000000000..8d2eff052222b33f7557e50b71ba3a28e9469a4c --- /dev/null +++ b/data/2404.04346.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:72496461e6ad61684e572b3a9dd4661186a42e7f46f63c50b56a8d9f4280d242 +size 1066146 diff --git a/data/2404.04430.png b/data/2404.04430.png new file mode 100644 index 0000000000000000000000000000000000000000..d3493b18188226b2a0baf227a1b92cbf0cba03de --- /dev/null +++ b/data/2404.04430.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:979c365d42ee788fce92e700e8036475119e51af4af06082fe9ef469ecc1bdb9 +size 783824 diff --git a/data/2404.04458.png b/data/2404.04458.png new file mode 100644 index 0000000000000000000000000000000000000000..df861f7ee98ce6ca0286f701c82b60bd9b0c78c5 --- /dev/null +++ b/data/2404.04458.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c07b29d279659b4d50fd7099976abe47d5ab51d53f45317c9d97d1fbeadefdcf +size 1083721 diff --git a/data/2404.04557.png b/data/2404.04557.png new file mode 100644 index 0000000000000000000000000000000000000000..86f754ea58f4fca372ba03ac61a34d0ff24fc3bd --- /dev/null +++ b/data/2404.04557.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6abebe5ab852466067eed8e2812c790ce911f27a8bc869c33a32cf84f4f74d6b +size 1171061 diff --git a/data/2404.04562.png b/data/2404.04562.png new file mode 100644 index 0000000000000000000000000000000000000000..7e6d3fc08ebb8d95b18f1d625c9d905fbc39fec3 --- /dev/null +++ b/data/2404.04562.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5c65faebeb6027e160425b259960dbb6ea8d245b2e772b7ff5469787443deb50 +size 987663 diff --git a/data/2404.04565.png b/data/2404.04565.png new file mode 100644 index 0000000000000000000000000000000000000000..a2885b9d8a5d386b0518bf87beb3ac4d982b1761 --- 
/dev/null +++ b/data/2404.04565.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7db9afc713623cc56c4dd2f23c424e74a09a1a5bd1c97a7af6de9b2819c9c443 +size 1121352 diff --git a/data/2404.04624.png b/data/2404.04624.png new file mode 100644 index 0000000000000000000000000000000000000000..93f4c12deba7d702c9c164909f906d4d7cb70f0d --- /dev/null +++ b/data/2404.04624.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e21e0824e069884decb9e8cb618cd00f274366954f866d9b9102656e5b2d5a46 +size 731883 diff --git a/data/2404.04627.png b/data/2404.04627.png new file mode 100644 index 0000000000000000000000000000000000000000..7018afb666bdd071e3c416e7d637c7805d12a7da --- /dev/null +++ b/data/2404.04627.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b83e7da9f46859c70b183376514106dd7418c6e3160cb284264368eb768a094a +size 766845 diff --git a/data/2404.04647.png b/data/2404.04647.png new file mode 100644 index 0000000000000000000000000000000000000000..dc87e3c20ee13ad8b3c47c790d5eebfa9bc755a7 --- /dev/null +++ b/data/2404.04647.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eed5eac8cf73aa8fb6ee01f07f0aeb31f4d26322acd7d60189930fdf0042c437 +size 886225 diff --git a/data/2404.04650.png b/data/2404.04650.png new file mode 100644 index 0000000000000000000000000000000000000000..ba6ce5bf9ec512231b4ce5d1341b26bd177eb30a --- /dev/null +++ b/data/2404.04650.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a42ebe7e5d1d65ffe0e90f811d3a87c4585e9cb81c631946d5a8e8b609018391 +size 1107152 diff --git a/data/2404.04785.png b/data/2404.04785.png new file mode 100644 index 0000000000000000000000000000000000000000..881e0c11552c6c9d09fc3970f0afa0f72b48234d --- /dev/null +++ b/data/2404.04785.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3f748af9cead7ee0fdac0e8f52a73122e09360fd8dfb24d424f46f710f74ce97 +size 747558 diff --git a/data/2404.04804.png b/data/2404.04804.png new file mode 100644 index 0000000000000000000000000000000000000000..d97ce597516f5ccc45e22ddff70dcdcc7dabc853 --- /dev/null +++ b/data/2404.04804.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fdac3ecfa75579890db4200bfdacefc47a5b6b280e1bf14b686212d32c341d8d +size 932994 diff --git a/data/2404.04808.png b/data/2404.04808.png new file mode 100644 index 0000000000000000000000000000000000000000..d138d3ee73fd08b9bec56f4d0f89fb03e3c41ee5 --- /dev/null +++ b/data/2404.04808.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:94b92fc00b6f516351e2f0729e8c9e93167950a486fc394102aa6ae54dc0f29d +size 710059 diff --git a/data/2404.04819.png b/data/2404.04819.png new file mode 100644 index 0000000000000000000000000000000000000000..3b85599126cd8d817934794d61ed6bb7254f53a4 --- /dev/null +++ b/data/2404.04819.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e2ea081f1f45e706a15dee6c08dab69a6f83113687e013afb1c7b1080516c1b1 +size 903978 diff --git a/data/2404.04823.png b/data/2404.04823.png new file mode 100644 index 0000000000000000000000000000000000000000..10ab4386dc19e2bc200ebf8e876202fc90c7d6db --- /dev/null +++ b/data/2404.04823.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:45671b309ca1e37e7f083985d65e7124ca01f6b17c26a9c120acd53713ece745 +size 990214 diff --git a/data/2404.04848.png b/data/2404.04848.png new file mode 100644 index 0000000000000000000000000000000000000000..975a563c531a42fef37687b52da65e6d3778d676 --- 
/dev/null +++ b/data/2404.04848.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b459ae6c7b325d12789037c8249c11c5f81895de559d0a62c26c9e8c0f48c80b +size 736499 diff --git a/data/2404.04876.png b/data/2404.04876.png new file mode 100644 index 0000000000000000000000000000000000000000..89ef276eae3259668719322eb0164ad75a02d32a --- /dev/null +++ b/data/2404.04876.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c25c360de93cd8e25d883499cf4bb646cd7cfb268a3410d8c188d412e0d3846a +size 962051 diff --git a/data/2404.04878.png b/data/2404.04878.png new file mode 100644 index 0000000000000000000000000000000000000000..7cf94f693e67c9b2f085456dea0c4bca8bb8000d --- /dev/null +++ b/data/2404.04878.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:21d510afdd2c15b99026bd9019b61b74894c008afb437de52170caa3d44888da +size 727619 diff --git a/data/2404.04890.png b/data/2404.04890.png new file mode 100644 index 0000000000000000000000000000000000000000..f1bb4db8d4661b21267fe248e614c621439c5cc1 --- /dev/null +++ b/data/2404.04890.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:40d4f009857368daec5496e7af93ee29b8aece4bee4df8b9bb349c6021fea7eb +size 805548 diff --git a/data/2404.04936.png b/data/2404.04936.png new file mode 100644 index 0000000000000000000000000000000000000000..a87d7aca0dfa38e9135c48a4b3bb0bfb1ef11b66 --- /dev/null +++ b/data/2404.04936.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:55809cdb43c74b86a32a3236036e151dbd4924d8d0fb985143213307cddba6aa +size 747075 diff --git a/data/2404.04956.png b/data/2404.04956.png new file mode 100644 index 0000000000000000000000000000000000000000..203f21cee238a632ad02c271a31b53504a796d08 --- /dev/null +++ b/data/2404.04956.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b63390addfe4519ab68e2cb60fb7560ef61426dba63b2708b80407c003cbdd31 +size 960920 diff --git a/data/2404.04960.png b/data/2404.04960.png new file mode 100644 index 0000000000000000000000000000000000000000..84d22d87ded90336055c97ffd33283f8571cea7e --- /dev/null +++ b/data/2404.04960.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:99f2e8bd1c251f5ea922ea2a792a469315c499ad6008b52efe4c06710ea1efbd +size 805591 diff --git a/data/2404.04996.png b/data/2404.04996.png new file mode 100644 index 0000000000000000000000000000000000000000..380336ab60561a602fc803c5d4b20a2748efa8de --- /dev/null +++ b/data/2404.04996.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:08150a332e09cbbc50006c56222aef2089dce7f42e6cb5877a5cd49c0e17d2ea +size 780078 diff --git a/data/2404.05001.png b/data/2404.05001.png new file mode 100644 index 0000000000000000000000000000000000000000..16bf2c16dfb72b57d182669733c4ece8a909d9bc --- /dev/null +++ b/data/2404.05001.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e8040f97bfab3719df197f3caf55893f510904dab2663300d014800407288a54 +size 1001700 diff --git a/data/2404.05016.png b/data/2404.05016.png new file mode 100644 index 0000000000000000000000000000000000000000..804264a535e9339f312be760dfd206b21fc6d845 --- /dev/null +++ b/data/2404.05016.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:99a00cb47239d1770846e737244e80d1cb1f92a9f645fcefd7cad5045d8b3be3 +size 931730 diff --git a/data/2404.05063.png b/data/2404.05063.png new file mode 100644 index 0000000000000000000000000000000000000000..1ad413b1db505187475c6b179f1bfe13b4560516 --- 
/dev/null +++ b/data/2404.05063.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ab867e1ac07cd60c6f76a94192a849bbffb6c5575c20526d52ab8693d1d00b37 +size 826533 diff --git a/data/2404.05136.png b/data/2404.05136.png new file mode 100644 index 0000000000000000000000000000000000000000..6915505ecd981b51be2381ac6953c8bd449c24f2 --- /dev/null +++ b/data/2404.05136.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a72e0408fcadf7cd2d86628a953ca99c90d394fcfc142aebc51f94daf1c3495d +size 927453 diff --git a/data/2404.05145.png b/data/2404.05145.png new file mode 100644 index 0000000000000000000000000000000000000000..00f4f6aa2e17a638f41af5f849f33eaeedaf1373 --- /dev/null +++ b/data/2404.05145.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:799418abf2a546f7c91fa2ebd2bcff7559a04d4521c33c48ef59e09eb6e176ad +size 735991 diff --git a/data/2404.05206.png b/data/2404.05206.png new file mode 100644 index 0000000000000000000000000000000000000000..2d601e2efd3c300e345f4f0d5e48c5080ef736ca --- /dev/null +++ b/data/2404.05206.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2ea36c494152185df3c23b09b711070f6a8ca005adfd7bf76601f1d6c785e825 +size 1033346 diff --git a/data/2404.05207.png b/data/2404.05207.png new file mode 100644 index 0000000000000000000000000000000000000000..cb6713bff36cfdceca3728807561ab164c0c9729 --- /dev/null +++ b/data/2404.05207.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c4b945d9ceca4f5c1995a2f12de0fa769fb23418768a1755f48fbd718784b958 +size 728587 diff --git a/data/2404.05218v1.png b/data/2404.05218v1.png new file mode 100644 index 0000000000000000000000000000000000000000..358d6a6c75cb6652ea4c313e93e7e10117f7cabe --- /dev/null +++ b/data/2404.05218v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:493427c4c83c956656a8aa78f0405af9e1814e98a96f45894daccc225f7b8f9b +size 758629 diff --git a/data/2404.05225.png b/data/2404.05225.png new file mode 100644 index 0000000000000000000000000000000000000000..305c8da0714338b8574c5fc7677f2c60077e5d9e --- /dev/null +++ b/data/2404.05225.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3f2ecd19dda51565b978cf83a2277bed7eff0d0b058d4b14e29998c2be0cf32e +size 777971 diff --git a/data/2404.05231.png b/data/2404.05231.png new file mode 100644 index 0000000000000000000000000000000000000000..3487cc581ab2a4cc44f3e6339ca5bc77bdf84ed4 --- /dev/null +++ b/data/2404.05231.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:90fe36f4f1ff5a88ca197376a3ca222018b58ed20048c08412a75e2a5e6c47b2 +size 745287 diff --git a/data/2404.05384.png b/data/2404.05384.png new file mode 100644 index 0000000000000000000000000000000000000000..019bc85413955342ac9095523ba6aba9271f4507 --- /dev/null +++ b/data/2404.05384.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dec3152ebf72182cca9cc0b2812121042a6f5c511caf583af14f639b62bb2ed9 +size 938554 diff --git a/data/2404.05426.png b/data/2404.05426.png new file mode 100644 index 0000000000000000000000000000000000000000..e09d0f40e2be98549eaecfc6e1f9476b501f622e --- /dev/null +++ b/data/2404.05426.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cbdb60bdce3a228adb1f6dcc21f4e5649d733d7f43612013c4be7c60f1f70a08 +size 819223 diff --git a/data/2404.05490.png b/data/2404.05490.png new file mode 100644 index 0000000000000000000000000000000000000000..df7fca89bcb62a5bb30ab9a837fc9bd5e76a4c78 
--- /dev/null +++ b/data/2404.05490.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2ec65d9d5115a2f15936075034bd100f57e91e24c4716c59d3819b2b301b700f +size 772012 diff --git a/data/2404.05558.png b/data/2404.05558.png new file mode 100644 index 0000000000000000000000000000000000000000..27376c744a0da8ec48b31dd9d49b3506ab322986 --- /dev/null +++ b/data/2404.05558.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f27ac2f0130ee2669c7dd20e81ea640ccbf8915b922f8b4264392986f3ee6032 +size 830575 diff --git a/data/2404.05559.png b/data/2404.05559.png new file mode 100644 index 0000000000000000000000000000000000000000..312b82ff3d8194af44f8041b3265288f20c49be3 --- /dev/null +++ b/data/2404.05559.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9cdb5de4612ccbaf1e7aaf090407c6d8f76ac48c0c5a5141398fc05abf5f83d5 +size 834514 diff --git a/data/2404.05621.png b/data/2404.05621.png new file mode 100644 index 0000000000000000000000000000000000000000..e13a0e4641a65b5bffc7b3521d9425b5e60ce51a --- /dev/null +++ b/data/2404.05621.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e1822431d6db8a6b75593a6af5284e18512736fceca070b3440a1631de05bde5 +size 799434 diff --git a/data/2404.05626.png b/data/2404.05626.png new file mode 100644 index 0000000000000000000000000000000000000000..1c980874023daf431272681b63e591a06df3ea87 --- /dev/null +++ b/data/2404.05626.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d35818403dbcca52837dfd900779c3833213e593795fdbcd9ceb6adf24bd4bd +size 855023 diff --git a/data/2404.05657.png b/data/2404.05657.png new file mode 100644 index 0000000000000000000000000000000000000000..82bc92bc198f752fdafe0c151131d729110a2709 --- /dev/null +++ b/data/2404.05657.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a8f1f7e0349611919cbd98bb17ae93de225517481b36461c4a9e355bffd4d68 +size 604543 diff --git a/data/2404.05661.png b/data/2404.05661.png new file mode 100644 index 0000000000000000000000000000000000000000..101d8af6bb41647a83d611df9d2518328d55f480 --- /dev/null +++ b/data/2404.05661.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1d8728fa382290c30e3509ab8af64680abff9f310b00f083ec86edd5a0629269 +size 1012160 diff --git a/data/2404.05662.png b/data/2404.05662.png new file mode 100644 index 0000000000000000000000000000000000000000..7a6469a4af917278099eaac8ad9fea5e4c258ddf --- /dev/null +++ b/data/2404.05662.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dcae69047a8606e8d54f9badc2e45cf929b85a83959106bc9666bd99b158c97c +size 651589 diff --git a/data/2404.05675.png b/data/2404.05675.png new file mode 100644 index 0000000000000000000000000000000000000000..a004fa6b23eb26922239585b38591a3ae22149c8 --- /dev/null +++ b/data/2404.05675.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f76d692ae7975d50904c3229da35515875eeb891662a9d27a8fc217d13ac288 +size 752303 diff --git a/data/2404.05687.png b/data/2404.05687.png new file mode 100644 index 0000000000000000000000000000000000000000..710dbd431f31e395facd7813afc1be84965a7fce --- /dev/null +++ b/data/2404.05687.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:76425ad117b694248d686aa748949692e310fdef7550af84d78c658224a30121 +size 740307 diff --git a/data/2404.05726v2.png b/data/2404.05726v2.png new file mode 100644 index 0000000000000000000000000000000000000000..d0b9b558dc40549cdaa71915e09c7f16cd85143a 
--- /dev/null +++ b/data/2404.05726v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5dff463627ade9e05a9162988f51aacd3bbf6d306d3dc4e6055a0b56f16e3ee1 +size 740866 diff --git a/data/2404.06044.png b/data/2404.06044.png new file mode 100644 index 0000000000000000000000000000000000000000..787dd187c896efcedda4e015d301a11e4a05e2ee --- /dev/null +++ b/data/2404.06044.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7c8f2b8488cc9a8a110dc066a4d35a3303d5dddfbef786b6444c9b233881b6b +size 736406 diff --git a/data/2404.06065.png b/data/2404.06065.png new file mode 100644 index 0000000000000000000000000000000000000000..4b6e711c3c59a1125c32c9beec26eb79862a591e --- /dev/null +++ b/data/2404.06065.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9de437a69e71cae2bfa2caf47002ad968bac165a333bbd169b8c2f7dc9e80f2d +size 691047 diff --git a/data/2404.06194.png b/data/2404.06194.png new file mode 100644 index 0000000000000000000000000000000000000000..f307407928e99d088739445288d430738f34a08c --- /dev/null +++ b/data/2404.06194.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:605da6e220039a2e2136f8c59e0d7d05fce79d566fe7a5ac5e8ebd183dad8e73 +size 677333 diff --git a/data/2404.06244.png b/data/2404.06244.png new file mode 100644 index 0000000000000000000000000000000000000000..e08c2f033228e0e811e6f47c7326abe2aef2b085 --- /dev/null +++ b/data/2404.06244.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:05c11fcc8b64661c062f897ee9ee788a56f618f912603aae1585dd2b75b34a80 +size 787608 diff --git a/data/2404.06270.png b/data/2404.06270.png new file mode 100644 index 0000000000000000000000000000000000000000..47a76c132ca0d8040ed67b8cb5ed1a623c8e8c7d --- /dev/null +++ b/data/2404.06270.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3aac025faac393829b8c10e1a38b5ef1e37efc0e6488ea3e1e5b673bf9eb5185 +size 764797 diff --git a/data/2404.06337.png b/data/2404.06337.png new file mode 100644 index 0000000000000000000000000000000000000000..54d1b5a589d4a3a94957f4c65577289817d49c73 --- /dev/null +++ b/data/2404.06337.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d29703c19726e49218866e347424d93236816f899fc0d7abf711f92ad42be2e6 +size 969134 diff --git a/data/2404.06350.png b/data/2404.06350.png new file mode 100644 index 0000000000000000000000000000000000000000..d5c059ff37d4233ef35e1a88726b810fa96e15be --- /dev/null +++ b/data/2404.06350.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:637bfd66c494725f46b928de3e9401e2c74fe2097ff813e59ad74715522ec8eb +size 744742 diff --git a/data/2404.06351.png b/data/2404.06351.png new file mode 100644 index 0000000000000000000000000000000000000000..2166bdf5a16acb176cd2e0c66c630811bb49c6e1 --- /dev/null +++ b/data/2404.06351.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ddb991a2717c7390fb2bfd99d004bd1c133c0fa6f71691cd9b758685c0c2d4bf +size 759314 diff --git a/data/2404.06443.png b/data/2404.06443.png new file mode 100644 index 0000000000000000000000000000000000000000..106e3fc06bf30f9ad984b68d4f4acd1010e01938 --- /dev/null +++ b/data/2404.06443.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aac7134f89fafe32b7633bf0d14b4150c25f71e4f94548e9f641169c559a21eb +size 854256 diff --git a/data/2404.06511.png b/data/2404.06511.png new file mode 100644 index 0000000000000000000000000000000000000000..67b32256d9e97efe622ca63dc88bd44902932b2c 
--- /dev/null +++ b/data/2404.06511.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:72354cf84d4cad4ec3c1a309bef47890be4ec22f8498e54bb17efd5345eead41 +size 749061 diff --git a/data/2404.06542.png b/data/2404.06542.png new file mode 100644 index 0000000000000000000000000000000000000000..9cd0b2bb9a4a770c6fd94b99af53263f8a53e440 --- /dev/null +++ b/data/2404.06542.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd690765f54cc07fdebbba7d16424a25495a741267d0fa1941185b812e15ca6e +size 1044416 diff --git a/data/2404.06609.png b/data/2404.06609.png new file mode 100644 index 0000000000000000000000000000000000000000..91fa770c8710b778bf6090236278965f97e04d3e --- /dev/null +++ b/data/2404.06609.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:edb88305d8b1003d27cb158cbe5376a9c1df7e98b6e371e713ceb49f50b36230 +size 1485736 diff --git a/data/2404.06663.png b/data/2404.06663.png new file mode 100644 index 0000000000000000000000000000000000000000..cad93a6fd37a03ed059ad267d03130e9dde27514 --- /dev/null +++ b/data/2404.06663.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:292abb976dc39990c3bb02e8e28b14ad97e360ad97899f68bb7b0e8b45752d75 +size 825118 diff --git a/data/2404.06692.png b/data/2404.06692.png new file mode 100644 index 0000000000000000000000000000000000000000..ec714c803e6a7ef1289ec252776c94b37526fc88 --- /dev/null +++ b/data/2404.06692.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9c1a5b3583a23f0ae4be937645706ff215e924ba285ad76346d0229ba92f97fe +size 1300072 diff --git a/data/2404.06842.png b/data/2404.06842.png new file mode 100644 index 0000000000000000000000000000000000000000..5d8caaeae147e0bb0fb9ec844f0f66b6fe7a7265 --- /dev/null +++ b/data/2404.06842.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:94568b0da243026b0fa627c4815f49bdca6ce2e74e045ea0c3779c69b07d1eb0 +size 1009068 diff --git a/data/2404.06851.png b/data/2404.06851.png new file mode 100644 index 0000000000000000000000000000000000000000..729c43e9928c181a2ac0b00044df45943801dfca --- /dev/null +++ b/data/2404.06851.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9c86cb38b095ea0965b875124aa2334dd1a52817adee68cbfb8a3712668b8c79 +size 1066981 diff --git a/data/2404.06913.png b/data/2404.06913.png new file mode 100644 index 0000000000000000000000000000000000000000..7294dfc2f4f66cdb4e4bb4848283a6affcfa5620 --- /dev/null +++ b/data/2404.06913.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:67c2dac0184133d2b3b08c8e8854fb6f19a8e5ab5d1942aeac28182646ecb691 +size 840036 diff --git a/data/2404.06918.png b/data/2404.06918.png new file mode 100644 index 0000000000000000000000000000000000000000..313a616db377c9b7c7e29dc5ccd6fc21b173c843 --- /dev/null +++ b/data/2404.06918.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f903553af195c5dcd68a8b417aeaa4f71f9e70a44cdf771025c51d1fba89c91 +size 792174 diff --git a/data/2404.07155.png b/data/2404.07155.png new file mode 100644 index 0000000000000000000000000000000000000000..7d1864c42f943eb7e79b6fb63a1e897075154ec1 --- /dev/null +++ b/data/2404.07155.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:25be1ecc9826996d04a06c3fb14c2327f966bb51e3cb2f2f425df09f370fe621 +size 878052 diff --git a/data/2404.07177v1.png b/data/2404.07177v1.png new file mode 100644 index 
0000000000000000000000000000000000000000..a4442b09cfc92d413b34a0c7142fe8860d6f8a99 --- /dev/null +++ b/data/2404.07177v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e0f7fc5d6f41e6216433470222c9a455add6a1b54269cf63fb16f1f508f7fdb3 +size 667575 diff --git a/data/2404.07178.png b/data/2404.07178.png new file mode 100644 index 0000000000000000000000000000000000000000..5b3dd74440d150c087187dd936bee1e0881790c7 --- /dev/null +++ b/data/2404.07178.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ea7b176cf087e556c189110a61499547671c57475f849a725fd734a9c05be6d3 +size 1428496 diff --git a/data/2404.07292.png b/data/2404.07292.png new file mode 100644 index 0000000000000000000000000000000000000000..3f31dca649be76a155dffa2be69d8bdc7ad32f9b --- /dev/null +++ b/data/2404.07292.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4eca2880fbc82427e475bb72d20a76c2b64ce5e07c13842ae655f89715c778f6 +size 794794 diff --git a/data/2404.07445.png b/data/2404.07445.png new file mode 100644 index 0000000000000000000000000000000000000000..3e1449671ca26f7087492ac810cdb906a5f7aa8a --- /dev/null +++ b/data/2404.07445.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:289cad87ec25782afa66df5841945792711c3f42aba0d0c6b5d25acc1ce4c4cc +size 840655 diff --git a/data/2404.07448.png b/data/2404.07448.png new file mode 100644 index 0000000000000000000000000000000000000000..8b240b88165940559d9d3f48737417da3193bba2 --- /dev/null +++ b/data/2404.07448.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b33bc5c8e961d903295d357a0fade80d18bca5c81274d5d080d7833583dd9e4 +size 826249 diff --git a/data/2404.07449.png b/data/2404.07449.png new file mode 100644 index 0000000000000000000000000000000000000000..4598b7637fe10d1e31bb20ee630796fc854b8b03 --- /dev/null +++ b/data/2404.07449.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b67285255426ac91a7f99bde52e8aac9cf6ef29a959da8989ed01349aaece110 +size 1315161 diff --git a/data/2404.07474.png b/data/2404.07474.png new file mode 100644 index 0000000000000000000000000000000000000000..c2e45346ff48edd0179a5750dfa6a5b646c758c7 --- /dev/null +++ b/data/2404.07474.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:891ea8a16a9d3b5ef19aa51c7ac3538faabffba81ec4e965d4a59cc7d7ea96f5 +size 923160 diff --git a/data/2404.07487.png b/data/2404.07487.png new file mode 100644 index 0000000000000000000000000000000000000000..29ae0bad6843de673c9df1e2367da11858a87763 --- /dev/null +++ b/data/2404.07487.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:417e6f378c82bbe0e96e5d99ef605a6ec294b5d8de44e7f5ff107ce6e0fc971f +size 735456 diff --git a/data/2404.07543.png b/data/2404.07543.png new file mode 100644 index 0000000000000000000000000000000000000000..d64bc27c152ea4c9cb7c92904669942cfa325623 --- /dev/null +++ b/data/2404.07543.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cca05953fd9f23a22be773292899f9f5f937ba42945b40e6a4f57c1fe3c370e1 +size 979110 diff --git a/data/2404.07603.png b/data/2404.07603.png new file mode 100644 index 0000000000000000000000000000000000000000..65ec0b43c4a29a530f2c37f70d0828f1e62fbd5c --- /dev/null +++ b/data/2404.07603.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:133623607f9e9b8b8c980b79facac464f90e0536300c30f8e393ec936129f91d +size 762945 diff --git a/data/2404.07610.png b/data/2404.07610.png new file mode 100644 index 
0000000000000000000000000000000000000000..62b28ae38077d0c9854fecd929b53128a5969be5 --- /dev/null +++ b/data/2404.07610.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c8b2ad0d7511b73c08eb388300a464e31b7fa257e0e777367b8e64fc784b8165 +size 819743 diff --git a/data/2404.07713.png b/data/2404.07713.png new file mode 100644 index 0000000000000000000000000000000000000000..db10b93f17aba5ec34717414e2758c2530e546b1 --- /dev/null +++ b/data/2404.07713.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:448166418c31707109d71c73379fd5bb2cb56e26c4117ffc51a94a7e9d61eac3 +size 797614 diff --git a/data/2404.07850.png b/data/2404.07850.png new file mode 100644 index 0000000000000000000000000000000000000000..dba64d5e5be19a8e13900f510ac980aca736464c --- /dev/null +++ b/data/2404.07850.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:278ed897adffd2a8c2a3b1e03a23b8de297ce2f36c95a6ae71a10b64768596c9 +size 1597750 diff --git a/data/2404.07933.png b/data/2404.07933.png new file mode 100644 index 0000000000000000000000000000000000000000..efb5fe919c91c1681ff984508b395ed70e7fc1d7 --- /dev/null +++ b/data/2404.07933.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a4f43625751d44285d478808fa66e6f9bdc9ad5ad5d12a54bfc40076ea0e5ed +size 887301 diff --git a/data/2404.07949.png b/data/2404.07949.png new file mode 100644 index 0000000000000000000000000000000000000000..5b33f90783319692b04a172442d29fa50ce552ad --- /dev/null +++ b/data/2404.07949.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:44ecd221bb806c0d63758759e33173deccc1ca31f22b5b3a4b736fbfb4b630bc +size 1481307 diff --git a/data/2404.07985v1.png b/data/2404.07985v1.png new file mode 100644 index 0000000000000000000000000000000000000000..f22d684679b5c692603d9235056695cfc5b5bf7e --- /dev/null +++ b/data/2404.07985v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2685462eafda4707e34a2d69a1d6004b6ea8a88e346b1562188df9a5ed84c3e2 +size 930378 diff --git a/data/2404.07990v1.png b/data/2404.07990v1.png new file mode 100644 index 0000000000000000000000000000000000000000..249b418298ddf986446ce287de56d5bf49b29d07 --- /dev/null +++ b/data/2404.07990v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ef9b00e485348ce7f2b10d2763160491e5143b5ef0c119d6fcb0d72a9d030b48 +size 766342 diff --git a/data/2404.07991.png b/data/2404.07991.png new file mode 100644 index 0000000000000000000000000000000000000000..d153b9be1151104eddfe409932bdb6889d0f3361 --- /dev/null +++ b/data/2404.07991.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f11004e0a7be1d0339c673a6bf000ac3d2d61ef87f91c61fa7d44806f1c3c3e7 +size 775741 diff --git a/data/2404.07992v1.png b/data/2404.07992v1.png new file mode 100644 index 0000000000000000000000000000000000000000..a187cac90427137d07b3b58f0a5c577c1d837e9b --- /dev/null +++ b/data/2404.07992v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ef956dd199aa0d09d392f4075bdb1c0cdf47eae8d6a32abdedc343ad4f071b4c +size 1359140 diff --git a/data/2404.08027.png b/data/2404.08027.png new file mode 100644 index 0000000000000000000000000000000000000000..bbb3c6c4f95d83438cf9b26a246c7bcb818ec35e --- /dev/null +++ b/data/2404.08027.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:97ee04d0d1ce094987579c09d54f6249b8ad76b2accdc28c7a408ed1931cc342 +size 803734 diff --git a/data/2404.08079.png b/data/2404.08079.png new file 
mode 100644 index 0000000000000000000000000000000000000000..8f1a9c59d0656eb9a080474547ac9662dc7d288c --- /dev/null +++ b/data/2404.08079.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca4874a7ba0e9537e789e1b2acebecbe9385e8d7807a4eecf3ad6e4ee82547d1 +size 885387 diff --git a/data/2404.08392.png b/data/2404.08392.png new file mode 100644 index 0000000000000000000000000000000000000000..961ff4b7a3edf05d05d04d0b8205d7d924d47d90 --- /dev/null +++ b/data/2404.08392.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4978e6ad417d59b5f98abf7fda046b70a19343b61abd2760ca458ba4439d1ea5 +size 521111 diff --git a/data/2404.08450.png b/data/2404.08450.png new file mode 100644 index 0000000000000000000000000000000000000000..8a48540401e165d6bfcd8c5c86769e3b227d3fbc --- /dev/null +++ b/data/2404.08450.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:31979d717831b1b1080f4f6cdc0d5e28932bbac75ef1753ffb66b30f49e4c09c +size 897290 diff --git a/data/2404.08514v2.png b/data/2404.08514v2.png new file mode 100644 index 0000000000000000000000000000000000000000..243ed0493d9edc8ff0bda9323d495d09e9aa1cb0 --- /dev/null +++ b/data/2404.08514v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c16b8368faa992cd684b1691067d19c28d96c7233a84a261f36f4a864000a16a +size 971110 diff --git a/data/2404.08531.png b/data/2404.08531.png new file mode 100644 index 0000000000000000000000000000000000000000..67df2fa7d147dfb8a26394dc5b2e754dd9324c0a --- /dev/null +++ b/data/2404.08531.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d172dc9351602f99178b20e0bd0259fcbe5e4c19986cf15750635f98835d65d1 +size 789716 diff --git a/data/2404.08540.png b/data/2404.08540.png new file mode 100644 index 0000000000000000000000000000000000000000..88f65315b5fa094c613a1dc2f6be45f6e3d66e59 --- /dev/null +++ b/data/2404.08540.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1102ac0df7dcfc3180ef11e0e87414a43efaea038450d26e6333a53f7ac80eed +size 796651 diff --git a/data/2404.08636.png b/data/2404.08636.png new file mode 100644 index 0000000000000000000000000000000000000000..d098eef4f099ef985c740f35c407e916d2d0b569 --- /dev/null +++ b/data/2404.08636.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f599f8ea0ac68e0e25cd0adcddb5703791a77429c211cfe72dc15df13d0b695b +size 917076 diff --git a/data/2404.08640.png b/data/2404.08640.png new file mode 100644 index 0000000000000000000000000000000000000000..fa1e7f50d89edb66f02985071042cd12a9fd120b --- /dev/null +++ b/data/2404.08640.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3b60197674c295afe4e56a93908cc169ae47d1d3f16895977d4d90546b4e6605 +size 1203334 diff --git a/data/2404.08921.png b/data/2404.08921.png new file mode 100644 index 0000000000000000000000000000000000000000..770b6afe2975266aa13c0d1ac9c2620a0e583813 --- /dev/null +++ b/data/2404.08921.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0cf0e126e34abd07cc13b12b43ffa46b4befe18f0688aa7c86c25973e90e9cf6 +size 1276374 diff --git a/data/2404.08951.png b/data/2404.08951.png new file mode 100644 index 0000000000000000000000000000000000000000..b11874931e6a93e600bb29bab2f3b4ceccd9dfab --- /dev/null +++ b/data/2404.08951.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2fb0d57b772f97d40dad3371836851b023f2c05e30514bb1a94c0a6577523383 +size 778331 diff --git a/data/2404.08958.png b/data/2404.08958.png new 
file mode 100644 index 0000000000000000000000000000000000000000..e622f84e9bd281fa69d442eb4742fbf801ab7a5f --- /dev/null +++ b/data/2404.08958.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b1dd972121444428211bbc656ebed651834a60be372f878951b9eefd065ddd96 +size 826948 diff --git a/data/2404.08968.png b/data/2404.08968.png new file mode 100644 index 0000000000000000000000000000000000000000..ab9c60d7b4fef68e386fc8361cf80d3c3ece82fe --- /dev/null +++ b/data/2404.08968.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1164a4cc40f36c5b2cf1557c61f9d433f0e8628a1a41b52da09f58d13ac39345 +size 1016695 diff --git a/data/2404.08978.png b/data/2404.08978.png new file mode 100644 index 0000000000000000000000000000000000000000..3db4707b9dbca93b692cc871f64d565d6b59cc26 --- /dev/null +++ b/data/2404.08978.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:734076d3bac9034d5315a12a89d96c44cc8349511a00aa36476ae7fb8c0fcb3b +size 834706 diff --git a/data/2404.09001.png b/data/2404.09001.png new file mode 100644 index 0000000000000000000000000000000000000000..4c5173044281174607d8708a794a544000e18852 --- /dev/null +++ b/data/2404.09001.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4e25eb116a2cc4b821eba147da5f4bb621b16f3f20882b6bdb8216ba3b3ca2aa +size 802059 diff --git a/data/2404.09011.png b/data/2404.09011.png new file mode 100644 index 0000000000000000000000000000000000000000..027cca05f8b1a01b796c8e8367ab9cc9b3df3571 --- /dev/null +++ b/data/2404.09011.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:46ae8229a219150adf49cbaa9dc03d40cdca14842ee9d37988664077c588b861 +size 747712 diff --git a/data/2404.09216.png b/data/2404.09216.png new file mode 100644 index 0000000000000000000000000000000000000000..a402a3aa34d82eaf83e3ddfc638b7513cb1ddb8c --- /dev/null +++ b/data/2404.09216.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:84f219226e3b8c2c7b04af1be4f90382feb76acee82d5b2f7059bd73ee0d9bb6 +size 1686860 diff --git a/data/2404.09263.png b/data/2404.09263.png new file mode 100644 index 0000000000000000000000000000000000000000..21682ee9499ba63a869c1603a4e49729c06845ae --- /dev/null +++ b/data/2404.09263.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5a5e8b0f94bc7e186dea14c57df5d3cd77982011267cb028c39ef7b951a8ae24 +size 757219 diff --git a/data/2404.09389.png b/data/2404.09389.png new file mode 100644 index 0000000000000000000000000000000000000000..ef395d90b57eb717f541267ac237808a49da4bbe --- /dev/null +++ b/data/2404.09389.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6124b2137c3533b865bd1f8844b6388c5aa84d489aaeba01bf0dbd052f08e8dc +size 788897 diff --git a/data/2404.09401.png b/data/2404.09401.png new file mode 100644 index 0000000000000000000000000000000000000000..b1ebf620c92b86139658918c0f98ee7ec70f23a6 --- /dev/null +++ b/data/2404.09401.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8408f7ebd14056e4d9ddfff8e541806af9c0976d6b98a9f7d74dd6944bb77dbd +size 981074 diff --git a/data/2404.09451.png b/data/2404.09451.png new file mode 100644 index 0000000000000000000000000000000000000000..35981df51b1ae5ff142d9efb79c33d5839757ab6 --- /dev/null +++ b/data/2404.09451.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ba3c9a55f0bcce2bdf04824b86f09e5f9180059aa1b0592dc07e4d06f443a9d5 +size 744198 diff --git a/data/2404.09454v1.png b/data/2404.09454v1.png 
new file mode 100644 index 0000000000000000000000000000000000000000..2ba617e5b2d10bb4e7e37cccb52764199840d197 --- /dev/null +++ b/data/2404.09454v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:890feeaed8b52e7bdd28f07ca1849a458bac419a0305b5d1eccd62fa40c8d688 +size 769696 diff --git a/data/2404.09465.png b/data/2404.09465.png new file mode 100644 index 0000000000000000000000000000000000000000..2863d91f12586f92b7b2b2166ae44eeac8366518 --- /dev/null +++ b/data/2404.09465.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb23867d280cc1ccf5a9b61e1ef814971edf0ab1dd0b337381f232b7a5172540 +size 1195366 diff --git a/data/2404.09490.png b/data/2404.09490.png new file mode 100644 index 0000000000000000000000000000000000000000..81e98d3aeb097e750aa3261e1561466a128a0494 --- /dev/null +++ b/data/2404.09490.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc25538a18d4e014f28a8bf6c35a501677038d09c8e122e8d1fff16de3bfe128 +size 447515 diff --git a/data/2404.09502.png b/data/2404.09502.png new file mode 100644 index 0000000000000000000000000000000000000000..359d4078d3e7b377b7002eda5f84f9c2b5666d92 --- /dev/null +++ b/data/2404.09502.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8d89c6dd31724f0c219cac1f38260c089adc0411436b2aaafef1d724dd435523 +size 741606 diff --git a/data/2404.09736.png b/data/2404.09736.png new file mode 100644 index 0000000000000000000000000000000000000000..860bc9b652ca7ebda373e65c3b757d9a162e1cc5 --- /dev/null +++ b/data/2404.09736.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5a489e44a5c4d691b19c6c8201859a96d9c9f41ffcaa13f461fc31b47768bf06 +size 816714 diff --git a/data/2404.09819.png b/data/2404.09819.png new file mode 100644 index 0000000000000000000000000000000000000000..d6e2bee311f01af0d7ac59e42a7fc00b38f9ee5b --- /dev/null +++ b/data/2404.09819.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2b33809f9511eb0c953f55b88546e12e216955ad3f8ec681692447c5e09b82ff +size 797290 diff --git a/data/2404.09833v1.png b/data/2404.09833v1.png new file mode 100644 index 0000000000000000000000000000000000000000..c1e6ffb052dc5447b885fbd31126b3f3671f6789 --- /dev/null +++ b/data/2404.09833v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a1a5bb829348ce76ed6012c6ab38b4003d8acac945701cc42667a9a804a0f4e +size 1464675 diff --git a/data/2404.09884.png b/data/2404.09884.png new file mode 100644 index 0000000000000000000000000000000000000000..92b09d87adabd2452d64a34c990b4430beee6d72 --- /dev/null +++ b/data/2404.09884.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9b911c5f7ac624b122229f847eb0ea62067a0111ab15371c949a731c269025a4 +size 734660 diff --git a/data/2404.09993.png b/data/2404.09993.png new file mode 100644 index 0000000000000000000000000000000000000000..82f866c16f8de37607f304fc12422d22ed27c807 --- /dev/null +++ b/data/2404.09993.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:80d6fab9d4ae531b6728683c604dfe626fcd5477048fafb0d61d6e2dc6ac3c65 +size 974252 diff --git a/data/2404.10124.png b/data/2404.10124.png new file mode 100644 index 0000000000000000000000000000000000000000..69127003952f84052f43fdd81dc822caa069132b --- /dev/null +++ b/data/2404.10124.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f45206af344095106cd0152dbf79f1e0aef5a543b5b8c75c87a790dce2b1eb39 +size 768472 diff --git a/data/2404.10193.png 
b/data/2404.10193.png new file mode 100644 index 0000000000000000000000000000000000000000..003f1a48cf94007587c38af8b138277a9f870ed5 --- /dev/null +++ b/data/2404.10193.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:52eb59a538aff43d3adedd0efcfbd8967befecc0f5fdc7893b8f4845aa2dd078 +size 714647 diff --git a/data/2404.10227.png b/data/2404.10227.png new file mode 100644 index 0000000000000000000000000000000000000000..c504a6313a99a5f793c2b1737afdd19c8ff06d9d --- /dev/null +++ b/data/2404.10227.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d259e2d770d0bfdf4ad89437be92322cf471c5b6d5538190eeb2babdd2258ded +size 777395 diff --git a/data/2404.10241.png b/data/2404.10241.png new file mode 100644 index 0000000000000000000000000000000000000000..edb485dc0c5745ef380fe9e6a264450883ae06aa --- /dev/null +++ b/data/2404.10241.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4d24ce1d24267951adb425ed22193a1b8811d460031b932ffcc0f13eb4510468 +size 838963 diff --git a/data/2404.10242.png b/data/2404.10242.png new file mode 100644 index 0000000000000000000000000000000000000000..c0df8c9766bf0752a8533d198f87d7bf048bc302 --- /dev/null +++ b/data/2404.10242.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:26ec819508532b9d98299ace5cb87c9f0331131f12e45335ce05ca5b00172717 +size 792140 diff --git a/data/2404.10322v1.png b/data/2404.10322v1.png new file mode 100644 index 0000000000000000000000000000000000000000..294100a1b1c7e73625aa541c8a182afdcfce7dce --- /dev/null +++ b/data/2404.10322v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ec51255719fe48ff259dd16dbcbdd79132c4f6685a88beb015d84d2c1856da84 +size 735950 diff --git a/data/2404.10438v1.png b/data/2404.10438v1.png new file mode 100644 index 0000000000000000000000000000000000000000..752c2dfd636eb08182c6dac3a1397d061c260a1a --- /dev/null +++ b/data/2404.10438v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:356c3f9609c3d932b59438fce3bfef0e6401a8014d4cab0d2619c33caf711fea +size 776148 diff --git a/data/2404.10603.png b/data/2404.10603.png new file mode 100644 index 0000000000000000000000000000000000000000..f7c49cfda823e57920b644c41a1bdb7dd4869345 --- /dev/null +++ b/data/2404.10603.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9f8b539a5b0955b429a1081f42026e306b71126cb8e28276da62c28f94095db0 +size 1377487 diff --git a/data/2404.10633.png b/data/2404.10633.png new file mode 100644 index 0000000000000000000000000000000000000000..de66e722981fad7fbad22c8d475be43759531e02 --- /dev/null +++ b/data/2404.10633.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1c8b034d7abde9040c084714f6dc8a9e328d7f39681d5d066a0ec1a205fb80ff +size 861871 diff --git a/data/2404.10716.png b/data/2404.10716.png new file mode 100644 index 0000000000000000000000000000000000000000..742f68b3bfe1d83a5fbc371c721f6a3004eb5965 --- /dev/null +++ b/data/2404.10716.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5a8ead1b0a3dc708febe0b8697c8bb4b57b8f104c006117d7551732610e544f3 +size 762667 diff --git a/data/2404.10766v1.png b/data/2404.10766v1.png new file mode 100644 index 0000000000000000000000000000000000000000..c36ae7e9fb4962fbab570363664cc1d15ca0f763 --- /dev/null +++ b/data/2404.10766v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:848fc62456bdbb47acf61aee68aa274aa6ec302931ec764325ec3644d4c64eaf +size 330327 diff --git 
a/data/2404.10880.png b/data/2404.10880.png new file mode 100644 index 0000000000000000000000000000000000000000..936e746476d01bd89f2ec606b752ae8bf3db96b6 --- /dev/null +++ b/data/2404.10880.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:35c5e2b841d956ffff2e13340be6c1a0aca740e1c95c6baf9bd4b78638aebb97 +size 654190 diff --git a/data/2404.10966v2.png b/data/2404.10966v2.png new file mode 100644 index 0000000000000000000000000000000000000000..6feaf1fc91a286df7e995b20fdb7b114da12fb2b --- /dev/null +++ b/data/2404.10966v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8f74ff83c2b678e34a33f9e1f9b0e7253b65c7776acd14775915a8f4582bef94 +size 734188 diff --git a/data/2404.11062.png b/data/2404.11062.png new file mode 100644 index 0000000000000000000000000000000000000000..737d2d4c6c5aa359110bbe9acef4765ee80a1d65 --- /dev/null +++ b/data/2404.11062.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a5536ab16c05f376368cdb93351bf4fb2c9e3125a2b811362e9d29858983b3cc +size 859669 diff --git a/data/2404.11120.png b/data/2404.11120.png new file mode 100644 index 0000000000000000000000000000000000000000..4d852b85e7cef006c0b5c24d12492bbc38cdf8ba --- /dev/null +++ b/data/2404.11120.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:43d11eafe7422d205f8b3531284a4d0b5af4c16bad6946f4c02e33d43c6262d0 +size 995322 diff --git a/data/2404.11139v1.png b/data/2404.11139v1.png new file mode 100644 index 0000000000000000000000000000000000000000..f2c4a400ad89bf9c4e54490e7bd759ca6f66acfc --- /dev/null +++ b/data/2404.11139v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ba7a74de1a44c3492f7526c1047d37b1d944e694d0665107e5f506719620899d +size 829533 diff --git a/data/2404.11151.png b/data/2404.11151.png new file mode 100644 index 0000000000000000000000000000000000000000..c3ef4cf51471dcbf37c8f443d6487e4dbb9c5ffb --- /dev/null +++ b/data/2404.11151.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5239cd71acd8a8eee867e68d3269fa39d4ecdb9b638e6fcca4d649d117daa770 +size 938106 diff --git a/data/2404.11156.png b/data/2404.11156.png new file mode 100644 index 0000000000000000000000000000000000000000..829d44cad1cb33c0f29640f17aeb3ddb50ac1c01 --- /dev/null +++ b/data/2404.11156.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f81c4a46693e43bdc5b75744e0fc5f0d9129551e97c0cd9aca5612a099f11812 +size 983133 diff --git a/data/2404.11207v1.png b/data/2404.11207v1.png new file mode 100644 index 0000000000000000000000000000000000000000..6316dfddaf004da35bb05f51d707dccb936a333b --- /dev/null +++ b/data/2404.11207v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:caf79fb4dab1593c19c57709109e4298fbc7883f0bd2d18010834f792cddacc1 +size 795500 diff --git a/data/2404.11273.png b/data/2404.11273.png new file mode 100644 index 0000000000000000000000000000000000000000..b7531cc9a18429b20042eddb655d268827a3ab8b --- /dev/null +++ b/data/2404.11273.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cab2111cb76362c0757ba4be8cde48dafbb151ed4fd38be0b74526f9f781ded1 +size 796391 diff --git a/data/2404.11291.png b/data/2404.11291.png new file mode 100644 index 0000000000000000000000000000000000000000..be7fe5031c1e436561f500e8d52b79452c53791e --- /dev/null +++ b/data/2404.11291.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:09225c8cce815472ad7fbd39ca41a7858e172dd5c5829ec3e226700649a3a341 +size 
1022277 diff --git a/data/2404.11511.png b/data/2404.11511.png new file mode 100644 index 0000000000000000000000000000000000000000..984338c4ed88457377822e451290a1cb8d36d0a6 --- /dev/null +++ b/data/2404.11511.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4b7979cb3bb8158e2c157658aa5d1471479b46dc3751489b78a786e4be124b0a +size 849770 diff --git a/data/2404.11590.png b/data/2404.11590.png new file mode 100644 index 0000000000000000000000000000000000000000..73a29504f619ba5a53c87a572f93e6b087a00b30 --- /dev/null +++ b/data/2404.11590.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d8909ea90a0ae244908cb12e6e54b973aa6ab3227de1854706090f4f2744f02 +size 839121 diff --git a/data/2404.11699.png b/data/2404.11699.png new file mode 100644 index 0000000000000000000000000000000000000000..1d6a4adf876c4ea0773894e0af8c4ee28da84697 --- /dev/null +++ b/data/2404.11699.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ba2a9a521905d0f9fbd3df3914f59881d325da3d11189584203026ffd92da0ce +size 728853 diff --git a/data/2404.11732.png b/data/2404.11732.png new file mode 100644 index 0000000000000000000000000000000000000000..04f17f47ca3fc8b51153c03eb7ca62b8ae4f1adf --- /dev/null +++ b/data/2404.11732.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cfa80844ea957db5c930068952bc2cb70263bda677fc0c22cbdf2dbd6850bb52 +size 794914 diff --git a/data/2404.11884.png b/data/2404.11884.png new file mode 100644 index 0000000000000000000000000000000000000000..bcc68b6a5f425921985d4ee86389a6ef13c04b4d --- /dev/null +++ b/data/2404.11884.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2c789a931b7c0b7a285c06552721b75f5ed46e623996e8c55476c964a8003cba +size 1184050 diff --git a/data/2404.11958.png b/data/2404.11958.png new file mode 100644 index 0000000000000000000000000000000000000000..ac1c18ce406b4829a7c7f17dd92e0027347cfed3 --- /dev/null +++ b/data/2404.11958.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a3504aec70fe488720e77155163f38a8ed8561fb049f18fb1181a83d5ca8d42b +size 808686 diff --git a/data/2404.11987.png b/data/2404.11987.png new file mode 100644 index 0000000000000000000000000000000000000000..88ffe2c001732fedafba5b5f0e6e0683fcaaf662 --- /dev/null +++ b/data/2404.11987.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9a3d9a2642aadcf17449f95f722aa7d68b0103f6652056141e4f6eb097fafcd1 +size 1130175 diff --git a/data/2404.12168.png b/data/2404.12168.png new file mode 100644 index 0000000000000000000000000000000000000000..408f8cf3abdb01e6ce21ee72d6584b2fb5c50d56 --- /dev/null +++ b/data/2404.12168.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ae2daa26da025ea46cf2d24ea9739431de11190bc83924433466b3874404ca2a +size 838059 diff --git a/data/2404.12203.png b/data/2404.12203.png new file mode 100644 index 0000000000000000000000000000000000000000..f2d31e50d825623699536056437966106aca5fb2 --- /dev/null +++ b/data/2404.12203.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:430dd20c383d0f163080e893c87f6412c03f065bf7562d84472840cf74941069 +size 818227 diff --git a/data/2404.12209.png b/data/2404.12209.png new file mode 100644 index 0000000000000000000000000000000000000000..fe1739206477e9042fd4883c6e84880f147d4684 --- /dev/null +++ b/data/2404.12209.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ee9b5d495b56031a014960647c797afeeb63bf4aa15a14e2057975909c75d17e +size 
819910 diff --git a/data/2404.12235.png b/data/2404.12235.png new file mode 100644 index 0000000000000000000000000000000000000000..1e5560b4deb1c10d86b05402cb73a5337108b4a8 --- /dev/null +++ b/data/2404.12235.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:55cf7c7ce2c51865ed1aaf6479bcc4f956d1443261835fd30ab3090c90497634 +size 1005831 diff --git a/data/2404.12322.png b/data/2404.12322.png new file mode 100644 index 0000000000000000000000000000000000000000..c2380615fdb45fdec5183cc947e383087db5d704 --- /dev/null +++ b/data/2404.12322.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f00dc41cfe9823d7a396399e620ff0ef53f44dc3f50e4e6b7fea6f46ad5f282b +size 1113688 diff --git a/data/2404.12383.png b/data/2404.12383.png new file mode 100644 index 0000000000000000000000000000000000000000..15ec989c6b34f5fd60abb6370f9dace96468bf6d --- /dev/null +++ b/data/2404.12383.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:baa27fd44142f54799df4cf218d0c7f22342e5faffa9c52718df094dae47899c +size 785668 diff --git a/data/2404.12391.png b/data/2404.12391.png new file mode 100644 index 0000000000000000000000000000000000000000..ff8fa6017abef5ef92c43ed1da53bdab1566bce6 --- /dev/null +++ b/data/2404.12391.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ba5ad463ad18d60a1ddfa71d402c6f3e1d76544b2db77eee2a56f56b886160f2 +size 895778 diff --git a/data/2404.12538.png b/data/2404.12538.png new file mode 100644 index 0000000000000000000000000000000000000000..a1b6bc389ed874c785740ea0907e2c4f000f49fc --- /dev/null +++ b/data/2404.12538.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:57251f4840044ca84e7f3f88827273f81bc045ca2e3d34da635e37c30287a573 +size 996901 diff --git a/data/2404.12887.png b/data/2404.12887.png new file mode 100644 index 0000000000000000000000000000000000000000..4476f80a88d3d73fa3c6edc5d9b4f0579bf90f88 --- /dev/null +++ b/data/2404.12887.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2619070e03c01295769e123e11d5c58eb48cce37f0f41daa1df19fec67a8ebe5 +size 983310 diff --git a/data/2404.13024.png b/data/2404.13024.png new file mode 100644 index 0000000000000000000000000000000000000000..c12fafd1fdf121d62568c892568f90ebd80cc789 --- /dev/null +++ b/data/2404.13024.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:46ad4d03a2ee8eed73329a70d31e009fce014c475ed57ed79e8aecc41de4f237 +size 992621 diff --git a/data/2404.13103.png b/data/2404.13103.png new file mode 100644 index 0000000000000000000000000000000000000000..6ba69177b563e52f771dd00b0f826f1db18b55fe --- /dev/null +++ b/data/2404.13103.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7483ebe849c1c0c8b7c87abe8a133f2b2951c0e2396704f41305a0b8269e413e +size 838118 diff --git a/data/2404.13153.png b/data/2404.13153.png new file mode 100644 index 0000000000000000000000000000000000000000..0caa8cd120cfb1fa19edef8327c1d91ac0bf564a --- /dev/null +++ b/data/2404.13153.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:925b46e4289eda98b534755d0d352c6e5ea655af11e31221b70693541b91ed0e +size 777078 diff --git a/data/2404.13534.png b/data/2404.13534.png new file mode 100644 index 0000000000000000000000000000000000000000..9780b20ec400916e1402bb3de8572941d916b0db --- /dev/null +++ b/data/2404.13534.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:98610a5d6868b441e4dc938785785c83583beea7672e6b85b0f823b4aedb7f0f +size 
721653 diff --git a/data/2404.13541.png b/data/2404.13541.png new file mode 100644 index 0000000000000000000000000000000000000000..2b90e25be363ee48bf98af45fc9fef877eae3162 --- /dev/null +++ b/data/2404.13541.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fc6d724525ac2665b97a5aa0e9e9620410fb040dac34a26f2811eb8f2d0c806e +size 946320 diff --git a/data/2404.13605.png b/data/2404.13605.png new file mode 100644 index 0000000000000000000000000000000000000000..c3fb7fb7006f1d91d86af4dfd4b61b2df055e4bb --- /dev/null +++ b/data/2404.13605.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a7b221ed8eb59229df56be31ed277f5d2de0b66027b8442199b7a3ffedfe43bc +size 768805 diff --git a/data/2404.13819.png b/data/2404.13819.png new file mode 100644 index 0000000000000000000000000000000000000000..4b4fc5497581952e9f922b2814be3c6eff5ec6a1 --- /dev/null +++ b/data/2404.13819.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d42fd3519ff1b5593a9f5504da18448de3e4027d565ea795b6eb09453bc14731 +size 1221206 diff --git a/data/2404.14006.png b/data/2404.14006.png new file mode 100644 index 0000000000000000000000000000000000000000..2bc55f1678ccf11527d8e6c898f8bf9ee443bf2d --- /dev/null +++ b/data/2404.14006.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:21dcb167441049fced06cc6b846b654036eef26e4abb06e086be77910a451a33 +size 790947 diff --git a/data/2404.14016.png b/data/2404.14016.png new file mode 100644 index 0000000000000000000000000000000000000000..71d9f0266820e3c208c81df333b859ae7faa8e5f --- /dev/null +++ b/data/2404.14016.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:69271de4b804efd52b3eb66ab038ecb4e8b6f3c1e9b4901ceeee02a85ffb92bc +size 701621 diff --git a/data/2404.14034.png b/data/2404.14034.png new file mode 100644 index 0000000000000000000000000000000000000000..aaa5a0b5ec335ad52ee5130a93080384de375c8b --- /dev/null +++ b/data/2404.14034.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ddde75df16ef676d6f6c60dcb8981e2dcb370c092288ddbf87de1285501b77c2 +size 939841 diff --git a/data/2404.14044.png b/data/2404.14044.png new file mode 100644 index 0000000000000000000000000000000000000000..916a14aed53aef9fe46d816647d31099235e54cc --- /dev/null +++ b/data/2404.14044.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f6079465ac8c0ee841d1dbd413a59cc0597b74fbc18daedf20305f7a7ab1a02d +size 1045198 diff --git a/data/2404.14410.png b/data/2404.14410.png new file mode 100644 index 0000000000000000000000000000000000000000..019c1de3165734f40205c17c37cc6d6a834b0350 --- /dev/null +++ b/data/2404.14410.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c2df8e28bcd4de25464a60f68da5f3e51df69b55f4ed49b175ba785eecb80f5a +size 803082 diff --git a/data/2404.14412v1.png b/data/2404.14412v1.png new file mode 100644 index 0000000000000000000000000000000000000000..904931f4d7758f26d32632a657fb66ff5e5cbec7 --- /dev/null +++ b/data/2404.14412v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1964e49f16f5e92815ed6ee2845c7801a436ce67a93df7a831fa63732a985bdb +size 780561 diff --git a/data/2404.14471.png b/data/2404.14471.png new file mode 100644 index 0000000000000000000000000000000000000000..7a21afbbdc5ec2f3a9d59010e8ab9ed1c56f5dec --- /dev/null +++ b/data/2404.14471.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aa07fb55720fb293d10c0f74c6f7bd523ad5b272ca4ca8587c40d250bddcd5b0 
+size 814156 diff --git a/data/2404.14542.png b/data/2404.14542.png new file mode 100644 index 0000000000000000000000000000000000000000..9474b456f370903adc06e212ad734bce02121e26 --- /dev/null +++ b/data/2404.14542.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:25f28214785d108ece7570091635e5f0d93b9b1073d8610147856f4ad5cc972d +size 774058 diff --git a/data/2404.14759.png b/data/2404.14759.png new file mode 100644 index 0000000000000000000000000000000000000000..430c26255f4c7f46c3ef0eb7da5da2b2c0f04890 --- /dev/null +++ b/data/2404.14759.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:053a3f502ab8afea359d6347dc1cfb6255f484bd30353ee86b3d3d92af2b03e0 +size 875190 diff --git a/data/2404.14808v1.png b/data/2404.14808v1.png new file mode 100644 index 0000000000000000000000000000000000000000..338b5f7a022d18afa2129bbfb648441ae112e8f9 --- /dev/null +++ b/data/2404.14808v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:581ba16cdd293772cf85afc9a0ed0a6aebe98b8300a87f24c6b76e82de5b4521 +size 782241 diff --git a/data/2404.14908.png b/data/2404.14908.png new file mode 100644 index 0000000000000000000000000000000000000000..9068191f24107c81c1d75160017e5c81d0f5e284 --- /dev/null +++ b/data/2404.14908.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7ab86253749b1e35841d67ed592bbde4a75d242d9a8499a4f40360c98f316943 +size 954540 diff --git a/data/2404.14949.png b/data/2404.14949.png new file mode 100644 index 0000000000000000000000000000000000000000..32c591d143a6d1dc13e516387cd2ae77b4b5bce6 --- /dev/null +++ b/data/2404.14949.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b9533eaf7f98f8140d750601ce5a32cd903a15f389a3e33e666f91cb7901ffd8 +size 823028 diff --git a/data/2404.15010.png b/data/2404.15010.png new file mode 100644 index 0000000000000000000000000000000000000000..dcaea246e036cba6d3880a0f955287d9ae3cca52 --- /dev/null +++ b/data/2404.15010.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ff6efd9f61786566e5282dbc36e895a77152ded993678a9e2aa1fa0272c21087 +size 801300 diff --git a/data/2404.15081.png b/data/2404.15081.png new file mode 100644 index 0000000000000000000000000000000000000000..8537493184f4683bcd025fe8c9f148eeec0b973a --- /dev/null +++ b/data/2404.15081.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c5e7c411819ced9466efe402f981cf30ed396c70ce09b8aa8ea067f6d019e88 +size 1390682 diff --git a/data/2404.15263.png b/data/2404.15263.png new file mode 100644 index 0000000000000000000000000000000000000000..a59719e45db7ca5eb2c978a237b7d2e27e05da31 --- /dev/null +++ b/data/2404.15263.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:57b2db757598eb46832edf65cee854a5e2149dbad25a36a1ca9515fd6ff40f4d +size 713336 diff --git a/data/2404.15383.png b/data/2404.15383.png new file mode 100644 index 0000000000000000000000000000000000000000..7adfa7b93cc411830245f8c0b59cd3289a8e1a72 --- /dev/null +++ b/data/2404.15383.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:69e15870ee06611086d18f3f2792a095870f53cfe56dc05bd519abc89e27787e +size 1285455 diff --git a/data/2404.15516.png b/data/2404.15516.png new file mode 100644 index 0000000000000000000000000000000000000000..2c90514dbf68653dbad82f8578938fe4fab7258b --- /dev/null +++ b/data/2404.15516.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:024c66167b13bda11a96813fe733d56070ad3b4a19b598831581b5817f7101f7 +size 851691 diff --git a/data/2404.15620.png b/data/2404.15620.png new file mode 100644 index 0000000000000000000000000000000000000000..f5b6e868dccf6f1237a18d22e6c809c72bc08695 --- /dev/null +++ b/data/2404.15620.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a5ce8b3162ab7eef6014840c1aa2397d4a6807534b26b33c63df12c566e6997d +size 810056 diff --git a/data/2404.15655.png b/data/2404.15655.png new file mode 100644 index 0000000000000000000000000000000000000000..83f0225b766a66fb168d20be1fc5b75c0b258783 --- /dev/null +++ b/data/2404.15655.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cced01b56712e18ec771f1a00666cd42bcd75dbbc3b84827f316b464c125f8e3 +size 855732 diff --git a/data/2404.15672.png b/data/2404.15672.png new file mode 100644 index 0000000000000000000000000000000000000000..cda8054b610f38d0e68cfe8d3541fa569c64e5a0 --- /dev/null +++ b/data/2404.15672.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f5868b635c1072eb6441bbbc8a33eb5f0469fc3d0cff62a2f302ecb1e68e810c +size 860632 diff --git a/data/2404.15707.png b/data/2404.15707.png new file mode 100644 index 0000000000000000000000000000000000000000..45cf36d4e044d53b3e5a5feee623984d884bb9bb --- /dev/null +++ b/data/2404.15707.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:563f077082806b15fb86afd6b0000d115445152118a02c5d8a0b723f8f484aac +size 952439 diff --git a/data/2404.15815.png b/data/2404.15815.png new file mode 100644 index 0000000000000000000000000000000000000000..246e5b742e7f83a3d0f40d1a871840ec9eb56da4 --- /dev/null +++ b/data/2404.15815.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:05f8e632fe3fbd9490f0b7f1a4cefeca97812c3f4ec3b08871192cf3ee17a250 +size 236988 diff --git a/data/2404.15882.png b/data/2404.15882.png new file mode 100644 index 0000000000000000000000000000000000000000..bc7b04ed85be5bb4874fbb21a8e1d05c9be80eff --- /dev/null +++ b/data/2404.15882.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d8b633e8698280619402ba4f82e44326532769b769515c016357135368abcd4e +size 892531 diff --git a/data/2404.15891v2.png b/data/2404.15891v2.png new file mode 100644 index 0000000000000000000000000000000000000000..1b4056d6f003d45f8673df84533ddf81634a1242 --- /dev/null +++ b/data/2404.15891v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cc2087b1c2b76ec24b43e1e17f2fad759e1c5bcbe034ded7a027fef7146dae91 +size 1221242 diff --git a/data/2404.16030.png b/data/2404.16030.png new file mode 100644 index 0000000000000000000000000000000000000000..7a0a57d19fb60756258e1152c19b0ef2bbb35ca2 --- /dev/null +++ b/data/2404.16030.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad816a26829c984e5c4d8a80fc5299fa760f7e5ac1343b9382001e483c8a969c +size 907207 diff --git a/data/2404.16035v1.png b/data/2404.16035v1.png new file mode 100644 index 0000000000000000000000000000000000000000..b7c9affcf4c8bd220d1c3464caf72ca99dfff688 --- /dev/null +++ b/data/2404.16035v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:20ef1864690bd743b276a64cddf9fef2e09c80c3ba11de788100d533c0e680e4 +size 963818 diff --git a/data/2404.16123.png b/data/2404.16123.png new file mode 100644 index 0000000000000000000000000000000000000000..4b82e201933fa7e90017a3d2f2cea18bd0962fe9 --- /dev/null +++ b/data/2404.16123.png @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:1124f87361a311b0dfe0b15a051726398fb2e669390ed48f15f864a9b597f9ea +size 717341 diff --git a/data/2404.16222.png b/data/2404.16222.png new file mode 100644 index 0000000000000000000000000000000000000000..23cd8b501a2cdd9dbf8509568171bf5d7dc4eb8f --- /dev/null +++ b/data/2404.16222.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:399754e7e578d0f4c368cf881300dd292e81b246283a49b821308d9b19dcbac7 +size 883481 diff --git a/data/2404.16306.png b/data/2404.16306.png new file mode 100644 index 0000000000000000000000000000000000000000..97797530adc85169c08d973cb4362c410e8c8776 --- /dev/null +++ b/data/2404.16306.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b7f2ef13d03feeda5f2cd88bba977223df2cc8c5152eeecb89a6a94b59df2f3 +size 1025938 diff --git a/data/2404.16451.png b/data/2404.16451.png new file mode 100644 index 0000000000000000000000000000000000000000..9a2676421078f6224c74aa03b8e9d73c65504d78 --- /dev/null +++ b/data/2404.16451.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:724ccb8779c28a864b1740ca0e55187a70a761d67a9f2135e49153e1404c120f +size 760612 diff --git a/data/2404.16452.png b/data/2404.16452.png new file mode 100644 index 0000000000000000000000000000000000000000..2180365e7e0fbc8f02b999b4a245349be4b914f2 --- /dev/null +++ b/data/2404.16452.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e5f69a1915067a38d809d12d3e78b8b9e83976f0d888288ef6f82439a771129e +size 1055128 diff --git a/data/2404.16456.png b/data/2404.16456.png new file mode 100644 index 0000000000000000000000000000000000000000..4fd5dc5dfea393affe40b63550fd787cf23a0109 --- /dev/null +++ b/data/2404.16456.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bf4a0342c41219faedd55512836c281a8ccc2f7c9bd4e70a805081f0b90a993d +size 725037 diff --git a/data/2404.16493.png b/data/2404.16493.png new file mode 100644 index 0000000000000000000000000000000000000000..c91af73df78c61b30978b72f9f2a552e4dbf16b1 --- /dev/null +++ b/data/2404.16493.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6dafbae8bed4d4ac962bb68939bb4b95969df45e06b6047f348fcd5a32118ad9 +size 909191 diff --git a/data/2404.16510.png b/data/2404.16510.png new file mode 100644 index 0000000000000000000000000000000000000000..55ccd6990a49cc854df447915627ce83cb077f28 --- /dev/null +++ b/data/2404.16510.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d016e4d18d5f0bc37c24f3b3160e92705fd650d0aeea68b4ee6a4b30fd3c9234 +size 1362553 diff --git a/data/2404.16552.png b/data/2404.16552.png new file mode 100644 index 0000000000000000000000000000000000000000..c9214c7853f28b7861599a129ff3444d652c6b38 --- /dev/null +++ b/data/2404.16552.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:24da15441c5e8645f3ff8cf7292660c1be5b38b3efaf4c329bbd0ebf4d423400 +size 675965 diff --git a/data/2404.16622.png b/data/2404.16622.png new file mode 100644 index 0000000000000000000000000000000000000000..ec149b57086d00a7c644244832686a26a453a8ef --- /dev/null +++ b/data/2404.16622.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb399e3a238f378dd3a5ab0cf5321e7250d5575c89a48a59a093825871635647 +size 1169197 diff --git a/data/2404.16670.png b/data/2404.16670.png new file mode 100644 index 0000000000000000000000000000000000000000..2de090f337080e33d6a32689948824a7e7777b80 --- /dev/null +++ b/data/2404.16670.png @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:c80c4b4d9b96b46bede6dff9796b514e3d866a382a4993022bbec96a9b1ec167 +size 808683 diff --git a/data/2404.16752.png b/data/2404.16752.png new file mode 100644 index 0000000000000000000000000000000000000000..6b89cd22dfb3df82d12c13004ba2a23537983828 --- /dev/null +++ b/data/2404.16752.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:726f077bc6a3d85f39d643df34b846de4bdc3085ed15fa71e4e48a3ed0dc087d +size 1071755 diff --git a/data/2404.17184.png b/data/2404.17184.png new file mode 100644 index 0000000000000000000000000000000000000000..c6fd406a435a5ee4fb7204c42a2bfd72d0537331 --- /dev/null +++ b/data/2404.17184.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eed3a5b670a7698426e3eec068c0ecc0ae781102a2bc4bd845742281a3992d0c +size 827072 diff --git a/data/2404.17340.png b/data/2404.17340.png new file mode 100644 index 0000000000000000000000000000000000000000..e5f786ec9ca93c1875e96e0d968d785694d4885c --- /dev/null +++ b/data/2404.17340.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:936b437159a9111a1dd3bf3f0b473ef4a28a3b5a5020a79f88da8b0ced8cfa3b +size 555050 diff --git a/data/2404.17528.png b/data/2404.17528.png new file mode 100644 index 0000000000000000000000000000000000000000..c5cf602bf2e601923324a3e84f01b735489c75e5 --- /dev/null +++ b/data/2404.17528.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4416a11cec2b9e627544e4b2dec7865e6a629e6553f9ae5c233b3bb870c7e435 +size 870647 diff --git a/data/2404.17620.png b/data/2404.17620.png new file mode 100644 index 0000000000000000000000000000000000000000..008e599d1798e28b0fe4a50ebe0c57947cae8e6b --- /dev/null +++ b/data/2404.17620.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b9d2b6d005096668826b6a2ccb4cb9c8ace97c341e5c2f5ab62d438078a83a66 +size 897924 diff --git a/data/2404.17753.png b/data/2404.17753.png new file mode 100644 index 0000000000000000000000000000000000000000..0f183fb182b8915fbcf69dddd86d145081c4bcd0 --- /dev/null +++ b/data/2404.17753.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c39586f94164be902de9987f94d8925402759affdd1d315d459b0dd38e912e95 +size 766320 diff --git a/data/2404.17825.png b/data/2404.17825.png new file mode 100644 index 0000000000000000000000000000000000000000..b915730618ab81eb890d8d38f9cace28d05ed4e3 --- /dev/null +++ b/data/2404.17825.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:00a74feba858e074621ef67421251da4a18fbc2935e16093f12aa4adaa95fa30 +size 797258 diff --git a/data/2404.18135.png b/data/2404.18135.png new file mode 100644 index 0000000000000000000000000000000000000000..e80b627d472c103e52927910e5885584e22afda6 --- /dev/null +++ b/data/2404.18135.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:120e49db2f1792d76d45d1412164c6f3d83329318926affb8574b4321418c4b6 +size 228675 diff --git a/data/2404.18150.png b/data/2404.18150.png new file mode 100644 index 0000000000000000000000000000000000000000..5951336b34ba0777eae7ab05d9a3301371b4bfdf --- /dev/null +++ b/data/2404.18150.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:84decf3bd80f30cfc05456cf6aca30daf44e4b4fd13033aec41735cc49cb9d1d +size 1022633 diff --git a/data/2404.18156.png b/data/2404.18156.png new file mode 100644 index 0000000000000000000000000000000000000000..095ca358863db20954dd8829b9bcb35dfcfebf95 --- /dev/null +++ b/data/2404.18156.png @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:c285fb8cf41681f4cb30271d3031b62f6c7f747fd593829e61cf45d6ca458390 +size 1120060 diff --git a/data/2404.18399.png b/data/2404.18399.png new file mode 100644 index 0000000000000000000000000000000000000000..5f11027430455bea997b6a9c6278ab3127c674d6 --- /dev/null +++ b/data/2404.18399.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8a23732577411f5ecd932452e0754658cb69eaecfa30c111816ee8f241550de9 +size 962147 diff --git a/data/2404.18433.png b/data/2404.18433.png new file mode 100644 index 0000000000000000000000000000000000000000..9073035f5b37840a64c0d855e7770e553245a4d0 --- /dev/null +++ b/data/2404.18433.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b578d471b43db27fab4129060a8acdb2d9f02aaea9738ec70fcfbd91c209ac13 +size 990239 diff --git a/data/2404.18448.png b/data/2404.18448.png new file mode 100644 index 0000000000000000000000000000000000000000..7cda92f42560dfdb21c5bf106d00631d0c427c91 --- /dev/null +++ b/data/2404.18448.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f47dfa1301f9a56be3750eb7f18189888fa42648d5d0306465b4b786539e8689 +size 868972 diff --git a/data/2404.18630.png b/data/2404.18630.png new file mode 100644 index 0000000000000000000000000000000000000000..8bb08af624e074b6654ae298a57e859816cc28d7 --- /dev/null +++ b/data/2404.18630.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6a2635982ec67d3740d4441f1afa9415a2ebfa040da9f1b09a6c9fdf79d030b8 +size 997982 diff --git a/data/2404.18873v1.png b/data/2404.18873v1.png new file mode 100644 index 0000000000000000000000000000000000000000..42e2569dd4b8f0b5ba45f2c476fb080c42ab5947 --- /dev/null +++ b/data/2404.18873v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ccbe07107753937f03959649c9356e8f1a9e12a4ac6dde7ffc038030b20adc9a +size 1204676 diff --git a/data/2404.18962.png b/data/2404.18962.png new file mode 100644 index 0000000000000000000000000000000000000000..9d1a86734d8278b86eca8004295e381f76584648 --- /dev/null +++ b/data/2404.18962.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5944ea97bb93e6d6bf30ed0ee6a00899214caa146721d8dbf572c5b5f0198426 +size 783348 diff --git a/data/2404.19110.png b/data/2404.19110.png new file mode 100644 index 0000000000000000000000000000000000000000..9e81cb8373c61a9e11623a023240c85d8661812c --- /dev/null +++ b/data/2404.19110.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:09757119611b75aa24502a3ef924d81d36420a06f96a28659a737713abfd4e6b +size 1762440 diff --git a/data/2404.19174.png b/data/2404.19174.png new file mode 100644 index 0000000000000000000000000000000000000000..4f6d910a776d15ee96e72d1c39d512740c06b508 --- /dev/null +++ b/data/2404.19174.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2e9bd2d65091edb46383d2f6b413021f8a5998361de57f86fff76deb2937fdc +size 708406 diff --git a/data/2404.19250.png b/data/2404.19250.png new file mode 100644 index 0000000000000000000000000000000000000000..c2ca61fe030c67014d9a6121c397a29a6cf42b24 --- /dev/null +++ b/data/2404.19250.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d80fc135c5b72db4a08216f0fa5c49468bd5786a97d62e8e6df7728d6f65901b +size 790582 diff --git a/data/2404.19294.png b/data/2404.19294.png new file mode 100644 index 0000000000000000000000000000000000000000..60a08ce621e1cbc681df5a3d2e43ebf8873154b8 --- /dev/null +++ b/data/2404.19294.png @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:38f3160d78ce696c513f2b2eb34755694f1631df00084faa564f0f6001912aed +size 879021 diff --git a/data/2404.19384.png b/data/2404.19384.png new file mode 100644 index 0000000000000000000000000000000000000000..61324b688e9a77e2863f1f59497eb42804c59ae1 --- /dev/null +++ b/data/2404.19384.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:98ffb9d9d66ecb32d0c9d4caf4a144e1c3be843b04043c36c1254ba43bb79897 +size 725939 diff --git a/data/2404.19417.png b/data/2404.19417.png new file mode 100644 index 0000000000000000000000000000000000000000..e46d66c27d04c44725bae0a052cc92658c7b07fa --- /dev/null +++ b/data/2404.19417.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:41483e4e2385a15180da716d589a1aac847e93e33862fdecfc37a84010b65aac +size 772158 diff --git a/data/2404.19531.png b/data/2404.19531.png new file mode 100644 index 0000000000000000000000000000000000000000..ea40fd7b02dd86270c66833786ba88aa57cdff33 --- /dev/null +++ b/data/2404.19531.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:034a588597dcd9e7d73e46b744e0cebb670d115e4c0819ecdcbff418f7116349 +size 879262 diff --git a/data/2404.19696.png b/data/2404.19696.png new file mode 100644 index 0000000000000000000000000000000000000000..b195801f9976f333134222bea5ca2ccc49a03415 --- /dev/null +++ b/data/2404.19696.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:213889c5d917ee3a57dc03fe007463485f39d44a869cc80754a58d28db962f22 +size 841648 diff --git a/data/2404.19722v1.png b/data/2404.19722v1.png new file mode 100644 index 0000000000000000000000000000000000000000..bc7df58ae2266e1753bf86b7a90da816644d383a --- /dev/null +++ b/data/2404.19722v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:337dd115b6d0659a82d5aa46ccaf84ae61685307e1a7ae3d17809cb806e9b258 +size 1112587 diff --git a/data/2404.19752.png b/data/2404.19752.png new file mode 100644 index 0000000000000000000000000000000000000000..64252d36a1f423db3e4f1f7e322be47ed5c96138 --- /dev/null +++ b/data/2404.19752.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4efe86bc93cd9da01d98374d11c555fe79afa4262ccaa4af4c5c34ed4dbed9e7 +size 896670 diff --git a/data/2405.00181.png b/data/2405.00181.png new file mode 100644 index 0000000000000000000000000000000000000000..54f508656a1e2446b8ff10318ca8f1cb3a735e6a --- /dev/null +++ b/data/2405.00181.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f223324f9203b3959ad5dcd0218b86b278fa07be6c3337cd4b3ebc44cf791274 +size 801758 diff --git a/data/2405.00244.png b/data/2405.00244.png new file mode 100644 index 0000000000000000000000000000000000000000..ab45e0df4edf56e8aa49c6cacd1ce0ae25233960 --- /dev/null +++ b/data/2405.00244.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:49cdf71bc42c82f2a810e9e7322847b2a1d0639d23adca35fbc676b1ec6fb023 +size 1016653 diff --git a/data/2405.00256.png b/data/2405.00256.png new file mode 100644 index 0000000000000000000000000000000000000000..74dccd1e66207ff9344c9f80ce3441a4dd9d4285 --- /dev/null +++ b/data/2405.00256.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6cbcea25ec6fc7837bf80d2b9384773d2d370caa2a20ee7f56220cc7b8b280e9 +size 887954 diff --git a/data/2405.00340.png b/data/2405.00340.png new file mode 100644 index 0000000000000000000000000000000000000000..b2a5af0344ef3709e3ff422eaa8baf7c86919131 --- /dev/null +++ b/data/2405.00340.png @@ -0,0 +1,3 
@@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4bae4de29e0a7493b75f9262db8d3cdab4a8d05db145f989c59250da70bb3793 +size 965392 diff --git a/data/2405.00378.png b/data/2405.00378.png new file mode 100644 index 0000000000000000000000000000000000000000..41af7bddceed373346f70f76d88cc5bdd43fba65 --- /dev/null +++ b/data/2405.00378.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e7933c09aafb7a1062b613bed8bf8612da6c9d6762741bfb40ac16ce8217891 +size 799412 diff --git a/data/2405.00587.png b/data/2405.00587.png new file mode 100644 index 0000000000000000000000000000000000000000..d515b5b6edd496260f62b2ee1f2cd8ea852eb340 --- /dev/null +++ b/data/2405.00587.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d2c282e294b4790b13b195f5b961e5a822f5f3788ecd172f19692cb4c9257b5b +size 919605 diff --git a/data/2405.00900.png b/data/2405.00900.png new file mode 100644 index 0000000000000000000000000000000000000000..ad9d09a5da27887eb5fe374f6f386a8ba62ea2d3 --- /dev/null +++ b/data/2405.00900.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4cd39a9e41350549be203b3ff1ed17388ca248068505c8df9305bc8981a5e82e +size 1073054 diff --git a/data/2405.00906.png b/data/2405.00906.png new file mode 100644 index 0000000000000000000000000000000000000000..f34a4ad0274722b5233a7eff7ecf2b90af4f5abb --- /dev/null +++ b/data/2405.00906.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:424ec4bc69f4eda9f250dbdf6f6990fb3eac25bfe976aac5532d8aa28d9b62ce +size 1248492 diff --git a/data/2405.00984.png b/data/2405.00984.png new file mode 100644 index 0000000000000000000000000000000000000000..d6cdd3e8ade44cf0319c3d838021730d9da64786 --- /dev/null +++ b/data/2405.00984.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:327a0d2fabd86782184de7ec9d648f435c81b42aba096261f0cf9189d024cfcf +size 821156 diff --git a/data/2405.01356.png b/data/2405.01356.png new file mode 100644 index 0000000000000000000000000000000000000000..a683478d4b056e750d3455af41409f3682ac3ed2 --- /dev/null +++ b/data/2405.01356.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8d39dfc174c45293668febd753136b702f5715fba7e6a063483b80c50ca2aa66 +size 1409313 diff --git a/data/2405.01538.png b/data/2405.01538.png new file mode 100644 index 0000000000000000000000000000000000000000..378d8872ef2190e7304d004f97af56defd34c5ec --- /dev/null +++ b/data/2405.01538.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7909b6a44d8fb337cb6cbc79e4e1587164026d252396ed0e40abd030c0876f78 +size 883645 diff --git a/data/2405.01662.png b/data/2405.01662.png new file mode 100644 index 0000000000000000000000000000000000000000..1963268887dc257653e663c3514a78e1310b7ac0 --- /dev/null +++ b/data/2405.01662.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2c2499a1c62489c06107e8891c39d06ed6e5362bc272ecdd3511f4ed75fd6672 +size 564940 diff --git a/data/2405.02066.png b/data/2405.02066.png new file mode 100644 index 0000000000000000000000000000000000000000..57f4935f5ecd84ec878356c5ea6685229e0c067c --- /dev/null +++ b/data/2405.02066.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d4b8b6bffd5ea23b9f98919de8dce7fbc1a77b6c712d6368e0a251c6ce318fee +size 815322 diff --git a/data/2405.02266.png b/data/2405.02266.png new file mode 100644 index 0000000000000000000000000000000000000000..2ef2caf0fa257c5156b2d2caacdce6c9a05904ff --- /dev/null +++ b/data/2405.02266.png @@ -0,0 +1,3 
@@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0d1f2aef93b5903632e0196bc450eeace3faf2554dedc6b0b8cee381c3ca2692 +size 791338 diff --git a/data/2405.02581.png b/data/2405.02581.png new file mode 100644 index 0000000000000000000000000000000000000000..6c51719c24756ba5eb585b36e12fc63512b0e0ce --- /dev/null +++ b/data/2405.02581.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b165af1ad36f93c481a276b541c72f4068380a3bf11f9b9ff922ba85c58be08d +size 727880 diff --git a/data/2405.02608.png b/data/2405.02608.png new file mode 100644 index 0000000000000000000000000000000000000000..dd41a877e34782b92896ce479473e07266c52172 --- /dev/null +++ b/data/2405.02608.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b03fc7faa983ee92fae9a8730cff9cdd63868e00578c19ec3c42e86e9a169c68 +size 923282 diff --git a/data/2405.02781.png b/data/2405.02781.png new file mode 100644 index 0000000000000000000000000000000000000000..ea86f009ae3ee0b9e3fe851dcd7e41632f49dcec --- /dev/null +++ b/data/2405.02781.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0432192812bb04c19cc8ebebed86c6ccc466054e7bcd163fa6b673d8fa0fa17b +size 675493 diff --git a/data/2405.02859.png b/data/2405.02859.png new file mode 100644 index 0000000000000000000000000000000000000000..16dec036fd1d28aeca4769a914177f0b6dd8edda --- /dev/null +++ b/data/2405.02859.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ff5f9ad47e087843f468bc97b6178250be64e2063a010e4cdccbd57376865b77 +size 1098923 diff --git a/data/2405.02911.png b/data/2405.02911.png new file mode 100644 index 0000000000000000000000000000000000000000..f0f9e4aef64201429a695ad25cc0be0dfdf93706 --- /dev/null +++ b/data/2405.02911.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:759fe3b3fde8c77753915f342ff49303141b6a2c21a2fe49e77f50e002d91a4a +size 1264419 diff --git a/data/2405.02954.png b/data/2405.02954.png new file mode 100644 index 0000000000000000000000000000000000000000..532ae010e2b3206d4a3319245d44d855fa9dcaa5 --- /dev/null +++ b/data/2405.02954.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9ce07e09a70114ef021035489c5176f77ee2f2c64bab0499a4968daa77d91049 +size 497597 diff --git a/data/2405.02962v1.png b/data/2405.02962v1.png new file mode 100644 index 0000000000000000000000000000000000000000..5d388e6d35ffb8207026f1256be3cc0f0f48e831 --- /dev/null +++ b/data/2405.02962v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:32d0db8ecbdb401974269d0220cb853fa81b4b2f31986cef8ee227ba895bfcb3 +size 1558774 diff --git a/data/2405.03144.png b/data/2405.03144.png new file mode 100644 index 0000000000000000000000000000000000000000..6450e69e4d5cf14a0c2c2ddd17bc99a3312c6946 --- /dev/null +++ b/data/2405.03144.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4161dfc46b34649fcd4dfa38dd1a06589617f5f3b44c175d84cdafdf6a6eaa51 +size 777269 diff --git a/data/2405.03178.png b/data/2405.03178.png new file mode 100644 index 0000000000000000000000000000000000000000..320f0748b8cca32df6b1b69627a968d4b6ba9a08 --- /dev/null +++ b/data/2405.03178.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7da60fb1403342586d15c0e47faad703eaacd4f1cce6689156a3c774b63230f1 +size 919481 diff --git a/data/2405.03388.png b/data/2405.03388.png new file mode 100644 index 0000000000000000000000000000000000000000..61fe479d32e05b1233d732f8d6c372b8be8d6da5 --- /dev/null +++ b/data/2405.03388.png @@ -0,0 
+1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:14ed9de80386f32f827149a39c30deb36ac3cb29e964532d10a632bfd38c101a +size 1079707 diff --git a/data/2405.03413v2.png b/data/2405.03413v2.png new file mode 100644 index 0000000000000000000000000000000000000000..8740a6c5a32425358f3bd45d1a7e98fa23e128d6 --- /dev/null +++ b/data/2405.03413v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2c0a425608e4449ea8aa9bf2a56e86966c5783d7d972a4f51a7cf5e8f19957c +size 968081 diff --git a/data/2405.04115.png b/data/2405.04115.png new file mode 100644 index 0000000000000000000000000000000000000000..e95273693d2055a49874efeea083ad57aabee0f0 --- /dev/null +++ b/data/2405.04115.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f96e5f863bef22c9622f1d6080b324583a68994693c5111922d7fbe0135f93be +size 759839 diff --git a/data/2405.04167.png b/data/2405.04167.png new file mode 100644 index 0000000000000000000000000000000000000000..df7e45388ba34725dfb56b5d4fc18b37dac1f122 --- /dev/null +++ b/data/2405.04167.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d549c049d696ef8a9daed4bc3281cef7ce4cc909ac1e13d5a2b1c35ca6e7ffc +size 745301 diff --git a/data/2405.04309.png b/data/2405.04309.png new file mode 100644 index 0000000000000000000000000000000000000000..a76e470ceabe23b9ad4e22c256d73ca8eda1191a --- /dev/null +++ b/data/2405.04309.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9bfc39ef619985f7458b8dff9a212603a6505d8424de4e19f560a2e00c9bedd9 +size 803022 diff --git a/data/2405.04356.png b/data/2405.04356.png new file mode 100644 index 0000000000000000000000000000000000000000..b01198d0fa916e6b12b79ca1670a2befdba061fa --- /dev/null +++ b/data/2405.04356.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d7005750ca82e6b0d3f7fe43e543c2c39bb79689c005e1c8f26aa9be2f5330c +size 1038099 diff --git a/data/2405.04408.png b/data/2405.04408.png new file mode 100644 index 0000000000000000000000000000000000000000..422253a865b3599d8fe4cc0f672796ca30baaf4e --- /dev/null +++ b/data/2405.04408.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b5329c57c237cd279daba40da088ca2868d00d65ba11898b75bac01813b19f78 +size 884157 diff --git a/data/2405.04534.png b/data/2405.04534.png new file mode 100644 index 0000000000000000000000000000000000000000..417bbd03dceee3db52cd2d01f2dafd2cd20d9b15 --- /dev/null +++ b/data/2405.04534.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ba6302523d5c27a81ff5feefbaa6c5f9ebfdab80cf4ec17b063bc3808c98f225 +size 1148247 diff --git a/data/2405.04741.png b/data/2405.04741.png new file mode 100644 index 0000000000000000000000000000000000000000..7ddab44e9d257305e678f9acc2d84fc102a62ef2 --- /dev/null +++ b/data/2405.04741.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b7d1ce85411302349228e0b5750d071d807452bcdbc475f2cf383ad9f3cfd808 +size 751753 diff --git a/data/2405.04771.png b/data/2405.04771.png new file mode 100644 index 0000000000000000000000000000000000000000..99ba0d0680025107e81d11533998884876d37499 --- /dev/null +++ b/data/2405.04771.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bf2564c04f017b56974da87c888304d9f59ae21d2d09065bd7d970bad9806f2a +size 725001 diff --git a/data/2405.04953.png b/data/2405.04953.png new file mode 100644 index 0000000000000000000000000000000000000000..8b2e4030c085256b2aefacafc587e7c5f80278cc --- /dev/null +++ b/data/2405.04953.png @@ 
-0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b8caa15c42db76f30a96207ff199c68fc6b80c6f294a2858e9f48b4ca0a5145c +size 776067 diff --git a/data/2405.04966.png b/data/2405.04966.png new file mode 100644 index 0000000000000000000000000000000000000000..b5b5d3d5290c07a1e0be35db9d9a92839c54849f --- /dev/null +++ b/data/2405.04966.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a682120059f1387bad46cb26a5d54c413d3a38d54f9f47056563117fc355792 +size 783840 diff --git a/data/2405.05010.png b/data/2405.05010.png new file mode 100644 index 0000000000000000000000000000000000000000..0206966c43c232134d75f4b9c8cf380b5a75b4e3 --- /dev/null +++ b/data/2405.05010.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:88b31d04c45adc7e0210afae80f5b962a60fae1ca0572563ba3edc5451781cd5 +size 530377 diff --git a/data/2405.05216.png b/data/2405.05216.png new file mode 100644 index 0000000000000000000000000000000000000000..35d83d26568d29dccc0e67d4fca115149efae186 --- /dev/null +++ b/data/2405.05216.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:119bca50ccf984b10a167e460d62bc6c4dc1bfabe0f55f40c109509f863de255 +size 1410348 diff --git a/data/2405.05252.png b/data/2405.05252.png new file mode 100644 index 0000000000000000000000000000000000000000..1aa322be8add3f0fa6e116a65a3a5bf549261388 --- /dev/null +++ b/data/2405.05252.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d8af9e896cd4d72d4f577f3f8c0ec776fb7a008672c949dc88167d7396b1ded +size 839806 diff --git a/data/2405.05259.png b/data/2405.05259.png new file mode 100644 index 0000000000000000000000000000000000000000..b547d09e634b65b11cf8d47a6dcdc1fed2999c9d --- /dev/null +++ b/data/2405.05259.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:07f19ec029a53b75d8b839d30a32f792babb7750808336a50f40906c6c500c9e +size 728228 diff --git a/data/2405.05502.png b/data/2405.05502.png new file mode 100644 index 0000000000000000000000000000000000000000..70643561fc267bd0c8483ad99ead27d98f65133b --- /dev/null +++ b/data/2405.05502.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a2680ef875de41fac3e980c19b93baf111a9e97d104d16e0d768b4722cbe678 +size 740193 diff --git a/data/2405.05587.png b/data/2405.05587.png new file mode 100644 index 0000000000000000000000000000000000000000..7654b091a26c281ec7696b7f1ff71c03468da3ce --- /dev/null +++ b/data/2405.05587.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d599a79920faf8e41622d9b543817e0ba84384b3e4d36378f2788f98ed91cb79 +size 733351 diff --git a/data/2405.05588.png b/data/2405.05588.png new file mode 100644 index 0000000000000000000000000000000000000000..32d1b6c351e2704af6752d505c5278034b467a7f --- /dev/null +++ b/data/2405.05588.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:50bc14eafa4f778531cb06700311dac05fdf21ff0a89177c232bf83588d4ea92 +size 800640 diff --git a/data/2405.05605.png b/data/2405.05605.png new file mode 100644 index 0000000000000000000000000000000000000000..88bec6f7a2c04358f8ff995c3b2df4aae6e9b981 --- /dev/null +++ b/data/2405.05605.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:234f62a306586fce601723f34cd57e79f3684fe39ac5d6440723d94d7fa6f61f +size 748995 diff --git a/data/2405.05714.png b/data/2405.05714.png new file mode 100644 index 0000000000000000000000000000000000000000..e54a6fe983902a466f59c68e45be4d3f6cfbd119 --- /dev/null +++ b/data/2405.05714.png @@ 
-0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4ae83a33754e2a1d230842b996af8302bb9b5985173499d3b4b80b8cb0ce4b5d +size 772301 diff --git a/data/2405.06214.png b/data/2405.06214.png new file mode 100644 index 0000000000000000000000000000000000000000..29e55370cb83be4d403b2092887b9125848d7fc3 --- /dev/null +++ b/data/2405.06214.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5cd9e5cfb9c79ce71bef065ff09a90d36703854033d46ccb1f10dea1fb8b2cab +size 952911 diff --git a/data/2405.06216.png b/data/2405.06216.png new file mode 100644 index 0000000000000000000000000000000000000000..167d1e28cc8c2ecbcbc1cd7d64cf74caf6341800 --- /dev/null +++ b/data/2405.06216.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:852b3801bc1b5415381b2a65ff501c5327dd5432d211a0394515c371bc382c2b +size 805491 diff --git a/data/2405.06283.png b/data/2405.06283.png new file mode 100644 index 0000000000000000000000000000000000000000..6d881a1f8277336c1c06b7af31eb84fbb252d601 --- /dev/null +++ b/data/2405.06283.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:454e87a8ae7c2f7b86668f446ffae5fade954b45ad1576aba9e503556c35923d +size 845459 diff --git a/data/2405.06284.png b/data/2405.06284.png new file mode 100644 index 0000000000000000000000000000000000000000..bb35a299d22b8111363bf750d6bea2ceb6f6adb9 --- /dev/null +++ b/data/2405.06284.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b2b2545569c152bd5f916aa4577642567aafbac495f0c7d2e3602b0dd516c8e4 +size 806976 diff --git a/data/2405.06536.png b/data/2405.06536.png new file mode 100644 index 0000000000000000000000000000000000000000..f2ffe5856e9ac8b1550b1fea2f16653f310578ed --- /dev/null +++ b/data/2405.06536.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c42fec1b9ae7a2eb28765e9293620d20b5b3aa13f9b9a57137d2cb35c2639278 +size 931750 diff --git a/data/2405.06586.png b/data/2405.06586.png new file mode 100644 index 0000000000000000000000000000000000000000..6946588928c98b39b666b5a713451d0dff45c464 --- /dev/null +++ b/data/2405.06586.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9efd1e6f51af52fe4638f8f86802d79d2e64270aea88c817703969c279af3030 +size 153128 diff --git a/data/2405.06600.png b/data/2405.06600.png new file mode 100644 index 0000000000000000000000000000000000000000..36c4eb6cb20d297d832c1bdbfaa65ecd89e2ce68 --- /dev/null +++ b/data/2405.06600.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0c21ad2e959ae806339aeaa1c0e5642ba830dee597a38a1443eb161ca0df16ee +size 808401 diff --git a/data/2405.06828.png b/data/2405.06828.png new file mode 100644 index 0000000000000000000000000000000000000000..9bc04174b69640a47d48059c3eaeb240f24e7384 --- /dev/null +++ b/data/2405.06828.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7dd36a8556d8100f579e40ac465b34d38475c1b979f94907a8d6ec7e17a13f2 +size 770367 diff --git a/data/2405.06849.png b/data/2405.06849.png new file mode 100644 index 0000000000000000000000000000000000000000..7424831822888866e076d55c213b8a9e269a1d4d --- /dev/null +++ b/data/2405.06849.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:67906f1a93a42f0479d9f7092ea10ac74f9e3fbc748a7f737f395d604a3d037b +size 723982 diff --git a/data/2405.06880.png b/data/2405.06880.png new file mode 100644 index 0000000000000000000000000000000000000000..d8c81f512fe1f478e7f3c6f502808928938a787d --- /dev/null +++ b/data/2405.06880.png @@ -0,0 
+1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:54f99af4568f4932737c95c5c7e302c3195621d8535ed978350843b69fba3530 +size 780886 diff --git a/data/2405.06887.png b/data/2405.06887.png new file mode 100644 index 0000000000000000000000000000000000000000..5975d8c6a9e399b06e792b125d2e53c329126411 --- /dev/null +++ b/data/2405.06887.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c4e2060bc7abe9d1aeba0d88090fda002c5f5cff61d5a68d098b12bb62226dbc +size 1170385 diff --git a/data/2405.06903.png b/data/2405.06903.png new file mode 100644 index 0000000000000000000000000000000000000000..8418df0d69052b2470b76a2694240f81001cb69a --- /dev/null +++ b/data/2405.06903.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ed86532cad6fe898945662f746c5ea869e21fe5db91c41639e67879005924d53 +size 815709 diff --git a/data/2405.07011.png b/data/2405.07011.png new file mode 100644 index 0000000000000000000000000000000000000000..c1689a29b7e00454bb9e0ae617dac78ed3ad725b --- /dev/null +++ b/data/2405.07011.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c1f71be449701ff79ae3bc550e4e42f74c6c063b9af4cf901231164e0f15189 +size 813176 diff --git a/data/2405.07044v1.png b/data/2405.07044v1.png new file mode 100644 index 0000000000000000000000000000000000000000..4d6756fd541189fc1dfb4941ef9256122edd0e57 --- /dev/null +++ b/data/2405.07044v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc5743585b9a506b32f54b3c532a8cd2d4082888c17b8a2483e9a8630ef1dd72 +size 778166 diff --git a/data/2405.07201.png b/data/2405.07201.png new file mode 100644 index 0000000000000000000000000000000000000000..4cf4482321a117d8a4dcc3b4ae5afe01c85ed724 --- /dev/null +++ b/data/2405.07201.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e6cb4fb2046a94bbafe166ab095278c17bf0e3ea7912c9488aaeacd13f0cae84 +size 949413 diff --git a/data/2405.07364.png b/data/2405.07364.png new file mode 100644 index 0000000000000000000000000000000000000000..09afdbb0606528c0d0a2a1b911113f3dfa4fcbf8 --- /dev/null +++ b/data/2405.07364.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c032108ca3237c916f21e4558f8ae65a1bf1e7c3cde13941cf54e83dc28ffda3 +size 714139 diff --git a/data/2405.07472.png b/data/2405.07472.png new file mode 100644 index 0000000000000000000000000000000000000000..35128cf7f8a96a3a2927ba6bd3e89585ca18a5c9 --- /dev/null +++ b/data/2405.07472.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8f4a079876d1e9119f070f1a2c109f516cf36913dffda412d6c2b9ebc7e1be24 +size 1042176 diff --git a/data/2405.07481.png b/data/2405.07481.png new file mode 100644 index 0000000000000000000000000000000000000000..49a5a5379c5ace31e7505c23f422a99790451dda --- /dev/null +++ b/data/2405.07481.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a69d03e6a97d0af95e91de71c302ed505ac295ac0b7de56b08f29c2d8212bb3 +size 794966 diff --git a/data/2405.07648.png b/data/2405.07648.png new file mode 100644 index 0000000000000000000000000000000000000000..faaf2a5b85e8a26182bc70efaf1393d3ad8176bd --- /dev/null +++ b/data/2405.07648.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1ced5bca19bc25253841b45b8c1a1813f81e82292e259864ec06292f90f11fd8 +size 1260152 diff --git a/data/2405.07784v1.png b/data/2405.07784v1.png new file mode 100644 index 0000000000000000000000000000000000000000..f44b36f6e4f826843cc3daeaa33dca62af17c8f7 --- /dev/null +++ 
b/data/2405.07784v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9ebafe6d3f69890fa99e4d81e4aaae1474f35c4425c865aa8611083befb56cc4 +size 961753 diff --git a/data/2405.07933.png b/data/2405.07933.png new file mode 100644 index 0000000000000000000000000000000000000000..d66101fed43b8a524f9891bf7e37edcb150f2a50 --- /dev/null +++ b/data/2405.07933.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5e20ce2ef88e0b3d957a526992b5cbf713fbd56f6acf055f77e509f3de34f19b +size 937656 diff --git a/data/2405.07991.png b/data/2405.07991.png new file mode 100644 index 0000000000000000000000000000000000000000..235a45e98a66077723ff0c4f26fbbe719bd62eb1 --- /dev/null +++ b/data/2405.07991.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bea52736c9afa9d305f2fd7b53f6ec839c8a182eead0dc0f60b72abce7ffd686 +size 2091563 diff --git a/data/2405.08322.png b/data/2405.08322.png new file mode 100644 index 0000000000000000000000000000000000000000..5017dcb4298ad811c46169c583163fc8d8c6d150 --- /dev/null +++ b/data/2405.08322.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2b31bab6fde82f7f73068194ef82443b4093fc79bff062dfba840ab3cdbd4be0 +size 935850 diff --git a/data/2405.08458.png b/data/2405.08458.png new file mode 100644 index 0000000000000000000000000000000000000000..c6fc11f6e0aa6bb1d0b9311fe6b606ab795c100e --- /dev/null +++ b/data/2405.08458.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:44a3c4589847c0b1e0f39216e71b4462875be9bdda4f8acea5941791dde3e118 +size 1218927 diff --git a/data/2405.08533.png b/data/2405.08533.png new file mode 100644 index 0000000000000000000000000000000000000000..cf5f7a24af2ab9d211ce1e41d635ffdb57dfc61a --- /dev/null +++ b/data/2405.08533.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61df5129a0c15d2db29e800f987bca2c647dd9c7d843db9c4382e5ea4ccc01ab +size 862117 diff --git a/data/2405.08609.png b/data/2405.08609.png new file mode 100644 index 0000000000000000000000000000000000000000..e89a4fbef2fce0916bab0a0a82044ce50f211372 --- /dev/null +++ b/data/2405.08609.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:16d3398e0eb5b595f13e2f304c1523771c98f542164879799128a4e988775e21 +size 530549 diff --git a/data/2405.08815.png b/data/2405.08815.png new file mode 100644 index 0000000000000000000000000000000000000000..fa68f2348b5a336a2a315efcfa33ae380d0f70b1 --- /dev/null +++ b/data/2405.08815.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4765135e0fb848101f103caf48b5d23af4ff5a953b47686f2f290cffad9f0e26 +size 1132356 diff --git a/data/2405.08909.png b/data/2405.08909.png new file mode 100644 index 0000000000000000000000000000000000000000..0e96d37204fc89a87c1b7cdfcfd66cf60035a9ab --- /dev/null +++ b/data/2405.08909.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:917c87d4d4df68d3889038151560dbe33e90e54710b83da55adb9cd74b099947 +size 657869 diff --git a/data/2405.09342.png b/data/2405.09342.png new file mode 100644 index 0000000000000000000000000000000000000000..5909af7ef57744b457406c1c0126d054f3dd984e --- /dev/null +++ b/data/2405.09342.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b07159219b730819c4d2fb6c7bcda918c83a9d0890b031fce0a7a824e3f329c6 +size 1092417 diff --git a/data/2405.09713.png b/data/2405.09713.png new file mode 100644 index 0000000000000000000000000000000000000000..79cb3205374b8602cdb93e1b4f5f43ed4bad50f2 --- /dev/null 
+++ b/data/2405.09713.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5bf55f39f3a1439c789308690f753e383074fd69ae6de15f38212a01950b9548 +size 796297 diff --git a/data/2405.09771.png b/data/2405.09771.png new file mode 100644 index 0000000000000000000000000000000000000000..b992ce1470cb152f3eb6b8790d51a0b17fb28fcd --- /dev/null +++ b/data/2405.09771.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eb0fddec499938b36b9281708c9a768042505575b9b5140f82205c3200b6ff55 +size 788005 diff --git a/data/2405.09879.png b/data/2405.09879.png new file mode 100644 index 0000000000000000000000000000000000000000..6fb2bcb796efada4692c44683157bd49e5f79143 --- /dev/null +++ b/data/2405.09879.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:07f81c8b620f324becfea40eb812627391219e49dfcb80abdaa32bc839ba608e +size 996727 diff --git a/data/2405.09882.png b/data/2405.09882.png new file mode 100644 index 0000000000000000000000000000000000000000..c1472ee52903cdaf6de1bb847067830eeb61eec7 --- /dev/null +++ b/data/2405.09882.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8e0a9160c21728e97517ddacb93c16d9c33e80ff4b77af027727813fc50d15cf +size 900602 diff --git a/data/2405.09924.png b/data/2405.09924.png new file mode 100644 index 0000000000000000000000000000000000000000..92c77c19847000a1ab782fbbc5dfb2da5de2b9fc --- /dev/null +++ b/data/2405.09924.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fb1a3aee6f25751a7dad48d8fa2085a735b468b718ee130ab38b65a1e60f34c5 +size 1193286 diff --git a/data/2405.09931.png b/data/2405.09931.png new file mode 100644 index 0000000000000000000000000000000000000000..1ab6a7a79048bc8ab670021d254cce690e41550b --- /dev/null +++ b/data/2405.09931.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f7a9cb7316e856f0339055e707e10af9eb7e199d7bf9dbbdf56b16eb3f63d784 +size 1037788 diff --git a/data/2405.09996.png b/data/2405.09996.png new file mode 100644 index 0000000000000000000000000000000000000000..7fb96db993efaa259e129f5fcbb72b7546b9f2e0 --- /dev/null +++ b/data/2405.09996.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a6d0090009a78bb10549519f9612e6786e57b3f74c97141059633f0cd0302d84 +size 1006654 diff --git a/data/2405.10037v1.png b/data/2405.10037v1.png new file mode 100644 index 0000000000000000000000000000000000000000..6dee30016c2247749fc0077d324eb26aa768da65 --- /dev/null +++ b/data/2405.10037v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f099b84ae89170d654b9bcd5b537adc35cb946467ab2725477be89dcd60cee90 +size 795128 diff --git a/data/2405.10053.png b/data/2405.10053.png new file mode 100644 index 0000000000000000000000000000000000000000..b56499a5cd70fbf44b06df429c45d082b5c96806 --- /dev/null +++ b/data/2405.10053.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3082e82bab963b044806dc20461c7a5cc4978b7c2b89e0aec1e22f235bc0faea +size 885145 diff --git a/data/2405.10185.png b/data/2405.10185.png new file mode 100644 index 0000000000000000000000000000000000000000..2415f69cec77f619a37d65c01a596a7daeff514b --- /dev/null +++ b/data/2405.10185.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:de23720ebba73aa02c0b4c36083e08e66c3aeeeb45a8316231c0b4ee20e17961 +size 817813 diff --git a/data/2405.10272.png b/data/2405.10272.png new file mode 100644 index 0000000000000000000000000000000000000000..66088d29e9cee28e56bf2eb2609896bdd7f70f99 --- 
/dev/null +++ b/data/2405.10272.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:391ac40ec97dc4760db79779269f4ff6cdf0c66e1bf0ecd4c57516167e9a1d35 +size 891794 diff --git a/data/2405.10286.png b/data/2405.10286.png new file mode 100644 index 0000000000000000000000000000000000000000..adaf8700ae54875c664ecb21c032cb9526850e90 --- /dev/null +++ b/data/2405.10286.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b8b4daccc29a305b8cd872ecee9191e5d7d854d96ace60388432b524e5449a2d +size 775214 diff --git a/data/2405.10575.png b/data/2405.10575.png new file mode 100644 index 0000000000000000000000000000000000000000..4677dfd394b6242357c511c349cf559904fff264 --- /dev/null +++ b/data/2405.10575.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e56d8dbe6c9aa7707b6805c3f2229a5e1407d5ad013b57203568bd2bfb668042 +size 933490 diff --git a/data/2405.10612.png b/data/2405.10612.png new file mode 100644 index 0000000000000000000000000000000000000000..963738bcfb4178af686e1b57172b00d40f651714 --- /dev/null +++ b/data/2405.10612.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e9de1cf01cd5a661dd64fb667ca08595e946624c90bd0b0c44dac1669cc54d68 +size 771755 diff --git a/data/2405.10690.png b/data/2405.10690.png new file mode 100644 index 0000000000000000000000000000000000000000..dedc4f574c1ff14123871aada09c4991aff657d5 --- /dev/null +++ b/data/2405.10690.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8875f4495ae8ff7abcbf5ec61eb454f2c08492f90b8bfe9dc92827296b76e7ed +size 447560 diff --git a/data/2405.11481.png b/data/2405.11481.png new file mode 100644 index 0000000000000000000000000000000000000000..2e9c8b6eefe193a96436764ec180670bd5a4a405 --- /dev/null +++ b/data/2405.11481.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:48ea9163c8fe2fb18ea42cd788c41d8539d757e75fa4e5cd8120f08ae3df5ad4 +size 693443 diff --git a/data/2405.11483.png b/data/2405.11483.png new file mode 100644 index 0000000000000000000000000000000000000000..721ca4164bcab971b9fa0ae990215689e65c7694 --- /dev/null +++ b/data/2405.11483.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:59e2fffc8a8de0e1b99d1a6f79b8279785b1e72bc4971889e579cd4d62e0afc8 +size 847019 diff --git a/data/2405.11487.png b/data/2405.11487.png new file mode 100644 index 0000000000000000000000000000000000000000..056ae113fb801469a19deb63aa2a4fd20fa6c8d3 --- /dev/null +++ b/data/2405.11487.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d76da4d985526a09eac885347b67594184118f077f87186aeccb88e13eb1852e +size 962726 diff --git a/data/2405.11618.png b/data/2405.11618.png new file mode 100644 index 0000000000000000000000000000000000000000..50b66c2b70a20130d6dbe64ea93a61cfb6f6afe9 --- /dev/null +++ b/data/2405.11618.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a95a5046a670eb6cb6113548d30486f2ef6cc9ac562e1ecba18e342c2536d90 +size 741316 diff --git a/data/2405.11643.png b/data/2405.11643.png new file mode 100644 index 0000000000000000000000000000000000000000..02fd3ad7436b93c8c324260c8f356f87963a1237 --- /dev/null +++ b/data/2405.11643.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a2ffd31415f068a4093ace5086e478d20d3aee3d2e2c212638d45f9377862c20 +size 1147402 diff --git a/data/2405.11867.png b/data/2405.11867.png new file mode 100644 index 0000000000000000000000000000000000000000..fd5f9e597ecb4ad6b5741dfacf3f0c2e24086391 --- 
/dev/null +++ b/data/2405.11867.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:16ad724ae3efbcc77bd2cc9c705fad059b8ea2b6db85790f01583a98199381a1 +size 1072910 diff --git a/data/2405.11905.png b/data/2405.11905.png new file mode 100644 index 0000000000000000000000000000000000000000..8a296bff6ef5613e81c0af5954e9b0e5bafe6e29 --- /dev/null +++ b/data/2405.11905.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2b0e8f590866e90dca4d8810f480fa9a4029d91f51a9badd7aae2e2425755480 +size 702280 diff --git a/data/2405.11913.png b/data/2405.11913.png new file mode 100644 index 0000000000000000000000000000000000000000..fa48a114b752e62236b59897f43f2083edeb2375 --- /dev/null +++ b/data/2405.11913.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2eb50945ee6bd5181ede63dcdb4772c792fff930c3171ac044b54ff4f040fa75 +size 803784 diff --git a/data/2405.12057.png b/data/2405.12057.png new file mode 100644 index 0000000000000000000000000000000000000000..d4c0e81f7f4173049b8f35a4664826cdac2c5760 --- /dev/null +++ b/data/2405.12057.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:659cce0d89b2305367ffe2660a33fb7e9f71b3f94ecd2a1e907b4efd5e271bcc +size 424014 diff --git a/data/2405.12200.png b/data/2405.12200.png new file mode 100644 index 0000000000000000000000000000000000000000..5ffa5b2c493ccb9408a357c378cb58f8b1666544 --- /dev/null +++ b/data/2405.12200.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d52c6539902eb6b0a53eb2c92c1e5f240f83c9444231aa183053e80b70b6a70d +size 1087526 diff --git a/data/2405.12509.png b/data/2405.12509.png new file mode 100644 index 0000000000000000000000000000000000000000..5f0267d86b0fda94f71aa8dfee9c9b029472fdc6 --- /dev/null +++ b/data/2405.12509.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce8556f5bd5861038f19a57fea8e399cc0d4855e36b3c8a1f6c7a4efd0bfcd08 +size 846754 diff --git a/data/2405.12724.png b/data/2405.12724.png new file mode 100644 index 0000000000000000000000000000000000000000..e9a0e5dde6ab52560eca30c8c064a14fe574d52c --- /dev/null +++ b/data/2405.12724.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b35a40515884fbcbca4a42ce4a45dd1a214c3a12bd17618eeb5442b0e6a7c37 +size 731858 diff --git a/data/2405.12725.png b/data/2405.12725.png new file mode 100644 index 0000000000000000000000000000000000000000..8bc4dbe9d29106c520e716d109f667f952e8909b --- /dev/null +++ b/data/2405.12725.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c8783bafcb83a645123e4535c74e8f915a982221df89a385d1c4e138fef44dec +size 806376 diff --git a/data/2405.12759.png b/data/2405.12759.png new file mode 100644 index 0000000000000000000000000000000000000000..efa62567894aab7c22a095255408b546d134ab76 --- /dev/null +++ b/data/2405.12759.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e593d4a02857ecb07b37f7594bff06b528d77d4e7e0d4c3f5ff08463b3b85789 +size 875987 diff --git a/data/2405.12978.png b/data/2405.12978.png new file mode 100644 index 0000000000000000000000000000000000000000..14f7bee7bd30cd9b80c17fb819bfe0977fb15ab3 --- /dev/null +++ b/data/2405.12978.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:36bec2834d075f6722d8120a56222c17e89c29384ba61bd3dc719029ba85f51f +size 750338 diff --git a/data/2405.12979.png b/data/2405.12979.png new file mode 100644 index 0000000000000000000000000000000000000000..6b66fcd0b2d04d7b0b5ab76e6f7a119543a09d83 --- 
/dev/null +++ b/data/2405.12979.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fed662105e6714ccc3aeee6b8d4452e2a4d32750fdd46f13fdfc01ee502e5c8a +size 757386 diff --git a/data/2405.13194.png b/data/2405.13194.png new file mode 100644 index 0000000000000000000000000000000000000000..59836a6831bcf8d8c8f4a0a54ae8a0fc55720024 --- /dev/null +++ b/data/2405.13194.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:497ebd8346358a9463f65445df5bf0fbb1de97d13d1bac8f7542ed585a28e924 +size 693277 diff --git a/data/2405.13870.png b/data/2405.13870.png new file mode 100644 index 0000000000000000000000000000000000000000..711831f91b9e105397d1a8b3210537d66c508a1a --- /dev/null +++ b/data/2405.13870.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2ccc19a1626a505e5ed0e3d46ca3ee5b610c004f6753643044b8b649386a3d47 +size 1337783 diff --git a/data/2405.14062.png b/data/2405.14062.png new file mode 100644 index 0000000000000000000000000000000000000000..7b4eecbade1370d49c616c31fd883f1fb40663b6 --- /dev/null +++ b/data/2405.14062.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bbf6cd57eed9d265626537846fe5f6ce3e7f3205bfedfb8ca99843fe55d8f917 +size 829789 diff --git a/data/2405.14077.png b/data/2405.14077.png new file mode 100644 index 0000000000000000000000000000000000000000..90cf59f1aeb85dd32013060f726015ac5eed50f1 --- /dev/null +++ b/data/2405.14077.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d6e35f5b9a8c5dd1799c322696129762106b412a4788f8b58e6e1a4a8fab1ee +size 863724 diff --git a/data/2405.14136.png b/data/2405.14136.png new file mode 100644 index 0000000000000000000000000000000000000000..9017240cb36b807887b8149365d4767da4f3f176 --- /dev/null +++ b/data/2405.14136.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4550eed714bd66651d223f60873cf0ee62f9f3f98e3f2e46da4d60fb6b2a169f +size 794556 diff --git a/data/2405.14294v1.png b/data/2405.14294v1.png new file mode 100644 index 0000000000000000000000000000000000000000..097a337c2d76e0ae820a7373800bdf4f0621f6eb --- /dev/null +++ b/data/2405.14294v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:99cc2731f31801bb21f0b4b8eb09a993d06255ee940dd0e54e7c95929c1bb466 +size 572430 diff --git a/data/2405.14467.png b/data/2405.14467.png new file mode 100644 index 0000000000000000000000000000000000000000..af802db5c9f2229281e0f4c49dc88ae9d15d1d4a --- /dev/null +++ b/data/2405.14467.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3256fffdd26ef2ba95ce1626ae74adb57ef65f0540dac3e7649b58cafdbfd772 +size 967821 diff --git a/data/2405.14497.png b/data/2405.14497.png new file mode 100644 index 0000000000000000000000000000000000000000..02d794fa4362a22d372695706bac79d5f709b72c --- /dev/null +++ b/data/2405.14497.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d14bfd902b6bc982999c411f6f8632640bddb6f600ae4b128f8b0d64dcf1363e +size 760449 diff --git a/data/2405.14602.png b/data/2405.14602.png new file mode 100644 index 0000000000000000000000000000000000000000..d9e313046c20d8a12ff1c10ebcd921f347483074 --- /dev/null +++ b/data/2405.14602.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:af37239ceaaaa42612f49099d4ce5965f48ff61550058b5db09c1abb8ced9372 +size 543973 diff --git a/data/2405.14677.png b/data/2405.14677.png new file mode 100644 index 0000000000000000000000000000000000000000..e17fbaa4cfe7e6cc8926b64435f14a438fcffb9f 
--- /dev/null +++ b/data/2405.14677.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bd17e66fe313bfccb10433573a9e6d721b8fa06a92ad0e5557865922d3ae0ba2 +size 568660 diff --git a/data/2405.14832.png b/data/2405.14832.png new file mode 100644 index 0000000000000000000000000000000000000000..e98a7b4f619365479d85b4769478240f94e4644f --- /dev/null +++ b/data/2405.14832.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bcb3f7c82fecf28d84b4226e76bbbd76503b4e99a811bc5d58d85332d8689b78 +size 503869 diff --git a/data/2405.14847.png b/data/2405.14847.png new file mode 100644 index 0000000000000000000000000000000000000000..376055f99e06abde412f3519fc1c193282e9c4a4 --- /dev/null +++ b/data/2405.14847.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a7585e0246079d430d5ed6752e9d55f3ada4175604cf0bc47e707ace22b2ec3c +size 1197656 diff --git a/data/2405.14855.png b/data/2405.14855.png new file mode 100644 index 0000000000000000000000000000000000000000..fd70d44aac22bad73a34986093cdb3a659da5c79 --- /dev/null +++ b/data/2405.14855.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e46cd3531e49cce2d344761dcdd552a0d1ea192e9d7d65701476ad24e4d6327 +size 1683011 diff --git a/data/2405.14873.png b/data/2405.14873.png new file mode 100644 index 0000000000000000000000000000000000000000..a632a3964dc14b0f3348d1d31229c54004e9aa96 --- /dev/null +++ b/data/2405.14873.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:33b2556b570cc0df929d81eee1af1386539a29f50e9345da92a0ab43254126ae +size 944011 diff --git a/data/2405.14881.png b/data/2405.14881.png new file mode 100644 index 0000000000000000000000000000000000000000..53bdc143d9bd87ad85f71757abe44b750d3689e3 --- /dev/null +++ b/data/2405.14881.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:436a262d78ff270dbc72e6ae15492fb7fb6ec702606ed557a5d70fb5b0b0eaf5 +size 1220995 diff --git a/data/2405.14934.png b/data/2405.14934.png new file mode 100644 index 0000000000000000000000000000000000000000..b42ffb5de196869727abb9d238e444dbd5ad35a8 --- /dev/null +++ b/data/2405.14934.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e80ec6e83035f965983544323662b0743727233d8c8850bb126a196f38ee69fb +size 791119 diff --git a/data/2405.15160.png b/data/2405.15160.png new file mode 100644 index 0000000000000000000000000000000000000000..3201ec8b80f0dccdd03dd08a2dbb161d0970bfbc --- /dev/null +++ b/data/2405.15160.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9bad6be35a4be86d63f0594af959aaaaf8ae65bec158bfb2636df0d817533d91 +size 584656 diff --git a/data/2405.15188.png b/data/2405.15188.png new file mode 100644 index 0000000000000000000000000000000000000000..fae8ff12044ca8263cd7b00a60fc9daa25447170 --- /dev/null +++ b/data/2405.15188.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f645606d39e5c1d3d9c8df11033fa67d82eb2935c7b4c556291487f265e3405b +size 806891 diff --git a/data/2405.15217.png b/data/2405.15217.png new file mode 100644 index 0000000000000000000000000000000000000000..f6f13e919eeef15ed2d4f4cc2331fbcf11a54208 --- /dev/null +++ b/data/2405.15217.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:64f21f01ccfca162a09d38773ef3d898bcafb0df0e7f96f01383bfa6d31a08b8 +size 801444 diff --git a/data/2405.15225.png b/data/2405.15225.png new file mode 100644 index 0000000000000000000000000000000000000000..a9c91da30561f3c4b853a5d45ed0b9463ff2b3d9 
--- /dev/null +++ b/data/2405.15225.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d2f79fb2ee44704717232bf2cc898c0907135fac336c0a6cc4b550e31ec804f3 +size 857530 diff --git a/data/2405.15265.png b/data/2405.15265.png new file mode 100644 index 0000000000000000000000000000000000000000..d96e7b0a04193841a697ba91ea786b0b98887f0f --- /dev/null +++ b/data/2405.15265.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d7a491fdb3f519047fd877d25df62652680481135410799806d1fb07151b7e06 +size 820648 diff --git a/data/2405.15605v2.png b/data/2405.15605v2.png new file mode 100644 index 0000000000000000000000000000000000000000..8d96702b5683639f4735ef1d5e7cdd9fd91a2535 --- /dev/null +++ b/data/2405.15605v2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce2f2e6080d0fd6cff33fdb7eb3d15676a1e1c5e61011b14b1fe02a3d53a5165 +size 553758 diff --git a/data/2405.15658.png b/data/2405.15658.png new file mode 100644 index 0000000000000000000000000000000000000000..4790dd337527a188a9ae0ec4b56bd00c96cee920 --- /dev/null +++ b/data/2405.15658.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d76d57c276ce753af6b240cd42e4e5797fa88de3dbfdd86880028b4e9e6d7b53 +size 540931 diff --git a/data/2405.15684.png b/data/2405.15684.png new file mode 100644 index 0000000000000000000000000000000000000000..1dca30bc816a4a7752e3a568fa09e12211df34fc --- /dev/null +++ b/data/2405.15684.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:17d01b54f5e4c0d57aabcf9011910b7aa0bccc5d0822c71bde279ce94de328a8 +size 562320 diff --git a/data/2405.16009.png b/data/2405.16009.png new file mode 100644 index 0000000000000000000000000000000000000000..a5acff1304b066d36339f45845a5ba46a0d67978 --- /dev/null +++ b/data/2405.16009.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e1749a7f1189977a48bd6c5c562fc0e9e8943d4909a53edd9c017a0fced80ff3 +size 535326 diff --git a/data/2405.16038.png b/data/2405.16038.png new file mode 100644 index 0000000000000000000000000000000000000000..868a6609f392eb56535b887d3683945756d33055 --- /dev/null +++ b/data/2405.16038.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cc6e58b4537469afaf01423f2316736555fdfa42a1770091b81c86e3d2af5199 +size 985015 diff --git a/data/2405.16108.png b/data/2405.16108.png new file mode 100644 index 0000000000000000000000000000000000000000..953aebb574f0f6c8902c35f7f1bd31cf3e83968e --- /dev/null +++ b/data/2405.16108.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:105165c087102c770723b18a9e03604d700475662a768f825f61c0f1d5733e51 +size 760229 diff --git a/data/2405.16585.png b/data/2405.16585.png new file mode 100644 index 0000000000000000000000000000000000000000..e1e6bf5ede42e16f1c9c638a912a22b54a4b9e31 --- /dev/null +++ b/data/2405.16585.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:705d899e4e9329c84596949c5b90072d904f5d935a433bedaf5f76af86a45f66 +size 767632 diff --git a/data/2405.16790.png b/data/2405.16790.png new file mode 100644 index 0000000000000000000000000000000000000000..72324636e567e0c176c481113a6a0205d90bdafa --- /dev/null +++ b/data/2405.16790.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:78d7eb84e517596329a9e8a6775ba48fb7a6660eef1a27c8ab09fc7d614fb752 +size 890405 diff --git a/data/2405.16873.png b/data/2405.16873.png new file mode 100644 index 
0000000000000000000000000000000000000000..11eb565f3ff1933dad3c20ef1db923c56ee17638 --- /dev/null +++ b/data/2405.16873.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:daf3fa9dd571dd5673c135bf01dc64dacf3623d78a645ca374c06f22133957b2 +size 520774 diff --git a/data/2405.16925.png b/data/2405.16925.png new file mode 100644 index 0000000000000000000000000000000000000000..2271100be3f0605e4961770e6046c75ca2e0a4d6 --- /dev/null +++ b/data/2405.16925.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3198626edb8f6726266bf2f858814539cd82bbb1fe67d989d353845358a96bce +size 876858 diff --git a/data/2405.16996.png b/data/2405.16996.png new file mode 100644 index 0000000000000000000000000000000000000000..c69d8e2d9d2876974884bdbab44f057b307afd18 --- /dev/null +++ b/data/2405.16996.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8f6c60df701ecc86f3c1169f11879f014630da4a94865b74f316f24d1b9316b2 +size 724080 diff --git a/data/2405.17104v1.png b/data/2405.17104v1.png new file mode 100644 index 0000000000000000000000000000000000000000..34a18b6a91d9e8f52354cffe6530c5f221178013 --- /dev/null +++ b/data/2405.17104v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:31e7dea0f14029fe028c5c926f4114344d6ede33aaf5e5b4ec9b4d9d0e6d6c57 +size 552309 diff --git a/data/2405.17240.png b/data/2405.17240.png new file mode 100644 index 0000000000000000000000000000000000000000..04699d9032bfbad94e0649963e38e9864ba639d8 --- /dev/null +++ b/data/2405.17240.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9a888c46a9e6d2ae5836be9f0428197bc4baceedf220cec98d07be2e7c569229 +size 799404 diff --git a/data/2405.17405.png b/data/2405.17405.png new file mode 100644 index 0000000000000000000000000000000000000000..c7045d4f2a45f86129a89851fa61ca6d222b089e --- /dev/null +++ b/data/2405.17405.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a0375894e8131e50732a4c120c1c5ac6db40680c1c0bee2f88d5100033e46148 +size 930243 diff --git a/data/2405.17429.png b/data/2405.17429.png new file mode 100644 index 0000000000000000000000000000000000000000..eaff5c41029ab8f3bde7318750d607cc666c5cec --- /dev/null +++ b/data/2405.17429.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f01c7d17e7f8cf73d4fcd6ac02f8797ae5f33e691a163f3e01c9291659dc52b +size 630225 diff --git a/data/2405.17725.png b/data/2405.17725.png new file mode 100644 index 0000000000000000000000000000000000000000..d95df35fd8772d9d2bfe0b9152898829db17e484 --- /dev/null +++ b/data/2405.17725.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fba43a02e2aa7aa4e92a9039798a3d8b696425bf8c5217d61c037c0564e53055 +size 1272866 diff --git a/data/2405.17765.png b/data/2405.17765.png new file mode 100644 index 0000000000000000000000000000000000000000..55096936b59e7e5cccfac507b3a7abe6c30d9cc3 --- /dev/null +++ b/data/2405.17765.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6bb4887b0c9ccfa4e17cceb59d7b51d652b0fb7b5ddb9ab97b205e9a8c8e011d +size 977762 diff --git a/data/2405.17876.png b/data/2405.17876.png new file mode 100644 index 0000000000000000000000000000000000000000..626120b9b51c91a0d238edbbdc1719a98ec744b3 --- /dev/null +++ b/data/2405.17876.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ff31669e4965ca4bcdbf43e635d5eb53d34b34dfad1677cd7a1fdb4266608478 +size 733904 diff --git a/data/2405.18131.png b/data/2405.18131.png new file mode 100644 
index 0000000000000000000000000000000000000000..91b82e308538585afbccfa5cdc1f3c47e7e2c93a --- /dev/null +++ b/data/2405.18131.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:04339362f63a6a89d7015fc95386b95ad61c9af372e56b518f22767ec3179d5b +size 828212 diff --git a/data/2405.18322.png b/data/2405.18322.png new file mode 100644 index 0000000000000000000000000000000000000000..21db1b8f7b5a6a74c12428d04b9d9ec35d7634cc --- /dev/null +++ b/data/2405.18322.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bdaf3b265ff6f79ab3b8babfc71d45c99624dfb7b9c7febe47b37f4bb9bfd629 +size 875576 diff --git a/data/2405.18416v1.png b/data/2405.18416v1.png new file mode 100644 index 0000000000000000000000000000000000000000..acc942679f013806c6bf9eb5389f66d00c1e6d7d --- /dev/null +++ b/data/2405.18416v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e2d4d4538b2aaa8b3c8650898a6e4e0304b11f5a0be3f1adbca6491a03259318 +size 536388 diff --git a/data/2405.18437.png b/data/2405.18437.png new file mode 100644 index 0000000000000000000000000000000000000000..bc913c2d38b76f6ff9348ab2da08f0dbce54994a --- /dev/null +++ b/data/2405.18437.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:449d5fede5c276c7081f7fbc4e144ff1b42174b5b82e350c824c7259b7fc13f5 +size 799722 diff --git a/data/2405.18572.png b/data/2405.18572.png new file mode 100644 index 0000000000000000000000000000000000000000..26d68a8010c9f8aa86926d06156dc01d3c252432 --- /dev/null +++ b/data/2405.18572.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cab86e213bcb3e7d8b8f6f005990b6439dc2ba574e53b9e115f46737853bdc63 +size 475362 diff --git a/data/2405.18706.png b/data/2405.18706.png new file mode 100644 index 0000000000000000000000000000000000000000..d8fcd0d62cb979faee3d2c385acfd4dcfb512e2a --- /dev/null +++ b/data/2405.18706.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:da95d9626d0d1b279971679ef8afc7a2f1a19d3113d00ff86970c062f058ce3e +size 1099445 diff --git a/data/2405.18715.png b/data/2405.18715.png new file mode 100644 index 0000000000000000000000000000000000000000..79994500be6317f61695965a868933fd6c8e4d54 --- /dev/null +++ b/data/2405.18715.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d894af9c5f4c8c99c76432afd77382d7ee10ba1d0865813cabf43069c5e1db4f +size 1377141 diff --git a/data/2405.18810.png b/data/2405.18810.png new file mode 100644 index 0000000000000000000000000000000000000000..716bdb681833e8e0a31036ff123bdac36c1e4f6e --- /dev/null +++ b/data/2405.18810.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4395801727c94d4882bd42fa9b86a67f20e8131803dc23639ca4208fe47efb38 +size 797065 diff --git a/data/2405.19005.png b/data/2405.19005.png new file mode 100644 index 0000000000000000000000000000000000000000..6b320017a7d7a206da2e649f8c3946aa987ce65a --- /dev/null +++ b/data/2405.19005.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ebdc1993a7d5c80880fddde10bd234136a98b8f8ae05b9b670911ffa90c91f39 +size 537749 diff --git a/data/2405.19074.png b/data/2405.19074.png new file mode 100644 index 0000000000000000000000000000000000000000..bd116d1174043a973de68eeca743658d52929ee4 --- /dev/null +++ b/data/2405.19074.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:50167a92969e200b5856f88836fcccf5d84d7fe70908afb2ed99f452c4199e9c +size 717081 diff --git a/data/2405.19283.png b/data/2405.19283.png new file mode 
100644 index 0000000000000000000000000000000000000000..d3ef640838867f3dc1b2cdab98b56305d462eae3 --- /dev/null +++ b/data/2405.19283.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c5c1b93f26fb1318e1d1857bb38954f17a428edad3aa847c52804523414bc313 +size 808327 diff --git a/data/2405.19295.png b/data/2405.19295.png new file mode 100644 index 0000000000000000000000000000000000000000..9d52fb1c69090fd7622eef079645b7bd39adf815 --- /dev/null +++ b/data/2405.19295.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9a3d74982e4b8ff6d30005f81c942a87844fdaab8b1ade9f98f6553e3c282c1e +size 846181 diff --git a/data/2405.19465.png b/data/2405.19465.png new file mode 100644 index 0000000000000000000000000000000000000000..f9edb0f99b92faa88366c24143f3a7172b219010 --- /dev/null +++ b/data/2405.19465.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0fe10e5fc2577c4099032ca360907844c4422a4b0540b1f575d77c3e9e8cb20a +size 1039537 diff --git a/data/2405.19646.png b/data/2405.19646.png new file mode 100644 index 0000000000000000000000000000000000000000..4ec0c7f76ce335cd611bf14a0751d5c6c910bbb7 --- /dev/null +++ b/data/2405.19646.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cc6edbb07ca161af8cd12e9e759a2812daa4bd5dd7ccc792c1726d3bc8ad4bf1 +size 1087327 diff --git a/data/2405.19718.png b/data/2405.19718.png new file mode 100644 index 0000000000000000000000000000000000000000..36b5486d3b9280e79cb6087dbe1f7d4a12e23e7f --- /dev/null +++ b/data/2405.19718.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce643f9163fd6d76655d9a7d25c45cd325c21390fb04c39871da7bd99ecdb38e +size 1419273 diff --git a/data/2405.19775.png b/data/2405.19775.png new file mode 100644 index 0000000000000000000000000000000000000000..af6de0170358a893e4c38503669db74f718caab3 --- /dev/null +++ b/data/2405.19775.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:111cfa22d61474839a87ba1a2c515ff712c78ad32f145e048bccb4726fcb3078 +size 727302 diff --git a/data/2405.19819.png b/data/2405.19819.png new file mode 100644 index 0000000000000000000000000000000000000000..910f82818ee41f33e3c36bfee075e71a1fd03697 --- /dev/null +++ b/data/2405.19819.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:12f77238a39ccad50492d99c341f6816103eec0962d1adbef2d3507e3816f98e +size 1123418 diff --git a/data/2405.19833.png b/data/2405.19833.png new file mode 100644 index 0000000000000000000000000000000000000000..63b2ef03f01bdabc90094eaed3518ffa1add4043 --- /dev/null +++ b/data/2405.19833.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bde21ae08368fb4f7bd22efcf7d90c00bcc0986a7e77b4ffa4242e03128c51ed +size 712605 diff --git a/data/2405.19876.png b/data/2405.19876.png new file mode 100644 index 0000000000000000000000000000000000000000..1f242a00573d90d93312b75435d1f83f3da4dbf5 --- /dev/null +++ b/data/2405.19876.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:df055deddf3f906845f0421766f24d3a8a82b87a711c65c361f59adeaa329b11 +size 1387165 diff --git a/data/2405.19899.png b/data/2405.19899.png new file mode 100644 index 0000000000000000000000000000000000000000..dc12f0261e02f5c03e933261e8f308b79e412d94 --- /dev/null +++ b/data/2405.19899.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:af96520740ae29dc8b63e5a3a341d1f4940b91b3d31b3ff01b65576959eefa1d +size 741584 diff --git a/data/2405.19902.png b/data/2405.19902.png new file mode 
100644 index 0000000000000000000000000000000000000000..6e1f0668ece83a62777e99b0a664fdf2a1ca4c5e --- /dev/null +++ b/data/2405.19902.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a18aef15ff76fd43ab06f543a70ee2164df7dfa3227c064403816a58cf856294 +size 805635 diff --git a/data/2405.20161v1.png b/data/2405.20161v1.png new file mode 100644 index 0000000000000000000000000000000000000000..a07b704347bfdf3fb5cfa7870cfcdb083adeb629 --- /dev/null +++ b/data/2405.20161v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e3afcd3c046607dd68df61451660caeea25e840a575ab13cd995ea7aae6be38e +size 721157 diff --git a/data/2405.20305.png b/data/2405.20305.png new file mode 100644 index 0000000000000000000000000000000000000000..a6bba6b33928f1ff618a6aa6ced532e995d6f8a9 --- /dev/null +++ b/data/2405.20305.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1b8e493e808cb28aedea892901c163dbafa3fac4e1cc7500c058ca51dcd42a6 +size 810584 diff --git a/data/2405.20319v1.png b/data/2405.20319v1.png new file mode 100644 index 0000000000000000000000000000000000000000..8a80166b335065678612da8e05ed05992f77804f --- /dev/null +++ b/data/2405.20319v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d78fbf211fd445f64f06903255396bc935e2ddd246c355133f8e680a25f4500c +size 882436 diff --git a/data/2405.20324.png b/data/2405.20324.png new file mode 100644 index 0000000000000000000000000000000000000000..67b5167ed4c65608c280dd88238cd40d0215d2eb --- /dev/null +++ b/data/2405.20324.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3060fc703d89807d0e3a5daff69644656826f51837a2a33d9ab123582d58d885 +size 2320964 diff --git a/data/2405.20654v1.png b/data/2405.20654v1.png new file mode 100644 index 0000000000000000000000000000000000000000..d373242575057b8665d49c2af142df32c9c3781d --- /dev/null +++ b/data/2405.20654v1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b48c00eb5f0e8028274c0676796942f87449fb03798ce7fddaa01ea00b378b43 +size 849742 diff --git a/data/2405.20729.png b/data/2405.20729.png new file mode 100644 index 0000000000000000000000000000000000000000..2f9d9a1cab4b5e6a927afd07ed67d05fce3f43d7 --- /dev/null +++ b/data/2405.20729.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f6e30404a73e738c9095e7ef39ff0960a8a4b0f035e9bf6614fdfedd431b565 +size 820175 diff --git a/data/2405.20786.png b/data/2405.20786.png new file mode 100644 index 0000000000000000000000000000000000000000..fc484c0b759bdcb7d46da12ef86117bcce6ba9ba --- /dev/null +++ b/data/2405.20786.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3b29e891c9ae1538ad4d894c08a8502eccad52608fced17c7c02c8cc1437fb7e +size 814555 diff --git a/fiftyone.yml b/fiftyone.yml new file mode 100644 index 0000000000000000000000000000000000000000..0a5ca2510aedb9c9a597747a2dd3db849255f0ee --- /dev/null +++ b/fiftyone.yml @@ -0,0 +1,5 @@ +format: FiftyOneDataset +name: cvpr2024_papers +tags: +- fiftyone +- image diff --git a/metadata.json b/metadata.json new file mode 100644 index 0000000000000000000000000000000000000000..01b357127c52caef6557d902fe70d0c421ae5506 --- /dev/null +++ b/metadata.json @@ -0,0 +1 @@ +{"_id": {"$oid": "6669f097da8041005727ef3f"}, "name": "cvpr2024_papers", "slug": "cvpr2024-papers", "version": "0.24.0", "created_at": {"$date": 1718218903512}, "last_loaded_at": {"$date": 1718218903674}, "sample_collection_name": "samples.6669f097da8041005727ef3f", "persistent": true, 
"media_type": "image", "group_media_types": {}, "tags": [], "info": {}, "app_config": {"media_fields": ["filepath"], "grid_media_field": "filepath", "modal_media_field": "filepath", "plugins": {}, "media_fallback": false}, "classes": {}, "default_classes": [], "mask_targets": {}, "default_mask_targets": {}, "skeletons": {}, "sample_fields": [{"name": "id", "ftype": "fiftyone.core.fields.ObjectIdField", "embedded_doc_type": null, "subfield": null, "fields": [], "db_field": "_id", "description": null, "info": null}, {"name": "filepath", "ftype": "fiftyone.core.fields.StringField", "embedded_doc_type": null, "subfield": null, "fields": [], "db_field": "filepath", "description": null, "info": null}, {"name": "tags", "ftype": "fiftyone.core.fields.ListField", "embedded_doc_type": null, "subfield": "fiftyone.core.fields.StringField", "fields": [], "db_field": "tags", "description": null, "info": null}, {"name": "_media_type", "ftype": "fiftyone.core.fields.StringField", "embedded_doc_type": null, "subfield": null, "fields": [], "db_field": "_media_type", "description": null, "info": null}, {"name": "_rand", "ftype": "fiftyone.core.fields.FloatField", "embedded_doc_type": null, "subfield": null, "fields": [], "db_field": "_rand", "description": null, "info": null}, {"name": "_dataset_id", "ftype": "fiftyone.core.fields.ObjectIdField", "embedded_doc_type": null, "subfield": null, "fields": [], "db_field": "_dataset_id", "description": null, "info": null}, {"name": "metadata", "ftype": "fiftyone.core.fields.EmbeddedDocumentField", "embedded_doc_type": "fiftyone.core.metadata.ImageMetadata", "subfield": null, "fields": [{"name": "size_bytes", "ftype": "fiftyone.core.fields.IntField", "embedded_doc_type": null, "subfield": null, "fields": [], "db_field": "size_bytes", "description": null, "info": null}, {"name": "mime_type", "ftype": "fiftyone.core.fields.StringField", "embedded_doc_type": null, "subfield": null, "fields": [], "db_field": "mime_type", "description": null, "info": null}, {"name": "width", "ftype": "fiftyone.core.fields.IntField", "embedded_doc_type": null, "subfield": null, "fields": [], "db_field": "width", "description": null, "info": null}, {"name": "height", "ftype": "fiftyone.core.fields.IntField", "embedded_doc_type": null, "subfield": null, "fields": [], "db_field": "height", "description": null, "info": null}, {"name": "num_channels", "ftype": "fiftyone.core.fields.IntField", "embedded_doc_type": null, "subfield": null, "fields": [], "db_field": "num_channels", "description": null, "info": null}], "db_field": "metadata", "description": null, "info": null}, {"name": "arXiv_link", "ftype": "fiftyone.core.fields.StringField", "embedded_doc_type": null, "subfield": null, "fields": [], "db_field": "arXiv_link", "description": "Link to paper abstract on arxiv", "info": null}, {"name": "other_link", "ftype": "fiftyone.core.fields.StringField", "embedded_doc_type": null, "subfield": null, "fields": [], "db_field": "other_link", "description": "Link to other resource found for paper", "info": null}, {"name": "title", "ftype": "fiftyone.core.fields.StringField", "embedded_doc_type": null, "subfield": null, "fields": [], "db_field": "title", "description": "The title of the paper", "info": null}, {"name": "abstract", "ftype": "fiftyone.core.fields.StringField", "embedded_doc_type": null, "subfield": null, "fields": [], "db_field": "abstract", "description": "The abstract of the paper", "info": null}, {"name": "authors_list", "ftype": "fiftyone.core.fields.ListField", "embedded_doc_type": 
null, "subfield": "fiftyone.core.fields.StringField", "fields": [], "db_field": "authors_list", "description": "The authors listed on the paper", "info": null}, {"name": "keywords", "ftype": "fiftyone.core.fields.ListField", "embedded_doc_type": null, "subfield": "fiftyone.core.fields.StringField", "fields": [], "db_field": "keywords", "description": "Keywords associated with this paper, extracted using GPT-4o", "info": null}, {"name": "category_name", "ftype": "fiftyone.core.fields.StringField", "embedded_doc_type": null, "subfield": null, "fields": [], "db_field": "category_name", "description": "The category of the paper", "info": null}, {"name": "all_categories", "ftype": "fiftyone.core.fields.ListField", "embedded_doc_type": null, "subfield": "fiftyone.core.fields.StringField", "fields": [], "db_field": "all_categories", "description": "The authors listed on the paper", "info": null}], "frame_fields": [], "saved_views": [], "workspaces": [], "annotation_runs": {}, "brain_methods": {}, "evaluations": {}, "runs": {}} \ No newline at end of file diff --git a/samples.json b/samples.json new file mode 100644 index 0000000000000000000000000000000000000000..c379a0e7f6bf2ba561530cbe4239cadbfbcc1275 --- /dev/null +++ b/samples.json @@ -0,0 +1 @@ +{"samples": [{"_id": {"$oid": "6669f097da8041005727ef40"}, "filepath": "data/2401.16001.png", "tags": [], "_media_type": "image", "_rand": 0.9999041891525833, "arXiv_link": "https://arxiv.org/abs/2401.16001", "other_link": "", "title": "Semantic-Aware Multi-Label Adversarial Attacks", "abstract": "Deep learning methods can not only detect false data injection attacks (FDIA)\nbut also locate attacks of FDIA. Although adversarial false data injection\nattacks (AFDIA) based on deep learning vulnerabilities have been studied in the\nfield of single-label FDIA detection, the adversarial attack and defense\nagainst multi-label FDIA locational detection are still not involved. To bridge\nthis gap, this paper first explores the multi-label adversarial example attacks\nagainst multi-label FDIA locational detectors and proposes a general\nmulti-label adversarial attack framework, namely muLti-labEl adverSarial falSe\ndata injectiON attack (LESSON). The proposed LESSON attack framework includes\nthree key designs, namely Perturbing State Variables, Tailored Loss Function\nDesign, and Change of Variables, which can help find suitable multi-label\nadversarial perturbations within the physical constraints to circumvent both\nBad Data Detection (BDD) and Neural Attack Location (NAL). 
Four typical LESSON\nattacks based on the proposed framework and two dimensions of attack objectives\nare examined, and the experimental results demonstrate the effectiveness of the\nproposed attack framework, posing serious and pressing security concerns in\nsmart grids.", "keywords": [], "authors_list": ["Hassan Mahmood", "Ehsan Elhamifar"], "category_name": "Cryptography and Security", "all_categories": ["Cryptography and Security"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef41"}, "filepath": "data/2308.04556.png", "tags": [], "_media_type": "image", "_rand": 0.9998286702515592, "arXiv_link": "https://arxiv.org/abs/2308.04556", "other_link": "https://github.com/NVlabs/FocalFormer3D}.", "title": "HINTED: Hard Instance Enhanced Detector with Mixed-Density Feature Fusion for Sparsely-Supervised 3D Object Detection", "abstract": "False negatives (FN) in 3D object detection, {\\em e.g.}, missing predictions\nof pedestrians, vehicles, or other obstacles, can lead to potentially dangerous\nsituations in autonomous driving. While being fatal, this issue is understudied\nin many current 3D detection methods. In this work, we propose Hard Instance\nProbing (HIP), a general pipeline that identifies \\textit{FN} in a multi-stage\nmanner and guides the models to focus on excavating difficult instances. For 3D\nobject detection, we instantiate this method as FocalFormer3D, a simple yet\neffective detector that excels at excavating difficult objects and improving\nprediction recall. FocalFormer3D features a multi-stage query generation to\ndiscover hard objects and a box-level transformer decoder to efficiently\ndistinguish objects from massive object candidates. Experimental results on the\nnuScenes and Waymo datasets validate the superior performance of FocalFormer3D.\nThe advantage leads to strong performance on both detection and tracking, in\nboth LiDAR and multi-modal settings. Notably, FocalFormer3D achieves a 70.5 mAP\nand 73.9 NDS on nuScenes detection benchmark, while the nuScenes tracking\nbenchmark shows 72.1 AMOTA, both ranking 1st place on the nuScenes LiDAR\nleaderboard. Our code is available at\n\\url{https://github.com/NVlabs/FocalFormer3D}.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Qiming Xia", "Wei Ye", "Hai Wu", "Shijia Zhao", "Leyuan Xing", "Xun Huang", "Jinhao Deng", "Xin Li", "Chenglu Wen", "Cheng Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef42"}, "filepath": "data/2311.17833.png", "tags": [], "_media_type": "image", "_rand": 0.9992862318456817, "arXiv_link": "https://arxiv.org/abs/2311.17833", "other_link": "", "title": "DiG-IN: Diffusion Guidance for Investigating Networks - Uncovering Classifier Differences, Neuron Visualisations, and Visual Counterfactual Explanations", "abstract": "While deep learning has led to huge progress in complex image classification\ntasks like ImageNet, unexpected failure modes, e.g. via spurious features, call\ninto question how reliably these classifiers work in the wild. Furthermore, for\nsafety-critical tasks the black-box nature of their decisions is problematic,\nand explanations or at least methods which make decisions plausible are needed\nurgently. 
In this paper, we address these problems by generating images that\noptimize a classifier-derived objective using a framework for guided image\ngeneration. We analyze the decisions of image classifiers by visual\ncounterfactual explanations (VCEs), detection of systematic mistakes by\nanalyzing images where classifiers maximally disagree, and visualization of\nneurons and spurious features. In this way, we validate existing observations,\ne.g. the shape bias of adversarially robust models, as well as novel failure\nmodes, e.g. systematic errors of zero-shot CLIP classifiers. Moreover, our VCEs\noutperform previous work while being more versatile.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Maximilian Augustin", "Yannic Neuhaus", "Matthias Hein"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef43"}, "filepath": "data/2402.08714.png", "tags": [], "_media_type": "image", "_rand": 0.9997799520223412, "arXiv_link": "https://arxiv.org/abs/2402.08714", "other_link": "", "title": "PRDP: Proximal Reward Difference Prediction for Large-Scale Reward Finetuning of Diffusion Models", "abstract": "Reward finetuning has emerged as a promising approach to aligning foundation\nmodels with downstream objectives. Remarkable success has been achieved in the\nlanguage domain by using reinforcement learning (RL) to maximize rewards that\nreflect human preference. However, in the vision domain, existing RL-based\nreward finetuning methods are limited by their instability in large-scale\ntraining, rendering them incapable of generalizing to complex, unseen prompts.\nIn this paper, we propose Proximal Reward Difference Prediction (PRDP),\nenabling stable black-box reward finetuning for diffusion models for the first\ntime on large-scale prompt datasets with over 100K prompts. Our key innovation\nis the Reward Difference Prediction (RDP) objective that has the same optimal\nsolution as the RL objective while enjoying better training stability.\nSpecifically, the RDP objective is a supervised regression objective that tasks\nthe diffusion model with predicting the reward difference of generated image\npairs from their denoising trajectories. We theoretically prove that the\ndiffusion model that obtains perfect reward difference prediction is exactly\nthe maximizer of the RL objective. We further develop an online algorithm with\nproximal updates to stably optimize the RDP objective. In experiments, we\ndemonstrate that PRDP can match the reward maximization ability of\nwell-established RL-based methods in small-scale training. 
Furthermore, through\nlarge-scale training on text prompts from the Human Preference Dataset v2 and\nthe Pick-a-Pic v1 dataset, PRDP achieves superior generation quality on a\ndiverse set of complex, unseen prompts whereas RL-based methods completely\nfail.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Fei Deng", "Qifei Wang", "Wei Wei", "Tingbo Hou", "Matthias Grundmann"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef44"}, "filepath": "data/2312.15010.png", "tags": [], "_media_type": "image", "_rand": 0.9999678644313978, "arXiv_link": "https://arxiv.org/abs/2312.15010", "other_link": "", "title": "SI-MIL: Taming Deep MIL for Self-Interpretability in Gigapixel Histopathology", "abstract": "Introducing interpretability and reasoning into Multiple Instance Learning\n(MIL) methods for Whole Slide Image (WSI) analysis is challenging, given the\ncomplexity of gigapixel slides. Traditionally, MIL interpretability is limited\nto identifying salient regions deemed pertinent for downstream tasks, offering\nlittle insight to the end-user (pathologist) regarding the rationale behind\nthese selections. To address this, we propose Self-Interpretable MIL (SI-MIL),\na method intrinsically designed for interpretability from the very outset.\nSI-MIL employs a deep MIL framework to guide an interpretable branch grounded\non handcrafted pathological features, facilitating linear predictions. Beyond\nidentifying salient regions, SI-MIL uniquely provides feature-level\ninterpretations rooted in pathological insights for WSIs. Notably, SI-MIL, with\nits linear prediction constraints, challenges the prevalent myth of an\ninevitable trade-off between model interpretability and performance,\ndemonstrating competitive results compared to state-of-the-art methods on\nWSI-level prediction tasks across three cancer types. In addition, we\nthoroughly benchmark the local and global-interpretability of SI-MIL in terms\nof statistical analysis, a domain expert study, and desiderata of\ninterpretability, namely, user-friendliness and faithfulness.", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Saarthak Kapse", "Pushpak Pati", "Srijan Das", "Jingwei Zhang", "Chao Chen", "Maria Vakalopoulou", "Joel Saltz", "Dimitris Samaras", "Rajarsi Gupta", "Prateek Prasanna"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef45"}, "filepath": "data/2404.08079.png", "tags": [], "_media_type": "image", "_rand": 0.9999985582094338, "arXiv_link": "https://arxiv.org/abs/2404.08079", "other_link": "", "title": "DIMAT: Decentralized Iterative Merging-And-Training for Deep Learning Models", "abstract": "Recent advances in decentralized deep learning algorithms have demonstrated\ncutting-edge performance on various tasks with large pre-trained models.\nHowever, a pivotal prerequisite for achieving this level of competitiveness is\nthe significant communication and computation overheads when updating these\nmodels, which prohibits the applications of them to real-world scenarios. 
To\naddress this issue, drawing inspiration from advanced model merging techniques\nwithout requiring additional training, we introduce the Decentralized Iterative\nMerging-And-Training (DIMAT) paradigm--a novel decentralized deep learning\nframework. Within DIMAT, each agent is trained on their local data and\nperiodically merged with their neighboring agents using advanced model merging\ntechniques like activation matching until convergence is achieved. DIMAT\nprovably converges with the best available rate for nonconvex functions with\nvarious first-order methods, while yielding tighter error bounds compared to\nthe popular existing approaches. We conduct a comprehensive empirical analysis\nto validate DIMAT's superiority over baselines across diverse computer vision\ntasks sourced from multiple datasets. Empirical results validate our\ntheoretical claims by showing that DIMAT attains faster and higher initial gain\nin accuracy with independent and identically distributed (IID) and non-IID\ndata, incurring lower communication overhead. This DIMAT paradigm presents a\nnew opportunity for the future decentralized learning, enhancing its\nadaptability to real-world with sparse and light-weight communication and\ncomputation.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Nastaran Saadati", "Minh Pham", "Nasla Saleem", "Joshua R. Waite", "Aditya Balu", "Zhanhong Jiang", "Chinmay Hegde", "Soumik Sarkar"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition", "Optimization and Control"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef46"}, "filepath": "data/2310.05364.png", "tags": [], "_media_type": "image", "_rand": 0.9996093258523002, "arXiv_link": "https://arxiv.org/abs/2310.05364", "other_link": "", "title": "Mask4Align: Aligned Entity Prompting with Color Masks for Multi-Entity Localization Problem", "abstract": "The objective of Entity Alignment (EA) is to identify equivalent entity pairs\nfrom multiple Knowledge Graphs (KGs) and create a more comprehensive and\nunified KG. The majority of EA methods have primarily focused on the structural\nmodality of KGs, lacking exploration of multi-modal information. A few\nmulti-modal EA methods have made good attempts in this field. Still, they have\ntwo shortcomings: (1) inconsistent and inefficient modality modeling that\ndesigns complex and distinct models for each modality; (2) ineffective modality\nfusion due to the heterogeneous nature of modalities in EA. 
To tackle these\nchallenges, we propose PathFusion, consisting of two main components: (1) MSP,\na unified modeling approach that simplifies the alignment process by\nconstructing paths connecting entities and modality nodes to represent multiple\nmodalities; (2) IRF, an iterative fusion method that effectively combines\ninformation from different modalities using the path as an information carrier.\nExperimental results on real-world datasets demonstrate the superiority of\nPathFusion over state-of-the-art methods, with 22.4%-28.9% absolute improvement\non Hits@1, and 0.194-0.245 absolute improvement on MRR.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Haoquan Zhang", "Ronggang Huang", "Yi Xie", "Huaidong Zhang"], "category_name": "Computation and Language", "all_categories": ["Computation and Language", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef47"}, "filepath": "data/2404.00678.png", "tags": [], "_media_type": "image", "_rand": 0.9992649796228699, "arXiv_link": "https://arxiv.org/abs/2404.00678", "other_link": "", "title": "OmniSDF: Scene Reconstruction using Omnidirectional Signed Distance Functions and Adaptive Binoctrees", "abstract": "We present a method to reconstruct indoor and outdoor static scene geometry\nand appearance from an omnidirectional video moving in a small circular sweep.\nThis setting is challenging because of the small baseline and large depth\nranges, making it difficult to find ray crossings. To better constrain the\noptimization, we estimate geometry as a signed distance field within a\nspherical binoctree data structure and use a complementary efficient tree\ntraversal strategy based on a breadth-first search for sampling. Unlike regular\ngrids or trees, the shape of this structure well-matches the camera setting,\ncreating a better memory-quality trade-off. From an initial depth estimate, the\nbinoctree is adaptively subdivided throughout the optimization; previous\nmethods use a fixed depth that leaves the scene undersampled. In comparison\nwith three neural optimization methods and two non-neural methods, ours shows\ndecreased geometry error on average, especially in a detailed scene, while\nsignificantly reducing the required number of voxels to represent such details.", "keywords": ["Scene analysis and understanding", "Efficient and scalable vision"], "authors_list": ["Hakyeong Kim", "Andreas Meuleman", "Hyeonjoong Jang", "James Tompkin", "Min H. Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef48"}, "filepath": "data/2404.13024.png", "tags": [], "_media_type": "image", "_rand": 0.9991506989146026, "arXiv_link": "https://arxiv.org/abs/2404.13024", "other_link": "", "title": "BANF: Band-limited Neural Fields for Levels of Detail Reconstruction", "abstract": "Largely due to their implicit nature, neural fields lack a direct mechanism\nfor filtering, as Fourier analysis from discrete signal processing is not\ndirectly applicable to these representations. Effective filtering of neural\nfields is critical to enable level-of-detail processing in downstream\napplications, and support operations that involve sampling the field on regular\ngrids (e.g. marching cubes). 
Existing methods that attempt to decompose neural\nfields in the frequency domain either resort to heuristics or require extensive\nmodifications to the neural field architecture. We show that via a simple\nmodification, one can obtain neural fields that are low-pass filtered, and in\nturn show how this can be exploited to obtain a frequency decomposition of the\nentire signal. We demonstrate the validity of our technique by investigating\nlevel-of-detail reconstruction, and showing how coarser representations can be\ncomputed effectively.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Ahan Shabanov", "Shrisudhan Govindarajan", "Cody Reading", "Leili Goli", "Daniel Rebain", "Kwang Moo Yi", "Andrea Tagliasacchi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef49"}, "filepath": "data/2403.07589.png", "tags": [], "_media_type": "image", "_rand": 0.999628720274447, "arXiv_link": "https://arxiv.org/abs/2403.07589", "other_link": "", "title": "PeLK: Parameter-efficient Large Kernel ConvNets with Peripheral Convolution", "abstract": "Recently, some large kernel convnets strike back with appealing performance\nand efficiency. However, given the square complexity of convolution, scaling up\nkernels can bring about an enormous amount of parameters and the proliferated\nparameters can induce severe optimization problem. Due to these issues, current\nCNNs compromise to scale up to 51x51 in the form of stripe convolution (i.e.,\n51x5 + 5x51) and start to saturate as the kernel size continues growing. In\nthis paper, we delve into addressing these vital issues and explore whether we\ncan continue scaling up kernels for more performance gains. Inspired by human\nvision, we propose a human-like peripheral convolution that efficiently reduces\nover 90% parameter count of dense grid convolution through parameter sharing,\nand manage to scale up kernel size to extremely large. Our peripheral\nconvolution behaves highly similar to human, reducing the complexity of\nconvolution from O(K^2) to O(logK) without backfiring performance. Built on\nthis, we propose Parameter-efficient Large Kernel Network (PeLK). Our PeLK\noutperforms modern vision Transformers and ConvNet architectures like Swin,\nConvNeXt, RepLKNet and SLaK on various vision tasks including ImageNet\nclassification, semantic segmentation on ADE20K and object detection on MS\nCOCO. For the first time, we successfully scale up the kernel size of CNNs to\nan unprecedented 101x101 and demonstrate consistent improvements.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Honghao Chen", "Xiangxiang Chu", "Renyongjian", "Xin Zhao", "Kaiqi Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef4a"}, "filepath": "data/2312.04746.png", "tags": [], "_media_type": "image", "_rand": 0.9994528820447783, "arXiv_link": "https://arxiv.org/abs/2312.04746", "other_link": "", "title": "Quilt-LLaVA: Visual Instruction Tuning by Extracting Localized Narratives from Open-Source Histopathology Videos", "abstract": "Diagnosis in histopathology requires a global whole slide images (WSIs)\nanalysis, requiring pathologists to compound evidence from different WSI\npatches. 
The gigapixel scale of WSIs poses a challenge for histopathology\nmulti-modal models. Training multi-model models for histopathology requires\ninstruction tuning datasets, which currently contain information for individual\nimage patches, without a spatial grounding of the concepts within each patch\nand without a wider view of the WSI. Therefore, they lack sufficient diagnostic\ncapacity for histopathology. To bridge this gap, we introduce Quilt-Instruct, a\nlarge-scale dataset of 107,131 histopathology-specific instruction\nquestion/answer pairs, grounded within diagnostically relevant image patches\nthat make up the WSI. Our dataset is collected by leveraging educational\nhistopathology videos from YouTube, which provides spatial localization of\nnarrations by automatically extracting the narrators' cursor positions.\nQuilt-Instruct supports contextual reasoning by extracting diagnosis and\nsupporting facts from the entire WSI. Using Quilt-Instruct, we train\nQuilt-LLaVA, which can reason beyond the given single image patch, enabling\ndiagnostic reasoning across patches. To evaluate Quilt-LLaVA, we propose a\ncomprehensive evaluation dataset created from 985 images and 1283\nhuman-generated question-answers. We also thoroughly evaluate Quilt-LLaVA using\npublic histopathology datasets, where Quilt-LLaVA significantly outperforms\nSOTA by over 10% on relative GPT-4 score and 4% and 9% on open and closed set\nVQA. Our code, data, and model are publicly accessible at\nquilt-llava.github.io.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Mehmet Saygin Seyfioglu", "Wisdom Ikezogwo", "Fatemeh Ghezloo", "Ranjay Krishna", "Linda Shapiro"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef4b"}, "filepath": "data/2312.05995.png", "tags": [], "_media_type": "image", "_rand": 0.9990398750236056, "arXiv_link": "https://arxiv.org/abs/2312.05995", "other_link": "https://github.com/javrtg/C2P.", "title": "From Correspondences to Pose: Non-minimal Certifiably Optimal Relative Pose without Disambiguation", "abstract": "Estimating the relative camera pose from $n \\geq 5$ correspondences between\ntwo calibrated views is a fundamental task in computer vision. This process\ntypically involves two stages: 1) estimating the essential matrix between the\nviews, and 2) disambiguating among the four candidate relative poses that\nsatisfy the epipolar geometry. In this paper, we demonstrate a novel approach\nthat, for the first time, bypasses the second stage. Specifically, we show that\nit is possible to directly estimate the correct relative camera pose from\ncorrespondences without needing a post-processing step to enforce the\ncheirality constraint on the correspondences. Building on recent advances in\ncertifiable non-minimal optimization, we frame the relative pose estimation as\na Quadratically Constrained Quadratic Program (QCQP). By applying the\nappropriate constraints, we ensure the estimation of a camera pose that\ncorresponds to a valid 3D geometry and that is globally optimal when certified.\nWe validate our method through exhaustive synthetic and real-world experiments,\nconfirming the efficacy, efficiency and accuracy of the proposed approach. 
Code\nis available at https://github.com/javrtg/C2P.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Javier Tirado-Gar\u00edn", "Javier Civera"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef4c"}, "filepath": "data/2312.08886.png", "tags": [], "_media_type": "image", "_rand": 0.9998877703411144, "arXiv_link": "https://arxiv.org/abs/2312.08886", "other_link": "", "title": "Diffusion-based Blind Text Image Super-Resolution", "abstract": "Recovering degraded low-resolution text images is challenging, especially for\nChinese text images with complex strokes and severe degradation in real-world\nscenarios. Ensuring both text fidelity and style realness is crucial for\nhigh-quality text image super-resolution. Recently, diffusion models have\nachieved great success in natural image synthesis and restoration due to their\npowerful data distribution modeling abilities and data generation capabilities.\nIn this work, we propose an Image Diffusion Model (IDM) to restore text images\nwith realistic styles. For diffusion models, they are not only suitable for\nmodeling realistic image distribution but also appropriate for learning text\ndistribution. Since text prior is important to guarantee the correctness of the\nrestored text structure according to existing arts, we also propose a Text\nDiffusion Model (TDM) for text recognition which can guide IDM to generate text\nimages with correct structures. We further propose a Mixture of Multi-modality\nmodule (MoM) to make these two diffusion models cooperate with each other in\nall the diffusion steps. Extensive experiments on synthetic and real-world\ndatasets demonstrate that our Diffusion-based Blind Text Image Super-Resolution\n(DiffTSR) can restore text images with more accurate text structures as well as\nmore realistic appearances simultaneously.", "keywords": ["Low-level vision", "Document analysis and understanding"], "authors_list": ["Yuzhe Zhang", "jiawei zhang", "Hao Li", "Zhouxia Wang", "Luwei Hou", "Dongqing Zou", "Liheng Bian"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef4d"}, "filepath": "data/2309.09818.png", "tags": [], "_media_type": "image", "_rand": 0.9999376015188811, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2309.09818", "other_link": "https://grasp-anything-2023.github.io.", "title": "Language-driven Grasp Detection", "abstract": "Foundation models such as ChatGPT have made significant strides in robotic\ntasks due to their universal representation of real-world domains. In this\npaper, we leverage foundation models to tackle grasp detection, a persistent\nchallenge in robotics with broad industrial applications. Despite numerous\ngrasp datasets, their object diversity remains limited compared to real-world\nfigures. Fortunately, foundation models possess an extensive repository of\nreal-world knowledge, including objects we encounter in our daily lives. As a\nconsequence, a promising solution to the limited representation in previous\ngrasp datasets is to harness the universal knowledge embedded in these\nfoundation models. 
We present Grasp-Anything, a new large-scale grasp dataset\nsynthesized from foundation models to implement this solution. Grasp-Anything\nexcels in diversity and magnitude, boasting 1M samples with text descriptions\nand more than 3M objects, surpassing prior datasets. Empirically, we show that\nGrasp-Anything successfully facilitates zero-shot grasp detection on\nvision-based tasks and real-world robotic experiments. Our dataset and code are\navailable at https://grasp-anything-2023.github.io.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["An Dinh Vuong", "Minh Nhat VU", "Baoru Huang", "Nghia Nguyen", "Hieu Le", "Thieu Vo", "Thieu Vo", "Anh Nguyen"], "category_name": "Robotics", "all_categories": ["Robotics", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef4e"}, "filepath": "data/2401.06209.png", "tags": [], "_media_type": "image", "_rand": 0.999463034847091, "arXiv_link": "http://export.arxiv.org/abs/2401.06209", "other_link": "", "title": "Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs", "abstract": "Is vision good enough for language? Recent advancements in multimodal models\nprimarily stem from the powerful reasoning abilities of large language models\n(LLMs). However, the visual component typically depends only on the\ninstance-level contrastive language-image pre-training (CLIP). Our research\nreveals that the visual capabilities in recent multimodal LLMs (MLLMs) still\nexhibit systematic shortcomings. To understand the roots of these errors, we\nexplore the gap between the visual embedding space of CLIP and vision-only\nself-supervised learning. We identify ''CLIP-blind pairs'' - images that CLIP\nperceives as similar despite their clear visual differences. With these pairs,\nwe construct the Multimodal Visual Patterns (MMVP) benchmark. MMVP exposes\nareas where state-of-the-art systems, including GPT-4V, struggle with\nstraightforward questions across nine basic visual patterns, often providing\nincorrect answers and hallucinated explanations. We further evaluate various\nCLIP-based vision-and-language models and found a notable correlation between\nvisual patterns that challenge CLIP models and those problematic for multimodal\nLLMs. As an initial effort to address these issues, we propose a Mixture of\nFeatures (MoF) approach, demonstrating that integrating vision self-supervised\nlearning features with MLLMs can significantly enhance their visual grounding\ncapabilities. 
Together, our research suggests visual representation learning\nremains an open challenge, and accurate visual grounding is crucial for future\nsuccessful multimodal systems.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Shengbang Tong", "Zhuang Liu", "Zhuang Liu", "Yuexiang Zhai", "Yi Ma", "Yann LeCun", "Saining Xie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef4f"}, "filepath": "data/2404.02155.png", "tags": [], "_media_type": "image", "_rand": 0.9994081816682616, "arXiv_link": "https://arxiv.org/abs/2404.02155", "other_link": "", "title": "Alpha Invariance: On Inverse Scaling Between Distance and Volume Density in Neural Radiance Fields", "abstract": "Scale-ambiguity in 3D scene dimensions leads to magnitude-ambiguity of\nvolumetric densities in neural radiance fields, i.e., the densities double when\nscene size is halved, and vice versa. We call this property alpha invariance.\nFor NeRFs to better maintain alpha invariance, we recommend 1) parameterizing\nboth distance and volume densities in log space, and 2) a\ndiscretization-agnostic initialization strategy to guarantee high ray\ntransmittance. We revisit a few popular radiance field models and find that\nthese systems use various heuristics to deal with issues arising from scene\nscaling. We test their behaviors and show our recipe to be more robust.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Joshua Ahn", "Haochen Wang", "Raymond A. Yeh", "Greg Shakhnarovich"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef50"}, "filepath": "data/2310.19721.png", "tags": [], "_media_type": "image", "_rand": 0.9998864896549264, "arXiv_link": "https://arxiv.org/abs/2310.19721", "other_link": "https://github.com/MedICL-VU/ProMISe.", "title": "Prompt-Driven Referring Image Segmentation with Instance Contrasting", "abstract": "To address prevalent issues in medical imaging, such as data acquisition\nchallenges and label availability, transfer learning from natural to medical\nimage domains serves as a viable strategy to produce reliable segmentation\nresults. However, several existing barriers between domains need to be broken\ndown, including addressing contrast discrepancies, managing anatomical\nvariability, and adapting 2D pretrained models for 3D segmentation tasks. In\nthis paper, we propose ProMISe,a prompt-driven 3D medical image segmentation\nmodel using only a single point prompt to leverage knowledge from a pretrained\n2D image foundation model. In particular, we use the pretrained vision\ntransformer from the Segment Anything Model (SAM) and integrate lightweight\nadapters to extract depth-related (3D) spatial context without updating the\npretrained weights. For robust results, a hybrid network with complementary\nencoders is designed, and a boundary-aware loss is proposed to achieve precise\nboundaries. We evaluate our model on two public datasets for colon and pancreas\ntumor segmentations, respectively. Compared to the state-of-the-art\nsegmentation methods with and without prompt engineering, our proposed method\nachieves superior performance. 
The code is publicly available at\nhttps://github.com/MedICL-VU/ProMISe.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Chao Shang", "Zichen Song", "Heqian Qiu", "Lanxiao Wang", "Fanman Meng", "Hongliang Li"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef51"}, "filepath": "data/2312.04433.png", "tags": [], "_media_type": "image", "_rand": 0.9993367687292779, "arXiv_link": "https://arxiv.org/abs/2312.04433", "other_link": "https://dreamvideo-t2v.github.io.", "title": "DreamVideo: Composing Your Dream Videos with Customized Subject and Motion", "abstract": "Customized generation using diffusion models has made impressive progress in\nimage generation, but remains unsatisfactory in the challenging video\ngeneration task, as it requires the controllability of both subjects and\nmotions. To that end, we present DreamVideo, a novel approach to generating\npersonalized videos from a few static images of the desired subject and a few\nvideos of target motion. DreamVideo decouples this task into two stages,\nsubject learning and motion learning, by leveraging a pre-trained video\ndiffusion model. The subject learning aims to accurately capture the fine\nappearance of the subject from provided images, which is achieved by combining\ntextual inversion and fine-tuning of our carefully designed identity adapter.\nIn motion learning, we architect a motion adapter and fine-tune it on the given\nvideos to effectively model the target motion pattern. Combining these two\nlightweight and efficient adapters allows for flexible customization of any\nsubject with any motion. Extensive experimental results demonstrate the\nsuperior performance of our DreamVideo over the state-of-the-art methods for\ncustomized video generation. Our project page is at\nhttps://dreamvideo-t2v.github.io.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Yujie Wei", "Shiwei Zhang", "Zhiwu Qing", "Hangjie Yuan", "Zhiheng Liu", "Yu Liu", "Yingya Zhang", "Jingren Zhou", "Hongming Shan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef52"}, "filepath": "data/2404.19696.png", "tags": [], "_media_type": "image", "_rand": 0.9995920265926818, "arXiv_link": "https://arxiv.org/abs/2404.19696", "other_link": "", "title": "Multi-Attribute Interactions Matter for 3D Visual Grounding", "abstract": "3D visual grounding is a challenging task that often requires direct and\ndense supervision, notably the semantic label for each object in the scene. In\nthis paper, we instead study the naturally supervised setting that learns from\nonly 3D scene and QA pairs, where prior works underperform. We propose the\nLanguage-Regularized Concept Learner (LARC), which uses constraints from\nlanguage as regularization to significantly improve the accuracy of\nneuro-symbolic concept learners in the naturally supervised setting. 
Our\napproach is based on two core insights: the first is that language constraints\n(e.g., a word's relation to another) can serve as effective regularization for\nstructured representations in neuro-symbolic models; the second is that we can\nquery large language models to distill such constraints from language\nproperties. We show that LARC improves performance of prior works in naturally\nsupervised 3D visual grounding, and demonstrates a wide range of 3D visual\nreasoning capabilities-from zero-shot composition, to data efficiency and\ntransferability. Our method represents a promising step towards regularizing\nstructured visual reasoning frameworks with language-based priors, for learning\nin settings without dense supervision.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Can Xu", "Yuehui Han", "Rui Xu", "Le Hui", "Jin Xie", "Jian Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef53"}, "filepath": "data/2312.06358.png", "tags": [], "_media_type": "image", "_rand": 0.9995955465835042, "arXiv_link": "https://arxiv.org/abs/2312.06358", "other_link": "https://github.com/eigenvivek/DiffPose.", "title": "Intraoperative 2D/3D Image Registration via Differentiable X-ray Rendering", "abstract": "Surgical decisions are informed by aligning rapid portable 2D intraoperative\nimages (e.g., X-rays) to a high-fidelity 3D preoperative reference scan (e.g.,\nCT). 2D/3D image registration often fails in practice: conventional\noptimization methods are prohibitively slow and susceptible to local minima,\nwhile neural networks trained on small datasets fail on new patients or require\nimpractical landmark supervision. We present DiffPose, a self-supervised\napproach that leverages patient-specific simulation and differentiable\nphysics-based rendering to achieve accurate 2D/3D registration without relying\non manually labeled data. Preoperatively, a CNN is trained to regress the pose\nof a randomly oriented synthetic X-ray rendered from the preoperative CT. The\nCNN then initializes rapid intraoperative test-time optimization that uses the\ndifferentiable X-ray renderer to refine the solution. Our work further proposes\nseveral geometrically principled methods for sampling camera poses from\n$\\mathbf{SE}(3)$, for sparse differentiable rendering, and for driving\nregistration in the tangent space $\\mathfrak{se}(3)$ with geodesic and\nmultiscale locality-sensitive losses. DiffPose achieves sub-millimeter accuracy\nacross surgical datasets at intraoperative speeds, improving upon existing\nunsupervised methods by an order of magnitude and even outperforming supervised\nbaselines. 
Our code is available at https://github.com/eigenvivek/DiffPose.", "keywords": ["Computational imaging and physics-based vision"], "authors_list": ["Vivek Gopalakrishnan", "Neel Dey", "Polina Golland"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef54"}, "filepath": "data/2403.04290.png", "tags": [], "_media_type": "image", "_rand": 0.9997881221919271, "arXiv_link": "https://arxiv.org/abs/2403.04290", "other_link": "", "title": "MedM2G: Unifying Medical Multi-Modal Generation via Cross-Guided Diffusion with Visual Invariant", "abstract": "Medical generative models, acknowledged for their high-quality sample\ngeneration ability, have accelerated the fast growth of medical applications.\nHowever, recent works concentrate on separate medical generation models for\ndistinct medical tasks and are restricted to inadequate medical multi-modal\nknowledge, constraining medical comprehensive diagnosis. In this paper, we\npropose MedM2G, a Medical Multi-Modal Generative framework, with the key\ninnovation to align, extract, and generate medical multi-modal within a unified\nmodel. Extending beyond single or two medical modalities, we efficiently align\nmedical multi-modal through the central alignment approach in the unified\nspace. Significantly, our framework extracts valuable clinical knowledge by\npreserving the medical visual invariant of each imaging modal, thereby\nenhancing specific medical information for multi-modal generation. By\nconditioning the adaptive cross-guided parameters into the multi-flow diffusion\nframework, our model promotes flexible interactions among medical multi-modal\nfor generation. MedM2G is the first medical generative model that unifies\nmedical generation tasks of text-to-image, image-to-text, and unified\ngeneration of medical modalities (CT, MRI, X-ray). It performs 5 medical\ngeneration tasks across 10 datasets, consistently outperforming various\nstate-of-the-art works.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Chenlu Zhan", "Gaoang Wang", "Yu LIN", "Hongwei Wang", "Jian Wu"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef55"}, "filepath": "data/2402.19387.png", "tags": [], "_media_type": "image", "_rand": 0.9997757338613117, "arXiv_link": "https://arxiv.org/abs/2402.19387", "other_link": "", "title": "SeD: Semantic-Aware Discriminator for Image Super-Resolution", "abstract": "Generative Adversarial Networks (GANs) have been widely used to recover vivid\ntextures in image super-resolution (SR) tasks. In particular, one discriminator\nis utilized to enable the SR network to learn the distribution of real-world\nhigh-quality images in an adversarial training manner. However, the\ndistribution learning is overly coarse-grained, which is susceptible to virtual\ntextures and causes counter-intuitive generation results. To mitigate this, we\npropose the simple and effective Semantic-aware Discriminator (denoted as SeD),\nwhich encourages the SR network to learn the fine-grained distributions by\nintroducing the semantics of images as a condition. 
Concretely, we aim to\nexcavate the semantics of images from a well-trained semantic extractor. Under\ndifferent semantics, the discriminator is able to distinguish the real-fake\nimages individually and adaptively, which guides the SR network to learn the\nmore fine-grained semantic-aware textures. To obtain accurate and abundant\nsemantics, we take full advantage of recently popular pretrained vision models\n(PVMs) with extensive datasets, and then incorporate its semantic features into\nthe discriminator through a well-designed spatial cross-attention module. In\nthis way, our proposed semantic-aware discriminator empowered the SR network to\nproduce more photo-realistic and pleasing images. Extensive experiments on two\ntypical tasks, i.e., SR and Real SR have demonstrated the effectiveness of our\nproposed methods.", "keywords": ["Low-level vision"], "authors_list": ["Bingchen Li", "Xin Li", "Hanxin Zhu", "YEYING JIN", "Ruoyu Feng", "Zhizheng Zhang", "Zhibo Chen"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef56"}, "filepath": "data/2308.06412.png", "tags": [], "_media_type": "image", "_rand": 0.9999980236871954, "arXiv_link": "https://arxiv.org/abs/2308.06412", "other_link": "https://github.com/xiaofeng94/SAS-Det}.", "title": "Taming Self-Training for Open-Vocabulary Object Detection", "abstract": "Recent studies have shown promising performance in open-vocabulary object\ndetection (OVD) by utilizing pseudo labels (PLs) from pretrained vision and\nlanguage models (VLMs). However, teacher-student self-training, a powerful and\nwidely used paradigm to leverage PLs, is rarely explored for OVD. This work\nidentifies two challenges of using self-training in OVD: noisy PLs from VLMs\nand frequent distribution changes of PLs. To address these challenges, we\npropose SAS-Det that tames self-training for OVD from two key perspectives.\nFirst, we present a split-and-fusion (SAF) head that splits a standard\ndetection into an open-branch and a closed-branch. This design can reduce noisy\nsupervision from pseudo boxes. Moreover, the two branches learn complementary\nknowledge from different training data, significantly enhancing performance\nwhen fused together. Second, in our view, unlike in closed-set tasks, the PL\ndistributions in OVD are solely determined by the teacher model. We introduce a\nperiodic update strategy to decrease the number of updates to the teacher,\nthereby decreasing the frequency of changes in PL distributions, which\nstabilizes the training process. Extensive experiments demonstrate SAS-Det is\nboth efficient and effective. SAS-Det outperforms recent models of the same\nscale by a clear margin and achieves 37.4 AP50 and 29.1 APr on novel categories\nof the COCO and LVIS benchmarks, respectively. Code is available at\n\\url{https://github.com/xiaofeng94/SAS-Det}.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Shiyu Zhao", "Samuel Schulter", "Long Zhao", "Zhixing Zhang", "Vijay Kumar BG", "Yumin Suh", "Manmohan Chandraker", "Dimitris N. 
Metaxas"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef57"}, "filepath": "data/2401.10219.png", "tags": [], "_media_type": "image", "_rand": 0.9991834749322889, "arXiv_link": "https://arxiv.org/abs/2401.10219", "other_link": "", "title": "Edit One for All: Interactive Batch Image Editing", "abstract": "In recent years, image editing has advanced remarkably. With increased human\ncontrol, it is now possible to edit an image in a plethora of ways; from\nspecifying in text what we want to change, to straight up dragging the contents\nof the image in an interactive point-based manner. However, most of the focus\nhas remained on editing single images at a time. Whether and how we can\nsimultaneously edit large batches of images has remained understudied. With the\ngoal of minimizing human supervision in the editing process, this paper\npresents a novel method for interactive batch image editing using StyleGAN as\nthe medium. Given an edit specified by users in an example image (e.g., make\nthe face frontal), our method can automatically transfer that edit to other\ntest images, so that regardless of their initial state (pose), they all arrive\nat the same final state (e.g., all facing front). Extensive experiments\ndemonstrate that edits performed using our method have similar visual quality\nto existing single-image-editing methods, while having more visual consistency\nand saving significant time and human effort.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Thao Nguyen", "Utkarsh Ojha", "Yuheng Li", "Haotian Liu", "Yong Jae Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef58"}, "filepath": "data/2312.13980v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999507287788312, "arXiv_link": "https://arxiv.org/abs/2312.13980v1", "other_link": "https://desaixie.github.io/carve-3d.", "title": "Carve3D: Improving Multi-view Reconstruction Consistency for Diffusion Models with RL Finetuning", "abstract": "Recent advancements in the text-to-3D task leverage finetuned text-to-image\ndiffusion models to generate multi-view images, followed by NeRF\nreconstruction. Yet, existing supervised finetuned (SFT) diffusion models still\nsuffer from multi-view inconsistency and the resulting NeRF artifacts. Although\ntraining longer with SFT improves consistency, it also causes distribution\nshift, which reduces diversity and realistic details. We argue that the SFT of\nmulti-view diffusion models resembles the instruction finetuning stage of the\nLLM alignment pipeline and can benefit from RL finetuning (RLFT) methods.\nEssentially, RLFT methods optimize models beyond their SFT data distribution by\nusing their own outputs, effectively mitigating distribution shift. To this\nend, we introduce Carve3D, a RLFT method coupled with the Multi-view\nReconstruction Consistency (MRC) metric, to improve the consistency of\nmulti-view diffusion models. To compute MRC on a set of multi-view images, we\ncompare them with their corresponding renderings of the reconstructed NeRF at\nthe same viewpoints. We validate the robustness of MRC with extensive\nexperiments conducted under controlled inconsistency levels. 
We enhance the\nbase RLFT algorithm to stabilize the training process, reduce distribution\nshift, and identify scaling laws. Through qualitative and quantitative\nexperiments, along with a user study, we demonstrate Carve3D's improved\nmulti-view consistency, the resulting superior NeRF reconstruction quality, and\nminimal distribution shift compared to longer SFT. Project webpage:\nhttps://desaixie.github.io/carve-3d.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Desai Xie", "Jiahao Li", "Hao Tan", "Xin Sun", "Zhixin Shu", "Yi Zhou", "Sai Bi", "Soren Pirk", "Soeren Pirk", "ARIE KAUFMAN"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef59"}, "filepath": "data/2306.08045.png", "tags": [], "_media_type": "image", "_rand": 0.9997153959338855, "arXiv_link": "https://arxiv.org/abs/2306.08045", "other_link": "", "title": "Density-Guided Semi-Supervised 3D Semantic Segmentation with Dual-Space Hardness Sampling", "abstract": "We introduce a novel superpoint-based transformer architecture for efficient\nsemantic segmentation of large-scale 3D scenes. Our method incorporates a fast\nalgorithm to partition point clouds into a hierarchical superpoint structure,\nwhich makes our preprocessing 7 times faster than existing superpoint-based\napproaches. Additionally, we leverage a self-attention mechanism to capture the\nrelationships between superpoints at multiple scales, leading to\nstate-of-the-art performance on three challenging benchmark datasets: S3DIS\n(76.0% mIoU 6-fold validation), KITTI-360 (63.5% on Val), and DALES (79.6%).\nWith only 212k parameters, our approach is up to 200 times more compact than\nother state-of-the-art models while maintaining similar performance.\nFurthermore, our model can be trained on a single GPU in 3 hours for a fold of\nthe S3DIS dataset, which is 7x to 70x fewer GPU-hours than the best-performing\nmethods. Our code and models are accessible at\ngithub.com/drprojects/superpoint_transformer.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Jianan Li", "Qiulei Dong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef5a"}, "filepath": "data/2403.06205.png", "tags": [], "_media_type": "image", "_rand": 0.999318765751326, "arXiv_link": "https://arxiv.org/abs/2403.06205", "other_link": "", "title": "S-DyRF: Reference-Based Stylized Radiance Fields for Dynamic Scenes", "abstract": "Current 3D stylization methods often assume static scenes, which violates the\ndynamic nature of our real world. To address this limitation, we present\nS-DyRF, a reference-based spatio-temporal stylization method for dynamic neural\nradiance fields. However, stylizing dynamic 3D scenes is inherently challenging\ndue to the limited availability of stylized reference images along the temporal\naxis. Our key insight lies in introducing additional temporal cues besides the\nprovided reference. To this end, we generate temporal pseudo-references from\nthe given stylized reference. 
These pseudo-references facilitate the\npropagation of style information from the reference to the entire dynamic 3D\nscene. For coarse style transfer, we enforce novel views and times to mimic the\nstyle details present in pseudo-references at the feature level. To preserve\nhigh-frequency details, we create a collection of stylized temporal pseudo-rays\nfrom temporal pseudo-references. These pseudo-rays serve as detailed and\nexplicit stylization guidance for achieving fine style transfer. Experiments on\nboth synthetic and real-world datasets demonstrate that our method yields\nplausible stylized results of space-time view synthesis on dynamic 3D scenes.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Xingyi Li", "Zhiguo Cao", "Yizheng Wu", "Kewei Wang", "Ke Xian", "Zhe Wang", "Guosheng Lin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef5b"}, "filepath": "data/2401.02436.png", "tags": [], "_media_type": "image", "_rand": 0.9999814141041338, "arXiv_link": "https://arxiv.org/abs/2401.02436", "other_link": "", "title": "Compressed 3D Gaussian Splatting for Accelerated Novel View Synthesis", "abstract": "Recently, high-fidelity scene reconstruction with an optimized 3D Gaussian\nsplat representation has been introduced for novel view synthesis from sparse\nimage sets. Making such representations suitable for applications like network\nstreaming and rendering on low-power devices requires significantly reduced\nmemory consumption as well as improved rendering efficiency. We propose a\ncompressed 3D Gaussian splat representation that utilizes sensitivity-aware\nvector clustering with quantization-aware training to compress directional\ncolors and Gaussian parameters. The learned codebooks have low bitrates and\nachieve a compression rate of up to $31\\times$ on real-world scenes with only\nminimal degradation of visual quality. We demonstrate that the compressed splat\nrepresentation can be efficiently rendered with hardware rasterization on\nlightweight GPUs at up to $4\\times$ higher framerates than reported via an\noptimized GPU compute pipeline. 
Extensive experiments across multiple datasets\ndemonstrate the robustness and rendering speed of the proposed approach.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Simon Niedermayr", "Josef Stumpfegger", "r\u00fcdiger westermann"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef5c"}, "filepath": "data/2311.15264.png", "tags": [], "_media_type": "image", "_rand": 0.9994413531167964, "arXiv_link": "https://arxiv.org/abs/2311.15264", "other_link": "https://github.com/nicoboou/chadavit.", "title": "ChAda-ViT : Channel Adaptive Attention for Joint Representation Learning of Heterogeneous Microscopy Images", "abstract": "Unlike color photography images, which are consistently encoded into RGB\nchannels, biological images encompass various modalities, where the type of\nmicroscopy and the meaning of each channel varies with each experiment.\nImportantly, the number of channels can range from one to a dozen and their\ncorrelation is often comparatively much lower than RGB, as each of them brings\nspecific information content. This aspect is largely overlooked by methods\ndesigned out of the bioimage field, and current solutions mostly focus on\nintra-channel spatial attention, often ignoring the relationship between\nchannels, yet crucial in most biological applications. Importantly, the\nvariable channel type and count prevent the projection of several experiments\nto a unified representation for large scale pre-training. In this study, we\npropose ChAda-ViT, a novel Channel Adaptive Vision Transformer architecture\nemploying an Inter-Channel Attention mechanism on images with an arbitrary\nnumber, order and type of channels. We also introduce IDRCell100k, a bioimage\ndataset with a rich set of 79 experiments covering 7 microscope modalities,\nwith a multitude of channel types, and counts varying from 1 to 10 per\nexperiment. Our architecture, trained in a self-supervised manner, outperforms\nexisting approaches in several biologically relevant downstream tasks.\nAdditionally, it can be used to bridge the gap for the first time between\nassays with different microscopes, channel numbers or types by embedding\nvarious image and experimental modalities into a unified biological image\nrepresentation. The latter should facilitate interdisciplinary studies and pave\nthe way for better adoption of deep learning in biological image-based\nanalyses. 
Code and Data available at https://github.com/nicoboou/chadavit.", "keywords": [], "authors_list": ["Nicolas Bourriez", "Ihab Bendidi", "Cohen Ethan", "Gabriel Watkinson", "Maxime Sanchez", "Guillaume Bollot", "Auguste Genovesio"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef5d"}, "filepath": "data/2401.00094.png", "tags": [], "_media_type": "image", "_rand": 0.9994391997682334, "arXiv_link": "https://arxiv.org/abs/2401.00094", "other_link": "https://github.com/xiaofeng94/Gen-Enhanced-Negs}.", "title": "Generating Enhanced Negatives for Training Language-Based Object Detectors", "abstract": "The recent progress in language-based open-vocabulary object detection can be\nlargely attributed to finding better ways of leveraging large-scale data with\nfree-form text annotations. Training such models with a discriminative\nobjective function has proven successful, but requires good positive and\nnegative samples. However, the free-form nature and the open vocabulary of\nobject descriptions make the space of negatives extremely large. Prior works\nrandomly sample negatives or use rule-based techniques to build them. In\ncontrast, we propose to leverage the vast knowledge built into modern\ngenerative models to automatically build negatives that are more relevant to\nthe original data. Specifically, we use large-language-models to generate\nnegative text descriptions, and text-to-image diffusion models to also generate\ncorresponding negative images. Our experimental analysis confirms the relevance\nof the generated negative data, and its use in language-based detectors\nimproves performance on two complex benchmarks. Code is available at\n\\url{https://github.com/xiaofeng94/Gen-Enhanced-Negs}.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Shiyu Zhao", "Long Zhao", "Vijay Kumar BG", "Yumin Suh", "Dimitris N. Metaxas", "Manmohan Chandraker", "Samuel Schulter"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef5e"}, "filepath": "data/2307.13497.png", "tags": [], "_media_type": "image", "_rand": 0.9996823131797175, "arXiv_link": "https://arxiv.org/abs/2307.13497", "other_link": "", "title": "Named Entity Driven Zero-Shot Image Manipulation", "abstract": "The Zero-Shot Learning (ZSL) task pertains to the identification of entities\nor relations in texts that were not seen during training. ZSL has emerged as a\ncritical research area due to the scarcity of labeled data in specific domains,\nand its applications have grown significantly in recent years. With the advent\nof large pretrained language models, several novel methods have been proposed,\nresulting in substantial improvements in ZSL performance. There is a growing\ndemand, both in the research community and industry, for a comprehensive ZSL\nframework that facilitates the development and accessibility of the latest\nmethods and pretrained models.In this study, we propose a novel ZSL framework\ncalled Zshot that aims to address the aforementioned challenges. Our primary\nobjective is to provide a platform that allows researchers to compare different\nstate-of-the-art ZSL methods with standard benchmark datasets. 
Additionally, we\nhave designed our framework to support the industry with readily available APIs\nfor production under the standard SpaCy NLP pipeline. Our API is extendible and\nevaluable, moreover, we include numerous enhancements such as boosting the\naccuracy with pipeline ensembling and visualization utilities available as a\nSpaCy extension.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Zhida Feng", "Li Chen", "Jing Tian", "Jiaxiang Liu", "Shikun Feng"], "category_name": "Computation and Language", "all_categories": ["Computation and Language", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef5f"}, "filepath": "data/2404.00252.png", "tags": [], "_media_type": "image", "_rand": 0.9990371161164757, "arXiv_link": "https://arxiv.org/abs/2404.00252", "other_link": "", "title": "Learned Scanpaths Aid Blind Panoramic Video Quality Assessment", "abstract": "Panoramic videos have the advantage of providing an immersive and interactive\nviewing experience. Nevertheless, their spherical nature gives rise to various\nand uncertain user viewing behaviors, which poses significant challenges for\npanoramic video quality assessment (PVQA). In this work, we propose an\nend-to-end optimized, blind PVQA method with explicit modeling of user viewing\npatterns through visual scanpaths. Our method consists of two modules: a\nscanpath generator and a quality assessor. The scanpath generator is initially\ntrained to predict future scanpaths by minimizing their expected code length\nand then jointly optimized with the quality assessor for quality prediction.\nOur blind PVQA method enables direct quality assessment of panoramic images by\ntreating them as videos composed of identical frames. Experiments on three\npublic panoramic image and video quality datasets, encompassing both synthetic\nand authentic distortions, validate the superiority of our blind PVQA model\nover existing methods.", "keywords": [], "authors_list": ["Kanglong FAN", "Wen Wen", "Mu Li", "YIFAN PENG", "Kede Ma"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef60"}, "filepath": "data/2309.05203.png", "tags": [], "_media_type": "image", "_rand": 0.9997243229712128, "arXiv_link": "https://arxiv.org/abs/2309.05203", "other_link": "https://github.com/SCIR-HI/ArtificiallyR2R.", "title": "Molecular Data Programming: Towards Molecule Pseudo-labeling with Systematic Weak Supervision", "abstract": "Molecule discovery serves as a cornerstone in numerous scientific domains,\nfueling the development of new materials and innovative drug designs. Recent\ndevelopments of in-silico molecule discovery have highlighted the promising\nresults of cross-modal techniques, which bridge molecular structures with their\ndescriptive annotations. However, these cross-modal methods frequently\nencounter the issue of data scarcity, hampering their performance and\napplication. In this paper, we address the low-resource challenge by utilizing\nartificially-real data generated by Large Language Models (LLMs). 
We first\nintroduce a retrieval-based prompting strategy to construct high-quality pseudo\ndata, then explore the optimal method to effectively leverage this pseudo data.\nExperiments show that using pseudo data for domain adaptation outperforms all\nexisting methods, while also requiring a smaller model scale, reduced data size\nand lower training cost, highlighting its efficiency. Furthermore, our method\nshows a sustained improvement as the volume of pseudo data increases, revealing\nthe great potential of pseudo data in advancing low-resource cross-modal\nmolecule discovery. Our code and data are available at\nhttps://github.com/SCIR-HI/ArtificiallyR2R.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Xin Juan", "Kaixiong Zhou", "Ninghao Liu", "Tianlong Chen", "Xin Wang"], "category_name": "Computation and Language", "all_categories": ["Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef61"}, "filepath": "data/2312.16217.png", "tags": [], "_media_type": "image", "_rand": 0.9995903695687005, "arXiv_link": "https://arxiv.org/abs/2312.16217", "other_link": "https://sites.google.com/view/manipllm.", "title": "ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation", "abstract": "Robot manipulation relies on accurately predicting contact points and\nend-effector directions to ensure successful operation. However, learning-based\nrobot manipulation, trained on a limited category within a simulator, often\nstruggles to achieve generalizability, especially when confronted with\nextensive categories. Therefore, we introduce an innovative approach for robot\nmanipulation that leverages the robust reasoning capabilities of Multimodal\nLarge Language Models (MLLMs) to enhance the stability and generalization of\nmanipulation. By fine-tuning the injected adapters, we preserve the inherent\ncommon sense and reasoning ability of the MLLMs while equipping them with the\nability for manipulation. The fundamental insight lies in the introduced\nfine-tuning paradigm, encompassing object category understanding, affordance\nprior reasoning, and object-centric pose prediction to stimulate the reasoning\nability of MLLM in manipulation. During inference, our approach utilizes an RGB\nimage and text prompt to predict the end effector's pose in chain of thoughts.\nAfter the initial contact is established, an active impedance adaptation policy\nis introduced to plan the upcoming waypoints in a closed-loop manner. Moreover,\nin real world, we design a test-time adaptation (TTA) strategy for manipulation\nto enable the model better adapt to the current real-world scene configuration.\nExperiments in simulator and real-world show the promising performance of\nManipLLM. 
More details and demonstrations can be found at\nhttps://sites.google.com/view/manipllm.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Xiaoqi Li", "Mingxu Zhang", "Yiran Geng", "Haoran Geng", "Haoran Geng", "Yuxing Long", "Yan Shen", "Renrui Zhang", "Jiaming Liu", "Hao Dong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef62"}, "filepath": "data/2403.08568.png", "tags": [], "_media_type": "image", "_rand": 0.9996507497058572, "arXiv_link": "https://arxiv.org/abs/2403.08568", "other_link": "", "title": "Consistent Prompting for Rehearsal-Free Continual Learning", "abstract": "Continual learning empowers models to adapt autonomously to the ever-changing\nenvironment or data streams without forgetting old knowledge. Prompt-based\napproaches are built on frozen pre-trained models to learn the task-specific\nprompts and classifiers efficiently. Existing prompt-based methods are\ninconsistent between training and testing, limiting their effectiveness. Two\ntypes of inconsistency are revealed. Test predictions are made from all\nclassifiers while training only focuses on the current task classifier without\nholistic alignment, leading to Classifier inconsistency. Prompt inconsistency\nindicates that the prompt selected during testing may not correspond to the one\nassociated with this task during training. In this paper, we propose a novel\nprompt-based method, Consistent Prompting (CPrompt), for more aligned training\nand testing. Specifically, all existing classifiers are exposed to prompt\ntraining, resulting in classifier consistency learning. In addition, prompt\nconsistency learning is proposed to enhance prediction robustness and boost\nprompt selection accuracy. Our Consistent Prompting surpasses its prompt-based\ncounterparts and achieves state-of-the-art performance on multiple continual\nlearning benchmarks. Detailed analysis shows that improvements come from more\nconsistent training and testing.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Zhanxin Gao", "Jun Cen", "Xiaobin Chang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef63"}, "filepath": "data/2401.11739.png", "tags": [], "_media_type": "image", "_rand": 0.9993663686778331, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2401.11739", "other_link": "", "title": "Pixel-level Semantic Correspondence through Layout-aware Representation Learning and Multi-scale Matching Integration", "abstract": "Diffusion models have recently received increasing research attention for\ntheir remarkable transfer abilities in semantic segmentation tasks. However,\ngenerating fine-grained segmentation masks with diffusion models often requires\nadditional training on annotated datasets, leaving it unclear to what extent\npre-trained diffusion models alone understand the semantic relations of their\ngenerated images. 
To address this question, we leverage the semantic knowledge\nextracted from Stable Diffusion (SD) and aim to develop an image segmentor\ncapable of generating fine-grained segmentation maps without any additional\ntraining. The primary difficulty stems from the fact that semantically\nmeaningful feature maps typically exist only in the spatially lower-dimensional\nlayers, which poses a challenge in directly extracting pixel-level semantic\nrelations from these feature maps. To overcome this issue, our framework\nidentifies semantic correspondences between image pixels and spatial locations\nof low-dimensional feature maps by exploiting SD's generation process and\nutilizes them for constructing image-resolution segmentation maps. In extensive\nexperiments, the produced segmentation maps are demonstrated to be well\ndelineated and capture detailed parts of the images, indicating the existence\nof highly accurate pixel-level semantic knowledge in diffusion models.", "keywords": [], "authors_list": ["Yixuan Sun", "Zhangyue Yin", "Haibo Wang", "Yan Wang", "Xipeng Qiu", "Weifeng Ge", "Wenqiang Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef64"}, "filepath": "data/2311.17389.png", "tags": [], "_media_type": "image", "_rand": 0.9994900128945574, "arXiv_link": "https://arxiv.org/abs/2311.17389", "other_link": "", "title": "360Loc: A Dataset and Benchmark for Omnidirectional Visual Localization with Cross-device Queries", "abstract": "Portable 360$^\\circ$ cameras are becoming a cheap and efficient tool to\nestablish large visual databases. By capturing omnidirectional views of a\nscene, these cameras could expedite building environment models that are\nessential for visual localization. However, such an advantage is often\noverlooked due to the lack of valuable datasets. This paper introduces a new\nbenchmark dataset, 360Loc, composed of 360$^\\circ$ images with ground truth\nposes for visual localization. We present a practical implementation of\n360$^\\circ$ mapping combining 360$^\\circ$ images with lidar data to generate\nthe ground truth 6DoF poses. 360Loc is the first dataset and benchmark that\nexplores the challenge of cross-device visual positioning, involving\n360$^\\circ$ reference frames, and query frames from pinhole, ultra-wide FoV\nfisheye, and 360$^\\circ$ cameras. We propose a virtual camera approach to\ngenerate lower-FoV query frames from 360$^\\circ$ images, which ensures a fair\ncomparison of performance among different query types in visual localization\ntasks. We also extend this virtual camera approach to feature matching-based\nand pose regression-based methods to alleviate the performance loss caused by\nthe cross-device domain gap, and evaluate its effectiveness against\nstate-of-the-art baselines. We demonstrate that omnidirectional visual\nlocalization is more robust in challenging large-scale scenes with symmetries\nand repetitive structures. 
These results provide new insights into 360-camera\nmapping and omnidirectional visual localization with cross-device queries.", "keywords": [], "authors_list": ["Huajian Huang", "Changkun Liu", "Yipeng Zhu", "Hui Cheng", "Tristan Braud", "Sai-Kit Yeung"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef65"}, "filepath": "data/2312.04884.png", "tags": [], "_media_type": "image", "_rand": 0.9998520024275279, "arXiv_link": "https://arxiv.org/abs/2312.04884", "other_link": "https://github.com/ZYM-PKU/UDiffText", "title": "Layout-Agnostic Scene Text Image Synthesis with Diffusion Models", "abstract": "Text-to-Image (T2I) generation methods based on diffusion model have garnered\nsignificant attention in the last few years. Although these image synthesis\nmethods produce visually appealing results, they frequently exhibit spelling\nerrors when rendering text within the generated images. Such errors manifest as\nmissing, incorrect or extraneous characters, thereby severely constraining the\nperformance of text image generation based on diffusion models. To address the\naforementioned issue, this paper proposes a novel approach for text image\ngeneration, utilizing a pre-trained diffusion model (i.e., Stable Diffusion\n[27]). Our approach involves the design and training of a light-weight\ncharacter-level text encoder, which replaces the original CLIP encoder and\nprovides more robust text embeddings as conditional guidance. Then, we\nfine-tune the diffusion model using a large-scale dataset, incorporating local\nattention control under the supervision of character-level segmentation maps.\nFinally, by employing an inference stage refinement process, we achieve a\nnotably high sequence accuracy when synthesizing text in arbitrarily given\nimages. Both qualitative and quantitative results demonstrate the superiority\nof our method to the state of the art. Furthermore, we showcase several\npotential applications of the proposed UDiffText, including text-centric image\nsynthesis, scene text editing, etc. Code and model will be available at\nhttps://github.com/ZYM-PKU/UDiffText .", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Qilong Zhangli", "Jindong Jiang", "Di Liu", "Licheng Yu", "Xiaoliang Dai", "Ankit Ramchandani", "Guan Pang", "Dimitris N. Metaxas", "Praveen Krishnan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef66"}, "filepath": "data/2312.15540.png", "tags": [], "_media_type": "image", "_rand": 0.9997385046300162, "arXiv_link": "https://arxiv.org/abs/2312.15540", "other_link": "", "title": "Amodal Completion via Progressive Mixed Context Diffusion", "abstract": "Our brain can effortlessly recognize objects even when partially hidden from\nview. Seeing the visible of the hidden is called amodal completion; however,\nthis task remains a challenge for generative AI despite rapid progress. We\npropose to sidestep many of the difficulties of existing approaches, which\ntypically involve a two-step process of predicting amodal masks and then\ngenerating pixels. Our method involves thinking outside the box, literally! 
We\ngo outside the object bounding box to use its context to guide a pre-trained\ndiffusion inpainting model, and then progressively grow the occluded object and\ntrim the extra background. We overcome two technical challenges: 1) how to be\nfree of unwanted co-occurrence bias, which tends to regenerate similar\noccluders, and 2) how to judge if an amodal completion has succeeded. Our\namodal completion method exhibits improved photorealistic completion results\ncompared to existing approaches in numerous successful completion cases. And\nthe best part? It doesn't require any special training or fine-tuning of\nmodels.", "keywords": [], "authors_list": ["Katherine Xu", "Lingzhi Zhang", "Jianbo Shi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef67"}, "filepath": "data/2311.10982.png", "tags": [], "_media_type": "image", "_rand": 0.9993399724107415, "arXiv_link": "https://arxiv.org/abs/2311.10982", "other_link": "", "title": "Make Pixels Dance: High-Dynamic Video Generation", "abstract": "Creating high-dynamic videos such as motion-rich actions and sophisticated\nvisual effects poses a significant challenge in the field of artificial\nintelligence. Unfortunately, current state-of-the-art video generation methods,\nprimarily focusing on text-to-video generation, tend to produce video clips\nwith minimal motions despite maintaining high fidelity. We argue that relying\nsolely on text instructions is insufficient and suboptimal for video\ngeneration. In this paper, we introduce PixelDance, a novel approach based on\ndiffusion models that incorporates image instructions for both the first and\nlast frames in conjunction with text instructions for video generation.\nComprehensive experimental results demonstrate that PixelDance trained with\npublic data exhibits significantly better proficiency in synthesizing videos\nwith complex scenes and intricate motions, setting a new standard for video\ngeneration.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques", "Multimodal models and vision-language models"], "authors_list": ["Yan Zeng", "Guoqiang Wei", "Jiani Zheng", "Jiaxin Zou", "Yang Wei", "Yuchen Zhang", "Yuchen Zhang", "Hang Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef68"}, "filepath": "data/2401.08577.png", "tags": [], "_media_type": "image", "_rand": 0.9999434660581573, "arXiv_link": "https://arxiv.org/abs/2401.08577", "other_link": "", "title": "MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World", "abstract": "Human beings possess the capability to multiply a melange of multisensory\ncues while actively exploring and interacting with the 3D world. Current\nmulti-modal large language models, however, passively absorb sensory data as\ninputs, lacking the capacity to actively interact with the objects in the 3D\nenvironment and dynamically collect their multisensory information. 
To usher in\nthe study of this area, we propose MultiPLY, a multisensory embodied large\nlanguage model that could incorporate multisensory interactive data, including\nvisual, audio, tactile, and thermal information into large language models,\nthereby establishing the correlation among words, actions, and percepts. To\nthis end, we first collect Multisensory Universe, a large-scale multisensory\ninteraction dataset comprising 500k data by deploying an LLM-powered embodied\nagent to engage with the 3D environment. To perform instruction tuning with\npre-trained LLM on such generated data, we first encode the 3D scene as\nabstracted object-centric representations and then introduce action tokens\ndenoting that the embodied agent takes certain actions within the environment,\nas well as state tokens that represent the multisensory state observations of\nthe agent at each time step. In the inference time, MultiPLY could generate\naction tokens, instructing the agent to take the action in the environment and\nobtain the next multisensory state observation. The observation is then\nappended back to the LLM via state tokens to generate subsequent text or action\ntokens. We demonstrate that MultiPLY outperforms baselines by a large margin\nthrough a diverse set of embodied tasks involving object retrieval, tool use,\nmultisensory captioning, and task decomposition.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Yining Hong", "Zishuo Zheng", "Peihao Chen", "Yian Wang", "Junyan Li", "Chuang Gan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language", "Machine Learning", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef69"}, "filepath": "data/2405.15658.png", "tags": [], "_media_type": "image", "_rand": 0.9990066522673202, "arXiv_link": "https://arxiv.org/abs/2405.15658", "other_link": "https://github.com/RobertLuo1/HDC}{here}$.", "title": "Referring Expression Counting", "abstract": "The newly proposed Generalized Referring Expression Segmentation (GRES)\namplifies the formulation of classic RES by involving multiple/non-target\nscenarios. Recent approaches focus on optimizing the last modality-fused\nfeature which is directly utilized for segmentation and object-existence\nidentification. However, the attempt to integrate all-grained information into\na single joint representation is impractical in GRES due to the increased\ncomplexity of the spatial relationships among instances and deceptive text\ndescriptions. Furthermore, the subsequent binary target justification across\nall referent scenarios fails to specify their inherent differences, leading to\nambiguity in object understanding. To address the weakness, we propose a\n$\\textbf{H}$ierarchical Semantic $\\textbf{D}$ecoding with $\\textbf{C}$ounting\nAssistance framework (HDC). It hierarchically transfers complementary modality\ninformation across granularities, and then aggregates each well-aligned\nsemantic correspondence for multi-level decoding. Moreover, with complete\nsemantic context modeling, we endow HDC with explicit counting capability to\nfacilitate comprehensive object perception in multiple/single/non-target\nsettings. 
Experimental results on gRefCOCO, Ref-ZOM, R-RefCOCO, and RefCOCO\nbenchmarks demonstrate the effectiveness and rationality of HDC which\noutperforms the state-of-the-art GRES methods by a remarkable margin. Code will\nbe available $\\href{https://github.com/RobertLuo1/HDC}{here}$.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Siyang Dai", "Jun Liu", "Ngai-Man Cheung"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef6a"}, "filepath": "data/2401.04394.png", "tags": [], "_media_type": "image", "_rand": 0.9998864406087389, "arXiv_link": "https://arxiv.org/abs/2401.04394", "other_link": "https://yusiissy.github.io/SonicVisionLM.github.io/", "title": "SonicVisionLM: Playing Sound with Vision Language Models", "abstract": "There has been a growing interest in the task of generating sound for silent\nvideos, primarily because of its practicality in streamlining video\npost-production. However, existing methods for video-sound generation attempt\nto directly create sound from visual representations, which can be challenging\ndue to the difficulty of aligning visual representations with audio\nrepresentations. In this paper, we present SonicVisionLM, a novel framework\naimed at generating a wide range of sound effects by leveraging vision-language\nmodels(VLMs). Instead of generating audio directly from video, we use the\ncapabilities of powerful VLMs. When provided with a silent video, our approach\nfirst identifies events within the video using a VLM to suggest possible sounds\nthat match the video content. This shift in approach transforms the challenging\ntask of aligning image and audio into more well-studied sub-problems of\naligning image-to-text and text-to-audio through the popular diffusion models.\nTo improve the quality of audio recommendations with LLMs, we have collected an\nextensive dataset that maps text descriptions to specific sound effects and\ndeveloped a time-controlled audio adapter. Our approach surpasses current\nstate-of-the-art methods for converting video to audio, enhancing\nsynchronization with the visuals, and improving alignment between audio and\nvideo components. Project page:\nhttps://yusiissy.github.io/SonicVisionLM.github.io/", "keywords": ["Multimodal models and vision-language models", "Image and video generation and manipulation"], "authors_list": ["Zhifeng Xie", "Shengye Yu", "Qile He", "Mengtian Li"], "category_name": "Multimedia", "all_categories": ["Multimedia", "Sound", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef6b"}, "filepath": "data/2403.11310.png", "tags": [], "_media_type": "image", "_rand": 0.9997479537477695, "arXiv_link": "https://arxiv.org/abs/2403.11310", "other_link": "https://github.com/davidpengucf/DAF-DG}.", "title": "A Dual-Augmentor Framework for Domain Generalization in 3D Human Pose Estimation", "abstract": "3D human pose data collected in controlled laboratory settings present\nchallenges for pose estimators that generalize across diverse scenarios. To\naddress this, domain generalization is employed. Current methodologies in\ndomain generalization for 3D human pose estimation typically utilize\nadversarial training to generate synthetic poses for training. 
Nonetheless,\nthese approaches exhibit several limitations. First, the lack of prior\ninformation about the target domain complicates the application of suitable\naugmentation through a single pose augmentor, affecting generalization on\ntarget domains. Moreover, adversarial training's discriminator tends to enforce\nsimilarity between source and synthesized poses, impeding the exploration of\nout-of-source distributions. Furthermore, the pose estimator's optimization is\nnot exposed to domain shifts, limiting its overall generalization ability.\n To address these limitations, we propose a novel framework featuring two pose\naugmentors: the weak and the strong augmentors. Our framework employs\ndifferential strategies for generation and discrimination processes,\nfacilitating the preservation of knowledge related to source poses and the\nexploration of out-of-source distributions without prior information about\ntarget poses. Besides, we leverage meta-optimization to simulate domain shifts\nin the optimization process of the pose estimator, thereby improving its\ngeneralization ability. Our proposed approach significantly outperforms\nexisting methods, as demonstrated through comprehensive experiments on various\nbenchmark datasets.Our code will be released at\n\\url{https://github.com/davidpengucf/DAF-DG}.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Qucheng Peng", "Ce Zheng", "Chen Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef6c"}, "filepath": "data/2304.11523.png", "tags": [], "_media_type": "image", "_rand": 0.999784274563355, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2304.11523", "other_link": "", "title": "ProMotion: Prototypes As Motion Learners", "abstract": "Optical flow is an indispensable building block for various important\ncomputer vision tasks, including motion estimation, object tracking, and\ndisparity measurement. In this work, we propose TransFlow, a pure transformer\narchitecture for optical flow estimation. Compared to dominant CNN-based\nmethods, TransFlow demonstrates three advantages. First, it provides more\naccurate correlation and trustworthy matching in flow estimation by utilizing\nspatial self-attention and cross-attention mechanisms between adjacent frames\nto effectively capture global dependencies; Second, it recovers more\ncompromised information (e.g., occlusion and motion blur) in flow estimation\nthrough long-range temporal association in dynamic scenes; Third, it enables a\nconcise self-learning paradigm and effectively eliminate the complex and\nlaborious multi-stage pre-training procedures. We achieve the state-of-the-art\nresults on the Sintel, KITTI-15, as well as several downstream tasks, including\nvideo object detection, interpolation and stabilization. 
For its efficacy, we\nhope TransFlow could serve as a flexible baseline for optical flow estimation.", "keywords": ["Low-level vision"], "authors_list": ["Yawen Lu", "Dongfang Liu", "Qifan Wang", "Cheng Han", "Yiming Cui", "Yiming Cui", "Zhiwen Cao", "Xueling Zhang", "Yingjie Victor Chen", "Heng Fan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef6d"}, "filepath": "data/2404.01945.png", "tags": [], "_media_type": "image", "_rand": 0.99977959292848, "arXiv_link": "https://arxiv.org/abs/2404.01945", "other_link": "", "title": "Event-assisted Low-Light Video Object Segmentation", "abstract": "In the realm of video object segmentation (VOS), the challenge of operating\nunder low-light conditions persists, resulting in notably degraded image\nquality and compromised accuracy when comparing query and memory frames for\nsimilarity computation. Event cameras, characterized by their high dynamic\nrange and ability to capture motion information of objects, offer promise in\nenhancing object visibility and aiding VOS methods under such low-light\nconditions. This paper introduces a pioneering framework tailored for low-light\nVOS, leveraging event camera data to elevate segmentation accuracy. Our\napproach hinges on two pivotal components: the Adaptive Cross-Modal Fusion\n(ACMF) module, aimed at extracting pertinent features while fusing image and\nevent modalities to mitigate noise interference, and the Event-Guided Memory\nMatching (EGMM) module, designed to rectify the issue of inaccurate matching\nprevalent in low-light settings. Additionally, we present the creation of a\nsynthetic LLE-DAVIS dataset and the curation of a real-world LLE-VOS dataset,\nencompassing frames and events. Experimental evaluations corroborate the\nefficacy of our method across both datasets, affirming its effectiveness in\nlow-light scenarios.", "keywords": ["Low-level vision"], "authors_list": ["Li Hebei", "Jin Wang", "Jiahui Yuan", "Yue Li", "Wenming Weng", "Yansong Peng", "Yueyi Zhang", "Zhiwei Xiong", "Xiaoyan Sun"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef6e"}, "filepath": "data/2402.18862.png", "tags": [], "_media_type": "image", "_rand": 0.9990609207211503, "arXiv_link": "https://arxiv.org/abs/2402.18862", "other_link": "https://gitlab.com/viper-purdue/continual-compression", "title": "Towards Backward-Compatible Continual Learning of Image Compression", "abstract": "This paper explores the possibility of extending the capability of\npre-trained neural image compressors (e.g., adapting to new data or target\nbitrates) without breaking backward compatibility, the ability to decode\nbitstreams encoded by the original model. We refer to this problem as continual\nlearning of image compression. Our initial findings show that baseline\nsolutions, such as end-to-end fine-tuning, do not preserve the desired backward\ncompatibility. To tackle this, we propose a knowledge replay training strategy\nthat effectively addresses this issue. We also design a new model architecture\nthat enables more effective continual learning than existing baselines.\nExperiments are conducted for two scenarios: data-incremental learning and\nrate-incremental learning. 
The main conclusion of this paper is that neural\nimage compressors can be fine-tuned to achieve better performance (compared to\ntheir pre-trained version) on new data and rates without compromising backward\ncompatibility. Our code is available at\nhttps://gitlab.com/viper-purdue/continual-compression", "keywords": ["Efficient and scalable vision"], "authors_list": ["Zhihao Duan", "Ming Lu", "Justin Yang", "Jiangpeng He", "Zhan Ma", "Fengqing Zhu"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef6f"}, "filepath": "data/2404.14410.png", "tags": [], "_media_type": "image", "_rand": 0.999255316191465, "arXiv_link": "http://export.arxiv.org/abs/2404.14410", "other_link": "", "title": "Guess The Unseen: Dynamic 3D Scene Reconstruction from Partial 2D Glimpses", "abstract": "In this paper, we present a method to reconstruct the world and multiple\ndynamic humans in 3D from a monocular video input. As a key idea, we represent\nboth the world and multiple humans via the recently emerging 3D Gaussian\nSplatting (3D-GS) representation, enabling to conveniently and efficiently\ncompose and render them together. In particular, we address the scenarios with\nseverely limited and sparse observations in 3D human reconstruction, a common\nchallenge encountered in the real world. To tackle this challenge, we introduce\na novel approach to optimize the 3D-GS representation in a canonical space by\nfusing the sparse cues in the common space, where we leverage a pre-trained 2D\ndiffusion model to synthesize unseen views while keeping the consistency with\nthe observed 2D appearances. We demonstrate our method can reconstruct\nhigh-quality animatable 3D humans in various challenging examples, in the\npresence of occlusion, image crops, few-shot, and extremely sparse\nobservations. After reconstruction, our method is capable of not only rendering\nthe scene in any novel views at arbitrary time instances, but also editing the\n3D scene by removing individual humans or applying different motions for each\nhuman. Through various experiments, we demonstrate the quality and efficiency\nof our methods over alternative existing approaches.", "keywords": ["Scene analysis and understanding", "Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Inhee Lee", "Byungjun Kim", "Hanbyul Joo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef70"}, "filepath": "data/2401.00374.png", "tags": [], "_media_type": "image", "_rand": 0.9990124182084537, "arXiv_link": "https://arxiv.org/abs/2401.00374", "other_link": "https://pantomatrix.github.io/EMAGE/", "title": "EMAGE: Towards Unified Holistic Co-Speech Gesture Generation via Expressive Masked Audio Gesture Modeling", "abstract": "We propose EMAGE, a framework to generate full-body human gestures from audio\nand masked gestures, encompassing facial, local body, hands, and global\nmovements. To achieve this, we first introduce BEAT2 (BEAT-SMPLX-FLAME), a new\nmesh-level holistic co-speech dataset. BEAT2 combines a MoShed SMPL-X body with\nFLAME head parameters and further refines the modeling of head, neck, and\nfinger movements, offering a community-standardized, high-quality 3D motion\ncaptured dataset. 
EMAGE leverages masked body gesture priors during training to\nboost inference performance. It involves a Masked Audio Gesture Transformer,\nfacilitating joint training on audio-to-gesture generation and masked gesture\nreconstruction to effectively encode audio and body gesture hints. Encoded body\nhints from masked gestures are then separately employed to generate facial and\nbody movements. Moreover, EMAGE adaptively merges speech features from the\naudio's rhythm and content and utilizes four compositional VQ-VAEs to enhance\nthe results' fidelity and diversity. Experiments demonstrate that EMAGE\ngenerates holistic gestures with state-of-the-art performance and is flexible\nin accepting predefined spatial-temporal gesture inputs, generating complete,\naudio-synchronized results. Our code and dataset are available\nhttps://pantomatrix.github.io/EMAGE/", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Haiyang Liu", "Zihao Zhu", "Giorgio Becherini", "YICHEN PENG", "Mingyang Su", "YOU ZHOU", "Xuefei Zhe", "Naoya Iwamoto", "Bo Zheng", "Michael J. Black"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef71"}, "filepath": "data/2403.18548.png", "tags": [], "_media_type": "image", "_rand": 0.9996105367839099, "arXiv_link": "https://arxiv.org/abs/2403.18548", "other_link": "https://github.com/Xiaofeng-life/SFSNiD.", "title": "A Semi-supervised Nighttime Dehazing Baseline with Spatial-Frequency Aware and Realistic Brightness Constraint", "abstract": "Existing research based on deep learning has extensively explored the problem\nof daytime image dehazing. However, few studies have considered the\ncharacteristics of nighttime hazy scenes. There are two distinctions between\nnighttime and daytime haze. First, there may be multiple active colored light\nsources with lower illumination intensity in nighttime scenes, which may cause\nhaze, glow and noise with localized, coupled and frequency inconsistent\ncharacteristics. Second, due to the domain discrepancy between simulated and\nreal-world data, unrealistic brightness may occur when applying a dehazing\nmodel trained on simulated data to real-world data. To address the above two\nissues, we propose a semi-supervised model for real-world nighttime dehazing.\nFirst, the spatial attention and frequency spectrum filtering are implemented\nas a spatial-frequency domain information interaction module to handle the\nfirst issue. Second, a pseudo-label-based retraining strategy and a local\nwindow-based brightness loss for semi-supervised training process is designed\nto suppress haze and glow while achieving realistic brightness. Experiments on\npublic benchmarks validate the effectiveness of the proposed method and its\nsuperiority over state-of-the-art methods. 
The source code and Supplementary\nMaterials are placed in the https://github.com/Xiaofeng-life/SFSNiD.", "keywords": ["Low-level vision"], "authors_list": ["Xiaofeng Cong", "Jie Gui", "Jing Zhang", "Junming Hou", "Hao Shen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef72"}, "filepath": "data/2312.01571.png", "tags": [], "_media_type": "image", "_rand": 0.999981274256185, "arXiv_link": "https://arxiv.org/abs/2312.01571", "other_link": "https://github.com/GaryJiajia/OFv2_ICL_VQA.", "title": "How to Configure Good In-Context Sequence for Visual Question Answering", "abstract": "Inspired by the success of Large Language Models in dealing with new tasks\nvia In-Context Learning (ICL) in NLP, researchers have also developed Large\nVision-Language Models (LVLMs) with ICL capabilities. However, when\nimplementing ICL using these LVLMs, researchers usually resort to the simplest\nway like random sampling to configure the in-context sequence, thus leading to\nsub-optimal results. To enhance the ICL performance, in this study, we use\nVisual Question Answering (VQA) as case study to explore diverse in-context\nconfigurations to find the powerful ones. Additionally, through observing the\nchanges of the LVLM outputs by altering the in-context sequence, we gain\ninsights into the inner properties of LVLMs, improving our understanding of\nthem. Specifically, to explore in-context configurations, we design diverse\nretrieval methods and employ different strategies to manipulate the retrieved\ndemonstrations. Through exhaustive experiments on three VQA datasets: VQAv2,\nVizWiz, and OK-VQA, we uncover three important inner properties of the applied\nLVLM and demonstrate which strategies can consistently improve the ICL VQA\nperformance. Our code is provided in:\nhttps://github.com/GaryJiajia/OFv2_ICL_VQA.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Li Li", "Jiawei Peng", "huiyi chen", "Chongyang Gao", "Xu Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef73"}, "filepath": "data/2404.07610.png", "tags": [], "_media_type": "image", "_rand": 0.9999345892453338, "arXiv_link": "https://arxiv.org/abs/2404.07610", "other_link": "", "title": "Do You Remember? Dense Video Captioning with Cross-Modal Memory Retrieval", "abstract": "There has been significant attention to the research on dense video\ncaptioning, which aims to automatically localize and caption all events within\nuntrimmed video. Several studies introduce methods by designing dense video\ncaptioning as a multitasking problem of event localization and event captioning\nto consider inter-task relations. However, addressing both tasks using only\nvisual input is challenging due to the lack of semantic content. In this study,\nwe address this by proposing a novel framework inspired by the cognitive\ninformation processing of humans. Our model utilizes external memory to\nincorporate prior knowledge. The memory retrieval method is proposed with\ncross-modal video-to-text matching. 
To effectively incorporate retrieved text\nfeatures, the versatile encoder and the decoder with visual and textual\ncross-attention modules are designed. Comparative experiments have been\nconducted to show the effectiveness of the proposed method on ActivityNet\nCaptions and YouCook2 datasets. Experimental results show promising performance\nof our model without extensive pretraining from a large video dataset.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Minkuk Kim", "Hyeon Bae Kim", "Jinyoung Moon", "Jinwoo Choi", "Seong Tae Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef74"}, "filepath": "data/2312.08885.png", "tags": [], "_media_type": "image", "_rand": 0.9998181692266549, "arXiv_link": "https://arxiv.org/abs/2312.08885", "other_link": "", "title": "Towards Text-guided 3D Scene Composition", "abstract": "We are witnessing significant breakthroughs in the technology for generating\n3D objects from text. Existing approaches either leverage large text-to-image\nmodels to optimize a 3D representation or train 3D generators on object-centric\ndatasets. Generating entire scenes, however, remains very challenging as a\nscene contains multiple 3D objects, diverse and scattered. In this work, we\nintroduce SceneWiz3D, a novel approach to synthesize high-fidelity 3D scenes\nfrom text. We marry the locality of objects with globality of scenes by\nintroducing a hybrid 3D representation: explicit for objects and implicit for\nscenes. Remarkably, an object, being represented explicitly, can be either\ngenerated from text using conventional text-to-3D approaches, or provided by\nusers. To configure the layout of the scene and automatically place objects, we\napply the Particle Swarm Optimization technique during the optimization\nprocess. Furthermore, it is difficult for certain parts of the scene (e.g.,\ncorners, occlusion) to receive multi-view supervision, leading to inferior\ngeometry. We incorporate an RGBD panorama diffusion model to mitigate it,\nresulting in high-quality geometry. Extensive evaluation supports that our\napproach achieves superior quality over previous approaches, enabling the\ngeneration of detailed and view-consistent 3D scenes.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Qihang Zhang", "Chaoyang Wang", "Aliaksandr Siarohin", "Peiye Zhuang", "Yinghao Xu", "Ceyuan Yang", "Dahua Lin", "Bolei Zhou", "Sergey Tulyakov", "Hsin-Ying Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef75"}, "filepath": "data/2404.07449.png", "tags": [], "_media_type": "image", "_rand": 0.999961287651265, "arXiv_link": "https://arxiv.org/abs/2404.07449", "other_link": "", "title": "Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs", "abstract": "Integration of Large Language Models (LLMs) into visual domain tasks,\nresulting in visual-LLMs (V-LLMs), has enabled exceptional performance in\nvision-language tasks, particularly for visual question answering (VQA).\nHowever, existing V-LLMs (e.g. BLIP-2, LLaVA) demonstrate weak spatial\nreasoning and localization awareness. 
Despite generating highly descriptive and\nelaborate textual answers, these models fail at simple tasks like\ndistinguishing a left vs right location. In this work, we explore how\nimage-space coordinate based instruction fine-tuning objectives could inject\nspatial awareness into V-LLMs. We discover optimal coordinate representations,\ndata-efficient instruction fine-tuning objectives, and pseudo-data generation\nstrategies that lead to improved spatial awareness in V-LLMs. Additionally, our\nresulting model improves VQA across image and video domains, reduces undesired\nhallucination, and generates better contextual object descriptions. Experiments\nacross 5 vision-language tasks involving 14 different datasets establish the\nclear performance improvements achieved by our proposed framework.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Kanchana Ranasinghe", "Satya Narayan Shukla", "Omid Poursaeed", "Michael Ryoo", "Tsung-Yu Lin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef76"}, "filepath": "data/2405.00906.png", "tags": [], "_media_type": "image", "_rand": 0.9997671664134592, "arXiv_link": "https://arxiv.org/abs/2405.00906", "other_link": "", "title": "Finding Lottery Tickets in Vision Models via Data-driven Spectral Foresight Pruning", "abstract": "Vision transformers have revolutionized computer vision, but their\ncomputational demands present challenges for training and deployment. This\npaper introduces LOTUS (LOttery Transformers with Ultra Sparsity), a novel\nmethod that leverages data lottery ticket selection and sparsity pruning to\naccelerate vision transformer training while maintaining accuracy. Our approach\nfocuses on identifying and utilizing the most informative data subsets and\neliminating redundant model parameters to optimize the training process.\nThrough extensive experiments, we demonstrate the effectiveness of LOTUS in\nachieving rapid convergence and high accuracy with significantly reduced\ncomputational requirements. This work highlights the potential of combining\ndata selection and sparsity techniques for efficient vision transformer\ntraining, opening doors for further research and development in this area.", "keywords": [], "authors_list": ["Leonardo Iurada", "Marco Ciccone", "Tatiana Tommasi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef77"}, "filepath": "data/2403.19904.png", "tags": [], "_media_type": "image", "_rand": 0.999032477150974, "arXiv_link": "https://arxiv.org/abs/2403.19904", "other_link": "https://82magnolia.github.io/fgpl/.", "title": "Fully Geometric Panoramic Localization", "abstract": "We introduce a lightweight and accurate localization method that only\nutilizes the geometry of 2D-3D lines. Given a pre-captured 3D map, our approach\nlocalizes a panorama image, taking advantage of the holistic 360 view. The\nsystem mitigates potential privacy breaches or domain discrepancies by avoiding\ntrained or hand-crafted visual descriptors. 
However, as lines alone can be\nambiguous, we express distinctive yet compact spatial contexts from\nrelationships between lines, namely the dominant directions of parallel lines\nand the intersection between non-parallel lines. The resulting representations\nare efficient in processing time and memory compared to conventional visual\ndescriptor-based methods. Given the groups of dominant line directions and\ntheir intersections, we accelerate the search process to test thousands of pose\ncandidates in less than a millisecond without sacrificing accuracy. We\nempirically show that the proposed 2D-3D matching can localize panoramas for\nchallenging scenes with similar structures, dramatic domain shifts or\nillumination changes. Our fully geometric approach does not involve extensive\nparameter tuning or neural network training, making it a practical algorithm\nthat can be readily deployed in the real world. Project page including the code\nis available through this link: https://82magnolia.github.io/fgpl/.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Junho Kim", "Jiwon Jeong", "Young Min Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef78"}, "filepath": "data/2309.13524.png", "tags": [], "_media_type": "image", "_rand": 0.9996523516638028, "arXiv_link": "https://arxiv.org/abs/2309.13524", "other_link": "https://github.com/River-Zhang/GTA.", "title": "VS: Reconstructing Clothed 3D Human from Single Image via Vertex Shift", "abstract": "Reconstructing 3D clothed human avatars from single images is a challenging\ntask, especially when encountering complex poses and loose clothing. Current\nmethods exhibit limitations in performance, largely attributable to their\ndependence on insufficient 2D image features and inconsistent query methods.\nOwing to this, we present the Global-correlated 3D-decoupling Transformer for\nclothed Avatar reconstruction (GTA), a novel transformer-based architecture\nthat reconstructs clothed human avatars from monocular images. Our approach\nleverages transformer architectures by utilizing a Vision Transformer model as\nan encoder for capturing global-correlated image features. Subsequently, our\ninnovative 3D-decoupling decoder employs cross-attention to decouple tri-plane\nfeatures, using learnable embeddings as queries for cross-plane generation. To\neffectively enhance feature fusion with the tri-plane 3D feature and human body\nprior, we propose a hybrid prior fusion strategy combining spatial and\nprior-enhanced queries, leveraging the benefits of spatial localization and\nhuman body prior knowledge. 
Comprehensive experiments on CAPE and THuman2.0\ndatasets illustrate that our method outperforms state-of-the-art approaches in\nboth geometry and texture reconstruction, exhibiting high robustness to\nchallenging poses and loose clothing, and producing higher-resolution textures.\nCodes will be available at https://github.com/River-Zhang/GTA.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Leyuan Liu", "Yuhan Li", "Yunqi Gao", "Changxin Gao", "Yuanyuan Liu", "Jingying Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef79"}, "filepath": "data/2405.00244.png", "tags": [], "_media_type": "image", "_rand": 0.9995585399845693, "arXiv_link": "https://arxiv.org/abs/2405.00244", "other_link": "https://github.com/yungsyu99/Real-HDRV.", "title": "Towards Real-World HDR Video Reconstruction: A Large-Scale Benchmark Dataset and A Two-Stage Alignment Network", "abstract": "As an important and practical way to obtain high dynamic range (HDR) video,\nHDR video reconstruction from sequences with alternating exposures is still\nless explored, mainly due to the lack of large-scale real-world datasets.\nExisting methods are mostly trained on synthetic datasets, which perform poorly\nin real scenes. In this work, to facilitate the development of real-world HDR\nvideo reconstruction, we present Real-HDRV, a large-scale real-world benchmark\ndataset for HDR video reconstruction, featuring various scenes, diverse motion\npatterns, and high-quality labels. Specifically, our dataset contains 500\nLDRs-HDRs video pairs, comprising about 28,000 LDR frames and 4,000 HDR labels,\ncovering daytime, nighttime, indoor, and outdoor scenes. To our best knowledge,\nour dataset is the largest real-world HDR video reconstruction dataset.\nCorrespondingly, we propose an end-to-end network for HDR video reconstruction,\nwhere a novel two-stage strategy is designed to perform alignment sequentially.\nSpecifically, the first stage performs global alignment with the adaptively\nestimated global offsets, reducing the difficulty of subsequent alignment. The\nsecond stage implicitly performs local alignment in a coarse-to-fine manner at\nthe feature level using the adaptive separable convolution. Extensive\nexperiments demonstrate that: (1) models trained on our dataset can achieve\nbetter performance on real scenes than those trained on synthetic datasets; (2)\nour method outperforms previous state-of-the-art methods. 
Our dataset is\navailable at https://github.com/yungsyu99/Real-HDRV.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Yong Shu", "Liquan Shen", "Xiangyu Hu", "Mengyao Li", "Zihao Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef7a"}, "filepath": "data/2311.18482.png", "tags": [], "_media_type": "image", "_rand": 0.9999726219227177, "arXiv_link": "https://arxiv.org/abs/2311.18482", "other_link": "", "title": "Language Embedded 3D Gaussians for Open-Vocabulary Scene Understanding", "abstract": "Open-vocabulary querying in 3D space is challenging but essential for scene\nunderstanding tasks such as object localization and segmentation.\nLanguage-embedded scene representations have made progress by incorporating\nlanguage features into 3D spaces. However, their efficacy heavily depends on\nneural networks that are resource-intensive in training and rendering. Although\nrecent 3D Gaussians offer efficient and high-quality novel view synthesis,\ndirectly embedding language features in them leads to prohibitive memory usage\nand decreased performance. In this work, we introduce Language Embedded 3D\nGaussians, a novel scene representation for open-vocabulary query tasks.\nInstead of embedding high-dimensional raw semantic features on 3D Gaussians, we\npropose a dedicated quantization scheme that drastically alleviates the memory\nrequirement, and a novel embedding procedure that achieves smoother yet high\naccuracy query, countering the multi-view feature inconsistencies and the\nhigh-frequency inductive bias in point-based representations. Our comprehensive\nexperiments show that our representation achieves the best visual quality and\nlanguage querying accuracy across current language-embedded representations,\nwhile maintaining real-time rendering frame rates on a single desktop GPU.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models", "Efficient and scalable vision"], "authors_list": ["Jin-Chuan Shi", "Miao Wang", "Haobin Duan", "Shaohua Guan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef7b"}, "filepath": "data/2312.02069.png", "tags": [], "_media_type": "image", "_rand": 0.9999978584075074, "arXiv_link": "https://arxiv.org/abs/2312.02069", "other_link": "", "title": "GaussianAvatars: Photorealistic Head Avatars with Rigged 3D Gaussians", "abstract": "We introduce GaussianAvatars, a new method to create photorealistic head\navatars that are fully controllable in terms of expression, pose, and\nviewpoint. The core idea is a dynamic 3D representation based on 3D Gaussian\nsplats that are rigged to a parametric morphable face model. This combination\nfacilitates photorealistic rendering while allowing for precise animation\ncontrol via the underlying parametric model, e.g., through expression transfer\nfrom a driving sequence or by manually changing the morphable model parameters.\nWe parameterize each splat by a local coordinate frame of a triangle and\noptimize for explicit displacement offset to obtain a more accurate geometric\nrepresentation. 
During avatar reconstruction, we jointly optimize for the\nmorphable model parameters and Gaussian splat parameters in an end-to-end\nfashion. We demonstrate the animation capabilities of our photorealistic avatar\nin several challenging scenarios. For instance, we show reenactments from a\ndriving video, where our method outperforms existing works by a significant\nmargin.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Shenhan Qian", "Tobias Kirschstein", "Liam Schoneveld", "Davide Davoli", "Simon Giebenhain", "Matthias Nie\u00dfner"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef7c"}, "filepath": "data/2311.10356.png", "tags": [], "_media_type": "image", "_rand": 0.9994742183610957, "arXiv_link": "https://arxiv.org/abs/2311.10356", "other_link": "", "title": "Garment Recovery with Shape and Deformation Priors", "abstract": "While modeling people wearing tight-fitting clothing has made great strides\nin recent years, loose-fitting clothing remains a challenge. We propose a\nmethod that delivers realistic garment models from real-world images,\nregardless of garment shape or deformation. To this end, we introduce a fitting\napproach that utilizes shape and deformation priors learned from synthetic data\nto accurately capture garment shapes and deformations, including large ones.\nNot only does our approach recover the garment geometry accurately, it also\nyields models that can be directly used by downstream applications such as\nanimation and simulation.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Ren Li", "Corentin Dumery", "Beno\u00eet Guillard", "Pascal Fua"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef7d"}, "filepath": "data/2312.12870.png", "tags": [], "_media_type": "image", "_rand": 0.9998958720762351, "arXiv_link": "https://arxiv.org/abs/2312.12870", "other_link": "https://vjwq.github.io/AV-CONV/.", "title": "The Audio-Visual Conversational Graph: From an Egocentric-Exocentric Perspective", "abstract": "In recent years, the thriving development of research related to egocentric\nvideos has provided a unique perspective for the study of conversational\ninteractions, where both visual and audio signals play a crucial role. While\nmost prior work focus on learning about behaviors that directly involve the\ncamera wearer, we introduce the Ego-Exocentric Conversational Graph Prediction\nproblem, marking the first attempt to infer exocentric conversational\ninteractions from egocentric videos. We propose a unified multi-modal framework\n-- Audio-Visual Conversational Attention (AV-CONV), for the joint prediction of\nconversation behaviors -- speaking and listening -- for both the camera wearer\nas well as all other social partners present in the egocentric video.\nSpecifically, we adopt the self-attention mechanism to model the\nrepresentations across-time, across-subjects, and across-modalities. To\nvalidate our method, we conduct experiments on a challenging egocentric video\ndataset that includes multi-speaker and multi-conversation scenarios. Our\nresults demonstrate the superior performance of our method compared to a series\nof baselines. 
We also present detailed ablation studies to assess the\ncontribution of each component in our model. Check our project page at\nhttps://vjwq.github.io/AV-CONV/.", "keywords": ["Biometrics and human analysis", "Scene analysis and understanding"], "authors_list": ["Wenqi Jia", "Miao Liu", "Hao Jiang", "Ishwarya Ananthabhotla", "James Rehg", "Vamsi Krishna Ithapu", "Ruohan Gao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef7e"}, "filepath": "data/2403.18708.png", "tags": [], "_media_type": "image", "_rand": 0.9992013160846458, "arXiv_link": "https://arxiv.org/abs/2403.18708", "other_link": "", "title": "Dense Vision Transformer Compression with Few Samples", "abstract": "Few-shot model compression aims to compress a large model into a more compact\none with only a tiny training set (even without labels). Block-level pruning\nhas recently emerged as a leading technique in achieving high accuracy and low\nlatency in few-shot CNN compression. But, few-shot compression for Vision\nTransformers (ViT) remains largely unexplored, which presents a new challenge.\nIn particular, the issue of sparse compression exists in traditional CNN\nfew-shot methods, which can only produce very few compressed models of\ndifferent model sizes. This paper proposes a novel framework for few-shot ViT\ncompression named DC-ViT. Instead of dropping the entire block, DC-ViT\nselectively eliminates the attention module while retaining and reusing\nportions of the MLP module. DC-ViT enables dense compression, which outputs\nnumerous compressed models that densely populate the range of model complexity.\nDC-ViT outperforms state-of-the-art few-shot compression methods by a\nsignificant margin of 10 percentage points, along with lower latency in the\ncompression of ViT and its variants.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Hanxiao Zhang", "Yifan Zhou", "Guo-Hua Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef7f"}, "filepath": "data/2306.13643.png", "tags": [], "_media_type": "image", "_rand": 0.9997809764263285, "arXiv_link": "http://export.arxiv.org/abs/2306.13643", "other_link": "https://github.com/cvg/LightGlue.", "title": "Structure-from-Motion from Pixel-wise Correspondences", "abstract": "We introduce LightGlue, a deep neural network that learns to match local\nfeatures across images. We revisit multiple design decisions of SuperGlue, the\nstate of the art in sparse matching, and derive simple but effective\nimprovements. Cumulatively, they make LightGlue more efficient - in terms of\nboth memory and computation, more accurate, and much easier to train. One key\nproperty is that LightGlue is adaptive to the difficulty of the problem: the\ninference is much faster on image pairs that are intuitively easy to match, for\nexample because of a larger visual overlap or limited appearance change. This\nopens up exciting prospects for deploying deep matchers in latency-sensitive\napplications like 3D reconstruction. 
The code and trained models are publicly\navailable at https://github.com/cvg/LightGlue.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Philipp Lindenberger", "Paul-Edouard Sarlin", "Marc Pollefeys"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef80"}, "filepath": "data/2405.19833.png", "tags": [], "_media_type": "image", "_rand": 0.9993934958101125, "arXiv_link": "https://arxiv.org/abs/2405.19833", "other_link": "https://github.com/MartaYang/KITRO.", "title": "KITRO: Refining Human Mesh by 2D Clues and Kinematic-tree Rotation", "abstract": "2D keypoints are commonly used as an additional cue to refine estimated 3D\nhuman meshes. Current methods optimize the pose and shape parameters with a\nreprojection loss on the provided 2D keypoints. Such an approach, while simple\nand intuitive, has limited effectiveness because the optimal solution is hard\nto find in ambiguous parameter space and may sacrifice depth. Additionally,\ndivergent gradients from distal joints complicate and deviate the refinement of\nproximal joints in the kinematic chain. To address these, we introduce\nKinematic-Tree Rotation (KITRO), a novel mesh refinement strategy that\nexplicitly models depth and human kinematic-tree structure. KITRO treats\nrefinement from a bone-wise perspective. Unlike previous methods which perform\ngradient-based optimizations, our method calculates bone directions in closed\nform. By accounting for the 2D pose, bone length, and parent joint's depth, the\ncalculation results in two possible directions for each child joint. We then\nuse a decision tree to trace binary choices for all bones along the human\nskeleton's kinematic-tree to select the most probable hypothesis. Our\nexperiments across various datasets and baseline models demonstrate that KITRO\nsignificantly improves 3D joint estimation accuracy and achieves an ideal 2D\nfit simultaneously. Our code available at: https://github.com/MartaYang/KITRO.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Fengyuan Yang", "Kerui Gu", "Angela Yao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef81"}, "filepath": "data/2312.02432.png", "tags": [], "_media_type": "image", "_rand": 0.9995961558461463, "arXiv_link": "https://arxiv.org/abs/2312.02432", "other_link": "", "title": "Orthogonal Adaptation for Modular Customization of Diffusion Models", "abstract": "Customization techniques for text-to-image models have paved the way for a\nwide range of previously unattainable applications, enabling the generation of\nspecific concepts across diverse contexts and styles. While existing methods\nfacilitate high-fidelity customization for individual concepts or a limited,\npre-defined set of them, they fall short of achieving scalability, where a\nsingle model can seamlessly render countless concepts. In this paper, we\naddress a new problem called Modular Customization, with the goal of\nefficiently merging customized models that were fine-tuned independently for\nindividual concepts. 
This allows the merged model to jointly synthesize\nconcepts in one image without compromising fidelity or incurring any additional\ncomputational costs.\n To address this problem, we introduce Orthogonal Adaptation, a method\ndesigned to encourage the customized models, which do not have access to each\nother during fine-tuning, to have orthogonal residual weights. This ensures\nthat during inference time, the customized models can be summed with minimal\ninterference.\n Our proposed method is both simple and versatile, applicable to nearly all\noptimizable weights in the model architecture. Through an extensive set of\nquantitative and qualitative evaluations, our method consistently outperforms\nrelevant baselines in terms of efficiency and identity preservation,\ndemonstrating a significant leap toward scalable customization of diffusion\nmodels.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Ryan Po", "Guandao Yang", "Kfir Aberman", "Gordon Wetzstein"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef82"}, "filepath": "data/2306.15612.png", "tags": [], "_media_type": "image", "_rand": 0.9996196484803287, "arXiv_link": "https://arxiv.org/abs/2306.15612", "other_link": "", "title": "Adaptive Multi-Modal Cross-Entropy Loss for Stereo Matching", "abstract": "Despite the great success of deep learning in stereo matching, recovering\naccurate disparity maps is still challenging. Currently, L1 and cross-entropy\nare the two most widely used losses for stereo network training. Compared with\nthe former, the latter usually performs better thanks to its probability\nmodeling and direct supervision to the cost volume. However, how to accurately\nmodel the stereo ground-truth for cross-entropy loss remains largely\nunder-explored. Existing works simply assume that the ground-truth\ndistributions are uni-modal, which ignores the fact that most of the edge\npixels can be multi-modal. In this paper, a novel adaptive multi-modal\ncross-entropy loss (ADL) is proposed to guide the networks to learn different\ndistribution patterns for each pixel. Moreover, we optimize the disparity\nestimator to further alleviate the bleeding or misalignment artifacts in\ninference. Extensive experimental results show that our method is generic and\ncan help classic stereo networks regain state-of-the-art performance. In\nparticular, GANet with our method ranks $1^{st}$ on both the KITTI 2015 and\n2012 benchmarks among the published methods. Meanwhile, excellent\nsynthetic-to-realistic generalization performance can be achieved by simply\nreplacing the traditional loss with ours.", "keywords": [], "authors_list": ["Peng Xu", "Zhiyu Xiang", "Chengyu Qiao", "Jingyun Fu", "Tianyu Pu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef83"}, "filepath": "data/2310.11440.png", "tags": [], "_media_type": "image", "_rand": 0.9992789849108366, "arXiv_link": "https://arxiv.org/abs/2310.11440", "other_link": "", "title": "EvalCrafter: Benchmarking and Evaluating Large Video Generation Models", "abstract": "The vision and language generative models have been overgrown in recent\nyears. 
For video generation, various open-sourced models and public-available\nservices have been developed to generate high-quality videos. However, these\nmethods often use a few metrics, e.g., FVD or IS, to evaluate the performance.\nWe argue that it is hard to judge the large conditional generative models from\nthe simple metrics since these models are often trained on very large datasets\nwith multi-aspect abilities. Thus, we propose a novel framework and pipeline\nfor exhaustively evaluating the performance of the generated videos. Our\napproach involves generating a diverse and comprehensive list of 700 prompts\nfor text-to-video generation, which is based on an analysis of real-world user\ndata and generated with the assistance of a large language model. Then, we\nevaluate the state-of-the-art video generative models on our carefully designed\nbenchmark, in terms of visual qualities, content qualities, motion qualities,\nand text-video alignment with 17 well-selected objective metrics. To obtain the\nfinal leaderboard of the models, we further fit a series of coefficients to\nalign the objective metrics to the users' opinions. Based on the proposed human\nalignment method, our final score shows a higher correlation than simply\naveraging the metrics, showing the effectiveness of the proposed evaluation\nmethod.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Yaofang Liu", "Xiaodong Cun", "Xuebo Liu", "Xintao Wang", "Yong Zhang", "Haoxin Chen", "Yang Liu", "Tieyong Zeng", "Raymond Chan", "Ying Shan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef84"}, "filepath": "data/2404.03159.png", "tags": [], "_media_type": "image", "_rand": 0.9994664651983673, "arXiv_link": "https://arxiv.org/abs/2404.03159", "other_link": "https://github.com/cwc1260/HandDiff.", "title": "HandDiff: 3D Hand Pose Estimation with Diffusion on Image-Point Cloud", "abstract": "Extracting keypoint locations from input hand frames, known as 3D hand pose\nestimation, is a critical task in various human-computer interaction\napplications. Essentially, the 3D hand pose estimation can be regarded as a 3D\npoint subset generative problem conditioned on input frames. Thanks to the\nrecent significant progress on diffusion-based generative models, hand pose\nestimation can also benefit from the diffusion model to estimate keypoint\nlocations with high quality. However, directly deploying the existing diffusion\nmodels to solve hand pose estimation is non-trivial, since they cannot achieve\nthe complex permutation mapping and precise localization. Based on this\nmotivation, this paper proposes HandDiff, a diffusion-based hand pose\nestimation model that iteratively denoises accurate hand pose conditioned on\nhand-shaped image-point clouds. In order to recover keypoint permutation and\naccurate location, we further introduce joint-wise condition and local detail\ncondition. Experimental results demonstrate that the proposed HandDiff\nsignificantly outperforms the existing approaches on four challenging hand pose\nbenchmark datasets. 
Codes and pre-trained models are publicly available at\nhttps://github.com/cwc1260/HandDiff.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["WENCAN CHENG", "WENCAN CHENG", "Hao Tang", "Luc Van Gool", "Jong Hwan Ko"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef85"}, "filepath": "data/2312.03732.png", "tags": [], "_media_type": "image", "_rand": 0.9995912619509285, "arXiv_link": "https://arxiv.org/abs/2312.03732", "other_link": "", "title": "Tuning Stable Rank Shrinkage: Aiming at the Overlooked Structural Risk in Fine-tuning", "abstract": "As large language models (LLMs) have become increasingly compute and memory\nintensive, parameter-efficient fine-tuning (PEFT) methods are now a common\nstrategy to fine-tune LLMs. A popular PEFT method is Low-Rank Adapters (LoRA),\nwhich adds trainable low-rank \"adapters\" to selected layers. Each adapter\nconsists of a low-rank matrix product, multiplicatively scaled by a\nrank-dependent factor. This scaling factor, which divides adapters by a factor\nof the rank, results in slowed learning and stunted performance for LoRA with\nhigher-rank adapters. Consequently, the use of LoRA in practice has generally\nbeen limited to very low ranks. In this work, we study the impact of the\nscaling factor on the learning process and prove that LoRA adapters should be\ndivided by a factor of the square root of the rank. Modifying LoRA with the\nappropriate scaling factor, which we call the rank-stabilized LoRA (rsLoRA)\nmethod, easily provides for a fine-tuning compute/performance trade-off, where\nlarger ranks can be used to trade off increased computational resources during\ntraining for better fine-tuning performance, with no change in inference\ncomputing cost.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Sicong Shen", "Yang Zhou", "Bingzheng Wei", "Eric Chang", "Yan Xu"], "category_name": "Computation and Language", "all_categories": ["Computation and Language", "Machine Learning", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef86"}, "filepath": "data/2401.01173.png", "tags": [], "_media_type": "image", "_rand": 0.9994695980203137, "arXiv_link": "https://arxiv.org/abs/2401.01173", "other_link": "", "title": "En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D Synthetic Data", "abstract": "We present En3D, an enhanced generative scheme for sculpting high-quality 3D\nhuman avatars. Unlike previous works that rely on scarce 3D datasets or limited\n2D collections with imbalanced viewing angles and imprecise pose priors, our\napproach aims to develop a zero-shot 3D generative scheme capable of producing\nvisually realistic, geometrically accurate and content-wise diverse 3D humans\nwithout relying on pre-existing 3D or 2D assets. To address this challenge, we\nintroduce a meticulously crafted workflow that implements accurate physical\nmodeling to learn the enhanced 3D generative model from synthetic 2D data.\nDuring inference, we integrate optimization modules to bridge the gap between\nrealistic appearances and coarse 3D shapes. 
Specifically, En3D comprises three\nmodules: a 3D generator that accurately models generalizable 3D humans with\nrealistic appearance from synthesized balanced, diverse, and structured human\nimages; a geometry sculptor that enhances shape quality using multi-view normal\nconstraints for intricate human anatomy; and a texturing module that\ndisentangles explicit texture maps with fidelity and editability, leveraging\nsemantical UV partitioning and a differentiable rasterizer. Experimental\nresults show that our approach significantly outperforms prior works in terms\nof image quality, geometry accuracy and content diversity. We also showcase the\napplicability of our generated avatars for animation and editing, as well as\nthe scalability of our approach for content-style free adaptation.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Yifang Men", "Biwen Lei", "Yuan Yao", "Miaomiao Cui", "Zhouhui Lian", "Xuansong Xie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef87"}, "filepath": "data/2312.02480.png", "tags": [], "_media_type": "image", "_rand": 0.9997063445538427, "arXiv_link": "https://arxiv.org/abs/2312.02480", "other_link": "", "title": "Differentiable Point-based Inverse Rendering", "abstract": "We present differentiable point-based inverse rendering, DPIR, an\nanalysis-by-synthesis method that processes images captured under diverse\nilluminations to estimate shape and spatially-varying BRDF. To this end, we\nadopt point-based rendering, eliminating the need for multiple samplings per\nray, typical of volumetric rendering, thus significantly enhancing the speed of\ninverse rendering. To realize this idea, we devise a hybrid point-volumetric\nrepresentation for geometry and a regularized basis-BRDF representation for\nreflectance. The hybrid geometric representation enables fast rendering through\npoint-based splatting while retaining the geometric details and stability\ninherent to SDF-based representations. The regularized basis-BRDF mitigates the\nill-posedness of inverse rendering stemming from limited light-view angular\nsamples. We also propose an efficient shadow detection method using point-based\nshadow map rendering. Our extensive evaluations demonstrate that DPIR\noutperforms prior works in terms of reconstruction accuracy, computational\nefficiency, and memory footprint. Furthermore, our explicit point-based\nrepresentation and rendering enables intuitive geometry and reflectance\nediting.", "keywords": ["Efficient and scalable vision", "Computational imaging and physics-based vision"], "authors_list": ["Hoon-Gyu Chung", "Seokjun Choi", "Seung-Hwan Baek"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef88"}, "filepath": "data/2402.17351.png", "tags": [], "_media_type": "image", "_rand": 0.999052395724012, "arXiv_link": "https://arxiv.org/abs/2402.17351", "other_link": "", "title": "ICP-Flow: LiDAR Scene Flow Estimation with ICP", "abstract": "Scene flow characterizes the 3D motion between two LiDAR scans captured by an\nautonomous vehicle at nearby timesteps. 
Prevalent methods consider scene flow\nas point-wise unconstrained flow vectors that can be learned by either\nlarge-scale training beforehand or time-consuming optimization at inference.\nHowever, these methods do not take into account that objects in autonomous\ndriving often move rigidly. We incorporate this rigid-motion assumption into\nour design, where the goal is to associate objects over scans and then estimate\nthe locally rigid transformations. We propose ICP-Flow, a learning-free flow\nestimator. The core of our design is the conventional Iterative Closest Point\n(ICP) algorithm, which aligns the objects over time and outputs the\ncorresponding rigid transformations. Crucially, to aid ICP, we propose a\nhistogram-based initialization that discovers the most likely translation, thus\nproviding a good starting point for ICP. The complete scene flow is then\nrecovered from the rigid transformations. We outperform state-of-the-art\nbaselines, including supervised models, on the Waymo dataset and perform\ncompetitively on Argoverse-v2 and nuScenes. Further, we train a feedforward\nneural network, supervised by the pseudo labels from our model, and achieve top\nperformance among all models capable of real-time inference. We validate the\nadvantage of our model on scene flow estimation with longer temporal gaps, up\nto 0.4 seconds where other models fail to deliver meaningful results.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Yancong Lin", "Holger Caesar"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef89"}, "filepath": "data/2404.06350.png", "tags": [], "_media_type": "image", "_rand": 0.999395821012404, "arXiv_link": "https://arxiv.org/abs/2404.06350", "other_link": "https://github.com/ljzycmd/DFRSC}.", "title": "Rolling Shutter Correction with Intermediate Distortion Flow Estimation", "abstract": "This paper proposes to correct the rolling shutter (RS) distorted images by\nestimating the distortion flow from the global shutter (GS) to RS directly.\nExisting methods usually perform correction using the undistortion flow from\nthe RS to GS. They initially predict the flow from consecutive RS frames,\nsubsequently rescaling it as the displacement fields from the RS frame to the\nunderlying GS image using time-dependent scaling factors. Following this,\nRS-aware forward warping is employed to convert the RS image into its GS\ncounterpart. Nevertheless, this strategy is prone to two shortcomings. First,\nthe undistortion flow estimation is rendered inaccurate by merely linear\nscaling the flow, due to the complex non-linear motion nature. Second, RS-aware\nforward warping often results in unavoidable artifacts. To address these\nlimitations, we introduce a new framework that directly estimates the\ndistortion flow and rectifies the RS image with the backward warping operation.\nMore specifically, we first propose a global correlation-based flow attention\nmechanism to estimate the initial distortion flow and GS feature jointly, which\nare then refined by the following coarse-to-fine decoder layers. Additionally,\na multi-distortion flow prediction strategy is integrated to mitigate the issue\nof inaccurate flow estimation further. 
Experimental results validate the\neffectiveness of the proposed method, which outperforms state-of-the-art\napproaches on various benchmarks while maintaining high efficiency. The project\nis available at \\url{https://github.com/ljzycmd/DFRSC}.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Mingdeng Cao", "Sidi Yang", "Yujiu Yang", "Yinqiang Zheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef8a"}, "filepath": "data/2405.19283.png", "tags": [], "_media_type": "image", "_rand": 0.9996566885602932, "arXiv_link": "https://arxiv.org/abs/2405.19283", "other_link": "", "title": "Programmable Motion Generation for Open-set Motion Control Tasks", "abstract": "Character animation in real-world scenarios necessitates a variety of\nconstraints, such as trajectories, key-frames, interactions, etc. Existing\nmethodologies typically treat single or a finite set of these constraint(s) as\nseparate control tasks. They are often specialized, and the tasks they address\nare rarely extendable or customizable. We categorize these as solutions to the\nclose-set motion control problem. In response to the complexity of practical\nmotion control, we propose and attempt to solve the open-set motion control\nproblem. This problem is characterized by an open and fully customizable set of\nmotion control tasks. To address this, we introduce a new paradigm,\nprogrammable motion generation. In this paradigm, any given motion control task\nis broken down into a combination of atomic constraints. These constraints are\nthen programmed into an error function that quantifies the degree to which a\nmotion sequence adheres to them. We utilize a pre-trained motion generation\nmodel and optimize its latent code to minimize the error function of the\ngenerated motion. Consequently, the generated motion not only inherits the\nprior of the generative model but also satisfies the required constraints.\nExperiments show that we can generate high-quality motions when addressing a\nwide range of unseen tasks. These tasks encompass motion control by motion\ndynamics, geometric constraints, physical laws, interactions with scenes,\nobjects or the character own body parts, etc. All of these are achieved in a\nunified approach, without the need for ad-hoc paired training data collection\nor specialized network designs. During the programming of novel tasks, we\nobserved the emergence of new skills beyond those of the prior model. With the\nassistance of large language models, we also achieved automatic programming. 
We\nhope that this work will pave the way for the motion control of general AI\nagents.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Hanchao Liu", "Xiaohang Zhan", "Shaoli Huang", "Tai-Jiang Mu", "Ying Shan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef8b"}, "filepath": "data/2312.02145.png", "tags": [], "_media_type": "image", "_rand": 0.9994384569446503, "arXiv_link": "https://arxiv.org/abs/2312.02145", "other_link": "https://marigoldmonodepth.github.io.", "title": "Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation", "abstract": "Monocular depth estimation is a fundamental computer vision task. Recovering\n3D depth from a single image is geometrically ill-posed and requires scene\nunderstanding, so it is not surprising that the rise of deep learning has led\nto a breakthrough. The impressive progress of monocular depth estimators has\nmirrored the growth in model capacity, from relatively modest CNNs to large\nTransformer architectures. Still, monocular depth estimators tend to struggle\nwhen presented with images with unfamiliar content and layout, since their\nknowledge of the visual world is restricted by the data seen during training,\nand challenged by zero-shot generalization to new domains. This motivates us to\nexplore whether the extensive priors captured in recent generative diffusion\nmodels can enable better, more generalizable depth estimation. We introduce\nMarigold, a method for affine-invariant monocular depth estimation that is\nderived from Stable Diffusion and retains its rich prior knowledge. The\nestimator can be fine-tuned in a couple of days on a single GPU using only\nsynthetic training data. It delivers state-of-the-art performance across a wide\nrange of datasets, including over 20% performance gains in specific cases.\nProject page: https://marigoldmonodepth.github.io.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Bingxin Ke", "Anton Obukhov", "Shengyu Huang", "Nando Metzger", "Rodrigo Caye Daudt", "Konrad Schindler"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef8c"}, "filepath": "data/2312.08869.png", "tags": [], "_media_type": "image", "_rand": 0.9997570053905689, "arXiv_link": "https://arxiv.org/abs/2312.08869", "other_link": "", "title": "I'M HOI: Inertia-aware Monocular Capture of 3D Human-Object Interactions", "abstract": "We are living in a world surrounded by diverse and \"smart\" devices with rich\nmodalities of sensing ability. Conveniently capturing the interactions between\nus humans and these objects remains far-reaching. In this paper, we present\nI'm-HOI, a monocular scheme to faithfully capture the 3D motions of both the\nhuman and object in a novel setting: using a minimal amount of RGB camera and\nobject-mounted Inertial Measurement Unit (IMU). It combines general motion\ninference and category-aware refinement. For the former, we introduce a\nholistic human-object tracking method to fuse the IMU signals and the RGB\nstream and progressively recover the human motions and subsequently the\ncompanion object motions. 
For the latter, we tailor a category-aware motion\ndiffusion model, which is conditioned on both the raw IMU observations and the\nresults from the previous stage under over-parameterization representation. It\nsignificantly refines the initial results and generates vivid body, hand, and\nobject motions. Moreover, we contribute a large dataset with ground truth human\nand object motions, dense RGB inputs, and rich object-mounted IMU measurements.\nExtensive experiments demonstrate the effectiveness of I'm-HOI under a hybrid\ncapture setting. Our dataset and code will be released to the community.", "keywords": [], "authors_list": ["Chengfeng Zhao", "Juze Zhang", "Jiashen Du", "Ziwei Shan", "Junye Wang", "Jingyi Yu", "Jingya Wang", "Lan Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef8d"}, "filepath": "data/2312.07488.png", "tags": [], "_media_type": "image", "_rand": 0.9990846615077125, "arXiv_link": "https://arxiv.org/abs/2312.07488", "other_link": "https://github.com/opendilab/LMDrive", "title": "LMDrive: Closed-Loop End-to-End Driving with Large Language Models", "abstract": "Despite significant recent progress in the field of autonomous driving,\nmodern methods still struggle and can incur serious accidents when encountering\nlong-tail unforeseen events and challenging urban scenarios. On the one hand,\nlarge language models (LLM) have shown impressive reasoning capabilities that\napproach \"Artificial General Intelligence\". On the other hand, previous\nautonomous driving methods tend to rely on limited-format inputs (e.g. sensor\ndata and navigation waypoints), restricting the vehicle's ability to understand\nlanguage information and interact with humans. To this end, this paper\nintroduces LMDrive, a novel language-guided, end-to-end, closed-loop autonomous\ndriving framework. LMDrive uniquely processes and integrates multi-modal sensor\ndata with natural language instructions, enabling interaction with humans and\nnavigation software in realistic instructional settings. To facilitate further\nresearch in language-based closed-loop autonomous driving, we also publicly\nrelease the corresponding dataset which includes approximately 64K\ninstruction-following data clips, and the LangAuto benchmark that tests the\nsystem's ability to handle complex instructions and challenging driving\nscenarios. Extensive closed-loop experiments are conducted to demonstrate\nLMDrive's effectiveness. To the best of our knowledge, we're the very first\nwork to leverage LLMs for closed-loop end-to-end autonomous driving. Codes,\nmodels, and datasets can be found at https://github.com/opendilab/LMDrive", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Hao Shao", "Yuxuan Hu", "Letian Wang", "Guanglu Song", "Steven L. 
Waslander", "Yu Liu", "Hongsheng Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef8e"}, "filepath": "data/2403.01316.png", "tags": [], "_media_type": "image", "_rand": 0.9990522833610652, "arXiv_link": "https://arxiv.org/abs/2403.01316", "other_link": "https://tum-traffic-dataset.github.io/tumtraf-v2x.", "title": "TUMTraf V2X Cooperative Perception Dataset", "abstract": "Cooperative perception offers several benefits for enhancing the capabilities\nof autonomous vehicles and improving road safety. Using roadside sensors in\naddition to onboard sensors increases reliability and extends the sensor range.\nExternal sensors offer higher situational awareness for automated vehicles and\nprevent occlusions. We propose CoopDet3D, a cooperative multi-modal fusion\nmodel, and TUMTraf-V2X, a perception dataset, for the cooperative 3D object\ndetection and tracking task. Our dataset contains 2,000 labeled point clouds\nand 5,000 labeled images from five roadside and four onboard sensors. It\nincludes 30k 3D boxes with track IDs and precise GPS and IMU data. We labeled\neight categories and covered occlusion scenarios with challenging driving\nmaneuvers, like traffic violations, near-miss events, overtaking, and U-turns.\nThrough multiple experiments, we show that our CoopDet3D camera-LiDAR fusion\nmodel achieves an increase of +14.36 3D mAP compared to a vehicle camera-LiDAR\nfusion model. Finally, we make our dataset, model, labeling tool, and dev-kit\npublicly available on our website:\nhttps://tum-traffic-dataset.github.io/tumtraf-v2x.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Walter Zimmer", "Gerhard Arya Wardana", "Suren Sritharan", "Xingcheng Zhou", "Rui Song", "Alois Knoll"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef8f"}, "filepath": "data/2403.14198v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993929704116723, "arXiv_link": "https://arxiv.org/abs/2403.14198v1", "other_link": "https://github.com/liguopeng0923/UCVGL.", "title": "Unleashing Unlabeled Data: A Paradigm for Cross-View Geo-Localization", "abstract": "This paper investigates the effective utilization of unlabeled data for\nlarge-area cross-view geo-localization (CVGL), encompassing both unsupervised\nand semi-supervised settings. Common approaches to CVGL rely on\nground-satellite image pairs and employ label-driven supervised training.\nHowever, the cost of collecting precise cross-view image pairs hinders the\ndeployment of CVGL in real-life scenarios. Without the pairs, CVGL will be more\nchallenging to handle the significant imaging and spatial gaps between ground\nand satellite images. To this end, we propose an unsupervised framework\nincluding a cross-view projection to guide the model for retrieving initial\npseudo-labels and a fast re-ranking mechanism to refine the pseudo-labels by\nleveraging the fact that ``the perfectly paired ground-satellite image is\nlocated in a unique and identical scene\". The framework exhibits competitive\nperformance compared with supervised works on three open-source benchmarks. 
Our\ncode and models will be released on https://github.com/liguopeng0923/UCVGL.", "keywords": ["Remote sensing and photogrammetry"], "authors_list": ["Guopeng Li", "Ming Qian", "Gui-Song Xia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef90"}, "filepath": "data/2310.17154.png", "tags": [], "_media_type": "image", "_rand": 0.9991804074155727, "arXiv_link": "https://arxiv.org/abs/2310.17154", "other_link": "", "title": "Deep Imbalanced Regression via Hierarchical Classification Adjustment", "abstract": "Regression tasks in computer vision, such as age estimation or counting, are\noften formulated into classification by quantizing the target space into\nclasses. Yet real-world data is often imbalanced -- the majority of training\nsamples lie in a head range of target values, while a minority of samples span\na usually larger tail range. By selecting the class quantization, one can\nadjust imbalanced regression targets into balanced classification outputs,\nthough there are trade-offs in balancing classification accuracy and\nquantization error. To improve regression performance over the entire range of\ndata, we propose to construct hierarchical classifiers for solving imbalanced\nregression tasks. The fine-grained classifiers limit the quantization error\nwhile being modulated by the coarse predictions to ensure high accuracy.\nStandard hierarchical classification approaches, however, when applied to the\nregression problem, fail to ensure that predicted ranges remain consistent\nacross the hierarchy. As such, we propose a range-preserving distillation\nprocess that can effectively learn a single classifier from the set of\nhierarchical classifiers. Our novel hierarchical classification adjustment\n(HCA) for imbalanced regression shows superior results on three diverse tasks:\nage estimation, crowd counting and depth estimation. We will release the source\ncode upon acceptance.", "keywords": ["Biometrics and human analysis", "Efficient and scalable vision"], "authors_list": ["Haipeng Xiong", "Angela Yao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef91"}, "filepath": "data/2403.16405.png", "tags": [], "_media_type": "image", "_rand": 0.9995906112193668, "arXiv_link": "https://arxiv.org/abs/2403.16405", "other_link": "", "title": "Ensemble Diversity Facilitates Adversarial Transferability", "abstract": "The integration of an ensemble of deep learning models has been extensively\nexplored to enhance defense against adversarial attacks. The diversity among\nsub-models increases the attack cost required to deceive the majority of the\nensemble, thereby improving the adversarial robustness. While existing\napproaches mainly center on increasing diversity in feature representations or\ndispersion of first-order gradients with respect to input, the limited\ncorrelation between these diversity metrics and adversarial robustness\nconstrains the performance of ensemble adversarial defense. In this work, we\naim to enhance ensemble diversity by reducing attack transferability. We\nidentify second-order gradients, which depict the loss curvature, as a key\nfactor in adversarial robustness. 
Computing the Hessian matrix involved in\nsecond-order gradients is computationally expensive. To address this, we\napproximate the Hessian-vector product using differential approximation. Given\nthat low curvature provides better robustness, our ensemble model was designed\nto consider the influence of curvature among different sub-models. We introduce\na novel regularizer to train multiple more-diverse low-curvature network\nmodels. Extensive experiments across various datasets demonstrate that our\nensemble model exhibits superior robustness against a range of attacks,\nunderscoring the effectiveness of our approach.", "keywords": [], "authors_list": ["Bowen Tang", "Zheng Wang", "Yi Bin", "Qi Dou", "Yang Yang", "Heng Tao Shen"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Cryptography and Security", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef92"}, "filepath": "data/2403.19235.png", "tags": [], "_media_type": "image", "_rand": 0.9999536801109492, "arXiv_link": "https://arxiv.org/abs/2403.19235", "other_link": "", "title": "DreamSalon: A Staged Diffusion Framework for Preserving Identity-Context in Editable Face Generation", "abstract": "While large-scale pre-trained text-to-image models can synthesize diverse and\nhigh-quality human-centered images, novel challenges arise with a nuanced task\nof \"identity fine editing\": precisely modifying specific features of a subject\nwhile maintaining its inherent identity and context. Existing personalization\nmethods either require time-consuming optimization or learning additional\nencoders, adept in \"identity re-contextualization\". However, they often\nstruggle with detailed and sensitive tasks like human face editing. To address\nthese challenges, we introduce DreamSalon, a noise-guided, staged-editing\nframework, uniquely focusing on detailed image manipulations and\nidentity-context preservation. By discerning editing and boosting stages via\nthe frequency and gradient of predicted noises, DreamSalon first performs\ndetailed manipulations on specific features in the editing stage, guided by\nhigh-frequency information, and then employs stochastic denoising in the\nboosting stage to improve image quality. For more precise editing, DreamSalon\nsemantically mixes source and target textual prompts, guided by differences in\ntheir embedding covariances, to direct the model's focus on specific\nmanipulation areas. 
Our experiments demonstrate DreamSalon's ability to\nefficiently and faithfully edit fine details on human faces, outperforming\nexisting methods both qualitatively and quantitatively.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Haonan Lin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef93"}, "filepath": "data/2405.10286.png", "tags": [], "_media_type": "image", "_rand": 0.9997575094065713, "arXiv_link": "https://arxiv.org/abs/2405.10286", "other_link": "", "title": "FFF: Fixing Flawed Foundations in contrastive pre-training results in very strong Vision-Language models", "abstract": "Despite noise and caption quality having been acknowledged as important\nfactors impacting vision-language contrastive pre-training, in this paper, we\nshow that the full potential of improving the training process by addressing\nsuch issues is yet to be realized. Specifically, we firstly study and analyze\ntwo issues affecting training: incorrect assignment of negative pairs, and low\ncaption quality and diversity. Then, we devise effective solutions for\naddressing both problems, which essentially require training with multiple true\npositive pairs. Finally, we propose training with sigmoid loss to address such\na requirement. We show very large gains over the current state-of-the-art for\nboth image recognition ($\\sim +6\\%$ on average over 11 datasets) and image\nretrieval ($\\sim +19\\%$ on Flickr30k and $\\sim +15\\%$ on MSCOCO).", "keywords": [], "authors_list": ["Adrian Bulat", "Yassine Ouali", "Georgios Tzimiropoulos"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef94"}, "filepath": "data/2403.12030v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991655512689543, "arXiv_link": "https://arxiv.org/abs/2403.12030v1", "other_link": "https://github.com/sun-hailong/CVPR24-Ease", "title": "Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning", "abstract": "Class-Incremental Learning (CIL) requires a learning system to continually\nlearn new classes without forgetting. Despite the strong performance of\nPre-Trained Models (PTMs) in CIL, a critical issue persists: learning new\nclasses often results in the overwriting of old ones. Excessive modification of\nthe network causes forgetting, while minimal adjustments lead to an inadequate\nfit for new classes. As a result, it is desired to figure out a way of\nefficient model updating without harming former knowledge. In this paper, we\npropose ExpAndable Subspace Ensemble (EASE) for PTM-based CIL. To enable model\nupdating without conflict, we train a distinct lightweight adapter module for\neach new task, aiming to create task-specific subspaces. These adapters span a\nhigh-dimensional feature space, enabling joint decision-making across multiple\nsubspaces. As data evolves, the expanding subspaces render the old class\nclassifiers incompatible with new-stage spaces. Correspondingly, we design a\nsemantic-guided prototype complement strategy that synthesizes old classes' new\nfeatures without using any old class instance. 
Extensive experiments on seven\nbenchmark datasets verify EASE's state-of-the-art performance. Code is\navailable at: https://github.com/sun-hailong/CVPR24-Ease", "keywords": ["Efficient and scalable vision"], "authors_list": ["Da-Wei Zhou", "Hai-Long Sun", "Han-Jia Ye", "De-Chuan Zhan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef95"}, "filepath": "data/2312.02512v2.png", "tags": [], "_media_type": "image", "_rand": 0.99947472843012, "arXiv_link": "https://arxiv.org/html/2312.02512v2", "other_link": "https://choijeongsoo.github.io/av2av.", "title": "AV2AV: Direct Audio-Visual Speech to Audio-Visual Speech Translation with Unified Audio-Visual Speech Representation", "abstract": "This paper proposes a novel direct Audio-Visual Speech to Audio-Visual Speech\nTranslation (AV2AV) framework, where the input and output of the system are\nmultimodal (i.e., audio and visual speech). With the proposed AV2AV, two key\nadvantages can be brought: 1) We can perform real-like conversations with\nindividuals worldwide in a virtual meeting by utilizing our own primary\nlanguages. In contrast to Speech-to-Speech Translation (A2A), which solely\ntranslates between audio modalities, the proposed AV2AV directly translates\nbetween audio-visual speech. This capability enhances the dialogue experience\nby presenting synchronized lip movements along with the translated speech. 2)\nWe can improve the robustness of the spoken language translation system. By\nemploying the complementary information of audio-visual speech, the system can\neffectively translate spoken language even in the presence of acoustic noise,\nshowcasing robust performance. To mitigate the problem of the absence of a\nparallel AV2AV translation dataset, we propose to train our spoken language\ntranslation system with the audio-only dataset of A2A. This is done by learning\nunified audio-visual speech representations through self-supervised learning in\nadvance to train the translation system. Moreover, we propose an AV-Renderer\nthat can generate raw audio and video in parallel. It is designed with\nzero-shot speaker modeling, thus the speaker in source audio-visual speech can\nbe maintained at the target translated audio-visual speech. The effectiveness\nof AV2AV is evaluated with extensive experiments in a many-to-many language\ntranslation setting. 
Demo page is available on\nhttps://choijeongsoo.github.io/av2av.", "keywords": ["Multimodal models and vision-language models", "Image and video generation and manipulation"], "authors_list": ["Jeongsoo Choi", "Se Jin Park", "Minsu Kim", "Yong Man Ro"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Multimedia", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef96"}, "filepath": "data/2403.14886.png", "tags": [], "_media_type": "image", "_rand": 0.9996660018757796, "arXiv_link": "https://arxiv.org/abs/2403.14886", "other_link": "https://github.com/zeeshanhayder/DSGG}.", "title": "DSGG: Dense Relation Transformer for an End-to-end Scene Graph Generation", "abstract": "Scene graph generation aims to capture detailed spatial and semantic\nrelationships between objects in an image, which is challenging due to\nincomplete labelling, long-tailed relationship categories, and relational\nsemantic overlap. Existing Transformer-based methods either employ distinct\nqueries for objects and predicates or utilize holistic queries for relation\ntriplets and hence often suffer from limited capacity in learning low-frequency\nrelationships. In this paper, we present a new Transformer-based method, called\nDSGG, that views scene graph detection as a direct graph prediction problem\nbased on a unique set of graph-aware queries. In particular, each graph-aware\nquery encodes a compact representation of both the node and all of its\nrelations in the graph, acquired through the utilization of a relaxed sub-graph\nmatching during the training process. Moreover, to address the problem of\nrelational semantic overlap, we utilize a strategy for relation distillation,\naiming to efficiently learn multiple instances of semantic relationships.\nExtensive experiments on the VG and the PSG datasets show that our model\nachieves state-of-the-art results, showing a significant improvement of 3.5\\%\nand 6.7\\% in mR@50 and mR@100 for the scene-graph generation task and achieves\nan even more substantial improvement of 8.5\\% and 10.3\\% in mR@50 and mR@100\nfor the panoptic scene graph generation task. Code is available at\n\\url{https://github.com/zeeshanhayder/DSGG}.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Zeeshan Hayder", "Xuming He"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef97"}, "filepath": "data/2309.00024.png", "tags": [], "_media_type": "image", "_rand": 0.9991711049387013, "arXiv_link": "https://arxiv.org/abs/2309.00024", "other_link": "", "title": "Learn from View Correlation: An Anchor Enhancement Strategy for Multi-view Clustering", "abstract": "Anchor-based multi-view graph clustering (AMVGC) has received abundant\nattention owing to its high efficiency and the capability to capture\ncomplementary structural information across multiple views. Intuitively, a\nhigh-quality anchor graph plays an essential role in the success of AMVGC.\nHowever, the existing AMVGC methods only consider single-structure information,\ni.e., local or global structure, which provides insufficient information for\nthe learning task. To be specific, the over-scattered global structure leads to\nlearned anchors failing to depict the cluster partition well. 
In contrast, the\nlocal structure with an improper similarity measure results in potentially\ninaccurate anchor assignment, ultimately leading to sub-optimal clustering\nperformance. To tackle the issue, we propose a novel anchor-based multi-view\ngraph clustering framework termed Efficient Multi-View Graph Clustering with\nLocal and Global Structure Preservation (EMVGC-LG). Specifically, a unified\nframework with a theoretical guarantee is designed to capture local and global\ninformation. Besides, EMVGC-LG jointly optimizes anchor construction and graph\nlearning to enhance the clustering quality. In addition, EMVGC-LG inherits the\nlinear complexity of existing AMVGC methods respecting the sample number, which\nis time-economical and scales well with the data size. Extensive experiments\ndemonstrate the effectiveness and efficiency of our proposed method.", "keywords": [], "authors_list": ["Suyuan Liu", "KE LIANG", "Zhibin Dong", "Siwei Wang", "Xihong Yang", "sihang zhou", "En Zhu", "Xinwang Liu"], "category_name": "Machine Learning", "all_categories": ["Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef98"}, "filepath": "data/2308.16876v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992397619197846, "arXiv_link": "https://arxiv.org/abs/2308.16876v2", "other_link": "https://neu-vi.github.io/SportsSlomo/.", "title": "SportsSloMo: A New Benchmark and Baselines for Human-centric Video Frame Interpolation", "abstract": "Human-centric video frame interpolation has great potential for improving\npeople's entertainment experiences and finding commercial applications in the\nsports analysis industry, e.g., synthesizing slow-motion videos. Although there\nare multiple benchmark datasets available in the community, none of them is\ndedicated for human-centric scenarios. To bridge this gap, we introduce\nSportsSloMo, a benchmark consisting of more than 130K video clips and 1M video\nframes of high-resolution ($\\geq$720p) slow-motion sports videos crawled from\nYouTube. We re-train several state-of-the-art methods on our benchmark, and the\nresults show a decrease in their accuracy compared to other datasets. It\nhighlights the difficulty of our benchmark and suggests that it poses\nsignificant challenges even for the best-performing methods, as human bodies\nare highly deformable and occlusions are frequent in sports videos. To improve\nthe accuracy, we introduce two loss terms considering the human-aware priors,\nwhere we add auxiliary supervision to panoptic segmentation and human keypoints\ndetection, respectively. The loss terms are model agnostic and can be easily\nplugged into any video frame interpolation approaches. Experimental results\nvalidate the effectiveness of our proposed loss terms, leading to consistent\nperformance improvement over 5 existing models, which establish strong baseline\nmodels on our benchmark. 
The dataset and code can be found at:\nhttps://neu-vi.github.io/SportsSlomo/.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Jiaben Chen", "Huaizu Jiang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef99"}, "filepath": "data/2404.07474.png", "tags": [], "_media_type": "image", "_rand": 0.9992153518673702, "arXiv_link": "https://arxiv.org/abs/2404.07474", "other_link": "", "title": "G-NeRF: Geometry-enhanced Novel View Synthesis from Single-View Images", "abstract": "Novel view synthesis aims to generate new view images of a given view image\ncollection. Recent attempts address this problem relying on 3D geometry priors\n(e.g., shapes, sizes, and positions) learned from multi-view images. However,\nsuch methods encounter the following limitations: 1) they require a set of\nmulti-view images as training data for a specific scene (e.g., face, car or\nchair), which is often unavailable in many real-world scenarios; 2) they fail\nto extract the geometry priors from single-view images due to the lack of\nmulti-view supervision. In this paper, we propose a Geometry-enhanced NeRF\n(G-NeRF), which seeks to enhance the geometry priors by a geometry-guided\nmulti-view synthesis approach, followed by a depth-aware training. In the\nsynthesis process, inspired that existing 3D GAN models can unconditionally\nsynthesize high-fidelity multi-view images, we seek to adopt off-the-shelf 3D\nGAN models, such as EG3D, as a free source to provide geometry priors through\nsynthesizing multi-view data. Simultaneously, to further improve the geometry\nquality of the synthetic data, we introduce a truncation method to effectively\nsample latent codes within 3D GAN models. To tackle the absence of multi-view\nsupervision for single-view images, we design the depth-aware training\napproach, incorporating a depth-aware discriminator to guide geometry priors\nthrough depth maps. Experiments demonstrate the effectiveness of our method in\nterms of both qualitative and quantitative results.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Zixiong Huang", "Qi Chen", "Libo Sun", "Yifan Yang", "Naizhou Wang", "Qi Wu", "Mingkui Tan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef9a"}, "filepath": "data/2312.05039.png", "tags": [], "_media_type": "image", "_rand": 0.9998224033777762, "arXiv_link": "https://arxiv.org/abs/2312.05039", "other_link": "https://smartmask-gen.github.io", "title": "MaskPLAN: Masked Generative Layout Planning from Partial Input", "abstract": "The field of generative image inpainting and object insertion has made\nsignificant progress with the recent advent of latent diffusion models.\nUtilizing a precise object mask can greatly enhance these applications.\nHowever, due to the challenges users encounter in creating high-fidelity masks,\nthere is a tendency for these methods to rely on more coarse masks (e.g.,\nbounding box) for these applications. This results in limited control and\ncompromised background content preservation. To overcome these limitations, we\nintroduce SmartMask, which allows any novice user to create detailed masks for\nprecise object insertion. 
Combined with a ControlNet-Inpaint model, our\nexperiments demonstrate that SmartMask achieves superior object insertion\nquality, preserving the background content more effectively than previous\nmethods. Notably, unlike prior works the proposed approach can also be used\neven without user-mask guidance, which allows it to perform mask-free object\ninsertion at diverse positions and scales. Furthermore, we find that when used\niteratively with a novel instruction-tuning based planning model, SmartMask can\nbe used to design detailed layouts from scratch. As compared with user-scribble\nbased layout design, we observe that SmartMask allows for better quality\noutputs with layout-to-image generation methods. Project page is available at\nhttps://smartmask-gen.github.io", "keywords": ["Image and video generation and manipulation", "Scene analysis and understanding"], "authors_list": ["Hang Zhang", "Anton Savov", "Benjamin Dillenburger"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Human-Computer Interaction", "Machine Learning", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef9b"}, "filepath": "data/2312.03700.png", "tags": [], "_media_type": "image", "_rand": 0.999973737796672, "arXiv_link": "https://arxiv.org/abs/2312.03700", "other_link": "https://github.com/csuhan/OneLLM", "title": "OneLLM: One Framework to Align All Modalities with Language", "abstract": "Multimodal large language models (MLLMs) have gained significant attention\ndue to their strong multimodal understanding capability. However, existing\nworks rely heavily on modality-specific encoders, which usually differ in\narchitecture and are limited to common modalities. In this paper, we present\nOneLLM, an MLLM that aligns eight modalities to language using a unified\nframework. We achieve this through a unified multimodal encoder and a\nprogressive multimodal alignment pipeline. In detail, we first train an image\nprojection module to connect a vision encoder with LLM. Then, we build a\nuniversal projection module (UPM) by mixing multiple image projection modules\nand dynamic routing. Finally, we progressively align more modalities to LLM\nwith the UPM. To fully leverage the potential of OneLLM in following\ninstructions, we also curated a comprehensive multimodal instruction dataset,\nincluding 2M items from image, audio, video, point cloud, depth/normal map, IMU\nand fMRI brain activity. OneLLM is evaluated on 25 diverse benchmarks,\nencompassing tasks such as multimodal captioning, question answering and\nreasoning, where it delivers excellent performance. 
Code, data, model and\nonline demo are available at https://github.com/csuhan/OneLLM", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Jiaming Han", "Kaixiong Gong", "Yiyuan Zhang", "Jiaqi Wang", "Kaipeng Zhang", "Dahua Lin", "Yu Qiao", "Peng Gao", "Xiangyu Yue"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language", "Machine Learning", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef9c"}, "filepath": "data/2403.07532.png", "tags": [], "_media_type": "image", "_rand": 0.9994018633036463, "arXiv_link": "https://arxiv.org/abs/2403.07532", "other_link": "", "title": "Open-World Semantic Segmentation Including Class Similarity", "abstract": "Interpreting camera data is key for autonomously acting systems, such as\nautonomous vehicles. Vision systems that operate in real-world environments\nmust be able to understand their surroundings and need the ability to deal with\nnovel situations. This paper tackles open-world semantic segmentation, i.e.,\nthe variant of interpreting image data in which objects occur that have not\nbeen seen during training. We propose a novel approach that performs accurate\nclosed-world semantic segmentation and, at the same time, can identify new\ncategories without requiring any additional training data. Our approach\nadditionally provides a similarity measure for every newly discovered class in\nan image to a known category, which can be useful information in downstream\ntasks such as planning or mapping. Through extensive experiments, we show that\nour model achieves state-of-the-art results on classes known from training data\nas well as for anomaly segmentation and can distinguish between different\nunknown classes.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Matteo Sodano", "Federico Magistri", "Lucas Nunes", "Jens Behley", "Cyrill Stachniss"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef9d"}, "filepath": "data/2307.16449.png", "tags": [], "_media_type": "image", "_rand": 0.9993365912773065, "arXiv_link": "https://arxiv.org/abs/2307.16449", "other_link": "", "title": "MovieChat: From Dense Token to Sparse Memory for Long Video Understanding", "abstract": "Recently, integrating video foundation models and large language models to\nbuild a video understanding system can overcome the limitations of specific\npre-defined vision tasks. Yet, existing systems can only handle videos with\nvery few frames. For long videos, the computation complexity, memory cost, and\nlong-term temporal connection impose additional challenges. Taking advantage of\nthe Atkinson-Shiffrin memory model, with tokens in Transformers being employed\nas the carriers of memory in combination with our specially designed memory\nmechanism, we propose the MovieChat to overcome these challenges. 
MovieChat\nachieves state-of-the-art performance in long video understanding, along with\nthe released MovieChat-1K benchmark with 1K long video and 14K manual\nannotations for validation of the effectiveness of our method.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Enxin Song", "Wenhao Chai", "Guanhong Wang", "Haoyang Zhou", "Feiyang Wu", "Yucheng Zhang", "Tian Ye", "Haozhe Chi", "Xun Guo", "Yanting Zhang", "Yan Lu", "Jenq-Neng Hwang", "Gaoang Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef9e"}, "filepath": "data/2311.06783.png", "tags": [], "_media_type": "image", "_rand": 0.999759272466395, "arXiv_link": "https://arxiv.org/abs/2311.06783", "other_link": "https://q-future.github.io/Q-Instruct.", "title": "Q-Instruct: Improving Low-level Visual Abilities for Multi-modality Foundation Models", "abstract": "Multi-modality foundation models, as represented by GPT-4V, have brought a\nnew paradigm for low-level visual perception and understanding tasks, that can\nrespond to a broad range of natural human instructions in a model. While\nexisting foundation models have shown exciting potentials on low-level visual\ntasks, their related abilities are still preliminary and need to be improved.\nIn order to enhance these models, we conduct a large-scale subjective\nexperiment collecting a vast number of real human feedbacks on low-level\nvision. Each feedback follows a pathway that starts with a detailed description\non the low-level visual appearance (*e.g. clarity, color, brightness* of an\nimage, and ends with an overall conclusion, with an average length of 45 words.\nThe constructed **Q-Pathway** dataset includes 58K detailed human feedbacks on\n18,973 images with diverse low-level appearance. Moreover, to enable foundation\nmodels to robustly respond to diverse types of questions, we design a\nGPT-participated conversion to process these feedbacks into diverse-format 200K\ninstruction-response pairs. Experimental results indicate that the\n**Q-Instruct** consistently elevates low-level perception and understanding\nabilities across several foundational models. We anticipate that our datasets\ncan pave the way for a future that general intelligence can perceive,\nunderstand low-level visual appearance and evaluate visual quality like a\nhuman. 
Our dataset, model zoo, and demo is published at:\nhttps://q-future.github.io/Q-Instruct.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Haoning Wu", "Zicheng Zhang", "Erli Zhang", "Chaofeng Chen", "Liang Liao", "Annan Wang", "Kaixin Xu", "Chunyi Li", "Jingwen Hou", "Guangtao Zhai", "Xue Geng", "Wenxiu Sun", "Qiong Yan", "Weisi Lin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727ef9f"}, "filepath": "data/2403.12760.png", "tags": [], "_media_type": "image", "_rand": 0.9998143152461956, "arXiv_link": "https://arxiv.org/abs/2403.12760", "other_link": "", "title": "WaveFace: Authentic Face Restoration with Efficient Frequency Recovery", "abstract": "Although diffusion models are rising as a powerful solution for blind face\nrestoration, they are criticized for two problems: 1) slow training and\ninference speed, and 2) failure in preserving identity and recovering\nfine-grained facial details. In this work, we propose WaveFace to solve the\nproblems in the frequency domain, where low- and high-frequency components\ndecomposed by wavelet transformation are considered individually to maximize\nauthenticity as well as efficiency. The diffusion model is applied to recover\nthe low-frequency component only, which presents general information of the\noriginal image but 1/16 in size. To preserve the original identity, the\ngeneration is conditioned on the low-frequency component of low-quality images\nat each denoising step. Meanwhile, high-frequency components at multiple\ndecomposition levels are handled by a unified network, which recovers complex\nfacial details in a single step. Evaluations on four benchmark datasets show\nthat: 1) WaveFace outperforms state-of-the-art methods in authenticity,\nespecially in terms of identity preservation, and 2) authentic images are\nrestored with the efficiency 10x faster than existing diffusion model-based BFR\nmethods.", "keywords": ["Efficient and scalable vision", "Biometrics and human analysis"], "authors_list": ["Yunqi Miao", "Jiankang Deng", "Jungong Han"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efa0"}, "filepath": "data/2403.04149.png", "tags": [], "_media_type": "image", "_rand": 0.9991485833763941, "arXiv_link": "https://arxiv.org/abs/2403.04149", "other_link": "", "title": "MAP: MAsk-Pruning for Source-Free Model Intellectual Property Protection", "abstract": "Deep learning has achieved remarkable progress in various applications,\nheightening the importance of safeguarding the intellectual property (IP) of\nwell-trained models. It entails not only authorizing usage but also ensuring\nthe deployment of models in authorized data domains, i.e., making models\nexclusive to certain target domains. Previous methods necessitate concurrent\naccess to source training data and target unauthorized data when performing IP\nprotection, making them risky and inefficient for decentralized private data.\nIn this paper, we target a practical setting where only a well-trained source\nmodel is available and investigate how we can realize IP protection. To achieve\nthis, we propose a novel MAsk Pruning (MAP) framework. 
MAP stems from an\nintuitive hypothesis, i.e., there are target-related parameters in a\nwell-trained model, locating and pruning them is the key to IP protection.\nTechnically, MAP freezes the source model and learns a target-specific binary\nmask to prevent unauthorized data usage while minimizing performance\ndegradation on authorized data. Moreover, we introduce a new metric aimed at\nachieving a better balance between source and target performance degradation.\nTo verify the effectiveness and versatility, we have evaluated MAP in a variety\nof scenarios, including vanilla source-available, practical source-free, and\nchallenging data-free. Extensive experiments indicate that MAP yields new\nstate-of-the-art performance.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Boyang Peng", "Sanqing Qu", "Yong Wu", "Tianpei Zou", "Lianghua He", "Alois Knoll", "Guang Chen", "Changjun Jiang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efa1"}, "filepath": "data/2404.02585.png", "tags": [], "_media_type": "image", "_rand": 0.9995491511550958, "arXiv_link": "https://arxiv.org/abs/2404.02585", "other_link": "https://github.com/jiahaolu97/anything-unsegmentable.", "title": "Unsegment Anything by Simulating Deformation", "abstract": "Foundation segmentation models, while powerful, pose a significant risk: they\nenable users to effortlessly extract any objects from any digital content with\na single click, potentially leading to copyright infringement or malicious\nmisuse. To mitigate this risk, we introduce a new task \"Anything Unsegmentable\"\nto grant any image \"the right to be unsegmented\". The ambitious pursuit of the\ntask is to achieve highly transferable adversarial attacks against all\nprompt-based segmentation models, regardless of model parameterizations and\nprompts. We highlight the non-transferable and heterogeneous nature of\nprompt-specific adversarial noises. Our approach focuses on disrupting image\nencoder features to achieve prompt-agnostic attacks. Intriguingly, targeted\nfeature attacks exhibit better transferability compared to untargeted ones,\nsuggesting the optimal update direction aligns with the image manifold. Based\non the observations, we design a novel attack named Unsegment Anything by\nSimulating Deformation (UAD). Our attack optimizes a differentiable deformation\nfunction to create a target deformed image, which alters structural information\nwhile preserving achievable feature distance by adversarial example. Extensive\nexperiments verify the effectiveness of our approach, compromising a variety of\npromptable segmentation models with different architectures and prompt\ninterfaces. 
We release the code at\nhttps://github.com/jiahaolu97/anything-unsegmentable.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Jiahao Lu", "Xingyi Yang", "Xinchao Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efa2"}, "filepath": "data/2312.03799.png", "tags": [], "_media_type": "image", "_rand": 0.9994474614450574, "arXiv_link": "https://arxiv.org/abs/2312.03799", "other_link": "https://tub-rip.github.io/eventpenguins/", "title": "Low-power, Continuous Remote Behavioral Localization with Event Cameras", "abstract": "Researchers in natural science need reliable methods for quantifying animal\nbehavior. Recently, numerous computer vision methods emerged to automate the\nprocess. However, observing wild species at remote locations remains a\nchallenging task due to difficult lighting conditions and constraints on power\nsupply and data storage. Event cameras offer unique advantages for\nbattery-dependent remote monitoring due to their low power consumption and high\ndynamic range capabilities. We use this novel sensor to quantify a behavior in\nChinstrap penguins called ecstatic display. We formulate the problem as a\ntemporal action detection task, determining the start and end times of the\nbehavior. For this purpose, we recorded a colony of breeding penguins in\nAntarctica for several weeks and labeled event data on 16 nests. The developed\nmethod consists of a generator of candidate time intervals (proposals) and a\nclassifier of the actions within them. The experiments show that the event\ncameras' natural response to motion is effective for continuous behavior\nmonitoring and detection, reaching a mean average precision (mAP) of 58% (which\nincreases to 63% in good weather conditions). The results also demonstrate the\nrobustness against various lighting conditions contained in the challenging\ndataset. The low-power capabilities of the event camera allow it to record\nsignificantly longer than with a conventional camera. This work pioneers the\nuse of event cameras for remote wildlife observation, opening new\ninterdisciplinary opportunities. https://tub-rip.github.io/eventpenguins/", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Friedhelm Hamann", "Suman Ghosh", "Ignacio Juarez Martinez", "Tom Hart", "Alex Kacelnik", "Guillermo Gallego"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efa3"}, "filepath": "data/2309.16585.png", "tags": [], "_media_type": "image", "_rand": 0.999506585633356, "arXiv_link": "https://arxiv.org/abs/2309.16585", "other_link": "https://github.com/gsgen3d/gsgen", "title": "Text-to-3D using Gaussian Splatting", "abstract": "Automatic text-to-3D generation that combines Score Distillation Sampling\n(SDS) with the optimization of volume rendering has achieved remarkable\nprogress in synthesizing realistic 3D objects. Yet most existing text-to-3D\nmethods by SDS and volume rendering suffer from inaccurate geometry, e.g., the\nJanus issue, since it is hard to explicitly integrate 3D priors into implicit\n3D representations. Besides, it is usually time-consuming for them to generate\nelaborate 3D models with rich colors. 
In response, this paper proposes GSGEN, a\nnovel method that adopts Gaussian Splatting, a recent state-of-the-art\nrepresentation, to text-to-3D generation. GSGEN aims at generating high-quality\n3D objects and addressing existing shortcomings by exploiting the explicit\nnature of Gaussian Splatting that enables the incorporation of 3D prior.\nSpecifically, our method adopts a progressive optimization strategy, which\nincludes a geometry optimization stage and an appearance refinement stage. In\ngeometry optimization, a coarse representation is established under 3D point\ncloud diffusion prior along with the ordinary 2D SDS optimization, ensuring a\nsensible and 3D-consistent rough shape. Subsequently, the obtained Gaussians\nundergo an iterative appearance refinement to enrich texture details. In this\nstage, we increase the number of Gaussians by compactness-based densification\nto enhance continuity and improve fidelity. With these designs, our approach\ncan generate 3D assets with delicate details and accurate geometry. Extensive\nevaluations demonstrate the effectiveness of our method, especially for\ncapturing high-frequency components. Our code is available at\nhttps://github.com/gsgen3d/gsgen", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Zilong Chen", "Feng Wang", "Yikai Wang", "Huaping Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efa4"}, "filepath": "data/2308.14316v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990907414784725, "arXiv_link": "https://arxiv.org/abs/2308.14316v2", "other_link": "https://github.com/Paranioar/UniPT.", "title": "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory", "abstract": "Parameter-efficient transfer learning (PETL), i.e., fine-tuning a small\nportion of parameters, is an effective strategy for adapting pre-trained models\nto downstream domains. To further reduce the memory demand, recent PETL works\nfocus on the more valuable memory-efficient characteristic. In this paper, we\nargue that the scalability, adaptability, and generalizability of\nstate-of-the-art methods are hindered by structural dependency and pertinency\non specific pre-trained backbones. To this end, we propose a new\nmemory-efficient PETL strategy, Universal Parallel Tuning (UniPT), to mitigate\nthese weaknesses. Specifically, we facilitate the transfer process via a\nlightweight and learnable parallel network, which consists of: 1) A parallel\ninteraction module that decouples the sequential connections and processes the\nintermediate activations detachedly from the pre-trained network. 2) A\nconfidence aggregation module that learns optimal strategies adaptively for\nintegrating cross-layer features. We evaluate UniPT with different backbones\n(e.g., T5, VSE$\\infty$, CLIP4Clip, Clip-ViL, and MDETR) on various\nvision-and-language and pure NLP tasks. Extensive ablations on 18 datasets have\nvalidated that UniPT can not only dramatically reduce memory consumption and\noutperform the best competitor, but also achieve competitive performance over\nother plain PETL methods with lower training memory overhead. 
Our code is\npublicly available at: https://github.com/Paranioar/UniPT.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Haiwen Diao", "Bo Wan", "Ying Zhang", "Xu Jia", "Huchuan Lu", "Long Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efa5"}, "filepath": "data/2309.04437.png", "tags": [], "_media_type": "image", "_rand": 0.9997804144052903, "arXiv_link": "https://arxiv.org/abs/2309.04437", "other_link": "", "title": "Single-View Refractive Index Tomography with Neural Fields", "abstract": "Refractive Index Tomography is the inverse problem of reconstructing the\ncontinuously-varying 3D refractive index in a scene using 2D projected image\nmeasurements. Although a purely refractive field is not directly visible, it\nbends light rays as they travel through space, thus providing a signal for\nreconstruction. The effects of such fields appear in many scientific computer\nvision settings, ranging from refraction due to transparent cells in microscopy\nto the lensing of distant galaxies caused by dark matter in astrophysics.\nReconstructing these fields is particularly difficult due to the complex\nnonlinear effects of the refractive field on observed images. Furthermore,\nwhile standard 3D reconstruction and tomography settings typically have access\nto observations of the scene from many viewpoints, many refractive index\ntomography problem settings only have access to images observed from a single\nviewpoint. We introduce a method that leverages prior knowledge of light\nsources scattered throughout the refractive medium to help disambiguate the\nsingle-view refractive index tomography problem. We differentiably trace curved\nrays through a neural field representation of the refractive field, and\noptimize its parameters to best reproduce the observed image. We demonstrate\nthe efficacy of our approach by reconstructing simulated refractive fields,\nanalyze the effects of light source distribution on the recovered field, and\ntest our method on a simulated dark matter mapping problem where we\nsuccessfully recover the 3D refractive field caused by a realistic dark matter\ndistribution.", "keywords": ["Deep learning architectures and techniques", "Computational imaging and physics-based vision"], "authors_list": ["Brandon Zhao", "Aviad Levis", "Liam Connor", "Pratul P. Srinivasan", "Katherine Bouman"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efa6"}, "filepath": "data/2306.04216.png", "tags": [], "_media_type": "image", "_rand": 0.9993422666391111, "arXiv_link": "https://arxiv.org/abs/2306.04216", "other_link": "https://mmsum-dataset.github.io/}", "title": "MMSum: A Dataset for Multimodal Summarization and Thumbnail Generation of Videos", "abstract": "Multimodal summarization with multimodal output (MSMO) has emerged as a\npromising research direction. Nonetheless, numerous limitations exist within\nexisting public MSMO datasets, including insufficient maintenance, data\ninaccessibility, limited size, and the absence of proper categorization, which\npose significant challenges. 
To address these challenges and provide a\ncomprehensive dataset for this new direction, we have meticulously curated the\n\\textbf{MMSum} dataset. Our new dataset features (1) Human-validated summaries\nfor both video and textual content, providing superior human instruction and\nlabels for multimodal learning. (2) Comprehensively and meticulously arranged\ncategorization, spanning 17 principal categories and 170 subcategories to\nencapsulate a diverse array of real-world scenarios. (3) Benchmark tests\nperformed on the proposed dataset to assess various tasks and methods,\nincluding \\textit{video summarization}, \\textit{text summarization}, and\n\\textit{multimodal summarization}. To champion accessibility and collaboration,\nwe will release the \\textbf{MMSum} dataset and the data collection tool as\nfully open-source resources, fostering transparency and accelerating future\ndevelopments. Our project website can be found\nat~\\url{https://mmsum-dataset.github.io/}", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Jielin Qiu", "Jiacheng Zhu", "William Han", "Aditesh Kumar", "Karthik Mittal", "Claire Jin", "Zhengyuan Yang", "Linjie Li", "Jianfeng Wang", "DING ZHAO", "Bo Li", "Lijuan Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efa7"}, "filepath": "data/2311.17950.png", "tags": [], "_media_type": "image", "_rand": 0.9992694273270398, "arXiv_link": "https://arxiv.org/abs/2311.17950", "other_link": "", "title": "Generalized Large-Scale Data Condensation via Various Backbone and Statistical Matching", "abstract": "The lightweight \"local-match-global\" matching introduced by SRe2L\nsuccessfully creates a distilled dataset with comprehensive information on the\nfull 224x224 ImageNet-1k. However, this one-sided approach is limited to a\nparticular backbone, layer, and statistics, which limits the improvement of the\ngeneralization of a distilled dataset. We suggest that sufficient and various\n\"local-match-global\" matching are more precise and effective than a single one\nand has the ability to create a distilled dataset with richer information and\nbetter generalization. We call this perspective \"generalized matching\" and\npropose Generalized Various Backbone and Statistical Matching (G-VBSM) in this\nwork, which aims to create a synthetic dataset with densities, ensuring\nconsistency with the complete dataset across various backbones, layers, and\nstatistics. As experimentally demonstrated, G-VBSM is the first algorithm to\nobtain strong performance across both small-scale and large-scale datasets.\nSpecifically, G-VBSM achieves a performance of 38.7% on CIFAR-100 with\n128-width ConvNet, 47.6% on Tiny-ImageNet with ResNet18, and 31.4% on the full\n224x224 ImageNet-1k with ResNet18, under images per class (IPC) 10, 50, and 10,\nrespectively. 
These results surpass all SOTA methods by margins of 3.9%, 6.5%,\nand 10.1%, respectively.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Shitong Shao", "Zeyuan Yin", "Muxin Zhou", "Xindong Zhang", "Zhiqiang Shen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efa8"}, "filepath": "data/2404.01050.png", "tags": [], "_media_type": "image", "_rand": 0.9991501920118313, "arXiv_link": "https://arxiv.org/abs/2404.01050", "other_link": "https://github.com/haofengl/DragNoise.", "title": "Drag Your Noise: Interactive Point-based Editing via Diffusion Semantic Propagation", "abstract": "Point-based interactive editing serves as an essential tool to complement the\ncontrollability of existing generative models. A concurrent work,\nDragDiffusion, updates the diffusion latent map in response to user inputs,\ncausing global latent map alterations. This results in imprecise preservation\nof the original content and unsuccessful editing due to gradient vanishing. In\ncontrast, we present DragNoise, offering robust and accelerated editing without\nretracing the latent map. The core rationale of DragNoise lies in utilizing the\npredicted noise output of each U-Net as a semantic editor. This approach is\ngrounded in two critical observations: firstly, the bottleneck features of\nU-Net inherently possess semantically rich features ideal for interactive\nediting; secondly, high-level semantics, established early in the denoising\nprocess, show minimal variation in subsequent stages. Leveraging these\ninsights, DragNoise edits diffusion semantics in a single denoising step and\nefficiently propagates these changes, ensuring stability and efficiency in\ndiffusion editing. Comparative experiments reveal that DragNoise achieves\nsuperior control and semantic retention, reducing the optimization time by over\n50% compared to DragDiffusion. Our codes are available at\nhttps://github.com/haofengl/DragNoise.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Haofeng Liu", "Chenshu Xu", "Yifei Yang", "Lihua Zeng", "Shengfeng He"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Human-Computer Interaction", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efa9"}, "filepath": "data/2404.06663.png", "tags": [], "_media_type": "image", "_rand": 0.9992366697339821, "arXiv_link": "https://arxiv.org/abs/2404.06663", "other_link": "", "title": "CMA: A Chromaticity Map Adapter for Robust Detection of Screen-Recapture Document Images", "abstract": "Document Presentation Attack Detection (DPAD) is an important measure in\nprotecting the authenticity of a document image. However, recent DPAD methods\ndemand additional resources, such as manual effort in collecting additional\ndata or knowing the parameters of acquisition devices. This work proposes a\nDPAD method based on multi-modal disentangled traces (MMDT) without the above\ndrawbacks. We first disentangle the recaptured traces by a self-supervised\ndisentanglement and synthesis network to enhance the generalization capacity in\ndocument images with different contents and layouts. 
Then, unlike the existing\nDPAD approaches that rely only on data in the RGB domain, we propose to\nexplicitly employ the disentangled recaptured traces as new modalities in the\ntransformer backbone through adaptive multi-modal adapters to fuse RGB/trace\nfeatures efficiently. Visualization of the disentangled traces confirms the\neffectiveness of the proposed method in different document contents. Extensive\nexperiments on three benchmark datasets demonstrate the superiority of our MMDT\nmethod on representing forensic traces of recapturing distortion.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Changsheng Chen", "Liangwei Lin", "Yongqi Chen", "Bin Li", "Jishen Zeng", "Jiwu Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efaa"}, "filepath": "data/2402.18929.png", "tags": [], "_media_type": "image", "_rand": 0.9991596156482754, "arXiv_link": "https://arxiv.org/abs/2402.18929", "other_link": "", "title": "Navigating Beyond Dropout: An Intriguing Solution towards Generalizable Image Super-Resolution", "abstract": "Deep learning has led to a dramatic leap on Single Image Super-Resolution\n(SISR) performances in recent years. %Despite the substantial advancement%\nWhile most existing work assumes a simple and fixed degradation model (e.g.,\nbicubic downsampling), the research of Blind SR seeks to improve model\ngeneralization ability with unknown degradation. Recently, Kong et al pioneer\nthe investigation of a more suitable training strategy for Blind SR using\nDropout. Although such method indeed brings substantial generalization\nimprovements via mitigating overfitting, we argue that Dropout simultaneously\nintroduces undesirable side-effect that compromises model's capacity to\nfaithfully reconstruct fine details. We show both the theoretical and\nexperimental analyses in our paper, and furthermore, we present another easy\nyet effective training strategy that enhances the generalization ability of the\nmodel by simply modulating its first and second-order features statistics.\nExperimental results have shown that our method could serve as a model-agnostic\nregularization and outperforms Dropout on seven benchmark datasets including\nboth synthetic and real-world scenarios.", "keywords": ["Low-level vision"], "authors_list": ["Hongjun Wang", "Jiyuan Chen", "Yinqiang Zheng", "Tieyong Zeng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efab"}, "filepath": "data/2404.05063.png", "tags": [], "_media_type": "image", "_rand": 0.9998272955637728, "arXiv_link": "https://arxiv.org/abs/2404.05063", "other_link": "", "title": "AUEditNet: Dual-Branch Facial Action Unit Intensity Manipulation with Implicit Disentanglement", "abstract": "Facial action unit (AU) intensity plays a pivotal role in quantifying\nfine-grained expression behaviors, which is an effective condition for facial\nexpression manipulation. However, publicly available datasets containing\nintensity annotations for multiple AUs remain severely limited, often featuring\na restricted number of subjects. 
This limitation places challenges to the AU\nintensity manipulation in images due to disentanglement issues, leading\nresearchers to resort to other large datasets with pretrained AU intensity\nestimators for pseudo labels. In addressing this constraint and fully\nleveraging manual annotations of AU intensities for precise manipulation, we\nintroduce AUEditNet. Our proposed model achieves impressive intensity\nmanipulation across 12 AUs, trained effectively with only 18 subjects.\nUtilizing a dual-branch architecture, our approach achieves comprehensive\ndisentanglement of facial attributes and identity without necessitating\nadditional loss functions or implementing with large batch sizes. This approach\noffers a potential solution to achieve desired facial attribute editing despite\nthe dataset's limited subject count. Our experiments demonstrate AUEditNet's\nsuperior accuracy in editing AU intensities, affirming its capability in\ndisentangling facial attributes and identity within a limited subject pool.\nAUEditNet allows conditioning by either intensity values or target images,\neliminating the need for constructing AU combinations for specific facial\nexpression synthesis. Moreover, AU intensity estimation, as a downstream task,\nvalidates the consistency between real and edited images, confirming the\neffectiveness of our proposed AU intensity manipulation method.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Shiwei Jin", "Zhen Wang", "Lei Wang", "Peng Liu", "Ning Bi", "Truong Nguyen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efac"}, "filepath": "data/2402.17563v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996976460479382, "arXiv_link": "https://arxiv.org/abs/2402.17563v1", "other_link": "", "title": "Structure-Guided Adversarial Training of Diffusion Models", "abstract": "Diffusion models have demonstrated exceptional efficacy in various generative\napplications. While existing models focus on minimizing a weighted sum of\ndenoising score matching losses for data distribution modeling, their training\nprimarily emphasizes instance-level optimization, overlooking valuable\nstructural information within each mini-batch, indicative of pair-wise\nrelationships among samples. To address this limitation, we introduce\nStructure-guided Adversarial training of Diffusion Models (SADM). In this\npioneering approach, we compel the model to learn manifold structures between\nsamples in each training batch. To ensure the model captures authentic manifold\nstructures in the data distribution, we advocate adversarial training of the\ndiffusion generator against a novel structure discriminator in a minimax game,\ndistinguishing real manifold structures from the generated ones. 
SADM\nsubstantially improves existing diffusion transformers (DiT) and outperforms\nexisting methods in image generation and cross-domain fine-tuning tasks across\n12 datasets, establishing a new state-of-the-art FID of 1.58 and 2.11 on\nImageNet for class-conditional image generation at resolutions of 256x256 and\n512x512, respectively.", "keywords": [], "authors_list": ["Ling Yang", "Haotian Qian", "Zhilong Zhang", "Jingwei Liu", "Bin CUI"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efad"}, "filepath": "data/2312.05889.png", "tags": [], "_media_type": "image", "_rand": 0.999083690136819, "arXiv_link": "https://arxiv.org/abs/2312.05889", "other_link": "", "title": "SuperPrimitive: Scene Reconstruction at a Primitive Level", "abstract": "Joint camera pose and dense geometry estimation from a set of images or a\nmonocular video remains a challenging problem due to its computational\ncomplexity and inherent visual ambiguities. Most dense incremental\nreconstruction systems operate directly on image pixels and solve for their 3D\npositions using multi-view geometry cues. Such pixel-level approaches suffer\nfrom ambiguities or violations of multi-view consistency (e.g. caused by\ntextureless or specular surfaces).\n We address this issue with a new image representation which we call a\nSuperPrimitive. SuperPrimitives are obtained by splitting images into\nsemantically correlated local regions and enhancing them with estimated surface\nnormal directions, both of which are predicted by state-of-the-art single image\nneural networks. This provides a local geometry estimate per SuperPrimitive,\nwhile their relative positions are adjusted based on multi-view observations.\n We demonstrate the versatility of our new representation by addressing three\n3D reconstruction tasks: depth completion, few-view structure from motion, and\nmonocular dense visual odometry.", "keywords": ["Scene analysis and understanding", "Deep learning architectures and techniques"], "authors_list": ["Kirill Mazur", "Gwangbin Bae", "Andrew J. Davison"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efae"}, "filepath": "data/2404.17753.png", "tags": [], "_media_type": "image", "_rand": 0.9992187890808748, "arXiv_link": "https://arxiv.org/abs/2404.17753", "other_link": "https://github.com/YCaigogogo/CVPR24-CODER.", "title": "Leveraging Cross-Modal Neighbor Representation for Improved CLIP Classification", "abstract": "CLIP showcases exceptional cross-modal matching capabilities due to its\ntraining on image-text contrastive learning tasks. However, without specific\noptimization for unimodal scenarios, its performance in single-modality feature\nextraction might be suboptimal. Despite this, some studies have directly used\nCLIP's image encoder for tasks like few-shot classification, introducing a\nmisalignment between its pre-training objectives and feature extraction\nmethods. This inconsistency can diminish the quality of the image's feature\nrepresentation, adversely affecting CLIP's effectiveness in target tasks. 
In\nthis paper, we view text features as precise neighbors of image features in\nCLIP's space and present a novel CrOss-moDal nEighbor Representation(CODER)\nbased on the distance structure between images and their neighbor texts. This\nfeature extraction method aligns better with CLIP's pre-training objectives,\nthereby fully leveraging CLIP's robust cross-modal capabilities. The key to\nconstruct a high-quality CODER lies in how to create a vast amount of\nhigh-quality and diverse texts to match with images. We introduce the Auto Text\nGenerator(ATG) to automatically generate the required texts in a data-free and\ntraining-free manner. We apply CODER to CLIP's zero-shot and few-shot image\nclassification tasks. Experiment results across various datasets and models\nconfirm CODER's effectiveness. Code is available\nat:https://github.com/YCaigogogo/CVPR24-CODER.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Chao Yi", "Lu Ren", "De-Chuan Zhan", "Han-Jia Ye"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efaf"}, "filepath": "data/2403.10815.png", "tags": [], "_media_type": "image", "_rand": 0.9996853106954672, "arXiv_link": "https://arxiv.org/abs/2403.10815", "other_link": "https://github.com/UCSC-VLAA/MicroDiffusion.", "title": "MicroDiffusion: Implicit Representation-Guided Diffusion for 3D Reconstruction from Limited 2D Microscopy Projections", "abstract": "Volumetric optical microscopy using non-diffracting beams enables rapid\nimaging of 3D volumes by projecting them axially to 2D images but lacks crucial\ndepth information. Addressing this, we introduce MicroDiffusion, a pioneering\ntool facilitating high-quality, depth-resolved 3D volume reconstruction from\nlimited 2D projections. While existing Implicit Neural Representation (INR)\nmodels often yield incomplete outputs and Denoising Diffusion Probabilistic\nModels (DDPM) excel at capturing details, our method integrates INR's\nstructural coherence with DDPM's fine-detail enhancement capabilities. We\npretrain an INR model to transform 2D axially-projected images into a\npreliminary 3D volume. This pretrained INR acts as a global prior guiding\nDDPM's generative process through a linear interpolation between INR outputs\nand noise inputs. This strategy enriches the diffusion process with structured\n3D information, enhancing detail and reducing noise in localized 2D images. By\nconditioning the diffusion model on the closest 2D projection, MicroDiffusion\nsubstantially enhances fidelity in resulting 3D reconstructions, surpassing INR\nand standard DDPM outputs with unparalleled image quality and structural\nfidelity. 
Our code and dataset are available at\nhttps://github.com/UCSC-VLAA/MicroDiffusion.", "keywords": ["Deep learning architectures and techniques", "Medical imaging and biological vision"], "authors_list": ["mude hui", "Zihao Wei", "Hongru Zhu", "Fei Xia", "Yuyin Zhou"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efb0"}, "filepath": "data/2311.18729.png", "tags": [], "_media_type": "image", "_rand": 0.9999566796680758, "arXiv_link": "https://arxiv.org/abs/2311.18729", "other_link": "", "title": "Portrait4D: Learning One-Shot 4D Head Avatar Synthesis using Synthetic Data", "abstract": "Existing one-shot 4D head synthesis methods usually learn from monocular\nvideos with the aid of 3DMM reconstruction, yet the latter is evenly\nchallenging which restricts them from reasonable 4D head synthesis. We present\na method to learn one-shot 4D head synthesis via large-scale synthetic data.\nThe key is to first learn a part-wise 4D generative model from monocular images\nvia adversarial learning, to synthesize multi-view images of diverse identities\nand full motions as training data; then leverage a transformer-based animatable\ntriplane reconstructor to learn 4D head reconstruction using the synthetic\ndata. A novel learning strategy is enforced to enhance the generalizability to\nreal images by disentangling the learning process of 3D reconstruction and\nreenactment. Experiments demonstrate our superiority over the prior art.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Yu Deng", "Duomin Wang", "Xiaohang Ren", "Xingyu Chen", "Baoyuan Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efb1"}, "filepath": "data/2312.04043.png", "tags": [], "_media_type": "image", "_rand": 0.9999732413643061, "arXiv_link": "https://arxiv.org/abs/2312.04043", "other_link": "", "title": "Doodle Your 3D: From Abstract Freehand Sketches to Precise 3D Shapes", "abstract": "In this paper, we democratise 3D content creation, enabling precise\ngeneration of 3D shapes from abstract sketches while overcoming limitations\ntied to drawing skills. We introduce a novel part-level modelling and alignment\nframework that facilitates abstraction modelling and cross-modal\ncorrespondence. Leveraging the same part-level decoder, our approach seamlessly\nextends to sketch modelling by establishing correspondence between CLIPasso\nedgemaps and projected 3D part regions, eliminating the need for a dataset\npairing human sketches and 3D shapes. Additionally, our method introduces a\nseamless in-position editing process as a byproduct of cross-modal part-aligned\nmodelling. 
Operating in a low-dimensional implicit space, our approach\nsignificantly reduces computational demands and processing time.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Hmrishav Bandyopadhyay", "Subhadeep Koley", "Ayan Das", "Ayan Kumar Bhunia", "Aneeshan Sain", "Pinaki Nath Chowdhury", "Tao Xiang", "Yi-Zhe Song"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efb2"}, "filepath": "data/2311.17663.png", "tags": [], "_media_type": "image", "_rand": 0.999682726717415, "arXiv_link": "https://arxiv.org/abs/2311.17663", "other_link": "https://github.com/haomo-ai/Cam4DOcc.", "title": "Cam4DOcc: Benchmark for Camera-Only 4D Occupancy Forecasting in Autonomous Driving Applications", "abstract": "Understanding how the surrounding environment changes is crucial for\nperforming downstream tasks safely and reliably in autonomous driving\napplications. Recent occupancy estimation techniques using only camera images\nas input can provide dense occupancy representations of large-scale scenes\nbased on the current observation. However, they are mostly limited to\nrepresenting the current 3D space and do not consider the future state of\nsurrounding objects along the time axis. To extend camera-only occupancy\nestimation into spatiotemporal prediction, we propose Cam4DOcc, a new benchmark\nfor camera-only 4D occupancy forecasting, evaluating the surrounding scene\nchanges in a near future. We build our benchmark based on multiple publicly\navailable datasets, including nuScenes, nuScenes-Occupancy, and Lyft-Level5,\nwhich provides sequential occupancy states of general movable and static\nobjects, as well as their 3D backward centripetal flow. To establish this\nbenchmark for future research with comprehensive comparisons, we introduce four\nbaseline types from diverse camera-based perception and prediction\nimplementations, including a static-world occupancy model, voxelization of\npoint cloud prediction, 2D-3D instance-based prediction, and our proposed novel\nend-to-end 4D occupancy forecasting network. Furthermore, the standardized\nevaluation protocol for preset multiple tasks is also provided to compare the\nperformance of all the proposed baselines on present and future occupancy\nestimation with respect to objects of interest in autonomous driving scenarios.\nThe dataset and our implementation of all four baselines in the proposed\nCam4DOcc benchmark will be released here: https://github.com/haomo-ai/Cam4DOcc.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Junyi Ma", "Xieyuanli Chen", "Jiawei Huang", "Jingyi Xu", "Zhen Luo", "Jintao Xu", "Weihao Gu", "Rui Ai", "Hesheng Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efb3"}, "filepath": "data/2403.06951.png", "tags": [], "_media_type": "image", "_rand": 0.9996795006510054, "arXiv_link": "https://arxiv.org/abs/2403.06951", "other_link": "https://tianhao-qi.github.io/DEADiff/.", "title": "DEADiff: An Efficient Stylization Diffusion Model with Disentangled Representations", "abstract": "The diffusion-based text-to-image model harbors immense potential in\ntransferring reference style. 
However, current encoder-based approaches\nsignificantly impair the text controllability of text-to-image models while\ntransferring styles. In this paper, we introduce DEADiff to address this issue\nusing the following two strategies: 1) a mechanism to decouple the style and\nsemantics of reference images. The decoupled feature representations are first\nextracted by Q-Formers which are instructed by different text descriptions.\nThen they are injected into mutually exclusive subsets of cross-attention\nlayers for better disentanglement. 2) A non-reconstructive learning method. The\nQ-Formers are trained using paired images rather than the identical target, in\nwhich the reference image and the ground-truth image are with the same style or\nsemantics. We show that DEADiff attains the best visual stylization results and\noptimal balance between the text controllability inherent in the text-to-image\nmodel and style similarity to the reference image, as demonstrated both\nquantitatively and qualitatively. Our project page is\nhttps://tianhao-qi.github.io/DEADiff/.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Tianhao Qi", "Shancheng Fang", "Yanze Wu", "Hongtao Xie", "Jiawei Liu", "Lang chen", "Qian HE", "Yongdong Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efb4"}, "filepath": "data/2403.09480.png", "tags": [], "_media_type": "image", "_rand": 0.9998508883188233, "arXiv_link": "https://arxiv.org/abs/2403.09480", "other_link": "", "title": "What Sketch Explainability Really Means for Downstream Tasks ?", "abstract": "In this paper, we explore the unique modality of sketch for explainability,\nemphasising the profound impact of human strokes compared to conventional\npixel-oriented studies. Beyond explanations of network behavior, we discern the\ngenuine implications of explainability across diverse downstream sketch-related\ntasks. We propose a lightweight and portable explainability solution -- a\nseamless plugin that integrates effortlessly with any pre-trained model,\neliminating the need for re-training. Demonstrating its adaptability, we\npresent four applications: highly studied retrieval and generation, and\ncompletely novel assisted drawing and sketch adversarial attacks. The\ncentrepiece to our solution is a stroke-level attribution map that takes\ndifferent forms when linked with downstream tasks. 
By addressing the inherent\nnon-differentiability of rasterisation, we enable explanations at both coarse\nstroke level (SLA) and partial stroke level (P-SLA), each with its advantages\nfor specific downstream tasks.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Hmrishav Bandyopadhyay", "Pinaki Nath Chowdhury", "Ayan Kumar Bhunia", "Aneeshan Sain", "Tao Xiang", "Yi-Zhe Song"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efb5"}, "filepath": "data/2402.18969.png", "tags": [], "_media_type": "image", "_rand": 0.9994338723626278, "arXiv_link": "https://arxiv.org/abs/2402.18969", "other_link": "", "title": "OHTA: One-shot Hand Avatar via Data-driven Implicit Priors", "abstract": "In this paper, we delve into the creation of one-shot hand avatars, attaining\nhigh-fidelity and drivable hand representations swiftly from a single image.\nWith the burgeoning domains of the digital human, the need for quick and\npersonalized hand avatar creation has become increasingly critical. Existing\ntechniques typically require extensive input data and may prove cumbersome or\neven impractical in certain scenarios. To enhance accessibility, we present a\nnovel method OHTA (One-shot Hand avaTAr) that enables the creation of detailed\nhand avatars from merely one image. OHTA tackles the inherent difficulties of\nthis data-limited problem by learning and utilizing data-driven hand priors.\nSpecifically, we design a hand prior model initially employed for 1) learning\nvarious hand priors with available data and subsequently for 2) the inversion\nand fitting of the target identity with prior knowledge. OHTA demonstrates the\ncapability to create high-fidelity hand avatars with consistent animatable\nquality, solely relying on a single image. Furthermore, we illustrate the\nversatility of OHTA through diverse applications, encompassing text-to-avatar\nconversion, hand editing, and identity latent space manipulation.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis", "Image and video generation and manipulation"], "authors_list": ["Xiaozheng Zheng", "Chao Wen", "Zhuo Su", "Zeran Xu", "Zhaohu Li", "Yang Zhao", "Zhou Xue"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efb6"}, "filepath": "data/2402.18372.png", "tags": [], "_media_type": "image", "_rand": 0.9992492758246168, "arXiv_link": "https://arxiv.org/abs/2402.18372", "other_link": "", "title": "FedUV: Uniformity and Variance for Heterogeneous Federated Learning", "abstract": "Federated learning is a promising framework to train neural networks with\nwidely distributed data. However, performance degrades heavily with\nheterogeneously distributed data. Recent work has shown this is due to the\nfinal layer of the network being most prone to local bias, some finding success\nfreezing the final layer as an orthogonal classifier. We investigate the\ntraining dynamics of the classifier by applying SVD to the weights motivated by\nthe observation that freezing weights results in constant singular values. 
We\nfind that there are differences when training in IID and non-IID settings.\nBased on this finding, we introduce two regularization terms for local training\nto continuously emulate IID settings: (1) variance in the dimension-wise\nprobability distribution of the classifier and (2) hyperspherical uniformity of\nrepresentations of the encoder. These regularizations promote local models to\nact as if it were in an IID setting regardless of the local data distribution,\nthus offsetting proneness to bias while being flexible to the data. On\nextensive experiments in both label-shift and feature-shift settings, we verify\nthat our method achieves highest performance by a large margin especially in\nhighly non-IID cases in addition to being scalable to larger models and\ndatasets.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Ha Min Son", "Moon-Hyun Kim", "Tai-Myoung Chung", "Chao Huang", "Xin Liu"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Distributed, Parallel, and Cluster Computing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efb7"}, "filepath": "data/2310.08471.png", "tags": [], "_media_type": "image", "_rand": 0.9995411067774417, "arXiv_link": "https://arxiv.org/abs/2310.08471", "other_link": "", "title": "WinSyn: A High Resolution Testbed for Synthetic Data", "abstract": "We present WinSyn, a unique dataset and testbed for creating high-quality\nsynthetic data with procedural modeling techniques. The dataset contains\nhigh-resolution photographs of windows, selected from locations around the\nworld, with 89,318 individual window crops showcasing diverse geometric and\nmaterial characteristics. We evaluate a procedural model by training semantic\nsegmentation networks on both synthetic and real images and then comparing\ntheir performances on a shared test set of real images. Specifically, we\nmeasure the difference in mean Intersection over Union (mIoU) and determine the\neffective number of real images to match synthetic data's training performance.\nWe design a baseline procedural model as a benchmark and provide 21,290\nsynthetically generated images. By tuning the procedural model, key factors are\nidentified which significantly influence the model's fidelity in replicating\nreal-world scenarios. Importantly, we highlight the challenge of procedural\nmodeling using current techniques, especially in their ability to replicate the\nspatial semantics of real-world scenarios. This insight is critical because of\nthe potential of procedural models to bridge to hidden scene aspects such as\ndepth, reflectivity, material properties, and lighting conditions.", "keywords": [], "authors_list": ["Tom Kelly", "John Femiani", "Peter Wonka"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efb8"}, "filepath": "data/2403.00712.png", "tags": [], "_media_type": "image", "_rand": 0.9993208438377793, "arXiv_link": "https://arxiv.org/abs/2403.00712", "other_link": "https://github.com/baegwangbin/DSINE.", "title": "Rethinking Inductive Biases for Surface Normal Estimation", "abstract": "Despite the growing demand for accurate surface normal estimation models,\nexisting methods use general-purpose dense prediction models, adopting the same\ninductive biases as other tasks. 
In this paper, we discuss the inductive biases\nneeded for surface normal estimation and propose to (1) utilize the per-pixel\nray direction and (2) encode the relationship between neighboring surface\nnormals by learning their relative rotation. The proposed method can generate\ncrisp - yet, piecewise smooth - predictions for challenging in-the-wild images\nof arbitrary resolution and aspect ratio. Compared to a recent ViT-based\nstate-of-the-art model, our method shows a stronger generalization ability,\ndespite being trained on an orders of magnitude smaller dataset. The code is\navailable at https://github.com/baegwangbin/DSINE.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Gwangbin Bae", "Andrew J. Davison"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efb9"}, "filepath": "data/2312.13319.png", "tags": [], "_media_type": "image", "_rand": 0.9990928135655704, "arXiv_link": "https://arxiv.org/abs/2312.13319", "other_link": "", "title": "In2SET: Intra-Inter Similarity Exploiting Transformer for Dual-Camera Compressive Hyperspectral Imaging", "abstract": "Dual-Camera Compressed Hyperspectral Imaging (DCCHI) offers the capability to\nreconstruct 3D Hyperspectral Image (HSI) by fusing compressive and Panchromatic\n(PAN) image, which has shown great potential for snapshot hyperspectral imaging\nin practice. In this paper, we introduce a novel DCCHI reconstruction network,\nthe Intra-Inter Similarity Exploiting Transformer (In2SET). Our key insight is\nto make full use of the PAN image to assist the reconstruction. To this end, we\npropose using the intra-similarity within the PAN image as a proxy for\napproximating the intra-similarity in the original HSI, thereby offering an\nenhanced content prior for more accurate HSI reconstruction. Furthermore, we\naim to align the features from the underlying HSI with those of the PAN image,\nmaintaining semantic consistency and introducing new contextual information for\nthe reconstruction process. By integrating In2SET into a PAN-guided unrolling\nframework, our method substantially enhances the spatial-spectral fidelity and\ndetail of the reconstructed images, providing a more comprehensive and accurate\ndepiction of the scene. Extensive experiments conducted on both real and\nsimulated datasets demonstrate that our approach consistently outperforms\nexisting state-of-the-art methods in terms of reconstruction quality and\ncomputational complexity. Code will be released.", "keywords": ["Computational imaging and physics-based vision", "Remote sensing and photogrammetry"], "authors_list": ["Xin Wang", "Lizhi Wang", "Xiangtian Ma", "Maoqing Zhang", "Lin Zhu", "Hua Huang"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efba"}, "filepath": "data/2312.02974.png", "tags": [], "_media_type": "image", "_rand": 0.9990459299375812, "arXiv_link": "https://arxiv.org/abs/2312.02974", "other_link": "", "title": "Describing Differences in Image Sets with Natural Language", "abstract": "How do two sets of images differ? 
Discerning set-level differences is crucial\nfor understanding model behaviors and analyzing datasets, yet manually sifting\nthrough thousands of images is impractical. To aid in this discovery process,\nwe explore the task of automatically describing the differences between two\n$\\textbf{sets}$ of images, which we term Set Difference Captioning. This task\ntakes in image sets $D_A$ and $D_B$, and outputs a description that is more\noften true on $D_A$ than $D_B$. We outline a two-stage approach that first\nproposes candidate difference descriptions from image sets and then re-ranks\nthe candidates by checking how well they can differentiate the two sets. We\nintroduce VisDiff, which first captions the images and prompts a language model\nto propose candidate descriptions, then re-ranks these descriptions using CLIP.\nTo evaluate VisDiff, we collect VisDiffBench, a dataset with 187 paired image\nsets with ground truth difference descriptions. We apply VisDiff to various\ndomains, such as comparing datasets (e.g., ImageNet vs. ImageNetV2), comparing\nclassification models (e.g., zero-shot CLIP vs. supervised ResNet), summarizing\nmodel failure modes (supervised ResNet), characterizing differences between\ngenerative models (e.g., StableDiffusionV1 and V2), and discovering what makes\nimages memorable. Using VisDiff, we are able to find interesting and previously\nunknown differences in datasets and models, demonstrating its utility in\nrevealing nuanced insights.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Lisa Dunlap", "Yuhui Zhang", "Xiaohan Wang", "Ruiqi Zhong", "Trevor Darrell", "Jacob Steinhardt", "Joseph Gonzalez", "Serena Yeung"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language", "Computers and Society", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efbb"}, "filepath": "data/2403.09344.png", "tags": [], "_media_type": "image", "_rand": 0.999886249465454, "arXiv_link": "https://arxiv.org/abs/2403.09344", "other_link": "", "title": "SketchINR: A First Look into Sketches as Implicit Neural Representations", "abstract": "We propose SketchINR, to advance the representation of vector sketches with\nimplicit neural models. A variable length vector sketch is compressed into a\nlatent space of fixed dimension that implicitly encodes the underlying shape as\na function of time and strokes. The learned function predicts the $xy$ point\ncoordinates in a sketch at each time and stroke. Despite its simplicity,\nSketchINR outperforms existing representations at multiple tasks: (i) Encoding\nan entire sketch dataset into a fixed size latent vector, SketchINR gives\n$60\\times$ and $10\\times$ data compression over raster and vector sketches,\nrespectively. (ii) SketchINR's auto-decoder provides a much higher-fidelity\nrepresentation than other learned vector sketch representations, and is\nuniquely able to scale to complex vector sketches such as FS-COCO. (iii)\nSketchINR supports parallelisation that can decode/render $\\sim$$100\\times$\nfaster than other learned vector representations such as SketchRNN. (iv)\nSketchINR, for the first time, emulates the human ability to reproduce a sketch\nwith varying abstraction in terms of number and complexity of strokes. 
As a\nfirst look at implicit sketches, SketchINR's compact high-fidelity\nrepresentation will support future work in modelling long and complex sketches.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Hmrishav Bandyopadhyay", "Ayan Kumar Bhunia", "Pinaki Nath Chowdhury", "Aneeshan Sain", "Tao Xiang", "Timothy Hospedales", "Yi-Zhe Song"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efbc"}, "filepath": "data/2404.16493.png", "tags": [], "_media_type": "image", "_rand": 0.9992243002518603, "arXiv_link": "https://arxiv.org/abs/2404.16493", "other_link": "https://github.com/hailanyi/CPD.", "title": "Commonsense Prototype for Outdoor Unsupervised 3D Object Detection", "abstract": "The prevalent approaches of unsupervised 3D object detection follow\ncluster-based pseudo-label generation and iterative self-training processes.\nHowever, the challenge arises due to the sparsity of LiDAR scans, which leads\nto pseudo-labels with erroneous size and position, resulting in subpar\ndetection performance. To tackle this problem, this paper introduces a\nCommonsense Prototype-based Detector, termed CPD, for unsupervised 3D object\ndetection. CPD first constructs Commonsense Prototype (CProto) characterized by\nhigh-quality bounding box and dense points, based on commonsense intuition.\nSubsequently, CPD refines the low-quality pseudo-labels by leveraging the size\nprior from CProto. Furthermore, CPD enhances the detection accuracy of sparsely\nscanned objects by the geometric knowledge from CProto. CPD outperforms\nstate-of-the-art unsupervised 3D detectors on Waymo Open Dataset (WOD),\nPandaSet, and KITTI datasets by a large margin. Besides, by training CPD on WOD\nand testing on KITTI, CPD attains 90.85% and 81.01% 3D Average Precision on\neasy and moderate car classes, respectively. These achievements position CPD in\nclose proximity to fully supervised detectors, highlighting the significance of\nour method. The code will be available at https://github.com/hailanyi/CPD.", "keywords": [], "authors_list": ["Hai Wu", "Shijia Zhao", "Xun Huang", "Chenglu Wen", "Xin Li", "Cheng Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efbd"}, "filepath": "data/2404.00992v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993202834498611, "arXiv_link": "https://arxiv.org/html/2404.00992v1", "other_link": "", "title": "Global and Hierarchical Geometry Consistency Priors for Few-shot NeRFs in Indoor Scenes", "abstract": "Neural Radiance Field (NeRF) technology has made significant strides in\ncreating novel viewpoints. However, its effectiveness is hampered when working\nwith sparsely available views, often leading to performance dips due to\noverfitting. FreeNeRF attempts to overcome this limitation by integrating\nimplicit geometry regularization, which incrementally improves both geometry\nand textures. Nonetheless, an initial low positional encoding bandwidth results\nin the exclusion of high-frequency elements. The quest for a holistic approach\nthat simultaneously addresses overfitting and the preservation of\nhigh-frequency details remains ongoing. 
This study introduces a novel feature\nmatching based sparse geometry regularization module. This module excels in\npinpointing high-frequency keypoints, thereby safeguarding the integrity of\nfine details. Through progressive refinement of geometry and textures across\nNeRF iterations, we unveil an effective few-shot neural rendering architecture,\ndesignated as SGCNeRF, for enhanced novel view synthesis. Our experiments\ndemonstrate that SGCNeRF not only achieves superior geometry-consistent\noutcomes but also surpasses FreeNeRF, with improvements of 0.7 dB and 0.6 dB in\nPSNR on the LLFF and DTU datasets, respectively.", "keywords": ["Deep learning architectures and techniques", "Low-level vision"], "authors_list": ["Xiaotian Sun", "Qingshan Xu", "Xinjie Yang", "Yu Zang", "Cheng Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efbe"}, "filepath": "data/2311.16516.png", "tags": [], "_media_type": "image", "_rand": 0.9993602813802338, "arXiv_link": "https://arxiv.org/abs/2311.16516", "other_link": "", "title": "Segment Every Out-of-Distribution Object", "abstract": "Semantic segmentation models, while effective for in-distribution categories,\nface challenges in real-world deployment due to encountering\nout-of-distribution (OoD) objects. Detecting these OoD objects is crucial for\nsafety-critical applications. Existing methods rely on anomaly scores, but\nchoosing a suitable threshold for generating masks presents difficulties and\ncan lead to fragmentation and inaccuracy. This paper introduces a method to\nconvert anomaly \\textbf{S}core \\textbf{T}o segmentation \\textbf{M}ask, called\nS2M, a simple and effective framework for OoD detection in semantic\nsegmentation. Unlike assigning anomaly scores to pixels, S2M directly segments\nthe entire OoD object. By transforming anomaly scores into prompts for a\npromptable segmentation model, S2M eliminates the need for threshold selection.\nExtensive experiments demonstrate that S2M outperforms the state-of-the-art by\napproximately 20% in IoU and 40% in mean F1 score, on average, across various\nbenchmarks including Fishyscapes, Segment-Me-If-You-Can, and RoadAnomaly\ndatasets.", "keywords": [], "authors_list": ["Wenjie Zhao", "Jia Li", "Xin Dong", "Yu Xiang", "Yunhui Guo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efbf"}, "filepath": "data/2404.05206.png", "tags": [], "_media_type": "image", "_rand": 0.9997472913380226, "arXiv_link": "https://arxiv.org/abs/2404.05206", "other_link": "", "title": "Learning to Segment Referred Objects from Narrated Egocentric Videos", "abstract": "We propose a novel self-supervised embedding to learn how actions sound from\nnarrated in-the-wild egocentric videos. Whereas existing methods rely on\ncurated data with known audio-visual correspondence, our multimodal\ncontrastive-consensus coding (MC3) embedding reinforces the associations\nbetween audio, language, and vision when all modality pairs agree, while\ndiminishing those associations when any one pair does not. 
We show our approach\ncan successfully discover how the long tail of human actions sound from\negocentric video, outperforming an array of recent multimodal embedding\ntechniques on two datasets (Ego4D and EPIC-Sounds) and multiple cross-modal\ntasks.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Yuhan Shen", "Huiyu Wang", "Xitong Yang", "Matt Feiszli", "Ehsan Elhamifar", "Lorenzo Torresani", "Effrosyni Mavroudi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Multimedia", "Sound", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efc0"}, "filepath": "data/2401.04716.png", "tags": [], "_media_type": "image", "_rand": 0.9997106028167829, "arXiv_link": "https://arxiv.org/abs/2401.04716", "other_link": "https://xiaobai1217.github.io/Low-Resource-Vision/.", "title": "Low-Resource Vision Challenges for Foundation Models", "abstract": "Low-resource settings are well-established in natural language processing,\nwhere many languages lack sufficient data for deep learning at scale. However,\nlow-resource problems are under-explored in computer vision. In this paper, we\naddress this gap and explore the challenges of low-resource image tasks with\nvision foundation models. We first collect a benchmark of genuinely\nlow-resource image data, covering historic maps, circuit diagrams, and\nmechanical drawings. These low-resource settings all share three challenges:\ndata scarcity, fine-grained differences, and the distribution shift from\nnatural images to the specialized domain of interest. While existing foundation\nmodels have shown impressive generalizability, we find they cannot transfer\nwell to our low-resource tasks. To begin to tackle the challenges of\nlow-resource vision, we introduce one simple baseline per challenge.\nSpecifically, we i) enlarge the data space by generative models, ii) adopt the\nbest sub-kernels to encode local regions for fine-grained difference discovery\nand iii) learn attention for specialized domains. Experiments on our three\nlow-resource tasks demonstrate our proposals already provide a better baseline\nthan transfer learning, data augmentation, and fine-grained methods. This\nhighlights the unique characteristics and challenges of low-resource vision for\nfoundation models that warrant further investigation. Project page:\nhttps://xiaobai1217.github.io/Low-Resource-Vision/.", "keywords": ["Document analysis and understanding"], "authors_list": ["Yunhua Zhang", "Hazel Doughty", "Cees G. M. Snoek"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efc1"}, "filepath": "data/2311.17261.png", "tags": [], "_media_type": "image", "_rand": 0.9997763086090078, "arXiv_link": "https://arxiv.org/abs/2311.17261", "other_link": "", "title": "SceneTex: High-Quality Texture Synthesis for Indoor Scenes via Diffusion Priors", "abstract": "We propose SceneTex, a novel method for effectively generating high-quality\nand style-consistent textures for indoor scenes using depth-to-image diffusion\npriors. 
Unlike previous methods that either iteratively warp 2D views onto a\nmesh surface or distillate diffusion latent features without accurate geometric\nand style cues, SceneTex formulates the texture synthesis task as an\noptimization problem in the RGB space where style and geometry consistency are\nproperly reflected. At its core, SceneTex proposes a multiresolution texture\nfield to implicitly encode the mesh appearance. We optimize the target texture\nvia a score-distillation-based objective function in respective RGB renderings.\nTo further secure the style consistency across views, we introduce a\ncross-attention decoder to predict the RGB values by cross-attending to the\npre-sampled reference locations in each instance. SceneTex enables various and\naccurate texture synthesis for 3D-FRONT scenes, demonstrating significant\nimprovements in visual quality and prompt fidelity over the prior texture\ngeneration methods.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Dave Zhenyu Chen", "Haoxuan Li", "Hsin-Ying Lee", "Sergey Tulyakov", "Matthias Nie\u00dfner"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efc2"}, "filepath": "data/2311.18231.png", "tags": [], "_media_type": "image", "_rand": 0.9997410858400438, "arXiv_link": "https://arxiv.org/abs/2311.18231", "other_link": "https://github.com/htyao89/Textual-based_Class-aware_prompt_tuning/", "title": "TCP: Textual-based Class-aware Prompt tuning for Visual-Language Model", "abstract": "Prompt tuning represents a valuable technique for adapting pre-trained\nvisual-language models (VLM) to various downstream tasks. Recent advancements\nin CoOp-based methods propose a set of learnable domain-shared or\nimage-conditional textual tokens to facilitate the generation of task-specific\ntextual classifiers. However, those textual tokens have a limited\ngeneralization ability regarding unseen domains, as they cannot dynamically\nadjust to the distribution of testing classes. To tackle this issue, we present\na novel Textual-based Class-aware Prompt tuning(TCP) that explicitly\nincorporates prior knowledge about classes to enhance their discriminability.\nThe critical concept of TCP involves leveraging Textual Knowledge Embedding\n(TKE) to map the high generalizability of class-level textual knowledge into\nclass-aware textual tokens. By seamlessly integrating these class-aware prompts\ninto the Text Encoder, a dynamic class-aware classifier is generated to enhance\ndiscriminability for unseen domains. During inference, TKE dynamically\ngenerates class-aware prompts related to the unseen classes. Comprehensive\nevaluations demonstrate that TKE serves as a plug-and-play module effortlessly\ncombinable with existing methods. 
Furthermore, TCP consistently achieves\nsuperior performance while demanding less training time.\nCode:https://github.com/htyao89/Textual-based_Class-aware_prompt_tuning/", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Hantao Yao", "Rui Zhang", "Changsheng Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efc3"}, "filepath": "data/2401.05334.png", "tags": [], "_media_type": "image", "_rand": 0.9997672934363421, "arXiv_link": "http://export.arxiv.org/abs/2401.05334", "other_link": "", "title": "URHand: Universal Relightable Hands", "abstract": "Existing photorealistic relightable hand models require extensive\nidentity-specific observations in different views, poses, and illuminations,\nand face challenges in generalizing to natural illuminations and novel\nidentities. To bridge this gap, we present URHand, the first universal\nrelightable hand model that generalizes across viewpoints, poses,\nilluminations, and identities. Our model allows few-shot personalization using\nimages captured with a mobile phone, and is ready to be photorealistically\nrendered under novel illuminations. To simplify the personalization process\nwhile retaining photorealism, we build a powerful universal relightable prior\nbased on neural relighting from multi-view images of hands captured in a light\nstage with hundreds of identities. The key challenge is scaling the\ncross-identity training while maintaining personalized fidelity and sharp\ndetails without compromising generalization under natural illuminations. To\nthis end, we propose a spatially varying linear lighting model as the neural\nrenderer that takes physics-inspired shading as input feature. By removing\nnon-linear activations and bias, our specifically designed lighting model\nexplicitly keeps the linearity of light transport. This enables single-stage\ntraining from light-stage data while generalizing to real-time rendering under\narbitrary continuous illuminations across diverse identities. In addition, we\nintroduce the joint learning of a physically based model and our neural\nrelighting model, which further improves fidelity and generalization. Extensive\nexperiments show that our approach achieves superior performance over existing\nmethods in terms of both quality and generalizability. 
We also demonstrate\nquick personalization of URHand from a short phone scan of an unseen identity.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Computational imaging and physics-based vision"], "authors_list": ["Zhaoxi Chen", "Gyeongsik Moon", "Kaiwen Guo", "Chen Cao", "Stanislav Pidhorskyi", "Tomas Simon", "Rohan Joshi", "Yuan Dong", "Yichen Xu", "Bernardo Pires", "He Wen", "Lucas Evans", "Bo Peng", "Julia Buffalini", "Autumn Trimble", "Kevyn McPhail", "Melissa Schoeller", "Shoou-I Yu", "Javier Romero", "Michael Zollhoefer", "Yaser Sheikh", "Ziwei Liu", "Shunsuke Saito"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efc4"}, "filepath": "data/2404.08640.png", "tags": [], "_media_type": "image", "_rand": 0.9996067684204355, "arXiv_link": "https://arxiv.org/abs/2404.08640", "other_link": "", "title": "EventEgo3D: 3D Human Motion Capture from Egocentric Event Streams", "abstract": "Monocular egocentric 3D human motion capture is a challenging and actively\nresearched problem. Existing methods use synchronously operating visual sensors\n(e.g. RGB cameras) and often fail under low lighting and fast motions, which\ncan be restricting in many applications involving head-mounted devices. In\nresponse to the existing limitations, this paper 1) introduces a new problem,\ni.e., 3D human motion capture from an egocentric monocular event camera with a\nfisheye lens, and 2) proposes the first approach to it called EventEgo3D\n(EE3D). Event streams have high temporal resolution and provide reliable cues\nfor 3D human motion capture under high-speed human motions and rapidly changing\nillumination. The proposed EE3D framework is specifically tailored for learning\nwith event streams in the LNES representation, enabling high 3D reconstruction\naccuracy. We also design a prototype of a mobile head-mounted device with an\nevent camera and record a real dataset with event observations and the\nground-truth 3D human poses (in addition to the synthetic dataset). Our EE3D\ndemonstrates robustness and superior 3D accuracy compared to existing solutions\nacross various challenging experiments while supporting real-time 3D pose\nupdate rates of 140Hz.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Christen Millerdurai", "Hiroyasu Akada", "Jian Wang", "Diogo Luvizon", "Christian Theobalt", "Vladislav Golyanik"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efc5"}, "filepath": "data/2405.02066.png", "tags": [], "_media_type": "image", "_rand": 0.9991856601993363, "arXiv_link": "https://arxiv.org/abs/2405.02066", "other_link": "", "title": "WateRF: Robust Watermarks in Radiance Fields for Protection of Copyrights", "abstract": "The advances in the Neural Radiance Fields (NeRF) research offer extensive\napplications in diverse domains, but protecting their copyrights has not yet\nbeen researched in depth. Recently, NeRF watermarking has been considered one\nof the pivotal solutions for safely deploying NeRF-based 3D representations.\nHowever, existing methods are designed to apply only to implicit or explicit\nNeRF representations. 
In this work, we introduce an innovative watermarking\nmethod that can be employed in both representations of NeRF. This is achieved\nby fine-tuning NeRF to embed binary messages in the rendering process. In\ndetail, we propose utilizing the discrete wavelet transform in the NeRF space\nfor watermarking. Furthermore, we adopt a deferred back-propagation technique\nand introduce a combination with the patch-wise loss to improve rendering\nquality and bit accuracy with minimum trade-offs. We evaluate our method in\nthree different aspects: capacity, invisibility, and robustness of the embedded\nwatermarks in the 2D-rendered images. Our method achieves state-of-the-art\nperformance with faster training speed over the compared state-of-the-art\nmethods.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Youngdong Jang", "Dong In Lee", "MinHyuk Jang", "Jong Wook Kim", "Feng Yang", "Sangpil Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efc6"}, "filepath": "data/2405.08909.png", "tags": [], "_media_type": "image", "_rand": 0.9993811141796033, "arXiv_link": "https://arxiv.org/abs/2405.08909", "other_link": "https://github.com/dsx0511/ADA-Track.", "title": "ADA-Track: End-to-End Multi-Camera 3D Multi-Object Tracking with Alternating Detection and Association", "abstract": "Many query-based approaches for 3D Multi-Object Tracking (MOT) adopt the\ntracking-by-attention paradigm, utilizing track queries for identity-consistent\ndetection and object queries for identity-agnostic track spawning.\nTracking-by-attention, however, entangles detection and tracking queries in one\nembedding for both the detection and tracking task, which is sub-optimal. Other\napproaches resemble the tracking-by-detection paradigm, detecting objects using\ndecoupled track and detection queries followed by a subsequent association.\nThese methods, however, do not leverage synergies between the detection and\nassociation task. Combining the strengths of both paradigms, we introduce\nADA-Track, a novel end-to-end framework for 3D MOT from multi-view cameras. We\nintroduce a learnable data association module based on edge-augmented\ncross-attention, leveraging appearance and geometric features. Furthermore, we\nintegrate this association module into the decoder layer of a DETR-based 3D\ndetector, enabling simultaneous DETR-like query-to-image cross-attention for\ndetection and query-to-query cross-attention for data association. By stacking\nthese decoder layers, queries are refined for the detection and association\ntask alternately, effectively harnessing the task dependencies. We evaluate our\nmethod on the nuScenes dataset and demonstrate the advantage of our approach\ncompared to the two previous paradigms. 
Code is available at\nhttps://github.com/dsx0511/ADA-Track.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Shuxiao Ding", "Lukas Schneider", "Marius Cordts", "J\u00fcrgen Gall"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efc7"}, "filepath": "data/2403.13512.png", "tags": [], "_media_type": "image", "_rand": 0.9997738983961283, "arXiv_link": "https://arxiv.org/abs/2403.13512", "other_link": "https://github.com/shicaiwei123/SDD-CVPR2024", "title": "Scale Decoupled Distillation", "abstract": "Logit knowledge distillation attracts increasing attention due to its\npracticality in recent studies. However, it often suffers inferior performance\ncompared to the feature knowledge distillation. In this paper, we argue that\nexisting logit-based methods may be sub-optimal since they only leverage the\nglobal logit output that couples multiple semantic knowledge. This may transfer\nambiguous knowledge to the student and mislead its learning. To this end, we\npropose a simple but effective method, i.e., Scale Decoupled Distillation\n(SDD), for logit knowledge distillation. SDD decouples the global logit output\ninto multiple local logit outputs and establishes distillation pipelines for\nthem. This helps the student to mine and inherit fine-grained and unambiguous\nlogit knowledge. Moreover, the decoupled knowledge can be further divided into\nconsistent and complementary logit knowledge that transfers the semantic\ninformation and sample ambiguity, respectively. By increasing the weight of\ncomplementary parts, SDD can guide the student to focus more on ambiguous\nsamples, improving its discrimination ability. Extensive experiments on several\nbenchmark datasets demonstrate the effectiveness of SDD for wide\nteacher-student pairs, especially in the fine-grained classification task. Code\nis available at: https://github.com/shicaiwei123/SDD-CVPR2024", "keywords": [], "authors_list": ["Shicai Wei", "Chunbo Luo", "Yang Luo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efc8"}, "filepath": "data/2404.03658.png", "tags": [], "_media_type": "image", "_rand": 0.9996613819686381, "arXiv_link": "https://arxiv.org/abs/2404.03658", "other_link": "https://ruili3.github.io/kyn.", "title": "Know Your Neighbors: Improving Single-View Reconstruction via Spatial Vision-Language Reasoning", "abstract": "Recovering the 3D scene geometry from a single view is a fundamental yet\nill-posed problem in computer vision. While classical depth estimation methods\ninfer only a 2.5D scene representation limited to the image plane, recent\napproaches based on radiance fields reconstruct a full 3D representation.\nHowever, these methods still struggle with occluded regions since inferring\ngeometry without visual observation requires (i) semantic knowledge of the\nsurroundings, and (ii) reasoning about spatial context. We propose KYN, a novel\nmethod for single-view scene reconstruction that reasons about semantic and\nspatial context to predict each point's density. We introduce a vision-language\nmodulation module to enrich point features with fine-grained semantic\ninformation. 
We aggregate point representations across the scene through a\nlanguage-guided spatial attention mechanism to yield per-point density\npredictions aware of the 3D semantic context. We show that KYN improves 3D\nshape recovery compared to predicting density for each 3D point in isolation.\nWe achieve state-of-the-art results in scene and object reconstruction on\nKITTI-360, and show improved zero-shot generalization compared to prior work.\nProject page: https://ruili3.github.io/kyn.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Rui Li", "Tobias Fischer", "Mattia Segu", "Marc Pollefeys", "Luc Van Gool", "Federico Tombari"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efc9"}, "filepath": "data/2312.10240.png", "tags": [], "_media_type": "image", "_rand": 0.999193848645121, "arXiv_link": "https://arxiv.org/abs/2312.10240", "other_link": "https://github.com/google-research/google-research/tree/master/richhf_18k.", "title": "Rich Human Feedback for Text-to-Image Generation", "abstract": "Recent Text-to-Image (T2I) generation models such as Stable Diffusion and\nImagen have made significant progress in generating high-resolution images\nbased on text descriptions. However, many generated images still suffer from\nissues such as artifacts/implausibility, misalignment with text descriptions,\nand low aesthetic quality. Inspired by the success of Reinforcement Learning\nwith Human Feedback (RLHF) for large language models, prior works collected\nhuman-provided scores as feedback on generated images and trained a reward\nmodel to improve the T2I generation. In this paper, we enrich the feedback\nsignal by (i) marking image regions that are implausible or misaligned with the\ntext, and (ii) annotating which words in the text prompt are misrepresented or\nmissing on the image. We collect such rich human feedback on 18K generated\nimages (RichHF-18K) and train a multimodal transformer to predict the rich\nfeedback automatically. We show that the predicted rich human feedback can be\nleveraged to improve image generation, for example, by selecting high-quality\ntraining data to finetune and improve the generative models, or by creating\nmasks with predicted heatmaps to inpaint the problematic regions. 
Notably, the\nimprovements generalize to models (Muse) beyond those used to generate the\nimages on which human feedback data were collected (Stable Diffusion variants).\nThe RichHF-18K data set will be released in our GitHub repository:\nhttps://github.com/google-research/google-research/tree/master/richhf_18k.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Youwei Liang", "Junfeng He", "Gang Li", "Peizhao Li", "Arseniy Klimovskiy", "Nicholas Carolan", "Jiao Sun", "Jordi Pont-Tuset", "Sarah Young", "Feng Yang", "Junjie Ke", "Krishnamurthy Dvijotham", "Katherine Collins", "Yiwen Luo", "Yang Li", "Kai Kohlhoff", "Deepak Ramachandran", "Vidhya Navalpakkam"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efca"}, "filepath": "data/2403.05817.png", "tags": [], "_media_type": "image", "_rand": 0.9998780849992855, "arXiv_link": "https://arxiv.org/abs/2403.05817", "other_link": "https://github.com/zhanggang001/HEDNet.", "title": "SAFDNet: A Simple and Effective Network for Fully Sparse 3D Object Detection", "abstract": "LiDAR-based 3D object detection plays an essential role in autonomous\ndriving. Existing high-performing 3D object detectors usually build dense\nfeature maps in the backbone network and prediction head. However, the\ncomputational costs introduced by the dense feature maps grow quadratically as\nthe perception range increases, making these models hard to scale up to\nlong-range detection. Some recent works have attempted to construct fully\nsparse detectors to solve this issue; nevertheless, the resulting models either\nrely on a complex multi-stage pipeline or exhibit inferior performance. In this\nwork, we propose SAFDNet, a straightforward yet highly effective architecture,\ntailored for fully sparse 3D object detection. In SAFDNet, an adaptive feature\ndiffusion strategy is designed to address the center feature missing problem.\nWe conducted extensive experiments on Waymo Open, nuScenes, and Argoverse2\ndatasets. SAFDNet performed slightly better than the previous SOTA on the first\ntwo datasets but much better on the last dataset, which features long-range\ndetection, verifying the efficacy of SAFDNet in scenarios where long-range\ndetection is required. Notably, on Argoverse2, SAFDNet surpassed the previous\nbest hybrid detector HEDNet by 2.6% mAP while being 2.1x faster, and yielded\n2.1% mAP gains over the previous best sparse detector FSDv2 while being 1.3x\nfaster. 
The code will be available at https://github.com/zhanggang001/HEDNet.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Gang Zhang", "Chen Junnan", "Guohuan Gao", "Jianmin Li", "Si Liu", "Xiaolin Hu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efcb"}, "filepath": "data/2403.17409.png", "tags": [], "_media_type": "image", "_rand": 0.9991000055332787, "arXiv_link": "https://arxiv.org/abs/2403.17409", "other_link": "", "title": "Neural Clustering based Visual Representation Learning", "abstract": "We investigate a fundamental aspect of machine vision: the measurement of\nfeatures, by revisiting clustering, one of the most classic approaches in\nmachine learning and data analysis. Existing visual feature extractors,\nincluding ConvNets, ViTs, and MLPs, represent an image as rectangular regions.\nThough prevalent, such a grid-style paradigm is built upon engineering practice\nand lacks explicit modeling of data distribution. In this work, we propose\nfeature extraction with clustering (FEC), a conceptually elegant yet\nsurprisingly ad-hoc interpretable neural clustering framework, which views\nfeature extraction as a process of selecting representatives from data and thus\nautomatically captures the underlying data distribution. Given an image, FEC\nalternates between grouping pixels into individual clusters to abstract\nrepresentatives and updating the deep features of pixels with current\nrepresentatives. Such an iterative working mechanism is implemented in the form\nof several neural layers and the final representatives can be used for\ndownstream tasks. The cluster assignments across layers, which can be viewed\nand inspected by humans, make the forward process of FEC fully transparent and\nempower it with promising ad-hoc interpretability. Extensive experiments on\nvarious visual recognition models and tasks verify the effectiveness,\ngenerality, and interpretability of FEC. We expect this work will provoke a\nrethink of the current de facto grid-style paradigm.", "keywords": [], "authors_list": ["Guikun Chen", "Xia Li", "Yi Yang", "Wenguan Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efcc"}, "filepath": "data/2403.02241.png", "tags": [], "_media_type": "image", "_rand": 0.9992968184429241, "arXiv_link": "https://arxiv.org/abs/2403.02241", "other_link": "", "title": "Neural Redshift: Random Networks are not Random Functions", "abstract": "Our understanding of the generalization capabilities of neural networks (NNs)\nis still incomplete. Prevailing explanations are based on implicit biases of\ngradient descent (GD) but they cannot account for the capabilities of models\nfrom gradient-free methods nor the simplicity bias recently observed in\nuntrained networks. This paper seeks other sources of generalization in NNs.\n Findings. To understand the inductive biases provided by architectures\nindependently from GD, we examine untrained, random-weight networks. Even\nsimple MLPs show strong inductive biases: uniform sampling in weight space\nyields a very biased distribution of functions in terms of complexity. But\nunlike common wisdom, NNs do not have an inherent \"simplicity bias\". 
This\nproperty depends on components such as ReLUs, residual connections, and layer\nnormalizations. Alternative architectures can be built with a bias for any\nlevel of complexity. Transformers also inherit all these properties from their\nbuilding blocks.\n Implications. We provide a fresh explanation for the success of deep learning\nindependent from gradient-based training. It points at promising avenues for\ncontrolling the solutions implemented by trained models.", "keywords": [], "authors_list": ["Damien Teney", "Armand Nicolicioiu", "Valentin Hartmann", "Ehsan Abbasnejad"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efcd"}, "filepath": "data/2312.16222.png", "tags": [], "_media_type": "image", "_rand": 0.999906724938262, "arXiv_link": "https://arxiv.org/abs/2312.16222", "other_link": "http://github.com/happychenpipi/EventSAM.", "title": "Segment Any Event Streams via Weighted Adaptation of Pivotal Tokens", "abstract": "In this paper, we delve into the nuanced challenge of tailoring the Segment\nAnything Models (SAMs) for integration with event data, with the overarching\nobjective of attaining robust and universal object segmentation within the\nevent-centric domain. One pivotal issue at the heart of this endeavor is the\nprecise alignment and calibration of embeddings derived from event-centric data\nsuch that they harmoniously coincide with those originating from RGB imagery.\nCapitalizing on the vast repositories of datasets with paired events and RGB\nimages, our proposition is to harness and extrapolate the profound knowledge\nencapsulated within the pre-trained SAM framework. As a cornerstone to\nachieving this, we introduce a multi-scale feature distillation methodology.\nThis methodology rigorously optimizes the alignment of token embeddings\noriginating from event data with their RGB image counterparts, thereby\npreserving and enhancing the robustness of the overall architecture.\nConsidering the distinct significance that token embeddings from intermediate\nlayers hold for higher-level embeddings, our strategy is centered on accurately\ncalibrating the pivotal token embeddings. This targeted calibration is aimed at\neffectively managing the discrepancies in high-level embeddings originating\nfrom both the event and image domains. Extensive experiments on different\ndatasets demonstrate the effectiveness of the proposed distillation method.\nCode in http://github.com/happychenpipi/EventSAM.", "keywords": [], "authors_list": ["Zhiwen Chen", "Zhiyu Zhu", "Yifan Zhang", "Junhui Hou", "Guangming Shi", "Jinjian Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efce"}, "filepath": "data/2403.11530.png", "tags": [], "_media_type": "image", "_rand": 0.999544358441718, "arXiv_link": "https://arxiv.org/abs/2403.11530", "other_link": "https://github.com/bjzhb666/GS-LoRA}.", "title": "Continual Forgetting for Pre-trained Vision Models", "abstract": "For privacy and security concerns, the need to erase unwanted information\nfrom pre-trained vision models is becoming evident nowadays. In real-world\nscenarios, erasure requests originate at any time from both users and model\nowners. These requests usually form a sequence. 
Therefore, under such a\nsetting, selective information is expected to be continuously removed from a\npre-trained model while maintaining the rest. We define this problem as\ncontinual forgetting and identify two key challenges. (i) For unwanted\nknowledge, efficient and effective deleting is crucial. (ii) For remaining\nknowledge, the impact brought by the forgetting procedure should be minimal. To\naddress them, we propose Group Sparse LoRA (GS-LoRA). Specifically, towards\n(i), we use LoRA modules to fine-tune the FFN layers in Transformer blocks for\neach forgetting task independently, and towards (ii), a simple group sparse\nregularization is adopted, enabling automatic selection of specific LoRA groups\nand zeroing out the others. GS-LoRA is effective, parameter-efficient,\ndata-efficient, and easy to implement. We conduct extensive experiments on face\nrecognition, object detection and image classification and demonstrate that\nGS-LoRA manages to forget specific classes with minimal impact on other\nclasses. Codes will be released on \\url{https://github.com/bjzhb666/GS-LoRA}.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Hongbo Zhao", "Bolin Ni", "Junsong Fan", "Yuxi Wang", "Yuntao Chen", "Gaofeng Meng", "Zhaoxiang Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efcf"}, "filepath": "data/2403.01053.png", "tags": [], "_media_type": "image", "_rand": 0.9991504553429301, "arXiv_link": "https://arxiv.org/abs/2403.01053", "other_link": "", "title": "Seeing Unseen: Discover Novel Biomedical Concepts via Geometry-Constrained Probabilistic Modeling", "abstract": "Machine learning holds tremendous promise for transforming the fundamental\npractice of scientific discovery by virtue of its data-driven nature. With the\never-increasing stream of research data collection, it would be appealing to\nautonomously explore patterns and insights from observational data for\ndiscovering novel classes of phenotypes and concepts. However, in the\nbiomedical domain, there are several challenges inherently presented in the\ncumulated data which hamper the progress of novel class discovery. The\nnon-i.i.d. data distribution accompanied by the severe imbalance among\ndifferent groups of classes essentially leads to ambiguous and biased semantic\nrepresentations. In this work, we present a geometry-constrained probabilistic\nmodeling treatment to resolve the identified issues. First, we propose to\nparameterize the approximated posterior of instance embedding as a marginal von\nMisesFisher distribution to account for the interference of distributional\nlatent bias. Then, we incorporate a suite of critical geometric properties to\nimpose proper constraints on the layout of constructed embedding space, which\nin turn minimizes the uncontrollable risk for unknown class learning and\nstructuring. Furthermore, a spectral graph-theoretic method is devised to\nestimate the number of potential novel classes. It inherits two intriguing\nmerits compared to existent approaches, namely high computational efficiency\nand flexibility for taxonomy-adaptive estimation. 
Extensive experiments across\nvarious biomedical scenarios substantiate the effectiveness and general\napplicability of our method.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Jianan Fan", "Dongnan Liu", "Hang Chang", "Heng Huang", "Mei Chen", "Weidong Cai"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efd0"}, "filepath": "data/2310.16667.png", "tags": [], "_media_type": "image", "_rand": 0.9997033581011604, "arXiv_link": "https://arxiv.org/abs/2310.16667", "other_link": "https://github.com/CVMI-Lab/CoDet.", "title": "Exploring Region-Word Alignment in Built-in Detector for Open-Vocabulary Object Detection", "abstract": "Deriving reliable region-word alignment from image-text pairs is critical to\nlearn object-level vision-language representations for open-vocabulary object\ndetection. Existing methods typically rely on pre-trained or self-trained\nvision-language models for alignment, which are prone to limitations in\nlocalization accuracy or generalization capabilities. In this paper, we propose\nCoDet, a novel approach that overcomes the reliance on pre-aligned\nvision-language space by reformulating region-word alignment as a co-occurring\nobject discovery problem. Intuitively, by grouping images that mention a shared\nconcept in their captions, objects corresponding to the shared concept shall\nexhibit high co-occurrence among the group. CoDet then leverages visual\nsimilarities to discover the co-occurring objects and align them with the\nshared concept. Extensive experiments demonstrate that CoDet has superior\nperformances and compelling scalability in open-vocabulary detection, e.g., by\nscaling up the visual backbone, CoDet achieves 37.0 $\\text{AP}^m_{novel}$ and\n44.7 $\\text{AP}^m_{all}$ on OV-LVIS, surpassing the previous SoTA by 4.2\n$\\text{AP}^m_{novel}$ and 9.8 $\\text{AP}^m_{all}$. Code is available at\nhttps://github.com/CVMI-Lab/CoDet.", "keywords": ["Multimodal models and vision-language models", "Efficient and scalable vision"], "authors_list": ["Heng Zhang", "Qiuyu Zhao", "Linyu Zheng", "Hao Zeng", "Zhiwei Ge", "Tianhao Li", "Sulong Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efd1"}, "filepath": "data/2404.14949.png", "tags": [], "_media_type": "image", "_rand": 0.9991889527516654, "arXiv_link": "https://arxiv.org/abs/2404.14949", "other_link": "", "title": "Blind Image Quality Assessment Based on Geometric Order Learning", "abstract": "Image Quality Assessment (IQA) models benefit significantly from semantic\ninformation, which allows them to treat different types of objects distinctly.\nCurrently, leveraging semantic information to enhance IQA is a crucial research\ndirection. Traditional methods, hindered by a lack of sufficiently annotated\ndata, have employed the CLIP image-text pretraining model as their backbone to\ngain semantic awareness. However, the generalist nature of these pre-trained\nVision-Language (VL) models often renders them suboptimal for IQA-specific\ntasks. Recent approaches have attempted to address this mismatch using prompt\ntechnology, but these solutions have shortcomings. 
Existing prompt-based VL\nmodels overly focus on incremental semantic information from text, neglecting\nthe rich insights available from visual data analysis. This imbalance limits\ntheir performance improvements in IQA tasks. This paper introduces an\ninnovative multi-modal prompt-based methodology for IQA. Our approach employs\ncarefully crafted prompts that synergistically mine incremental semantic\ninformation from both visual and linguistic data. Specifically, in the visual\nbranch, we introduce a multi-layer prompt structure to enhance the VL model's\nadaptability. In the text branch, we deploy a dual-prompt scheme that steers\nthe model to recognize and differentiate between scene category and distortion\ntype, thereby refining the model's capacity to assess image quality. Our\nexperimental findings underscore the effectiveness of our method over existing\nBlind Image Quality Assessment (BIQA) approaches. Notably, it demonstrates\ncompetitive performance across various datasets. Our method achieves Spearman\nRank Correlation Coefficient (SRCC) values of 0.961(surpassing 0.946 in CSIQ)\nand 0.941 (exceeding 0.930 in KADID), illustrating its robustness and accuracy\nin diverse contexts.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Nyeong-Ho Shin", "Seon-Ho Lee", "Chang-Su Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efd2"}, "filepath": "data/2402.19161v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997994255863171, "arXiv_link": "https://arxiv.org/abs/2402.19161v1", "other_link": "", "title": "MemoNav: Working Memory Model for Visual Navigation", "abstract": "Image-goal navigation is a challenging task that requires an agent to\nnavigate to a goal indicated by an image in unfamiliar environments. Existing\nmethods utilizing diverse scene memories suffer from inefficient exploration\nsince they use all historical observations for decision-making without\nconsidering the goal-relevant fraction. To address this limitation, we present\nMemoNav, a novel memory model for image-goal navigation, which utilizes a\nworking memory-inspired pipeline to improve navigation performance.\nSpecifically, we employ three types of navigation memory. The node features on\na map are stored in the short-term memory (STM), as these features are\ndynamically updated. A forgetting module then retains the informative STM\nfraction to increase efficiency. We also introduce long-term memory (LTM) to\nlearn global scene representations by progressively aggregating STM features.\nSubsequently, a graph attention module encodes the retained STM and the LTM to\ngenerate working memory (WM) which contains the scene features essential for\nefficient navigation. The synergy among these three memory types boosts\nnavigation performance by enabling the agent to learn and leverage\ngoal-relevant scene features within a topological map. 
Our evaluation on\nmulti-goal tasks demonstrates that MemoNav significantly outperforms previous\nmethods across all difficulty levels in both Gibson and Matterport3D scenes.\nQualitative results further illustrate that MemoNav plans more efficient\nroutes.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding", "Efficient and scalable vision"], "authors_list": ["Hongxin Li", "Zeyu Wang", "Xu Yang", "yuran Yang", "Shuqi Mei", "Zhaoxiang Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efd3"}, "filepath": "data/2403.16440.png", "tags": [], "_media_type": "image", "_rand": 0.9998735379544775, "arXiv_link": "https://arxiv.org/abs/2403.16440", "other_link": "https://github.com/VDIGPKU/RCBEVDet.", "title": "RCBEVDet: Radar-camera Fusion in Bird\u2019s Eye View for 3D Object Detection", "abstract": "Three-dimensional object detection is one of the key tasks in autonomous\ndriving. To reduce costs in practice, low-cost multi-view cameras for 3D object\ndetection are proposed to replace the expansive LiDAR sensors. However, relying\nsolely on cameras is difficult to achieve highly accurate and robust 3D object\ndetection. An effective solution to this issue is combining multi-view cameras\nwith the economical millimeter-wave radar sensor to achieve more reliable\nmulti-modal 3D object detection. In this paper, we introduce RCBEVDet, a\nradar-camera fusion 3D object detection method in the bird's eye view (BEV).\nSpecifically, we first design RadarBEVNet for radar BEV feature extraction.\nRadarBEVNet consists of a dual-stream radar backbone and a Radar Cross-Section\n(RCS) aware BEV encoder. In the dual-stream radar backbone, a point-based\nencoder and a transformer-based encoder are proposed to extract radar features,\nwith an injection and extraction module to facilitate communication between the\ntwo encoders. The RCS-aware BEV encoder takes RCS as the object size prior to\nscattering the point feature in BEV. Besides, we present the Cross-Attention\nMulti-layer Fusion module to automatically align the multi-modal BEV feature\nfrom radar and camera with the deformable attention mechanism, and then fuse\nthe feature with channel and spatial fusion layers. Experimental results show\nthat RCBEVDet achieves new state-of-the-art radar-camera fusion results on\nnuScenes and view-of-delft (VoD) 3D object detection benchmarks. Furthermore,\nRCBEVDet achieves better 3D detection results than all real-time camera-only\nand radar-camera 3D object detectors with a faster inference speed at 21~28\nFPS. 
The source code will be released at https://github.com/VDIGPKU/RCBEVDet.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models"], "authors_list": ["Zhiwei Lin", "Zhe Liu", "Zhongyu Xia", "Xinhao Wang", "Yongtao Wang", "Shengxiang Qi", "Yang Dong", "Nan Dong", "Le Zhang", "Ce Zhu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efd4"}, "filepath": "data/2312.07856.png", "tags": [], "_media_type": "image", "_rand": 0.9998853258423046, "arXiv_link": "https://arxiv.org/abs/2312.07856", "other_link": "https://github.com/heekhero/DTL.", "title": "Instance-based Max-margin for Practical Few-shot Recognition", "abstract": "When pre-trained models become rapidly larger, the cost of fine-tuning on\ndownstream tasks steadily increases, too. To economically fine-tune these\nmodels, parameter-efficient transfer learning (PETL) is proposed, which only\ntunes a tiny subset of trainable parameters to efficiently learn quality\nrepresentations. However, current PETL methods are facing the dilemma that\nduring training the GPU memory footprint is not effectively reduced as\ntrainable parameters. PETL will likely fail, too, if the full fine-tuning\nencounters the out-of-GPU-memory issue. This phenomenon happens because\ntrainable parameters from these methods are generally entangled with the\nbackbone, such that a lot of intermediate states have to be stored in GPU\nmemory for gradient propagation. To alleviate this problem, we introduce\nDisentangled Transfer Learning (DTL), which disentangles the trainable\nparameters from the backbone using a lightweight Compact Side Network (CSN). By\nprogressively extracting task-specific information with a few low-rank linear\nmappings and appropriately adding the information back to the backbone, CSN\neffectively realizes knowledge transfer in various downstream tasks. We\nconducted extensive experiments to validate the effectiveness of our method.\nThe proposed method not only reduces a large amount of GPU memory usage and\ntrainable parameters, but also outperforms existing PETL methods by a\nsignificant margin in accuracy, achieving new state-of-the-art on several\nstandard benchmarks. The code is available at https://github.com/heekhero/DTL.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Minghao Fu", "Ke Zhu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efd5"}, "filepath": "data/2312.02051.png", "tags": [], "_media_type": "image", "_rand": 0.9995665447564661, "arXiv_link": "https://arxiv.org/abs/2312.02051", "other_link": "", "title": "TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding", "abstract": "This work proposes TimeChat, a time-sensitive multimodal large language model\nspecifically designed for long video understanding. Our model incorporates two\nkey architectural contributions: (1) a timestamp-aware frame encoder that binds\nvisual content with the timestamp of each frame, and (2) a sliding video\nQ-Former that produces a video token sequence of varying lengths to accommodate\nvideos of various durations. 
Additionally, we construct an instruction-tuning\ndataset, encompassing 6 tasks and a total of 125K instances, to further enhance\nTimeChat's instruction-following performance. Experiment results across various\nvideo understanding tasks, such as dense captioning, temporal grounding, and\nhighlight detection, demonstrate TimeChat's strong zero-shot temporal\nlocalization and reasoning capabilities. For example, it achieves +9.2 F1 score\nand +2.8 CIDEr on YouCook2, +5.8 HIT@1 on QVHighlights, and +27.5 R@1 (IoU=0.5)\non Charades-STA, compared to state-of-the-art video large language models,\nholding the potential to serve as a versatile video assistant for long-form\nvideo comprehension tasks and satisfy realistic user requirements.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Shuhuai Ren", "Linli Yao", "Shicheng Li", "Xu Sun", "Lu Hou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efd6"}, "filepath": "data/2404.12383.png", "tags": [], "_media_type": "image", "_rand": 0.9994856999071173, "arXiv_link": "https://arxiv.org/abs/2404.12383", "other_link": "https://judyye.github.io/ghop-www", "title": "G-HOP: Generative Hand-Object Prior for Interaction Reconstruction and Grasp Synthesis", "abstract": "We propose G-HOP, a denoising diffusion based generative prior for\nhand-object interactions that allows modeling both the 3D object and a human\nhand, conditioned on the object category. To learn a 3D spatial diffusion model\nthat can capture this joint distribution, we represent the human hand via a\nskeletal distance field to obtain a representation aligned with the (latent)\nsigned distance field for the object. We show that this hand-object prior can\nthen serve as generic guidance to facilitate other tasks like reconstruction\nfrom interaction clip and human grasp synthesis. We believe that our model,\ntrained by aggregating seven diverse real-world interaction datasets spanning\nacross 155 categories, represents a first approach that allows jointly\ngenerating both hand and object. Our empirical evaluations demonstrate the\nbenefit of this joint prior in video-based reconstruction and human grasp\nsynthesis, outperforming current task-specific baselines.\n Project website: https://judyye.github.io/ghop-www", "keywords": ["Biometrics and human analysis"], "authors_list": ["Yufei Ye", "Abhinav Gupta", "Kris Kitani", "Shubham Tulsiani"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efd7"}, "filepath": "data/2311.18445v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992774849851365, "arXiv_link": "https://arxiv.org/abs/2311.18445v1", "other_link": "", "title": "VTimeLLM: Empower LLM to Grasp Video Moments", "abstract": "Large language models (LLMs) have shown remarkable text understanding\ncapabilities, which have been extended as Video LLMs to handle video data for\ncomprehending visual details. However, existing Video LLMs can only provide a\ncoarse description of the entire video, failing to capture the precise start\nand end time boundary of specific events. 
In this paper, we solve this issue\nvia proposing VTimeLLM, a novel Video LLM designed for fine-grained video\nmoment understanding and reasoning with respect to time boundary. Specifically,\nour VTimeLLM adopts a boundary-aware three-stage training strategy, which\nrespectively utilizes image-text pairs for feature alignment, multiple-event\nvideos to increase temporal-boundary awareness, and high-quality\nvideo-instruction tuning to further improve temporal understanding ability as\nwell as align with human intents. Extensive experiments demonstrate that in\nfine-grained time-related comprehension tasks for videos such as Temporal Video\nGrounding and Dense Video Captioning, VTimeLLM significantly outperforms\nexisting Video LLMs. Besides, benefits from the fine-grained temporal\nunderstanding of the videos further enable VTimeLLM to beat existing Video LLMs\nin video dialogue benchmark, showing its superior cross-modal understanding and\nreasoning abilities.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Image and video generation and manipulation"], "authors_list": ["Bin Huang", "Xin Wang", "Hong Chen", "Zihan Song", "Wenwu Zhu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efd8"}, "filepath": "data/2403.05005.png", "tags": [], "_media_type": "image", "_rand": 0.9994861588519192, "arXiv_link": "https://arxiv.org/abs/2403.05005", "other_link": "", "title": "DITTO: Dual and Integrated Latent Topologies for Implicit 3D Reconstruction", "abstract": "We propose a novel concept of dual and integrated latent topologies (DITTO in\nshort) for implicit 3D reconstruction from noisy and sparse point clouds. Most\nexisting methods predominantly focus on single latent type, such as point or\ngrid latents. In contrast, the proposed DITTO leverages both point and grid\nlatents (i.e., dual latent) to enhance their strengths, the stability of grid\nlatents and the detail-rich capability of point latents. Concretely, DITTO\nconsists of dual latent encoder and integrated implicit decoder. In the dual\nlatent encoder, a dual latent layer, which is the key module block composing\nthe encoder, refines both latents in parallel, maintaining their distinct\nshapes and enabling recursive interaction. Notably, a newly proposed dynamic\nsparse point transformer within the dual latent layer effectively refines point\nlatents. Then, the integrated implicit decoder systematically combines these\nrefined latents, achieving high-fidelity 3D reconstruction and surpassing\nprevious state-of-the-art methods on object- and scene-level datasets,\nespecially in thin and detailed structures.", "keywords": [], "authors_list": ["Jaehyeok Shim", "Kyungdon Joo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efd9"}, "filepath": "data/2403.14186.png", "tags": [], "_media_type": "image", "_rand": 0.9992196383932245, "arXiv_link": "https://arxiv.org/abs/2403.14186", "other_link": "", "title": "StyleCineGAN: Landscape Cinemagraph Generation using a Pre-trained StyleGAN", "abstract": "We propose a method that can generate cinemagraphs automatically from a still\nlandscape image using a pre-trained StyleGAN. 
Inspired by the success of recent\nunconditional video generation, we leverage a powerful pre-trained image\ngenerator to synthesize high-quality cinemagraphs. Unlike previous approaches\nthat mainly utilize the latent space of a pre-trained StyleGAN, our approach\nutilizes its deep feature space for both GAN inversion and cinemagraph\ngeneration. Specifically, we propose multi-scale deep feature warping (MSDFW),\nwhich warps the intermediate features of a pre-trained StyleGAN at different\nresolutions. By using MSDFW, the generated cinemagraphs are of high resolution\nand exhibit plausible looping animation. We demonstrate the superiority of our\nmethod through user studies and quantitative comparisons with state-of-the-art\ncinemagraph generation methods and a video generation method that uses a\npre-trained StyleGAN.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Jongwoo Choi", "Kwanggyoon Seo", "Amirsaman Ashtari", "Junyong Noh"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efda"}, "filepath": "data/2403.17387.png", "tags": [], "_media_type": "image", "_rand": 0.9991834533886506, "arXiv_link": "https://arxiv.org/abs/2403.17387", "other_link": "", "title": "Decoupled Pseudo-labeling in Semi-Supervised Monocular 3D Object Detection", "abstract": "We delve into pseudo-labeling for semi-supervised monocular 3D object\ndetection (SSM3OD) and discover two primary issues: a misalignment between the\nprediction quality of 3D and 2D attributes and the tendency of depth\nsupervision derived from pseudo-labels to be noisy, leading to significant\noptimization conflicts with other reliable forms of supervision. We introduce a\nnovel decoupled pseudo-labeling (DPL) approach for SSM3OD. Our approach\nfeatures a Decoupled Pseudo-label Generation (DPG) module, designed to\nefficiently generate pseudo-labels by separately processing 2D and 3D\nattributes. This module incorporates a unique homography-based method for\nidentifying dependable pseudo-labels in BEV space, specifically for 3D\nattributes. Additionally, we present a DepthGradient Projection (DGP) module to\nmitigate optimization conflicts caused by noisy depth supervision of\npseudo-labels, effectively decoupling the depth gradient and removing\nconflicting gradients. This dual decoupling strategy-at both the pseudo-label\ngeneration and gradient levels-significantly improves the utilization of\npseudo-labels in SSM3OD. 
Our comprehensive experiments on the KITTI benchmark\ndemonstrate the superiority of our method over existing approaches.", "keywords": [], "authors_list": ["Jiacheng Zhang", "Jiaming Li", "Xiangru Lin", "Wei Zhang", "Xiao Tan", "Junyu Han", "Errui Ding", "Jingdong Wang", "Guanbin Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efdb"}, "filepath": "data/2404.01751v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992718887919619, "arXiv_link": "https://arxiv.org/abs/2404.01751v1", "other_link": "", "title": "T-VSL: Text-Guided Visual Sound Source Localization in Mixtures", "abstract": "Visual sound source localization poses a significant challenge in identifying\nthe semantic region of each sounding source within a video. Existing\nself-supervised and weakly supervised source localization methods struggle to\naccurately distinguish the semantic regions of each sounding object,\nparticularly in multi-source mixtures. These methods often rely on audio-visual\ncorrespondence as guidance, which can lead to substantial performance drops in\ncomplex multi-source localization scenarios. The lack of access to individual\nsource sounds in multi-source mixtures during training exacerbates the\ndifficulty of learning effective audio-visual correspondence for localization.\nTo address this limitation, in this paper, we propose incorporating the text\nmodality as an intermediate feature guide using tri-modal joint embedding\nmodels (e.g., AudioCLIP) to disentangle the semantic audio-visual source\ncorrespondence in multi-source mixtures. Our framework, dubbed T-VSL, begins by\npredicting the class of sounding entities in mixtures. Subsequently, the\ntextual representation of each sounding source is employed as guidance to\ndisentangle fine-grained audio-visual source correspondence from multi-source\nmixtures, leveraging the tri-modal AudioCLIP embedding. This approach enables\nour framework to handle a flexible number of sources and exhibits promising\nzero-shot transferability to unseen classes during test time. Extensive\nexperiments conducted on the MUSIC, VGGSound, and VGGSound-Instruments datasets\ndemonstrate significant performance improvements over state-of-the-art methods.", "keywords": ["Multimodal models and vision-language models", "Scene analysis and understanding"], "authors_list": ["Tanvir Mahmud", "Yapeng Tian", "Diana Marculescu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Sound", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efdc"}, "filepath": "data/2307.00764.png", "tags": [], "_media_type": "image", "_rand": 0.9993598488940351, "arXiv_link": "http://export.arxiv.org/abs/2307.00764", "other_link": "https://github.com/berkeley-hipie/HIPIE.", "title": "USE: Universal Segment Embeddings for Open-Vocabulary Image Segmentation", "abstract": "Open-vocabulary image segmentation aims to partition an image into semantic\nregions according to arbitrary text descriptions. 
However, complex visual\nscenes can be naturally decomposed into simpler parts and abstracted at\nmultiple levels of granularity, introducing inherent segmentation ambiguity.\nUnlike existing methods that typically sidestep this ambiguity and treat it as\nan external factor, our approach actively incorporates a hierarchical\nrepresentation encompassing different semantic-levels into the learning\nprocess. We propose a decoupled text-image fusion mechanism and representation\nlearning modules for both \"things\" and \"stuff\". Additionally, we systematically\nexamine the differences that exist in the textual and visual features between\nthese types of categories. Our resulting model, named HIPIE, tackles\nHIerarchical, oPen-vocabulary, and unIvErsal segmentation tasks within a\nunified framework. Benchmarked on over 40 datasets, e.g., ADE20K, COCO,\nPascal-VOC Part, RefCOCO/RefCOCOg, ODinW and SeginW, HIPIE achieves the\nstate-of-the-art results at various levels of image comprehension, including\nsemantic-level (e.g., semantic segmentation), instance-level (e.g.,\npanoptic/referring segmentation and object detection), as well as part-level\n(e.g., part/subpart segmentation) tasks. Our code is released at\nhttps://github.com/berkeley-hipie/HIPIE.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Xiaoqi Wang", "Wenbin He", "Xiwei Xuan", "Clint Sebastian", "Jorge Piazentin Ono", "Xin Li", "Sima Behpour", "Thang Doan", "Liang Gou", "Shen", "Liu Ren"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efdd"}, "filepath": "data/2403.13683.png", "tags": [], "_media_type": "image", "_rand": 0.9994094674397471, "arXiv_link": "https://arxiv.org/abs/2403.13683", "other_link": "https://github.com/sailor-z/DVMNet/.", "title": "DVMNet: Computing Relative Pose for Unseen Objects Beyond Hypotheses", "abstract": "Determining the relative pose of an object between two images is pivotal to\nthe success of generalizable object pose estimation. Existing approaches\ntypically approximate the continuous pose representation with a large number of\ndiscrete pose hypotheses, which incurs a computationally expensive process of\nscoring each hypothesis at test time. By contrast, we present a Deep Voxel\nMatching Network (DVMNet) that eliminates the need for pose hypotheses and\ncomputes the relative object pose in a single pass. To this end, we map the two\ninput RGB images, reference and query, to their respective voxelized 3D\nrepresentations. We then pass the resulting voxels through a pose estimation\nmodule, where the voxels are aligned and the pose is computed in an end-to-end\nfashion by solving a least-squares problem. To enhance robustness, we introduce\na weighted closest voxel algorithm capable of mitigating the impact of noisy\nvoxels. We conduct extensive experiments on the CO3D, LINEMOD, and Objaverse\ndatasets, demonstrating that our method delivers more accurate relative pose\nestimates for novel objects at a lower computational cost compared to\nstate-of-the-art methods. 
Our code is released at:\nhttps://github.com/sailor-z/DVMNet/.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Chen Zhao", "Tong Zhang", "Zheng Dang", "Mathieu Salzmann"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efde"}, "filepath": "data/2312.12337.png", "tags": [], "_media_type": "image", "_rand": 0.9995028633384552, "arXiv_link": "https://arxiv.org/abs/2312.12337", "other_link": "", "title": "pixelSplat: 3D Gaussian Splats from Image Pairs for Scalable Generalizable 3D Reconstruction", "abstract": "We introduce pixelSplat, a feed-forward model that learns to reconstruct 3D\nradiance fields parameterized by 3D Gaussian primitives from pairs of images.\nOur model features real-time and memory-efficient rendering for scalable\ntraining as well as fast 3D reconstruction at inference time. To overcome local\nminima inherent to sparse and locally supported representations, we predict a\ndense probability distribution over 3D and sample Gaussian means from that\nprobability distribution. We make this sampling operation differentiable via a\nreparameterization trick, allowing us to back-propagate gradients through the\nGaussian splatting representation. We benchmark our method on wide-baseline\nnovel view synthesis on the real-world RealEstate10k and ACID datasets, where\nwe outperform state-of-the-art light field transformers and accelerate\nrendering by 2.5 orders of magnitude while reconstructing an interpretable and\neditable 3D radiance field.", "keywords": ["Efficient and scalable vision"], "authors_list": ["David Charatan", "Sizhe Lester Li", "Andrea Tagliasacchi", "Vincent Sitzmann"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efdf"}, "filepath": "data/2311.18829.png", "tags": [], "_media_type": "image", "_rand": 0.9991078173550595, "arXiv_link": "https://arxiv.org/abs/2311.18829", "other_link": "https://wangyanhui666.github.io/MicroCinema.github.io/", "title": "MicroCinema: A Divide-and-Conquer Approach for Text-to-Video Generation", "abstract": "We present MicroCinema, a straightforward yet effective framework for\nhigh-quality and coherent text-to-video generation. Unlike existing approaches\nthat align text prompts with video directly, MicroCinema introduces a\nDivide-and-Conquer strategy which divides the text-to-video into a two-stage\nprocess: text-to-image generation and image\\&text-to-video generation. This\nstrategy offers two significant advantages. a) It allows us to take full\nadvantage of the recent advances in text-to-image models, such as Stable\nDiffusion, Midjourney, and DALLE, to generate photorealistic and highly\ndetailed images. b) Leveraging the generated image, the model can allocate less\nfocus to fine-grained appearance details, prioritizing the efficient learning\nof motion dynamics. To implement this strategy effectively, we introduce two\ncore designs. First, we propose the Appearance Injection Network, enhancing the\npreservation of the appearance of the given image. Second, we introduce the\nAppearance Noise Prior, a novel mechanism aimed at maintaining the capabilities\nof pre-trained 2D diffusion models. 
These design elements empower MicroCinema\nto generate high-quality videos with precise motion, guided by the provided\ntext prompts. Extensive experiments demonstrate the superiority of the proposed\nframework. Concretely, MicroCinema achieves SOTA zero-shot FVD of 342.86 on\nUCF-101 and 377.40 on MSR-VTT. See\nhttps://wangyanhui666.github.io/MicroCinema.github.io/ for video samples.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Yanhui Wang", "Jianmin Bao", "Wenming Weng", "Ruoyu Feng", "Dacheng Yin", "Tao Yang", "Jingxu Zhang", "Qi Dai", "Zhiyuan Zhao", "Chunyu Wang", "Kai Qiu", "Yuhui Yuan", "Xiaoyan Sun", "Chong Luo", "Baining Guo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efe0"}, "filepath": "data/2312.08878.png", "tags": [], "_media_type": "image", "_rand": 0.9994351642314324, "arXiv_link": "https://arxiv.org/abs/2312.08878", "other_link": "", "title": "Domain Prompt Learning with Quaternion Networks", "abstract": "Prompt learning has emerged as an effective and data-efficient technique in\nlarge Vision-Language Models (VLMs). However, when adapting VLMs to specialized\ndomains such as remote sensing and medical imaging, domain prompt learning\nremains underexplored. While large-scale domain-specific foundation models can\nhelp tackle this challenge, their concentration on a single vision level makes\nit challenging to prompt both vision and language modalities. To overcome this,\nwe propose to leverage domain-specific knowledge from domain-specific\nfoundation models to transfer the robust recognition ability of VLMs from\ngeneralized to specialized domains, using quaternion networks. Specifically,\nthe proposed method involves using domain-specific vision features from\ndomain-specific foundation models to guide the transformation of generalized\ncontextual embeddings from the language branch into a specialized space within\nthe quaternion networks. Moreover, we present a hierarchical approach that\ngenerates vision prompt features by analyzing intermodal relationships between\nhierarchical language prompt features and domain-specific vision features. In\nthis way, quaternion networks can effectively mine the intermodal relationships\nin the specific domain, facilitating domain-specific vision-language\ncontrastive learning. 
Extensive experiments on domain-specific datasets show\nthat our proposed method achieves new state-of-the-art results in prompt\nlearning.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Medical imaging and biological vision"], "authors_list": ["Qinglong Cao", "Zhengqin Xu", "Yuntian Chen", "Chao Ma", "Xiaokang Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efe1"}, "filepath": "data/2404.10227.png", "tags": [], "_media_type": "image", "_rand": 0.9990231821890496, "arXiv_link": "https://arxiv.org/abs/2404.10227", "other_link": "", "title": "MS-MANO: Enabling Hand Pose Tracking with Biomechanical Constraints", "abstract": "This work proposes a novel learning framework for visual hand dynamics\nanalysis that takes into account the physiological aspects of hand motion. The\nexisting models, which are simplified joint-actuated systems, often produce\nunnatural motions. To address this, we integrate a musculoskeletal system with\na learnable parametric hand model, MANO, to create a new model, MS-MANO. This\nmodel emulates the dynamics of muscles and tendons to drive the skeletal\nsystem, imposing physiologically realistic constraints on the resulting torque\ntrajectories. We further propose a simulation-in-the-loop pose refinement\nframework, BioPR, that refines the initial estimated pose through a multi-layer\nperceptron (MLP) network. Our evaluation of the accuracy of MS-MANO and the\nefficacy of the BioPR is conducted in two separate parts. The accuracy of\nMS-MANO is compared with MyoSuite, while the efficacy of BioPR is benchmarked\nagainst two large-scale public datasets and two recent state-of-the-art\nmethods. The results demonstrate that our approach consistently improves the\nbaseline methods both quantitatively and qualitatively.", "keywords": ["Biometrics and human analysis", "Medical imaging and biological vision"], "authors_list": ["Pengfei Xie", "Wenqiang Xu", "Tutian Tang", "Zhenjun Yu", "Cewu Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efe2"}, "filepath": "data/2404.05558.png", "tags": [], "_media_type": "image", "_rand": 0.9991777997723341, "arXiv_link": "https://arxiv.org/abs/2404.05558", "other_link": "https://github.com/WooKyoungHan/JDEC.", "title": "JDEC: JPEG Decoding via Enhanced Continuous Cosine Coefficients", "abstract": "We propose a practical approach to JPEG image decoding, utilizing a local\nimplicit neural representation with continuous cosine formulation. The JPEG\nalgorithm significantly quantizes discrete cosine transform (DCT) spectra to\nachieve a high compression rate, inevitably resulting in quality degradation\nwhile encoding an image. We have designed a continuous cosine spectrum\nestimator to address the quality degradation issue that restores the distorted\nspectrum. By leveraging local DCT formulations, our network has the privilege\nto exploit dequantization and upsampling simultaneously. Our proposed model\nenables decoding compressed images directly across different quality factors\nusing a single pre-trained model without relying on a conventional JPEG\ndecoder. 
As a result, our proposed network achieves state-of-the-art\nperformance in flexible color image JPEG artifact removal tasks. Our source\ncode is available at https://github.com/WooKyoungHan/JDEC.", "keywords": ["Low-level vision", "Image and video generation and manipulation"], "authors_list": ["Woo Kyoung Han", "Sunghoon Im", "Jaedeok Kim", "Kyong Hwan Jin"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efe3"}, "filepath": "data/2312.12470.png", "tags": [], "_media_type": "image", "_rand": 0.9998917994521695, "arXiv_link": "https://arxiv.org/abs/2312.12470", "other_link": "https://github.com/Lsan2401/RMSIN.", "title": "Rotated Multi-Scale Interaction Network for Referring Remote Sensing Image Segmentation", "abstract": "Referring Remote Sensing Image Segmentation (RRSIS) is a new challenge that\ncombines computer vision and natural language processing, delineating specific\nregions in aerial images as described by textual queries. Traditional Referring\nImage Segmentation (RIS) approaches have been impeded by the complex spatial\nscales and orientations found in aerial imagery, leading to suboptimal\nsegmentation results. To address these challenges, we introduce the Rotated\nMulti-Scale Interaction Network (RMSIN), an innovative approach designed for\nthe unique demands of RRSIS. RMSIN incorporates an Intra-scale Interaction\nModule (IIM) to effectively address the fine-grained detail required at\nmultiple scales and a Cross-scale Interaction Module (CIM) for integrating\nthese details coherently across the network. Furthermore, RMSIN employs an\nAdaptive Rotated Convolution (ARC) to account for the diverse orientations of\nobjects, a novel contribution that significantly enhances segmentation\naccuracy. To assess the efficacy of RMSIN, we have curated an expansive dataset\ncomprising 17,402 image-caption-mask triplets, which is unparalleled in terms\nof scale and variety. This dataset not only presents the model with a wide\nrange of spatial and rotational scenarios but also establishes a stringent\nbenchmark for the RRSIS task, ensuring a rigorous evaluation of performance.\nOur experimental evaluations demonstrate the exceptional performance of RMSIN,\nsurpassing existing state-of-the-art models by a significant margin. All\ndatasets and code are made available at https://github.com/Lsan2401/RMSIN.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Sihan liu", "Yiwei Ma", "Xiaoqing Zhang", "Haowei Wang", "Jiayi Ji", "Xiaoshuai Sun", "Rongrong Ji"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efe4"}, "filepath": "data/2311.01357.png", "tags": [], "_media_type": "image", "_rand": 0.9993578052113583, "arXiv_link": "https://arxiv.org/abs/2311.01357", "other_link": "", "title": "IDGuard: Robust, General, Identity-centric POI Proactive Defense Against Face Editing Abuse", "abstract": "Notwithstanding offering convenience and entertainment to society, Deepfake\nface swapping has caused critical privacy issues with the rapid development of\ndeep generative models. 
Due to imperceptible artifacts in high-quality\nsynthetic images, passive detection models against face swapping in recent\nyears usually suffer performance damping regarding the generalizability issue.\nTherefore, several studies have been attempted to proactively protect the\noriginal images against malicious manipulations by inserting invisible signals\nin advance. However, the existing proactive defense approaches demonstrate\nunsatisfactory results with respect to visual quality, detection accuracy, and\nsource tracing ability. In this study, to fulfill the research gap, we propose\nthe first robust identity perceptual watermarking framework that concurrently\nperforms detection and source tracing against Deepfake face swapping\nproactively. We assign identity semantics regarding the image contents to the\nwatermarks and devise an unpredictable and nonreversible chaotic encryption\nsystem to ensure watermark confidentiality. The watermarks are encoded and\nrecovered by jointly training an encoder-decoder framework along with\nadversarial image manipulations. Falsification and source tracing are\naccomplished by justifying the consistency between the content-matched identity\nperceptual watermark and the recovered robust watermark from the image.\nExtensive experiments demonstrate state-of-the-art detection performance on\nDeepfake face swapping under both cross-dataset and cross-manipulation\nsettings.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Yunshu Dai", "Jianwei Fei", "Fangjun Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efe5"}, "filepath": "data/2403.15008.png", "tags": [], "_media_type": "image", "_rand": 0.9997180267087616, "arXiv_link": "https://arxiv.org/abs/2403.15008", "other_link": "https://yanzq95.github.io/projectpage/TOFDC/index.html", "title": "Tri-Perspective View Decomposition for Geometry-Aware Depth Completion", "abstract": "Depth completion is a vital task for autonomous driving, as it involves\nreconstructing the precise 3D geometry of a scene from sparse and noisy depth\nmeasurements. However, most existing methods either rely only on 2D depth\nrepresentations or directly incorporate raw 3D point clouds for compensation,\nwhich are still insufficient to capture the fine-grained 3D geometry of the\nscene. To address this challenge, we introduce Tri-Perspective view\nDecomposition (TPVD), a novel framework that can explicitly model 3D geometry.\nIn particular, (1) TPVD ingeniously decomposes the original point cloud into\nthree 2D views, one of which corresponds to the sparse depth input. (2) We\ndesign TPV Fusion to update the 2D TPV features through recurrent 2D-3D-2D\naggregation, where a Distance-Aware Spherical Convolution (DASC) is applied.\n(3) By adaptively choosing TPV affinitive neighbors, the newly proposed\nGeometric Spatial Propagation Network (GSPN) further improves the geometric\nconsistency. As a result, our TPVD outperforms existing methods on KITTI,\nNYUv2, and SUN RGBD. Furthermore, we build a novel depth completion dataset\nnamed TOFDC, which is acquired by the time-of-flight (TOF) sensor and the color\ncamera on smartphones. 
Project page:\nhttps://yanzq95.github.io/projectpage/TOFDC/index.html", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Zhiqiang Yan", "Yuankai Lin", "Kun Wang", "Yupeng Zheng", "Yufei Wang", "Zhenyu Zhang", "Jun Li", "Jian Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efe6"}, "filepath": "data/2403.03431.png", "tags": [], "_media_type": "image", "_rand": 0.9994954743401253, "arXiv_link": "https://arxiv.org/abs/2403.03431", "other_link": "", "title": "Towards Understanding Cross and Self-Attention in Stable Diffusion for Text-Guided Image Editing", "abstract": "Deep Text-to-Image Synthesis (TIS) models such as Stable Diffusion have\nrecently gained significant popularity for creative Text-to-image generation.\nYet, for domain-specific scenarios, tuning-free Text-guided Image Editing (TIE)\nis of greater importance for application developers, which modify objects or\nobject properties in images by manipulating feature components in attention\nlayers during the generation process. However, little is known about what\nsemantic meanings these attention layers have learned and which parts of the\nattention maps contribute to the success of image editing. In this paper, we\nconduct an in-depth probing analysis and demonstrate that cross-attention maps\nin Stable Diffusion often contain object attribution information that can\nresult in editing failures. In contrast, self-attention maps play a crucial\nrole in preserving the geometric and shape details of the source image during\nthe transformation to the target image. Our analysis offers valuable insights\ninto understanding cross and self-attention maps in diffusion models. Moreover,\nbased on our findings, we simplify popular image editing methods and propose a\nmore straightforward yet more stable and efficient tuning-free procedure that\nonly modifies self-attention maps of the specified attention layers during the\ndenoising process. Experimental results show that our simplified method\nconsistently surpasses the performance of popular approaches on multiple\ndatasets.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Bingyan Liu", "Chengyu Wang", "Tingfeng Cao", "Kui Jia", "Jun Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efe7"}, "filepath": "data/2401.08570.png", "tags": [], "_media_type": "image", "_rand": 0.9996572075310666, "arXiv_link": "https://arxiv.org/abs/2401.08570", "other_link": "https://sanweiliti.github.io/ROHM/ROHM.html.", "title": "RoHM: Robust Human Motion Reconstruction via Diffusion", "abstract": "We propose RoHM, an approach for robust 3D human motion reconstruction from\nmonocular RGB(-D) videos in the presence of noise and occlusions. Most previous\napproaches either train neural networks to directly regress motion in 3D or\nlearn data-driven motion priors and combine them with optimization at test\ntime. The former do not recover globally coherent motion and fail under\nocclusions; the latter are time-consuming, prone to local minima, and require\nmanual tuning. To overcome these shortcomings, we exploit the iterative,\ndenoising nature of diffusion models. 
RoHM is a novel diffusion-based motion\nmodel that, conditioned on noisy and occluded input data, reconstructs\ncomplete, plausible motions in consistent global coordinates. Given the\ncomplexity of the problem -- requiring one to address different tasks\n(denoising and infilling) in different solution spaces (local and global\nmotion) -- we decompose it into two sub-tasks and learn two models, one for\nglobal trajectory and one for local motion. To capture the correlations between\nthe two, we then introduce a novel conditioning module, combining it with an\niterative inference scheme. We apply RoHM to a variety of tasks -- from motion\nreconstruction and denoising to spatial and temporal infilling. Extensive\nexperiments on three popular datasets show that our method outperforms\nstate-of-the-art approaches qualitatively and quantitatively, while being\nfaster at test time. The code is available at\nhttps://sanweiliti.github.io/ROHM/ROHM.html.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Siwei Zhang", "Bharat Lal Bhatnagar", "Yuanlu Xu", "Alexander Winkler", "Petr Kadlecek", "Siyu Tang", "Federica Bogo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efe8"}, "filepath": "data/2403.00436.png", "tags": [], "_media_type": "image", "_rand": 0.9997617270778011, "arXiv_link": "https://arxiv.org/abs/2403.00436", "other_link": "", "title": "Abductive Ego-View Accident Video Understanding for Safe Driving Perception", "abstract": "We present MM-AU, a novel dataset for Multi-Modal Accident video\nUnderstanding. MM-AU contains 11,727 in-the-wild ego-view accident videos, each\nwith temporally aligned text descriptions. We annotate over 2.23 million object\nboxes and 58,650 pairs of video-based accident reasons, covering 58 accident\ncategories. MM-AU supports various accident understanding tasks, particularly\nmultimodal video diffusion to understand accident cause-effect chains for safe\ndriving. With MM-AU, we present an Abductive accident Video understanding\nframework for Safe Driving perception (AdVersa-SD). AdVersa-SD performs video\ndiffusion via an Object-Centric Video Diffusion (OAVD) method which is driven\nby an abductive CLIP model. This model involves a contrastive interaction loss\nto learn the pair co-occurrence of normal, near-accident, accident frames with\nthe corresponding text descriptions, such as accident reasons, prevention\nadvice, and accident categories. OAVD enforces the causal region learning while\nfixing the content of the original frame background in video generation, to\nfind the dominant cause-effect chain for certain accidents. Extensive\nexperiments verify the abductive ability of AdVersa-SD and the superiority of\nOAVD against the state-of-the-art diffusion models. 
Additionally, we provide\ncareful benchmark evaluations for object detection and accident reason\nanswering since AdVersa-SD relies on precise object and accident reason\ninformation.", "keywords": ["Scene analysis and understanding", "Image and video generation and manipulation"], "authors_list": ["Jianwu Fang", "Lei-lei Li", "Junfei Zhou", "Junbin Xiao", "Hongkai Yu", "Chen Lv", "Jianru Xue", "Tat-seng Chua"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efe9"}, "filepath": "data/2401.10226.png", "tags": [], "_media_type": "image", "_rand": 0.9997518584832631, "arXiv_link": "https://arxiv.org/abs/2401.10226", "other_link": "", "title": "Towards Language-Driven Video Inpainting via Multimodal Large Language Models", "abstract": "We introduce a new task -- language-driven video inpainting, which uses\nnatural language instructions to guide the inpainting process. This approach\novercomes the limitations of traditional video inpainting methods that depend\non manually labeled binary masks, a process often tedious and labor-intensive.\nWe present the Remove Objects from Videos by Instructions (ROVI) dataset,\ncontaining 5,650 videos and 9,091 inpainting results, to support training and\nevaluation for this task. We also propose a novel diffusion-based\nlanguage-driven video inpainting framework, the first end-to-end baseline for\nthis task, integrating Multimodal Large Language Models to understand and\nexecute complex language-based inpainting requests effectively. Our\ncomprehensive results showcase the dataset's versatility and the model's\neffectiveness in various language-instructed inpainting scenarios. We will make\ndatasets, code, and models publicly available.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Jianzong Wu", "Xiangtai Li", "Chenyang Si", "Shangchen Zhou", "Jingkang Yang", "Jiangning Zhang", "Yining Li", "Kai Chen", "Yunhai Tong", "Ziwei Liu", "Chen Change Loy"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efea"}, "filepath": "data/2403.02138.png", "tags": [], "_media_type": "image", "_rand": 0.9997879695519839, "arXiv_link": "https://arxiv.org/abs/2403.02138", "other_link": "", "title": "Self-Supervised Facial Representation Learning with Facial Region Awareness", "abstract": "Self-supervised pre-training has been proved to be effective in learning\ntransferable representations that benefit various visual tasks. This paper asks\nthis question: can self-supervised pre-training learn general facial\nrepresentations for various facial analysis tasks? Recent efforts toward this\ngoal are limited to treating each face image as a whole, i.e., learning\nconsistent facial representations at the image-level, which overlooks the\nconsistency of local facial representations (i.e., facial regions like eyes,\nnose, etc). 
In this work, we make a first attempt to propose a novel\nself-supervised facial representation learning framework to learn consistent\nglobal and local facial representations, Facial Region Awareness (FRA).\nSpecifically, we explicitly enforce the consistency of facial regions by\nmatching the local facial representations across views, which are extracted\nwith learned heatmaps highlighting the facial regions. Inspired by the mask\nprediction in supervised semantic segmentation, we obtain the heatmaps via\ncosine similarity between the per-pixel projection of feature maps and facial\nmask embeddings computed from learnable positional embeddings, which leverage\nthe attention mechanism to globally look up the facial image for facial\nregions. To learn such heatmaps, we formulate the learning of facial mask\nembeddings as a deep clustering problem by assigning the pixel features from\nthe feature maps to them. The transfer learning results on facial\nclassification and regression tasks show that our FRA outperforms previous\npre-trained models and more importantly, using ResNet as the unified backbone\nfor various tasks, our FRA achieves comparable or even better performance\ncompared with SOTA methods in facial analysis tasks.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Zheng Gao", "Ioannis Patras"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efeb"}, "filepath": "data/2311.17919.png", "tags": [], "_media_type": "image", "_rand": 0.9991854043483535, "arXiv_link": "https://arxiv.org/abs/2311.17919", "other_link": "https://dangeng.github.io/visual_anagrams/", "title": "Visual Anagrams: Synthesizing Multi-View Optical Illusions with Diffusion Models", "abstract": "We address the problem of synthesizing multi-view optical illusions: images\nthat change appearance upon a transformation, such as a flip or rotation. We\npropose a simple, zero-shot method for obtaining these illusions from\noff-the-shelf text-to-image diffusion models. During the reverse diffusion\nprocess, we estimate the noise from different views of a noisy image, and then\ncombine these noise estimates together and denoise the image. A theoretical\nanalysis suggests that this method works precisely for views that can be\nwritten as orthogonal transformations, of which permutations are a subset. This\nleads to the idea of a visual anagram--an image that changes appearance under\nsome rearrangement of pixels. This includes rotations and flips, but also more\nexotic pixel permutations such as a jigsaw rearrangement. Our approach also\nnaturally extends to illusions with more than two views. We provide both\nqualitative and quantitative results demonstrating the effectiveness and\nflexibility of our method. 
Please see our project webpage for additional\nvisualizations and results: https://dangeng.github.io/visual_anagrams/", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Daniel Geng", "Inbum Park", "Andrew Owens"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efec"}, "filepath": "data/2403.17373.png", "tags": [], "_media_type": "image", "_rand": 0.999603136656699, "arXiv_link": "https://arxiv.org/abs/2403.17373", "other_link": "", "title": "AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving", "abstract": "Autonomous vehicle (AV) systems rely on robust perception models as a\ncornerstone of safety assurance. However, objects encountered on the road\nexhibit a long-tailed distribution, with rare or unseen categories posing\nchallenges to a deployed perception model. This necessitates an expensive\nprocess of continuously curating and annotating data with significant human\neffort. We propose to leverage recent advances in vision-language and large\nlanguage models to design an Automatic Data Engine (AIDE) that automatically\nidentifies issues, efficiently curates data, improves the model through\nauto-labeling, and verifies the model through generation of diverse scenarios.\nThis process operates iteratively, allowing for continuous self-improvement of\nthe model. We further establish a benchmark for open-world detection on AV\ndatasets to comprehensively evaluate various learning paradigms, demonstrating\nour method's superior performance at a reduced cost.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Mingfu Liang", "Jong-Chyi Su", "Samuel Schulter", "Sparsh Garg", "Shiyu Zhao", "Ying Wu", "Manmohan Chandraker"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efed"}, "filepath": "data/2309.10649.png", "tags": [], "_media_type": "image", "_rand": 0.9995195414703539, "arXiv_link": "https://arxiv.org/abs/2309.10649", "other_link": "", "title": "Hierarchical Intra-modal Correlation Learning for Label-free 3D Semantic Segmentation", "abstract": "Current state-of-the-art point cloud-based perception methods usually rely on\nlarge-scale labeled data, which requires expensive manual annotations. A\nnatural option is to explore the unsupervised methodology for 3D perception\ntasks. However, such methods often face substantial performance-drop\ndifficulties. Fortunately, we found that there exist amounts of image-based\ndatasets and an alternative can be proposed, i.e., transferring the knowledge\nin the 2D images to 3D point clouds. Specifically, we propose a novel approach\nfor the challenging cross-modal and cross-domain adaptation task by fully\nexploring the relationship between images and point clouds and designing\neffective feature alignment strategies. 
Without any 3D labels, our method\nachieves state-of-the-art performance for 3D point cloud semantic segmentation\non SemanticKITTI by using the knowledge of KITTI360 and GTA5, compared to\nexisting unsupervised and weakly-supervised baselines.", "keywords": [], "authors_list": ["Xin Kang", "Lei Chu", "Jiahao Li", "Xuejin Chen", "Yan Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efee"}, "filepath": "data/2402.18853.png", "tags": [], "_media_type": "image", "_rand": 0.9993868899860331, "arXiv_link": "https://arxiv.org/abs/2402.18853", "other_link": "", "title": "Rethinking Multi-domain Generalization with A General Learning Objective", "abstract": "Multi-domain generalization (mDG) is universally aimed to minimize the\ndiscrepancy between training and testing distributions to enhance\nmarginal-to-label distribution mapping. However, existing mDG literature lacks\na general learning objective paradigm and often imposes constraints on static\ntarget marginal distributions. In this paper, we propose to leverage a\n$Y$-mapping to relax the constraint. We rethink the learning objective for mDG\nand design a new \\textbf{general learning objective} to interpret and analyze\nmost existing mDG wisdom. This general objective is bifurcated into two\nsynergistic aims: learning domain-independent conditional features and\nmaximizing a posterior. Explorations also extend to two effective\nregularization terms that incorporate prior information and suppress invalid\ncausality, alleviating the issues that come with relaxed constraints. We\ntheoretically contribute an upper bound for the domain alignment of\ndomain-independent conditional features, disclosing that many previous mDG\nendeavors actually \\textbf{optimize partially the objective} and thus lead to\nlimited performance. As such, our study distills a general learning objective\ninto four practical components, providing a general, robust, and flexible\nmechanism to handle complex domain shifts. Extensive empirical results indicate\nthat the proposed objective with $Y$-mapping leads to substantially better mDG\nperformance in various downstream tasks, including regression, segmentation,\nand classification.", "keywords": [], "authors_list": ["Zhaorui Tan", "Xi Yang", "Kaizhu Huang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efef"}, "filepath": "data/2402.19082.png", "tags": [], "_media_type": "image", "_rand": 0.9998557046349943, "arXiv_link": "https://arxiv.org/abs/2402.19082", "other_link": "", "title": "VideoMAC: Video Masked Autoencoders Meet ConvNets", "abstract": "Recently, the advancement of self-supervised learning techniques, like masked\nautoencoders (MAE), has greatly influenced visual representation learning for\nimages and videos. Nevertheless, it is worth noting that the predominant\napproaches in existing masked image / video modeling rely excessively on\nresource-intensive vision transformers (ViTs) as the feature encoder. In this\npaper, we propose a new approach termed as \\textbf{VideoMAC}, which combines\nvideo masked autoencoders with resource-friendly ConvNets. 
Specifically,\nVideoMAC employs symmetric masking on randomly sampled pairs of video frames.\nTo prevent the issue of mask pattern dissipation, we utilize ConvNets which are\nimplemented with sparse convolutional operators as encoders. Simultaneously, we\npresent a simple yet effective masked video modeling (MVM) approach, a dual\nencoder architecture comprising an online encoder and an exponential moving\naverage target encoder, aimed to facilitate inter-frame reconstruction\nconsistency in videos. Additionally, we demonstrate that VideoMAC, empowering\nclassical (ResNet) / modern (ConvNeXt) convolutional encoders to harness the\nbenefits of MVM, outperforms ViT-based approaches on downstream tasks,\nincluding video object segmentation (+\\textbf{5.2\\%} / \\textbf{6.4\\%}\n$\\mathcal{J}\\&\\mathcal{F}$), body part propagation (+\\textbf{6.3\\%} /\n\\textbf{3.1\\%} mIoU), and human pose tracking (+\\textbf{10.2\\%} /\n\\textbf{11.1\\%} PCK@0.1).", "keywords": ["Efficient and scalable vision"], "authors_list": ["Gensheng Pei", "Tao Chen", "Xiruo Jiang", "\u5218\u534e\u5cf0 Liu", "Zeren Sun", "Yazhou Yao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727eff0"}, "filepath": "data/2312.08883.png", "tags": [], "_media_type": "image", "_rand": 0.999792706524844, "arXiv_link": "https://arxiv.org/abs/2312.08883", "other_link": "https://xuanyuzhang21.github.io/project/editguard/.", "title": "EditGuard: Versatile Image Watermarking for Tamper Localization and Copyright Protection", "abstract": "In the era where AI-generated content (AIGC) models can produce stunning and\nlifelike images, the lingering shadow of unauthorized reproductions and\nmalicious tampering poses imminent threats to copyright integrity and\ninformation security. Current image watermarking methods, while widely accepted\nfor safeguarding visual content, can only protect copyright and ensure\ntraceability. They fall short in localizing increasingly realistic image\ntampering, potentially leading to trust crises, privacy violations, and legal\ndisputes. To solve this challenge, we propose an innovative proactive forensics\nframework EditGuard, to unify copyright protection and tamper-agnostic\nlocalization, especially for AIGC-based editing methods. It can offer a\nmeticulous embedding of imperceptible watermarks and precise decoding of\ntampered areas and copyright information. Leveraging our observed fragility and\nlocality of image-into-image steganography, the realization of EditGuard can be\nconverted into a united image-bit steganography issue, thus completely\ndecoupling the training process from the tampering types. Extensive experiments\ndemonstrate that our EditGuard balances the tamper localization accuracy,\ncopyright recovery precision, and generalizability to various AIGC-based\ntampering methods, especially for image forgery that is difficult for the naked\neye to detect. 
The project page is available at\nhttps://xuanyuzhang21.github.io/project/editguard/.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xuanyu Zhang", "Runyi Li", "Jiwen Yu", "Youmin Xu", "Weiqi Li", "Jian Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727eff1"}, "filepath": "data/2401.09740.png", "tags": [], "_media_type": "image", "_rand": 0.9998351258507581, "arXiv_link": "https://arxiv.org/abs/2401.09740", "other_link": "", "title": "Re-thinking Data Availability Attacks Against Deep Neural Networks", "abstract": "Backdoors and adversarial examples are the two primary threats currently\nfaced by deep neural networks (DNNs). Both attacks attempt to hijack the model\nbehaviors with unintended outputs by introducing (small) perturbations to the\ninputs. Backdoor attacks, despite the high success rates, often require a\nstrong assumption, which is not always easy to achieve in reality. Adversarial\nexample attacks, which put relatively weaker assumptions on attackers, often\ndemand high computational resources, yet do not always yield satisfactory\nsuccess rates when attacking mainstream black-box models in the real world.\nThese limitations motivate the following research question: can model hijacking\nbe achieved more simply, with a higher attack success rate and more reasonable\nassumptions? In this paper, we propose CleanSheet, a new model hijacking attack\nthat obtains the high performance of backdoor attacks without requiring the\nadversary to tamper with the model training process. CleanSheet exploits\nvulnerabilities in DNNs stemming from the training data. Specifically, our key\nidea is to treat part of the clean training data of the target model as\n\"poisoned data,\" and capture the characteristics of these data that are more\nsensitive to the model (typically called robust features) to construct\n\"triggers.\" These triggers can be added to any input example to mislead the\ntarget model, similar to backdoor attacks. We validate the effectiveness of\nCleanSheet through extensive experiments on 5 datasets, 79 normally trained\nmodels, 68 pruned models, and 39 defensive models. Results show that CleanSheet\nexhibits performance comparable to state-of-the-art backdoor attacks, achieving\nan average attack success rate (ASR) of 97.5% on CIFAR-100 and 92.4% on GTSRB,\nrespectively. Furthermore, CleanSheet consistently maintains a high ASR, when\nconfronted with various mainstream backdoor defenses.", "keywords": [], "authors_list": ["Bin Fang", "Bo Li", "Shuang Wu", "Shouhong Ding", "Ran Yi", "Lizhuang Ma"], "category_name": "Cryptography and Security", "all_categories": ["Cryptography and Security"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727eff2"}, "filepath": "data/2405.12200.png", "tags": [], "_media_type": "image", "_rand": 0.9990789040153061, "arXiv_link": "https://arxiv.org/abs/2405.12200", "other_link": "", "title": "Multi-View Attentive Contextualization for Multi-View 3D Object Detection", "abstract": "We present Multi-View Attentive Contextualization (MvACon), a simple yet\neffective method for improving 2D-to-3D feature lifting in query-based\nmulti-view 3D (MV3D) object detection. 
Despite remarkable progress witnessed in\nthe field of query-based MV3D object detection, prior art often suffers from\neither the lack of exploiting high-resolution 2D features in dense\nattention-based lifting, due to high computational costs, or from\ninsufficiently dense grounding of 3D queries to multi-scale 2D features in\nsparse attention-based lifting. Our proposed MvACon hits the two birds with one\nstone using a representationally dense yet computationally sparse attentive\nfeature contextualization scheme that is agnostic to specific 2D-to-3D feature\nlifting approaches. In experiments, the proposed MvACon is thoroughly tested on\nthe nuScenes benchmark, using both the BEVFormer and its recent 3D deformable\nattention (DFA3D) variant, as well as the PETR, showing consistent detection\nperformance improvement, especially in enhancing performance in location,\norientation, and velocity prediction. It is also tested on the Waymo-mini\nbenchmark using BEVFormer with similar improvement. We qualitatively and\nquantitatively show that global cluster-based contexts effectively encode dense\nscene-level contexts for MV3D object detection. The promising results of our\nproposed MvACon reinforce the adage in computer vision -- ``(contextualized)\nfeature matters\".", "keywords": ["Scene analysis and understanding", "Efficient and scalable vision"], "authors_list": ["Xianpeng Liu", "Ce Zheng", "Ming Qian", "Nan Xue", "Chen Chen", "Zhebin Zhang", "Chen Li", "Tianfu Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727eff3"}, "filepath": "data/2403.05897.png", "tags": [], "_media_type": "image", "_rand": 0.9994841025281264, "arXiv_link": "https://arxiv.org/abs/2403.05897", "other_link": "https://github.com/cnulab/RealNet.", "title": "RealNet: A Feature Selection Network with Realistic Synthetic Anomaly for Anomaly Detection", "abstract": "Self-supervised feature reconstruction methods have shown promising advances\nin industrial image anomaly detection and localization. Despite this progress,\nthese methods still face challenges in synthesizing realistic and diverse\nanomaly samples, as well as addressing the feature redundancy and pre-training\nbias of pre-trained feature. In this work, we introduce RealNet, a feature\nreconstruction network with realistic synthetic anomaly and adaptive feature\nselection. It is incorporated with three key innovations: First, we propose\nStrength-controllable Diffusion Anomaly Synthesis (SDAS), a diffusion\nprocess-based synthesis strategy capable of generating samples with varying\nanomaly strengths that mimic the distribution of real anomalous samples.\nSecond, we develop Anomaly-aware Features Selection (AFS), a method for\nselecting representative and discriminative pre-trained feature subsets to\nimprove anomaly detection performance while controlling computational costs.\nThird, we introduce Reconstruction Residuals Selection (RRS), a strategy that\nadaptively selects discriminative residuals for comprehensive identification of\nanomalous regions across multiple levels of granularity. We assess RealNet on\nfour benchmark datasets, and our results demonstrate significant improvements\nin both Image AUROC and Pixel AUROC compared to the current state-of-the-art\nmethods. 
The code, data, and models are available at\nhttps://github.com/cnulab/RealNet.", "keywords": [], "authors_list": ["Ximiao Zhang", "Min Xu", "Xiuzhuang Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727eff4"}, "filepath": "data/2402.14795.png", "tags": [], "_media_type": "image", "_rand": 0.9993576565651126, "arXiv_link": "https://arxiv.org/abs/2402.14795", "other_link": "https://cyber-demo.github.io", "title": "CyberDemo: Augmenting Simulated Human Demonstration for Real-World Dexterous Manipulation", "abstract": "We introduce CyberDemo, a novel approach to robotic imitation learning that\nleverages simulated human demonstrations for real-world tasks. By incorporating\nextensive data augmentation in a simulated environment, CyberDemo outperforms\ntraditional in-domain real-world demonstrations when transferred to the real\nworld, handling diverse physical and visual conditions. Regardless of its\naffordability and convenience in data collection, CyberDemo outperforms\nbaseline methods in terms of success rates across various tasks and exhibits\ngeneralizability with previously unseen objects. For example, it can rotate\nnovel tetra-valve and penta-valve, despite human demonstrations only involving\ntri-valves. Our research demonstrates the significant potential of simulated\nhuman demonstrations for real-world dexterous manipulation tasks. More details\ncan be found at https://cyber-demo.github.io", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Jun Wang", "Yuzhe Qin", "Kaiming Kuang", "Yigit Korkmaz", "Akhilan Gurumoorthy", "Hao Su", "Xiaolong Wang"], "category_name": "Robotics", "all_categories": ["Robotics", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727eff5"}, "filepath": "data/2403.17422.png", "tags": [], "_media_type": "image", "_rand": 0.9993529111836995, "arXiv_link": "https://arxiv.org/abs/2403.17422", "other_link": "", "title": "InterHandGen: Two-Hand Interaction Generation via Cascaded Reverse Diffusion", "abstract": "We present InterHandGen, a novel framework that learns the generative prior\nof two-hand interaction. Sampling from our model yields plausible and diverse\ntwo-hand shapes in close interaction with or without an object. Our prior can\nbe incorporated into any optimization or learning methods to reduce ambiguity\nin an ill-posed setup. Our key observation is that directly modeling the joint\ndistribution of multiple instances imposes high learning complexity due to its\ncombinatorial nature. Thus, we propose to decompose the modeling of joint\ndistribution into the modeling of factored unconditional and conditional single\ninstance distribution. In particular, we introduce a diffusion model that\nlearns the single-hand distribution unconditional and conditional to another\nhand via conditioning dropout. For sampling, we combine anti-penetration and\nclassifier-free guidance to enable plausible generation. Furthermore, we\nestablish the rigorous evaluation protocol of two-hand synthesis, where our\nmethod significantly outperforms baseline generative models in terms of\nplausibility and diversity. 
We also demonstrate that our diffusion prior can\nboost the performance of two-hand reconstruction from monocular in-the-wild\nimages, achieving new state-of-the-art accuracy.", "keywords": ["Biometrics and human analysis", "Image and video generation and manipulation"], "authors_list": ["Jihyun Lee", "Shunsuke Saito", "Giljoo Nam", "Minhyuk Sung", "Tae-Kyun Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727eff6"}, "filepath": "data/2404.01342.png", "tags": [], "_media_type": "image", "_rand": 0.9998814706118407, "arXiv_link": "https://arxiv.org/abs/2404.01342", "other_link": "https://github.com/OpenGVLab/DiffAgent.", "title": "DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model", "abstract": "Text-to-image (T2I) generative models have attracted significant attention\nand found extensive applications within and beyond academic research. For\nexample, the Civitai community, a platform for T2I innovation, currently hosts\nan impressive array of 74,492 distinct models. However, this diversity presents\na formidable challenge in selecting the most appropriate model and parameters,\na process that typically requires numerous trials. Drawing inspiration from the\ntool usage research of large language models (LLMs), we introduce DiffAgent, an\nLLM agent designed to screen the accurate selection in seconds via API calls.\nDiffAgent leverages a novel two-stage training framework, SFTA, enabling it to\naccurately align T2I API responses with user input in accordance with human\npreferences. To train and evaluate DiffAgent's capabilities, we present\nDABench, a comprehensive dataset encompassing an extensive range of T2I APIs\nfrom the community. Our evaluations reveal that DiffAgent not only excels in\nidentifying the appropriate T2I API but also underscores the effectiveness of\nthe SFTA training framework. Codes are available at\nhttps://github.com/OpenGVLab/DiffAgent.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Lirui Zhao", "Yue Yang", "Kaipeng Zhang", "Wenqi Shao", "Yuxin Zhang", "Yu Qiao", "Ping Luo", "Rongrong Ji"], "category_name": "Computation and Language", "all_categories": ["Computation and Language", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727eff7"}, "filepath": "data/2306.11290.png", "tags": [], "_media_type": "image", "_rand": 0.999505647130056, "arXiv_link": "https://arxiv.org/abs/2306.11290", "other_link": "", "title": "Habitat Synthetic Scenes Dataset (HSSD-200): An Analysis of 3D Scene Scale and Realism Tradeoffs for ObjectGoal Navigation", "abstract": "We contribute the Habitat Synthetic Scene Dataset, a dataset of 211\nhigh-quality 3D scenes, and use it to test navigation agent generalization to\nrealistic 3D environments. Our dataset represents real interiors and contains a\ndiverse set of 18,656 models of real-world objects. We investigate the impact\nof synthetic 3D scene dataset scale and realism on the task of training\nembodied agents to find and navigate to objects (ObjectGoal navigation). By\ncomparing to synthetic 3D scene datasets from prior work, we find that scale\nhelps in generalization, but the benefits quickly saturate, making visual\nfidelity and correlation to real-world scenes more important. 
Our experiments\nshow that agents trained on our smaller-scale dataset can match or outperform\nagents trained on much larger datasets. Surprisingly, we observe that agents\ntrained on just 122 scenes from our dataset outperform agents trained on 10,000\nscenes from the ProcTHOR-10K dataset in terms of zero-shot generalization in\nreal-world scanned environments.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Mukul Khanna", "Yongsen Mao", "Hanxiao Jiang", "Sanjay Haresh", "Brennan Shacklett", "Dhruv Batra", "Alexander William Clegg", "Eric Undersander", "Angel Xuan Chang", "Manolis Savva"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727eff8"}, "filepath": "data/2312.15406.png", "tags": [], "_media_type": "image", "_rand": 0.9996930273545122, "arXiv_link": "https://arxiv.org/abs/2312.15406", "other_link": "", "title": "Objects as volumes: A stochastic geometry view of opaque solids", "abstract": "We develop a theory for the representation of opaque solids as volumes.\nStarting from a stochastic representation of opaque solids as random indicator\nfunctions, we prove the conditions under which such solids can be modeled using\nexponential volumetric transport. We also derive expressions for the volumetric\nattenuation coefficient as a functional of the probability distributions of the\nunderlying indicator functions. We generalize our theory to account for\nisotropic and anisotropic scattering at different parts of the solid, and for\nrepresentations of opaque solids as stochastic implicit surfaces. We derive our\nvolumetric representation from first principles, which ensures that it\nsatisfies physical constraints such as reciprocity and reversibility. We use\nour theory to explain, compare, and correct previous volumetric\nrepresentations, as well as propose meaningful extensions that lead to improved\nperformance in 3D reconstruction tasks.", "keywords": [], "authors_list": ["Bailey Miller", "Hanyu Chen", "Alice Lai", "Ioannis Gkioulekas"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727eff9"}, "filepath": "data/2403.03715.png", "tags": [], "_media_type": "image", "_rand": 0.9996602623892625, "arXiv_link": "https://arxiv.org/abs/2403.03715", "other_link": "https://github.com/joeyz0z/MeaCap.", "title": "MeaCap: Memory-Augmented Zero-shot Image Captioning", "abstract": "Zero-shot image captioning (IC) without well-paired image-text data can be\ndivided into two categories, training-free and text-only-training. Generally,\nthese two types of methods realize zero-shot IC by integrating pretrained\nvision-language models like CLIP for image-text similarity evaluation and a\npre-trained language model (LM) for caption generation. The main difference\nbetween them is whether using a textual corpus to train the LM. Though\nachieving attractive performance w.r.t. some metrics, existing methods often\nexhibit some common drawbacks. Training-free methods tend to produce\nhallucinations, while text-only-training often lose generalization capability.\nTo move forward, in this paper, we propose a novel Memory-Augmented zero-shot\nimage Captioning framework (MeaCap). 
Specifically, equipped with a textual\nmemory, we introduce a retrieve-then-filter module to get key concepts that are\nhighly related to the image. By deploying our proposed memory-augmented\nvisual-related fusion score in a keywords-to-sentence LM, MeaCap can generate\nconcept-centered captions that keep high consistency with the image with fewer\nhallucinations and more world-knowledge. The framework of MeaCap achieves the\nstate-of-the-art performance on a series of zero-shot IC settings. Our code is\navailable at https://github.com/joeyz0z/MeaCap.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Zequn Zeng", "Yan Xie", "Hao Zhang", "Chiyu Chen", "Zhengjue Wang", "Bo Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727effa"}, "filepath": "data/2402.19144.png", "tags": [], "_media_type": "image", "_rand": 0.9999419707884637, "arXiv_link": "https://arxiv.org/abs/2402.19144", "other_link": "", "title": "Weakly Supervised Monocular 3D Detection with a Single-View Image", "abstract": "Monocular 3D detection (M3D) aims for precise 3D object localization from a\nsingle-view image which usually involves labor-intensive annotation of 3D\ndetection boxes. Weakly supervised M3D has recently been studied to obviate the\n3D annotation process by leveraging many existing 2D annotations, but it often\nrequires extra training data such as LiDAR point clouds or multi-view images\nwhich greatly degrades its applicability and usability in various applications.\nWe propose SKD-WM3D, a weakly supervised monocular 3D detection framework that\nexploits depth information to achieve M3D with a single-view image exclusively\nwithout any 3D annotations or other training data. One key design in SKD-WM3D\nis a self-knowledge distillation framework, which transforms image features\ninto 3D-like representations by fusing depth information and effectively\nmitigates the inherent depth ambiguity in monocular scenarios with little\ncomputational overhead in inference. In addition, we design an\nuncertainty-aware distillation loss and a gradient-targeted transfer modulation\nstrategy which facilitate knowledge acquisition and knowledge transfer,\nrespectively. Extensive experiments show that SKD-WM3D surpasses the\nstate-of-the-art clearly and is even on par with many fully supervised methods.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xueying Jiang", "Sheng Jin", "Lewei Lu", "Xiaoqin Zhang", "Shijian Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727effb"}, "filepath": "data/2403.07773.png", "tags": [], "_media_type": "image", "_rand": 0.9993585084584993, "arXiv_link": "https://arxiv.org/abs/2403.07773", "other_link": "https://github.com/zoomin-lee/SemCity.", "title": "SemCity: Semantic Scene Generation with Triplane Diffusion", "abstract": "We present \"SemCity,\" a 3D diffusion model for semantic scene generation in\nreal-world outdoor environments. Most 3D diffusion models focus on generating a\nsingle object, synthetic indoor scenes, or synthetic outdoor scenes, while the\ngeneration of real-world outdoor scenes is rarely addressed. 
In this paper, we\nconcentrate on generating a real-outdoor scene through learning a diffusion\nmodel on a real-world outdoor dataset. In contrast to synthetic data,\nreal-outdoor datasets often contain more empty spaces due to sensor\nlimitations, causing challenges in learning real-outdoor distributions. To\naddress this issue, we exploit a triplane representation as a proxy form of\nscene distributions to be learned by our diffusion model. Furthermore, we\npropose a triplane manipulation that integrates seamlessly with our triplane\ndiffusion model. The manipulation improves our diffusion model's applicability\nin a variety of downstream tasks related to outdoor scene generation such as\nscene inpainting, scene outpainting, and semantic scene completion refinements.\nIn experimental results, we demonstrate that our triplane diffusion model shows\nmeaningful generation results compared with existing work in a real-outdoor\ndataset, SemanticKITTI. We also show our triplane manipulation facilitates\nseamlessly adding, removing, or modifying objects within a scene. Further, it\nalso enables the expansion of scenes toward a city-level scale. Finally, we\nevaluate our method on semantic scene completion refinements where our\ndiffusion model enhances predictions of semantic scene completion networks by\nlearning scene distribution. Our code is available at\nhttps://github.com/zoomin-lee/SemCity.", "keywords": ["Scene analysis and understanding", "Image and video generation and manipulation"], "authors_list": ["Jumin Lee", "Sebin Lee", "Changho Jo", "Woobin Im", "Ju-hyeong Seon", "Sung-Eui Yoon"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727effc"}, "filepath": "data/2401.01042.png", "tags": [], "_media_type": "image", "_rand": 0.9991460254341483, "arXiv_link": "https://arxiv.org/abs/2401.01042", "other_link": "", "title": "SD2Event: Self-supervised Learning of Dynamic Detectors and Contextual Descriptors for Event Cameras", "abstract": "Event-based cameras provide accurate and high temporal resolution\nmeasurements for performing computer vision tasks in challenging scenarios,\nsuch as high-dynamic range environments and fast-motion maneuvers. Despite\ntheir advantages, utilizing deep learning for event-based vision encounters a\nsignificant obstacle due to the scarcity of annotated data caused by the\nrelatively recent emergence of event-based cameras. To overcome this\nlimitation, leveraging the knowledge available from annotated data obtained\nwith conventional frame-based cameras presents an effective solution based on\nunsupervised domain adaptation. We propose a new algorithm tailored for\nadapting a deep neural network trained on annotated frame-based data to\ngeneralize well on event-based unannotated data. Our approach incorporates\nuncorrelated conditioning and self-supervised learning in an adversarial\nlearning scheme to close the gap between the two source and target domains. 
By\napplying self-supervised learning, the algorithm learns to align the\nrepresentations of event-based data with those from frame-based camera data,\nthereby facilitating knowledge transfer. Furthermore, the inclusion of\nuncorrelated conditioning ensures that the adapted model effectively\ndistinguishes between event-based and conventional data, enhancing its ability\nto classify event-based images accurately. Through empirical experimentation and\nevaluation, we demonstrate that our algorithm surpasses existing approaches\ndesigned for the same purpose using two benchmarks. The superior performance of\nour solution is attributed to its ability to effectively utilize annotated data\nfrom frame-based cameras and transfer the acquired knowledge to the event-based\nvision domain.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yuan Gao", "Yuqing Zhu", "Xinjun Li", "Yimin Du", "Tianzhu Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727effd"}, "filepath": "data/2404.01518.png", "tags": [], "_media_type": "image", "_rand": 0.9996641864661497, "arXiv_link": "https://arxiv.org/abs/2404.01518", "other_link": "", "title": "Temporally Consistent Unbalanced Optimal Transport for Unsupervised Action Segmentation", "abstract": "We propose a novel approach to the action segmentation task for long,\nuntrimmed videos, based on solving an optimal transport problem. By encoding a\ntemporal consistency prior into a Gromov-Wasserstein problem, we are able to\ndecode a temporally consistent segmentation from a noisy affinity/matching cost\nmatrix between video frames and action classes. Unlike previous approaches, our\nmethod does not require knowing the action order for a video to attain temporal\nconsistency. Furthermore, our resulting (fused) Gromov-Wasserstein problem can\nbe efficiently solved on GPUs using a few iterations of projected mirror\ndescent. We demonstrate the effectiveness of our method in an unsupervised\nlearning setting, where our method is used to generate pseudo-labels for\nself-training. We evaluate our segmentation approach and unsupervised learning\npipeline on the Breakfast, 50-Salads, YouTube Instructions and Desktop Assembly\ndatasets, yielding state-of-the-art results for the unsupervised video action\nsegmentation task.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Ming Xu", "Stephen Gould"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727effe"}, "filepath": "data/2403.01414.png", "tags": [], "_media_type": "image", "_rand": 0.9999597124169347, "arXiv_link": "https://arxiv.org/abs/2403.01414", "other_link": "", "title": "Unsigned Orthogonal Distance Fields: An Accurate Neural Implicit Representation for Diverse 3D Shapes", "abstract": "Neural implicit representation of geometric shapes has witnessed considerable\nadvancements in recent years. However, common distance field based implicit\nrepresentations, specifically signed distance field (SDF) for watertight shapes\nor unsigned distance field (UDF) for arbitrary shapes, routinely suffer from\ndegradation of reconstruction accuracy when converting to explicit surface\npoints and meshes. 
In this paper, we introduce a novel neural implicit\nrepresentation based on unsigned orthogonal distance fields (UODFs). In UODFs,\nthe minimal unsigned distance from any spatial point to the shape surface is\ndefined solely in one orthogonal direction, contrasting with the\nmulti-directional determination made by SDF and UDF. Consequently, every point\nin the 3D UODFs can directly access its closest surface points along three\northogonal directions. This distinctive feature leverages the accurate\nreconstruction of surface points without interpolation errors. We verify the\neffectiveness of UODFs through a range of reconstruction examples, extending\nfrom simple watertight or non-watertight shapes to complex shapes that include\nhollows, internal or assembling structures.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["YuJie Lu", "Long Wan", "Nayu Ding", "Yulong Wang", "Shuhan Shen", "Shen Cai", "Lin Gao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727efff"}, "filepath": "data/2405.10612.png", "tags": [], "_media_type": "image", "_rand": 0.9993682765681433, "arXiv_link": "https://arxiv.org/abs/2405.10612", "other_link": "https://github.com/20000yshust/SWARM.", "title": "Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers", "abstract": "Given the power of vision transformers, a new learning paradigm, pre-training\nand then prompting, makes it more efficient and effective to address downstream\nvisual recognition tasks. In this paper, we identify a novel security threat\ntowards such a paradigm from the perspective of backdoor attacks. Specifically,\nan extra prompt token, called the switch token in this work, can turn the\nbackdoor mode on, i.e., converting a benign model into a backdoored one. Once\nunder the backdoor mode, a specific trigger can force the model to predict a\ntarget class. It poses a severe risk to the users of cloud API, since the\nmalicious behavior can not be activated and detected under the benign mode,\nthus making the attack very stealthy. To attack a pre-trained model, our\nproposed attack, named SWARM, learns a trigger and prompt tokens including a\nswitch token. They are optimized with the clean loss, which encourages the model\nto always behave normally even when the trigger is present, and the backdoor loss that\nensures the backdoor can be activated by the trigger when the switch is on.\nBesides, we utilize the cross-mode feature distillation to reduce the effect of\nthe switch token on clean samples. The experiments on diverse visual\nrecognition tasks confirm the success of our switchable backdoor attack, i.e.,\nachieving 95%+ attack success rate, and also being hard to be detected and\nremoved. 
Our code is available at https://github.com/20000yshust/SWARM.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Sheng Yang", "Jiawang Bai", "Kuofeng Gao", "Yong Yang", "Yiming Li", "Shu-Tao Xia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Cryptography and Security", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f000"}, "filepath": "data/2405.14847.png", "tags": [], "_media_type": "image", "_rand": 0.9994941810304817, "arXiv_link": "https://arxiv.org/abs/2405.14847", "other_link": "https://lwwu2.github.io/nde/}.", "title": "Neural Directional Encoding for Efficient and Accurate View-Dependent Appearance Modeling", "abstract": "Novel-view synthesis of specular objects like shiny metals or glossy paints\nremains a significant challenge. Not only the glossy appearance but also global\nillumination effects, including reflections of other objects in the\nenvironment, are critical components to faithfully reproduce a scene. In this\npaper, we present Neural Directional Encoding (NDE), a view-dependent\nappearance encoding of neural radiance fields (NeRF) for rendering specular\nobjects. NDE transfers the concept of feature-grid-based spatial encoding to\nthe angular domain, significantly improving the ability to model high-frequency\nangular signals. In contrast to previous methods that use encoding functions\nwith only angular input, we additionally cone-trace spatial features to obtain\na spatially varying directional encoding, which addresses the challenging\ninterreflection effects. Extensive experiments on both synthetic and real\ndatasets show that a NeRF model with NDE (1) outperforms the state of the art\non view synthesis of specular objects, and (2) works with small networks to\nallow fast (real-time) inference. The project webpage and source code are\navailable at: \\url{https://lwwu2.github.io/nde/}.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Image and video generation and manipulation", "Computational imaging and physics-based vision"], "authors_list": ["Liwen Wu", "Sai Bi", "Zexiang Xu", "Fujun Luan", "Kai Zhang", "Iliyan Georgiev", "Kalyan Sunkavalli", "Ravi Ramamoorthi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f001"}, "filepath": "data/2312.01305.png", "tags": [], "_media_type": "image", "_rand": 0.9994453145664614, "arXiv_link": "https://arxiv.org/abs/2312.01305", "other_link": "", "title": "ViVid-1-to-3: Novel View Synthesis with Video Diffusion Models", "abstract": "Generating novel views of an object from a single image is a challenging\ntask. It requires an understanding of the underlying 3D structure of the object\nfrom an image and rendering high-quality, spatially consistent new views. While\nrecent methods for view synthesis based on diffusion have shown great progress,\nachieving consistency among various view estimates and at the same time abiding\nby the desired camera pose remains a critical problem yet to be solved. In this\nwork, we demonstrate a strikingly simple method, where we utilize a pre-trained\nvideo diffusion model to solve this problem. 
Our key idea is that synthesizing\na novel view could be reformulated as synthesizing a video of a camera going\naround the object of interest -- a scanning video -- which then allows us to\nleverage the powerful priors that a video diffusion model would have learned.\nThus, to perform novel-view synthesis, we create a smooth camera trajectory to\nthe target view that we wish to render, and denoise using both a\nview-conditioned diffusion model and a video diffusion model. By doing so, we\nobtain a highly consistent novel view synthesis, outperforming the state of the\nart.", "keywords": [], "authors_list": ["Jeong-gi Kwak", "Erqun Dong", "Yuhe Jin", "Hanseok Ko", "Shweta Mahajan", "Kwang Moo Yi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f002"}, "filepath": "data/2403.16643v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994002950211902, "arXiv_link": "https://arxiv.org/abs/2403.16643v1", "other_link": "https://github.com/ProAirVerse/Self-Adaptive-Guidance-Diffusion.git.", "title": "Self-Adaptive Reality-Guided Diffusion for Artifact-Free Super-Resolution", "abstract": "Artifact-free super-resolution (SR) aims to translate low-resolution images\ninto their high-resolution counterparts with a strict integrity of the original\ncontent, eliminating any distortions or synthetic details. While traditional\ndiffusion-based SR techniques have demonstrated remarkable abilities to enhance\nimage detail, they are prone to artifact introduction during iterative\nprocedures. Such artifacts, ranging from trivial noise to unauthentic textures,\ndeviate from the true structure of the source image, thus challenging the\nintegrity of the super-resolution process. In this work, we propose\nSelf-Adaptive Reality-Guided Diffusion (SARGD), a training-free method that\ndelves into the latent space to effectively identify and mitigate the\npropagation of artifacts. Our SARGD begins by using an artifact detector to\nidentify implausible pixels, creating a binary mask that highlights artifacts.\nFollowing this, the Reality Guidance Refinement (RGR) process refines artifacts\nby integrating this mask with realistic latent representations, improving\nalignment with the original image. Nonetheless, initial realistic-latent\nrepresentations from lower-quality images result in over-smoothing in the final\noutput. To address this, we introduce a Self-Adaptive Guidance (SAG) mechanism.\nIt dynamically computes a reality score, enhancing the sharpness of the\nrealistic latent. These alternating mechanisms collectively achieve\nartifact-free super-resolution. Extensive experiments demonstrate the\nsuperiority of our method, delivering detailed artifact-free high-resolution\nimages while reducing sampling steps by 2X. 
We release our code at\nhttps://github.com/ProAirVerse/Self-Adaptive-Guidance-Diffusion.git.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Qingping Zheng", "Ling Zheng", "Yuanfan Guo", "Ying Li", "Songcen Xu", "Jiankang Deng", "Hang Xu"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f003"}, "filepath": "data/2312.14937.png", "tags": [], "_media_type": "image", "_rand": 0.9998475875664729, "arXiv_link": "https://arxiv.org/abs/2312.14937", "other_link": "https://yihua7.github.io/SC-GS-web/", "title": "SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes", "abstract": "Novel view synthesis for dynamic scenes is still a challenging problem in\ncomputer vision and graphics. Recently, Gaussian splatting has emerged as a\nrobust technique to represent static scenes and enable high-quality and\nreal-time novel view synthesis. Building upon this technique, we propose a new\nrepresentation that explicitly decomposes the motion and appearance of dynamic\nscenes into sparse control points and dense Gaussians, respectively. Our key\nidea is to use sparse control points, significantly fewer in number than the\nGaussians, to learn compact 6 DoF transformation bases, which can be locally\ninterpolated through learned interpolation weights to yield the motion field of\n3D Gaussians. We employ a deformation MLP to predict time-varying 6 DoF\ntransformations for each control point, which reduces learning complexities,\nenhances learning abilities, and facilitates obtaining temporal and spatial\ncoherent motion patterns. Then, we jointly learn the 3D Gaussians, the\ncanonical space locations of control points, and the deformation MLP to\nreconstruct the appearance, geometry, and dynamics of 3D scenes. During\nlearning, the location and number of control points are adaptively adjusted to\naccommodate varying motion complexities in different regions, and an ARAP loss\nfollowing the principle of as rigid as possible is developed to enforce spatial\ncontinuity and local rigidity of learned motions. Finally, thanks to the\nexplicit sparse motion representation and its decomposition from appearance,\nour method can enable user-controlled motion editing while retaining\nhigh-fidelity appearances. 
Extensive experiments demonstrate that our approach\noutperforms existing approaches on novel view synthesis with a high rendering\nspeed and enables novel appearance-preserved motion editing applications.\nProject page: https://yihua7.github.io/SC-GS-web/", "keywords": ["Scene analysis and understanding", "Image and video generation and manipulation"], "authors_list": ["Yihua Huang", "Yangtian Sun", "Ziyi Yang", "Xiaoyang Lyu", "Yan-Pei Cao", "Xiaojuan Qi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f004"}, "filepath": "data/2404.09465.png", "tags": [], "_media_type": "image", "_rand": 0.9991284244616315, "arXiv_link": "https://arxiv.org/abs/2404.09465", "other_link": "http://physcene.github.io.", "title": "PHYSCENE: Physically Interactable 3D Scene Synthesis for Embodied AI", "abstract": "With recent developments in Embodied Artificial Intelligence (EAI) research,\nthere has been a growing demand for high-quality, large-scale interactive scene\ngeneration. While prior methods in scene synthesis have prioritized the\nnaturalness and realism of the generated scenes, the physical plausibility and\ninteractivity of scenes have been largely left unexplored. To address this\ndisparity, we introduce PhyScene, a novel method dedicated to generating\ninteractive 3D scenes characterized by realistic layouts, articulated objects,\nand rich physical interactivity tailored for embodied agents. Based on a\nconditional diffusion model for capturing scene layouts, we devise novel\nphysics- and interactivity-based guidance mechanisms that integrate constraints\nfrom object collision, room layout, and object reachability. Through extensive\nexperiments, we demonstrate that PhyScene effectively leverages these guidance\nfunctions for physically interactable scene synthesis, outperforming existing\nstate-of-the-art scene synthesis methods by a large margin. Our findings\nsuggest that the scenes generated by PhyScene hold considerable potential for\nfacilitating diverse skill acquisition among agents within interactive\nenvironments, thereby catalyzing further advancements in embodied AI research.\nProject website: http://physcene.github.io.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Yandan Yang", "Baoxiong Jia", "Peiyuan Zhi", "Siyuan Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f005"}, "filepath": "data/2312.14239.png", "tags": [], "_media_type": "image", "_rand": 0.9991551641712161, "arXiv_link": "https://arxiv.org/abs/2312.14239", "other_link": "", "title": "PlatoNeRF: 3D Reconstruction in Plato\u2019s Cave via Single-View Two-Bounce Lidar", "abstract": "3D reconstruction from a single-view is challenging because of the ambiguity\nfrom monocular cues and lack of information about occluded regions. Neural\nradiance fields (NeRF), while popular for view synthesis and 3D reconstruction,\nare typically reliant on multi-view images. 
Existing methods for single-view 3D\nreconstruction with NeRF rely on either data priors to hallucinate views of\noccluded regions, which may not be physically accurate, or shadows observed by\nRGB cameras, which are difficult to detect in ambient light and low albedo\nbackgrounds. We propose using time-of-flight data captured by a single-photon\navalanche diode to overcome these limitations. Our method models two-bounce\noptical paths with NeRF, using lidar transient data for supervision. By\nleveraging the advantages of both NeRF and two-bounce light measured by lidar,\nwe demonstrate that we can reconstruct visible and occluded geometry without\ndata priors or reliance on controlled ambient lighting or scene albedo. In\naddition, we demonstrate improved generalization under practical constraints on\nsensor spatial- and temporal-resolution. We believe our method is a promising\ndirection as single-photon lidars become ubiquitous on consumer devices, such\nas phones, tablets, and headsets.", "keywords": ["Computational imaging and physics-based vision"], "authors_list": ["Tzofi Klinghoffer", "Xiaoyu Xiang", "Siddharth Somasundaram", "Yuchen Fan", "Christian Richardt", "Ramesh Raskar", "Rakesh Ranjan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f006"}, "filepath": "data/2404.00680.png", "tags": [], "_media_type": "image", "_rand": 0.9990985692681182, "arXiv_link": "https://arxiv.org/abs/2404.00680", "other_link": "", "title": "Learning to Rank Patches for Unbiased Image Redundancy Reduction", "abstract": "Images suffer from heavy spatial redundancy because pixels in neighboring\nregions are spatially correlated. Existing approaches strive to overcome this\nlimitation by reducing less meaningful image regions. However, current leading\nmethods rely on supervisory signals. They may compel models to preserve content\nthat aligns with labeled categories and discard content belonging to unlabeled\ncategories. This categorical inductive bias makes these methods less effective\nin real-world scenarios. To address this issue, we propose a self-supervised\nframework for image redundancy reduction called Learning to Rank Patches\n(LTRP). We observe that image reconstruction of masked image modeling models is\nsensitive to the removal of visible patches when the masking ratio is high\n(e.g., 90\\%). Building upon it, we implement LTRP via two steps: inferring the\nsemantic density score of each patch by quantifying variation between\nreconstructions with and without this patch, and learning to rank the patches\nwith the pseudo score. The entire process is self-supervised, thus getting out\nof the dilemma of categorical inductive bias. We design extensive experiments\non different datasets and tasks. 
The results demonstrate that LTRP outperforms\nboth supervised and other self-supervised methods due to the fair assessment of\nimage content.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Yang Luo", "Zhineng Chen", "Peng Zhou", "Zuxuan Wu", "Xieping Gao", "Yu-Gang Jiang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f007"}, "filepath": "data/2404.10322v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996580720779884, "arXiv_link": "https://arxiv.org/abs/2404.10322v1", "other_link": "https://github.com/Matt-Su/DR-Adapter.", "title": "Domain-Rectifying Adapter for Cross-Domain Few-Shot Segmentation", "abstract": "Few-shot semantic segmentation (FSS) has achieved great success on segmenting\nobjects of novel classes, supported by only a few annotated samples. However,\nexisting FSS methods often underperform in the presence of domain shifts,\nespecially when encountering new domain styles that are unseen during training.\nIt is suboptimal to directly adapt or generalize the entire model to new\ndomains in the few-shot scenario. Instead, our key idea is to adapt a small\nadapter for rectifying diverse target domain styles to the source domain.\nConsequently, the rectified target domain features can fittingly benefit from\nthe well-optimized source domain segmentation model, which is intently trained\non sufficient source domain data. Training domain-rectifying adapter requires\nsufficiently diverse target domains. We thus propose a novel local-global style\nperturbation method to simulate diverse potential target domains by\nperturbating the feature channel statistics of the individual images and\ncollective statistics of the entire source domain, respectively. Additionally,\nwe propose a cyclic domain alignment module to facilitate the adapter\neffectively rectifying domains using a reverse domain rectification\nsupervision. The adapter is trained to rectify the image features from diverse\nsynthesized target domains to align with the source domain. During testing on\ntarget domains, we start by rectifying the image features and then conduct\nfew-shot segmentation on the domain-rectified features. Extensive experiments\ndemonstrate the effectiveness of our method, achieving promising results on\ncross-domain few-shot semantic segmentation tasks. Our code is available at\nhttps://github.com/Matt-Su/DR-Adapter.", "keywords": [], "authors_list": ["Jiapeng Su", "Qi Fan", "Wenjie Pei", "Guangming Lu", "Fanglin Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f008"}, "filepath": "data/2311.14155.png", "tags": [], "_media_type": "image", "_rand": 0.9998123732332602, "arXiv_link": "https://arxiv.org/abs/2311.14155", "other_link": "https://github.com/nv-nguyen/gigaPose", "title": "GigaPose: Fast and Robust Novel Object Pose Estimation via One Correspondence", "abstract": "We present GigaPose, a fast, robust, and accurate method for CAD-based novel\nobject pose estimation in RGB images. GigaPose first leverages discriminative\n\"templates\", rendered images of the CAD models, to recover the out-of-plane\nrotation and then uses patch correspondences to estimate the four remaining\nparameters. 
Our approach samples templates in only a two-degrees-of-freedom\nspace instead of the usual three and matches the input image to the templates\nusing fast nearest-neighbor search in feature space, results in a speedup\nfactor of 35x compared to the state of the art. Moreover, GigaPose is\nsignificantly more robust to segmentation errors. Our extensive evaluation on\nthe seven core datasets of the BOP challenge demonstrates that it achieves\nstate-of-the-art accuracy and can be seamlessly integrated with existing\nrefinement methods. Additionally, we show the potential of GigaPose with 3D\nmodels predicted by recent work on 3D reconstruction from a single image,\nrelaxing the need for CAD models and making 6D pose object estimation much more\nconvenient. Our source code and trained models are publicly available at\nhttps://github.com/nv-nguyen/gigaPose", "keywords": ["Efficient and scalable vision"], "authors_list": ["Van Nguyen Nguyen", "Thibault Groueix", "Mathieu Salzmann", "Vincent Lepetit"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f009"}, "filepath": "data/2403.11492.png", "tags": [], "_media_type": "image", "_rand": 0.9997687001166419, "arXiv_link": "https://arxiv.org/abs/2403.11492", "other_link": "https://github.com/opendilab/SmartRefine/", "title": "SmartRefine: A Scenario-Adaptive Refinement Framework for Efficient Motion Prediction", "abstract": "Predicting the future motion of surrounding agents is essential for\nautonomous vehicles (AVs) to operate safely in dynamic, human-robot-mixed\nenvironments. Context information, such as road maps and surrounding agents'\nstates, provides crucial geometric and semantic information for motion behavior\nprediction. To this end, recent works explore two-stage prediction frameworks\nwhere coarse trajectories are first proposed, and then used to select critical\ncontext information for trajectory refinement. However, they either incur a\nlarge amount of computation or bring limited improvement, if not both. In this\npaper, we introduce a novel scenario-adaptive refinement strategy, named\nSmartRefine, to refine prediction with minimal additional computation.\nSpecifically, SmartRefine can comprehensively adapt refinement configurations\nbased on each scenario's properties, and smartly chooses the number of\nrefinement iterations by introducing a quality score to measure the prediction\nquality and remaining refinement potential of each scenario. SmartRefine is\ndesigned as a generic and flexible approach that can be seamlessly integrated\ninto most state-of-the-art motion prediction models. Experiments on Argoverse\n(1 & 2) show that our method consistently improves the prediction accuracy of\nmultiple state-of-the-art prediction models. Specifically, by adding\nSmartRefine to QCNet, we outperform all published ensemble-free works on the\nArgoverse 2 leaderboard (single agent track) at submission. Comprehensive\nstudies are also conducted to ablate design choices and explore the mechanism\nbehind multi-iteration refinement. Codes are available at\nhttps://github.com/opendilab/SmartRefine/", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Yang Zhou", "Hao Shao", "Letian Wang", "Steven L. 
Waslander", "Hongsheng Li", "Yu Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f00a"}, "filepath": "data/2312.04557.png", "tags": [], "_media_type": "image", "_rand": 0.999666136107422, "arXiv_link": "https://arxiv.org/abs/2312.04557", "other_link": "", "title": "GenTron: Diffusion Transformers for Image and Video Generation", "abstract": "In this study, we explore Transformer-based diffusion models for image and\nvideo generation. Despite the dominance of Transformer architectures in various\nfields due to their flexibility and scalability, the visual generative domain\nprimarily utilizes CNN-based U-Net architectures, particularly in\ndiffusion-based models. We introduce GenTron, a family of Generative models\nemploying Transformer-based diffusion, to address this gap. Our initial step\nwas to adapt Diffusion Transformers (DiTs) from class to text conditioning, a\nprocess involving thorough empirical exploration of the conditioning mechanism.\nWe then scale GenTron from approximately 900M to over 3B parameters, observing\nsignificant improvements in visual quality. Furthermore, we extend GenTron to\ntext-to-video generation, incorporating novel motion-free guidance to enhance\nvideo quality. In human evaluations against SDXL, GenTron achieves a 51.1% win\nrate in visual quality (with a 19.8% draw rate), and a 42.3% win rate in text\nalignment (with a 42.9% draw rate). GenTron also excels in the T2I-CompBench,\nunderscoring its strengths in compositional generation. We believe this work\nwill provide meaningful insights and serve as a valuable reference for future\nresearch.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Shoufa Chen", "Mengmeng Xu", "Jiawei Ren", "Yuren Cong", "Sen He", "Yanping Xie", "Animesh Sinha", "Ping Luo", "Tao Xiang", "Juan-Manuel P\u00e9rez-R\u00faa"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f00b"}, "filepath": "data/2403.14729.png", "tags": [], "_media_type": "image", "_rand": 0.9993614680889992, "arXiv_link": "https://arxiv.org/abs/2403.14729", "other_link": "", "title": "Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch", "abstract": "Current techniques for deep neural network (DNN) pruning often involve\nintricate multi-step processes that require domain-specific expertise, making\ntheir widespread adoption challenging. To address the limitation, the\nOnly-Train-Once (OTO) and OTOv2 are proposed to eliminate the need for\nadditional fine-tuning steps by directly training and compressing a general DNN\nfrom scratch. Nevertheless, the static design of optimizers (in OTO) can lead\nto convergence issues of local optima. In this paper, we proposed the\nAuto-Train-Once (ATO), an innovative network pruning algorithm designed to\nautomatically reduce the computational and storage costs of DNNs. During the\nmodel training phase, our approach not only trains the target model but also\nleverages a controller network as an architecture generator to guide the\nlearning of target model weights. 
Furthermore, we developed a novel stochastic\ngradient algorithm that enhances the coordination between model training and\ncontroller network training, thereby improving pruning performance. We provide\na comprehensive convergence analysis as well as extensive experiments, and the\nresults show that our approach achieves state-of-the-art performance across\nvarious model architectures (including ResNet18, ResNet34, ResNet50, ResNet56,\nand MobileNetv2) on standard benchmark datasets (CIFAR-10, CIFAR-100, and\nImageNet).", "keywords": ["Efficient and scalable vision"], "authors_list": ["Xidong Wu", "Shangqian Gao", "Zeyu Zhang", "Zhenzhen Li", "Runxue Bao", "Yanfu Zhang", "Xiaoqian Wang", "Heng Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f00c"}, "filepath": "data/2402.19326.png", "tags": [], "_media_type": "image", "_rand": 0.9999694265151725, "arXiv_link": "https://arxiv.org/abs/2402.19326", "other_link": "https://github.com/ls1rius/WSI_FiVE.", "title": "Generalizable Whole Slide Image Classification with Fine-Grained Visual-Semantic Interaction", "abstract": "Whole Slide Image (WSI) classification is often formulated as a Multiple\nInstance Learning (MIL) problem. Recently, Vision-Language Models (VLMs) have\ndemonstrated remarkable performance in WSI classification. However, existing\nmethods leverage coarse-grained pathogenetic descriptions for visual\nrepresentation supervision, which are insufficient to capture the complex\nvisual appearance of pathogenetic images, hindering the generalizability of\nmodels on diverse downstream tasks. Additionally, processing high-resolution\nWSIs can be computationally expensive. In this paper, we propose a novel\n\"Fine-grained Visual-Semantic Interaction\" (FiVE) framework for WSI\nclassification. It is designed to enhance the model's generalizability by\nleveraging the interaction between localized visual patterns and fine-grained\npathological semantics. Specifically, with meticulously designed queries, we\nstart by utilizing a large language model to extract fine-grained pathological\ndescriptions from various non-standardized raw reports. The output descriptions\nare then reconstructed into fine-grained labels used for training. By\nintroducing a Task-specific Fine-grained Semantics (TFS) module, we enable\nprompts to capture crucial visual information in WSIs, which enhances\nrepresentation learning and augments generalization capabilities significantly.\nFurthermore, given that pathological visual patterns are redundantly\ndistributed across tissue slices, we sample a subset of visual instances during\ntraining. Our method demonstrates robust generalizability and strong\ntransferability, dominantly outperforming the counterparts on the TCGA Lung\nCancer dataset with at least 9.19% higher accuracy in few-shot experiments. 
The\ncode is available at: https://github.com/ls1rius/WSI_FiVE.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Medical imaging and biological vision"], "authors_list": ["Hao Li", "Ying Chen", "Yifei Chen", "Rongshan Yu", "Wenxian Yang", "Liansheng Wang", "Bowen Ding", "Yuchen Han"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f00d"}, "filepath": "data/2402.00627.png", "tags": [], "_media_type": "image", "_rand": 0.9999847076616537, "arXiv_link": "https://arxiv.org/abs/2402.00627", "other_link": "https://github.com/VamosC/CapHuman.", "title": "CapHuman: Capture Your Moments in Parallel Universes", "abstract": "We concentrate on a novel human-centric image synthesis task, that is, given\nonly one reference facial photograph, it is expected to generate specific\nindividual images with diverse head positions, poses, facial expressions, and\nilluminations in different contexts. To accomplish this goal, we argue that our\ngenerative model should be capable of the following favorable characteristics:\n(1) a strong visual and semantic understanding of our world and human society\nfor basic object and human image generation. (2) generalizable identity\npreservation ability. (3) flexible and fine-grained head control. Recently,\nlarge pre-trained text-to-image diffusion models have shown remarkable results,\nserving as a powerful generative foundation. As a basis, we aim to unleash the\nabove two capabilities of the pre-trained model. In this work, we present a new\nframework named CapHuman. We embrace the \"encode then learn to align\" paradigm,\nwhich enables generalizable identity preservation for new individuals without\ncumbersome tuning at inference. CapHuman encodes identity features and then\nlearns to align them into the latent space. Moreover, we introduce the 3D\nfacial prior to equip our model with control over the human head in a flexible\nand 3D-consistent manner. Extensive qualitative and quantitative analyses\ndemonstrate our CapHuman can produce well-identity-preserved, photo-realistic,\nand high-fidelity portraits with content-rich representations and various head\nrenditions, superior to established baselines. Code and checkpoint will be\nreleased at https://github.com/VamosC/CapHuman.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Chao Liang", "Fan Ma", "Linchao Zhu", "Yingying Deng", "Yi Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f00e"}, "filepath": "data/2403.07700.png", "tags": [], "_media_type": "image", "_rand": 0.9991978825891213, "arXiv_link": "https://arxiv.org/abs/2403.07700", "other_link": "", "title": "CuVLER: Enhanced Unsupervised Object Discoveries through Exhaustive Self-Supervised Transformers", "abstract": "In this paper, we introduce VoteCut, an innovative method for unsupervised\nobject discovery that leverages feature representations from multiple\nself-supervised models. VoteCut employs normalized-cut based graph\npartitioning, clustering and a pixel voting approach. 
Additionally, We present\nCuVLER (Cut-Vote-and-LEaRn), a zero-shot model, trained using pseudo-labels,\ngenerated by VoteCut, and a novel soft target loss to refine segmentation\naccuracy. Through rigorous evaluations across multiple datasets and several\nunsupervised setups, our methods demonstrate significant improvements in\ncomparison to previous state-of-the-art models. Our ablation studies further\nhighlight the contributions of each component, revealing the robustness and\nefficacy of our approach. Collectively, VoteCut and CuVLER pave the way for\nfuture advancements in image segmentation.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Shahaf Arica", "Or Rubin", "Sapir Gershov", "Shlomi Laufer"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f00f"}, "filepath": "data/2311.16711.png", "tags": [], "_media_type": "image", "_rand": 0.999590559118419, "arXiv_link": "https://arxiv.org/abs/2311.16711", "other_link": "https://leditsplusplus-project.static.hf.space", "title": "LEDITS++: Limitless Image Editing using Text-to-Image Models", "abstract": "Text-to-image diffusion models have recently received increasing interest for\ntheir astonishing ability to produce high-fidelity images from solely text\ninputs. Subsequent research efforts aim to exploit and apply their capabilities\nto real image editing. However, existing image-to-image methods are often\ninefficient, imprecise, and of limited versatility. They either require\ntime-consuming fine-tuning, deviate unnecessarily strongly from the input\nimage, and/or lack support for multiple, simultaneous edits. To address these\nissues, we introduce LEDITS++, an efficient yet versatile and precise textual\nimage manipulation technique. LEDITS++'s novel inversion approach requires no\ntuning nor optimization and produces high-fidelity results with a few diffusion\nsteps. Second, our methodology supports multiple simultaneous edits and is\narchitecture-agnostic. Third, we use a novel implicit masking technique that\nlimits changes to relevant image regions. We propose the novel TEdBench++\nbenchmark as part of our exhaustive evaluation. Our results demonstrate the\ncapabilities of LEDITS++ and its improvements over previous methods. The\nproject page is available at https://leditsplusplus-project.static.hf.space .", "keywords": ["Image and video generation and manipulation", "Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Manuel Brack", "Felix Friedrich", "Katharina Kornmeier", "Linoy Tsaban", "Patrick Schramowski", "Kristian Kersting", "Apolin\u00e1rio Passos"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Human-Computer Interaction", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f010"}, "filepath": "data/2311.10802.png", "tags": [], "_media_type": "image", "_rand": 0.9999484507712264, "arXiv_link": "https://arxiv.org/abs/2311.10802", "other_link": "", "title": "Are Conventional SNNs Really Efficient? A Perspective from Network Quantization", "abstract": "Spiking Neural Networks (SNNs) have been widely praised for their high energy\nefficiency and immense potential. 
However, comprehensive research that\ncritically contrasts and correlates SNNs with quantized Artificial Neural\nNetworks (ANNs) remains scant, often leading to skewed comparisons lacking\nfairness towards ANNs. This paper introduces a unified perspective,\nillustrating that the time steps in SNNs and quantized bit-widths of activation\nvalues present analogous representations. Building on this, we present a more\npragmatic and rational approach to estimating the energy consumption of SNNs.\nDiverging from the conventional Synaptic Operations (SynOps), we champion the\n\"Bit Budget\" concept. This notion permits an intricate discourse on\nstrategically allocating computational and storage resources between weights,\nactivation values, and temporal steps under stringent hardware constraints.\nGuided by the Bit Budget paradigm, we discern that pivoting efforts towards\nspike patterns and weight quantization, rather than temporal attributes,\nelicits profound implications for model performance. Utilizing the Bit Budget\nfor holistic design consideration of SNNs elevates model performance across\ndiverse data types, encompassing static imagery and neuromorphic datasets. Our\nrevelations bridge the theoretical chasm between SNNs and quantized ANNs and\nilluminate a pragmatic trajectory for future endeavors in energy-efficient\nneural computations.", "keywords": [], "authors_list": ["Guobin Shen", "Dongcheng Zhao", "Tenglong Li", "Jindong Li", "Yi Zeng"], "category_name": "Neural and Evolutionary Computing", "all_categories": ["Neural and Evolutionary Computing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f011"}, "filepath": "data/2402.07739v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997478743055956, "arXiv_link": "https://arxiv.org/abs/2402.07739v1", "other_link": "", "title": "Task-conditioned adaptation of visual features in multi-task policy learning", "abstract": "Successfully addressing a wide variety of tasks is a core ability of\nautonomous agents, which requires flexibly adapting the underlying\ndecision-making strategies and, as we argue in this work, also adapting the\nunderlying perception modules. An analogical argument would be the human visual\nsystem, which uses top-down signals to focus attention determined by the\ncurrent task. Similarly, in this work, we adapt pre-trained large vision models\nconditioned on specific downstream tasks in the context of multi-task policy\nlearning. We introduce task-conditioned adapters that do not require finetuning\nany pre-trained weights, combined with a single policy trained with behavior\ncloning and capable of addressing multiple tasks. We condition the policy and\nvisual adapters on task embeddings, which can be selected at inference if the\ntask is known, or alternatively inferred from a set of example demonstrations.\nTo this end, we propose a new optimization-based estimator. We evaluate the\nmethod on a wide variety of tasks of the CortexBench benchmark and show that,\ncompared to existing work, it can be addressed with a single policy. 
In\nparticular, we demonstrate that adapting visual features is a key design choice\nand that the method generalizes to unseen tasks given visual demonstrations.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Pierre Marza", "Laetitia Matignon", "Olivier Simonin", "Christian Wolf"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f012"}, "filepath": "data/2311.07042.png", "tags": [], "_media_type": "image", "_rand": 0.9991296368804982, "arXiv_link": "https://arxiv.org/abs/2311.07042", "other_link": "", "title": "Open-Vocabulary Video Anomaly Detection", "abstract": "Video anomaly detection (VAD) with weak supervision has achieved remarkable\nperformance in utilizing video-level labels to discriminate whether a video\nframe is normal or abnormal. However, current approaches are inherently limited\nto a closed-set setting and may struggle in open-world applications where there\ncan be anomaly categories in the test data unseen during training. A few recent\nstudies attempt to tackle a more realistic setting, open-set VAD, which aims to\ndetect unseen anomalies given seen anomalies and normal videos. However, such a\nsetting focuses on predicting frame anomaly scores, having no ability to\nrecognize the specific categories of anomalies, despite the fact that this\nability is essential for building more informed video surveillance systems.\nThis paper takes a step further and explores open-vocabulary video anomaly\ndetection (OVVAD), in which we aim to leverage pre-trained large models to\ndetect and categorize seen and unseen anomalies. To this end, we propose a\nmodel that decouples OVVAD into two mutually complementary tasks --\nclass-agnostic detection and class-specific classification -- and jointly\noptimizes both tasks. Particularly, we devise a semantic knowledge injection\nmodule to introduce semantic knowledge from large language models for the\ndetection task, and design a novel anomaly synthesis module to generate pseudo\nunseen anomaly videos with the help of large vision generation models for the\nclassification task. These semantic knowledge and synthesis anomalies\nsubstantially extend our model's capability in detecting and categorizing a\nvariety of seen and unseen anomalies. 
Extensive experiments on three\nwidely-used benchmarks demonstrate our model achieves state-of-the-art\nperformance on OVVAD task.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Peng Wu", "Xuerong Zhou", "Guansong Pang", "Yujia Sun", "Jing Liu", "Peng Wang", "Yanning Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f013"}, "filepath": "data/2402.10401.png", "tags": [], "_media_type": "image", "_rand": 0.9994302497902326, "arXiv_link": "https://arxiv.org/abs/2402.10401", "other_link": "", "title": "ManiFPT: Defining and Analyzing Fingerprints of Generative Models", "abstract": "Recent works have shown that generative models leave traces of their\nunderlying generative process on the generated samples, broadly referred to as\nfingerprints of a generative model, and have studied their utility in detecting\nsynthetic images from real ones. However, the extend to which these\nfingerprints can distinguish between various types of synthetic image and help\nidentify the underlying generative process remain under-explored. In\nparticular, the very definition of a fingerprint remains unclear, to our\nknowledge. To that end, in this work, we formalize the definition of artifact\nand fingerprint in generative models, propose an algorithm for computing them\nin practice, and finally study its effectiveness in distinguishing a large\narray of different generative models. We find that using our proposed\ndefinition can significantly improve the performance on the task of identifying\nthe underlying generative process from samples (model attribution) compared to\nexisting methods. Additionally, we study the structure of the fingerprints, and\nobserve that it is very predictive of the effect of different design choices on\nthe generative process.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Hae Jin Song", "Mahyar Khayatkhoei", "Wael AbdAlmageed"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f014"}, "filepath": "data/2403.07874.png", "tags": [], "_media_type": "image", "_rand": 0.9999853919439482, "arXiv_link": "https://arxiv.org/abs/2403.07874", "other_link": "https://github.com/zh460045050/V2L-Tokenizer.", "title": "Beyond Text: Frozen Large Language Models in Visual Signal Comprehension", "abstract": "In this work, we investigate the potential of a large language model (LLM) to\ndirectly comprehend visual signals without the necessity of fine-tuning on\nmulti-modal datasets. The foundational concept of our method views an image as\na linguistic entity, and translates it to a set of discrete words derived from\nthe LLM's vocabulary. To achieve this, we present the Vision-to-Language\nTokenizer, abbreviated as V2T Tokenizer, which transforms an image into a\n``foreign language'' with the combined aid of an encoder-decoder, the LLM\nvocabulary, and a CLIP model. With this innovative image encoding, the LLM\ngains the ability not only for visual comprehension but also for image\ndenoising and restoration in an auto-regressive fashion-crucially, without any\nfine-tuning. 
We undertake rigorous experiments to validate our method,\nencompassing understanding tasks like image recognition, image captioning, and\nvisual question answering, as well as image denoising tasks like inpainting,\noutpainting, deblurring, and shift restoration. Code and models are available\nat https://github.com/zh460045050/V2L-Tokenizer.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Image and video generation and manipulation"], "authors_list": ["Lei Zhu", "Fangyun Wei", "Yanye Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f015"}, "filepath": "data/2312.14124.png", "tags": [], "_media_type": "image", "_rand": 0.9999971810139493, "arXiv_link": "https://arxiv.org/abs/2312.14124", "other_link": "", "title": "Neural Point Cloud Diffusion for Disentangled 3D Shape and Appearance Generation", "abstract": "Controllable generation of 3D assets is important for many practical\napplications like content creation in movies, games and engineering, as well as\nin AR/VR. Recently, diffusion models have shown remarkable results in\ngeneration quality of 3D objects. However, none of the existing models enable\ndisentangled generation to control the shape and appearance separately. For the\nfirst time, we present a suitable representation for 3D diffusion models to\nenable such disentanglement by introducing a hybrid point cloud and neural\nradiance field approach. We model a diffusion process over point positions\njointly with a high-dimensional feature space for a local density and radiance\ndecoder. While the point positions represent the coarse shape of the object,\nthe point features allow modeling the geometry and appearance details. This\ndisentanglement enables us to sample both independently and therefore to\ncontrol both separately. Our approach sets a new state of the art in generation\ncompared to previous disentanglement-capable methods by reduced FID scores of\n30-90% and is on-par with other non disentanglement-capable state-of-the art\nmethods.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Philipp Schr\u00f6ppel", "Christopher Wewer", "Jan Lenssen", "Eddy Ilg", "Thomas Brox"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f016"}, "filepath": "data/2404.04565.png", "tags": [], "_media_type": "image", "_rand": 0.9990510292304755, "arXiv_link": "https://arxiv.org/abs/2404.04565", "other_link": "", "title": "SportsHHI: A Dataset for Human-Human Interaction Detection in Sports Videos", "abstract": "Video-based visual relation detection tasks, such as video scene graph\ngeneration, play important roles in fine-grained video understanding. However,\ncurrent video visual relation detection datasets have two main limitations that\nhinder the progress of research in this area. First, they do not explore\ncomplex human-human interactions in multi-person scenarios. Second, the\nrelation types of existing datasets have relatively low-level semantics and can\nbe often recognized by appearance or simple prior information, without the need\nfor detailed spatio-temporal context reasoning. 
Nevertheless, comprehending\nhigh-level interactions between humans is crucial for understanding complex\nmulti-person videos, such as sports and surveillance videos. To address this\nissue, we propose a new video visual relation detection task: video human-human\ninteraction detection, and build a dataset named SportsHHI for it. SportsHHI\ncontains 34 high-level interaction classes from basketball and volleyball\nsports. 118,075 human bounding boxes and 50,649 interaction instances are\nannotated on 11,398 keyframes. To benchmark this, we propose a two-stage\nbaseline method and conduct extensive experiments to reveal the key factors for\na successful human-human interaction detector. We hope that SportsHHI can\nstimulate research on human interaction understanding in videos and promote the\ndevelopment of spatio-temporal context modeling techniques in video visual\nrelation detection.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Tao Wu", "Runyu He", "Gangshan Wu", "Limin Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f017"}, "filepath": "data/2403.07244.png", "tags": [], "_media_type": "image", "_rand": 0.9995063951776793, "arXiv_link": "https://arxiv.org/abs/2403.07244", "other_link": "", "title": "Time-Efficient Light-Field Acquisition Using Coded Aperture and Events", "abstract": "We propose a computational imaging method for time-efficient light-field\nacquisition that combines a coded aperture with an event-based camera.\nDifferent from the conventional coded-aperture imaging method, our method\napplies a sequence of coding patterns during a single exposure for an image\nframe. The parallax information, which is related to the differences in coding\npatterns, is recorded as events. The image frame and events, all of which are\nmeasured in a single exposure, are jointly used to computationally reconstruct\na light field. We also designed an algorithm pipeline for our method that is\nend-to-end trainable on the basis of deep optics and compatible with real\ncamera hardware. We experimentally showed that our method can achieve more\naccurate reconstruction than several other imaging methods with a single\nexposure. We also developed a hardware prototype with the potential to complete\nthe measurement on the camera within 22 msec and demonstrated that light fields\nfrom real 3-D scenes can be obtained with convincing visual quality. Our\nsoftware and supplementary video are available from our project website.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Shuji Habuchi", "Keita Takahashi", "Chihiro Tsutake", "Toshiaki Fujii", "Hajime Nagahara"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f018"}, "filepath": "data/2312.04670v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991046836349884, "arXiv_link": "https://arxiv.org/abs/2312.04670v1", "other_link": "", "title": "Rapid Motor Adaptation for Robotic Manipulator Arms", "abstract": "Developing generalizable manipulation skills is a core challenge in embodied\nAI. 
This includes generalization across diverse task configurations,\nencompassing variations in object shape, density, friction coefficient, and\nexternal disturbances such as forces applied to the robot. Rapid Motor\nAdaptation (RMA) offers a promising solution to this challenge. It posits that\nessential hidden variables influencing an agent's task performance, such as\nobject mass and shape, can be effectively inferred from the agent's action and\nproprioceptive history. Drawing inspiration from RMA in locomotion and in-hand\nrotation, we use depth perception to develop agents tailored for rapid motor\nadaptation in a variety of manipulation tasks. We evaluated our agents on four\nchallenging tasks from the Maniskill2 benchmark, namely pick-and-place\noperations with hundreds of objects from the YCB and EGAD datasets, peg\ninsertion with precise position and orientation, and operating a variety of\nfaucets and handles, with customized environment variations. Empirical results\ndemonstrate that our agents surpass state-of-the-art methods like automatic\ndomain randomization and vision-based policies, obtaining better generalization\nperformance and sample efficiency.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yichao Liang", "Kevin Ellis", "Jo\u00e3o F. Henriques"], "category_name": "Robotics", "all_categories": ["Robotics", "Artificial Intelligence", "Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f019"}, "filepath": "data/2405.08458.png", "tags": [], "_media_type": "image", "_rand": 0.9991354319377618, "arXiv_link": "https://arxiv.org/abs/2405.08458", "other_link": "", "title": "Rethinking Prior Information Generation with CLIP for Few-Shot Segmentation", "abstract": "Few-shot segmentation remains challenging due to the limitations of its\nlabeling information for unseen classes. Most previous approaches rely on\nextracting high-level feature maps from the frozen visual encoder to compute\nthe pixel-wise similarity as a key prior guidance for the decoder. However,\nsuch a prior representation suffers from coarse granularity and poor\ngeneralization to new classes since these high-level feature maps have obvious\ncategory bias. In this work, we propose to replace the visual prior\nrepresentation with the visual-text alignment capacity to capture more reliable\nguidance and enhance the model generalization. Specifically, we design two\nkinds of training-free prior information generation strategy that attempts to\nutilize the semantic alignment capability of the Contrastive Language-Image\nPre-training model (CLIP) to locate the target class. Besides, to acquire more\naccurate prior guidance, we build a high-order relationship of attention maps\nand utilize it to refine the initial prior information. 
Experiments on both the\nPASCAL-5{i} and COCO-20{i} datasets show that our method obtains a clearly\nsubstantial improvement and reaches the new state-of-the-art performance.", "keywords": [], "authors_list": ["Jin Wang", "Bingfeng Zhang", "Jian Pang", "Honglong Chen", "Weifeng Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f01a"}, "filepath": "data/2403.02611.png", "tags": [], "_media_type": "image", "_rand": 0.9995605546318391, "arXiv_link": "https://arxiv.org/abs/2403.02611", "other_link": "https://github.com/PieceZhang/MPT-CataBlur.", "title": "A Unified Framework for Microscopy Defocus Deblur with Multi-Pyramid Transformer and Contrastive Learning", "abstract": "Defocus blur is a persistent problem in microscope imaging that poses harm to\npathology interpretation and medical intervention in cell microscopy and\nmicroscope surgery. To address this problem, a unified framework including the\nmulti-pyramid transformer (MPT) and extended frequency contrastive\nregularization (EFCR) is proposed to tackle two outstanding challenges in\nmicroscopy deblur: longer attention span and data deficiency. The MPT employs\nan explicit pyramid structure at each network stage that integrates the\ncross-scale window attention (CSWA), the intra-scale channel attention (ISCA),\nand the feature-enhancing feed-forward network (FEFN) to capture long-range\ncross-scale spatial interaction and global channel context. The EFCR addresses\nthe data deficiency problem by exploring latent deblur signals from different\nfrequency bands. It also enables deblur knowledge transfer to learn\ncross-domain information from extra data, improving deblur performance for\nlabeled and unlabeled data. Extensive experiments and downstream task\nvalidation show the framework achieves state-of-the-art performance across\nmultiple datasets. Project page: https://github.com/PieceZhang/MPT-CataBlur.", "keywords": ["Medical imaging and biological vision", "Low-level vision"], "authors_list": ["Yuelin Zhang", "Pengyu Zheng", "Wanquan Yan", "Chengyu Fang", "Shing Shin Cheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f01b"}, "filepath": "data/2311.08359.png", "tags": [], "_media_type": "image", "_rand": 0.9991665188664992, "arXiv_link": "https://arxiv.org/abs/2311.08359", "other_link": "https://kimialabmayo.github.io/PathDino-Page/", "title": "Rotation-Agnostic Image Representation Learning for Digital Pathology", "abstract": "This paper addresses complex challenges in histopathological image analysis\nthrough three key contributions. Firstly, it introduces a fast patch selection\nmethod, FPS, for whole-slide image (WSI) analysis, significantly reducing\ncomputational cost while maintaining accuracy. Secondly, it presents PathDino,\na lightweight histopathology feature extractor with a minimal configuration of\nfive Transformer blocks and only 9 million parameters, markedly fewer than\nalternatives. Thirdly, it introduces a rotation-agnostic representation\nlearning paradigm using self-supervised learning, effectively mitigating\noverfitting. 
We also show that our compact model outperforms existing\nstate-of-the-art histopathology-specific vision transformers on 12 diverse\ndatasets, including both internal datasets spanning four sites (breast, liver,\nskin, and colorectal) and seven public datasets (PANDA, CAMELYON16, BRACS,\nDigestPath, Kather, PanNuke, and WSSS4LUAD). Notably, even with a training\ndataset of 6 million histopathology patches from The Cancer Genome Atlas\n(TCGA), our approach demonstrates an average 8.5% improvement in patch-level\nmajority vote performance. These contributions provide a robust framework for\nenhancing image analysis in digital pathology, rigorously validated through\nextensive evaluation. Project Page:\nhttps://kimialabmayo.github.io/PathDino-Page/", "keywords": ["Efficient and scalable vision"], "authors_list": ["Saghir Alfasly", "Abubakr Shafique", "Peyman Nejat", "Jibran Khan", "Areej Alsaafin", "Ghazal Alabtah", "Hamid Tizhoosh"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f01c"}, "filepath": "data/2405.16873.png", "tags": [], "_media_type": "image", "_rand": 0.9998823209923347, "arXiv_link": "https://arxiv.org/abs/2405.16873", "other_link": "", "title": "Weakly Misalignment-free Adaptive Feature Alignment for UAVs-based Multimodal Object Detection", "abstract": "In the field of 3D object detection tasks, fusing heterogeneous features from\nLiDAR and camera sensors into a unified Bird's Eye View (BEV) representation is\na widely adopted paradigm. However, existing methods are often compromised by\nimprecise sensor calibration, resulting in feature misalignment in LiDAR-camera\nBEV fusion. Moreover, such inaccuracies result in errors in depth estimation\nfor the camera branch, ultimately causing misalignment between LiDAR and camera\nBEV features. In this work, we propose a novel ContrastAlign approach that\nutilizes contrastive learning to enhance the alignment of heterogeneous\nmodalities, thereby improving the robustness of the fusion process.\nSpecifically, our approach includes the L-Instance module, which directly\noutputs LiDAR instance features within LiDAR BEV features. Then, we introduce\nthe C-Instance module, which predicts camera instance features through RoI\n(Region of Interest) pooling on the camera BEV features. We propose the\nInstanceFusion module, which utilizes contrastive learning to generate similar\ninstance features across heterogeneous modalities. We then use graph matching\nto calculate the similarity between the neighboring camera instance features\nand the similarity instance features to complete the alignment of instance\nfeatures. 
Our method achieves state-of-the-art performance, with an mAP of\n70.3%, surpassing BEVFusion by 1.8% on the nuScenes validation set.\nImportantly, our method outperforms BEVFusion by 7.3% under conditions with\nmisalignment noise.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Chen Chen", "Jiahao Qi", "Xingyue Liu", "Kangcheng Bin", "Ruigang Fu", "Xikun Hu", "Ping Zhong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f01d"}, "filepath": "data/2401.04390.png", "tags": [], "_media_type": "image", "_rand": 0.9999201839336266, "arXiv_link": "https://arxiv.org/abs/2401.04390", "other_link": "", "title": "Learning with Structural Labels for Learning with Noisy Labels", "abstract": "Labor-intensive labeling becomes a bottleneck in developing computer vision\nalgorithms based on deep learning. For this reason, dealing with imperfect\nlabels has increasingly gained attention and has become an active field of\nstudy. We address learning with noisy labels (LNL) problem, which is formalized\nas a task of finding a structured manifold in the midst of noisy data. In this\nframework, we provide a proper objective function and an optimization algorithm\nbased on two expectation-maximization (EM) cycles. The separate networks\nassociated with the two EM cycles collaborate to optimize the objective\nfunction, where one model is for distinguishing clean labels from corrupted\nones while the other is for refurbishing the corrupted labels. This approach\nresults in a non-collapsing LNL-flywheel model in the end. Experiments show\nthat our algorithm achieves state-of-the-art performance in multiple standard\nbenchmarks with substantial margins under various types of label noise.", "keywords": [], "authors_list": ["Noo-ri Kim", "Jin-Seop Lee", "Jee-Hyong Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f01e"}, "filepath": "data/2312.01663.png", "tags": [], "_media_type": "image", "_rand": 0.9991625498408031, "arXiv_link": "https://arxiv.org/abs/2312.01663", "other_link": "", "title": "Customize your NeRF: Adaptive Source Driven 3D Scene Editing via Local-Global Iterative Training", "abstract": "In this paper, we target the adaptive source driven 3D scene editing task by\nproposing a CustomNeRF model that unifies a text description or a reference\nimage as the editing prompt. However, obtaining desired editing results\nconformed with the editing prompt is nontrivial since there exist two\nsignificant challenges, including accurate editing of only foreground regions\nand multi-view consistency given a single-view reference image. To tackle the\nfirst challenge, we propose a Local-Global Iterative Editing (LGIE) training\nscheme that alternates between foreground region editing and full-image\nediting, aimed at foreground-only manipulation while preserving the background.\nFor the second challenge, we also design a class-guided regularization that\nexploits class priors within the generation model to alleviate the\ninconsistency problem among different views in image-driven editing. 
Extensive\nexperiments show that our CustomNeRF produces precise editing results under\nvarious real scenes for both text- and image-driven settings.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Runze He", "Shaofei Huang", "Xuecheng Nie", "Tianrui Hui", "Luoqi Liu", "Jiao Dai", "Jizhong Han", "Guanbin Li", "Si Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f01f"}, "filepath": "data/2403.01753.png", "tags": [], "_media_type": "image", "_rand": 0.9996992089076677, "arXiv_link": "https://arxiv.org/abs/2403.01753", "other_link": "https://github.com/zju-vipa/training_free_model_merging.", "title": "Training-free Pretrained Model Merging", "abstract": "Recently, model merging techniques have surfaced as a solution to combine\nmultiple single-talent models into a single multi-talent model. However,\nprevious endeavors in this field have either necessitated additional training\nor fine-tuning processes, or require that the models possess the same\npre-trained initialization. In this work, we identify a common drawback in\nprior works w.r.t. the inconsistency of unit similarity in the weight space and\nthe activation space. To address this inconsistency, we propose an innovative\nmodel merging framework, coined as merging under dual-space constraints\n(MuDSC). Specifically, instead of solely maximizing the objective of a single\nspace, we advocate for the exploration of permutation matrices situated in a\nregion with a unified high similarity in the dual space, achieved through the\nlinear combination of activation and weight similarity matrices. In order to\nenhance usability, we have also incorporated adaptations for group structure,\nincluding Multi-Head Attention and Group Normalization. Comprehensive\nexperimental comparisons demonstrate that MuDSC can significantly boost the\nperformance of merged models with various task combinations and architectures.\nFurthermore, the visualization of the merged model within the multi-task loss\nlandscape reveals that MuDSC enables the merged model to reside in the\noverlapping segment, featuring a unified lower loss for each task. Our code is\npublicly available at https://github.com/zju-vipa/training_free_model_merging.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Zhengqi Xu", "Ke Yuan", "Huiqiong Wang", "Yong Wang", "Mingli Song", "Jie Song"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f020"}, "filepath": "data/2211.11018.png", "tags": [], "_media_type": "image", "_rand": 0.9995275311472313, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2211.11018", "other_link": "https://magicvideo.github.io/#}", "title": "SNED: Superposition Network Architecture Search for Efficient Video Diffusion Model", "abstract": "We present an efficient text-to-video generation framework based on latent\ndiffusion models, termed MagicVideo. MagicVideo can generate smooth video clips\nthat are concordant with the given text descriptions. 
Due to a novel and\nefficient 3D U-Net design and modeling video distributions in a low-dimensional\nspace, MagicVideo can synthesize video clips with 256x256 spatial resolution on\na single GPU card, which takes around 64x fewer computations than the Video\nDiffusion Models (VDM) in terms of FLOPs. In specific, unlike existing works\nthat directly train video models in the RGB space, we use a pre-trained VAE to\nmap video clips into a low-dimensional latent space and learn the distribution\nof videos' latent codes via a diffusion model. Besides, we introduce two new\ndesigns to adapt the U-Net denoiser trained on image tasks to video data: a\nframe-wise lightweight adaptor for the image-to-video distribution adjustment\nand a directed temporal attention module to capture temporal dependencies\nacross frames. Thus, we can exploit the informative weights of convolution\noperators from a text-to-image model for accelerating video training. To\nameliorate the pixel dithering in the generated videos, we also propose a novel\nVideoVAE auto-encoder for better RGB reconstruction. We conduct extensive\nexperiments and demonstrate that MagicVideo can generate high-quality video\nclips with either realistic or imaginary content. Refer to\n\\url{https://magicvideo.github.io/#} for more examples.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Zhengang Li", "Yan Kang", "Yuchen Liu", "Difan Liu", "Tobias Hinz", "Feng Liu", "Yanzhi Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f021"}, "filepath": "data/2403.11510.png", "tags": [], "_media_type": "image", "_rand": 0.9999541554434804, "arXiv_link": "https://arxiv.org/abs/2403.11510", "other_link": "", "title": "GenFlow: Generalizable Recurrent Flow for 6D Pose Refinement of Novel Objects", "abstract": "Despite the progress of learning-based methods for 6D object pose estimation,\nthe trade-off between accuracy and scalability for novel objects still exists.\nSpecifically, previous methods for novel objects do not make good use of the\ntarget object's 3D shape information since they focus on generalization by\nprocessing the shape indirectly, making them less effective. We present\nGenFlow, an approach that enables both accuracy and generalization to novel\nobjects with the guidance of the target object's shape. Our method predicts\noptical flow between the rendered image and the observed image and refines the\n6D pose iteratively. It boosts the performance by a constraint of the 3D shape\nand the generalizable geometric knowledge learned from an end-to-end\ndifferentiable system. We further improve our model by designing a cascade\nnetwork architecture to exploit the multi-scale correlations and coarse-to-fine\nrefinement. GenFlow ranked first on the unseen object pose estimation\nbenchmarks in both the RGB and RGB-D cases. 
It also achieves performance\ncompetitive with existing state-of-the-art methods for the seen object pose\nestimation without any fine-tuning.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Sungphill Moon", "Hyeontae Son", "Dongcheol Hur", "Sangwook Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f022"}, "filepath": "data/2311.15421.png", "tags": [], "_media_type": "image", "_rand": 0.9999000784657263, "arXiv_link": "https://arxiv.org/abs/2311.15421", "other_link": "", "title": "Making Visual Sense of Oracle Bones for You and Me", "abstract": "Creating multi-view wire art (MVWA), a static 3D sculpture with diverse\ninterpretations from different viewpoints, is a complex task even for skilled\nartists. In response, we present DreamWire, an AI system enabling everyone to\ncraft MVWA easily. Users express their vision through text prompts or\nscribbles, freeing them from intricate 3D wire organisation. Our approach\nsynergises 3D B\\'ezier curves, Prim's algorithm, and knowledge distillation\nfrom diffusion models or their variants (e.g., ControlNet). This blend enables\nthe system to represent 3D wire art, ensuring spatial continuity and overcoming\ndata scarcity. Extensive evaluation and analysis are conducted to shed insight\non the inner workings of the proposed system, including the trade-off between\nconnectivity and visual aesthetics.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Runqi Qiao", "LAN YANG", "Kaiyue Pang", "Honggang Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f023"}, "filepath": "data/2312.16170v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997653198843653, "arXiv_link": "https://arxiv.org/abs/2312.16170v1", "other_link": "https://github.com/OpenRobotLab/EmbodiedScan.", "title": "EmbodiedScan: A Holistic Multi-Modal 3D Perception Suite Towards Embodied AI", "abstract": "In the realm of computer vision and robotics, embodied agents are expected to\nexplore their environment and carry out human instructions. This necessitates\nthe ability to fully understand 3D scenes given their first-person observations\nand contextualize them into language for interaction. However, traditional\nresearch focuses more on scene-level input and output setups from a global\nview. To address the gap, we introduce EmbodiedScan, a multi-modal, ego-centric\n3D perception dataset and benchmark for holistic 3D scene understanding. It\nencompasses over 5k scans encapsulating 1M ego-centric RGB-D views, 1M language\nprompts, 160k 3D-oriented boxes spanning over 760 categories, some of which\npartially align with LVIS, and dense semantic occupancy with 80 common\ncategories. Building upon this database, we introduce a baseline framework\nnamed Embodied Perceptron. It is capable of processing an arbitrary number of\nmulti-modal inputs and demonstrates remarkable 3D perception capabilities, both\nwithin the two series of benchmarks we set up, i.e., fundamental 3D perception\ntasks and language-grounded tasks, and in the wild. 
Codes, datasets, and\nbenchmarks will be available at https://github.com/OpenRobotLab/EmbodiedScan.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Tai Wang", "Xiaohan Mao", "Chenming Zhu", "Runsen Xu", "Ruiyuan Lyu", "Peisen Li", "Xiao Chen", "Wenwei Zhang", "Kai Chen", "Tianfan Xue", "Xihui Liu", "Cewu Lu", "Dahua Lin", "Jiangmiao Pang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f024"}, "filepath": "data/2308.10627.png", "tags": [], "_media_type": "image", "_rand": 0.9991865145545684, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2308.10627", "other_link": "", "title": "HouseCat6D - A Large-Scale Multi-Modal Category Level 6D Object Perception Dataset with Household Objects in Realistic Scenarios", "abstract": "6D pose estimation pipelines that rely on RGB-only or RGB-D data show\nlimitations for photometrically challenging objects with e.g. textureless\nsurfaces, reflections or transparency. A supervised learning-based method\nutilising complementary polarisation information as input modality is proposed\nto overcome such limitations. This supervised approach is then extended to a\nself-supervised paradigm by leveraging physical characteristics of polarised\nlight, thus eliminating the need for annotated real data. The methods achieve\nsignificant advancements in pose estimation by leveraging geometric information\nfrom polarised light and incorporating shape priors and invertible physical\nconstraints.", "keywords": [], "authors_list": ["HyunJun Jung", "Shun-Cheng Wu", "Patrick Ruhkamp", "Guangyao Zhai", "Hannah Schieber", "Giulia Rizzoli", "Pengyuan Wang", "Hongcheng Zhao", "Lorenzo Garattoni", "Sven Meier", "Daniel Roth", "Nassir Navab", "Benjamin Busam"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f025"}, "filepath": "data/2403.20018.png", "tags": [], "_media_type": "image", "_rand": 0.9998680280903517, "arXiv_link": "https://arxiv.org/abs/2403.20018", "other_link": "https://github.com/WU-CVGL/SCINeRF.", "title": "SCINeRF: Neural Radiance Fields from a Snapshot Compressive Image", "abstract": "In this paper, we explore the potential of Snapshot Compressive Imaging (SCI)\ntechnique for recovering the underlying 3D scene representation from a single\ntemporal compressed image. SCI is a cost-effective method that enables the\nrecording of high-dimensional data, such as hyperspectral or temporal\ninformation, into a single image using low-cost 2D imaging sensors. To achieve\nthis, a series of specially designed 2D masks are usually employed, which not\nonly reduces storage requirements but also offers potential privacy protection.\nInspired by this, to take one step further, our approach builds upon the\npowerful 3D scene representation capabilities of neural radiance fields (NeRF).\nSpecifically, we formulate the physical imaging process of SCI as part of the\ntraining of NeRF, allowing us to exploit its impressive performance in\ncapturing complex scene structures. To assess the effectiveness of our method,\nwe conduct extensive evaluations using both synthetic data and real data\ncaptured by our SCI system. 
Extensive experimental results demonstrate that our\nproposed approach surpasses the state-of-the-art methods in terms of image\nreconstruction and novel view image synthesis. Moreover, our method also\nexhibits the ability to restore high frame-rate multi-view consistent images by\nleveraging SCI and the rendering capabilities of NeRF. The code is available at\nhttps://github.com/WU-CVGL/SCINeRF.", "keywords": ["Computational imaging and physics-based vision", "Image and video generation and manipulation"], "authors_list": ["Yunhao Li", "Xiaodong Wang", "Ping Wang", "Xin Yuan", "Peidong Liu"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f026"}, "filepath": "data/2311.16510.png", "tags": [], "_media_type": "image", "_rand": 0.9995972630465829, "arXiv_link": "https://arxiv.org/abs/2311.16510", "other_link": "", "title": "Source-Free Domain Adaptation with Frozen Multimodal Foundation Model", "abstract": "Source-Free Domain Adaptation (SFDA) aims to adapt a source model for a\ntarget domain, with only access to unlabeled target training data and the\nsource model pre-trained on a supervised source domain. Relying on pseudo\nlabeling and/or auxiliary supervision, conventional methods are inevitably\nerror-prone. To mitigate this limitation, in this work we for the first time\nexplore the potentials of off-the-shelf vision-language (ViL) multimodal models\n(e.g.,CLIP) with rich whilst heterogeneous knowledge. We find that directly\napplying the ViL model to the target domain in a zero-shot fashion is\nunsatisfactory, as it is not specialized for this particular task but largely\ngeneric. To make it task specific, we propose a novel Distilling multimodal\nFoundation model(DIFO)approach. Specifically, DIFO alternates between two steps\nduring adaptation: (i) Customizing the ViL model by maximizing the mutual\ninformation with the target model in a prompt learning manner, (ii) Distilling\nthe knowledge of this customized ViL model to the target model. For more\nfine-grained and reliable distillation, we further introduce two effective\nregularization terms, namely most-likely category encouragement and predictive\nconsistency. Extensive experiments show that DIFO significantly outperforms the\nstate-of-the-art alternatives. Code is here", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Song Tang", "Wenxin Su", "Mao Ye", "Xiatian Zhu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f027"}, "filepath": "data/2312.12490.png", "tags": [], "_media_type": "image", "_rand": 0.9999668069963149, "arXiv_link": "https://arxiv.org/abs/2312.12490", "other_link": "", "title": "InstructVideo: Instructing Video Diffusion Models with Human Feedback", "abstract": "Diffusion models have emerged as the de facto paradigm for video generation.\nHowever, their reliance on web-scale data of varied quality often yields\nresults that are visually unappealing and misaligned with the textual prompts.\nTo tackle this problem, we propose InstructVideo to instruct text-to-video\ndiffusion models with human feedback by reward fine-tuning. 
InstructVideo has\ntwo key ingredients: 1) To ameliorate the cost of reward fine-tuning induced by\ngenerating through the full DDIM sampling chain, we recast reward fine-tuning\nas editing. By leveraging the diffusion process to corrupt a sampled video,\nInstructVideo requires only partial inference of the DDIM sampling chain,\nreducing fine-tuning cost while improving fine-tuning efficiency. 2) To\nmitigate the absence of a dedicated video reward model for human preferences,\nwe repurpose established image reward models, e.g., HPSv2. To this end, we\npropose Segmental Video Reward, a mechanism to provide reward signals based on\nsegmental sparse sampling, and Temporally Attenuated Reward, a method that\nmitigates temporal modeling degradation during fine-tuning. Extensive\nexperiments, both qualitative and quantitative, validate the practicality and\nefficacy of using image reward models in InstructVideo, significantly enhancing\nthe visual quality of generated videos without compromising generalization\ncapabilities. Code and models will be made publicly available.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques", "Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Hangjie Yuan", "Shiwei Zhang", "Xiang Wang", "Yujie Wei", "Tao Feng", "Yining Pan", "Yingya Zhang", "Ziwei Liu", "Samuel Albanie", "Dong Ni"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f028"}, "filepath": "data/2405.00984.png", "tags": [], "_media_type": "image", "_rand": 0.9993884495519681, "arXiv_link": "https://arxiv.org/abs/2405.00984", "other_link": "", "title": "FREE: Faster and Better Data-Free Meta-Learning", "abstract": "Data-Free Meta-Learning (DFML) aims to extract knowledge from a collection of\npre-trained models without requiring the original data, presenting practical\nbenefits in contexts constrained by data privacy concerns. Current DFML methods\nprimarily focus on the data recovery from these pre-trained models. However,\nthey suffer from slow recovery speed and overlook gaps inherent in\nheterogeneous pre-trained models. In response to these challenges, we introduce\nthe Faster and Better Data-Free Meta-Learning (FREE) framework, which contains:\n(i) a meta-generator for rapidly recovering training tasks from pre-trained\nmodels; and (ii) a meta-learner for generalizing to new unseen tasks.\nSpecifically, within the module Faster Inversion via Meta-Generator, each\npre-trained model is perceived as a distinct task. The meta-generator can\nrapidly adapt to a specific task in just five steps, significantly accelerating\nthe data recovery. Furthermore, we propose Better Generalization via\nMeta-Learner and introduce an implicit gradient alignment algorithm to optimize\nthe meta-learner. 
This is achieved as aligned gradient directions alleviate\npotential conflicts among tasks from heterogeneous pre-trained models.\nEmpirical experiments on multiple benchmarks affirm the superiority of our\napproach, marking a notable speed-up (20$\\times$) and performance enhancement\n(1.42\\% $\\sim$ 4.78\\%) in comparison to the state-of-the-art.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yongxian Wei", "Zixuan Hu", "Zhenyi Wang", "Li Shen", "Chun Yuan", "Dacheng Tao"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f029"}, "filepath": "data/2403.12033.png", "tags": [], "_media_type": "image", "_rand": 0.9996815343096852, "arXiv_link": "https://arxiv.org/abs/2403.12033", "other_link": "https://github.com/zhangce01/HiKER-SGG.", "title": "HiKER-SGG: Hierarchical Knowledge Enhanced Robust Scene Graph Generation", "abstract": "Being able to understand visual scenes is a precursor for many downstream\ntasks, including autonomous driving, robotics, and other vision-based\napproaches. A common approach enabling the ability to reason over visual data\nis Scene Graph Generation (SGG); however, many existing approaches assume\nundisturbed vision, i.e., the absence of real-world corruptions such as fog,\nsnow, smoke, as well as non-uniform perturbations like sun glare or water\ndrops. In this work, we propose a novel SGG benchmark containing procedurally\ngenerated weather corruptions and other transformations over the Visual Genome\ndataset. Further, we introduce a corresponding approach, Hierarchical Knowledge\nEnhanced Robust Scene Graph Generation (HiKER-SGG), providing a strong baseline\nfor scene graph generation under such challenging setting. At its core,\nHiKER-SGG utilizes a hierarchical knowledge graph in order to refine its\npredictions from coarse initial estimates to detailed predictions. In our\nextensive experiments, we show that HiKER-SGG does not only demonstrate\nsuperior performance on corrupted images in a zero-shot manner, but also\noutperforms current state-of-the-art methods on uncorrupted SGG tasks. Code is\navailable at https://github.com/zhangce01/HiKER-SGG.", "keywords": [], "authors_list": ["Ce Zhang", "Simon Stepputtis", "Joseph Campbell", "Katia Sycara", "Yaqi Xie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f02a"}, "filepath": "data/2402.19014.png", "tags": [], "_media_type": "image", "_rand": 0.9990103325857774, "arXiv_link": "https://arxiv.org/abs/2402.19014", "other_link": "", "title": "Enhancing Visual Document Understanding with Contrastive Learning in Large Visual-Language Models", "abstract": "Recently, the advent of Large Visual-Language Models (LVLMs) has received\nincreasing attention across various domains, particularly in the field of\nvisual document understanding (VDU). Different from conventional\nvision-language tasks, VDU is specifically concerned with text-rich scenarios\ncontaining abundant document elements. Nevertheless, the importance of\nfine-grained features remains largely unexplored within the community of LVLMs,\nleading to suboptimal performance in text-rich scenarios. In this paper, we\nabbreviate it as the fine-grained feature collapse issue. 
With the aim of\nfilling this gap, we propose a contrastive learning framework, termed Document\nObject COntrastive learning (DoCo), specifically tailored for the downstream\ntasks of VDU. DoCo leverages an auxiliary multimodal encoder to obtain the\nfeatures of document objects and align them to the visual features generated by\nthe vision encoder of LVLM, which enhances visual representation in text-rich\nscenarios. It can represent that the contrastive learning between the visual\nholistic representations and the multimodal fine-grained features of document\nobjects can assist the vision encoder in acquiring more effective visual cues,\nthereby enhancing the comprehension of text-rich documents in LVLMs. We also\ndemonstrate that the proposed DoCo serves as a plug-and-play pre-training\nmethod, which can be employed in the pre-training of various LVLMs without\ninducing any increase in computational complexity during the inference process.\nExtensive experimental results on multiple benchmarks of VDU reveal that LVLMs\nequipped with our proposed DoCo can achieve superior performance and mitigate\nthe gap between VDU and generic vision-language tasks.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Document analysis and understanding"], "authors_list": ["Xin Li", "Yunfei Wu", "Xinghua Jiang", "ZhiHao Guo", "Mingming Gong", "Haoyu Cao", "Yinsong Liu", "Deqiang Jiang", "Xing Sun"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f02b"}, "filepath": "data/2311.13099.png", "tags": [], "_media_type": "image", "_rand": 0.9992805202515924, "arXiv_link": "https://arxiv.org/abs/2311.13099", "other_link": "https://fytalon.github.io/pienerf/.", "title": "PIE-NeRF: Physics-based Interactive Elastodynamics with NeRF", "abstract": "We show that physics-based simulations can be seamlessly integrated with NeRF\nto generate high-quality elastodynamics of real-world objects. Unlike existing\nmethods, we discretize nonlinear hyperelasticity in a meshless way, obviating\nthe necessity for intermediate auxiliary shape proxies like a tetrahedral mesh\nor voxel grid. A quadratic generalized moving least square (Q-GMLS) is employed\nto capture nonlinear dynamics and large deformation on the implicit model. Such\nmeshless integration enables versatile simulations of complex and codimensional\nshapes. We adaptively place the least-square kernels according to the NeRF\ndensity field to significantly reduce the complexity of the nonlinear\nsimulation. As a result, physically realistic animations can be conveniently\nsynthesized using our method for a wide range of hyperelastic materials at an\ninteractive rate. 
For more information, please visit our project page at\nhttps://fytalon.github.io/pienerf/.", "keywords": ["Image and video generation and manipulation", "Efficient and scalable vision"], "authors_list": ["Yutao Feng", "Yintong Shang", "Xuan Li", "Tianjia Shao", "Chenfanfu Jiang", "Yin Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f02c"}, "filepath": "data/2311.17082.png", "tags": [], "_media_type": "image", "_rand": 0.999715605204951, "arXiv_link": "https://arxiv.org/abs/2311.17082", "other_link": "", "title": "DreamPropeller: Supercharge Text-to-3D Generation with Parallel Sampling", "abstract": "Recent methods such as Score Distillation Sampling (SDS) and Variational\nScore Distillation (VSD) using 2D diffusion models for text-to-3D generation\nhave demonstrated impressive generation quality. However, the long generation\ntime of such algorithms significantly degrades the user experience. To tackle\nthis problem, we propose DreamPropeller, a drop-in acceleration algorithm that\ncan be wrapped around any existing text-to-3D generation pipeline based on\nscore distillation. Our framework generalizes Picard iterations, a classical\nalgorithm for parallel sampling an ODE path, and can account for non-ODE paths\nsuch as momentum-based gradient updates and changes in dimensions during the\noptimization process as in many cases of 3D generation. We show that our\nalgorithm trades parallel compute for wallclock time and empirically achieves\nup to 4.7x speedup with a negligible drop in generation quality for all tested\nframeworks.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Linqi Zhou", "Andy Shih", "Chenlin Meng", "Stefano Ermon"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f02d"}, "filepath": "data/2307.09283.png", "tags": [], "_media_type": "image", "_rand": 0.9992943597197959, "arXiv_link": "https://arxiv.org/abs/2307.09283", "other_link": "https://github.com/THU-MIG/RepViT}.", "title": "RepViT: Revisiting Mobile CNN From ViT Perspective", "abstract": "Recently, lightweight Vision Transformers (ViTs) demonstrate superior\nperformance and lower latency, compared with lightweight Convolutional Neural\nNetworks (CNNs), on resource-constrained mobile devices. Researchers have\ndiscovered many structural connections between lightweight ViTs and lightweight\nCNNs. However, the notable architectural disparities in the block structure,\nmacro, and micro designs between them have not been adequately examined. In\nthis study, we revisit the efficient design of lightweight CNNs from ViT\nperspective and emphasize their promising prospect for mobile devices.\nSpecifically, we incrementally enhance the mobile-friendliness of a standard\nlightweight CNN, \\ie, MobileNetV3, by integrating the efficient architectural\ndesigns of lightweight ViTs. This ends up with a new family of pure lightweight\nCNNs, namely RepViT. Extensive experiments show that RepViT outperforms\nexisting state-of-the-art lightweight ViTs and exhibits favorable latency in\nvarious vision tasks. 
Notably, on ImageNet, RepViT achieves over 80\\% top-1\naccuracy with 1.0 ms latency on an iPhone 12, which is the first time for a\nlightweight model, to the best of our knowledge. Besides, when RepViT meets\nSAM, our RepViT-SAM can achieve nearly 10$\\times$ faster inference than the\nadvanced MobileSAM. Codes and models are available at\n\\url{https://github.com/THU-MIG/RepViT}.", "keywords": [], "authors_list": ["Ao Wang", "Hui Chen", "Zijia Lin", "Jungong Han", "Guiguang Ding"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f02e"}, "filepath": "data/2402.17414v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999170862653729, "arXiv_link": "https://arxiv.org/abs/2402.17414v1", "other_link": "https://github.com/microsoft/DCVC.", "title": "Neural Video Compression with Feature Modulation", "abstract": "The emerging conditional coding-based neural video codec (NVC) shows\nsuperiority over commonly-used residual coding-based codec and the latest NVC\nalready claims to outperform the best traditional codec. However, there still\nexist critical problems blocking the practicality of NVC. In this paper, we\npropose a powerful conditional coding-based NVC that solves two critical\nproblems via feature modulation. The first is how to support a wide quality\nrange in a single model. Previous NVC with this capability only supports about\n3.8 dB PSNR range on average. To tackle this limitation, we modulate the latent\nfeature of the current frame via the learnable quantization scaler. During the\ntraining, we specially design the uniform quantization parameter sampling\nmechanism to improve the harmonization of encoding and quantization. This\nresults in a better learning of the quantization scaler and helps our NVC\nsupport about 11.4 dB PSNR range. The second is how to make NVC still work\nunder a long prediction chain. We expose that the previous SOTA NVC has an\nobvious quality degradation problem when using a large intra-period setting. To\nthis end, we propose modulating the temporal feature with a periodically\nrefreshing mechanism to boost the quality. %Besides solving the above two\nproblems, we also design a single model that can support both RGB and YUV\ncolorspaces. Notably, under single intra-frame setting, our codec can achieve\n29.7\\% bitrate saving over previous SOTA NVC with 16\\% MACs reduction. Our\ncodec serves as a notable landmark in the journey of NVC evolution. The codes\nare at https://github.com/microsoft/DCVC.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Jiahao Li", "Bin Li", "Yan Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f02f"}, "filepath": "data/2403.13647.png", "tags": [], "_media_type": "image", "_rand": 0.999931228454127, "arXiv_link": "https://arxiv.org/abs/2403.13647", "other_link": "", "title": "Meta-Point Learning and Refining for Category-Agnostic Pose Estimation", "abstract": "Category-agnostic pose estimation (CAPE) aims to predict keypoints for\narbitrary classes given a few support images annotated with keypoints. 
Existing\nmethods only rely on the features extracted at support keypoints to predict or\nrefine the keypoints on query image, but a few support feature vectors are\nlocal and inadequate for CAPE. Considering that human can quickly perceive\npotential keypoints of arbitrary objects, we propose a novel framework for CAPE\nbased on such potential keypoints (named as meta-points). Specifically, we\nmaintain learnable embeddings to capture inherent information of various\nkeypoints, which interact with image feature maps to produce meta-points\nwithout any support. The produced meta-points could serve as meaningful\npotential keypoints for CAPE. Due to the inevitable gap between inherency and\nannotation, we finally utilize the identities and details offered by support\nkeypoints to assign and refine meta-points to desired keypoints in query image.\nIn addition, we propose a progressive deformable point decoder and a slacked\nregression loss for better prediction and supervision. Our novel framework not\nonly reveals the inherency of keypoints but also outperforms existing methods\nof CAPE. Comprehensive experiments and in-depth studies on large-scale MP-100\ndataset demonstrate the effectiveness of our framework.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Junjie Chen", "Jiebin Yan", "Yuming Fang", "Li Niu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f030"}, "filepath": "data/2404.12168.png", "tags": [], "_media_type": "image", "_rand": 0.9998882820731929, "arXiv_link": "https://arxiv.org/abs/2404.12168", "other_link": "", "title": "Real-World Efficient Blind Motion Deblurring via Blur Pixel Discretization", "abstract": "As recent advances in mobile camera technology have enabled the capability to\ncapture high-resolution images, such as 4K images, the demand for an efficient\ndeblurring model handling large motion has increased. In this paper, we\ndiscover that the image residual errors, i.e., blur-sharp pixel differences,\ncan be grouped into some categories according to their motion blur type and how\ncomplex their neighboring pixels are. Inspired by this, we decompose the\ndeblurring (regression) task into blur pixel discretization (pixel-level blur\nclassification) and discrete-to-continuous conversion (regression with blur\nclass map) tasks. Specifically, we generate the discretized image residual\nerrors by identifying the blur pixels and then transform them to a continuous\nform, which is computationally more efficient than naively solving the original\nregression problem with continuous values. Here, we found that the\ndiscretization result, i.e., blur segmentation map, remarkably exhibits visual\nsimilarity with the image residual errors. 
As a result, our efficient model\nshows comparable performance to state-of-the-art methods in realistic\nbenchmarks, while our method is up to 10 times computationally more efficient.", "keywords": ["Low-level vision"], "authors_list": ["Insoo Kim", "Jae Seok Choi", "Geonseok Seo", "Kinam Kwon", "Jinwoo Shin", "Hyong-Euk Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f031"}, "filepath": "data/2401.06312.png", "tags": [], "_media_type": "image", "_rand": 0.9992294787888906, "arXiv_link": "https://arxiv.org/abs/2401.06312", "other_link": "https://github.com/LabShuHangGU/MIA-VSR.", "title": "Video Super-Resolution Transformer with Masked Inter&Intra-Frame Attention", "abstract": "Recently, Vision Transformer has achieved great success in recovering missing\ndetails in low-resolution sequences, i.e., the video super-resolution (VSR)\ntask. Despite its superiority in VSR accuracy, the heavy computational burden\nas well as the large memory footprint hinder the deployment of\nTransformer-based VSR models on constrained devices. In this paper, we address\nthe above issue by proposing a novel feature-level masked processing framework:\nVSR with Masked Intra and inter frame Attention (MIA-VSR). The core of MIA-VSR\nis leveraging feature-level temporal continuity between adjacent frames to\nreduce redundant computations and make more rational use of previously enhanced\nSR features. Concretely, we propose an intra-frame and inter-frame attention\nblock which takes the respective roles of past features and input features into\nconsideration and only exploits previously enhanced features to provide\nsupplementary information. In addition, an adaptive block-wise mask prediction\nmodule is developed to skip unimportant computations according to feature\nsimilarity between adjacent frames. We conduct detailed ablation studies to\nvalidate our contributions and compare the proposed method with recent\nstate-of-the-art VSR approaches. The experimental results demonstrate that\nMIA-VSR improves the memory and computation efficiency over state-of-the-art\nmethods, without trading off PSNR accuracy. The code is available at\nhttps://github.com/LabShuHangGU/MIA-VSR.", "keywords": ["Efficient and scalable vision", "Low-level vision"], "authors_list": ["Xingyu Zhou", "Leheng Zhang", "Xiaorui Zhao", "Keze Wang", "Leida Li", "Shuhang Gu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f032"}, "filepath": "data/2402.10099.png", "tags": [], "_media_type": "image", "_rand": 0.9994686982407515, "arXiv_link": "https://arxiv.org/abs/2402.10099", "other_link": "", "title": "Any-Shift Prompting for Generalization over Distributions", "abstract": "Image-language models with prompt learning have shown remarkable advances in\nnumerous downstream vision tasks. Nevertheless, conventional prompt learning\nmethods overfit their training distribution and lose the generalization ability\non test distributions. To improve generalization across various distribution\nshifts, we propose any-shift prompting: a general probabilistic inference\nframework that considers the relationship between training and test\ndistributions during prompt learning. 
We explicitly connect training and test\ndistributions in the latent space by constructing training and test prompts in\na hierarchical architecture. Within this framework, the test prompt exploits\nthe distribution relationships to guide the generalization of the CLIP\nimage-language model from training to any test distribution. To effectively\nencode the distribution information and their relationships, we further\nintroduce a transformer inference network with a pseudo-shift training\nmechanism. The network generates the tailored test prompt with both training\nand test information in a feedforward pass, avoiding extra training costs at\ntest time. Extensive experiments on twenty-three datasets demonstrate the\neffectiveness of any-shift prompting on the generalization over various\ndistribution shifts.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Zehao Xiao", "Jiayi Shen", "Mohammad Mahdi Derakhshani", "Shengcai Liao", "Cees G. M. Snoek"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f033"}, "filepath": "data/2312.09222.png", "tags": [], "_media_type": "image", "_rand": 0.9991860433340402, "arXiv_link": "https://arxiv.org/abs/2312.09222", "other_link": "", "title": "Mosaic-SDF for 3D Generative Models", "abstract": "Current diffusion or flow-based generative models for 3D shapes divide to\ntwo: distilling pre-trained 2D image diffusion models, and training directly on\n3D shapes. When training a diffusion or flow models on 3D shapes a crucial\ndesign choice is the shape representation. An effective shape representation\nneeds to adhere three design principles: it should allow an efficient\nconversion of large 3D datasets to the representation form; it should provide a\ngood tradeoff of approximation power versus number of parameters; and it should\nhave a simple tensorial form that is compatible with existing powerful neural\narchitectures. While standard 3D shape representations such as volumetric grids\nand point clouds do not adhere to all these principles simultaneously, we\nadvocate in this paper a new representation that does. We introduce Mosaic-SDF\n(M-SDF): a simple 3D shape representation that approximates the Signed Distance\nFunction (SDF) of a given shape by using a set of local grids spread near the\nshape's boundary. The M-SDF representation is fast to compute for each shape\nindividually making it readily parallelizable; it is parameter efficient as it\nonly covers the space around the shape's boundary; and it has a simple matrix\nform, compatible with Transformer-based architectures. 
We demonstrate the\nefficacy of the M-SDF representation by using it to train a 3D generative flow\nmodel including class-conditioned generation with the 3D Warehouse dataset, and\ntext-to-3D generation using a dataset of about 600k caption-shape pairs.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Lior Yariv", "Omri Puny", "Oran Gafni", "Yaron Lipman"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f034"}, "filepath": "data/2403.01944.png", "tags": [], "_media_type": "image", "_rand": 0.9998714027231056, "arXiv_link": "https://arxiv.org/abs/2403.01944", "other_link": "https://github.com/nis-research/afa-augment", "title": "Fourier-basis functions to bridge augmentation gap: Rethinking frequency augmentation in image classification", "abstract": "Computer vision models normally witness degraded performance when deployed in\nreal-world scenarios, due to unexpected changes in inputs that were not\naccounted for during training. Data augmentation is commonly used to address\nthis issue, as it aims to increase data variety and reduce the distribution gap\nbetween training and test data. However, common visual augmentations might not\nguarantee extensive robustness of computer vision models. In this paper, we\npropose Auxiliary Fourier-basis Augmentation (AFA), a complementary technique\ntargeting augmentation in the frequency domain and filling the augmentation gap\nleft by visual augmentations. We demonstrate the utility of augmentation via\nFourier-basis additive noise in a straightforward and efficient adversarial\nsetting. Our results show that AFA benefits the robustness of models against\ncommon corruptions, OOD generalization, and consistency of performance of\nmodels against increasing perturbations, with negligible deficit to the\nstandard performance of models. It can be seamlessly integrated with other\naugmentation techniques to further boost performance. Code and models can be\nfound at: https://github.com/nis-research/afa-augment", "keywords": ["Efficient and scalable vision"], "authors_list": ["Mei Vaish", "Shunxin Wang", "Nicola Strisciuglio"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f035"}, "filepath": "data/2403.10391.png", "tags": [], "_media_type": "image", "_rand": 0.9997380592532061, "arXiv_link": "https://arxiv.org/abs/2403.10391", "other_link": "", "title": "CDMAD: Class-Distribution-Mismatch-Aware Debiasing for Class-Imbalanced Semi-Supervised Learning", "abstract": "Pseudo-label-based semi-supervised learning (SSL) algorithms trained on a\nclass-imbalanced set face two cascading challenges: 1) Classifiers tend to be\nbiased towards majority classes, and 2) Biased pseudo-labels are used for\ntraining. It is difficult to appropriately re-balance the classifiers in SSL\nbecause the class distribution of an unlabeled set is often unknown and could\nbe mismatched with that of a labeled set. We propose a novel class-imbalanced\nSSL algorithm called class-distribution-mismatch-aware debiasing (CDMAD). 
For\neach iteration of training, CDMAD first assesses the classifier's biased degree\ntowards each class by calculating the logits on an image without any patterns\n(e.g., solid color image), which can be considered irrelevant to the training\nset. CDMAD then refines biased pseudo-labels of the base SSL algorithm by\nensuring the classifier's neutrality. CDMAD uses these refined pseudo-labels\nduring the training of the base SSL algorithm to improve the quality of the\nrepresentations. In the test phase, CDMAD similarly refines biased class\npredictions on test samples. CDMAD can be seen as an extension of post-hoc\nlogit adjustment to address a challenge of incorporating the unknown class\ndistribution of the unlabeled set for re-balancing the biased classifier under\nclass distribution mismatch. CDMAD ensures Fisher consistency for the balanced\nerror. Extensive experiments verify the effectiveness of CDMAD.", "keywords": [], "authors_list": ["Hyuck Lee", "Heeyoung Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f036"}, "filepath": "data/2309.16992.png", "tags": [], "_media_type": "image", "_rand": 0.9992264648540568, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2309.16992", "other_link": "https://github.com/vignywang/SAMFeat.", "title": "LoS: Local Structure Guided Stereo Matching", "abstract": "Local feature detection and description play an important role in many\ncomputer vision tasks, which are designed to detect and describe keypoints in\n\"any scene\" and \"any downstream task\". Data-driven local feature learning\nmethods need to rely on pixel-level correspondence for training, which is\nchallenging to acquire at scale, thus hindering further improvements in\nperformance. In this paper, we propose SAMFeat to introduce SAM (segment\nanything model), a fundamental model trained on 11 million images, as a teacher\nto guide local feature learning and thus inspire higher performance on limited\ndatasets. To do so, first, we construct an auxiliary task of Pixel Semantic\nRelational Distillation (PSRD), which distillates feature relations with\ncategory-agnostic semantic information learned by the SAM encoder into a local\nfeature learning network, to improve local feature description using semantic\ndiscrimination. Second, we develop a technique called Weakly Supervised\nContrastive Learning Based on Semantic Grouping (WSC), which utilizes semantic\ngroupings derived from SAM as weakly supervised signals, to optimize the metric\nspace of local descriptors. Third, we design an Edge Attention Guidance (EAG)\nto further improve the accuracy of local feature detection and description by\nprompting the network to pay more attention to the edge region guided by SAM.\nSAMFeat's performance on various tasks such as image matching on HPatches, and\nlong-term visual localization on Aachen Day-Night showcases its superiority\nover previous local features. 
The release code is available at\nhttps://github.com/vignywang/SAMFeat.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Kunhong Li", "Longguang Wang", "Ye Zhang", "Kaiwen Xue", "Shunbo Zhou", "Yulan Guo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f037"}, "filepath": "data/2403.16398.png", "tags": [], "_media_type": "image", "_rand": 0.9990094060263753, "arXiv_link": "https://arxiv.org/abs/2403.16398", "other_link": "", "title": "Rethinking the Representation in Federated Unsupervised Learning with Non-IID Data", "abstract": "Federated learning achieves effective performance in modeling decentralized\ndata. In practice, client data are not well-labeled, which makes it potential\nfor federated unsupervised learning (FUSL) with non-IID data. However, the\nperformance of existing FUSL methods suffers from insufficient representations,\ni.e., (1) representation collapse entanglement among local and global models,\nand (2) inconsistent representation spaces among local models. The former\nindicates that representation collapse in local model will subsequently impact\nthe global model and other local models. The latter means that clients model\ndata representation with inconsistent parameters due to the deficiency of\nsupervision signals. In this work, we propose FedU2 which enhances generating\nuniform and unified representation in FUSL with non-IID data. Specifically,\nFedU2 consists of flexible uniform regularizer (FUR) and efficient unified\naggregator (EUA). FUR in each client avoids representation collapse via\ndispersing samples uniformly, and EUA in server promotes unified representation\nby constraining consistent client model updating. To extensively validate the\nperformance of FedU2, we conduct both cross-device and cross-silo evaluation\nexperiments on two benchmark datasets, i.e., CIFAR10 and CIFAR100.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Xinting Liao", "Weiming Liu", "Chaochao Chen", "Pengyang Zhou", "Fengyuan Yu", "Huabin Zhu", "Binhui Yao", "Tao Wang", "Xiaolin Zheng", "Yanchao Tan"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f038"}, "filepath": "data/2404.04876.png", "tags": [], "_media_type": "image", "_rand": 0.9995916008420758, "arXiv_link": "https://arxiv.org/abs/2404.04876", "other_link": "", "title": "Towards Detailed and Robust 3D Clothed Human Reconstruction with High-Frequency and Low-Frequency Information of Parametric Body Models", "abstract": "Reconstructing 3D clothed human involves creating a detailed geometry of\nindividuals in clothing, with applications ranging from virtual try-on, movies,\nto games. To enable practical and widespread applications, recent advances\npropose to generate a clothed human from an RGB image. However, they struggle\nto reconstruct detailed and robust avatars simultaneously. We empirically find\nthat the high-frequency (HF) and low-frequency (LF) information from a\nparametric model has the potential to enhance geometry details and improve\nrobustness to noise, respectively. Based on this, we propose HiLo, namely\nclothed human reconstruction with high- and low-frequency information, which\ncontains two components. 
1) To recover detailed geometry using HF information,\nwe propose a progressive HF Signed Distance Function to enhance the detailed 3D\ngeometry of a clothed human. We analyze that our progressive learning manner\nalleviates large gradients that hinder model convergence. 2) To achieve robust\nreconstruction against inaccurate estimation of the parametric model by using\nLF information, we propose a spatial interaction implicit function. This\nfunction effectively exploits the complementary spatial information from a\nlow-resolution voxel grid of the parametric model. Experimental results\ndemonstrate that HiLo outperforms the state-of-the-art methods by 10.43% and\n9.54% in terms of Chamfer distance on the Thuman2.0 and CAPE datasets,\nrespectively. Additionally, HiLo demonstrates robustness to noise from the\nparametric model, challenging poses, and various clothing styles.", "keywords": [], "authors_list": ["Yifan Yang", "Dong Liu", "Shuhai Zhang", "Zeshuai Deng", "Zixiong Huang", "Mingkui Tan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f039"}, "filepath": "data/2401.03989.png", "tags": [], "_media_type": "image", "_rand": 0.9996517549386996, "arXiv_link": "https://arxiv.org/abs/2401.03989", "other_link": "", "title": "MS-DETR: Efficient DETR Training with Mixed Supervision", "abstract": "DETR accomplishes end-to-end object detection through iteratively generating\nmultiple object candidates based on image features and promoting one candidate\nfor each ground-truth object. The traditional training procedure using\none-to-one supervision in the original DETR lacks direct supervision for the\nobject detection candidates.\n We aim at improving the DETR training efficiency by explicitly supervising\nthe candidate generation procedure through mixing one-to-one supervision and\none-to-many supervision. Our approach, namely MS-DETR, is simple, and places\none-to-many supervision to the object queries of the primary decoder that is\nused for inference. In comparison to existing DETR variants with one-to-many\nsupervision, such as Group DETR and Hybrid DETR, our approach does not need\nadditional decoder branches or object queries. The object queries of the\nprimary decoder in our approach directly benefit from one-to-many supervision\nand thus are superior in object candidate prediction. 
Experimental results show\nthat our approach outperforms related DETR variants, such as DN-DETR, Hybrid\nDETR, and Group DETR, and the combination with related DETR variants further\nimproves the performance.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Chuyang Zhao", "Yifan Sun", "Wenhao Wang", "Qiang Chen", "Errui Ding", "Yi Yang", "Jingdong Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f03a"}, "filepath": "data/2403.17589.png", "tags": [], "_media_type": "image", "_rand": 0.9995070399317793, "arXiv_link": "https://arxiv.org/abs/2403.17589", "other_link": "https://github.com/YBZh/DMN}.", "title": "Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models", "abstract": "With the emergence of pre-trained vision-language models like CLIP, how to\nadapt them to various downstream classification tasks has garnered significant\nattention in recent research. The adaptation strategies can be typically\ncategorized into three paradigms: zero-shot adaptation, few-shot adaptation,\nand the recently-proposed training-free few-shot adaptation. Most existing\napproaches are tailored for a specific setting and can only cater to one or two\nof these paradigms. In this paper, we introduce a versatile adaptation approach\nthat can effectively work under all three settings. Specifically, we propose\nthe dual memory networks that comprise dynamic and static memory components.\nThe static memory caches training data knowledge, enabling training-free\nfew-shot adaptation, while the dynamic memory preserves historical test\nfeatures online during the testing process, allowing for the exploration of\nadditional data insights beyond the training set. This novel capability\nenhances model performance in the few-shot setting and enables model usability\nin the absence of training data. The two memory networks employ the same\nflexible memory interactive strategy, which can operate in a training-free mode\nand can be further enhanced by incorporating learnable projection layers. Our\napproach is tested across 11 datasets under the three task settings.\nRemarkably, in the zero-shot scenario, it outperforms existing methods by over\n3\\% and even shows superior results against methods utilizing external training\ndata. Additionally, our method exhibits robust performance against natural\ndistribution shifts. 
Codes are available at \\url{https://github.com/YBZh/DMN}.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Yabin Zhang", "Wenjie Zhu", "Hui Tang", "Zhiyuan Ma", "Kaiyang Zhou", "Lei Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f03b"}, "filepath": "data/2312.03102.png", "tags": [], "_media_type": "image", "_rand": 0.9993102069063756, "arXiv_link": "https://arxiv.org/abs/2312.03102", "other_link": "", "title": "Fully Convolutional Slice-to-Volume Reconstruction for Single-Stack MRI", "abstract": "In magnetic resonance imaging (MRI), slice-to-volume reconstruction (SVR)\nrefers to computational reconstruction of an unknown 3D magnetic resonance\nvolume from stacks of 2D slices corrupted by motion. While promising, current\nSVR methods require multiple slice stacks for accurate 3D reconstruction,\nleading to long scans and limiting their use in time-sensitive applications\nsuch as fetal fMRI. Here, we propose a SVR method that overcomes the\nshortcomings of previous work and produces state-of-the-art reconstructions in\nthe presence of extreme inter-slice motion. Inspired by the recent success of\nsingle-view depth estimation methods, we formulate SVR as a single-stack motion\nestimation task and train a fully convolutional network to predict a motion\nstack for a given slice stack, producing a 3D reconstruction as a byproduct of\nthe predicted motion. Extensive experiments on the SVR of adult and fetal\nbrains demonstrate that our fully convolutional method is twice as accurate as\nprevious SVR methods. Our code is available at github.com/seannz/svr.", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Sean I. Young", "Ya\u00ebl Balbastre", "Bruce Fischl", "Polina Golland", "Juan Iglesias"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f03c"}, "filepath": "data/2403.07214.png", "tags": [], "_media_type": "image", "_rand": 0.9997926945460408, "arXiv_link": "https://arxiv.org/abs/2403.07214", "other_link": "", "title": "Text-to-Image Diffusion Models are Great Sketch-Photo Matchmakers", "abstract": "This paper, for the first time, explores text-to-image diffusion models for\nZero-Shot Sketch-based Image Retrieval (ZS-SBIR). We highlight a pivotal\ndiscovery: the capacity of text-to-image diffusion models to seamlessly bridge\nthe gap between sketches and photos. This proficiency is underpinned by their\nrobust cross-modal capabilities and shape bias, findings that are substantiated\nthrough our pilot studies. In order to harness pre-trained diffusion models\neffectively, we introduce a straightforward yet powerful strategy focused on\ntwo key aspects: selecting optimal feature layers and utilising visual and\ntextual prompts. For the former, we identify which layers are most enriched\nwith information and are best suited for the specific retrieval requirements\n(category-level or fine-grained). 
Then we employ visual and textual prompts to\nguide the model's feature extraction process, enabling it to generate more\ndiscriminative and contextually relevant cross-modal representations. Extensive\nexperiments on several benchmark datasets validate significant performance\nimprovements.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Subhadeep Koley", "Ayan Kumar Bhunia", "Aneeshan Sain", "Pinaki Nath Chowdhury", "Tao Xiang", "Yi-Zhe Song"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f03d"}, "filepath": "data/2403.19600.png", "tags": [], "_media_type": "image", "_rand": 0.9991827030486873, "arXiv_link": "https://arxiv.org/abs/2403.19600", "other_link": "https://github.com/Zhicaiwww/Diff-Mix),", "title": "Enhance Image Classification Via Inter-Class Image Mixup With Diffusion Model", "abstract": "Text-to-image (T2I) generative models have recently emerged as a powerful\ntool, enabling the creation of photo-realistic images and giving rise to a\nmultitude of applications. However, the effective integration of T2I models\ninto fundamental image classification tasks remains an open question. A\nprevalent strategy to bolster image classification performance is through\naugmenting the training set with synthetic images generated by T2I models. In\nthis study, we scrutinize the shortcomings of both current generative and\nconventional data augmentation techniques. Our analysis reveals that these\nmethods struggle to produce images that are both faithful (in terms of\nforeground objects) and diverse (in terms of background contexts) for\ndomain-specific concepts. To tackle this challenge, we introduce an innovative\ninter-class data augmentation method known as Diff-Mix\n(https://github.com/Zhicaiwww/Diff-Mix), which enriches the dataset by\nperforming image translations between classes. Our empirical results\ndemonstrate that Diff-Mix achieves a better balance between faithfulness and\ndiversity, leading to a marked improvement in performance across diverse image\nclassification scenarios, including few-shot, conventional, and long-tail\nclassifications for domain-specific datasets.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Zhicai Wang", "Longhui Wei", "Tan Wang", "Heyu Chen", "Yanbin Hao", "Xiang Wang", "Xiangnan He", "Qi Tian"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f03e"}, "filepath": "data/2405.19295.png", "tags": [], "_media_type": "image", "_rand": 0.9996509463838813, "arXiv_link": "https://arxiv.org/abs/2405.19295", "other_link": "", "title": "3D Neural Edge Reconstruction", "abstract": "Real-world objects and environments are predominantly composed of edge\nfeatures, including straight lines and curves. Such edges are crucial elements\nfor various applications, such as CAD modeling, surface meshing, lane mapping,\netc. However, existing traditional methods only prioritize lines over curves\nfor simplicity in geometric modeling. To this end, we introduce EMAP, a new\nmethod for learning 3D edge representations with a focus on both lines and\ncurves. 
Our method implicitly encodes 3D edge distance and direction in\nUnsigned Distance Functions (UDF) from multi-view edge maps. On top of this\nneural representation, we propose an edge extraction algorithm that robustly\nabstracts parametric 3D edges from the inferred edge points and their\ndirections. Comprehensive evaluations demonstrate that our method achieves\nbetter 3D edge reconstruction on multiple challenging datasets. We further show\nthat our learned UDF field enhances neural surface reconstruction by capturing\nmore details.", "keywords": ["Deep learning architectures and techniques", "Low-level vision"], "authors_list": ["Lei Li", "Songyou Peng", "Zehao Yu", "Shaohui Liu", "R\u00e9mi Pautrat", "Xiaochuan Yin", "Marc Pollefeys"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f03f"}, "filepath": "data/2403.09914.png", "tags": [], "_media_type": "image", "_rand": 0.9995431962561544, "arXiv_link": "https://arxiv.org/abs/2403.09914", "other_link": "", "title": "ProMark: Proactive Diffusion Watermarking for Causal Attribution", "abstract": "Generative AI (GenAI) is transforming creative workflows through the\ncapability to synthesize and manipulate images via high-level prompts. Yet\ncreatives are not well supported to receive recognition or reward for the use\nof their content in GenAI training. To this end, we propose ProMark, a causal\nattribution technique to attribute a synthetically generated image to its\ntraining data concepts like objects, motifs, templates, artists, or styles. The\nconcept information is proactively embedded into the input training images\nusing imperceptible watermarks, and the diffusion models (unconditional or\nconditional) are trained to retain the corresponding watermarks in generated\nimages. We show that we can embed as many as $2^{16}$ unique watermarks into\nthe training data, and each training image can contain more than one watermark.\nProMark can maintain image quality whilst outperforming correlation-based\nattribution. Finally, several qualitative examples are presented, providing the\nconfidence that the presence of the watermark conveys a causative relationship\nbetween training data and synthetic images.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Vishal Asnani", "John Collomosse", "Tu Bui", "Xiaoming Liu", "Shruti Agarwal"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f040"}, "filepath": "data/2403.09140.png", "tags": [], "_media_type": "image", "_rand": 0.9995375455731922, "arXiv_link": "https://arxiv.org/abs/2403.09140", "other_link": "https://stellarcheng.github.io/Sculpt3D/.", "title": "Sculpt3D: Multi-View Consistent Text-to-3D Generation with Sparse 3D Prior", "abstract": "Recent works on text-to-3d generation show that using only 2D diffusion\nsupervision for 3D generation tends to produce results with inconsistent\nappearances (e.g., faces on the back view) and inaccurate shapes (e.g., animals\nwith extra legs). 
Existing methods mainly address this issue by retraining\ndiffusion models with images rendered from 3D data to ensure multi-view\nconsistency while struggling to balance 2D generation quality with 3D\nconsistency. In this paper, we present a new framework Sculpt3D that equips the\ncurrent pipeline with explicit injection of 3D priors from retrieved reference\nobjects without re-training the 2D diffusion model. Specifically, we\ndemonstrate that high-quality and diverse 3D geometry can be guaranteed by\nkeypoints supervision through a sparse ray sampling approach. Moreover, to\nensure accurate appearances of different views, we further modulate the output\nof the 2D diffusion model to the correct patterns of the template views without\naltering the generated object's style. These two decoupled designs effectively\nharness 3D information from reference objects to generate 3D objects while\npreserving the generation quality of the 2D diffusion model. Extensive\nexperiments show our method can largely improve the multi-view consistency\nwhile retaining fidelity and diversity. Our project page is available at:\nhttps://stellarcheng.github.io/Sculpt3D/.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Chen Cheng", "Xiaofeng Yang", "Fan Yang", "Chengzeng Feng", "ZHOUJIE FU", "Chuan-Sheng Foo", "Guosheng Lin", "Fayao Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f041"}, "filepath": "data/2403.07222v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990684474937837, "arXiv_link": "https://arxiv.org/abs/2403.07222v2", "other_link": "", "title": "You'll Never Walk Alone: A Sketch and Text Duet for Fine-Grained Image Retrieval", "abstract": "Two primary input modalities prevail in image retrieval: sketch and text.\nWhile text is widely used for inter-category retrieval tasks, sketches have\nbeen established as the sole preferred modality for fine-grained image\nretrieval due to their ability to capture intricate visual details. In this\npaper, we question the reliance on sketches alone for fine-grained image\nretrieval by simultaneously exploring the fine-grained representation\ncapabilities of both sketch and text, orchestrating a duet between the two. The\nend result enables precise retrievals previously unattainable, allowing users\nto pose ever-finer queries and incorporate attributes like colour and\ncontextual cues from text. For this purpose, we introduce a novel\ncompositionality framework, effectively combining sketches and text using\npre-trained CLIP models, while eliminating the need for extensive fine-grained\ntextual descriptions. 
Last but not least, our system extends to novel\napplications in composed image retrieval, domain attribute transfer, and\nfine-grained generation, providing solutions for various real-world scenarios.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Subhadeep Koley", "Ayan Kumar Bhunia", "Aneeshan Sain", "Pinaki Nath Chowdhury", "Tao Xiang", "Yi-Zhe Song"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f042"}, "filepath": "data/2403.02782.png", "tags": [], "_media_type": "image", "_rand": 0.999691583828363, "arXiv_link": "https://arxiv.org/abs/2403.02782", "other_link": "", "title": "Why Not Use Your Textbook? Knowledge-Enhanced Procedure Planning of Instructional Videos", "abstract": "In this paper, we explore the capability of an agent to construct a logical\nsequence of action steps, thereby assembling a strategic procedural plan. This\nplan is crucial for navigating from an initial visual observation to a target\nvisual outcome, as depicted in real-life instructional videos. Existing works\nhave attained partial success by extensively leveraging various sources of\ninformation available in the datasets, such as heavy intermediate visual\nobservations, procedural names, or natural language step-by-step instructions,\nfor features or supervision signals. However, the task remains formidable due\nto the implicit causal constraints in the sequencing of steps and the\nvariability inherent in multiple feasible plans. To tackle these intricacies\nthat previous efforts have overlooked, we propose to enhance the capabilities\nof the agent by infusing it with procedural knowledge. This knowledge, sourced\nfrom training procedure plans and structured as a directed weighted graph,\nequips the agent to better navigate the complexities of step sequencing and its\npotential variations. We coin our approach KEPP, a novel Knowledge-Enhanced\nProcedure Planning system, which harnesses a probabilistic procedural knowledge\ngraph extracted from training data, effectively acting as a comprehensive\ntextbook for the training domain. Experimental evaluations across three\nwidely-used datasets under settings of varying complexity reveal that KEPP\nattains superior, state-of-the-art results while requiring only minimal\nsupervision.", "keywords": [], "authors_list": ["Kumaranage Ravindu Nagasinghe", "Honglu Zhou", "Malitha Gunawardhana", "Martin Renqiang Min", "Daniel Harari", "Muhammad Haris Khan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f043"}, "filepath": "data/2404.01543.png", "tags": [], "_media_type": "image", "_rand": 0.9995192973297342, "arXiv_link": "https://arxiv.org/abs/2404.01543", "other_link": "", "title": "Efficient 3D Implicit Head Avatar with Mesh-anchored Hash Table Blendshapes", "abstract": "3D head avatars built with neural implicit volumetric representations have\nachieved unprecedented levels of photorealism. However, the computational cost\nof these methods remains a significant barrier to their widespread adoption,\nparticularly in real-time applications such as virtual reality and\nteleconferencing. 
While attempts have been made to develop fast neural\nrendering approaches for static scenes, these methods cannot be simply employed\nto support realistic facial expressions, such as in the case of a dynamic\nfacial performance. To address these challenges, we propose a novel fast 3D\nneural implicit head avatar model that achieves real-time rendering while\nmaintaining fine-grained controllability and high rendering quality. Our key\nidea lies in the introduction of local hash table blendshapes, which are\nlearned and attached to the vertices of an underlying face parametric model.\nThese per-vertex hash-tables are linearly merged with weights predicted via a\nCNN, resulting in expression dependent embeddings. Our novel representation\nenables efficient density and color predictions using a lightweight MLP, which\nis further accelerated by a hierarchical nearest neighbor search method.\nExtensive experiments show that our approach runs in real-time while achieving\ncomparable rendering quality to state-of-the-arts and decent results on\nchallenging expressions.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Ziqian Bai", "Feitong Tan", "Sean Fanello", "Rohit Pandey", "Mingsong Dou", "Shichen Liu", "Ping Tan", "Yinda Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f044"}, "filepath": "data/2403.03221.png", "tags": [], "_media_type": "image", "_rand": 0.999231792114612, "arXiv_link": "https://arxiv.org/abs/2403.03221", "other_link": "", "title": "Towards Co-Evaluation of Cameras, HDR, and Algorithms for Industrial-Grade 6DoF Pose Estimation", "abstract": "Estimating relative camera poses between images has been a central problem in\ncomputer vision. Methods that find correspondences and solve for the\nfundamental matrix offer high precision in most cases. Conversely, methods\npredicting pose directly using neural networks are more robust to limited\noverlap and can infer absolute translation scale, but at the expense of reduced\nprecision. We show how to combine the best of both methods; our approach yields\nresults that are both precise and robust, while also accurately inferring\ntranslation scales. At the heart of our model lies a Transformer that (1)\nlearns to balance between solved and learned pose estimations, and (2) provides\na prior to guide a solver. 
A comprehensive analysis supports our design choices\nand demonstrates that our method adapts flexibly to various feature extractors\nand correspondence estimators, showing state-of-the-art performance in 6DoF\npose estimation on Matterport3D, InteriorNet, StreetLearn, and Map-free\nRelocalization.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Agastya Kalra", "Guy Stoppi", "Dmitrii Marin", "Vage Taamazyan", "Aarrushi Shandilya", "Rishav Agarwal", "Anton Boykov", "Aaron Chong", "Michael Stark"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f045"}, "filepath": "data/2403.02041.png", "tags": [], "_media_type": "image", "_rand": 0.9999407339721279, "arXiv_link": "https://arxiv.org/abs/2403.02041", "other_link": "", "title": "A Generative Approach for Wikipedia-Scale Visual Entity Recognition", "abstract": "In this paper, we address web-scale visual entity recognition, specifically\nthe task of mapping a given query image to one of the 6 million existing\nentities in Wikipedia. One way of approaching a problem of such scale is using\ndual-encoder models (eg CLIP), where all the entity names and query images are\nembedded into a unified space, paving the way for an approximate k-NN search.\nAlternatively, it is also possible to re-purpose a captioning model to directly\ngenerate the entity names for a given image. In contrast, we introduce a novel\nGenerative Entity Recognition (GER) framework, which given an input image\nlearns to auto-regressively decode a semantic and discriminative ``code''\nidentifying the target entity. Our experiments demonstrate the efficacy of this\nGER paradigm, showcasing state-of-the-art performance on the challenging OVEN\nbenchmark. GER surpasses strong captioning, dual-encoder, visual matching and\nhierarchical classification baselines, affirming its advantage in tackling the\ncomplexities of web-scale recognition.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Mathilde Caron", "Ahmet Iscen", "Alireza Fathi", "Cordelia Schmid"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f046"}, "filepath": "data/2403.07203.png", "tags": [], "_media_type": "image", "_rand": 0.9998504654364929, "arXiv_link": "https://arxiv.org/abs/2403.07203", "other_link": "", "title": "How to Handle Sketch-Abstraction in Sketch-Based Image Retrieval?", "abstract": "In this paper, we propose a novel abstraction-aware sketch-based image\nretrieval framework capable of handling sketch abstraction at varied levels.\nPrior works had mainly focused on tackling sub-factors such as drawing style\nand order, we instead attempt to model abstraction as a whole, and propose\nfeature-level and retrieval granularity-level designs so that the system builds\ninto its DNA the necessary means to interpret abstraction. 
On learning\nabstraction-aware features, we for the first-time harness the rich semantic\nembedding of pre-trained StyleGAN model, together with a novel\nabstraction-level mapper that deciphers the level of abstraction and\ndynamically selects appropriate dimensions in the feature matrix\ncorrespondingly, to construct a feature matrix embedding that can be freely\ntraversed to accommodate different levels of abstraction. For granularity-level\nabstraction understanding, we dictate that the retrieval model should not treat\nall abstraction-levels equally and introduce a differentiable surrogate Acc.@q\nloss to inject that understanding into the system. Different to the\ngold-standard triplet loss, our Acc.@q loss uniquely allows a sketch to\nnarrow/broaden its focus in terms of how stringent the evaluation should be -\nthe more abstract a sketch, the less stringent (higher q). Extensive\nexperiments depict our method to outperform existing state-of-the-arts in\nstandard SBIR tasks along with challenging scenarios like early retrieval,\nforensic sketch-photo matching, and style-invariant retrieval.", "keywords": [], "authors_list": ["Subhadeep Koley", "Ayan Kumar Bhunia", "Aneeshan Sain", "Pinaki Nath Chowdhury", "Tao Xiang", "Yi-Zhe Song"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f047"}, "filepath": "data/2312.15770.png", "tags": [], "_media_type": "image", "_rand": 0.9997354333788271, "arXiv_link": "https://arxiv.org/abs/2312.15770", "other_link": "https://tf-t2v.github.io/.", "title": "A Recipe for Scaling up Text-to-Video Generation with Text-free Videos", "abstract": "Diffusion-based text-to-video generation has witnessed impressive progress in\nthe past year yet still falls behind text-to-image generation. One of the key\nreasons is the limited scale of publicly available data (e.g., 10M video-text\npairs in WebVid10M vs. 5B image-text pairs in LAION), considering the high cost\nof video captioning. Instead, it could be far easier to collect unlabeled clips\nfrom video platforms like YouTube. Motivated by this, we come up with a novel\ntext-to-video generation framework, termed TF-T2V, which can directly learn\nwith text-free videos. The rationale behind is to separate the process of text\ndecoding from that of temporal modeling. To this end, we employ a content\nbranch and a motion branch, which are jointly optimized with weights shared.\nFollowing such a pipeline, we study the effect of doubling the scale of\ntraining set (i.e., video-only WebVid10M) with some randomly collected\ntext-free videos and are encouraged to observe the performance improvement (FID\nfrom 9.67 to 8.19 and FVD from 484 to 441), demonstrating the scalability of\nour approach. We also find that our model could enjoy sustainable performance\ngain (FID from 8.19 to 7.64 and FVD from 441 to 366) after reintroducing some\ntext labels for training. Finally, we validate the effectiveness and\ngeneralizability of our ideology on both native text-to-video generation and\ncompositional video synthesis paradigms. 
Code and models will be publicly\navailable at https://tf-t2v.github.io/.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Xiang Wang", "Shiwei Zhang", "Hangjie Yuan", "Zhiwu Qing", "Biao Gong", "Yingya Zhang", "Yujun Shen", "Changxin Gao", "Nong Sang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f048"}, "filepath": "data/2312.06553.png", "tags": [], "_media_type": "image", "_rand": 0.999970909935972, "arXiv_link": "https://arxiv.org/abs/2312.06553", "other_link": "", "title": "HOIAnimator: Text-Prompt Human-Object Animations Generation with Perceptive Diffusion Models", "abstract": "We address the problem of generating realistic 3D human-object interactions\n(HOIs) driven by textual prompts. To this end, we take a modular design and\ndecompose the complex task into simpler sub-tasks. We first develop a\ndual-branch diffusion model (HOI-DM) to generate both human and object motions\nconditioned on the input text, and encourage coherent motions by a\ncross-attention communication module between the human and object motion\ngeneration branches. We also develop an affordance prediction diffusion model\n(APDM) to predict the contacting area between the human and object during the\ninteractions driven by the textual prompt. The APDM is independent of the\nresults by the HOI-DM and thus can correct potential errors by the latter.\nMoreover, it stochastically generates the contacting points to diversify the\ngenerated motions. Finally, we incorporate the estimated contacting points into\nthe classifier-guidance to achieve accurate and close contact between humans\nand objects. To train and evaluate our approach, we annotate BEHAVE dataset\nwith text descriptions. Experimental results on BEHAVE and OMOMO demonstrate\nthat our approach produces realistic HOIs with various interactions and\ndifferent types of objects.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Wenfeng Song", "Xinyu Zhang", "Shuai Li", "Yang Gao", "Aimin Hao", "Xia HOU", "Chenglizhao Chen", "Ning Li", "Hong Qin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f049"}, "filepath": "data/2312.00362.png", "tags": [], "_media_type": "image", "_rand": 0.9993738882063956, "arXiv_link": "https://arxiv.org/abs/2312.00362", "other_link": "https://github.com/yuz1wan/video_distillation.", "title": "Dancing with Still Images: Video Distillation via Static-Dynamic Disentanglement", "abstract": "Recently, dataset distillation has paved the way towards efficient machine\nlearning, especially for image datasets. However, the distillation for videos,\ncharacterized by an exclusive temporal dimension, remains an underexplored\ndomain. In this work, we provide the first systematic study of video\ndistillation and introduce a taxonomy to categorize temporal compression. Our\ninvestigation reveals that the temporal information is usually not well learned\nduring distillation, and the temporal dimension of synthetic data contributes\nlittle. 
The observations motivate our unified framework of disentangling the\ndynamic and static information in the videos. It first distills the videos into\nstill images as static memory and then compensates the dynamic and motion\ninformation with a learnable dynamic memory block. Our method achieves\nstate-of-the-art on video datasets at different scales, with a notably smaller\nmemory storage budget. Our code is available at\nhttps://github.com/yuz1wan/video_distillation.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Ziyu Wang", "Yue Xu", "Cewu Lu", "Yonglu Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f04a"}, "filepath": "data/2312.02150.png", "tags": [], "_media_type": "image", "_rand": 0.999282154261733, "arXiv_link": "https://arxiv.org/abs/2312.02150", "other_link": "https://readout-guidance.github.io.", "title": "Readout Guidance: Learning Control from Diffusion Features", "abstract": "We present Readout Guidance, a method for controlling text-to-image diffusion\nmodels with learned signals. Readout Guidance uses readout heads, lightweight\nnetworks trained to extract signals from the features of a pre-trained, frozen\ndiffusion model at every timestep. These readouts can encode single-image\nproperties, such as pose, depth, and edges; or higher-order properties that\nrelate multiple images, such as correspondence and appearance similarity.\nFurthermore, by comparing the readout estimates to a user-defined target, and\nback-propagating the gradient through the readout head, these estimates can be\nused to guide the sampling process. Compared to prior methods for conditional\ngeneration, Readout Guidance requires significantly fewer added parameters and\ntraining samples, and offers a convenient and simple recipe for reproducing\ndifferent forms of conditional control under a single framework, with a single\narchitecture and sampling procedure. We showcase these benefits in the\napplications of drag-based manipulation, identity-consistent generation, and\nspatially aligned control. Project page: https://readout-guidance.github.io.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Grace Luo", "Trevor Darrell", "Oliver Wang", "Dan B Goldman", "Aleksander Holynski"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f04b"}, "filepath": "data/2312.01696v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993242972018536, "arXiv_link": "https://arxiv.org/abs/2312.01696v1", "other_link": "", "title": "BEVNeXt: Reviving Dense BEV Frameworks for 3D Object Detection", "abstract": "Recently, the rise of query-based Transformer decoders is reshaping\ncamera-based 3D object detection. These query-based decoders are surpassing the\ntraditional dense BEV (Bird's Eye View)-based methods. However, we argue that\ndense BEV frameworks remain important due to their outstanding abilities in\ndepth estimation and object localization, depicting 3D scenes accurately and\ncomprehensively. 
This paper aims to address the drawbacks of the existing dense\nBEV-based 3D object detectors by introducing our proposed enhanced components,\nincluding a CRF-modulated depth estimation module enforcing object-level\nconsistencies, a long-term temporal aggregation module with extended receptive\nfields, and a two-stage object decoder combining perspective techniques with\nCRF-modulated depth embedding. These enhancements lead to a \"modernized\" dense\nBEV framework dubbed BEVNeXt. On the nuScenes benchmark, BEVNeXt outperforms\nboth BEV-based and query-based frameworks under various settings, achieving a\nstate-of-the-art result of 64.2 NDS on the nuScenes test set.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Zhenxin Li", "Shiyi Lan", "Jose M. Alvarez", "Zuxuan Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f04c"}, "filepath": "data/2403.07234.png", "tags": [], "_media_type": "image", "_rand": 0.9991852225316016, "arXiv_link": "https://arxiv.org/abs/2403.07234", "other_link": "", "title": "It's All About Your Sketch: Democratising Sketch Control in Diffusion Models", "abstract": "This paper unravels the potential of sketches for diffusion models,\naddressing the deceptive promise of direct sketch control in generative AI. We\nimportantly democratise the process, enabling amateur sketches to generate\nprecise images, living up to the commitment of \"what you sketch is what you\nget\". A pilot study underscores the necessity, revealing that deformities in\nexisting models stem from spatial-conditioning. To rectify this, we propose an\nabstraction-aware framework, utilising a sketch adapter, adaptive time-step\nsampling, and discriminative guidance from a pre-trained fine-grained\nsketch-based image retrieval model, working synergistically to reinforce\nfine-grained sketch-photo association. Our approach operates seamlessly during\ninference without the need for textual prompts; a simple, rough sketch akin to\nwhat you and I can create suffices! We welcome everyone to examine results\npresented in the paper and its supplementary. Contributions include\ndemocratising sketch control, introducing an abstraction-aware framework, and\nleveraging discriminative guidance, validated through extensive experiments.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Subhadeep Koley", "Ayan Kumar Bhunia", "Deeptanshu Sekhri", "Aneeshan Sain", "Pinaki Nath Chowdhury", "Tao Xiang", "Yi-Zhe Song"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f04d"}, "filepath": "data/2312.01919.png", "tags": [], "_media_type": "image", "_rand": 0.9992920132144205, "arXiv_link": "https://arxiv.org/abs/2312.01919", "other_link": "", "title": "COTR: Compact Occupancy TRansformer for Vision-based 3D Occupancy Prediction", "abstract": "The autonomous driving community has shown significant interest in 3D\noccupancy prediction, driven by its exceptional geometric perception and\ngeneral object recognition capabilities. To achieve this, current works try to\nconstruct a Tri-Perspective View (TPV) or Occupancy (OCC) representation\nextending from the Bird-Eye-View perception. 
However, compressed views like TPV\nrepresentation lose 3D geometry information while raw and sparse OCC\nrepresentation requires heavy but redundant computational costs. To address the\nabove limitations, we propose Compact Occupancy TRansformer (COTR), with a\ngeometry-aware occupancy encoder and a semantic-aware group decoder to\nreconstruct a compact 3D OCC representation. The occupancy encoder first\ngenerates a compact geometrical OCC feature through efficient explicit-implicit\nview transformation. Then, the occupancy decoder further enhances the semantic\ndiscriminability of the compact OCC representation by a coarse-to-fine semantic\ngrouping strategy. Empirical experiments show that there are evident\nperformance gains across multiple baselines, e.g., COTR outperforms baselines\nwith a relative improvement of 8%-15%, demonstrating the superiority of our\nmethod.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Qihang Ma", "Xin Tan", "Yanyun Qu", "Lizhuang Ma", "Zhizhong Zhang", "Yuan Xie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f04e"}, "filepath": "data/2403.00041.png", "tags": [], "_media_type": "image", "_rand": 0.9999252530169325, "arXiv_link": "https://arxiv.org/abs/2403.00041", "other_link": "", "title": "Global and Local Prompts Cooperation via Optimal Transport for Federated Learning", "abstract": "Prompt learning in pretrained visual-language models has shown remarkable\nflexibility across various downstream tasks. Leveraging its inherent\nlightweight nature, recent research attempted to integrate the powerful\npretrained models into federated learning frameworks to simultaneously reduce\ncommunication costs and promote local training on insufficient data. Despite\nthese efforts, current federated prompt learning methods lack specialized\ndesigns to systematically address severe data heterogeneities, e.g., data\ndistribution with both label and feature shifts involved. To address this\nchallenge, we present Federated Prompts Cooperation via Optimal Transport\n(FedOTP), which introduces efficient collaborative prompt learning strategies\nto capture diverse category traits on a per-client basis. Specifically, for\neach client, we learn a global prompt to extract consensus knowledge among\nclients, and a local prompt to capture client-specific category\ncharacteristics. Unbalanced Optimal Transport is then employed to align local\nvisual features with these prompts, striking a balance between global consensus\nand local personalization. By relaxing one of the equality constraints, FedOTP\nenables prompts to focus solely on the core regions of image patches. 
Extensive\nexperiments on datasets with various types of heterogeneities have demonstrated\nthat our FedOTP outperforms the state-of-the-art methods.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Hongxia Li", "Wei Huang", "Jingya Wang", "Ye Shi"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Distributed, Parallel, and Cluster Computing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f04f"}, "filepath": "data/2307.11108.png", "tags": [], "_media_type": "image", "_rand": 0.9990330605760548, "arXiv_link": "https://arxiv.org/abs/2307.11108", "other_link": "", "title": "Rethinking the Evaluation Protocol of Domain Generalization", "abstract": "Domain generalization (DG) seeks to learn robust models that generalize well\nunder unknown distribution shifts. As a critical aspect of DG, optimizer\nselection has not been explored in depth. Currently, most DG methods follow the\nwidely used benchmark, DomainBed, and utilize Adam as the default optimizer for\nall datasets. However, we reveal that Adam is not necessarily the optimal\nchoice for the majority of current DG methods and datasets. Based on the\nperspective of loss landscape flatness, we propose a novel approach,\nFlatness-Aware Minimization for Domain Generalization (FAD), which can\nefficiently optimize both zeroth-order and first-order flatness simultaneously\nfor DG. We provide theoretical analyses of the FAD's out-of-distribution (OOD)\ngeneralization error and convergence. Our experimental results demonstrate the\nsuperiority of FAD on various DG datasets. Additionally, we confirm that FAD is\ncapable of discovering flatter optima in comparison to other zeroth-order and\nfirst-order flatness-aware optimization methods.", "keywords": [], "authors_list": ["Han Yu", "Xingxuan Zhang", "Renzhe Xu", "Jiashuo Liu", "Yue He", "Peng Cui"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f050"}, "filepath": "data/2404.03652.png", "tags": [], "_media_type": "image", "_rand": 0.999440632173602, "arXiv_link": "https://arxiv.org/abs/2404.03652", "other_link": "", "title": "The More You See in 2D, the More You Perceive in 3D", "abstract": "Humans can infer 3D structure from 2D images of an object based on past\nexperience and improve their 3D understanding as they see more images. Inspired\nby this behavior, we introduce SAP3D, a system for 3D reconstruction and novel\nview synthesis from an arbitrary number of unposed images. Given a few unposed\nimages of an object, we adapt a pre-trained view-conditioned diffusion model\ntogether with the camera poses of the images via test-time fine-tuning. The\nadapted diffusion model and the obtained camera poses are then utilized as\ninstance-specific priors for 3D reconstruction and novel view synthesis. We\nshow that as the number of input images increases, the performance of our\napproach improves, bridging the gap between optimization-based prior-less 3D\nreconstruction methods and single-image-to-3D diffusion-based methods. 
We\ndemonstrate our system on real images as well as standard synthetic benchmarks.\nOur ablation studies confirm that this adaption behavior is key for more\naccurate 3D understanding.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xinyang Han", "Zelin Gao", "Angjoo Kanazawa", "Shubham Goel", "Yossi Gandelsman"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f051"}, "filepath": "data/2403.12710.png", "tags": [], "_media_type": "image", "_rand": 0.9991370561799483, "arXiv_link": "https://arxiv.org/abs/2403.12710", "other_link": "", "title": "Selective, Interpretable and Motion Consistent Privacy Attribute Obfuscation for Action Recognition", "abstract": "Concerns for the privacy of individuals captured in public imagery have led\nto privacy-preserving action recognition. Existing approaches often suffer from\nissues arising through obfuscation being applied globally and a lack of\ninterpretability. Global obfuscation hides privacy sensitive regions, but also\ncontextual regions important for action recognition. Lack of interpretability\nerodes trust in these new technologies. We highlight the limitations of current\nparadigms and propose a solution: Human selected privacy templates that yield\ninterpretability by design, an obfuscation scheme that selectively hides\nattributes and also induces temporal consistency, which is important in action\nrecognition. Our approach is architecture agnostic and directly modifies input\nimagery, while existing approaches generally require architecture training. Our\napproach offers more flexibility, as no retraining is required, and outperforms\nalternatives on three widely used datasets.", "keywords": ["Vision applications for social good and ethics"], "authors_list": ["Filip Ilic", "He Zhao", "Thomas Pock", "Richard P. Wildes"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f052"}, "filepath": "data/2402.18786.png", "tags": [], "_media_type": "image", "_rand": 0.9990601280392912, "arXiv_link": "https://arxiv.org/abs/2402.18786", "other_link": "", "title": "OpticalDR: A Deep Optical Imaging Model for Privacy-Protective Depression Recognition", "abstract": "Depression Recognition (DR) poses a considerable challenge, especially in the\ncontext of the growing concerns surrounding privacy. Traditional automatic\ndiagnosis of DR technology necessitates the use of facial images, undoubtedly\nexpose the patient identity features and poses privacy risks. In order to\nmitigate the potential risks associated with the inappropriate disclosure of\npatient facial images, we design a new imaging system to erase the identity\ninformation of captured facial images while retain disease-relevant features.\nIt is irreversible for identity information recovery while preserving essential\ndisease-related characteristics necessary for accurate DR. More specifically,\nwe try to record a de-identified facial image (erasing the identifiable\nfeatures as much as possible) by a learnable lens, which is optimized in\nconjunction with the following DR task as well as a range of face analysis\nrelated auxiliary tasks in an end-to-end manner. 
These aforementioned\nstrategies form our final Optical deep Depression Recognition network\n(OpticalDR). Experiments on CelebA, AVEC 2013, and AVEC 2014 datasets\ndemonstrate that our OpticalDR has achieved state-of-the-art privacy protection\nperformance with an average AUC of 0.51 on popular facial recognition models,\nand competitive results for DR with MAE/RMSE of 7.53/8.48 on AVEC 2013 and\n7.89/8.82 on AVEC 2014, respectively.", "keywords": ["Medical imaging and biological vision", "Biometrics and human analysis", "Vision applications for social good and ethics"], "authors_list": ["Yuchen Pan", "Junjun Jiang", "Kui Jiang", "Zhihao Wu", "Keyuan Yu", "Xianming Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f053"}, "filepath": "data/2307.07511.png", "tags": [], "_media_type": "image", "_rand": 0.999552152784687, "arXiv_link": "https://arxiv.org/abs/2307.07511", "other_link": "", "title": "NIFTY: Neural Object Interaction Fields for Guided Human Motion Synthesis", "abstract": "We address the problem of generating realistic 3D motions of humans\ninteracting with objects in a scene. Our key idea is to create a neural\ninteraction field attached to a specific object, which outputs the distance to\nthe valid interaction manifold given a human pose as input. This interaction\nfield guides the sampling of an object-conditioned human motion diffusion\nmodel, so as to encourage plausible contacts and affordance semantics. To\nsupport interactions with scarcely available data, we propose an automated\nsynthetic data pipeline. For this, we seed a pre-trained motion model, which\nhas priors for the basics of human movement, with interaction-specific anchor\nposes extracted from limited motion capture data. Using our guided diffusion\nmodel trained on generated synthetic data, we synthesize realistic motions for\nsitting and lifting with several objects, outperforming alternative approaches\nin terms of motion quality and successful action completion. We call our\nframework NIFTY: Neural Interaction Fields for Trajectory sYnthesis.", "keywords": ["Biometrics and human analysis", "Scene analysis and understanding"], "authors_list": ["Nilesh Kulkarni", "Davis Rempe", "Kyle Genova", "Abhijit Kundu", "Justin Johnson", "David Fouhey", "Leonidas Guibas"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f054"}, "filepath": "data/2208.09602v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990297410412033, "arXiv_link": "https://arxiv.org/html/2208.09602v2", "other_link": "", "title": "On The Vulnerability of Efficient Vision Transformers to Adversarial Computation Attacks", "abstract": "The Vision Transformer has emerged as a powerful tool for image\nclassification tasks, surpassing the performance of convolutional neural\nnetworks (CNNs). Recently, many researchers have attempted to understand the\nrobustness of Transformers against adversarial attacks. However, previous\nresearches have focused solely on perturbations in the spatial domain. This\npaper proposes an additional perspective that explores the adversarial\nrobustness of Transformers against frequency-selective perturbations in the\nspectral domain. 
To facilitate comparison between these two domains, an attack\nframework is formulated as a flexible tool for implementing attacks on images\nin the spatial and spectral domains. The experiments reveal that Transformers\nrely more on phase and low frequency information, which can render them more\nvulnerable to frequency-selective attacks than CNNs. This work offers new\ninsights into the properties and adversarial robustness of Transformers.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Navaneet K L", "Soroush Abbasi Koohpayegani", "Essam Sleiman", "Hamed Pirsiavash"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f055"}, "filepath": "data/2311.13602.png", "tags": [], "_media_type": "image", "_rand": 0.9996567017178775, "arXiv_link": "https://arxiv.org/abs/2311.13602", "other_link": "", "title": "Retrieval-Augmented Layout Transformer for Content-Aware Layout Generation", "abstract": "Content-aware graphic layout generation aims to automatically arrange visual\nelements along with a given content, such as an e-commerce product image. In\nthis paper, we argue that the current layout generation approaches suffer from\nthe limited training data for the high-dimensional layout structure. We show\nthat a simple retrieval augmentation can significantly improve the generation\nquality. Our model, which is named Retrieval-Augmented Layout Transformer\n(RALF), retrieves nearest neighbor layout examples based on an input image and\nfeeds these results into an autoregressive generator. Our model can apply\nretrieval augmentation to various controllable generation tasks and yield\nhigh-quality layouts within a unified architecture. Our extensive experiments\nshow that RALF successfully generates content-aware layouts in both constrained\nand unconstrained settings and significantly outperforms the baselines.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Daichi Horita", "Naoto Inoue", "Kotaro Kikuchi", "Kota Yamaguchi", "Kiyoharu Aizawa"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f056"}, "filepath": "data/2403.10097.png", "tags": [], "_media_type": "image", "_rand": 0.9993597368330868, "arXiv_link": "https://arxiv.org/abs/2403.10097", "other_link": "", "title": "Adaptive Random Feature Regularization on Fine-tuning Deep Neural Networks", "abstract": "While fine-tuning is a de facto standard method for training deep neural\nnetworks, it still suffers from overfitting when using small target datasets.\nPrevious methods improve fine-tuning performance by maintaining knowledge of\nthe source datasets or introducing regularization terms such as contrastive\nloss. However, these methods require auxiliary source information (e.g., source\nlabels or datasets) or heavy additional computations. In this paper, we propose\na simple method called adaptive random feature regularization (AdaRand).\nAdaRand helps the feature extractors of training models to adaptively change\nthe distribution of feature vectors for downstream classification tasks without\nauxiliary source information and with reasonable computation costs. 
To this\nend, AdaRand minimizes the gap between feature vectors and random reference\nvectors that are sampled from class conditional Gaussian distributions.\nFurthermore, AdaRand dynamically updates the conditional distribution to follow\nthe currently updated feature extractors and balance the distance between\nclasses in feature spaces. Our experiments show that AdaRand outperforms the\nother fine-tuning regularization, which requires auxiliary source information\nand heavy computation costs.", "keywords": [], "authors_list": ["Shin'ya Yamaguchi", "Sekitoshi Kanai", "Kazuki Adachi", "Daiki Chijiwa"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f057"}, "filepath": "data/2404.01933.png", "tags": [], "_media_type": "image", "_rand": 0.999201184099339, "arXiv_link": "https://arxiv.org/abs/2404.01933", "other_link": "", "title": "Error Detection in Egocentric Procedural Task Videos", "abstract": "Promptly identifying procedural errors from egocentric videos in an online\nsetting is highly challenging and valuable for detecting mistakes as soon as\nthey happen. This capability has a wide range of applications across various\nfields, such as manufacturing and healthcare. The nature of procedural mistakes\nis open-set since novel types of failures might occur, which calls for\none-class classifiers trained on correctly executed procedures. However, no\ntechnique can currently detect open-set procedural mistakes online. We propose\nPREGO, the first online one-class classification model for mistake detection in\nPRocedural EGOcentric videos. PREGO is based on an online action recognition\ncomponent to model the current action, and a symbolic reasoning module to\npredict the next actions. Mistake detection is performed by comparing the\nrecognized current action with the expected future one. We evaluate PREGO on\ntwo procedural egocentric video datasets, Assembly101 and Epic-tent, which we\nadapt for online benchmarking of procedural mistake detection to establish\nsuitable benchmarks, thus defining the Assembly101-O and Epic-tent-O datasets,\nrespectively.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Shih-Po Lee", "Zijia Lu", "Zekun Zhang", "Minh Hoai", "Ehsan Elhamifar"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f058"}, "filepath": "data/2404.17184.png", "tags": [], "_media_type": "image", "_rand": 0.9994273044589213, "arXiv_link": "https://arxiv.org/abs/2404.17184", "other_link": "", "title": "Low-Rank Knowledge Decomposition for Medical Foundation Models", "abstract": "The popularity of large-scale pre-training has promoted the development of\nmedical foundation models. However, some studies have shown that although\nfoundation models exhibit strong general feature extraction capabilities, their\nperformance on specific tasks is still inferior to task-specific methods. 
In\nthis paper, we explore a new perspective called ``Knowledge Decomposition'' to\nimprove the performance on specific medical tasks, which deconstructs the\nfoundation model into multiple lightweight expert models, each dedicated to a\nparticular task, with the goal of improving specialization while concurrently\nmitigating resource expenditure. To accomplish the above objective, we design a\nnovel framework named Low-Rank Knowledge Decomposition (LoRKD), which\nexplicitly separates gradients by incorporating low-rank expert modules and the\nefficient knowledge separation convolution. Extensive experimental results\ndemonstrate that the decomposed models perform well in terms of performance and\ntransferability, even surpassing the original foundation models.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yuhang Zhou", "Haolin li", "Siyuan Du", "Jiangchao Yao", "Ya Zhang", "Yanfeng Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f059"}, "filepath": "data/2311.16473.png", "tags": [], "_media_type": "image", "_rand": 0.9994939426547789, "arXiv_link": "https://arxiv.org/abs/2311.16473", "other_link": "", "title": "GS-IR: 3D Gaussian Splatting for Inverse Rendering", "abstract": "We propose GS-IR, a novel inverse rendering approach based on 3D Gaussian\nSplatting (GS) that leverages forward mapping volume rendering to achieve\nphotorealistic novel view synthesis and relighting results. Unlike previous\nworks that use implicit neural representations and volume rendering (e.g.\nNeRF), which suffer from low expressive power and high computational\ncomplexity, we extend GS, a top-performance representation for novel view\nsynthesis, to estimate scene geometry, surface material, and environment\nillumination from multi-view images captured under unknown lighting conditions.\nThere are two main problems when introducing GS to inverse rendering: 1) GS\ndoes not support producing plausible normal natively; 2) forward mapping (e.g.\nrasterization and splatting) cannot trace the occlusion like backward mapping\n(e.g. ray tracing). To address these challenges, our GS-IR proposes an\nefficient optimization scheme that incorporates a depth-derivation-based\nregularization for normal estimation and a baking-based occlusion to model\nindirect lighting. The flexible and expressive GS representation allows us to\nachieve fast and compact geometry reconstruction, photorealistic novel view\nsynthesis, and effective physically-based rendering. 
We demonstrate the\nsuperiority of our method over baseline methods through qualitative and\nquantitative evaluations on various challenging scenes.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Zhihao Liang", "Qi Zhang", "Ying Feng", "Ying Shan", "Kui Jia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f05a"}, "filepath": "data/2310.18709.png", "tags": [], "_media_type": "image", "_rand": 0.9993909959888561, "arXiv_link": "https://arxiv.org/abs/2310.18709", "other_link": "", "title": "Unraveling Instance Associations: A Closer Look for Audio-Visual Segmentation", "abstract": "In this paper, we propose a new multi-modal task, namely audio-visual\ninstance segmentation (AVIS), in which the goal is to identify, segment, and\ntrack individual sounding object instances in audible videos, simultaneously.\nTo our knowledge, it is the first time that instance segmentation has been\nextended into the audio-visual domain. To better facilitate this research, we\nconstruct the first audio-visual instance segmentation benchmark (AVISeg).\nSpecifically, AVISeg consists of 1,258 videos with an average duration of 62.6\nseconds from YouTube and public audio-visual datasets, where 117 videos have\nbeen annotated by using an interactive semi-automatic labeling tool based on\nthe Segment Anything Model (SAM). In addition, we present a simple baseline\nmodel for the AVIS task. Our new model introduces an audio branch and a\ncross-modal fusion module to Mask2Former to locate all sounding objects.\nFinally, we evaluate the proposed method using two backbones on AVISeg. We\nbelieve that AVIS will inspire the community towards a more comprehensive\nmulti-modal understanding.", "keywords": [], "authors_list": ["Yuanhong Chen", "Yuyuan Liu", "Hu Wang", "Fengbei Liu", "Chong Wang", "Helen Frazer", "Gustavo Carneiro"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning", "Multimedia", "Sound", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f05b"}, "filepath": "data/2311.10382.png", "tags": [], "_media_type": "image", "_rand": 0.9990431736912766, "arXiv_link": "http://export.arxiv.org/abs/2311.10382", "other_link": "", "title": "Towards Generalizable Multi-Object Tracking", "abstract": "Multi-Object Tracking (MOT) remains a vital component of intelligent video\nanalysis, which aims to locate targets and maintain a consistent identity for\neach target throughout a video sequence. Existing works usually learn a\ndiscriminative feature representation, such as motion and appearance, to\nassociate the detections across frames, which are easily affected by mutual\nocclusion and background clutter in practice. In this paper, we propose a\nsimple yet effective two-stage feature learning paradigm to jointly learn\nsingle-shot and multi-shot features for different targets, so as to achieve\nrobust data association in the tracking process. For the detections without\nbeing associated, we design a novel single-shot feature learning module to\nextract discriminative features of each detection, which can efficiently\nassociate targets between adjacent frames. 
For the tracklets being lost several\nframes, we design a novel multi-shot feature learning module to extract\ndiscriminative features of each tracklet, which can accurately refind these\nlost targets after a long period. Once equipped with a simple data association\nlogic, the resulting VisualTracker can perform robust MOT based on the\nsingle-shot and multi-shot feature representations. Extensive experimental\nresults demonstrate that our method has achieved significant improvements on\nMOT17 and MOT20 datasets while reaching state-of-the-art performance on\nDanceTrack dataset.", "keywords": [], "authors_list": ["Zheng Qin", "Le Wang", "Sanping Zhou", "Panpan Fu", "Gang Hua", "Wei Tang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f05c"}, "filepath": "data/2405.07933.png", "tags": [], "_media_type": "image", "_rand": 0.9993697061051533, "arXiv_link": "https://arxiv.org/abs/2405.07933", "other_link": "", "title": "Authentic Hand Avatar from a Phone Scan via Universal Hand Model", "abstract": "The authentic 3D hand avatar with every identifiable information, such as\nhand shapes and textures, is necessary for immersive experiences in AR/VR. In\nthis paper, we present a universal hand model (UHM), which 1) can universally\nrepresent high-fidelity 3D hand meshes of arbitrary identities (IDs) and 2) can\nbe adapted to each person with a short phone scan for the authentic hand\navatar. For effective universal hand modeling, we perform tracking and modeling\nat the same time, while previous 3D hand models perform them separately. The\nconventional separate pipeline suffers from the accumulated errors from the\ntracking stage, which cannot be recovered in the modeling stage. On the other\nhand, ours does not suffer from the accumulated errors while having a much more\nconcise overall pipeline. We additionally introduce a novel image matching loss\nfunction to address a skin sliding during the tracking and modeling, while\nexisting works have not focused on it much. Finally, using learned priors from\nour UHM, we effectively adapt our UHM to each person's short phone scan for the\nauthentic hand avatar.", "keywords": ["Biometrics and human analysis", "Deep learning architectures and techniques"], "authors_list": ["Gyeongsik Moon", "Weipeng Xu", "Rohan Joshi", "Chenglei Wu", "Takaaki Shiratori"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f05d"}, "filepath": "data/2404.15383.png", "tags": [], "_media_type": "image", "_rand": 0.9995889954503298, "arXiv_link": "https://arxiv.org/abs/2404.15383", "other_link": "", "title": "WANDR: Intention-guided Human Motion Generation", "abstract": "Synthesizing natural human motions that enable a 3D human avatar to walk and\nreach for arbitrary goals in 3D space remains an unsolved problem with many\napplications. Existing methods (data-driven or using reinforcement learning)\nare limited in terms of generalization and motion naturalness. A primary\nobstacle is the scarcity of training data that combines locomotion with goal\nreaching. 
To address this, we introduce WANDR, a data-driven model that takes\nan avatar's initial pose and a goal's 3D position and generates natural human\nmotions that place the end effector (wrist) on the goal location. To solve\nthis, we introduce novel intention features that drive rich goal-oriented\nmovement. Intention guides the agent to the goal, and interactively adapts the\ngeneration to novel situations without needing to define sub-goals or the\nentire motion path. Crucially, intention allows training on datasets that have\ngoal-oriented motions as well as those that do not. WANDR is a conditional\nVariational Auto-Encoder (c-VAE), which we train using the AMASS and CIRCLE\ndatasets. We evaluate our method extensively and demonstrate its ability to\ngenerate natural and long-term motions that reach 3D goals and generalize to\nunseen goal locations. Our models and code are available for research purposes\nat wandr.is.tue.mpg.de.", "keywords": ["Biometrics and human analysis", "Deep learning architectures and techniques"], "authors_list": ["Markos Diomataris", "Nikos Athanasiou", "Omid Taheri", "Xi Wang", "Otmar Hilliges", "Michael J. Black"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f05e"}, "filepath": "data/2311.09543.png", "tags": [], "_media_type": "image", "_rand": 0.9994842914780437, "arXiv_link": "https://arxiv.org/abs/2311.09543", "other_link": "", "title": "SynSP: Synergy of Smoothness and Precision in Pose Sequences Refinement", "abstract": "Though significant progress in human pose and shape recovery from monocular\nRGB images has been made in recent years, obtaining 3D human motion with high\naccuracy and temporal consistency from videos remains challenging. Existing\nvideo-based methods tend to reconstruct human motion from global image\nfeatures, which lack detailed representation capability and limit the\nreconstruction accuracy. In this paper, we propose a Temporal-Aware Refining\nNetwork (TAR), to synchronously explore temporal-aware global and local image\nfeatures for accurate pose and shape recovery. First, a global transformer\nencoder is introduced to obtain temporal global features from static feature\nsequences. Second, a bidirectional ConvGRU network takes the sequence of\nhigh-resolution feature maps as input, and outputs temporal local feature maps\nthat maintain high resolution and capture the local motion of the human body.\nFinally, a recurrent refinement module iteratively updates estimated SMPL\nparameters by leveraging both global and local temporal information to achieve\naccurate and smooth results. 
Extensive experiments demonstrate that our TAR\nobtains more accurate results than previous state-of-the-art methods on popular\nbenchmarks, i.e., 3DPW, MPI-INF-3DHP, and Human3.6M.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Tao Wang", "Lei Jin", "Zheng Wang", "Jianshu Li", "Liang Li", "Fang Zhao", "Yu Cheng", "Li Yuan", "Li ZHOU", "Junliang Xing", "Jian Zhao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f05f"}, "filepath": "data/2403.13347.png", "tags": [], "_media_type": "image", "_rand": 0.999758163549534, "arXiv_link": "https://arxiv.org/abs/2403.13347", "other_link": "https://github.com/mlvlab/vid-TLDR.", "title": "vid-TLDR: Training Free Token merging for Light-weight Video Transformer", "abstract": "Video Transformers have become the prevalent solution for various video\ndownstream tasks with superior expressive power and flexibility. However, these\nvideo transformers suffer from heavy computational costs induced by the massive\nnumber of tokens across the entire video frames, which has been the major\nbarrier to training the model. Further, the patches irrelevant to the main\ncontents, e.g., backgrounds, degrade the generalization performance of models.\nTo tackle these issues, we propose training free token merging for lightweight\nvideo Transformer (vid-TLDR) that aims to enhance the efficiency of video\nTransformers by merging the background tokens without additional training. For\nvid-TLDR, we introduce a novel approach to capture the salient regions in\nvideos only with the attention map. Further, we introduce the saliency-aware\ntoken merging strategy by dropping the background tokens and sharpening the\nobject scores. Our experiments show that vid-TLDR significantly mitigates the\ncomputational complexity of video Transformers while achieving competitive\nperformance compared to the base model without vid-TLDR. Code is available at\nhttps://github.com/mlvlab/vid-TLDR.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Joonmyung Choi", "Sanghyeok Lee", "Jaewon Chu", "Minhyuk Choi", "Hyunwoo J. Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f060"}, "filepath": "data/2403.06793.png", "tags": [], "_media_type": "image", "_rand": 0.9995954164553776, "arXiv_link": "https://arxiv.org/abs/2403.06793", "other_link": "", "title": "Boosting Image Restoration via Priors from Pre-trained Models", "abstract": "Pre-trained models with large-scale training data, such as CLIP and Stable\nDiffusion, have demonstrated remarkable performance in various high-level\ncomputer vision tasks such as image understanding and generation from language\ndescriptions. Yet, their potential for low-level tasks such as image\nrestoration remains relatively unexplored. In this paper, we explore such\nmodels to enhance image restoration. As off-the-shelf features (OSF) from\npre-trained models do not directly serve image restoration, we propose to learn\nan additional lightweight module called Pre-Train-Guided Refinement Module\n(PTG-RM) to refine restoration results of a target restoration network with\nOSF. 
PTG-RM consists of two components, Pre-Train-Guided Spatial-Varying\nEnhancement (PTG-SVE), and Pre-Train-Guided Channel-Spatial Attention\n(PTG-CSA). PTG-SVE enables optimal short- and long-range neural operations,\nwhile PTG-CSA enhances spatial-channel attention for restoration-related\nlearning. Extensive experiments demonstrate that PTG-RM, with its compact size\n($<$1M parameters), effectively enhances restoration performance of various\nmodels across different tasks, including low-light enhancement, deraining,\ndeblurring, and denoising.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xiaogang Xu", "Shu Kong", "Tao Hu", "Zhe Liu", "Hujun Bao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f061"}, "filepath": "data/2312.03461.png", "tags": [], "_media_type": "image", "_rand": 0.9991761118465414, "arXiv_link": "https://arxiv.org/abs/2312.03461", "other_link": "", "title": "HiFi4G: High-Fidelity Human Performance Rendering via Compact Gaussian Splatting", "abstract": "We have recently seen tremendous progress in photo-real human modeling and\nrendering. Yet, efficiently rendering realistic human performance and\nintegrating it into the rasterization pipeline remains challenging. In this\npaper, we present HiFi4G, an explicit and compact Gaussian-based approach for\nhigh-fidelity human performance rendering from dense footage. Our core\nintuition is to marry the 3D Gaussian representation with non-rigid tracking,\nachieving a compact and compression-friendly representation. We first propose a\ndual-graph mechanism to obtain motion priors, with a coarse deformation graph\nfor effective initialization and a fine-grained Gaussian graph to enforce\nsubsequent constraints. Then, we utilize a 4D Gaussian optimization scheme with\nadaptive spatial-temporal regularizers to effectively balance the non-rigid\nprior and Gaussian updating. We also present a companion compression scheme\nwith residual compensation for immersive experiences on various platforms. It\nachieves a substantial compression rate of approximately 25 times, with less\nthan 2MB of storage per frame. Extensive experiments demonstrate the\neffectiveness of our approach, which significantly outperforms existing\napproaches in terms of optimization speed, rendering quality, and storage\noverhead.", "keywords": ["Efficient and scalable vision", "Biometrics and human analysis"], "authors_list": ["Yuheng Jiang", "Zhehao Shen", "Penghao Wang", "Zhuo Su", "Yu Hong", "Yingliang Zhang", "Jingyi Yu", "Lan Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f062"}, "filepath": "data/2402.17229v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995710861078139, "arXiv_link": "https://arxiv.org/abs/2402.17229v1", "other_link": "https://github.com/Purdue-M2/Fairness-Generalization", "title": "Preserving Fairness Generalization in Deepfake Detection", "abstract": "Although effective deepfake detection models have been developed in recent\nyears, recent studies have revealed that these models can result in unfair\nperformance disparities among demographic groups, such as race and gender. 
This\ncan lead to particular groups facing unfair targeting or exclusion from\ndetection, potentially allowing misclassified deepfakes to manipulate public\nopinion and undermine trust in the model. The existing method for addressing\nthis problem is providing a fair loss function. It shows good fairness\nperformance for intra-domain evaluation but does not maintain fairness for\ncross-domain testing. This highlights the significance of fairness\ngeneralization in the fight against deepfakes. In this work, we propose the\nfirst method to address the fairness generalization problem in deepfake\ndetection by simultaneously considering features, loss, and optimization\naspects. Our method employs disentanglement learning to extract demographic and\ndomain-agnostic forgery features, fusing them to encourage fair learning across\na flattened loss landscape. Extensive experiments on prominent deepfake\ndatasets demonstrate our method's effectiveness, surpassing state-of-the-art\napproaches in preserving fairness during cross-domain deepfake detection. The\ncode is available at https://github.com/Purdue-M2/Fairness-Generalization", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Li Lin", "Li Lin", "Xinan He", "Yan Ju", "Xin Wang", "Feng Ding", "Shu Hu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computers and Society", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f063"}, "filepath": "data/2311.16512.png", "tags": [], "_media_type": "image", "_rand": 0.9995201088973504, "arXiv_link": "https://arxiv.org/abs/2311.16512", "other_link": "https://github.com/VINHYU/CoSeR", "title": "CoSeR: Bridging Image and Language for Cognitive Super-Resolution", "abstract": "Existing super-resolution (SR) models primarily focus on restoring local\ntexture details, often neglecting the global semantic information within the\nscene. This oversight can lead to the omission of crucial semantic details or\nthe introduction of inaccurate textures during the recovery process. In our\nwork, we introduce the Cognitive Super-Resolution (CoSeR) framework, empowering\nSR models with the capacity to comprehend low-resolution images. We achieve\nthis by marrying image appearance and language understanding to generate a\ncognitive embedding, which not only activates prior information from large\ntext-to-image diffusion models but also facilitates the generation of\nhigh-quality reference images to optimize the SR process. To further improve\nimage fidelity, we propose a novel condition injection scheme called\n\"All-in-Attention\", consolidating all conditional information into a single\nmodule. Consequently, our method successfully restores semantically correct and\nphotorealistic details, demonstrating state-of-the-art performance across\nmultiple benchmarks. 
Code: https://github.com/VINHYU/CoSeR", "keywords": ["Multimodal models and vision-language models", "Deep learning architectures and techniques"], "authors_list": ["Haoze Sun", "Wenbo Li", "Jianzhuang Liu", "Haoyu Chen", "Renjing Pei", "Xueyi Zou", "Youliang Yan", "Yujiu Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f064"}, "filepath": "data/2403.12494.png", "tags": [], "_media_type": "image", "_rand": 0.9993545801791269, "arXiv_link": "https://arxiv.org/abs/2403.12494", "other_link": "https://github.com/YangSun22/TC-MoA", "title": "Task-Customized Mixture of Adapters for General Image Fusion", "abstract": "General image fusion aims at integrating important information from\nmulti-source images. However, due to the significant cross-task gap, the\nrespective fusion mechanism varies considerably in practice, resulting in\nlimited performance across subtasks. To handle this problem, we propose a novel\ntask-customized mixture of adapters (TC-MoA) for general image fusion,\nadaptively prompting various fusion tasks in a unified model. We borrow the\ninsight from the mixture of experts (MoE), taking the experts as efficient\ntuning adapters to prompt a pre-trained foundation model. These adapters are\nshared across different tasks and constrained by mutual information\nregularization, ensuring compatibility with different tasks while\ncomplementarity for multi-source images. The task-specific routing networks\ncustomize these adapters to extract task-specific information from different\nsources with dynamic dominant intensity, performing adaptive visual feature\nprompt fusion. Notably, our TC-MoA controls the dominant intensity bias for\ndifferent fusion tasks, successfully unifying multiple fusion tasks in a single\nmodel. Extensive experiments show that TC-MoA outperforms the competing\napproaches in learning commonalities while retaining compatibility for general\nimage fusion (multi-modal, multi-exposure, and multi-focus), and also\ndemonstrating striking controllability on more generalization experiments. The\ncode is available at https://github.com/YangSun22/TC-MoA .", "keywords": ["Multimodal models and vision-language models", "Efficient and scalable vision"], "authors_list": ["Pengfei Zhu", "Yang Sun", "Bing Cao", "Qinghua Hu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f065"}, "filepath": "data/2404.03645.png", "tags": [], "_media_type": "image", "_rand": 0.9999437543260065, "arXiv_link": "https://arxiv.org/abs/2404.03645", "other_link": "https://github.com/heshuting555/DsHmp.", "title": "Decoupling Static and Hierarchical Motion Perception for Referring Video Segmentation", "abstract": "Referring video segmentation relies on natural language expressions to\nidentify and segment objects, often emphasizing motion clues. Previous works\ntreat a sentence as a whole and directly perform identification at the\nvideo-level, mixing up static image-level cues with temporal motion cues.\nHowever, image-level features cannot well comprehend motion cues in sentences,\nand static cues are not crucial for temporal perception. 
In fact, static cues\ncan sometimes interfere with temporal perception by overshadowing motion cues.\nIn this work, we propose to decouple video-level referring expression\nunderstanding into static and motion perception, with a specific emphasis on\nenhancing temporal comprehension. Firstly, we introduce an\nexpression-decoupling module to make static cues and motion cues perform their\ndistinct role, alleviating the issue of sentence embeddings overlooking motion\ncues. Secondly, we propose a hierarchical motion perception module to capture\ntemporal information effectively across varying timescales. Furthermore, we\nemploy contrastive learning to distinguish the motions of visually similar\nobjects. These contributions yield state-of-the-art performance across five\ndatasets, including a remarkable $\\textbf{9.2%}$ $\\mathcal{J\\&F}$ improvement\non the challenging $\\textbf{MeViS}$ dataset. Code is available at\nhttps://github.com/heshuting555/DsHmp.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Shuting He", "Henghui Ding"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f066"}, "filepath": "data/2403.20225.png", "tags": [], "_media_type": "image", "_rand": 0.9996442622652321, "arXiv_link": "https://arxiv.org/abs/2403.20225", "other_link": "", "title": "MTMMC: A Large-Scale Real-World Multi-Modal Camera Tracking Benchmark", "abstract": "Multi-target multi-camera tracking is a crucial task that involves\nidentifying and tracking individuals over time using video streams from\nmultiple cameras. This task has practical applications in various fields, such\nas visual surveillance, crowd behavior analysis, and anomaly detection.\nHowever, due to the difficulty and cost of collecting and labeling data,\nexisting datasets for this task are either synthetically generated or\nartificially constructed within a controlled camera network setting, which\nlimits their ability to model real-world dynamics and generalize to diverse\ncamera configurations. To address this issue, we present MTMMC, a real-world,\nlarge-scale dataset that includes long video sequences captured by 16\nmulti-modal cameras in two different environments - campus and factory - across\nvarious time, weather, and season conditions. This dataset provides a\nchallenging test-bed for studying multi-camera tracking under diverse\nreal-world complexities and includes an additional input modality of spatially\naligned and temporally synchronized RGB and thermal cameras, which enhances the\naccuracy of multi-camera tracking. MTMMC is a super-set of existing datasets,\nbenefiting independent fields such as person detection, re-identification, and\nmultiple object tracking. We provide baselines and new learning setups on this\ndataset and set the reference scores for future studies. 
The datasets, models,\nand test server will be made publicly available.", "keywords": ["Multimodal models and vision-language models", "Biometrics and human analysis"], "authors_list": ["Sanghyun Woo", "Kwanyong Park", "Inkyu Shin", "Myungchul Kim", "In So Kweon"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f067"}, "filepath": "data/2404.19384.png", "tags": [], "_media_type": "image", "_rand": 0.9995189473886481, "arXiv_link": "https://arxiv.org/abs/2404.19384", "other_link": "https://github.com/Zhanwei-Z/PERE.", "title": "Pseudo Label Refinery for Unsupervised Domain Adaptation on Cross-dataset 3D Object Detection", "abstract": "Recent self-training techniques have shown notable improvements in\nunsupervised domain adaptation for 3D object detection (3D UDA). These\ntechniques typically select pseudo labels, i.e., 3D boxes, to supervise models\nfor the target domain. However, this selection process inevitably introduces\nunreliable 3D boxes, in which 3D points cannot be definitively assigned as\nforeground or background. Previous techniques mitigate this by reweighting\nthese boxes as pseudo labels, but these boxes can still poison the training\nprocess. To resolve this problem, in this paper, we propose a novel pseudo\nlabel refinery framework. Specifically, in the selection process, to improve\nthe reliability of pseudo boxes, we propose a complementary augmentation\nstrategy. This strategy involves either removing all points within an\nunreliable box or replacing it with a high-confidence box. Moreover, the point\nnumbers of instances in high-beam datasets are considerably higher than those\nin low-beam datasets, also degrading the quality of pseudo labels during the\ntraining process. We alleviate this issue by generating additional proposals\nand aligning RoI features across different domains. Experimental results\ndemonstrate that our method effectively enhances the quality of pseudo labels\nand consistently surpasses the state-of-the-art methods on six autonomous\ndriving benchmarks. Code will be available at\nhttps://github.com/Zhanwei-Z/PERE.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Zhanwei Zhang", "Minghao Chen", "Shuai Xiao", "Liang Peng", "Hengjia Li", "Binbin Lin", "Ping Li", "Wenxiao Wang", "Boxi Wu", "Deng Cai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f068"}, "filepath": "data/2405.15225.png", "tags": [], "_media_type": "image", "_rand": 0.9990698428020047, "arXiv_link": "https://arxiv.org/abs/2405.15225", "other_link": "", "title": "Unbiased Faster R-CNN for Single-source Domain Generalized Object Detection", "abstract": "Single-source domain generalization (SDG) for object detection is a\nchallenging yet essential task as the distribution bias of the unseen domain\ndegrades the algorithm performance significantly. However, existing methods\nattempt to extract domain-invariant features, neglecting that the biased data\nleads the network to learn biased features that are non-causal and poorly\ngeneralizable. To this end, we propose an Unbiased Faster R-CNN (UFR) for\ngeneralizable feature learning. 
Specifically, we formulate SDG in object\ndetection from a causal perspective and construct a Structural Causal Model\n(SCM) to analyze the data bias and feature bias in the task, which are caused\nby scene confounders and object attribute confounders. Based on the SCM, we\ndesign a Global-Local Transformation module for data augmentation, which\neffectively simulates domain diversity and mitigates the data bias.\nAdditionally, we introduce a Causal Attention Learning module that incorporates\na designed attention invariance loss to learn image-level features that are\nrobust to scene confounders. Moreover, we develop a Causal Prototype Learning\nmodule with an explicit instance constraint and an implicit prototype\nconstraint, which further alleviates the negative impact of object attribute\nconfounders. Experimental results on five scenes demonstrate the prominent\ngeneralization ability of our method, with an improvement of 3.9% mAP on the\nNight-Clear scene.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Yajing Liu", "Shijun Zhou", "Xiyao Liu", "chunhui Hao", "Baojie Fan", "Jiandong Tian"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f069"}, "filepath": "data/2403.08506.png", "tags": [], "_media_type": "image", "_rand": 0.9998758173280708, "arXiv_link": "https://arxiv.org/abs/2403.08506", "other_link": "", "title": "DiPrompT: Disentangled Prompt Tuning for Multiple Latent Domain Generalization in Federated Learning", "abstract": "Federated learning (FL) has emerged as a powerful paradigm for learning from\ndecentralized data, and federated domain generalization further considers the\ntest dataset (target domain) is absent from the decentralized training data\n(source domains). However, most existing FL methods assume that domain labels\nare provided during training, and their evaluation imposes explicit constraints\non the number of domains, which must strictly match the number of clients.\nBecause of the underutilization of numerous edge devices and additional\ncross-client domain annotations in the real world, such restrictions may be\nimpractical and involve potential privacy leaks. In this paper, we propose an\nefficient and novel approach, called Disentangled Prompt Tuning (DiPrompT), a\nmethod that tackles the above restrictions by learning adaptive prompts for\ndomain generalization in a distributed manner. Specifically, we first design\ntwo types of prompts, i.e., global prompt to capture general knowledge across\nall clients and domain prompts to capture domain-specific knowledge. They\neliminate the restriction on the one-to-one mapping between source domains and\nlocal clients. Furthermore, a dynamic query metric is introduced to\nautomatically search the suitable domain label for each sample, which includes\ntwo-substep text-image alignments based on prompt tuning without\nlabor-intensive annotation. 
Extensive experiments on multiple datasets\ndemonstrate that our DiPrompT achieves superior domain generalization\nperformance over state-of-the-art FL methods when domain labels are not\nprovided, and even outperforms many centralized learning methods using domain\nlabels.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Sikai Bai", "Jie ZHANG", "Song Guo", "Shuaicheng Li", "Jingcai Guo", "Jun Hou", "Tao Han", "Xiaocheng Lu"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f06a"}, "filepath": "data/2403.10615.png", "tags": [], "_media_type": "image", "_rand": 0.9993398963107096, "arXiv_link": "https://arxiv.org/abs/2403.10615", "other_link": "", "title": "LightIt: Illumination Modeling and Control for Diffusion Models", "abstract": "We introduce LightIt, a method for explicit illumination control for image\ngeneration. Recent generative methods lack lighting control, which is crucial\nto numerous artistic aspects of image generation such as setting the overall\nmood or cinematic appearance. To overcome these limitations, we propose to\ncondition the generation on shading and normal maps. We model the lighting with\nsingle bounce shading, which includes cast shadows. We first train a shading\nestimation module to generate a dataset of real-world images and shading pairs.\nThen, we train a control network using the estimated shading and normals as\ninput. Our method demonstrates high-quality image generation and lighting\ncontrol in numerous scenes. Additionally, we use our generated dataset to train\nan identity-preserving relighting model, conditioned on an image and a target\nshading. Our method is the first that enables the generation of images with\ncontrollable, consistent lighting and performs on par with specialized\nrelighting state-of-the-art methods.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Peter Kocsis", "Kalyan Sunkavalli", "Julien Philip", "Matthias Nie\u00dfner", "Yannick Hold-Geoffroy"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Machine Learning", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f06b"}, "filepath": "data/2402.17464.png", "tags": [], "_media_type": "image", "_rand": 0.9997060938147136, "arXiv_link": "https://arxiv.org/abs/2402.17464", "other_link": "https://github.com/pkudba/3DHPA.", "title": "Generative 3D Part Assembly via Part-Whole-Hierarchy Message Passing", "abstract": "Generative 3D part assembly involves understanding part relationships and\npredicting their 6-DoF poses for assembling a realistic 3D shape. Prior work\noften focus on the geometry of individual parts, neglecting part-whole\nhierarchies of objects. Leveraging two key observations: 1) super-part poses\nprovide strong hints about part poses, and 2) predicting super-part poses is\neasier due to fewer superparts, we propose a part-whole-hierarchy message\npassing network for efficient 3D part assembly. We first introduce super-parts\nby grouping geometrically similar parts without any semantic labels. Then we\nemploy a part-whole hierarchical encoder, wherein a super-part encoder predicts\nlatent super-part poses based on input parts. 
Subsequently, we transform the\npoint cloud using the latent poses, feeding it to the part encoder for\naggregating super-part information and reasoning about part relationships to\npredict all part poses. In training, only ground-truth part poses are required.\nDuring inference, the predicted latent poses of super-parts enhance\ninterpretability. Experimental results on the PartNet dataset show that our\nmethod achieves state-of-the-art performance in part and connectivity accuracy\nand enables an interpretable hierarchical part assembly. Code is available at\nhttps://github.com/pkudba/3DHPA.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Bi'an Du", "Xiang Gao", "Wei Hu", "Renjie Liao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f06c"}, "filepath": "data/2309.05073.png", "tags": [], "_media_type": "image", "_rand": 0.999822701958319, "arXiv_link": "https://arxiv.org/abs/2309.05073", "other_link": "https://wangjiongw.github.io/freeman.", "title": "FreeMan: Towards benchmarking 3D human pose estimation under Real-World Conditions", "abstract": "Estimating the 3D structure of the human body from natural scenes is a\nfundamental aspect of visual perception. 3D human pose estimation is a vital\nstep in advancing fields like AIGC and human-robot interaction, serving as a\ncrucial technique for understanding and interacting with human actions in\nreal-world settings. However, the current datasets, often collected under\nsingle laboratory conditions using complex motion capture equipment and\nunvarying backgrounds, are insufficient. The absence of datasets on variable\nconditions is stalling the progress of this crucial task. To facilitate the\ndevelopment of 3D pose estimation, we present FreeMan, the first large-scale,\nmulti-view dataset collected under the real-world conditions. FreeMan was\ncaptured by synchronizing 8 smartphones across diverse scenarios. It comprises\n11M frames from 8000 sequences, viewed from different perspectives. These\nsequences cover 40 subjects across 10 different scenarios, each with varying\nlighting conditions. We have also established an semi-automated pipeline\ncontaining error detection to reduce the workload of manual check and ensure\nprecise annotation. We provide comprehensive evaluation baselines for a range\nof tasks, underlining the significant challenges posed by FreeMan. Further\nevaluations of standard indoor/outdoor human sensing datasets reveal that\nFreeMan offers robust representation transferability in real and complex\nscenes. 
Code and data are available at https://wangjiongw.github.io/freeman.", "keywords": ["Biometrics and human analysis", "Scene analysis and understanding"], "authors_list": ["Jiong WANG", "Fengyu Yang", "Bingliang Li", "Wenbo Gou", "Danqi Yan", "Ailing Zeng", "Ailing Zeng", "Yijun Gao", "Junle Wang", "Yanqing Jing", "Ruimao Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f06d"}, "filepath": "data/2312.13286.png", "tags": [], "_media_type": "image", "_rand": 0.9996965154690065, "arXiv_link": "https://arxiv.org/abs/2312.13286", "other_link": "", "title": "Generative Multimodal Models are In-Context Learners", "abstract": "The human ability to easily solve multimodal tasks in context (i.e., with\nonly a few demonstrations or simple instructions), is what current multimodal\nsystems have largely struggled to imitate. In this work, we demonstrate that\nthe task-agnostic in-context learning capabilities of large multimodal models\ncan be significantly enhanced by effective scaling-up. We introduce Emu2, a\ngenerative multimodal model with 37 billion parameters, trained on large-scale\nmultimodal sequences with a unified autoregressive objective. Emu2 exhibits\nstrong multimodal in-context learning abilities, even emerging to solve tasks\nthat require on-the-fly reasoning, such as visual prompting and object-grounded\ngeneration. The model sets a new record on multiple multimodal understanding\ntasks in few-shot settings. When instruction-tuned to follow specific\ninstructions, Emu2 further achieves new state-of-the-art on challenging tasks\nsuch as question answering benchmarks for large multimodal models and\nopen-ended subject-driven generation. These achievements demonstrate that Emu2\ncan serve as a base model and general-purpose interface for a wide range of\nmultimodal tasks. Code and models are publicly available to facilitate future\nresearch.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Quan Sun", "Yufeng Cui", "Yufeng Cui", "Xiaosong Zhang", "Fan Zhang", "Qiying Yu", "Yueze Wang", "Yongming Rao", "Jingjing Liu", "Tiejun Huang", "Xinlong Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f06e"}, "filepath": "data/2402.12712.png", "tags": [], "_media_type": "image", "_rand": 0.9996146416355116, "arXiv_link": "https://arxiv.org/abs/2402.12712", "other_link": "https://mvdiffusion-plusplus.github.io.", "title": "SVDTree: Semantic Voxel Diffusion for Single Image Tree Reconstruction", "abstract": "This paper presents a neural architecture MVDiffusion++ for 3D object\nreconstruction that synthesizes dense and high-resolution views of an object\ngiven one or a few images without camera poses. 
MVDiffusion++ achieves superior\nflexibility and scalability with two surprisingly simple ideas: 1) A\n``pose-free architecture'' where standard self-attention among 2D latent\nfeatures learns 3D consistency across an arbitrary number of conditional and\ngeneration views without explicitly using camera pose information; and 2) A\n``view dropout strategy'' that discards a substantial number of output views\nduring training, which reduces the training-time memory footprint and enables\ndense and high-resolution view synthesis at test time. We use the Objaverse for\ntraining and the Google Scanned Objects for evaluation with standard novel view\nsynthesis and 3D reconstruction metrics, where MVDiffusion++ significantly\noutperforms the current state of the arts. We also demonstrate a text-to-3D\napplication example by combining MVDiffusion++ with a text-to-image generative\nmodel. The project page is at https://mvdiffusion-plusplus.github.io.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Yuan Li", "Zhihao Liu", "Bedrich Benes", "Xiaopeng Zhang", "Jianwei Guo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f06f"}, "filepath": "data/2404.18433.png", "tags": [], "_media_type": "image", "_rand": 0.9990131866026437, "arXiv_link": "https://arxiv.org/abs/2404.18433", "other_link": "", "title": "HomoFormer: Homogenized Transformer for Image Shadow Removal", "abstract": "Transformer recently emerged as the de facto model for computer vision tasks\nand has also been successfully applied to shadow removal. However, these\nexisting methods heavily rely on intricate modifications to the attention\nmechanisms within the transformer blocks while using a generic patch embedding.\nAs a result, it often leads to complex architectural designs requiring\nadditional computation resources. In this work, we aim to explore the efficacy\nof incorporating shadow information within the early processing stage.\nAccordingly, we propose a transformer-based framework with a novel patch\nembedding that is tailored for shadow removal, dubbed ShadowMaskFormer.\nSpecifically, we present a simple and effective mask-augmented patch embedding\nto integrate shadow information and promote the model's emphasis on acquiring\nknowledge for shadow regions. 
Extensive experiments conducted on the ISTD,\nISTD+, and SRD benchmark datasets demonstrate the efficacy of our method\nagainst state-of-the-art approaches while using fewer model parameters.", "keywords": ["Low-level vision", "Efficient and scalable vision"], "authors_list": ["Jie Xiao", "Xueyang Fu", "Yurui Zhu", "Dong Li", "Jie Huang", "Kai Zhu", "Zheng-Jun Zha"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f070"}, "filepath": "data/2405.06283.png", "tags": [], "_media_type": "image", "_rand": 0.9995474176366914, "arXiv_link": "https://arxiv.org/abs/2405.06283", "other_link": "https://github.com/SSDUT-Caiyq/UFG-NCD.", "title": "Novel Class Discovery for Ultra-Fine-Grained Visual Categorization", "abstract": "Ultra-fine-grained visual categorization (Ultra-FGVC) aims at distinguishing\nhighly similar sub-categories within fine-grained objects, such as different\nsoybean cultivars. Compared to traditional fine-grained visual categorization,\nUltra-FGVC encounters more hurdles due to the small inter-class and large\nintra-class variation. Given these challenges, relying on human annotation for\nUltra-FGVC is impractical. To this end, our work introduces a novel task termed\nUltra-Fine-Grained Novel Class Discovery (UFG-NCD), which leverages partially\nannotated data to identify new categories of unlabeled images for Ultra-FGVC.\nTo tackle this problem, we devise a Region-Aligned Proxy Learning (RAPL)\nframework, which comprises a Channel-wise Region Alignment (CRA) module and a\nSemi-Supervised Proxy Learning (SemiPL) strategy. The CRA module is designed to\nextract and utilize discriminative features from local regions, facilitating\nknowledge transfer from labeled to unlabeled classes. Furthermore, SemiPL\nstrengthens representation learning and knowledge transfer with proxy-guided\nsupervised learning and proxy-guided contrastive learning. Such techniques\nleverage class distribution information in the embedding space, improving the\nmining of subtle differences between labeled and unlabeled ultra-fine-grained\nclasses. Extensive experiments demonstrate that RAPL significantly outperforms\nbaselines across various datasets, indicating its effectiveness in handling the\nchallenges of UFG-NCD. Code is available at\nhttps://github.com/SSDUT-Caiyq/UFG-NCD.", "keywords": [], "authors_list": ["Qi Jia", "Yaqi Cai", "Qi Jia", "Binglin Qiu", "Weimin Wang", "Nan Pu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f071"}, "filepath": "data/2403.19501.png", "tags": [], "_media_type": "image", "_rand": 0.9999617163694827, "arXiv_link": "https://arxiv.org/abs/2403.19501", "other_link": "", "title": "RELI11D: A Comprehensive Multimodal Human Motion Dataset and Method", "abstract": "Comprehensive capturing of human motions requires both accurate captures of\ncomplex poses and precise localization of the human within scenes. Most of the\nHPE datasets and methods primarily rely on RGB, LiDAR, or IMU data. However,\nsolely using these modalities or a combination of them may not be adequate for\nHPE, particularly for complex and fast movements. 
For holistic human motion\nunderstanding, we present RELI11D, a high-quality multimodal human motion\ndataset involves LiDAR, IMU system, RGB camera, and Event camera. It records\nthe motions of 10 actors performing 5 sports in 7 scenes, including 3.32 hours\nof synchronized LiDAR point clouds, IMU measurement data, RGB videos and Event\nsteams. Through extensive experiments, we demonstrate that the RELI11D presents\nconsiderable challenges and opportunities as it contains many rapid and complex\nmotions that require precise location. To address the challenge of integrating\ndifferent modalities, we propose LEIR, a multimodal baseline that effectively\nutilizes LiDAR Point Cloud, Event stream, and RGB through our cross-attention\nfusion strategy. We show that LEIR exhibits promising results for rapid motions\nand daily motions and that utilizing the characteristics of multiple modalities\ncan indeed improve HPE performance. Both the dataset and source code will be\nreleased publicly to the research community, fostering collaboration and\nenabling further exploration in this field.", "keywords": ["Multimodal models and vision-language models", "Biometrics and human analysis"], "authors_list": ["Ming Yan", "Yan Zhang", "Shuqiang Cai", "Shuqi Fan", "Xincheng Lin", "Yudi Dai", "Siqi Shen", "Chenglu Wen", "Lan Xu", "Yuexin Ma", "Cheng Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f072"}, "filepath": "data/2403.10518.png", "tags": [], "_media_type": "image", "_rand": 0.9998580849127298, "arXiv_link": "https://arxiv.org/abs/2403.10518", "other_link": "", "title": "Lodge: A Coarse to Fine Diffusion Network for Long Dance Generation guided by the Characteristic Dance Primitives", "abstract": "We propose Lodge, a network capable of generating extremely long dance\nsequences conditioned on given music. We design Lodge as a two-stage coarse to\nfine diffusion architecture, and propose the characteristic dance primitives\nthat possess significant expressiveness as intermediate representations between\ntwo diffusion models. The first stage is global diffusion, which focuses on\ncomprehending the coarse-level music-dance correlation and production\ncharacteristic dance primitives. In contrast, the second-stage is the local\ndiffusion, which parallelly generates detailed motion sequences under the\nguidance of the dance primitives and choreographic rules. In addition, we\npropose a Foot Refine Block to optimize the contact between the feet and the\nground, enhancing the physical realism of the motion. Our approach can\nparallelly generate dance sequences of extremely long length, striking a\nbalance between global choreographic patterns and local motion quality and\nexpressiveness. 
Extensive experiments validate the efficacy of our method.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Ronghui Li", "Yuxiang Zhang", "Yachao Zhang", "Hongwen Zhang", "Jie Guo", "Yan Zhang", "Yebin Liu", "Xiu Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Sound", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f073"}, "filepath": "data/2401.00979.png", "tags": [], "_media_type": "image", "_rand": 0.9990130512980884, "arXiv_link": "https://arxiv.org/abs/2401.00979", "other_link": "https://github.com/XuanHuang0/VANeRF}.", "title": "ExtraNeRF: Visibility-Aware View Extrapolation of Neural Radiance Fields with Diffusion Models", "abstract": "Neural radiance fields (NeRFs) are promising 3D representations for scenes,\nobjects, and humans. However, most existing methods require multi-view inputs\nand per-scene training, which limits their real-life applications. Moreover,\ncurrent methods focus on single-subject cases, leaving scenes of interacting\nhands that involve severe inter-hand occlusions and challenging view variations\nremain unsolved. To tackle these issues, this paper proposes a generalizable\nvisibility-aware NeRF (VA-NeRF) framework for interacting hands. Specifically,\ngiven an image of interacting hands as input, our VA-NeRF first obtains a\nmesh-based representation of hands and extracts their corresponding geometric\nand textural features. Subsequently, a feature fusion module that exploits the\nvisibility of query points and mesh vertices is introduced to adaptively merge\nfeatures of both hands, enabling the recovery of features in unseen areas.\nAdditionally, our VA-NeRF is optimized together with a novel discriminator\nwithin an adversarial learning paradigm. In contrast to conventional\ndiscriminators that predict a single real/fake label for the synthesized image,\nthe proposed discriminator generates a pixel-wise visibility map, providing\nfine-grained supervision for unseen areas and encouraging the VA-NeRF to\nimprove the visual quality of synthesized images. Experiments on the\nInterhand2.6M dataset demonstrate that our proposed VA-NeRF outperforms\nconventional NeRFs significantly. Project Page:\n\\url{https://github.com/XuanHuang0/VANeRF}.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Meng-Li Shih", "Wei-Chiu Ma", "Lorenzo Boyice", "Aleksander Holynski", "Forrester Cole", "Brian Curless", "Janne Kontkanen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f074"}, "filepath": "data/2404.15672.png", "tags": [], "_media_type": "image", "_rand": 0.9998269641940406, "arXiv_link": "https://arxiv.org/abs/2404.15672", "other_link": "https://github.com/JLiangLab/Eden.", "title": "Representing Part-Whole Hierarchies in Foundation Models by Learning Localizability, Composability, and Decomposability from Anatomy via Self-Supervision", "abstract": "Humans effortlessly interpret images by parsing them into part-whole\nhierarchies; deep learning excels in learning multi-level feature spaces, but\nthey often lack explicit coding of part-whole relations, a prominent property\nof medical imaging. 
To overcome this limitation, we introduce Adam-v2, a new\nself-supervised learning framework extending Adam [79] by explicitly\nincorporating part-whole hierarchies into its learning objectives through three\nkey branches: (1) Localizability, acquiring discriminative representations to\ndistinguish different anatomical patterns; (2) Composability, learning each\nanatomical structure in a parts-to-whole manner; and (3) Decomposability,\ncomprehending each anatomical structure in a whole-to-parts manner.\nExperimental results across 10 tasks, compared to 11 baselines in zero-shot,\nfew-shot transfer, and full fine-tuning settings, showcase Adam-v2's superior\nperformance over large-scale medical models and existing SSL methods across\ndiverse downstream tasks. The higher generality and robustness of Adam-v2's\nrepresentations originate from its explicit construction of hierarchies for\ndistinct anatomical structures from unlabeled medical images. Adam-v2 preserves\na semantic balance of anatomical diversity and harmony in its embedding,\nyielding representations that are both generic and semantically meaningful, yet\noverlooked in existing SSL methods. All code and pretrained models are\navailable at https://github.com/JLiangLab/Eden.", "keywords": ["Deep learning architectures and techniques", "Medical imaging and biological vision"], "authors_list": ["Mohammad Reza Hosseinzadeh Taher", "Michael Gotway", "Jianming Liang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f075"}, "filepath": "data/2403.07705.png", "tags": [], "_media_type": "image", "_rand": 0.9992183516075186, "arXiv_link": "https://arxiv.org/abs/2403.07705", "other_link": "", "title": "Robust Synthetic-to-Real Transfer for Stereo Matching", "abstract": "With advancements in domain generalized stereo matching networks, models\npre-trained on synthetic data demonstrate strong robustness to unseen domains.\nHowever, few studies have investigated the robustness after fine-tuning them in\nreal-world scenarios, during which the domain generalization ability can be\nseriously degraded. In this paper, we explore fine-tuning stereo matching\nnetworks without compromising their robustness to unseen domains. Our\nmotivation stems from comparing Ground Truth (GT) versus Pseudo Label (PL) for\nfine-tuning: GT degrades, but PL preserves the domain generalization ability.\nEmpirically, we find the difference between GT and PL implies valuable\ninformation that can regularize networks during fine-tuning. We also propose a\nframework to utilize this difference for fine-tuning, consisting of a frozen\nTeacher, an exponential moving average (EMA) Teacher, and a Student network.\nThe core idea is to utilize the EMA Teacher to measure what the Student has\nlearned and dynamically improve GT and PL for fine-tuning. We integrate our\nframework with state-of-the-art networks and evaluate its effectiveness on\nseveral real-world datasets. 
Extensive experiments show that our method\neffectively preserves the domain generalization ability during fine-tuning.", "keywords": [], "authors_list": ["Jiawei Zhang", "Jiahe Li", "Lei Huang", "Xiaohan Yu", "Lin Gu", "Jin Zheng", "Xiao Bai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f076"}, "filepath": "data/2311.09257.png", "tags": [], "_media_type": "image", "_rand": 0.9992354387364081, "arXiv_link": "https://arxiv.org/abs/2311.09257", "other_link": "", "title": "UFOGen: You Forward Once Large Scale Text-to-Image Generation via Diffusion GANs", "abstract": "Text-to-image diffusion models have demonstrated remarkable capabilities in\ntransforming textual prompts into coherent images, yet the computational cost\nof their inference remains a persistent challenge. To address this issue, we\npresent UFOGen, a novel generative model designed for ultra-fast, one-step\ntext-to-image synthesis. In contrast to conventional approaches that focus on\nimproving samplers or employing distillation techniques for diffusion models,\nUFOGen adopts a hybrid methodology, integrating diffusion models with a GAN\nobjective. Leveraging a newly introduced diffusion-GAN objective and\ninitialization with pre-trained diffusion models, UFOGen excels in efficiently\ngenerating high-quality images conditioned on textual descriptions in a single\nstep. Beyond traditional text-to-image generation, UFOGen showcases versatility\nin applications. Notably, UFOGen stands among the pioneering models enabling\none-step text-to-image generation and diverse downstream tasks, presenting a\nsignificant advancement in the landscape of efficient generative models.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Yanwu Xu", "Yang Zhao", "Zhisheng Xiao", "Tingbo Hou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f077"}, "filepath": "data/2307.16377.png", "tags": [], "_media_type": "image", "_rand": 0.9999280190365367, "arXiv_link": "https://arxiv.org/abs/2307.16377", "other_link": "", "title": "Instance-aware Contrastive Learning for Occluded Human Mesh Reconstruction", "abstract": "In this study, we focus on the problem of 3D human mesh recovery from a\nsingle image under obscured conditions. Most state-of-the-art methods aim to\nimprove 2D alignment technologies, such as spatial averaging and 2D joint\nsampling. However, they tend to neglect the crucial aspect of 3D alignment by\nimproving 3D representations. Furthermore, recent methods struggle to separate\nthe target human from occlusion or background in crowded scenes as they\noptimize the 3D space of target human with 3D joint coordinates as local\nsupervision. To address these issues, a desirable method would involve a\nframework for fusing 2D and 3D features and a strategy for optimizing the 3D\nspace globally. Therefore, this paper presents 3D JOint contrastive learning\nwith TRansformers (JOTR) framework for handling occluded 3D human mesh\nrecovery. 
Our method includes an encoder-decoder transformer architecture to\nfuse 2D and 3D representations for achieving 2D$\\&$3D aligned results in a\ncoarse-to-fine manner and a novel 3D joint contrastive learning approach for\nadding explicitly global supervision for the 3D feature space. The contrastive\nlearning approach includes two contrastive losses: joint-to-joint contrast for\nenhancing the similarity of semantically similar voxels (i.e., human joints),\nand joint-to-non-joint contrast for ensuring discrimination from others (e.g.,\nocclusions and background). Qualitative and quantitative analyses demonstrate\nthat our method outperforms state-of-the-art competitors on both\nocclusion-specific and standard benchmarks, significantly improving the\nreconstruction of occluded humans.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Mi-Gyeong Gwon", "Gi-Mun Um", "Won-Sik Cheong", "Wonjun Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f078"}, "filepath": "data/2309.16496.png", "tags": [], "_media_type": "image", "_rand": 0.999674777030221, "arXiv_link": "https://arxiv.org/abs/2309.16496", "other_link": "", "title": "CCEdit: Creative and Controllable Video Editing via Diffusion Models", "abstract": "In this paper, we present CCEdit, a versatile generative video editing\nframework based on diffusion models. Our approach employs a novel trident\nnetwork structure that separates structure and appearance control, ensuring\nprecise and creative editing capabilities. Utilizing the foundational\nControlNet architecture, we maintain the structural integrity of the video\nduring editing. The incorporation of an additional appearance branch enables\nusers to exert fine-grained control over the edited key frame. These two side\nbranches seamlessly integrate into the main branch, which is constructed upon\nexisting text-to-image (T2I) generation models, through learnable temporal\nlayers. The versatility of our framework is demonstrated through a diverse\nrange of choices in both structure representations and personalized T2I models,\nas well as the option to provide the edited key frame. To facilitate\ncomprehensive evaluation, we introduce the BalanceCC benchmark dataset,\ncomprising 100 videos and 4 target prompts for each video. Our extensive user\nstudies compare CCEdit with eight state-of-the-art video editing methods. The\noutcomes demonstrate CCEdit's substantial superiority over all other methods.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Ruoyu Feng", "Wenming Weng", "Yanhui Wang", "Yuhui Yuan", "Jianmin Bao", "Chong Luo", "Zhibo Chen", "Baining Guo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f079"}, "filepath": "data/2403.04198.png", "tags": [], "_media_type": "image", "_rand": 0.9997432335306469, "arXiv_link": "https://arxiv.org/abs/2403.04198", "other_link": "https://github.com/SerCharles/CN-RMA.", "title": "CN-RMA: Combined Network with Ray Marching Aggregation for 3D Indoor Object Detection from Multi-view Images", "abstract": "This paper introduces CN-RMA, a novel approach for 3D indoor object detection\nfrom multi-view images. 
We observe the key challenge as the ambiguity of image\nand 3D correspondence without explicit geometry to provide occlusion\ninformation. To address this issue, CN-RMA leverages the synergy of 3D\nreconstruction networks and 3D object detection networks, where the\nreconstruction network provides a rough Truncated Signed Distance Function\n(TSDF) and guides image features to vote to 3D space correctly in an end-to-end\nmanner. Specifically, we associate weights to sampled points of each ray\nthrough ray marching, representing the contribution of a pixel in an image to\ncorresponding 3D locations. Such weights are determined by the predicted signed\ndistances so that image features vote only to regions near the reconstructed\nsurface. Our method achieves state-of-the-art performance in 3D object\ndetection from multi-view images, as measured by mAP@0.25 and mAP@0.5 on the\nScanNet and ARKitScenes datasets. The code and models are released at\nhttps://github.com/SerCharles/CN-RMA.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Guanlin Shen", "Jingwei Huang", "Zhihua Hu", "Bin Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f07a"}, "filepath": "data/2312.00878.png", "tags": [], "_media_type": "image", "_rand": 0.9999666275708061, "arXiv_link": "https://arxiv.org/abs/2312.00878", "other_link": "", "title": "Grounding Everything: Emerging Localization Properties in Vision-Language Transformers", "abstract": "Vision-language foundation models have shown remarkable performance in\nvarious zero-shot settings such as image retrieval, classification, or\ncaptioning. But so far, those models seem to fall behind when it comes to\nzero-shot localization of referential expressions and objects in images. As a\nresult, they need to be fine-tuned for this task. In this paper, we show that\npretrained vision-language (VL) models allow for zero-shot open-vocabulary\nobject localization without any fine-tuning. To leverage those capabilities, we\npropose a Grounding Everything Module (GEM) that generalizes the idea of\nvalue-value attention introduced by CLIPSurgery to a self-self attention path.\nWe show that the concept of self-self attention corresponds to clustering, thus\nenforcing groups of tokens arising from the same object to be similar while\npreserving the alignment with the language space. To further guide the group\nformation, we propose a set of regularizations that allows the model to finally\ngeneralize across datasets and backbones. We evaluate the proposed GEM\nframework on various benchmark tasks and datasets for semantic segmentation. 
It\nshows that GEM not only outperforms other training-free open-vocabulary\nlocalization methods, but also achieves state-of-the-art results on the\nrecently proposed OpenImagesV7 large-scale segmentation benchmark.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Walid Bousselham", "Felix Petersen", "Vittorio Ferrari", "Hilde Kuehne"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f07b"}, "filepath": "data/2312.01280.png", "tags": [], "_media_type": "image", "_rand": 0.9995035490056295, "arXiv_link": "https://arxiv.org/abs/2312.01280", "other_link": "", "title": "Brain Decodes Deep Nets", "abstract": "We developed a tool for visualizing and analyzing large pre-trained vision\nmodels by mapping them onto the brain, thus exposing their hidden inside. Our\ninnovation arises from a surprising usage of brain encoding: predicting brain\nfMRI measurements in response to images. We report two findings. First,\nexplicit mapping between the brain and deep-network features across dimensions\nof space, layers, scales, and channels is crucial. This mapping method,\nFactorTopy, is plug-and-play for any deep-network; with it, one can paint a\npicture of the network onto the brain (literally!). Second, our visualization\nshows how different training methods matter: they lead to remarkable\ndifferences in hierarchical organization and scaling behavior, growing with\nmore data or network capacity. It also provides insight into fine-tuning: how\npre-trained models change when adapting to small datasets. We found brain-like\nhierarchically organized network suffer less from catastrophic forgetting after\nfine-tuned.", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Huzheng Yang", "James Gee", "Jianbo Shi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f07c"}, "filepath": "data/2403.16497.png", "tags": [], "_media_type": "image", "_rand": 0.9998788726568493, "arXiv_link": "https://arxiv.org/abs/2403.16497", "other_link": "", "title": "Prompting Vision Foundation Models for Pathology Image Analysis", "abstract": "As natural image understanding moves towards the pretrain-finetune era,\nresearch in pathology imaging is concurrently evolving. Despite the predominant\nfocus on pretraining pathological foundation models, how to adapt foundation\nmodels to downstream tasks is little explored. For downstream adaptation, we\npropose the existence of two domain gaps, i.e., the Foundation-Task Gap and the\nTask-Instance Gap. To mitigate these gaps, we introduce PathoTune, a framework\ndesigned to efficiently adapt pathological or even visual foundation models to\npathology-specific tasks via multi-modal prompt tuning. The proposed framework\nleverages Task-specific Visual Prompts and Task-specific Textual Prompts to\nidentify task-relevant features, along with Instance-specific Visual Prompts\nfor encoding single pathological image features. Results across multiple\ndatasets at both patch-level and WSI-level demonstrate its superior performance\nover single-modality prompt tuning approaches. 
Significantly, PathoTune\nfacilitates the direct adaptation of natural visual foundation models to\npathological tasks, drastically outperforming pathological foundation models\nwith simple linear probing. The code will be available upon acceptance.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Medical imaging and biological vision"], "authors_list": ["CHONG YIN", "Siqi Liu", "Kaiyang Zhou", "Vincent Wong", "Pong C. Yuen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f07d"}, "filepath": "data/2403.06912.png", "tags": [], "_media_type": "image", "_rand": 0.9997052935693369, "arXiv_link": "https://arxiv.org/abs/2403.06912", "other_link": "", "title": "DNGaussian: Optimizing Sparse-View 3D Gaussian Radiance Fields with Global-Local Depth Normalization", "abstract": "Radiance fields have demonstrated impressive performance in synthesizing\nnovel views from sparse input views, yet prevailing methods suffer from high\ntraining costs and slow inference speed. This paper introduces DNGaussian, a\ndepth-regularized framework based on 3D Gaussian radiance fields, offering\nreal-time and high-quality few-shot novel view synthesis at low costs. Our\nmotivation stems from the highly efficient representation and surprising\nquality of the recent 3D Gaussian Splatting, despite it will encounter a\ngeometry degradation when input views decrease. In the Gaussian radiance\nfields, we find this degradation in scene geometry primarily lined to the\npositioning of Gaussian primitives and can be mitigated by depth constraint.\nConsequently, we propose a Hard and Soft Depth Regularization to restore\naccurate scene geometry under coarse monocular depth supervision while\nmaintaining a fine-grained color appearance. To further refine detailed\ngeometry reshaping, we introduce Global-Local Depth Normalization, enhancing\nthe focus on small local depth changes. Extensive experiments on LLFF, DTU, and\nBlender datasets demonstrate that DNGaussian outperforms state-of-the-art\nmethods, achieving comparable or better results with significantly reduced\nmemory cost, a $25 \\times$ reduction in training time, and over $3000 \\times$\nfaster rendering speed.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Jiahe Li", "Jiawei Zhang", "Xiao Bai", "Jin Zheng", "Xin Ning", "Jun Zhou", "Lin Gu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f07e"}, "filepath": "data/2403.03890v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992607804806515, "arXiv_link": "https://arxiv.org/abs/2403.03890v1", "other_link": "", "title": "Hierarchical Diffusion Policy for Kinematics-Aware Multi-Task Robotic Manipulation", "abstract": "This paper introduces Hierarchical Diffusion Policy (HDP), a hierarchical\nagent for multi-task robotic manipulation. HDP factorises a manipulation policy\ninto a hierarchical structure: a high-level task-planning agent which predicts\na distant next-best end-effector pose (NBP), and a low-level goal-conditioned\ndiffusion policy which generates optimal motion trajectories. 
The factorised\npolicy representation allows HDP to tackle both long-horizon task planning\nwhile generating fine-grained low-level actions. To generate context-aware\nmotion trajectories while satisfying robot kinematics constraints, we present a\nnovel kinematics-aware goal-conditioned control agent, Robot Kinematics\nDiffuser (RK-Diffuser). Specifically, RK-Diffuser learns to generate both the\nend-effector pose and joint position trajectories, and distill the accurate but\nkinematics-unaware end-effector pose diffuser to the kinematics-aware but less\naccurate joint position diffuser via differentiable kinematics. Empirically, we\nshow that HDP achieves a significantly higher success rate than the\nstate-of-the-art methods in both simulation and real-world.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xiao Ma", "Sumit Patidar", "Iain Haughton", "Stephen James"], "category_name": "Robotics", "all_categories": ["Robotics", "Artificial Intelligence", "Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f07f"}, "filepath": "data/2311.15619.png", "tags": [], "_media_type": "image", "_rand": 0.9996442899069462, "arXiv_link": "https://arxiv.org/abs/2311.15619", "other_link": "", "title": "Align before Adapt: Leveraging Entity-to-Region Alignments for Generalizable Video Action Recognition", "abstract": "Large-scale visual-language pre-trained models have achieved significant\nsuccess in various video tasks. However, most existing methods follow an \"adapt\nthen align\" paradigm, which adapts pre-trained image encoders to model\nvideo-level representations and utilizes one-hot or text embedding of the\naction labels for supervision. This paradigm overlooks the challenge of mapping\nfrom static images to complicated activity concepts. In this paper, we propose\na novel \"Align before Adapt\" (ALT) paradigm. Prior to adapting to video\nrepresentation learning, we exploit the entity-to-region alignments for each\nframe. The alignments are fulfilled by matching the region-aware image\nembeddings to an offline-constructed text corpus. With the aligned entities, we\nfeed their text embeddings to a transformer-based video adapter as the queries,\nwhich can help extract the semantics of the most important entities from a\nvideo to a vector. This paradigm reuses the visual-language alignment of VLP\nduring adaptation and tries to explain an action by the underlying entities.\nThis helps understand actions by bridging the gap with complex activity\nsemantics, particularly when facing unfamiliar or unseen categories. ALT\ndemonstrates competitive performance while maintaining remarkably low\ncomputational costs. In fully supervised experiments, it achieves 88.1% top-1\naccuracy on Kinetics-400 with only 4947 GFLOPs. 
Moreover, ALT outperforms the\nprevious state-of-the-art methods in both zero-shot and few-shot experiments,\nemphasizing its superior generalizability across various learning scenarios.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Yifei Chen", "Dapeng Chen", "Ruijin Liu", "Sai Zhou", "Wenyuan Xue", "Wei Peng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f080"}, "filepath": "data/2403.04321.png", "tags": [], "_media_type": "image", "_rand": 0.9994193554419701, "arXiv_link": "https://arxiv.org/abs/2403.04321", "other_link": "", "title": "Discriminative Probing and Tuning for Text-to-Image Generation", "abstract": "Despite advancements in text-to-image generation (T2I), prior methods often\nface text-image misalignment problems such as relation confusion in generated\nimages. Existing solutions involve cross-attention manipulation for better\ncompositional understanding or integrating large language models for improved\nlayout planning. However, the inherent alignment capabilities of T2I models are\nstill inadequate. By reviewing the link between generative and discriminative\nmodeling, we posit that T2I models' discriminative abilities may reflect their\ntext-image alignment proficiency during generation. In this light, we advocate\nbolstering the discriminative abilities of T2I models to achieve more precise\ntext-to-image alignment for generation. We present a discriminative adapter\nbuilt on T2I models to probe their discriminative abilities on two\nrepresentative tasks and leverage discriminative fine-tuning to improve their\ntext-image alignment. As a bonus of the discriminative adapter, a\nself-correction mechanism can leverage discriminative gradients to better align\ngenerated images to text prompts during inference. Comprehensive evaluations\nacross three benchmark datasets, including both in-distribution and\nout-of-distribution scenarios, demonstrate our method's superior generation\nperformance. Meanwhile, it achieves state-of-the-art discriminative performance\non the two discriminative tasks compared to other generative models.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Leigang Qu", "Wenjie Wang", "Yongqi Li", "Hanwang Zhang", "Liqiang Nie", "Tat-seng Chua"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f081"}, "filepath": "data/2307.12732.png", "tags": [], "_media_type": "image", "_rand": 0.9998727773682113, "arXiv_link": "https://arxiv.org/abs/2307.12732", "other_link": "https://github.com/winycg/CLIP-KD.", "title": "CLIP-KD: An Empirical Study of CLIP Model Distillation", "abstract": "Contrastive Language-Image Pre-training (CLIP) has become a promising\nlanguage-supervised visual pre-training framework. This paper aims to distill\nsmall CLIP models supervised by a large teacher CLIP model. 
We propose several\ndistillation strategies, including relation, feature, gradient and contrastive\nparadigms, to examine the effectiveness of CLIP-Knowledge Distillation (KD). We\nshow that a simple feature mimicry with Mean Squared Error loss works\nsurprisingly well. Moreover, interactive contrastive learning across teacher\nand student encoders is also effective in performance improvement. We explain\nthat the success of CLIP-KD can be attributed to maximizing the feature\nsimilarity between teacher and student. The unified method is applied to\ndistill several student models trained on CC3M+12M. CLIP-KD improves student\nCLIP models consistently over zero-shot ImageNet classification and cross-modal\nretrieval benchmarks. When using ViT-L/14 pretrained on Laion-400M as the\nteacher, CLIP-KD achieves 57.5\\% and 55.4\\% zero-shot top-1 ImageNet accuracy\nover ViT-B/16 and ResNet-50, surpassing the original CLIP without KD by 20.5\\%\nand 20.1\\% margins, respectively. Our code is released on\nhttps://github.com/winycg/CLIP-KD.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Chuanguang Yang", "Zhulin An", "Libo Huang", "Junyu Bi", "XinQiang Yu", "Han Yang", "boyu diao", "Yongjun Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f082"}, "filepath": "data/2403.12962.png", "tags": [], "_media_type": "image", "_rand": 0.9995712851216374, "arXiv_link": "https://arxiv.org/abs/2403.12962", "other_link": "", "title": "FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation", "abstract": "The remarkable efficacy of text-to-image diffusion models has motivated\nextensive exploration of their potential application in video domains.\nZero-shot methods seek to extend image diffusion models to videos without\nnecessitating model training. Recent methods mainly focus on incorporating\ninter-frame correspondence into attention mechanisms. However, the soft\nconstraint imposed on determining where to attend to valid features can\nsometimes be insufficient, resulting in temporal inconsistency. In this paper,\nwe introduce FRESCO, intra-frame correspondence alongside inter-frame\ncorrespondence to establish a more robust spatial-temporal constraint. This\nenhancement ensures a more consistent transformation of semantically similar\ncontent across frames. Beyond mere attention guidance, our approach involves an\nexplicit update of features to achieve high spatial-temporal consistency with\nthe input video, significantly improving the visual coherence of the resulting\ntranslated videos. 
Extensive experiments demonstrate the effectiveness of our\nproposed framework in producing high-quality, coherent videos, marking a\nnotable improvement over existing zero-shot methods.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Shuai Yang", "Yifan Zhou", "Ziwei Liu", "Chen Change Loy"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f083"}, "filepath": "data/2310.00582.png", "tags": [], "_media_type": "image", "_rand": 0.999383716121651, "arXiv_link": "https://arxiv.org/abs/2310.00582", "other_link": "https://github.com/SY-Xuan/Pink.", "title": "LocLLM: Exploiting Generalizable Human Keypoint Localization via Large Language Model", "abstract": "Multi-modal Large Language Models (MLLMs) have shown remarkable capabilities\nin various multi-modal tasks. Nevertheless, their performance in fine-grained\nimage understanding tasks is still limited. To address this issue, this paper\nproposes a new framework to enhance the fine-grained image understanding\nabilities of MLLMs. Specifically, we present a new method for constructing the\ninstruction tuning dataset at a low cost by leveraging annotations in existing\ndatasets. A self-consistent bootstrapping method is also introduced to extend\nexisting dense object annotations into high-quality\nreferring-expression-bounding-box pairs. These methods enable the generation of\nhigh-quality instruction data which includes a wide range of fundamental\nabilities essential for fine-grained image perception. Moreover, we argue that\nthe visual encoder should be tuned during instruction tuning to mitigate the\ngap between full image perception and fine-grained image perception.\nExperimental results demonstrate the superior performance of our method. For\ninstance, our model exhibits a 5.2% accuracy improvement over Qwen-VL on GQA\nand surpasses the accuracy of Kosmos-2 by 24.7% on RefCOCO_val. We have also\nattained the top rank on the leaderboard of MMBench. This promising performance\nis achieved by training on only publicly available data, making it easily\nreproducible. The models, datasets, and codes are publicly available at\nhttps://github.com/SY-Xuan/Pink.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Dongkai Wang", "shiyu xuan", "Shiliang Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f084"}, "filepath": "data/2403.16439v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994773069288255, "arXiv_link": "https://arxiv.org/abs/2403.16439v1", "other_link": "", "title": "Producing and Leveraging Online Map Uncertainty in Trajectory Prediction", "abstract": "High-definition (HD) maps have played an integral role in the development of\nmodern autonomous vehicle (AV) stacks, albeit with high associated labeling and\nmaintenance costs. As a result, many recent works have proposed methods for\nestimating HD maps online from sensor data, enabling AVs to operate outside of\npreviously-mapped regions. However, current online map estimation approaches\nare developed in isolation of their downstream tasks, complicating their\nintegration in AV stacks. 
In particular, they do not produce uncertainty or\nconfidence estimates. In this work, we extend multiple state-of-the-art online\nmap estimation methods to additionally estimate uncertainty and show how this\nenables more tightly integrating online mapping with trajectory forecasting. In\ndoing so, we find that incorporating uncertainty yields up to 50% faster\ntraining convergence and up to 15% better prediction performance on the\nreal-world nuScenes driving dataset.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xunjiang Gu", "Guanyu Song", "Igor Gilitschenski", "Marco Pavone", "Boris Ivanovic"], "category_name": "Robotics", "all_categories": ["Robotics", "Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f085"}, "filepath": "data/2311.17737.png", "tags": [], "_media_type": "image", "_rand": 0.9997601980332043, "arXiv_link": "https://arxiv.org/abs/2311.17737", "other_link": "", "title": "GenZI: Zero-Shot 3D Human-Scene Interaction Generation", "abstract": "Can we synthesize 3D humans interacting with scenes without learning from any\n3D human-scene interaction data? We propose GenZI, the first zero-shot approach\nto generating 3D human-scene interactions. Key to GenZI is our distillation of\ninteraction priors from large vision-language models (VLMs), which have learned\na rich semantic space of 2D human-scene compositions. Given a natural language\ndescription and a coarse point location of the desired interaction in a 3D\nscene, we first leverage VLMs to imagine plausible 2D human interactions\ninpainted into multiple rendered views of the scene. We then formulate a robust\niterative optimization to synthesize the pose and shape of a 3D human model in\nthe scene, guided by consistency with the 2D interaction hypotheses. In\ncontrast to existing learning-based approaches, GenZI circumvents the\nconventional need for captured 3D interaction data, and allows for flexible\ncontrol of the 3D interaction synthesis with easy-to-use text prompts.\nExtensive experiments show that our zero-shot approach has high flexibility and\ngenerality, making it applicable to diverse scene types, including both indoor\nand outdoor environments.", "keywords": ["Multimodal models and vision-language models", "Scene analysis and understanding"], "authors_list": ["Lei Li", "Angela Dai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f086"}, "filepath": "data/2404.01941.png", "tags": [], "_media_type": "image", "_rand": 0.9990426518866232, "arXiv_link": "https://arxiv.org/abs/2404.01941", "other_link": "", "title": "LPSNet: End-to-End Human Pose and Shape Estimation with Lensless Imaging", "abstract": "Human pose and shape (HPS) estimation with lensless imaging is not only\nbeneficial to privacy protection but also can be used in covert surveillance\nscenarios due to the small size and simple structure of this device. However,\nthis task presents significant challenges due to the inherent ambiguity of the\ncaptured measurements and lacks effective methods for directly estimating human\npose and shape from lensless data. In this paper, we propose the first\nend-to-end framework to recover 3D human poses and shapes from lensless\nmeasurements to our knowledge. 
We specifically design a multi-scale lensless\nfeature decoder to decode the lensless measurements through the optically\nencoded mask for efficient feature extraction. We also propose a double-head\nauxiliary supervision mechanism to improve the estimation accuracy of human\nlimb ends. Besides, we establish a lensless imaging system and verify the\neffectiveness of our method on various datasets acquired by our lensless\nimaging system.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Haoyang Ge", "Qiao Feng", "Hailong Jia", "Xiongzheng Li", "Xiangjun Yin", "You Zhou", "Jingyu Yang", "Kun Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f087"}, "filepath": "data/2309.01327.png", "tags": [], "_media_type": "image", "_rand": 0.9995276169675297, "arXiv_link": "https://arxiv.org/abs/2309.01327", "other_link": "https://github.com/doc-doc/NExT-GQA.", "title": "Can I Trust Your Answer? Visually Grounded Video Question Answering", "abstract": "We study visually grounded VideoQA in response to the emerging trends of\nutilizing pretraining techniques for video-language understanding.\nSpecifically, by forcing vision-language models (VLMs) to answer questions and\nsimultaneously provide visual evidence, we seek to ascertain the extent to\nwhich the predictions of such techniques are genuinely anchored in relevant\nvideo content, versus spurious correlations from language or irrelevant visual\ncontext. Towards this, we construct NExT-GQA -- an extension of NExT-QA with\n10.5$K$ temporal grounding (or location) labels tied to the original QA pairs.\nWith NExT-GQA, we scrutinize a series of state-of-the-art VLMs. Through\npost-hoc attention analysis, we find that these models are extremely weak in\nsubstantiating the answers despite their strong QA performance. This exposes\nthe limitation of current VLMs in making reliable predictions. As a remedy, we\nfurther explore and propose a grounded-QA method via Gaussian mask optimization\nand cross-modal learning. Experiments with different backbones demonstrate that\nthis grounding mechanism improves both grounding and QA. With these efforts, we\naim to push towards trustworthy VLMs in VQA systems. Our dataset and code are\navailable at https://github.com/doc-doc/NExT-GQA.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Junbin Xiao", "Angela Yao", "Yicong Li", "Tat-seng Chua"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f088"}, "filepath": "data/2312.01215.png", "tags": [], "_media_type": "image", "_rand": 0.9997811374604197, "arXiv_link": "https://arxiv.org/abs/2312.01215", "other_link": "", "title": "RNb-NeuS: Reflectance and Normal-based Multi-View 3D Reconstruction", "abstract": "This paper introduces a versatile paradigm for integrating multi-view\nreflectance (optional) and normal maps acquired through photometric stereo. Our\napproach employs a pixel-wise joint re-parameterization of reflectance and\nnormal, considering them as a vector of radiances rendered under simulated,\nvarying illumination. 
This re-parameterization enables the seamless integration\nof reflectance and normal maps as input data in neural volume rendering-based\n3D reconstruction while preserving a single optimization objective. In\ncontrast, recent multi-view photometric stereo (MVPS) methods depend on\nmultiple, potentially conflicting objectives. Despite its apparent simplicity,\nour proposed approach outperforms state-of-the-art approaches in MVPS\nbenchmarks across F-score, Chamfer distance, and mean angular error metrics.\nNotably, it significantly improves the detailed 3D reconstruction of areas with\nhigh curvature or low visibility.", "keywords": ["Computational imaging and physics-based vision"], "authors_list": ["Baptiste Brument", "Robin Bruneau", "Yvain Queau", "Jean M\u00e9lou", "Francois Lauze", "Jean-Denis Durou", "Lilian Calvet"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f089"}, "filepath": "data/2405.02911.png", "tags": [], "_media_type": "image", "_rand": 0.9990660556179198, "arXiv_link": "https://arxiv.org/abs/2405.02911", "other_link": "", "title": "Multimodal Sense-Informed Prediction of 3D Human Motions", "abstract": "Predicting future human pose is a fundamental application for machine\nintelligence, which drives robots to plan their behavior and paths ahead of\ntime to seamlessly accomplish human-robot collaboration in real-world 3D\nscenarios. Despite encouraging results, existing approaches rarely consider the\neffects of the external scene on the motion sequence, leading to pronounced\nartifacts and physical implausibilities in the predictions. To address this\nlimitation, this work introduces a novel multi-modal sense-informed motion\nprediction approach, which conditions high-fidelity generation on two modal\ninformation: external 3D scene, and internal human gaze, and is able to\nrecognize their salience for future human activity. Furthermore, the gaze\ninformation is regarded as the human intention, and combined with both motion\nand scene features, we construct a ternary intention-aware attention to\nsupervise the generation to match where the human wants to reach. Meanwhile, we\nintroduce semantic coherence-aware attention to explicitly distinguish the\nsalient point clouds and the underlying ones, to ensure a reasonable\ninteraction of the generated sequence with the 3D scene. On two real-world\nbenchmarks, the proposed method achieves state-of-the-art performance both in\n3D human pose and trajectory prediction.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Zhenyu Lou", "Qiongjie Cui", "Haofan Wang", "Xu Tang", "Hong Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f08a"}, "filepath": "data/2405.09713.png", "tags": [], "_media_type": "image", "_rand": 0.9994319101206333, "arXiv_link": "https://arxiv.org/abs/2405.09713", "other_link": "", "title": "SOK-Bench: A Situated Video Reasoning Benchmark with Aligned Open-World Knowledge", "abstract": "Learning commonsense reasoning from visual contexts and scenes in real-world\nis a crucial step toward advanced artificial intelligence. 
However, existing\nvideo reasoning benchmarks are still inadequate since they were mainly designed\nfor factual or situated reasoning and rarely involve broader knowledge in the\nreal world. Our work aims to delve deeper into reasoning evaluations,\nspecifically within dynamic, open-world, and structured context knowledge. We\npropose a new benchmark (SOK-Bench), consisting of 44K questions and 10K\nsituations with instance-level annotations depicted in the videos. The\nreasoning process is required to understand and apply situated knowledge and\ngeneral knowledge for problem-solving. To create such a dataset, we propose an\nautomatic and scalable generation method to generate question-answer pairs,\nknowledge graphs, and rationales by instructing the combinations of LLMs and\nMLLMs. Concretely, we first extract observable situated entities, relations,\nand processes from videos for situated knowledge and then extend to open-world\nknowledge beyond the visible content. The task generation is facilitated\nthrough multiple dialogues as iterations and subsequently corrected and refined\nby our designed self-promptings and demonstrations. With a corpus of both\nexplicit situated facts and implicit commonsense, we generate associated\nquestion-answer pairs and reasoning processes, finally followed by manual\nreviews for quality assurance. We evaluated recent mainstream large\nvision-language models on the benchmark and found several insightful\nconclusions. For more information, please refer to our benchmark at\nwww.bobbywu.com/SOKBench.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Andong Wang", "Bo Wu", "Sunli Chen", "Zhenfang Chen", "Haotian Guan", "Wei-Ning Lee", "Li Erran Li", "Chuang Gan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f08b"}, "filepath": "data/2403.00543.png", "tags": [], "_media_type": "image", "_rand": 0.9990096236309495, "arXiv_link": "https://arxiv.org/abs/2403.00543", "other_link": "https://yutingli0606.github.io/SURE/}.", "title": "SURE: SUrvey REcipes for building reliable and robust deep networks", "abstract": "In this paper, we revisit techniques for uncertainty estimation within deep\nneural networks and consolidate a suite of techniques to enhance their\nreliability. Our investigation reveals that an integrated application of\ndiverse techniques--spanning model regularization, classifier and\noptimization--substantially improves the accuracy of uncertainty predictions in\nimage classification tasks. The synergistic effect of these techniques\nculminates in our novel SURE approach. We rigorously evaluate SURE against the\nbenchmark of failure prediction, a critical testbed for uncertainty estimation\nefficacy. Our results showcase a consistently better performance than models\nthat individually deploy each technique, across various datasets and model\narchitectures. When applied to real-world challenges, such as data corruption,\nlabel noise, and long-tailed class distribution, SURE exhibits remarkable\nrobustness, delivering results that are superior or on par with current\nstate-of-the-art specialized methods. 
Particularly on Animal-10N and Food-101N\nfor learning with noisy labels, SURE achieves state-of-the-art performance\nwithout any task-specific adjustments. This work not only sets a new benchmark\nfor robust uncertainty estimation but also paves the way for its application in\ndiverse, real-world scenarios where reliability is paramount. Our code is\navailable at \\url{https://yutingli0606.github.io/SURE/}.", "keywords": [], "authors_list": ["Yuting Li", "Yingyi Chen", "Xuanlong Yu", "Dexiong Chen", "Xi Shen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f08c"}, "filepath": "data/2311.11106.png", "tags": [], "_media_type": "image", "_rand": 0.999072000657387, "arXiv_link": "https://arxiv.org/abs/2311.11106", "other_link": "", "title": "ShapeMatcher: Self-Supervised Joint Shape Canonicalization, Segmentation, Retrieval and Deformation", "abstract": "In this paper, we present ShapeMatcher, a unified self-supervised learning\nframework for joint shape canonicalization, segmentation, retrieval and\ndeformation. Given a partially-observed object in an arbitrary pose, we first\ncanonicalize the object by extracting point-wise affine-invariant features,\ndisentangling inherent structure of the object with its pose and size. These\nlearned features are then leveraged to predict semantically consistent part\nsegmentation and corresponding part centers. Next, our lightweight retrieval\nmodule aggregates the features within each part as its retrieval token and\ncompare all the tokens with source shapes from a pre-established database to\nidentify the most geometrically similar shape. Finally, we deform the retrieved\nshape in the deformation module to tightly fit the input object by harnessing\npart center guided neural cage deformation. The key insight of ShapeMaker is\nthe simultaneous training of the four highly-associated processes:\ncanonicalization, segmentation, retrieval, and deformation, leveraging\ncross-task consistency losses for mutual supervision. Extensive experiments on\nsynthetic datasets PartNet, ComplementMe, and real-world dataset Scan2CAD\ndemonstrate that ShapeMaker surpasses competitors by a large margin.", "keywords": [], "authors_list": ["Yan Di", "Chenyangguang Zhang", "Chaowei Wang", "Ruida Zhang", "Guangyao Zhai", "Yanyan Li", "Bowen Fu", "Xiangyang Ji", "Shan Gao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f08d"}, "filepath": "data/2311.18635.png", "tags": [], "_media_type": "image", "_rand": 0.9990324714482863, "arXiv_link": "https://arxiv.org/abs/2311.18635", "other_link": "", "title": "DiffusionAvatars: Deferred Diffusion for High-fidelity 3D Head Avatars", "abstract": "DiffusionAvatars synthesizes a high-fidelity 3D head avatar of a person,\noffering intuitive control over both pose and expression. We propose a\ndiffusion-based neural renderer that leverages generic 2D priors to produce\ncompelling images of faces. For coarse guidance of the expression and head\npose, we render a neural parametric head model (NPHM) from the target\nviewpoint, which acts as a proxy geometry of the person. 
Additionally, to\nenhance the modeling of intricate facial expressions, we condition\nDiffusionAvatars directly on the expression codes obtained from NPHM via\ncross-attention. Finally, to synthesize consistent surface details across\ndifferent viewpoints and expressions, we rig learnable spatial features to the\nhead's surface via TriPlane lookup in NPHM's canonical space. We train\nDiffusionAvatars on RGB videos and corresponding fitted NPHM meshes of a person\nand test the obtained avatars in both self-reenactment and animation scenarios.\nOur experiments demonstrate that DiffusionAvatars generates temporally\nconsistent and visually appealing videos for novel poses and expressions of a\nperson, outperforming existing approaches.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Tobias Kirschstein", "Simon Giebenhain", "Matthias Nie\u00dfner"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f08e"}, "filepath": "data/2311.14402.png", "tags": [], "_media_type": "image", "_rand": 0.999545244761149, "arXiv_link": "https://arxiv.org/abs/2311.14402", "other_link": "", "title": "TEA: Test-time Energy Adaptation", "abstract": "Test-time adaptation (TTA) aims to improve model generalizability when test\ndata diverges from training distribution, offering the distinct advantage of\nnot requiring access to training data and processes, especially valuable in the\ncontext of large pre-trained models. However, current TTA methods fail to\naddress the fundamental issue: covariate shift, i.e., the decreased\ngeneralizability can be attributed to the model's reliance on the marginal\ndistribution of the training data, which may impair model calibration and\nintroduce confirmation bias. To address this, we propose a novel energy-based\nperspective, enhancing the model's perception of target data distributions\nwithout requiring access to training data or processes. Building on this\nperspective, we introduce $\\textbf{T}$est-time $\\textbf{E}$nergy\n$\\textbf{A}$daptation ($\\textbf{TEA}$), which transforms the trained classifier\ninto an energy-based model and aligns the model's distribution with the test\ndata's, enhancing its ability to perceive test distributions and thus improving\noverall generalizability. Extensive experiments across multiple tasks,\nbenchmarks and architectures demonstrate TEA's superior generalization\nperformance against state-of-the-art methods. Further in-depth analyses reveal\nthat TEA can equip the model with a comprehensive perception of test\ndistribution, ultimately paving the way toward improved generalization and\ncalibration.", "keywords": [], "authors_list": ["Yige Yuan", "Bingbing Xu", "Liang Hou", "Fei Sun", "Huawei Shen", "Xueqi Cheng"], "category_name": "Machine Learning", "all_categories": ["Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f08f"}, "filepath": "data/2403.03739.png", "tags": [], "_media_type": "image", "_rand": 0.9993933989908043, "arXiv_link": "https://arxiv.org/abs/2403.03739", "other_link": "", "title": "A&B BNN: Add&Bit-Operation-Only Hardware-Friendly Binary Neural Network", "abstract": "Binary neural networks utilize 1-bit quantized weights and activations to\nreduce both the model's storage demands and computational burden. 
However,\nadvanced binary architectures still incorporate millions of inefficient and\nnonhardware-friendly full-precision multiplication operations. A&B BNN is\nproposed to directly remove part of the multiplication operations in a\ntraditional BNN and replace the rest with an equal number of bit operations,\nintroducing the mask layer and the quantized RPReLU structure based on the\nnormalizer-free network architecture. The mask layer can be removed during\ninference by leveraging the intrinsic characteristics of BNN with\nstraightforward mathematical transformations to avoid the associated\nmultiplication operations. The quantized RPReLU structure enables more\nefficient bit operations by constraining its slope to be integer powers of 2.\nExperimental results achieved 92.30%, 69.35%, and 66.89% on the CIFAR-10,\nCIFAR-100, and ImageNet datasets, respectively, which are competitive with the\nstate-of-the-art. Ablation studies have verified the efficacy of the quantized\nRPReLU structure, leading to a 1.14% enhancement on the ImageNet compared to\nusing a fixed slope RLeakyReLU. The proposed add&bit-operation-only BNN offers\nan innovative approach for hardware-friendly network architecture.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Ruichen Ma", "Guanchao Qiao", "Yian Liu", "Liwei Meng", "Ning Ning", "Yang Liu", "Shaogang Hu"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f090"}, "filepath": "data/2309.13925.png", "tags": [], "_media_type": "image", "_rand": 0.9999093348087496, "arXiv_link": "https://arxiv.org/abs/2309.13925", "other_link": "https://xuange923.github.io/Surveillance-Video-Understanding.", "title": "Towards Surveillance Video-and-Language Understanding: New Dataset, Baselines, and Challenges", "abstract": "Surveillance videos are an essential component of daily life with various\ncritical applications, particularly in public security. However, current\nsurveillance video tasks mainly focus on classifying and localizing anomalous\nevents. Existing methods are limited to detecting and classifying the\npredefined events with unsatisfactory semantic understanding, although they\nhave obtained considerable performance. To address this issue, we propose a new\nresearch direction of surveillance video-and-language understanding, and\nconstruct the first multimodal surveillance video dataset. We manually annotate\nthe real-world surveillance dataset UCF-Crime with fine-grained event content\nand timing. Our newly annotated dataset, UCA (UCF-Crime Annotation), contains\n23,542 sentences, with an average length of 20 words, and its annotated videos\nare as long as 110.7 hours. Furthermore, we benchmark SOTA models for four\nmultimodal tasks on this newly created dataset, which serve as new baselines\nfor surveillance video-and-language understanding. Through our experiments, we\nfind that mainstream models used in previously publicly available datasets\nperform poorly on surveillance video, which demonstrates the new challenges in\nsurveillance video-and-language understanding. To validate the effectiveness of\nour UCA, we conducted experiments on multimodal anomaly detection. The results\ndemonstrate that our multimodal surveillance learning can improve the\nperformance of conventional anomaly detection tasks. All the experiments\nhighlight the necessity of constructing this dataset to advance surveillance\nAI. 
The link to our dataset is provided at:\nhttps://xuange923.github.io/Surveillance-Video-Understanding.", "keywords": ["Multimodal models and vision-language models", "Scene analysis and understanding"], "authors_list": ["Tongtong Yuan", "Xuange Zhang", "Kun Liu", "Bo Liu", "Chen Chen", "Jian Jin", "Zhenzhen Jiao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f091"}, "filepath": "data/2403.12457.png", "tags": [], "_media_type": "image", "_rand": 0.9990458477609715, "arXiv_link": "https://arxiv.org/abs/2403.12457", "other_link": "https://github.com/Tencent/TFace.", "title": "Validating Privacy-Preserving Face Recognition under a Minimum Assumption", "abstract": "The widespread adoption of face recognition has led to increasing privacy\nconcerns, as unauthorized access to face images can expose sensitive personal\ninformation. This paper explores face image protection against viewing and\nrecovery attacks. Inspired by image compression, we propose creating a visually\nuninformative face image through feature subtraction between an original face\nand its model-produced regeneration. Recognizable identity features within the\nimage are encouraged by co-training a recognition model on its high-dimensional\nfeature representation. To enhance privacy, the high-dimensional representation\nis crafted through random channel shuffling, resulting in randomized\nrecognizable images devoid of attacker-leverageable texture details. We distill\nour methodologies into a novel privacy-preserving face recognition method,\nMinusFace. Experiments demonstrate its high recognition accuracy and effective\nprivacy protection. Its code is available at https://github.com/Tencent/TFace.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Hui Zhang", "Xingbo Dong", "YenLungLai", "Ying Zhou", "Xiaoyan ZHANG", "Xingguo Lv", "Zhe Jin", "Xuejun Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f092"}, "filepath": "data/2311.17776v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999029752311506, "arXiv_link": "https://arxiv.org/abs/2311.17776v1", "other_link": "", "title": "One-Shot Open Affordance Learning with Foundation Models", "abstract": "We introduce One-shot Open Affordance Learning (OOAL), where a model is\ntrained with just one example per base object category, but is expected to\nidentify novel objects and affordances. While vision-language models excel at\nrecognizing novel objects and scenes, they often struggle to understand finer\nlevels of granularity such as affordances. To handle this issue, we conduct a\ncomprehensive analysis of existing foundation models, to explore their inherent\nunderstanding of affordances and assess the potential for data-limited\naffordance learning. We then propose a vision-language framework with simple\nand effective designs that boost the alignment between visual features and\naffordance text embeddings. 
Experiments on two affordance segmentation\nbenchmarks show that the proposed method outperforms state-of-the-art models\nwith less than 1% of the full training data, and exhibits reasonable\ngeneralization capability on unseen objects and affordances.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Gen Li", "Deqing Sun", "Laura Sevilla-Lara", "Varun Jampani"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f093"}, "filepath": "data/2404.05661.png", "tags": [], "_media_type": "image", "_rand": 0.9998249579610902, "arXiv_link": "https://arxiv.org/abs/2404.05661", "other_link": "https://xy-cong.github.io/imagine-colorization.", "title": "Automatic Controllable Colorization via Imagination", "abstract": "We propose a framework for automatic colorization that allows for iterative\nediting and modifications. The core of our framework lies in an imagination\nmodule: by understanding the content within a grayscale image, we utilize a\npre-trained image generation model to generate multiple images that contain the\nsame content. These images serve as references for coloring, mimicking the\nprocess of human experts. As the synthesized images can be imperfect or\ndifferent from the original grayscale image, we propose a Reference Refinement\nModule to select the optimal reference composition. Unlike most previous\nend-to-end automatic colorization algorithms, our framework allows for\niterative and localized modifications of the colorization results because we\nexplicitly model the coloring samples. Extensive experiments demonstrate the\nsuperiority of our framework over existing automatic colorization algorithms in\neditability and flexibility. Project page:\nhttps://xy-cong.github.io/imagine-colorization.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Xiaoyan Cong", "Yue Wu", "Qifeng Chen", "Chenyang Lei"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f094"}, "filepath": "data/2309.11804.png", "tags": [], "_media_type": "image", "_rand": 0.9998600558541909, "arXiv_link": "https://arxiv.org/abs/2309.11804", "other_link": "", "title": "GAFusion: Adaptive Fusing LiDAR and Camera with Multiple Guidance for 3D Object Detection", "abstract": "Lidars and cameras are critical sensors that provide complementary\ninformation for 3D detection in autonomous driving. While most prevalent\nmethods progressively downscale the 3D point clouds and camera images and then\nfuse the high-level features, the downscaled features inevitably lose low-level\ndetailed information. In this paper, we propose Fine-Grained Lidar-Camera\nFusion (FGFusion) that make full use of multi-scale features of image and point\ncloud and fuse them in a fine-grained way. First, we design a dual pathway\nhierarchy structure to extract both high-level semantic and low-level detailed\nfeatures of the image. Second, an auxiliary network is introduced to guide\npoint cloud features to better learn the fine-grained spatial information.\nFinally, we propose multi-scale fusion (MSF) to fuse the last N feature maps of\nimage and point cloud. Extensive experiments on two popular autonomous driving\nbenchmarks, i.e. 
KITTI and Waymo, demonstrate the effectiveness of our method.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xiaotian Li", "Baojie Fan", "Jiandong Tian", "Huijie Fan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f095"}, "filepath": "data/2312.12463.png", "tags": [], "_media_type": "image", "_rand": 0.9990650390108594, "arXiv_link": "https://arxiv.org/abs/2312.12463", "other_link": "", "title": "Open Vocabulary Semantic Scene Sketch Understanding", "abstract": "We study the underexplored but fundamental vision problem of machine\nunderstanding of abstract freehand scene sketches. We introduce a sketch\nencoder that results in semantically-aware feature space, which we evaluate by\ntesting its performance on a semantic sketch segmentation task. To train our\nmodel we rely only on the availability of bitmap sketches with their brief\ncaptions and do not require any pixel-level annotations. To obtain\ngeneralization to a large set of sketches and categories, we build on a vision\ntransformer encoder pretrained with the CLIP model. We freeze the text encoder\nand perform visual-prompt tuning of the visual encoder branch while introducing\na set of critical modifications. Firstly, we augment the classical key-query\n(k-q) self-attention blocks with value-value (v-v) self-attention blocks.\nCentral to our model is a two-level hierarchical network design that enables\nefficient semantic disentanglement: The first level ensures holistic scene\nsketch encoding, and the second level focuses on individual categories. We,\nthen, in the second level of the hierarchy, introduce a cross-attention between\ntextual and visual branches. Our method outperforms zero-shot CLIP pixel\naccuracy of segmentation results by 37 points, reaching an accuracy of $85.5\\%$\non the FS-COCO sketch dataset. Finally, we conduct a user study that allows us\nto identify further improvements needed over our method to reconcile machine\nand human understanding of scene sketches.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Ahmed Bourouis", "Judith Fan", "Yulia Gryaditskaya"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f096"}, "filepath": "data/2308.08110.png", "tags": [], "_media_type": "image", "_rand": 0.9990109541810668, "arXiv_link": "https://arxiv.org/abs/2308.08110", "other_link": "", "title": "View From Above: Orthogonal viewpoint aware Cross-view Localization", "abstract": "This paper proposes a fine-grained self-localization method for outdoor\nrobotics that utilizes a flexible number of onboard cameras and readily\naccessible satellite images. The proposed method addresses limitations in\nexisting cross-view localization methods that struggle to handle noise sources\nsuch as moving objects and seasonal variations. It is the first sparse\nvisual-only method that enhances perception in dynamic environments by\ndetecting view-consistent key points and their corresponding deep features from\nground and satellite views, while removing off-the-ground objects and\nestablishing homography transformation between the two views. 
Moreover, the\nproposed method incorporates a spatial embedding approach that leverages camera\nintrinsic and extrinsic information to reduce the ambiguity of purely visual\nmatching, leading to improved feature matching and overall pose estimation\naccuracy. The method exhibits strong generalization and is robust to\nenvironmental changes, requiring only geo-poses as ground truth. Extensive\nexperiments on the KITTI and Ford Multi-AV Seasonal datasets demonstrate that\nour proposed method outperforms existing state-of-the-art methods, achieving\nmedian spatial accuracy errors below $0.5$ meters along the lateral and\nlongitudinal directions, and a median orientation accuracy error below 2\ndegrees.", "keywords": ["Remote sensing and photogrammetry", "Scene analysis and understanding"], "authors_list": ["Shan Wang", "Chuong Nguyen", "Jiawei Liu", "Yanhao Zhang", "Sundaram Muthu", "Fahira Afzal Maken", "Kaihao Zhang", "Hongdong Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f097"}, "filepath": "data/2403.18092.png", "tags": [], "_media_type": "image", "_rand": 0.9990601613666279, "arXiv_link": "https://arxiv.org/abs/2403.18092", "other_link": "", "title": "OCAI: Improving Optical Flow Estimation by Occlusion and Consistency Aware Interpolation", "abstract": "The scarcity of ground-truth labels poses one major challenge in developing\noptical flow estimation models that are both generalizable and robust. While\ncurrent methods rely on data augmentation, they have yet to fully exploit the\nrich information available in labeled video sequences. We propose OCAI, a\nmethod that supports robust frame interpolation by generating intermediate\nvideo frames alongside optical flows in between. Utilizing a forward warping\napproach, OCAI employs occlusion awareness to resolve ambiguities in pixel\nvalues and fills in missing values by leveraging the forward-backward\nconsistency of optical flows. Additionally, we introduce a teacher-student\nstyle semi-supervised learning method on top of the interpolated frames. Using\na pair of unlabeled frames and the teacher model's predicted optical flow, we\ngenerate interpolated frames and flows to train a student model. The teacher's\nweights are maintained using Exponential Moving Averaging of the student. Our\nevaluations demonstrate perceptually superior interpolation quality and\nenhanced optical flow accuracy on established benchmarks such as Sintel and\nKITTI.", "keywords": ["Low-level vision", "Image and video generation and manipulation"], "authors_list": ["Jisoo Jeong", "Hong Cai", "Risheek Garrepalli", "Jamie Lin", "Munawar Hayat", "Fatih Porikli"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f098"}, "filepath": "data/2312.00093.png", "tags": [], "_media_type": "image", "_rand": 0.9996702126989293, "arXiv_link": "https://arxiv.org/abs/2312.00093", "other_link": "", "title": "GraphDreamer: Compositional 3D Scene Synthesis from Scene Graphs", "abstract": "As pretrained text-to-image diffusion models become increasingly powerful,\nrecent efforts have been made to distill knowledge from these text-to-image\npretrained models for optimizing a text-guided 3D model. 
Most of the existing\nmethods generate a holistic 3D model from a plain text input. This can be\nproblematic when the text describes a complex scene with multiple objects,\nbecause the vectorized text embeddings are inherently unable to capture a\ncomplex description with multiple entities and relationships. Holistic 3D\nmodeling of the entire scene further prevents accurate grounding of text\nentities and concepts. To address this limitation, we propose GraphDreamer, a\nnovel framework to generate compositional 3D scenes from scene graphs, where\nobjects are represented as nodes and their interactions as edges. By exploiting\nnode and edge information in scene graphs, our method makes better use of the\npretrained text-to-image diffusion model and is able to fully disentangle\ndifferent objects without image-level supervision. To facilitate modeling of\nobject-wise relationships, we use signed distance fields as representation and\nimpose a constraint to avoid inter-penetration of objects. To avoid manual\nscene graph creation, we design a text prompt for ChatGPT to generate scene\ngraphs based on text inputs. We conduct both qualitative and quantitative\nexperiments to validate the effectiveness of GraphDreamer in generating\nhigh-fidelity compositional 3D scenes with disentangled object entities.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Gege Gao", "Weiyang Liu", "Anpei Chen", "Andreas Geiger", "Bernhard Sch\u00f6lkopf"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f099"}, "filepath": "data/2404.14808v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999466854232305, "arXiv_link": "https://arxiv.org/abs/2404.14808v1", "other_link": "", "title": "Visual-Augmented Dynamic Semantic Prototype for Generative Zero-Shot Learning", "abstract": "Generative Zero-shot learning (ZSL) learns a generator to synthesize visual\nsamples for unseen classes, which is an effective way to advance ZSL. However,\nexisting generative methods rely on the conditions of Gaussian noise and the\npredefined semantic prototype, which limit the generator only optimized on\nspecific seen classes rather than characterizing each visual instance,\nresulting in poor generalizations (\\textit{e.g.}, overfitting to seen classes).\nTo address this issue, we propose a novel Visual-Augmented Dynamic Semantic\nprototype method (termed VADS) to boost the generator to learn accurate\nsemantic-visual mapping by fully exploiting the visual-augmented knowledge into\nsemantic conditions. In detail, VADS consists of two modules: (1) Visual-aware\nDomain Knowledge Learning module (VDKL) learns the local bias and global prior\nof the visual features (referred to as domain visual knowledge), which replace\npure Gaussian noise to provide richer prior noise information; (2)\nVision-Oriented Semantic Updation module (VOSU) updates the semantic prototype\naccording to the visual representations of the samples. Ultimately, we\nconcatenate their output as a dynamic semantic prototype, which serves as the\ncondition of the generator. 
Extensive experiments demonstrate that our VADS\nachieves superior CZSL and GZSL performances on three prominent datasets and\noutperforms other state-of-the-art methods with averaging increases by 6.4\\%,\n5.9\\% and 4.2\\% on SUN, CUB and AWA2, respectively.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Wenjin Hou", "Shiming Chen", "Shuhuang Chen", "Ziming Hong", "Yan Wang", "Xuetao Feng", "Salman Khan", "Fahad Shahbaz Khan", "Xinge You"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f09a"}, "filepath": "data/2308.13223.png", "tags": [], "_media_type": "image", "_rand": 0.999280885795115, "arXiv_link": "https://arxiv.org/abs/2308.13223", "other_link": "https://efficientdreamer.github.io.", "title": "EfficientDreamer: High-Fidelity and Robust 3D Creation via Orthogonal-view Diffusion Priors", "abstract": "While image diffusion models have made significant progress in text-driven 3D\ncontent creation, they often fail to accurately capture the intended meaning of\ntext prompts, especially for view information. This limitation leads to the\nJanus problem, where multi-faced 3D models are generated under the guidance of\nsuch diffusion models. In this paper, we propose a robust high-quality 3D\ncontent generation pipeline by exploiting orthogonal-view image guidance.\nFirst, we introduce a novel 2D diffusion model that generates an image\nconsisting of four orthogonal-view sub-images based on the given text prompt.\nThen, the 3D content is created using this diffusion model. Notably, the\ngenerated orthogonal-view image provides strong geometric structure priors and\nthus improves 3D consistency. As a result, it effectively resolves the Janus\nproblem and significantly enhances the quality of 3D content creation.\nAdditionally, we present a 3D synthesis fusion network that can further improve\nthe details of the generated 3D contents. Both quantitative and qualitative\nevaluations demonstrate that our method surpasses previous text-to-3D\ntechniques. Project page: https://efficientdreamer.github.io.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Zhipeng Hu", "Minda Zhao", "Chaoyi Zhao", "Xinyue Liang", "Lincheng Li", "Zeng Zhao", "Changjie Fan", "Xiaowei Zhou", "Xin Yu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f09b"}, "filepath": "data/2404.01089.png", "tags": [], "_media_type": "image", "_rand": 0.9996899253539573, "arXiv_link": "https://arxiv.org/abs/2404.01089", "other_link": "", "title": "Texture-Preserving Diffusion Models for High-Fidelity Virtual Try-On", "abstract": "Image-based virtual try-on is an increasingly important task for online\nshopping. It aims to synthesize images of a specific person wearing a specified\ngarment. Diffusion model-based approaches have recently become popular, as they\nare excellent at image synthesis tasks. However, these approaches usually\nemploy additional image encoders and rely on the cross-attention mechanism for\ntexture transfer from the garment to the person image, which affects the\ntry-on's efficiency and fidelity. 
To address these issues, we propose an\nTexture-Preserving Diffusion (TPD) model for virtual try-on, which enhances the\nfidelity of the results and introduces no additional image encoders.\nAccordingly, we make contributions from two aspects. First, we propose to\nconcatenate the masked person and reference garment images along the spatial\ndimension and utilize the resulting image as the input for the diffusion\nmodel's denoising UNet. This enables the original self-attention layers\ncontained in the diffusion model to achieve efficient and accurate texture\ntransfer. Second, we propose a novel diffusion-based method that predicts a\nprecise inpainting mask based on the person and reference garment images,\nfurther enhancing the reliability of the try-on results. In addition, we\nintegrate mask prediction and image synthesis into a single compact model. The\nexperimental results show that our approach can be applied to various try-on\ntasks, e.g., garment-to-person and person-to-person try-ons, and significantly\noutperforms state-of-the-art methods on popular VITON, VITON-HD databases.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Xu Yang", "Changxing Ding", "Zhibin Hong", "Junhao Huang", "Jin Tao", "Xiangmin Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f09c"}, "filepath": "data/2312.04964.png", "tags": [], "_media_type": "image", "_rand": 0.999630021381177, "arXiv_link": "https://arxiv.org/abs/2312.04964", "other_link": "", "title": "ZePT: Zero-Shot Pan-Tumor Segmentation via Query-Disentangling and Self-Prompting", "abstract": "The long-tailed distribution problem in medical image analysis reflects a\nhigh prevalence of common conditions and a low prevalence of rare ones, which\nposes a significant challenge in developing a unified model capable of\nidentifying rare or novel tumor categories not encountered during training. In\nthis paper, we propose a new zero-shot pan-tumor segmentation framework (ZePT)\nbased on query-disentangling and self-prompting to segment unseen tumor\ncategories beyond the training set. ZePT disentangles the object queries into\ntwo subsets and trains them in two stages. Initially, it learns a set of\nfundamental queries for organ segmentation through an object-aware feature\ngrouping strategy, which gathers organ-level visual features. Subsequently, it\nrefines the other set of advanced queries that focus on the auto-generated\nvisual prompts for unseen tumor segmentation. Moreover, we introduce\nquery-knowledge alignment at the feature level to enhance each query's\ndiscriminative representation and generalizability. 
Extensive experiments on\nvarious tumor segmentation tasks demonstrate the performance superiority of\nZePT, which surpasses the previous counterparts and evidence the promising\nability for zero-shot tumor segmentation in real-world settings.", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Yankai Jiang", "Zhongzhen Huang", "Rongzhao Zhang", "Xiaofan Zhang", "Shaoting Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f09d"}, "filepath": "data/2311.16714v1.png", "tags": [], "_media_type": "image", "_rand": 0.999310428007105, "arXiv_link": "https://arxiv.org/abs/2311.16714v1", "other_link": "", "title": "Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld", "abstract": "While large language models (LLMs) excel in a simulated world of texts, they\nstruggle to interact with the more realistic world without perceptions of other\nmodalities such as visual or audio signals. Although vision-language models\n(VLMs) integrate LLM modules (1) aligned with static image features, and (2)\nmay possess prior knowledge of world dynamics (as demonstrated in the text\nworld), they have not been trained in an embodied visual world and thus cannot\nalign with its dynamics. On the other hand, training an embodied agent in a\nnoisy visual world without expert guidance is often challenging and\ninefficient. In this paper, we train a VLM agent living in a visual world using\nan LLM agent excelling in a parallel text world (but inapplicable to the visual\nworld). Specifically, we distill LLM's reflection outcomes (improved actions by\nanalyzing mistakes) in a text world's tasks to finetune the VLM on the same\ntasks of the visual world, resulting in an Embodied Multi-Modal Agent (EMMA)\nquickly adapting to the visual world dynamics. Such cross-modality imitation\nlearning between the two parallel worlds enables EMMA to generalize to a broad\nscope of new tasks without any further guidance from the LLM expert. Extensive\nevaluations on the ALFWorld benchmark highlight EMMA's superior performance to\nSOTA VLM-based agents across diverse tasks, e.g., 20%-70% improvement in the\nsuccess rate.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Yijun Yang", "Tianyi Zhou", "kanxue Li", "Dapeng Tao", "Lusong Li", "Li Shen", "Xiaodong He", "Jing Jiang", "Yuhui Shi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f09e"}, "filepath": "data/2311.16493.png", "tags": [], "_media_type": "image", "_rand": 0.9996594506949377, "arXiv_link": "https://arxiv.org/abs/2311.16493", "other_link": "", "title": "Mip-Splatting: Alias-free 3D Gaussian Splatting", "abstract": "Recently, 3D Gaussian Splatting has demonstrated impressive novel view\nsynthesis results, reaching high fidelity and efficiency. However, strong\nartifacts can be observed when changing the sampling rate, \\eg, by changing\nfocal length or camera distance. We find that the source for this phenomenon\ncan be attributed to the lack of 3D frequency constraints and the usage of a 2D\ndilation filter. 
To address this problem, we introduce a 3D smoothing filter\nwhich constrains the size of the 3D Gaussian primitives based on the maximal\nsampling frequency induced by the input views, eliminating high-frequency\nartifacts when zooming in. Moreover, replacing 2D dilation with a 2D Mip\nfilter, which simulates a 2D box filter, effectively mitigates aliasing and\ndilation issues. Our evaluation, including scenarios such a training on\nsingle-scale images and testing on multiple scales, validates the effectiveness\nof our approach.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Zehao Yu", "Anpei Chen", "Binbin Huang", "Torsten Sattler", "Andreas Geiger"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f09f"}, "filepath": "data/2403.10071.png", "tags": [], "_media_type": "image", "_rand": 0.999323769218648, "arXiv_link": "https://arxiv.org/abs/2403.10071", "other_link": "", "title": "Codebook Transfer with Part-of-Speech for Vector-Quantized Image Modeling", "abstract": "Vector-Quantized Image Modeling (VQIM) is a fundamental research problem in\nimage synthesis, which aims to represent an image with a discrete token\nsequence. Existing studies effectively address this problem by learning a\ndiscrete codebook from scratch and in a code-independent manner to quantize\ncontinuous representations into discrete tokens. However, learning a codebook\nfrom scratch and in a code-independent manner is highly challenging, which may\nbe a key reason causing codebook collapse, i.e., some code vectors can rarely\nbe optimized without regard to the relationship between codes and good codebook\npriors such that die off finally. In this paper, inspired by pretrained\nlanguage models, we find that these language models have actually pretrained a\nsuperior codebook via a large number of text corpus, but such information is\nrarely exploited in VQIM. To this end, we propose a novel codebook transfer\nframework with part-of-speech, called VQCT, which aims to transfer a\nwell-trained codebook from pretrained language models to VQIM for robust\ncodebook learning. Specifically, we first introduce a pretrained codebook from\nlanguage models and part-of-speech knowledge as priors. Then, we construct a\nvision-related codebook with these priors for achieving codebook transfer.\nFinally, a novel codebook transfer network is designed to exploit abundant\nsemantic relationships between codes contained in pretrained codebooks for\nrobust VQIM codebook learning. Experimental results on four datasets show that\nour VQCT method achieves superior VQIM performance over previous\nstate-of-the-art methods.", "keywords": [], "authors_list": ["Baoquan Zhang", "Huaibin Wang", "Luo Chuyao", "Xutao Li", "Guotao liang", "Yunming Ye", "joeq", "Yao He"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0a0"}, "filepath": "data/2404.00168.png", "tags": [], "_media_type": "image", "_rand": 0.9998936957403299, "arXiv_link": "https://arxiv.org/abs/2404.00168", "other_link": "", "title": "Multi-Level Neural Scene Graphs for Dynamic Urban Environments", "abstract": "We estimate the radiance field of large-scale dynamic areas from multiple\nvehicle captures under varying environmental conditions. 
Previous works in this\ndomain are either restricted to static environments, do not scale to more than\na single short video, or struggle to separately represent dynamic object\ninstances. To this end, we present a novel, decomposable radiance field\napproach for dynamic urban environments. We propose a multi-level neural scene\ngraph representation that scales to thousands of images from dozens of\nsequences with hundreds of fast-moving objects. To enable efficient training\nand rendering of our representation, we develop a fast composite ray sampling\nand rendering scheme. To test our approach in urban driving scenarios, we\nintroduce a new, novel view synthesis benchmark. We show that our approach\noutperforms prior art by a significant margin on both established and our\nproposed benchmark while being faster in training and rendering.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Tobias Fischer", "Lorenzo Porzi", "Samuel Rota Bul\u00f2", "Marc Pollefeys", "Peter Kontschieder"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0a1"}, "filepath": "data/2403.10897.png", "tags": [], "_media_type": "image", "_rand": 0.9991994197098406, "arXiv_link": "https://arxiv.org/abs/2403.10897", "other_link": "https://github.com/Guanzhou-Ke/MRDD.", "title": "Rethinking Multi-view Representation Learning via Distilled Disentangling", "abstract": "Multi-view representation learning aims to derive robust representations that\nare both view-consistent and view-specific from diverse data sources. This\npaper presents an in-depth analysis of existing approaches in this domain,\nhighlighting a commonly overlooked aspect: the redundancy between\nview-consistent and view-specific representations. To this end, we propose an\ninnovative framework for multi-view representation learning, which incorporates\na technique we term 'distilled disentangling'. Our method introduces the\nconcept of masked cross-view prediction, enabling the extraction of compact,\nhigh-quality view-consistent representations from various sources without\nincurring extra computational overhead. Additionally, we develop a distilled\ndisentangling module that efficiently filters out consistency-related\ninformation from multi-view representations, resulting in purer view-specific\nrepresentations. This approach significantly reduces redundancy between\nview-consistent and view-specific representations, enhancing the overall\nefficiency of the learning process. Our empirical evaluations reveal that\nhigher mask ratios substantially improve the quality of view-consistent\nrepresentations. 
Moreover, we find that reducing the dimensionality of\nview-consistent representations relative to that of view-specific\nrepresentations further refines the quality of the combined representations.\nOur code is accessible at: https://github.com/Guanzhou-Ke/MRDD.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Guanzhou Ke", "Bo Wang", "Xiao-Li Wang", "Shengfeng He"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0a2"}, "filepath": "data/2402.14371v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995157267508068, "arXiv_link": "https://arxiv.org/html/2402.14371v2", "other_link": "", "title": "Neural Refinement for Absolute Pose Regression with Feature Synthesis", "abstract": "Absolute Pose Regressors (APRs) directly estimate camera poses from monocular\nimages, but their accuracy is unstable for different queries. Uncertainty-aware\nAPRs provide uncertainty information on the estimated pose, alleviating the\nimpact of these unreliable predictions. However, existing uncertainty modelling\ntechniques are often coupled with a specific APR architecture, resulting in\nsuboptimal performance compared to state-of-the-art (SOTA) APR methods. This\nwork introduces a novel APR-agnostic framework, HR-APR, that formulates\nuncertainty estimation as cosine similarity estimation between the query and\ndatabase features. It does not rely on or affect APR network architecture,\nwhich is flexible and computationally efficient. In addition, we take advantage\nof the uncertainty for pose refinement to enhance the performance of APR. The\nextensive experiments demonstrate the effectiveness of our framework, reducing\n27.4\\% and 15.2\\% of computational overhead on the 7Scenes and Cambridge\nLandmarks datasets while maintaining the SOTA accuracy in single-image APRs.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Shuai Chen", "Yash Bhalgat", "Xinghui Li", "Jia-Wang Bian", "Kejie Li", "Zirui Wang", "Victor Adrian Prisacariu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0a3"}, "filepath": "data/2311.13120.png", "tags": [], "_media_type": "image", "_rand": 0.9996997657822674, "arXiv_link": "https://arxiv.org/abs/2311.13120", "other_link": "https://github.com/bytedance/E2STR", "title": "Multi-modal In-Context Learning Makes an Ego-evolving Scene Text Recognizer", "abstract": "Scene text recognition (STR) in the wild frequently encounters challenges\nwhen coping with domain variations, font diversity, shape deformations, etc. A\nstraightforward solution is performing model fine-tuning tailored to a specific\nscenario, but it is computationally intensive and requires multiple model\ncopies for various scenarios. Recent studies indicate that large language\nmodels (LLMs) can learn from a few demonstration examples in a training-free\nmanner, termed \"In-Context Learning\" (ICL). Nevertheless, applying LLMs as a\ntext recognizer is unacceptably resource-consuming. Moreover, our pilot\nexperiments on LLMs show that ICL fails in STR, mainly attributed to the\ninsufficient incorporation of contextual information from diverse samples in\nthe training stage. 
To this end, we introduce E$^2$STR, a STR model trained\nwith context-rich scene text sequences, where the sequences are generated via\nour proposed in-context training strategy. E$^2$STR demonstrates that a\nregular-sized model is sufficient to achieve effective ICL capabilities in STR.\nExtensive experiments show that E$^2$STR exhibits remarkable training-free\nadaptation in various scenarios and outperforms even the fine-tuned\nstate-of-the-art approaches on public benchmarks. The code is released at\nhttps://github.com/bytedance/E2STR .", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Zhen Zhao", "Jingqun Tang", "Chunhui Lin", "Binghong Wu", "Can Huang", "Hao Liu", "Xin Tan", "Zhizhong Zhang", "Yuan Xie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0a4"}, "filepath": "data/2312.15905.png", "tags": [], "_media_type": "image", "_rand": 0.999715952083339, "arXiv_link": "https://arxiv.org/abs/2312.15905", "other_link": "", "title": "Cross Initialization for Face Personalization of Text-to-Image Models", "abstract": "Recently, there has been a surge in face personalization techniques,\nbenefiting from the advanced capabilities of pretrained text-to-image diffusion\nmodels. Among these, a notable method is Textual Inversion, which generates\npersonalized images by inverting given images into textual embeddings. However,\nmethods based on Textual Inversion still struggle with balancing the trade-off\nbetween reconstruction quality and editability. In this study, we examine this\nissue through the lens of initialization. Upon closely examining traditional\ninitialization methods, we identified a significant disparity between the\ninitial and learned embeddings in terms of both scale and orientation. The\nscale of the learned embedding can be up to 100 times greater than that of the\ninitial embedding. Such a significant change in the embedding could increase\nthe risk of overfitting, thereby compromising the editability. Driven by this\nobservation, we introduce a novel initialization method, termed Cross\nInitialization, that significantly narrows the gap between the initial and\nlearned embeddings. This method not only improves both reconstruction and\neditability but also reduces the optimization steps from 5000 to 320.\nFurthermore, we apply a regularization term to keep the learned embedding close\nto the initial embedding. We show that when combined with Cross Initialization,\nthis regularization term can effectively improve editability. We provide\ncomprehensive empirical evidence to demonstrate the superior performance of our\nmethod compared to the baseline methods. Notably, in our experiments, Cross\nInitialization is the only method that successfully edits an individual's\nfacial expression. Additionally, a fast version of our method allows for\ncapturing an input image in roughly 26 seconds, while surpassing the baseline\nmethods in terms of both reconstruction and editability. 
Code will be made\npublicly available.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Lianyu Pang", "Jian Yin", "Haoran Xie", "Qiping Wang", "Qing Li", "Xudong Mao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0a5"}, "filepath": "data/2311.12342.png", "tags": [], "_media_type": "image", "_rand": 0.9996455083925023, "arXiv_link": "https://arxiv.org/abs/2311.12342", "other_link": "", "title": "Zero-Painter: Training-Free Layout Control for Text-to-Image Synthesis", "abstract": "Recent text-to-image diffusion models have reached an unprecedented level in\ngenerating high-quality images. However, their exclusive reliance on textual\nprompts often falls short in precise control of image compositions. In this\npaper, we propose LoCo, a training-free approach for layout-to-image Synthesis\nthat excels in producing high-quality images aligned with both textual prompts\nand layout instructions. Specifically, we introduce a Localized Attention\nConstraint (LAC), leveraging semantic affinity between pixels in self-attention\nmaps to create precise representations of desired objects and effectively\nensure the accurate placement of objects in designated regions. We further\npropose a Padding Token Constraint (PTC) to leverage the semantic information\nembedded in previously neglected padding tokens, improving the consistency\nbetween object appearance and layout instructions. LoCo seamlessly integrates\ninto existing text-to-image and layout-to-image models, enhancing their\nperformance in spatial control and addressing semantic failures observed in\nprior methods. Extensive experiments showcase the superiority of our approach,\nsurpassing existing state-of-the-art training-free layout-to-image methods both\nqualitatively and quantitatively across multiple benchmarks.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Marianna Ohanyan", "Hayk Manukyan", "Zhangyang Wang", "Shant Navasardyan", "Humphrey Shi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0a6"}, "filepath": "data/2312.06739.png", "tags": [], "_media_type": "image", "_rand": 0.9996984636950207, "arXiv_link": "https://arxiv.org/abs/2312.06739", "other_link": "", "title": "SmartEdit: Exploring Complex Instruction-based Image Editing with Multimodal Large Language Models", "abstract": "Current instruction-based editing methods, such as InstructPix2Pix, often\nfail to produce satisfactory results in complex scenarios due to their\ndependence on the simple CLIP text encoder in diffusion models. To rectify\nthis, this paper introduces SmartEdit, a novel approach to instruction-based\nimage editing that leverages Multimodal Large Language Models (MLLMs) to\nenhance their understanding and reasoning capabilities. However, direct\nintegration of these elements still faces challenges in situations requiring\ncomplex reasoning. To mitigate this, we propose a Bidirectional Interaction\nModule that enables comprehensive bidirectional information interactions\nbetween the input image and the MLLM output. 
During training, we initially\nincorporate perception data to boost the perception and understanding\ncapabilities of diffusion models. Subsequently, we demonstrate that a small\namount of complex instruction editing data can effectively stimulate\nSmartEdit's editing capabilities for more complex instructions. We further\nconstruct a new evaluation dataset, Reason-Edit, specifically tailored for\ncomplex instruction-based image editing. Both quantitative and qualitative\nresults on this evaluation dataset indicate that our SmartEdit surpasses\nprevious methods, paving the way for the practical application of complex\ninstruction-based image editing.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Yuzhou Huang", "Liangbin Xie", "Xintao Wang", "Ziyang Yuan", "Xiaodong Cun", "Yixiao Ge", "Jiantao Zhou", "Chao Dong", "Rui Huang", "Ruimao Zhang", "Ying Shan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0a7"}, "filepath": "data/2405.07011.png", "tags": [], "_media_type": "image", "_rand": 0.9997481063967156, "arXiv_link": "https://arxiv.org/abs/2405.07011", "other_link": "https://github.com/ZzoomD/FairSAD.", "title": "FADES: Fair Disentanglement with Sensitive Relevance", "abstract": "Group fairness for Graph Neural Networks (GNNs), which emphasizes algorithmic\ndecisions neither favoring nor harming certain groups defined by sensitive\nattributes (e.g., race and gender), has gained considerable attention. In\nparticular, the objective of group fairness is to ensure that the decisions\nmade by GNNs are independent of the sensitive attribute. To achieve this\nobjective, most existing approaches involve eliminating sensitive attribute\ninformation in node representations or algorithmic decisions. However, such\nways may also eliminate task-related information due to its inherent\ncorrelation with the sensitive attribute, leading to a sacrifice in utility. In\nthis work, we focus on improving the fairness of GNNs while preserving\ntask-related information and propose a fair GNN framework named FairSAD.\nInstead of eliminating sensitive attribute information, FairSAD enhances the\nfairness of GNNs via Sensitive Attribute Disentanglement (SAD), which separates\nthe sensitive attribute-related information into an independent component to\nmitigate its impact. Additionally, FairSAD utilizes a channel masking mechanism\nto adaptively identify the sensitive attribute-related component and\nsubsequently decorrelates it. Overall, FairSAD minimizes the impact of the\nsensitive attribute on GNN outcomes rather than eliminating sensitive\nattributes, thereby preserving task-related information associated with the\nsensitive attribute. Furthermore, experiments conducted on several real-world\ndatasets demonstrate that FairSAD outperforms other state-of-the-art methods by\na significant margin in terms of both fairness and utility performance. 
Our\nsource code is available at https://github.com/ZzoomD/FairSAD.", "keywords": [], "authors_list": ["Taeuk Jang", "Xiaoqian Wang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computers and Society"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0a8"}, "filepath": "data/2402.17726.png", "tags": [], "_media_type": "image", "_rand": 0.9996874283294578, "arXiv_link": "https://arxiv.org/abs/2402.17726", "other_link": "https://github.com/syp2ysy/VRP-SAM}", "title": "VRP-SAM: SAM with Visual Reference Prompt", "abstract": "In this paper, we propose a novel Visual Reference Prompt (VRP) encoder that\nempowers the Segment Anything Model (SAM) to utilize annotated reference images\nas prompts for segmentation, creating the VRP-SAM model. In essence, VRP-SAM\ncan utilize annotated reference images to comprehend specific objects and\nperform segmentation of specific objects in target image. It is note that the\nVRP encoder can support a variety of annotation formats for reference images,\nincluding \\textbf{point}, \\textbf{box}, \\textbf{scribble}, and \\textbf{mask}.\nVRP-SAM achieves a breakthrough within the SAM framework by extending its\nversatility and applicability while preserving SAM's inherent strengths, thus\nenhancing user-friendliness. To enhance the generalization ability of VRP-SAM,\nthe VRP encoder adopts a meta-learning strategy. To validate the effectiveness\nof VRP-SAM, we conducted extensive empirical studies on the Pascal and COCO\ndatasets. Remarkably, VRP-SAM achieved state-of-the-art performance in visual\nreference segmentation with minimal learnable parameters. Furthermore, VRP-SAM\ndemonstrates strong generalization capabilities, allowing it to perform\nsegmentation of unseen objects and enabling cross-domain segmentation. The\nsource code and models will be available at\n\\url{https://github.com/syp2ysy/VRP-SAM}", "keywords": [], "authors_list": ["Yanpeng Sun", "Jiahui Chen", "Shan Zhang", "Xinyu Zhang", "Qiang Chen", "gang zhang", "Errui Ding", "Jingdong Wang", "Zechao Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0a9"}, "filepath": "data/2312.06420.png", "tags": [], "_media_type": "image", "_rand": 0.999931340997989, "arXiv_link": "https://arxiv.org/abs/2312.06420", "other_link": "https://github.com/LiljaAdam/geographical-splits", "title": "Localization Is All You Evaluate: Data Leakage in Online Mapping Datasets and How to Fix It", "abstract": "The task of online mapping is to predict a local map using current sensor\nobservations, e.g. from lidar and camera, without relying on a pre-built map.\nState-of-the-art methods are based on supervised learning and are trained\npredominantly using two datasets: nuScenes and Argoverse 2. However, these\ndatasets revisit the same geographic locations across training, validation, and\ntest sets. Specifically, over $80$% of nuScenes and $40$% of Argoverse 2\nvalidation and test samples are less than $5$ m from a training sample. At test\ntime, the methods are thus evaluated more on how well they localize within a\nmemorized implicit map built from the training data than on extrapolating to\nunseen locations. Naturally, this data leakage causes inflated performance\nnumbers and we propose geographically disjoint data splits to reveal the true\nperformance in unseen environments. 
Experimental results show that methods\nperform considerably worse, some dropping more than $45$ mAP, when trained and\nevaluated on proper data splits. Additionally, a reassessment of prior design\nchoices reveals diverging conclusions from those based on the original split.\nNotably, the impact of lifting methods and the support from auxiliary tasks\n(e.g., depth supervision) on performance appears less substantial or follows a\ndifferent trajectory than previously perceived. Splits can be found at\nhttps://github.com/LiljaAdam/geographical-splits", "keywords": [], "authors_list": ["Adam Lilja", "Junsheng Fu", "Erik Stenborg", "Lars Hammarstrand"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0aa"}, "filepath": "data/2405.19819.png", "tags": [], "_media_type": "image", "_rand": 0.9992521812458653, "arXiv_link": "https://arxiv.org/abs/2405.19819", "other_link": "https://light.princeton.edu/gatedfields/.", "title": "Gated Fields: Learning Scene Reconstruction from Gated Videos", "abstract": "Reconstructing outdoor 3D scenes from temporal observations is a challenge\nthat recent work on neural fields has offered a new avenue for. However,\nexisting methods that recover scene properties, such as geometry, appearance,\nor radiance, solely from RGB captures often fail when handling poorly-lit or\ntexture-deficient regions. Similarly, recovering scenes with scanning LiDAR\nsensors is also difficult due to their low angular sampling rate which makes\nrecovering expansive real-world scenes difficult. Tackling these gaps, we\nintroduce Gated Fields - a neural scene reconstruction method that utilizes\nactive gated video sequences. To this end, we propose a neural rendering\napproach that seamlessly incorporates time-gated capture and illumination. Our\nmethod exploits the intrinsic depth cues in the gated videos, achieving precise\nand dense geometry reconstruction irrespective of ambient illumination\nconditions. We validate the method across day and night scenarios and find that\nGated Fields compares favorably to RGB and LiDAR reconstruction methods. Our\ncode and datasets are available at https://light.princeton.edu/gatedfields/.", "keywords": ["Scene analysis and understanding", "Computational imaging and physics-based vision"], "authors_list": ["Andrea Ramazzina", "Stefanie Walz", "Pragyan Dahal", "Mario Bijelic", "Felix Heide"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0ab"}, "filepath": "data/2307.00842.png", "tags": [], "_media_type": "image", "_rand": 0.9990675869254168, "arXiv_link": "https://arxiv.org/abs/2307.00842", "other_link": "", "title": "VINECS: Video-based Neural Character Skinning", "abstract": "Rigging and skinning clothed human avatars is a challenging task and\ntraditionally requires a lot of manual work and expertise. Recent methods\naddressing it either generalize across different characters or focus on\ncapturing the dynamics of a single character observed under different pose\nconfigurations. 
However, the former methods typically predict solely static\nskinning weights, which perform poorly for highly articulated poses, and the\nlatter ones either require dense 3D character scans in different poses or\ncannot generate an explicit mesh with vertex correspondence over time. To\naddress these challenges, we propose a fully automated approach for creating a\nfully rigged character with pose-dependent skinning weights, which can be\nsolely learned from multi-view video. Therefore, we first acquire a rigged\ntemplate, which is then statically skinned. Next, a coordinate-based MLP learns\na skinning weights field parameterized over the position in a canonical pose\nspace and the respective pose. Moreover, we introduce our pose- and\nview-dependent appearance field allowing us to differentiably render and\nsupervise the posed mesh using multi-view imagery. We show that our approach\noutperforms state-of-the-art while not relying on dense 4D scans.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Zhouyingcheng Liao", "Vladislav Golyanik", "Marc Habermann", "Christian Theobalt"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0ac"}, "filepath": "data/2404.05225.png", "tags": [], "_media_type": "image", "_rand": 0.9994641092691228, "arXiv_link": "https://arxiv.org/abs/2404.05225", "other_link": "https://github.com/AlibabaResearch/AdvancedLiterateMachinery/tree/main/DocumentUnderstanding/LayoutLLM", "title": "LayoutLLM: Layout Instruction Tuning with Large Language Models for Document Understanding", "abstract": "Recently, leveraging large language models (LLMs) or multimodal large\nlanguage models (MLLMs) for document understanding has been proven very\npromising. However, previous works that employ LLMs/MLLMs for document\nunderstanding have not fully explored and utilized the document layout\ninformation, which is vital for precise document understanding. In this paper,\nwe propose LayoutLLM, an LLM/MLLM based method for document understanding. The\ncore of LayoutLLM is a layout instruction tuning strategy, which is specially\ndesigned to enhance the comprehension and utilization of document layouts. The\nproposed layout instruction tuning strategy consists of two components:\nLayout-aware Pre-training and Layout-aware Supervised Fine-tuning. To capture\nthe characteristics of document layout in Layout-aware Pre-training, three\ngroups of pre-training tasks, corresponding to document-level, region-level and\nsegment-level information, are introduced. Furthermore, a novel module called\nlayout chain-of-thought (LayoutCoT) is devised to enable LayoutLLM to focus on\nregions relevant to the question and generate accurate answers. LayoutCoT is\neffective for boosting the performance of document understanding. Meanwhile, it\nbrings a certain degree of interpretability, which could facilitate manual\ninspection and correction. Experiments on standard benchmarks show that the\nproposed LayoutLLM significantly outperforms existing methods that adopt\nopen-source 7B LLMs/MLLMs for document understanding. 
The training data of the\nLayoutLLM is publicly available at\nhttps://github.com/AlibabaResearch/AdvancedLiterateMachinery/tree/main/DocumentUnderstanding/LayoutLLM", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Chuwei Luo", "Yufan Shen", "Zhaoqing Zhu", "Qi Zheng", "Zhi Yu", "Cong Yao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0ad"}, "filepath": "data/2403.13667.png", "tags": [], "_media_type": "image", "_rand": 0.9990985098311139, "arXiv_link": "https://arxiv.org/abs/2403.13667", "other_link": "https://github.com/Carmenw1203/DanceCamera3D-Official.", "title": "DanceCamera3D: 3D Camera Movement Synthesis with Music and Dance", "abstract": "Choreographers determine what the dances look like, while cameramen determine\nthe final presentation of dances. Recently, various methods and datasets have\nshowcased the feasibility of dance synthesis. However, camera movement\nsynthesis with music and dance remains an unsolved challenging problem due to\nthe scarcity of paired data. Thus, we present DCM, a new multi-modal 3D\ndataset, which for the first time combines camera movement with dance motion\nand music audio. This dataset encompasses 108 dance sequences (3.2 hours) of\npaired dance-camera-music data from the anime community, covering 4 music\ngenres. With this dataset, we uncover that dance camera movement is\nmultifaceted and human-centric, and possesses multiple influencing factors,\nmaking dance camera synthesis a more challenging task compared to camera or\ndance synthesis alone. To overcome these difficulties, we propose\nDanceCamera3D, a transformer-based diffusion model that incorporates a novel\nbody attention loss and a condition separation strategy. For evaluation, we\ndevise new metrics measuring camera movement quality, diversity, and dancer\nfidelity. Utilizing these metrics, we conduct extensive experiments on our DCM\ndataset, providing both quantitative and qualitative evidence showcasing the\neffectiveness of our DanceCamera3D model. Code and video demos are available at\nhttps://github.com/Carmenw1203/DanceCamera3D-Official.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models", "Biometrics and human analysis"], "authors_list": ["Zixuan Wang", "Jia Jia", "Shikun Sun", "Haozhe Wu", "Rong Han", "Zhenyu Li", "Di Tang", "Jiaqing Zhou", "Jiebo Luo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0ae"}, "filepath": "data/2311.11178.png", "tags": [], "_media_type": "image", "_rand": 0.9995178928406712, "arXiv_link": "https://arxiv.org/abs/2311.11178", "other_link": "https://github.com/kaist-dmlab/pcb", "title": "Active Prompt Learning in Vision Language Models", "abstract": "Pre-trained Vision Language Models (VLMs) have demonstrated notable progress\nin various zero-shot tasks, such as classification and retrieval. Despite their\nperformance, because improving performance on new tasks requires task-specific\nknowledge, their adaptation is essential. While labels are needed for the\nadaptation, acquiring them is typically expensive. 
To overcome this challenge,\nactive learning, a method of achieving a high performance by obtaining labels\nfor a small number of samples from experts, has been studied. Active learning\nprimarily focuses on selecting unlabeled samples for labeling and leveraging\nthem to train models. In this study, we pose the question, \"how can the\npre-trained VLMs be adapted under the active learning framework?\" In response\nto this inquiry, we observe that (1) simply applying a conventional active\nlearning framework to pre-trained VLMs even may degrade performance compared to\nrandom selection because of the class imbalance in labeling candidates, and (2)\nthe knowledge of VLMs can provide hints for achieving the balance before\nlabeling. Based on these observations, we devise a novel active learning\nframework for VLMs, denoted as PCB. To assess the effectiveness of our\napproach, we conduct experiments on seven different real-world datasets, and\nthe results demonstrate that PCB surpasses conventional active learning and\nrandom sampling methods. Code will be available in\nhttps://github.com/kaist-dmlab/pcb .", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Jihwan Bang", "Sumyeong Ahn", "Jae-Gil Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0af"}, "filepath": "data/2305.10300v3.png", "tags": [], "_media_type": "image", "_rand": 0.999584877820276, "arXiv_link": "https://arxiv.org/html/2305.10300v3", "other_link": "", "title": "One-Prompt to Segment All Medical Images", "abstract": "Large foundation models, known for their strong zero-shot generalization,\nhave excelled in visual and language applications. However, applying them to\nmedical image segmentation, a domain with diverse imaging types and target\nlabels, remains an open challenge. Current approaches, such as adapting\ninteractive segmentation models like Segment Anything Model (SAM), require user\nprompts for each sample during inference. Alternatively, transfer learning\nmethods like few/one-shot models demand labeled samples, leading to high costs.\nThis paper introduces a new paradigm toward the universal medical image\nsegmentation, termed 'One-Prompt Segmentation.' One-Prompt Segmentation\ncombines the strengths of one-shot and interactive methods. In the inference\nstage, with just \\textbf{one prompted sample}, it can adeptly handle the unseen\ntask in a single forward pass. We train One-Prompt Model on 64 open-source\nmedical datasets, accompanied by the collection of over 3,000 clinician-labeled\nprompts. Tested on 14 previously unseen tasks, the One-Prompt Model showcases\nsuperior zero-shot segmentation capabilities, outperforming a wide range of\nrelated methods. 
The code and annotated data will be publicly released.", "keywords": ["Large multimodal models and prompting techniques", "Medical imaging and biological vision"], "authors_list": ["Wu", "Min Xu"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0b0"}, "filepath": "data/2312.05251.png", "tags": [], "_media_type": "image", "_rand": 0.9999963131456678, "arXiv_link": "https://arxiv.org/abs/2312.05251", "other_link": "https://geopavlakos.github.io/hamer/.", "title": "Reconstructing Hands in 3D with Transformers", "abstract": "We present an approach that can reconstruct hands in 3D from monocular input.\nOur approach for Hand Mesh Recovery, HaMeR, follows a fully transformer-based\narchitecture and can analyze hands with significantly increased accuracy and\nrobustness compared to previous work. The key to HaMeR's success lies in\nscaling up both the data used for training and the capacity of the deep network\nfor hand reconstruction. For training data, we combine multiple datasets that\ncontain 2D or 3D hand annotations. For the deep model, we use a large scale\nVision Transformer architecture. Our final model consistently outperforms the\nprevious baselines on popular 3D hand pose benchmarks. To further evaluate the\neffect of our design in non-controlled settings, we annotate existing\nin-the-wild datasets with 2D hand keypoint annotations. On this newly collected\ndataset of annotations, HInt, we demonstrate significant improvements over\nexisting baselines. We make our code, data and models available on the project\nwebsite: https://geopavlakos.github.io/hamer/.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Georgios Pavlakos", "Dandan Shan", "Ilija Radosavovic", "Angjoo Kanazawa", "David Fouhey", "Jitendra Malik"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0b1"}, "filepath": "data/2404.01509.png", "tags": [], "_media_type": "image", "_rand": 0.9991977630424365, "arXiv_link": "https://arxiv.org/abs/2404.01509", "other_link": "https://github.com/paulgavrikov/biases_vs_generalization", "title": "Can Biases in ImageNet Models Explain Generalization?", "abstract": "The robust generalization of models to rare, in-distribution (ID) samples\ndrawn from the long tail of the training distribution and to\nout-of-training-distribution (OOD) samples is one of the major challenges of\ncurrent deep learning methods. For image classification, this manifests in the\nexistence of adversarial attacks, the performance drops on distorted images,\nand a lack of generalization to concepts such as sketches. The current\nunderstanding of generalization in neural networks is very limited, but some\nbiases that differentiate models from human vision have been identified and\nmight be causing these limitations. Consequently, several attempts with varying\nsuccess have been made to reduce these biases during training to improve\ngeneralization. We take a step back and sanity-check these attempts. 
Fixing the\narchitecture to the well-established ResNet-50, we perform a large-scale study\non 48 ImageNet models obtained via different training methods to understand how\nand if these biases - including shape bias, spectral biases, and critical bands\n- interact with generalization. Our extensive study results reveal that\ncontrary to previous findings, these biases are insufficient to accurately\npredict the generalization of a model holistically. We provide access to all\ncheckpoints and evaluation code at\nhttps://github.com/paulgavrikov/biases_vs_generalization", "keywords": [], "authors_list": ["Paul Gavrikov", "Janis Keuper"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0b2"}, "filepath": "data/2403.16385.png", "tags": [], "_media_type": "image", "_rand": 0.9992101622623871, "arXiv_link": "https://arxiv.org/abs/2403.16385", "other_link": "", "title": "Synthesize Step-by-Step: Tools, Templates and LLMs as Data Generators for Reasoning-Based Chart VQA", "abstract": "Understanding data visualizations like charts and plots requires reasoning\nabout both visual elements and numerics. Although strong in extractive\nquestions, current chart visual question answering (chart VQA) models suffer on\ncomplex reasoning questions. In this work, we address the lack of reasoning\nability by data augmentation. We leverage Large Language Models (LLMs), which\nhave shown to have strong reasoning ability, as an automatic data annotator\nthat generates question-answer annotations for chart images. The key innovation\nin our method lies in the Synthesize Step-by-Step strategy: our LLM-based data\ngenerator learns to decompose the complex question into step-by-step\nsub-questions (rationales), which are then used to derive the final answer\nusing external tools, i.e. Python. This step-wise generation procedure is\ntrained on synthetic data generated using a template-based QA generation\npipeline. Experimental results highlight the significance of the proposed\nstep-by-step generation. By training with the LLM-augmented data (LAMENDA), we\nsignificantly enhance the chart VQA models, achieving the state-of-the-art\naccuracy on the ChartQA and PlotQA datasets. In particular, our approach\nimproves the accuracy of the previous state-of-the-art approach from 38% to 54%\non the human-written questions in the ChartQA dataset, which needs strong\nreasoning. 
We hope our work underscores the potential of synthetic data and\nencourages further exploration of data augmentation using LLMs for\nreasoning-heavy tasks.", "keywords": [], "authors_list": ["Zhuowan Li", "Bhavan Jasani", "Peng Tang", "Shabnam Ghadar"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0b3"}, "filepath": "data/2312.06713.png", "tags": [], "_media_type": "image", "_rand": 0.9997601236017256, "arXiv_link": "https://arxiv.org/abs/2312.06713", "other_link": "", "title": "TeTriRF: Temporal Tri-Plane Radiance Fields for Efficient Free-Viewpoint Video", "abstract": "Neural Radiance Fields (NeRF) revolutionize the realm of visual media by\nproviding photorealistic Free-Viewpoint Video (FVV) experiences, offering\nviewers unparalleled immersion and interactivity. However, the technology's\nsignificant storage requirements and the computational complexity involved in\ngeneration and rendering currently limit its broader application. To close this\ngap, this paper presents Temporal Tri-Plane Radiance Fields (TeTriRF), a novel\ntechnology that significantly reduces the storage size for Free-Viewpoint Video\n(FVV) while maintaining low-cost generation and rendering. TeTriRF introduces a\nhybrid representation with tri-planes and voxel grids to support scaling up to\nlong-duration sequences and scenes with complex motions or rapid changes. We\npropose a group training scheme tailored to achieving high training efficiency\nand yielding temporally consistent, low-entropy scene representations.\nLeveraging these properties of the representations, we introduce a compression\npipeline with off-the-shelf video codecs, achieving an order of magnitude less\nstorage size compared to the state-of-the-art. Our experiments demonstrate that\nTeTriRF can achieve competitive quality with a higher compression rate.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Minye Wu", "Zehao Wang", "Georgios Kouros", "Tinne Tuytelaars"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0b4"}, "filepath": "data/2306.09310.png", "tags": [], "_media_type": "image", "_rand": 0.999368488339971, "arXiv_link": "https://arxiv.org/abs/2306.09310", "other_link": "https://infinigen.org", "title": "Infinigen Indoors: Photorealistic Indoor Scenes using Procedural Generation", "abstract": "We introduce Infinigen, a procedural generator of photorealistic 3D scenes of\nthe natural world. Infinigen is entirely procedural: every asset, from shape to\ntexture, is generated from scratch via randomized mathematical rules, using no\nexternal source and allowing infinite variation and composition. Infinigen\noffers broad coverage of objects and scenes in the natural world including\nplants, animals, terrains, and natural phenomena such as fire, cloud, rain, and\nsnow. Infinigen can be used to generate unlimited, diverse training data for a\nwide range of computer vision tasks including object detection, semantic\nsegmentation, optical flow, and 3D reconstruction. We expect Infinigen to be a\nuseful resource for computer vision research and beyond. 
Please visit\nhttps://infinigen.org for videos, code and pre-generated data.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Alexander Raistrick", "Lingjie Mei", "Karhan Kayan", "David Yan", "Yiming Zuo", "Beining Han", "Hongyu Wen", "Meenal Parakh", "Stamatis Alexandropoulos", "Lahav Lipson", "Zeyu Ma", "Jia Deng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0b5"}, "filepath": "data/2312.09238.png", "tags": [], "_media_type": "image", "_rand": 0.9993861473659444, "arXiv_link": "https://arxiv.org/abs/2312.09238", "other_link": "", "title": "Auto MC-Reward: Automated Dense Reward Design with Large Language Models for Minecraft", "abstract": "Many reinforcement learning environments (e.g., Minecraft) provide only\nsparse rewards that indicate task completion or failure with binary values. The\nchallenge in exploration efficiency in such environments makes it difficult for\nreinforcement-learning-based agents to learn complex tasks. To address this,\nthis paper introduces an advanced learning system, named Auto MC-Reward, that\nleverages Large Language Models (LLMs) to automatically design dense reward\nfunctions, thereby enhancing the learning efficiency. Auto MC-Reward consists\nof three important components: Reward Designer, Reward Critic, and Trajectory\nAnalyzer. Given the environment information and task descriptions, the Reward\nDesigner first design the reward function by coding an executable Python\nfunction with predefined observation inputs. Then, our Reward Critic will be\nresponsible for verifying the code, checking whether the code is\nself-consistent and free of syntax and semantic errors. Further, the Trajectory\nAnalyzer summarizes possible failure causes and provides refinement suggestions\naccording to collected trajectories. In the next round, Reward Designer will\nfurther refine and iterate the dense reward function based on feedback.\nExperiments demonstrate a significant improvement in the success rate and\nlearning efficiency of our agents in complex tasks in Minecraft, such as\nobtaining diamond with the efficient ability to avoid lava, and efficiently\nexplore trees and animals that are sparse in the plains biome.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Hao Li", "Xue Yang", "Zhaokai Wang", "Xizhou Zhu", "Jie Zhou", "Yu Qiao", "Xiaogang Wang", "Hongsheng Li", "Lewei Lu", "Jifeng Dai"], "category_name": "Artificial Intelligence", "all_categories": ["Artificial Intelligence", "Computation and Language", "Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0b6"}, "filepath": "data/2402.04476.png", "tags": [], "_media_type": "image", "_rand": 0.9996447193820305, "arXiv_link": "https://arxiv.org/abs/2402.04476", "other_link": "", "title": "Dual-View Visual Contextualization for Web Navigation", "abstract": "Automatic web navigation aims to build a web agent that can follow language\ninstructions to execute complex and diverse tasks on real-world websites.\nExisting work primarily takes HTML documents as input, which define the\ncontents and action spaces (i.e., actionable elements and operations) of\nwebpages. 
Nevertheless, HTML documents may not provide a clear task-related\ncontext for each element, making it hard to select the right (sequence of)\nactions. In this paper, we propose to contextualize HTML elements through their\n\"dual views\" in webpage screenshots: each HTML element has its corresponding\nbounding box and visual content in the screenshot. We build upon the insight --\nweb developers tend to arrange task-related elements nearby on webpages to\nenhance user experiences -- and propose to contextualize each element with its\nneighbor elements, using both textual and visual features. The resulting\nrepresentations of HTML elements are more informative for the agent to take\naction. We validate our method on the recently released Mind2Web dataset, which\nfeatures diverse navigation domains and tasks on real-world websites. Our\nmethod consistently outperforms the baseline in all the scenarios, including\ncross-task, cross-website, and cross-domain ones.", "keywords": [], "authors_list": ["Jihyung Kil", "Chan Hee Song", "Boyuan Zheng", "Xiang Deng", "Yu Su", "Wei-Lun Chao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0b7"}, "filepath": "data/2403.19278v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994368558027307, "arXiv_link": "https://arxiv.org/abs/2403.19278v1", "other_link": "", "title": "CAT: Exploiting Inter-Class Dynamics for Domain Adaptive Object Detection", "abstract": "Domain adaptive object detection aims to adapt detection models to domains\nwhere annotated data is unavailable. Existing methods have been proposed to\naddress the domain gap using the semi-supervised student-teacher framework.\nHowever, a fundamental issue arises from the class imbalance in the labelled\ntraining set, which can result in inaccurate pseudo-labels. The relationship\nbetween classes, especially where one class is a majority and the other\nminority, has a large impact on class bias. We propose Class-Aware Teacher\n(CAT) to address the class bias issue in the domain adaptation setting. In our\nwork, we approximate the class relationships with our Inter-Class Relation\nmodule (ICRm) and exploit it to reduce the bias within the model. In this way,\nwe are able to apply augmentations to highly related classes, both inter- and\nintra-domain, to boost the performance of minority classes while having minimal\nimpact on majority classes. We further reduce the bias by implementing a\nclass-relation weight to our classification loss. Experiments conducted on\nvarious datasets and ablation studies show that our method is able to address\nthe class bias in the domain adaptation setting. On the Cityscapes to Foggy\nCityscapes dataset, we attained a 52.5 mAP, a substantial improvement over the\n51.2 mAP achieved by the state-of-the-art method.", "keywords": [], "authors_list": ["Mikhail Kennerley", "Jian-Gang Wang", "Bharadwaj Veeravalli", "Robby T. 
Tan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0b8"}, "filepath": "data/2402.17062.png", "tags": [], "_media_type": "image", "_rand": 0.9998044287783525, "arXiv_link": "https://arxiv.org/abs/2402.17062", "other_link": "https://github.com/amathislab/HOISDF", "title": "HOISDF: Constraining 3D Hand Object Pose Estimation with Global Signed Distance Fields", "abstract": "Human hands are highly articulated and versatile at handling objects. Jointly\nestimating the 3D poses of a hand and the object it manipulates from a\nmonocular camera is challenging due to frequent occlusions. Thus, existing\nmethods often rely on intermediate 3D shape representations to increase\nperformance. These representations are typically explicit, such as 3D point\nclouds or meshes, and thus provide information in the direct surroundings of\nthe intermediate hand pose estimate. To address this, we introduce HOISDF, a\nSigned Distance Field (SDF) guided hand-object pose estimation network, which\njointly exploits hand and object SDFs to provide a global, implicit\nrepresentation over the complete reconstruction volume. Specifically, the role\nof the SDFs is threefold: equip the visual encoder with implicit shape\ninformation, help to encode hand-object interactions, and guide the hand and\nobject pose regression via SDF-based sampling and by augmenting the feature\nrepresentations. We show that HOISDF achieves state-of-the-art results on\nhand-object pose estimation benchmarks (DexYCB and HO3Dv2). Code is available\nat https://github.com/amathislab/HOISDF", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Haozhe Qi", "Chen Zhao", "Mathieu Salzmann", "Alexander Mathis"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0b9"}, "filepath": "data/2312.11782.png", "tags": [], "_media_type": "image", "_rand": 0.9991342439847389, "arXiv_link": "https://arxiv.org/abs/2312.11782", "other_link": "", "title": "Learning Object State Changes in Videos: An Open-World Perspective", "abstract": "Object State Changes (OSCs) are pivotal for video understanding. While humans\ncan effortlessly generalize OSC understanding from familiar to unknown objects,\ncurrent approaches are confined to a closed vocabulary. Addressing this gap, we\nintroduce a novel open-world formulation for the video OSC problem. The goal is\nto temporally localize the three stages of an OSC -- the object's initial\nstate, its transitioning state, and its end state -- whether or not the object\nhas been observed during training. Towards this end, we develop VidOSC, a\nholistic learning approach that: (1) leverages text and vision-language models\nfor supervisory signals to obviate manually labeling OSC training data, and (2)\nabstracts fine-grained shared state representations from objects to enhance\ngeneralization. Furthermore, we present HowToChange, the first open-world\nbenchmark for video OSC localization, which offers an order of magnitude\nincrease in the label space and annotation volume compared to the best existing\nbenchmark. 
Experimental results demonstrate the efficacy of our approach, in\nboth traditional closed-world and open-world scenarios.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Zihui Xue", "Kumar Ashutosh", "Kristen Grauman"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0ba"}, "filepath": "data/2405.11867.png", "tags": [], "_media_type": "image", "_rand": 0.9999828482670994, "arXiv_link": "https://arxiv.org/abs/2405.11867", "other_link": "https://github.com/JinhwiPark/DepthPrompting", "title": "Depth Prompting for Sensor-Agnostic Depth Estimation", "abstract": "Dense depth maps have been used as a key element of visual perception tasks.\nThere have been tremendous efforts to enhance the depth quality, ranging from\noptimization-based to learning-based methods. Despite the remarkable progress\nfor a long time, their applicability in the real world is limited due to\nsystematic measurement biases such as density, sensing pattern, and scan range.\nIt is well-known that the biases make it difficult for these methods to achieve\ntheir generalization. We observe that learning a joint representation for input\nmodalities (e.g., images and depth), which most recent methods adopt, is\nsensitive to the biases. In this work, we disentangle those modalities to\nmitigate the biases with prompt engineering. For this, we design a novel depth\nprompt module to allow the desirable feature representation according to new\ndepth distributions from either sensor types or scene configurations. Our depth\nprompt can be embedded into foundation models for monocular depth estimation.\nThrough this embedding process, our method helps the pretrained model to be\nfree from restraint of depth scan range and to provide absolute scale depth\nmaps. We demonstrate the effectiveness of our method through extensive\nevaluations. Source code is publicly available at\nhttps://github.com/JinhwiPark/DepthPrompting .", "keywords": ["Deep learning architectures and techniques", "Large multimodal models and prompting techniques"], "authors_list": ["Jin-Hwi Park", "Chanhwi Jeong", "Junoh Lee", "Hae-Gon Jeon"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0bb"}, "filepath": "data/2307.13339.png", "tags": [], "_media_type": "image", "_rand": 0.9998200836994594, "arXiv_link": "https://arxiv.org/abs/2307.13339", "other_link": "", "title": "PromptCoT: Align Prompt Distribution via Adapted Chain-of-Thought", "abstract": "Chain-of-thought (CoT) prompting has been shown to empirically improve the\naccuracy of large language models (LLMs) on various question answering tasks.\nWhile understanding why CoT prompting is effective is crucial to ensuring that\nthis phenomenon is a consequence of desired model behavior, little work has\naddressed this; nonetheless, such an understanding is a critical prerequisite\nfor responsible model deployment. We address this question by leveraging\ngradient-based feature attribution methods which produce saliency scores that\ncapture the influence of input tokens on model output. 
Specifically, we probe\nseveral open-source LLMs to investigate whether CoT prompting affects the\nrelative importances they assign to particular input tokens. Our results\nindicate that while CoT prompting does not increase the magnitude of saliency\nscores attributed to semantically relevant tokens in the prompt compared to\nstandard few-shot prompting, it increases the robustness of saliency scores to\nquestion perturbations and variations in model output.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Junyi Yao", "Yijiang Liu", "Zhen Dong", "Mingfei Guo", "Helan Hu", "Kurt Keutzer", "Li Du", "Daquan Zhou", "Shanghang Zhang"], "category_name": "Computation and Language", "all_categories": ["Computation and Language", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0bc"}, "filepath": "data/2404.00563.png", "tags": [], "_media_type": "image", "_rand": 0.99974830183044, "arXiv_link": "https://arxiv.org/abs/2404.00563", "other_link": "https://github.com/VincenDen/IID.", "title": "Exploiting Inter-sample and Inter-feature Relations in Dataset Distillation", "abstract": "Dataset distillation has emerged as a promising approach in deep learning,\nenabling efficient training with small synthetic datasets derived from larger\nreal ones. Particularly, distribution matching-based distillation methods\nattract attention thanks to its effectiveness and low computational cost.\nHowever, these methods face two primary limitations: the dispersed feature\ndistribution within the same class in synthetic datasets, reducing class\ndiscrimination, and an exclusive focus on mean feature consistency, lacking\nprecision and comprehensiveness. To address these challenges, we introduce two\nnovel constraints: a class centralization constraint and a covariance matching\nconstraint. The class centralization constraint aims to enhance class\ndiscrimination by more closely clustering samples within classes. The\ncovariance matching constraint seeks to achieve more accurate feature\ndistribution matching between real and synthetic datasets through local feature\ncovariance matrices, particularly beneficial when sample sizes are much smaller\nthan the number of features. Experiments demonstrate notable improvements with\nthese constraints, yielding performance boosts of up to 6.6% on CIFAR10, 2.9%\non SVHN, 2.5% on CIFAR100, and 2.5% on TinyImageNet, compared to the\nstate-of-the-art relevant methods. In addition, our method maintains robust\nperformance in cross-architecture settings, with a maximum performance drop of\n1.7% on four architectures. Code is available at\nhttps://github.com/VincenDen/IID.", "keywords": [], "authors_list": ["Wenxiao Deng", "Wenbin Li", "Tianyu Ding", "Lei Wang", "Hongguang Zhang", "Kuihua Huang", "Jing Huo", "Yang Gao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0bd"}, "filepath": "data/2308.10305.png", "tags": [], "_media_type": "image", "_rand": 0.9990646342630874, "arXiv_link": "https://arxiv.org/abs/2308.10305", "other_link": "https://github.com/kasvii/PMCE.", "title": "MeshPose: Unifying DensePose and 3D Body Mesh reconstruction", "abstract": "Despite significant progress in single image-based 3D human mesh recovery,\naccurately and smoothly recovering 3D human motion from a video remains\nchallenging. 
Existing video-based methods generally recover human mesh by\nestimating the complex pose and shape parameters from coupled image features,\nwhose high complexity and low representation ability often result in\ninconsistent pose motion and limited shape patterns. To alleviate this issue,\nwe introduce 3D pose as the intermediary and propose a Pose and Mesh\nCo-Evolution network (PMCE) that decouples this task into two parts: 1)\nvideo-based 3D human pose estimation and 2) mesh vertices regression from the\nestimated 3D pose and temporal image feature. Specifically, we propose a\ntwo-stream encoder that estimates mid-frame 3D pose and extracts a temporal\nimage feature from the input image sequence. In addition, we design a\nco-evolution decoder that performs pose and mesh interactions with the\nimage-guided Adaptive Layer Normalization (AdaLN) to make pose and mesh fit the\nhuman body shape. Extensive experiments demonstrate that the proposed PMCE\noutperforms previous state-of-the-art methods in terms of both per-frame\naccuracy and temporal consistency on three benchmark datasets: 3DPW, Human3.6M,\nand MPI-INF-3DHP. Our code is available at https://github.com/kasvii/PMCE.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Eric-Tuan Le", "Antonios Kakolyris", "Petros Koutras", "Himmy Tam", "Efstratios Skordos", "George Papandreou", "Riza Alp Guler", "Iasonas Kokkinos"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0be"}, "filepath": "data/2312.03160.png", "tags": [], "_media_type": "image", "_rand": 0.9993515715325364, "arXiv_link": "https://arxiv.org/abs/2312.03160", "other_link": "", "title": "HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces", "abstract": "Neural radiance fields provide state-of-the-art view synthesis quality but\ntend to be slow to render. One reason is that they make use of volume\nrendering, thus requiring many samples (and model queries) per ray at render\ntime. Although this representation is flexible and easy to optimize, most\nreal-world objects can be modeled more efficiently with surfaces instead of\nvolumes, requiring far fewer samples per ray. This observation has spurred\nconsiderable progress in surface representations such as signed distance\nfunctions, but these may struggle to model semi-opaque and thin structures. We\npropose a method, HybridNeRF, that leverages the strengths of both\nrepresentations by rendering most objects as surfaces while modeling the\n(typically) small fraction of challenging regions volumetrically. We evaluate\nHybridNeRF against the challenging Eyeful Tower dataset along with other\ncommonly used view synthesis datasets. 
When comparing to state-of-the-art\nbaselines, including recent rasterization-based approaches, we improve error\nrates by 15-30% while achieving real-time framerates (at least 36 FPS) for\nvirtual-reality resolutions (2Kx2K).", "keywords": ["Efficient and scalable vision", "Deep learning architectures and techniques"], "authors_list": ["Haithem Turki", "Vasu Agrawal", "Samuel Rota Bul\u00f2", "Lorenzo Porzi", "Peter Kontschieder", "Deva Ramanan", "Michael Zollhoefer", "Christian Richardt"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0bf"}, "filepath": "data/2404.00857.png", "tags": [], "_media_type": "image", "_rand": 0.9994085185171999, "arXiv_link": "https://arxiv.org/abs/2404.00857", "other_link": "", "title": "LTA-PCS: Learnable Task-Agnostic Point Cloud Sampling", "abstract": "Point cloud classification refers to the process of assigning semantic labels\nor categories to individual points within a point cloud data structure. Recent\nworks have explored the extension of pre-trained CLIP to 3D recognition. In\nthis direction, CLIP-based point cloud models like PointCLIP, CLIP2Point have\nbecome state-of-the-art methods in the few-shot setup. Although these methods\nshow promising performance for some classes like airplanes, desks, guitars,\netc, the performance for some classes like the cup, flower pot, sink,\nnightstand, etc is still far from satisfactory. This is due to the fact that\nthe adapter of CLIP-based models is trained using randomly sampled N-way K-shot\ndata in the standard supervised learning setup. In this paper, we propose a\nnovel meta-episodic learning framework for CLIP-based point cloud\nclassification, addressing the challenges of limited training examples and\nsampling unknown classes. Additionally, we introduce dynamic task sampling\nwithin the episode based on performance memory. This sampling strategy\neffectively addresses the challenge of sampling unknown classes, ensuring that\nthe model learns from a diverse range of classes and promotes the exploration\nof underrepresented categories. By dynamically updating the performance memory,\nwe adaptively prioritize the sampling of classes based on their performance,\nenhancing the model's ability to handle challenging and real-world scenarios.\nExperiments show an average performance gain of 3-6\\% on ModelNet40 and\nScanobjectNN datasets in a few-shot setup.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Jiaheng Liu", "Jianhao Li", "Kaisiyuan Wang", "Hongcheng Guo", "Jian Yang", "Junran Peng", "Ke Xu", "Xianglong Liu", "Jinyang Guo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0c0"}, "filepath": "data/2312.08985v3.png", "tags": [], "_media_type": "image", "_rand": 0.9999873342663561, "arXiv_link": "https://arxiv.org/abs/2312.08985v3", "other_link": "https://tr3e.github.io/omg-page.", "title": "OMG: Towards Open-vocabulary Motion Generation via Mixture of Controllers", "abstract": "We have recently seen tremendous progress in realistic text-to-motion\ngeneration. Yet, the existing methods often fail or produce implausible motions\nwith unseen text inputs, which limits the applications. 
In this paper, we\npresent OMG, a novel framework, which enables compelling motion generation from\nzero-shot open-vocabulary text prompts. Our key idea is to carefully tailor the\npretrain-then-finetune paradigm into the text-to-motion generation. At the\npre-training stage, our model improves the generation ability by learning the\nrich out-of-domain inherent motion traits. To this end, we scale up a large\nunconditional diffusion model up to 1B parameters, so as to utilize the massive\nunlabeled motion data up to over 20M motion instances. At the subsequent\nfine-tuning stage, we introduce motion ControlNet, which incorporates text\nprompts as conditioning information, through a trainable copy of the\npre-trained model and the proposed novel Mixture-of-Controllers (MoC) block.\nMoC block adaptively recognizes various ranges of the sub-motions with a\ncross-attention mechanism and processes them separately with the\ntext-token-specific experts. Such a design effectively aligns the CLIP token\nembeddings of text prompts to various ranges of compact and expressive motion\nfeatures. Extensive experiments demonstrate that our OMG achieves significant\nimprovements over the state-of-the-art methods on zero-shot text-to-motion\ngeneration. Project page: https://tr3e.github.io/omg-page.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Han Liang", "Jiacheng Bao", "Ruichi Zhang", "Sihan Ren", "Yuecheng Xu", "Sibei Yang", "Xin Chen", "Jingyi Yu", "Lan Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0c1"}, "filepath": "data/2308.12532v6.png", "tags": [], "_media_type": "image", "_rand": 0.9999388775693867, "arXiv_link": "https://arxiv.org/abs/2308.12532v6", "other_link": "", "title": "FedSOL: Stabilized Orthogonal Learning with Proximal Restrictions in Federated Learning", "abstract": "Federated Learning (FL) aggregates locally trained models from individual\nclients to construct a global model. While FL enables learning a model with\ndata privacy, it often suffers from significant performance degradation when\nclients have heterogeneous data distributions. This data heterogeneity causes\nthe model to forget the global knowledge acquired from previously sampled\nclients after being trained on local datasets. Although the introduction of\nproximal objectives in local updates helps to preserve global knowledge, it can\nalso hinder local learning by interfering with local objectives. To address\nthis problem, we propose a novel method, Federated Stabilized Orthogonal\nLearning (FedSOL), which adopts an orthogonal learning strategy to balance the\ntwo conflicting objectives. FedSOL is designed to identify gradients of local\nobjectives that are inherently orthogonal to directions affecting the proximal\nobjective. Specifically, FedSOL targets parameter regions where learning on the\nlocal objective is minimally influenced by proximal weight perturbations. 
Our\nexperiments demonstrate that FedSOL consistently achieves state-of-the-art\nperformance across various scenarios.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Gihun Lee", "Minchan Jeong", "SangMook Kim", "Jaehoon Oh", "Se-Young Yun"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0c2"}, "filepath": "data/2405.00340.png", "tags": [], "_media_type": "image", "_rand": 0.9997078314957266, "arXiv_link": "https://arxiv.org/abs/2405.00340", "other_link": "", "title": "NC-SDF: Enhancing Indoor Scene Reconstruction Using Neural SDFs with View-Dependent Normal Compensation", "abstract": "State-of-the-art neural implicit surface representations have achieved\nimpressive results in indoor scene reconstruction by incorporating monocular\ngeometric priors as additional supervision. However, we have observed that\nmulti-view inconsistency between such priors poses a challenge for high-quality\nreconstructions. In response, we present NC-SDF, a neural signed distance field\n(SDF) 3D reconstruction framework with view-dependent normal compensation (NC).\nSpecifically, we integrate view-dependent biases in monocular normal priors\ninto the neural implicit representation of the scene. By adaptively learning\nand correcting the biases, our NC-SDF effectively mitigates the adverse impact\nof inconsistent supervision, enhancing both the global consistency and local\ndetails in the reconstructions. To further refine the details, we introduce an\ninformative pixel sampling strategy to pay more attention to intricate geometry\nwith higher information content. Additionally, we design a hybrid geometry\nmodeling approach to improve the neural implicit representation. Experiments on\nsynthetic and real-world datasets demonstrate that NC-SDF outperforms existing\napproaches in terms of reconstruction quality.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Ziyi Chen", "Xiaolong Wu", "Yu Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0c3"}, "filepath": "data/2404.07603.png", "tags": [], "_media_type": "image", "_rand": 0.9991777180613374, "arXiv_link": "https://arxiv.org/abs/2404.07603", "other_link": "", "title": "GLID: Pre-training a Generalist Encoder-Decoder Vision Model", "abstract": "This paper proposes a GeneraLIst encoder-Decoder (GLID) pre-training method\nfor better handling various downstream computer vision tasks. While\nself-supervised pre-training approaches, e.g., Masked Autoencoder, have shown\nsuccess in transfer learning, task-specific sub-architectures are still\nrequired to be appended for different downstream tasks, which cannot enjoy the\nbenefits of large-scale pre-training. GLID overcomes this challenge by allowing\nthe pre-trained generalist encoder-decoder to be fine-tuned on various vision\ntasks with minimal task-specific architecture modifications. In the GLID\ntraining scheme, pre-training pretext task and other downstream tasks are\nmodeled as \"query-to-answer\" problems, including the pre-training pretext task\nand other downstream tasks. We pre-train a task-agnostic encoder-decoder with\nquery-mask pairs. 
During fine-tuning, GLID maintains the pre-trained\nencoder-decoder and queries, only replacing the topmost linear transformation\nlayer with task-specific linear heads. This minimizes the pretrain-finetune\narchitecture inconsistency and enables the pre-trained model to better adapt to\ndownstream tasks. GLID achieves competitive performance on various vision\ntasks, including object detection, image segmentation, pose estimation, and\ndepth estimation, outperforming or matching specialist models such as\nMask2Former, DETR, ViTPose, and BinsFormer.", "keywords": [], "authors_list": ["Jihao Liu", "Jinliang Zheng", "Yu Liu", "Hongsheng Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0c4"}, "filepath": "data/2404.02889.png", "tags": [], "_media_type": "image", "_rand": 0.9993022271326184, "arXiv_link": "https://arxiv.org/abs/2404.02889", "other_link": "", "title": "Steganographic Passport: An Owner and User Verifiable Credential for Deep Model IP Protection Without Retraining", "abstract": "Ensuring the legal usage of deep models is crucial to promoting trustable,\naccountable, and responsible artificial intelligence innovation. Current\npassport-based methods that obfuscate model functionality for license-to-use\nand ownership verifications suffer from capacity and quality constraints, as\nthey require retraining the owner model for new users. They are also vulnerable\nto advanced Expanded Residual Block ambiguity attacks. We propose\nSteganographic Passport, which uses an invertible steganographic network to\ndecouple license-to-use from ownership verification by hiding the user's\nidentity images into the owner-side passport and recovering them from their\nrespective user-side passports. An irreversible and collision-resistant hash\nfunction is used to avoid exposing the owner-side passport from the derived\nuser-side passports and increase the uniqueness of the model signature. To\nsafeguard both the passport and model's weights against advanced ambiguity\nattacks, an activation-level obfuscation is proposed for the verification\nbranch of the owner's model. By jointly training the verification and\ndeployment branches, their weights become tightly coupled. The proposed method\nsupports agile licensing of deep models by providing a strong ownership proof\nand license accountability without requiring a separate model retraining for\nthe admission of every new user. 
Experiment results show that our\nSteganographic Passport outperforms other passport-based deep model protection\nmethods in robustness against various known attacks.", "keywords": ["Vision applications for social good and ethics"], "authors_list": ["Qi Cui", "Ruohan Meng", "Chaohui Xu", "Chip Hong Chang"], "category_name": "Cryptography and Security", "all_categories": ["Cryptography and Security", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0c5"}, "filepath": "data/2405.15217.png", "tags": [], "_media_type": "image", "_rand": 0.9992712623205842, "arXiv_link": "https://arxiv.org/abs/2405.15217", "other_link": "", "title": "NIVeL: Neural Implicit Vector Layers for Text-to-Vector Generation", "abstract": "The success of denoising diffusion models in representing rich data\ndistributions over 2D raster images has prompted research on extending them to\nother data representations, such as vector graphics. Unfortunately due to their\nvariable structure and scarcity of vector training data, directly applying\ndiffusion models on this domain remains a challenging problem. Using\nworkarounds like optimization via Score Distillation Sampling (SDS) is also\nfraught with difficulty, as vector representations are non trivial to directly\noptimize and tend to result in implausible geometries such as redundant or\nself-intersecting shapes. NIVeL addresses these challenges by reinterpreting\nthe problem on an alternative, intermediate domain which preserves the\ndesirable properties of vector graphics -- mainly sparsity of representation\nand resolution-independence. This alternative domain is based on neural\nimplicit fields expressed in a set of decomposable, editable layers. Based on\nour experiments, NIVeL produces text-to-vector graphics results of\nsignificantly better quality than the state-of-the-art.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Vikas Thamizharasan", "Difan Liu", "Matthew Fisher", "Nanxuan Zhao", "Evangelos Kalogerakis", "Michal Luk\u00e1\u010d"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0c6"}, "filepath": "data/2312.05291.png", "tags": [], "_media_type": "image", "_rand": 0.9997820317468011, "arXiv_link": "https://arxiv.org/abs/2312.05291", "other_link": "https://glitchbench.github.io/", "title": "GlitchBench: Can large multimodal models detect video game glitches?", "abstract": "Large multimodal models (LMMs) have evolved from large language models (LLMs)\nto integrate multiple input modalities, such as visual inputs. This integration\naugments the capacity of LLMs for tasks requiring visual comprehension and\nreasoning. However, the extent and limitations of their enhanced abilities are\nnot fully understood, especially when it comes to real-world tasks. To address\nthis gap, we introduce GlitchBench, a novel benchmark derived from video game\nquality assurance tasks, to test and evaluate the reasoning capabilities of\nLMMs. Our benchmark is curated from a variety of unusual and glitched scenarios\nfrom video games and aims to challenge both the visual and linguistic reasoning\npowers of LMMs in detecting and interpreting out-of-the-ordinary events. We\nevaluate multiple state-of-the-art LMMs, and we show that GlitchBench presents\na new challenge for these models. 
Code and data are available at:\nhttps://glitchbench.github.io/", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Mohammad Reza Taesiri", "Tianjun Feng", "Cor-Paul Bezemer", "Anh Nguyen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0c7"}, "filepath": "data/2403.07392.png", "tags": [], "_media_type": "image", "_rand": 0.9994296676009731, "arXiv_link": "https://arxiv.org/abs/2403.07392", "other_link": "https://github.com/Traffic-X/ViT-CoMer.", "title": "ViT-CoMer: Vision Transformer with Convolutional Multi-scale Feature Interaction for Dense Predictions", "abstract": "Although Vision Transformer (ViT) has achieved significant success in\ncomputer vision, it does not perform well in dense prediction tasks due to the\nlack of inner-patch information interaction and the limited diversity of\nfeature scale. Most existing studies are devoted to designing vision-specific\ntransformers to solve the above problems, which introduce additional\npre-training costs. Therefore, we present a plain, pre-training-free, and\nfeature-enhanced ViT backbone with Convolutional Multi-scale feature\ninteraction, named ViT-CoMer, which facilitates bidirectional interaction\nbetween CNN and transformer. Compared to the state-of-the-art, ViT-CoMer has\nthe following advantages: (1) We inject spatial pyramid multi-receptive field\nconvolutional features into the ViT architecture, which effectively alleviates\nthe problems of limited local information interaction and single-feature\nrepresentation in ViT. (2) We propose a simple and efficient CNN-Transformer\nbidirectional fusion interaction module that performs multi-scale fusion across\nhierarchical features, which is beneficial for handling dense prediction tasks.\n(3) We evaluate the performance of ViT-CoMer across various dense prediction\ntasks, different frameworks, and multiple advanced pre-training. Notably, our\nViT-CoMer-L achieves 64.3% AP on COCO val2017 without extra training data, and\n62.1% mIoU on ADE20K val, both of which are comparable to state-of-the-art\nmethods. We hope ViT-CoMer can serve as a new backbone for dense prediction\ntasks to facilitate future research. The code will be released at\nhttps://github.com/Traffic-X/ViT-CoMer.", "keywords": [], "authors_list": ["Chunlong Xia", "Xinliang Wang", "Feng Lv", "Xin Hao", "Yifeng Shi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0c8"}, "filepath": "data/2404.02742.png", "tags": [], "_media_type": "image", "_rand": 0.9997379358715712, "arXiv_link": "https://arxiv.org/abs/2404.02742", "other_link": "https://github.com/ispc-lab/LiDAR4D.", "title": "LiDAR4D: Dynamic Neural Fields for Novel Space-time View LiDAR Synthesis", "abstract": "Although neural radiance fields (NeRFs) have achieved triumphs in image novel\nview synthesis (NVS), LiDAR NVS remains largely unexplored. Previous LiDAR NVS\nmethods employ a simple shift from image NVS methods while ignoring the dynamic\nnature and the large-scale reconstruction problem of LiDAR point clouds. In\nlight of this, we propose LiDAR4D, a differentiable LiDAR-only framework for\nnovel space-time LiDAR view synthesis. 
In consideration of the sparsity and\nlarge-scale characteristics, we design a 4D hybrid representation combined with\nmulti-planar and grid features to achieve effective reconstruction in a\ncoarse-to-fine manner. Furthermore, we introduce geometric constraints derived\nfrom point clouds to improve temporal consistency. For the realistic synthesis\nof LiDAR point clouds, we incorporate the global optimization of ray-drop\nprobability to preserve cross-region patterns. Extensive experiments on\nKITTI-360 and NuScenes datasets demonstrate the superiority of our method in\naccomplishing geometry-aware and time-consistent dynamic reconstruction. Codes\nare available at https://github.com/ispc-lab/LiDAR4D.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Zehan Zheng", "Fan Lu", "Weiyi Xue", "Guang Chen", "Changjun Jiang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0c9"}, "filepath": "data/2404.01351.png", "tags": [], "_media_type": "image", "_rand": 0.9995687395289038, "arXiv_link": "https://arxiv.org/abs/2404.01351", "other_link": "https://github.com/taeckyung/AETTA.", "title": "AETTA: Label-Free Accuracy Estimation for Test-Time Adaptation", "abstract": "Test-time adaptation (TTA) has emerged as a viable solution to adapt\npre-trained models to domain shifts using unlabeled test data. However, TTA\nfaces challenges of adaptation failures due to its reliance on blind adaptation\nto unknown test samples in dynamic scenarios. Traditional methods for\nout-of-distribution performance estimation are limited by unrealistic\nassumptions in the TTA context, such as requiring labeled data or re-training\nmodels. To address this issue, we propose AETTA, a label-free accuracy\nestimation algorithm for TTA. We propose the prediction disagreement as the\naccuracy estimate, calculated by comparing the target model prediction with\ndropout inferences. We then improve the prediction disagreement to extend the\napplicability of AETTA under adaptation failures. Our extensive evaluation with\nfour baselines and six TTA methods demonstrates that AETTA shows an average of\n19.8%p more accurate estimation compared with the baselines. We further\ndemonstrate the effectiveness of accuracy estimation with a model recovery case\nstudy, showcasing the practicality of our model recovery based on accuracy\nestimation. The source code is available at https://github.com/taeckyung/AETTA.", "keywords": [], "authors_list": ["Taeckyung Lee", "Sorn Chottananurak", "Taesik Gong", "Sung-Ju Lee"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0ca"}, "filepath": "data/2312.08912.png", "tags": [], "_media_type": "image", "_rand": 0.9990471771294035, "arXiv_link": "https://arxiv.org/abs/2312.08912", "other_link": "", "title": "Adversarial Distillation Based on Slack Matching and Attribution Region Alignment", "abstract": "Dataset distillation is the technique of synthesizing smaller condensed\ndatasets from large original datasets while retaining necessary information to\npersist the effect. 
In this paper, we approach the dataset distillation problem\nfrom a novel perspective: we regard minimizing the prediction discrepancy on\nthe real data distribution between models, which are respectively trained on\nthe large original dataset and on the small distilled dataset, as a conduit for\ncondensing information from the raw data into the distilled version. An\nadversarial framework is proposed to solve the problem efficiently. In contrast\nto existing distillation methods involving nested optimization or long-range\ngradient unrolling, our approach hinges on single-level optimization. This\nensures the memory efficiency of our method and provides a flexible tradeoff\nbetween time and memory budgets, allowing us to distil ImageNet-1K using a\nminimum of only 6.5GB of GPU memory. Under the optimal tradeoff strategy, it\nrequires only 2.5$\\times$ less memory and 5$\\times$ less runtime compared to\nthe state-of-the-art. Empirically, our method can produce synthetic datasets\njust 10% the size of the original, yet achieve, on average, 94% of the test\naccuracy of models trained on the full original datasets including ImageNet-1K,\nsignificantly surpassing state-of-the-art. Additionally, extensive tests reveal\nthat our distilled datasets excel in cross-architecture generalization\ncapabilities.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Shenglin Yin", "Zhen Xiao", "Mingxuan Song", "Jieyi Long"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0cb"}, "filepath": "data/2312.05941.png", "tags": [], "_media_type": "image", "_rand": 0.9998584070470254, "arXiv_link": "https://arxiv.org/abs/2312.05941", "other_link": "", "title": "ASH: Animatable Gaussian Splats for Efficient and Photoreal Human Rendering", "abstract": "Real-time rendering of photorealistic and controllable human avatars stands\nas a cornerstone in Computer Vision and Graphics. While recent advances in\nneural implicit rendering have unlocked unprecedented photorealism for digital\navatars, real-time performance has mostly been demonstrated for static scenes\nonly. To address this, we propose ASH, an animatable Gaussian splatting\napproach for photorealistic rendering of dynamic humans in real-time. We\nparameterize the clothed human as animatable 3D Gaussians, which can be\nefficiently splatted into image space to generate the final rendering. However,\nnaively learning the Gaussian parameters in 3D space poses a severe challenge\nin terms of compute. Instead, we attach the Gaussians onto a deformable\ncharacter model, and learn their parameters in 2D texture space, which allows\nleveraging efficient 2D convolutional architectures that easily scale with the\nrequired number of Gaussians. 
We benchmark ASH with competing methods on\npose-controllable avatars, demonstrating that our method outperforms existing\nreal-time methods by a large margin and shows comparable or even better results\nthan offline methods.", "keywords": ["Biometrics and human analysis", "Vision systems and graphics integration"], "authors_list": ["Haokai Pang", "Heming Zhu", "Adam Kortylewski", "Christian Theobalt", "Marc Habermann"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0cc"}, "filepath": "data/2404.02686.png", "tags": [], "_media_type": "image", "_rand": 0.9994794240703275, "arXiv_link": "https://arxiv.org/abs/2404.02686", "other_link": "", "title": "Design2Cloth: 3D Cloth Generation from 2D Masks", "abstract": "In recent years, there has been a significant shift in the field of digital\navatar research, towards modeling, animating and reconstructing clothed human\nrepresentations, as a key step towards creating realistic avatars. However,\ncurrent 3D cloth generation methods are garment specific or trained completely\non synthetic data, hence lacking fine details and realism. In this work, we\nmake a step towards automatic realistic garment design and propose\nDesign2Cloth, a high fidelity 3D generative model trained on a real world\ndataset from more than 2000 subject scans. To provide vital contribution to the\nfashion industry, we developed a user-friendly adversarial model capable of\ngenerating diverse and detailed clothes simply by drawing a 2D cloth mask.\nUnder a series of both qualitative and quantitative experiments, we showcase\nthat Design2Cloth outperforms current state-of-the-art cloth generative models\nby a large margin. In addition to the generative properties of our network, we\nshowcase that the proposed method can be used to achieve high quality\nreconstructions from single in-the-wild images and 3D scans. Dataset, code and\npre-trained model will become publicly available.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Jiali Zheng", "Rolandos Alexandros Potamias", "Stefanos Zafeiriou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0cd"}, "filepath": "data/2311.12905.png", "tags": [], "_media_type": "image", "_rand": 0.9999113474939563, "arXiv_link": "https://arxiv.org/abs/2311.12905", "other_link": "", "title": "Revisiting the Domain Shift and Sample Uncertainty in Multi-source Active Domain Transfer", "abstract": "Active Domain Adaptation (ADA) aims to maximally boost model adaptation in a\nnew target domain by actively selecting a limited number of target data to\nannotate.This setting neglects the more practical scenario where training data\nare collected from multiple sources. This motivates us to target a new and\nchallenging setting of knowledge transfer that extends ADA from a single source\ndomain to multiple source domains, termed Multi-source Active Domain Adaptation\n(MADA). Not surprisingly, we find that most traditional ADA methods cannot work\ndirectly in such a setting, mainly due to the excessive domain gap introduced\nby all the source domains and thus their uncertainty-aware sample selection can\neasily become miscalibrated under the multi-domain shifts. 
Considering this, we\npropose a Dynamic integrated uncertainty valuation framework(Detective) that\ncomprehensively consider the domain shift between multi-source domains and\ntarget domain to detect the informative target samples. Specifically, the\nleverages a dynamic Domain Adaptation(DA) model that learns how to adapt the\nmodel's parameters to fit the union of multi-source domains. This enables an\napproximate single-source domain modeling by the dynamic model. We then\ncomprehensively measure both domain uncertainty and predictive uncertainty in\nthe target domain to detect informative target samples using evidential deep\nlearning, thereby mitigating uncertainty miscalibration. Furthermore, we\nintroduce a contextual diversity-aware calculator to enhance the diversity of\nthe selected samples. Experiments demonstrate that our solution outperforms\nexisting methods by a considerable margin on three domain adaptation\nbenchmarks.", "keywords": [], "authors_list": ["Wenqiao Zhang", "Zheqi Lv"], "category_name": "Artificial Intelligence", "all_categories": ["Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0ce"}, "filepath": "data/2402.18277.png", "tags": [], "_media_type": "image", "_rand": 0.999985365878936, "arXiv_link": "https://arxiv.org/abs/2402.18277", "other_link": "", "title": "Attentive Illumination Decomposition Model for Multi-Illuminant White Balancing", "abstract": "White balance (WB) algorithms in many commercial cameras assume single and\nuniform illumination, leading to undesirable results when multiple lighting\nsources with different chromaticities exist in the scene. Prior research on\nmulti-illuminant WB typically predicts illumination at the pixel level without\nfully grasping the scene's actual lighting conditions, including the number and\ncolor of light sources. This often results in unnatural outcomes lacking in\noverall consistency. To handle this problem, we present a deep white balancing\nmodel that leverages the slot attention, where each slot is in charge of\nrepresenting individual illuminants. This design enables the model to generate\nchromaticities and weight maps for individual illuminants, which are then fused\nto compose the final illumination map. Furthermore, we propose the\ncentroid-matching loss, which regulates the activation of each slot based on\nthe color range, thereby enhancing the model to separate illumination more\neffectively. Our method achieves the state-of-the-art performance on both\nsingle- and multi-illuminant WB benchmarks, and also offers additional\ninformation such as the number of illuminants in the scene and their\nchromaticity. 
This capability allows for illumination editing, an application\nnot feasible with prior methods.", "keywords": ["Low-level vision", "Scene analysis and understanding"], "authors_list": ["Dongyoung Kim", "Jinwoo Kim", "Junsang Yu", "Seon Joo Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0cf"}, "filepath": "data/2312.08870.png", "tags": [], "_media_type": "image", "_rand": 0.9992542047719016, "arXiv_link": "https://arxiv.org/abs/2312.08870", "other_link": "https://jinxxian.github.io/Vista-LLaMA.", "title": "Vista-LLaMA: Reliable Video Teller via Equal Distance to Visual Tokens", "abstract": "Recent advances in large video-language models have displayed promising\noutcomes in video comprehension. Current approaches straightforwardly convert\nvideo into language tokens and employ large language models for multi-modal\ntasks. However, this method often leads to the generation of irrelevant\ncontent, commonly known as \"hallucination\", as the length of the text increases\nand the impact of the video diminishes. To address this problem, we propose\nVista-LLaMA, a novel framework that maintains the consistent distance between\nall visual tokens and any language tokens, irrespective of the generated text\nlength. Vista-LLaMA omits relative position encoding when determining attention\nweights between visual and text tokens, retaining the position encoding for\ntext and text tokens. This amplifies the effect of visual tokens on text\ngeneration, especially when the relative distance is longer between visual and\ntext tokens. The proposed attention mechanism significantly reduces the chance\nof producing irrelevant text related to the video content. Furthermore, we\npresent a sequential visual projector that projects the current video frame\ninto tokens of language space with the assistance of the previous frame. This\napproach not only captures the temporal relationship within the video, but also\nallows less visual tokens to encompass the entire video. Our approach\nsignificantly outperforms various previous methods (e.g., Video-ChatGPT,\nMovieChat) on four challenging open-ended video question answering benchmarks.\nWe reach an accuracy of 60.7 on the zero-shot NExT-QA and 60.5 on the zero-shot\nMSRVTT-QA, setting a new state-of-the-art performance. This project is\navailable at https://jinxxian.github.io/Vista-LLaMA.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Image and video generation and manipulation"], "authors_list": ["Fan Ma", "Xiaojie Jin", "Heng Wang", "Yuchen Xian", "Jiashi Feng", "Yi Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0d0"}, "filepath": "data/2404.01464.png", "tags": [], "_media_type": "image", "_rand": 0.9993274176208272, "arXiv_link": "https://arxiv.org/abs/2404.01464", "other_link": "https://github.com/jungeun122333/UVI-Net.", "title": "Data-Efficient Unsupervised Interpolation Without Any Intermediate Frame for 4D Medical Images", "abstract": "4D medical images, which represent 3D images with temporal information, are\ncrucial in clinical practice for capturing dynamic changes and monitoring\nlong-term disease progression. 
However, acquiring 4D medical images poses\nchallenges due to factors such as radiation exposure and imaging duration,\nnecessitating a balance between achieving high temporal resolution and\nminimizing adverse effects. Given these circumstances, not only is data\nacquisition challenging, but increasing the frame rate for each dataset also\nproves difficult. To address this challenge, this paper proposes a simple yet\neffective Unsupervised Volumetric Interpolation framework, UVI-Net. This\nframework facilitates temporal interpolation without the need for any\nintermediate frames, distinguishing it from the majority of other existing\nunsupervised methods. Experiments on benchmark datasets demonstrate significant\nimprovements across diverse evaluation metrics compared to unsupervised and\nsupervised baselines. Remarkably, our approach achieves this superior\nperformance even when trained with a dataset as small as one, highlighting its\nexceptional robustness and efficiency in scenarios with sparse supervision.\nThis positions UVI-Net as a compelling alternative for 4D medical imaging,\nparticularly in settings where data availability is limited. The source code is\navailable at https://github.com/jungeun122333/UVI-Net.", "keywords": ["Efficient and scalable vision"], "authors_list": ["JungEun Kim", "Hangyul Yoon", "Geondo Park", "Kyungsu Kim", "Eunho Yang"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Artificial Intelligence", "Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0d1"}, "filepath": "data/2307.13929v3.png", "tags": [], "_media_type": "image", "_rand": 0.9992658968981601, "arXiv_link": "https://arxiv.org/abs/2307.13929v3", "other_link": "", "title": "ERMVP: Communication-Efficient and Collaboration-Robust Multi-Vehicle Perception in Challenging Environments", "abstract": "Multi-agent collaborative perception as a potential application for\nvehicle-to-everything communication could significantly improve the perception\nperformance of autonomous vehicles over single-agent perception. However,\nseveral challenges remain in achieving pragmatic information sharing in this\nemerging research. In this paper, we propose SCOPE, a novel collaborative\nperception framework that aggregates the spatio-temporal awareness\ncharacteristics across on-road agents in an end-to-end manner. Specifically,\nSCOPE has three distinct strengths: i) it considers effective semantic cues of\nthe temporal context to enhance current representations of the target agent;\nii) it aggregates perceptually critical spatial information from heterogeneous\nagents and overcomes localization errors via multi-scale feature interactions;\niii) it integrates multi-source representations of the target agent based on\ntheir complementary contributions by an adaptive fusion paradigm. To thoroughly\nevaluate SCOPE, we consider both real-world and simulated scenarios of\ncollaborative 3D object detection tasks on three datasets. 
Extensive\nexperiments demonstrate the superiority of our approach and the necessity of\nthe proposed components.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Jingyu Zhang", "Kun Yang", "Yilei Wang", "Hanqi Wang", "Peng Sun", "Liang Song"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0d2"}, "filepath": "data/2312.03050.png", "tags": [], "_media_type": "image", "_rand": 0.9990045033767124, "arXiv_link": "https://arxiv.org/abs/2312.03050", "other_link": "", "title": "HIG: Hierarchical Interlacement Graph Approach to Scene Graph Generation in Video Understanding", "abstract": "Visual interactivity understanding within visual scenes presents a\nsignificant challenge in computer vision. Existing methods focus on complex\ninteractivities while leveraging a simple relationship model. These methods,\nhowever, struggle with a diversity of appearance, situation, position,\ninteraction, and relation in videos. This limitation hinders the ability to\nfully comprehend the interplay within the complex visual dynamics of subjects.\nIn this paper, we delve into interactivities understanding within visual\ncontent by deriving scene graph representations from dense interactivities\namong humans and objects. To achieve this goal, we first present a new dataset\ncontaining Appearance-Situation-Position-Interaction-Relation predicates, named\nASPIRe, offering an extensive collection of videos marked by a wide range of\ninteractivities. Then, we propose a new approach named Hierarchical\nInterlacement Graph (HIG), which leverages a unified layer and graph within a\nhierarchical structure to provide deep insights into scene changes across five\ndistinct tasks. Our approach demonstrates superior performance to other methods\nthrough extensive experiments conducted in various scenarios.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Trong-Thuan Nguyen", "Pha Nguyen", "Khoa Luu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0d3"}, "filepath": "data/2312.08578.png", "tags": [], "_media_type": "image", "_rand": 0.9993052450076483, "arXiv_link": "https://arxiv.org/abs/2312.08578", "other_link": "", "title": "A Picture is Worth More Than 77 Text Tokens: Evaluating CLIP-Style Models on Dense Captions", "abstract": "Curation methods for massive vision-language datasets trade off between\ndataset size and quality. However, even the highest quality of available\ncurated captions are far too short to capture the rich visual detail in an\nimage. To show the value of dense and highly-aligned image-text pairs, we\ncollect the Densely Captioned Images (DCI) dataset, containing 8012 natural\nimages human-annotated with mask-aligned descriptions averaging above 1000\nwords each. With precise and reliable captions associated with specific parts\nof an image, we can evaluate vision-language models' (VLMs) understanding of\nimage content with a novel task that matches each caption with its\ncorresponding subcrop. As current models are often limited to 77 text tokens,\nwe also introduce a summarized version (sDCI) in which each caption length is\nlimited. 
We show that modern techniques that make progress on standard\nbenchmarks do not correspond with significant improvement on our sDCI based\nbenchmark. Lastly, we finetune CLIP using sDCI and show significant\nimprovements over the baseline despite a small training set. By releasing the\nfirst human annotated dense image captioning dataset, we hope to enable the\ndevelopment of new benchmarks or fine-tuning recipes for the next generation of\nVLMs to come.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Jack Urbanek", "Florian Bordes", "Pietro Astolfi", "Mary Williamson", "Vasu Sharma", "Adriana Romero-Soriano"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0d4"}, "filepath": "data/2403.13470v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999424956018592, "arXiv_link": "https://arxiv.org/html/2403.13470v1", "other_link": "", "title": "Scaling Diffusion Models to Real-World 3D LiDAR Scene Completion", "abstract": "Computer vision techniques play a central role in the perception stack of\nautonomous vehicles. Such methods are employed to perceive the vehicle\nsurroundings given sensor data. 3D LiDAR sensors are commonly used to collect\nsparse 3D point clouds from the scene. However, compared to human perception,\nsuch systems struggle to deduce the unseen parts of the scene given those\nsparse point clouds. In this matter, the scene completion task aims at\npredicting the gaps in the LiDAR measurements to achieve a more complete scene\nrepresentation. Given the promising results of recent diffusion models as\ngenerative models for images, we propose extending them to achieve scene\ncompletion from a single 3D LiDAR scan. Previous works used diffusion models\nover range images extracted from LiDAR data, directly applying image-based\ndiffusion methods. Distinctly, we propose to directly operate on the points,\nreformulating the noising and denoising diffusion process such that it can\nefficiently work at scene scale. Together with our approach, we propose a\nregularization loss to stabilize the noise predicted during the denoising\nprocess. Our experimental evaluation shows that our method can complete the\nscene given a single LiDAR scan as input, producing a scene with more details\ncompared to state-of-the-art scene completion methods. We believe that our\nproposed diffusion process formulation can support further research in\ndiffusion models applied to scene-scale point cloud data.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Lucas Nunes", "Rodrigo Marcuzzi", "Benedikt Mersch", "Jens Behley", "Cyrill Stachniss"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0d5"}, "filepath": "data/2402.14797.png", "tags": [], "_media_type": "image", "_rand": 0.9990280901926644, "arXiv_link": "https://arxiv.org/abs/2402.14797", "other_link": "https://snap-research.github.io/snapvideo/.", "title": "Snap Video: Scaled Spatiotemporal Transformers for Text-to-Video Synthesis", "abstract": "Contemporary models for generating images show remarkable quality and\nversatility. 
Swayed by these advantages, the research community repurposes them\nto generate videos. Since video content is highly redundant, we argue that\nnaively bringing advances of image models to the video generation domain\nreduces motion fidelity, visual quality and impairs scalability. In this work,\nwe build Snap Video, a video-first model that systematically addresses these\nchallenges. To do that, we first extend the EDM framework to take into account\nspatially and temporally redundant pixels and naturally support video\ngeneration. Second, we show that a U-Net - a workhorse behind image generation\n- scales poorly when generating videos, requiring significant computational\noverhead. Hence, we propose a new transformer-based architecture that trains\n3.31 times faster than U-Nets (and is ~4.5 faster at inference). This allows us\nto efficiently train a text-to-video model with billions of parameters for the\nfirst time, reach state-of-the-art results on a number of benchmarks, and\ngenerate videos with substantially higher quality, temporal consistency, and\nmotion complexity. The user studies showed that our model was favored by a\nlarge margin over the most recent methods. See our website at\nhttps://snap-research.github.io/snapvideo/.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Willi Menapace", "Aliaksandr Siarohin", "Ivan Skorokhodov", "Ekaterina Deyneka", "Tsai-Shien Chen", "Anil Kag", "Yuwei Fang", "Aleksei Stoliar", "Elisa Ricci", "Jian Ren", "Sergey Tulyakov"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0d6"}, "filepath": "data/2311.04246.png", "tags": [], "_media_type": "image", "_rand": 0.999618763172739, "arXiv_link": "https://arxiv.org/abs/2311.04246", "other_link": "", "title": "ADFactory: An Effective Framework for Generalizing Optical Flow with NeRF", "abstract": "A significant challenge facing current optical flow methods is the difficulty\nin generalizing them well to the real world. This is mainly due to the high\ncost of hand-crafted datasets, and existing self-supervised methods are limited\nby indirect loss and occlusions, resulting in fuzzy outcomes. To address this\nchallenge, we introduce a novel optical flow training framework: automatic data\nfactory (ADF). ADF only requires RGB images as input to effectively train the\noptical flow network on the target data domain. Specifically, we use advanced\nNerf technology to reconstruct scenes from photo groups collected by a\nmonocular camera, and then calculate optical flow labels between camera pose\npairs based on the rendering results. To eliminate erroneous labels caused by\ndefects in the scene reconstructed by Nerf, we screened the generated labels\nfrom multiple aspects, such as optical flow matching accuracy, radiation field\nconfidence, and depth consistency. The filtered labels can be directly used for\nnetwork supervision. Experimentally, the generalization ability of ADF on KITTI\nsurpasses existing self-supervised optical flow and monocular scene flow\nalgorithms. 
In addition, ADF achieves impressive results in real-world\nzero-point generalization evaluations and surpasses most supervised methods.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Han Ling", "Quansen Sun", "Yinghui Sun", "Xian Xu", "Xingfeng Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0d7"}, "filepath": "data/2403.12580.png", "tags": [], "_media_type": "image", "_rand": 0.9996858509740737, "arXiv_link": "https://arxiv.org/abs/2403.12580", "other_link": "", "title": "Real-IAD: A Real-World Multi-View Dataset for Benchmarking Versatile Industrial Anomaly Detection", "abstract": "Industrial anomaly detection (IAD) has garnered significant attention and\nexperienced rapid development. However, the recent development of IAD approach\nhas encountered certain difficulties due to dataset limitations. On the one\nhand, most of the state-of-the-art methods have achieved saturation (over 99%\nin AUROC) on mainstream datasets such as MVTec, and the differences of methods\ncannot be well distinguished, leading to a significant gap between public\ndatasets and actual application scenarios. On the other hand, the research on\nvarious new practical anomaly detection settings is limited by the scale of the\ndataset, posing a risk of overfitting in evaluation results. Therefore, we\npropose a large-scale, Real-world, and multi-view Industrial Anomaly Detection\ndataset, named Real-IAD, which contains 150K high-resolution images of 30\ndifferent objects, an order of magnitude larger than existing datasets. It has\na larger range of defect area and ratio proportions, making it more challenging\nthan previous datasets. To make the dataset closer to real application\nscenarios, we adopted a multi-view shooting method and proposed sample-level\nevaluation metrics. In addition, beyond the general unsupervised anomaly\ndetection setting, we propose a new setting for Fully Unsupervised Industrial\nAnomaly Detection (FUIAD) based on the observation that the yield rate in\nindustrial production is usually greater than 60%, which has more practical\napplication value. Finally, we report the results of popular IAD methods on the\nReal-IAD dataset, providing a highly challenging benchmark to promote the\ndevelopment of the IAD field.", "keywords": [], "authors_list": ["Chengjie Wang", "wenbing zhu", "Bin-Bin Gao", "Zhenye Gan", "Jiangning Zhang", "Zhihao Gu", "Bruce Qian", "Mingang Chen", "Lizhuang Ma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0d8"}, "filepath": "data/2312.04548.png", "tags": [], "_media_type": "image", "_rand": 0.9998396038689908, "arXiv_link": "https://arxiv.org/abs/2312.04548", "other_link": "https://mavrec.github.io.", "title": "Multiview Aerial Visual RECognition (MAVREC) Dataset: Can Multi-view Improve Aerial Visual Perception?", "abstract": "Despite the commercial abundance of UAVs, aerial data acquisition remains\nchallenging, and the existing Asia and North America-centric open-source UAV\ndatasets are small-scale or low-resolution and lack diversity in scene\ncontextuality. Additionally, the color content of the scenes, solar-zenith\nangle, and population density of different geographies influence the data\ndiversity. 
These two factors conjointly render suboptimal aerial-visual\nperception of the deep neural network (DNN) models trained primarily on the\nground-view data, including the open-world foundational models.\n To pave the way for a transformative era of aerial detection, we present\nMultiview Aerial Visual RECognition or MAVREC, a video dataset where we record\nsynchronized scenes from different perspectives -- ground camera and\ndrone-mounted camera. MAVREC consists of around 2.5 hours of industry-standard\n2.7K resolution video sequences, more than 0.5 million frames, and 1.1 million\nannotated bounding boxes. This makes MAVREC the largest ground and aerial-view\ndataset, and the fourth largest among all drone-based datasets across all\nmodalities and tasks. Through our extensive benchmarking on MAVREC, we\nrecognize that augmenting object detectors with ground-view images from the\ncorresponding geographical location is a superior pre-training strategy for\naerial detection. Building on this strategy, we benchmark MAVREC with a\ncurriculum-based semi-supervised object detection approach that leverages\nlabeled (ground and aerial) and unlabeled (only aerial) images to enhance the\naerial detection. We publicly release the MAVREC dataset:\nhttps://mavrec.github.io.", "keywords": ["Remote sensing and photogrammetry"], "authors_list": ["Aritra Dutta", "Srijan Das", "Jacob Nielsen", "RAJATSUBHRA CHAKRABORTY", "Mubarak Shah"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0d9"}, "filepath": "data/2404.04319.png", "tags": [], "_media_type": "image", "_rand": 0.9992471482204641, "arXiv_link": "https://arxiv.org/abs/2404.04319", "other_link": "", "title": "SpatialTracker: Tracking Any 2D Pixels in 3D Space", "abstract": "Recovering dense and long-range pixel motion in videos is a challenging\nproblem. Part of the difficulty arises from the 3D-to-2D projection process,\nleading to occlusions and discontinuities in the 2D motion domain. While 2D\nmotion can be intricate, we posit that the underlying 3D motion can often be\nsimple and low-dimensional. In this work, we propose to estimate point\ntrajectories in 3D space to mitigate the issues caused by image projection. Our\nmethod, named SpatialTracker, lifts 2D pixels to 3D using monocular depth\nestimators, represents the 3D content of each frame efficiently using a\ntriplane representation, and performs iterative updates using a transformer to\nestimate 3D trajectories. Tracking in 3D allows us to leverage\nas-rigid-as-possible (ARAP) constraints while simultaneously learning a\nrigidity embedding that clusters pixels into different rigid parts. 
Extensive\nevaluation shows that our approach achieves state-of-the-art tracking\nperformance both qualitatively and quantitatively, particularly in challenging\nscenarios such as out-of-plane rotation.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Yuxi Xiao", "Qianqian Wang", "Shangzhan Zhang", "Nan Xue", "Sida Peng", "Yujun Shen", "Xiaowei Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0da"}, "filepath": "data/2311.11860.png", "tags": [], "_media_type": "image", "_rand": 0.9991749688286871, "arXiv_link": "https://arxiv.org/abs/2311.11860", "other_link": "", "title": "LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge", "abstract": "Multimodal Large Language Models (MLLMs) have endowed LLMs with the ability\nto perceive and understand multi-modal signals. However, most of the existing\nMLLMs mainly adopt vision encoders pretrained on coarsely aligned image-text\npairs, leading to insufficient extraction and reasoning of visual knowledge. To\naddress this issue, we devise a dual-Level vIsual knOwledge eNhanced Multimodal\nLarge Language Model (LION), which empowers the MLLM by injecting visual\nknowledge in two levels. 1) Progressive incorporation of fine-grained\nspatial-aware visual knowledge. We design a vision aggregator cooperated with\nregion-level vision-language (VL) tasks to incorporate fine-grained\nspatial-aware visual knowledge into the MLLM. To alleviate the conflict between\nimage-level and region-level VL tasks during incorporation, we devise a\ndedicated stage-wise instruction-tuning strategy with mixture-of-adapters. This\nprogressive incorporation scheme contributes to the mutual promotion between\nthese two kinds of VL tasks. 2) Soft prompting of high-level semantic visual\nevidence. We facilitate the MLLM with high-level semantic visual evidence by\nleveraging diverse image tags. To mitigate the potential influence caused by\nimperfect predicted tags, we propose a soft prompting method by embedding a\nlearnable token into the tailored text instruction. Comprehensive experiments\non several multi-modal benchmarks demonstrate the superiority of our model\n(e.g., improvement of 5% accuracy on VSR and 3% CIDEr on TextCaps over\nInstructBLIP, 5% accuracy on RefCOCOg over Kosmos-2).", "keywords": ["Large multimodal models and prompting techniques", "Deep learning architectures and techniques"], "authors_list": ["Gongwei Chen", "Leyang Shen", "Rui Shao", "Xiang Deng", "Liqiang Nie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0db"}, "filepath": "data/2403.03421.png", "tags": [], "_media_type": "image", "_rand": 0.9993115795230261, "arXiv_link": "https://arxiv.org/abs/2403.03421", "other_link": "https://github.com/ispc-lab/LEAD.", "title": "LEAD: Learning Decomposition for Source-free Universal Domain Adaptation", "abstract": "Universal Domain Adaptation (UniDA) targets knowledge transfer in the\npresence of both covariate and label shifts. 
Recently, Source-free Universal\nDomain Adaptation (SF-UniDA) has emerged to achieve UniDA without access to\nsource data, which tends to be more practical due to data protection policies.\nThe main challenge lies in determining whether covariate-shifted samples belong\nto target-private unknown categories. Existing methods tackle this either\nthrough hand-crafted thresholding or by developing time-consuming iterative\nclustering strategies. In this paper, we propose a new idea of LEArning\nDecomposition (LEAD), which decouples features into source-known and -unknown\ncomponents to identify target-private data. Technically, LEAD initially\nleverages the orthogonal decomposition analysis for feature decomposition.\nThen, LEAD builds instance-level decision boundaries to adaptively identify\ntarget-private data. Extensive experiments across various UniDA scenarios have\ndemonstrated the effectiveness and superiority of LEAD. Notably, in the OPDA\nscenario on VisDA dataset, LEAD outperforms GLC by 3.5% overall H-score and\nreduces 75% time to derive pseudo-labeling decision boundaries. Besides, LEAD\nis also appealing in that it is complementary to most existing methods. The\ncode is available at https://github.com/ispc-lab/LEAD.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Sanqing Qu", "Tianpei Zou", "Lianghua He", "Florian R\u00f6hrbein", "Alois Knoll", "Guang Chen", "Changjun Jiang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0dc"}, "filepath": "data/2401.01885.png", "tags": [], "_media_type": "image", "_rand": 0.9994397304266506, "arXiv_link": "https://arxiv.org/abs/2401.01885", "other_link": "", "title": "From Audio to Photoreal Embodiment: Synthesizing Humans in Conversations", "abstract": "We present a framework for generating full-bodied photorealistic avatars that\ngesture according to the conversational dynamics of a dyadic interaction. Given\nspeech audio, we output multiple possibilities of gestural motion for an\nindividual, including face, body, and hands. The key behind our method is in\ncombining the benefits of sample diversity from vector quantization with the\nhigh-frequency details obtained through diffusion to generate more dynamic,\nexpressive motion. We visualize the generated motion using highly\nphotorealistic avatars that can express crucial nuances in gestures (e.g.\nsneers and smirks). To facilitate this line of research, we introduce a\nfirst-of-its-kind multi-view conversational dataset that allows for\nphotorealistic reconstruction. Experiments show our model generates appropriate\nand diverse gestures, outperforming both diffusion- and VQ-only methods.\nFurthermore, our perceptual evaluation highlights the importance of\nphotorealism (vs. meshes) in accurately assessing subtle motion details in\nconversational gestures. 
Code and dataset available online.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Evonne Ng", "Javier Romero", "Timur Bagautdinov", "Shaojie Bai", "Trevor Darrell", "Angjoo Kanazawa", "Alexander Richard"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0dd"}, "filepath": "data/2403.07347.png", "tags": [], "_media_type": "image", "_rand": 0.9992577799415763, "arXiv_link": "https://arxiv.org/abs/2403.07347", "other_link": "https://github.com/Jiafei127/FD4MM.", "title": "Frequency Decoupling for Motion Magnification via Multi-Level Isomorphic Architecture", "abstract": "Video Motion Magnification (VMM) aims to reveal subtle and imperceptible\nmotion information of objects in the macroscopic world. Prior methods directly\nmodel the motion field from the Eulerian perspective by Representation Learning\nthat separates shape and texture or Multi-domain Learning from phase\nfluctuations. Inspired by the frequency spectrum, we observe that the\nlow-frequency components with stable energy always possess spatial structure\nand less noise, making them suitable for modeling the subtle motion field. To\nthis end, we present FD4MM, a new paradigm of Frequency Decoupling for Motion\nMagnification with a Multi-level Isomorphic Architecture to capture multi-level\nhigh-frequency details and a stable low-frequency structure (motion field) in\nvideo space. Since high-frequency details and subtle motions are susceptible to\ninformation degradation due to their inherent subtlety and unavoidable external\ninterference from noise, we carefully design Sparse High/Low-pass Filters to\nenhance the integrity of details and motion structures, and a Sparse Frequency\nMixer to promote seamless recoupling. Besides, we innovatively design a\ncontrastive regularization for this task to strengthen the model's ability to\ndiscriminate irrelevant features, reducing undesired motion magnification.\nExtensive experiments on both Real-world and Synthetic Datasets show that our\nFD4MM outperforms SOTA methods. Meanwhile, FD4MM reduces FLOPs by 1.63$\\times$\nand boosts inference speed by 1.68$\\times$ than the latest method. Our code is\navailable at https://github.com/Jiafei127/FD4MM.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Fei Wang", "Dan Guo", "Kun Li", "Zhun Zhong", "Meng Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0de"}, "filepath": "data/2404.00532.png", "tags": [], "_media_type": "image", "_rand": 0.9992620429212875, "arXiv_link": "https://arxiv.org/abs/2404.00532", "other_link": "", "title": "LLM-AR: When Large Language Model Meets Skeleton-Based Action Recognition", "abstract": "Skeleton-based action recognition has attracted lots of research attention.\nRecently, to build an accurate skeleton-based action recognizer, a variety of\nworks have been proposed. Among them, some works use large model architectures\nas backbones of their recognizers to boost the skeleton data representation\ncapability, while some other works pre-train their recognizers on external data\nto enrich the knowledge. 
In this work, we observe that large language models\nwhich have been extensively used in various natural language processing tasks\ngenerally hold both large model architectures and rich implicit knowledge.\nMotivated by this, we propose a novel LLM-AR framework, in which we investigate\ntreating the Large Language Model as an Action Recognizer. In our framework, we\npropose a linguistic projection process to project each input action signal\n(i.e., each skeleton sequence) into its ``sentence format'' (i.e., an ``action\nsentence''). Moreover, we also incorporate our framework with several designs\nto further facilitate this linguistic projection process. Extensive experiments\ndemonstrate the efficacy of our proposed framework.", "keywords": ["Biometrics and human analysis", "Multimodal models and vision-language models"], "authors_list": ["Haoxuan Qu", "Yujun Cai", "Jun Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0df"}, "filepath": "data/2403.01439.png", "tags": [], "_media_type": "image", "_rand": 0.999181637062163, "arXiv_link": "https://arxiv.org/abs/2403.01439", "other_link": "https://github.com/LMD0311/DAPT.", "title": "Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis", "abstract": "Point cloud analysis has achieved outstanding performance by transferring\npoint cloud pre-trained models. However, existing methods for model adaptation\nusually update all model parameters, i.e., full fine-tuning paradigm, which is\ninefficient as it relies on high computational costs (e.g., training GPU\nmemory) and massive storage space. In this paper, we aim to study\nparameter-efficient transfer learning for point cloud analysis with an ideal\ntrade-off between task performance and parameter efficiency. To achieve this\ngoal, we freeze the parameters of the default pre-trained models and then\npropose the Dynamic Adapter, which generates a dynamic scale for each token,\nconsidering the token significance to the downstream task. We further\nseamlessly integrate Dynamic Adapter with Prompt Tuning (DAPT) by constructing\nInternal Prompts, capturing the instance-specific features for interaction.\nExtensive experiments conducted on five challenging datasets demonstrate that\nthe proposed DAPT achieves superior performance compared to the full\nfine-tuning counterparts while significantly reducing the trainable parameters\nand training GPU memory by 95% and 35%, respectively. Code is available at\nhttps://github.com/LMD0311/DAPT.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Xin Zhou", "Dingkang Liang", "Wei Xu", "Xingkui Zhu", "Yihan Xu", "Zhikang Zou", "Xiang Bai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0e0"}, "filepath": "data/2308.07891.png", "tags": [], "_media_type": "image", "_rand": 0.9990451245286047, "arXiv_link": "https://arxiv.org/abs/2308.07891", "other_link": "https://github.com/isekai-portal/Link-Context-Learning.", "title": "Link-Context Learning for Multimodal LLMs", "abstract": "The ability to learn from context with novel concepts, and deliver\nappropriate responses are essential in human conversations. 
Despite current\nMultimodal Large Language Models (MLLMs) and Large Language Models (LLMs) being\ntrained on mega-scale datasets, recognizing unseen images or understanding\nnovel concepts in a training-free manner remains a challenge. In-Context\nLearning (ICL) explores training-free few-shot learning, where models are\nencouraged to ``learn to learn\" from limited tasks and generalize to unseen\ntasks. In this work, we propose link-context learning (LCL), which emphasizes\n\"reasoning from cause and effect\" to augment the learning capabilities of\nMLLMs. LCL goes beyond traditional ICL by explicitly strengthening the causal\nrelationship between the support set and the query set. By providing\ndemonstrations with causal links, LCL guides the model to discern not only the\nanalogy but also the underlying causal associations between data points, which\nempowers MLLMs to recognize unseen images and understand novel concepts more\neffectively. To facilitate the evaluation of this novel approach, we introduce\nthe ISEKAI dataset, comprising exclusively of unseen generated image-label\npairs designed for link-context learning. Extensive experiments show that our\nLCL-MLLM exhibits strong link-context learning capabilities to novel concepts\nover vanilla MLLMs. Code and data will be released at\nhttps://github.com/isekai-portal/Link-Context-Learning.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Yan Tai", "Weichen Fan", "Zhao Zhang", "Ziwei Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0e1"}, "filepath": "data/2404.15081.png", "tags": [], "_media_type": "image", "_rand": 0.9992405425199273, "arXiv_link": "https://arxiv.org/abs/2404.15081", "other_link": "", "title": "Perturbing Attention Gives You More Bang for the Buck: Subtle Imaging Perturbations That Efficiently Fool Customized Diffusion Models", "abstract": "Diffusion models (DMs) embark a new era of generative modeling and offer more\nopportunities for efficient generating high-quality and realistic data samples.\nHowever, their widespread use has also brought forth new challenges in model\nsecurity, which motivates the creation of more effective adversarial attackers\non DMs to understand its vulnerability. We propose CAAT, a simple but generic\nand efficient approach that does not require costly training to effectively\nfool latent diffusion models (LDMs). The approach is based on the observation\nthat cross-attention layers exhibits higher sensitivity to gradient change,\nallowing for leveraging subtle perturbations on published images to\nsignificantly corrupt the generated images. We show that a subtle perturbation\non an image can significantly impact the cross-attention layers, thus changing\nthe mapping between text and image during the fine-tuning of customized\ndiffusion models. 
Extensive experiments demonstrate that CAAT is compatible\nwith diverse diffusion models and outperforms baseline attack methods in a more\neffective (more noise) and efficient (twice as fast as Anti-DreamBooth and\nMist) manner.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Jingyao Xu", "Yuetong Lu", "Yandong Li", "Siyang Lu", "Dongdong Wang", "Xiang Wei"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Cryptography and Security", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0e2"}, "filepath": "data/2404.04318.png", "tags": [], "_media_type": "image", "_rand": 0.9995136552558884, "arXiv_link": "https://arxiv.org/abs/2404.04318", "other_link": "https://lastbasket.github.io/PPFT/.", "title": "Robust Depth Enhancement via Polarization Prompt Fusion Tuning", "abstract": "Existing depth sensors are imperfect and may provide inaccurate depth values\nin challenging scenarios, such as in the presence of transparent or reflective\nobjects. In this work, we present a general framework that leverages\npolarization imaging to improve inaccurate depth measurements from various\ndepth sensors. Previous polarization-based depth enhancement methods focus on\nutilizing pure physics-based formulas for a single sensor. In contrast, our\nmethod first adopts a learning-based strategy where a neural network is trained\nto estimate a dense and complete depth map from polarization data and a sensor\ndepth map from different sensors. To further improve the performance, we\npropose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively\nutilize RGB-based models pre-trained on large-scale datasets, as the size of\nthe polarization dataset is limited to train a strong model from scratch. We\nconducted extensive experiments on a public dataset, and the results\ndemonstrate that the proposed method performs favorably compared to existing\ndepth enhancement baselines. Code and demos are available at\nhttps://lastbasket.github.io/PPFT/.", "keywords": ["Deep learning architectures and techniques", "Computational imaging and physics-based vision"], "authors_list": ["Kei IKEMURA", "Yiming Huang", "Felix Heide", "Zhaoxiang Zhang", "Qifeng Chen", "Chenyang Lei"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0e3"}, "filepath": "data/2311.17138.png", "tags": [], "_media_type": "image", "_rand": 0.9992166642672088, "arXiv_link": "https://arxiv.org/abs/2311.17138", "other_link": "", "title": "Shadows Don\u2019t Lie and Lines Can't Bend! Generative Models don't know Projective Geometry...for now", "abstract": "Generative models can produce impressively realistic images. This paper\ndemonstrates that generated images have geometric features different from those\nof real images. We build a set of collections of generated images, prequalified\nto fool simple, signal-based classifiers into believing they are real. We then\nshow that prequalified generated images can be identified reliably by\nclassifiers that only look at geometric properties. We use three such\nclassifiers. All three classifiers are denied access to image pixels, and look\nonly at derived geometric features. 
The first classifier looks at the\nperspective field of the image, the second looks at lines detected in the\nimage, and the third looks at relations between detected objects and shadows.\nOur procedure detects generated images more reliably than SOTA local signal\nbased detectors, for images from a number of distinct generators. Saliency maps\nsuggest that the classifiers can identify geometric problems reliably. We\nconclude that current generators cannot reliably reproduce geometric properties\nof real images.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Ayush Sarkar", "Hanlin Mai", "Amitabh Mahapatra", "David Forsyth", "Svetlana Lazebnik", "Anand Bhattad"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0e4"}, "filepath": "data/2404.02152.png", "tags": [], "_media_type": "image", "_rand": 0.9995107830141432, "arXiv_link": "https://arxiv.org/abs/2404.02152", "other_link": "https://zju3dv.github.io/geneavatar/", "title": "GeneAvatar: Generic Expression-Aware Volumetric Head Avatar Editing from a Single Image", "abstract": "Recently, we have witnessed the explosive growth of various volumetric\nrepresentations in modeling animatable head avatars. However, due to the\ndiversity of frameworks, there is no practical method to support high-level\napplications like 3D head avatar editing across different representations. In\nthis paper, we propose a generic avatar editing approach that can be\nuniversally applied to various 3DMM driving volumetric head avatars. To achieve\nthis goal, we design a novel expression-aware modification generative model,\nwhich enables lift 2D editing from a single image to a consistent 3D\nmodification field. To ensure the effectiveness of the generative modification\nprocess, we develop several techniques, including an expression-dependent\nmodification distillation scheme to draw knowledge from the large-scale head\navatar model and 2D facial texture editing tools, implicit latent space\nguidance to enhance model convergence, and a segmentation-based loss reweight\nstrategy for fine-grained texture inversion. Extensive experiments demonstrate\nthat our method delivers high-quality and consistent results across multiple\nexpression and viewpoints. Project page: https://zju3dv.github.io/geneavatar/", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Chong Bao", "Yinda Zhang", "Yuan Li", "Xiyu Zhang", "Bangbang Yang", "Hujun Bao", "Marc Pollefeys", "Guofeng Zhang", "Zhaopeng Cui"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0e5"}, "filepath": "data/2308.10997.png", "tags": [], "_media_type": "image", "_rand": 0.9991276706643966, "arXiv_link": "https://arxiv.org/abs/2308.10997", "other_link": "", "title": "MarkovGen: Structured Prediction for Efficient Text-to-Image Generation", "abstract": "Modern text-to-image generation models produce high-quality images that are\nboth photorealistic and faithful to the text prompts. 
However, this quality\ncomes at significant computational cost: nearly all of these models are\niterative and require running sampling multiple times with large models. This\niterative process is needed to ensure that different regions of the image are\nnot only aligned with the text prompt, but also compatible with each other. In\nthis work, we propose a light-weight approach to achieving this compatibility\nbetween different regions of an image, using a Markov Random Field (MRF) model.\nWe demonstrate the effectiveness of this method on top of the latent\ntoken-based Muse text-to-image model. The MRF richly encodes the compatibility\namong image tokens at different spatial locations to improve quality and\nsignificantly reduce the required number of Muse sampling steps. Inference with\nthe MRF is significantly cheaper, and its parameters can be quickly learned\nthrough back-propagation by modeling MRF inference as a differentiable\nneural-network layer. Our full model, MarkovGen, uses this proposed MRF model\nto both speed up Muse by 1.5X and produce higher quality images by decreasing\nundesirable image artifacts.", "keywords": ["Image and video generation and manipulation", "Efficient and scalable vision"], "authors_list": ["Sadeep Jayasumana", "Daniel Glasner", "Srikumar Ramalingam", "Andreas Veit", "Ayan Chakrabarti", "Sanjiv Kumar"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0e6"}, "filepath": "data/2312.09913.png", "tags": [], "_media_type": "image", "_rand": 0.9998811049881093, "arXiv_link": "https://arxiv.org/abs/2312.09913", "other_link": "", "title": "LAENeRF: Local Appearance Editing for Neural Radiance Fields", "abstract": "Due to the omnipresence of Neural Radiance Fields (NeRFs), the interest\ntowards editable implicit 3D representations has surged over the last years.\nHowever, editing implicit or hybrid representations as used for NeRFs is\ndifficult due to the entanglement of appearance and geometry encoded in the\nmodel parameters. Despite these challenges, recent research has shown first\npromising steps towards photorealistic and non-photorealistic appearance edits.\nThe main open issues of related work include limited interactivity, a lack of\nsupport for local edits and large memory requirements, rendering them less\nuseful in practice. We address these limitations with LAENeRF, a unified\nframework for photorealistic and non-photorealistic appearance editing of\nNeRFs. To tackle local editing, we leverage a voxel grid as starting point for\nregion selection. We learn a mapping from expected ray terminations to final\noutput color, which can optionally be supervised by a style loss, resulting in\na framework which can perform photorealistic and non-photorealistic appearance\nediting of selected regions. Relying on a single point per ray for our mapping,\nwe limit memory requirements and enable fast optimization. To guarantee\ninteractivity, we compose the output color using a set of learned, modifiable\nbase colors, composed with additive layer mixing. 
Compared to concurrent work,\nLAENeRF enables recoloring and stylization while keeping processing time low.\nFurthermore, we demonstrate that our approach surpasses baseline methods both\nquantitatively and qualitatively.", "keywords": ["Image and video generation and manipulation", "Efficient and scalable vision"], "authors_list": ["Lukas Radl", "Michael Steiner", "Andreas Kurz", "Markus Steinberger"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0e7"}, "filepath": "data/2401.08739.png", "tags": [], "_media_type": "image", "_rand": 0.999701191333421, "arXiv_link": "https://arxiv.org/abs/2401.08739", "other_link": "https://ego-gen.github.io/.", "title": "EgoGen: An Egocentric Synthetic Data Generator", "abstract": "Understanding the world in first-person view is fundamental in Augmented\nReality (AR). This immersive perspective brings dramatic visual changes and\nunique challenges compared to third-person views. Synthetic data has empowered\nthird-person-view vision models, but its application to embodied egocentric\nperception tasks remains largely unexplored. A critical challenge lies in\nsimulating natural human movements and behaviors that effectively steer the\nembodied cameras to capture a faithful egocentric representation of the 3D\nworld. To address this challenge, we introduce EgoGen, a new synthetic data\ngenerator that can produce accurate and rich ground-truth training data for\negocentric perception tasks. At the heart of EgoGen is a novel human motion\nsynthesis model that directly leverages egocentric visual inputs of a virtual\nhuman to sense the 3D environment. Combined with collision-avoiding motion\nprimitives and a two-stage reinforcement learning approach, our motion\nsynthesis model offers a closed-loop solution where the embodied perception and\nmovement of the virtual human are seamlessly coupled. Compared to previous\nworks, our model eliminates the need for a pre-defined global path, and is\ndirectly applicable to dynamic environments. Combined with our easy-to-use and\nscalable data generation pipeline, we demonstrate EgoGen's efficacy in three\ntasks: mapping and localization for head-mounted cameras, egocentric camera\ntracking, and human mesh recovery from egocentric views. EgoGen will be fully\nopen-sourced, offering a practical solution for creating realistic egocentric\ntraining data and aiming to serve as a useful tool for egocentric computer\nvision research. 
Refer to our project page: https://ego-gen.github.io/.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Gen Li", "Kaifeng Zhao", "Siwei Zhang", "Xiaozhong Lyu", "Mihai Dusmanu", "Yan Zhang", "Marc Pollefeys", "Siyu Tang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0e8"}, "filepath": "data/2403.09359.png", "tags": [], "_media_type": "image", "_rand": 0.9994137134768412, "arXiv_link": "https://arxiv.org/abs/2403.09359", "other_link": "https://github.com/EdwardDo69/D3T", "title": "D3T: Distinctive Dual-Domain Teacher Zigzagging Across RGB-Thermal Gap for Domain-Adaptive Object Detection", "abstract": "Domain adaptation for object detection typically entails transferring\nknowledge from one visible domain to another visible domain. However, there are\nlimited studies on adapting from the visible to the thermal domain, because the\ndomain gap between the visible and thermal domains is much larger than\nexpected, and traditional domain adaptation can not successfully facilitate\nlearning in this situation. To overcome this challenge, we propose a\nDistinctive Dual-Domain Teacher (D3T) framework that employs distinct training\nparadigms for each domain. Specifically, we segregate the source and target\ntraining sets for building dual-teachers and successively deploy exponential\nmoving average to the student model to individual teachers of each domain. The\nframework further incorporates a zigzag learning method between dual teachers,\nfacilitating a gradual transition from the visible to thermal domains during\ntraining. We validate the superiority of our method through newly designed\nexperimental protocols with well-known thermal datasets, i.e., FLIR and KAIST.\nSource code is available at https://github.com/EdwardDo69/D3T .", "keywords": [], "authors_list": ["Dinh Phat Do", "Taehoon Kim", "JAEMIN NA", "Jiwon Kim", "Keonho LEE", "Kyunghwan Cho", "Wonjun Hwang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0e9"}, "filepath": "data/2403.06973.png", "tags": [], "_media_type": "image", "_rand": 0.9996067447172237, "arXiv_link": "https://arxiv.org/abs/2403.06973", "other_link": "", "title": "Bayesian Diffusion Models for 3D Shape Reconstruction", "abstract": "We present Bayesian Diffusion Models (BDM), a prediction algorithm that\nperforms effective Bayesian inference by tightly coupling the top-down (prior)\ninformation with the bottom-up (data-driven) procedure via joint diffusion\nprocesses. We show the effectiveness of BDM on the 3D shape reconstruction\ntask. Compared to prototypical deep learning data-driven approaches trained on\npaired (supervised) data-labels (e.g. image-point clouds) datasets, our BDM\nbrings in rich prior information from standalone labels (e.g. point clouds) to\nimprove the bottom-up 3D reconstruction. As opposed to the standard Bayesian\nframeworks where explicit prior and likelihood are required for the inference,\nBDM performs seamless information fusion via coupled diffusion processes with\nlearned gradient computation networks. 
The specialty of our BDM lies in its\ncapability to engage the active and effective information exchange and fusion\nof the top-down and bottom-up processes where each itself is a diffusion\nprocess. We demonstrate state-of-the-art results on both synthetic and\nreal-world benchmarks for 3D shape reconstruction.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Haiyang Xu", "Yu lei", "Zeyuan Chen", "Xiang Zhang", "Yue Zhao", "Yilin Wang", "Zhuowen Tu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0ea"}, "filepath": "data/2312.16256.png", "tags": [], "_media_type": "image", "_rand": 0.9991726185765919, "arXiv_link": "https://arxiv.org/abs/2312.16256", "other_link": "https://dl3dv-10k.github.io/DL3DV-10K/.", "title": "DL3DV-10K: A Large-Scale Scene Dataset for Deep Learning-based 3D Vision", "abstract": "We have witnessed significant progress in deep learning-based 3D vision,\nranging from neural radiance field (NeRF) based 3D representation learning to\napplications in novel view synthesis (NVS). However, existing scene-level\ndatasets for deep learning-based 3D vision, limited to either synthetic\nenvironments or a narrow selection of real-world scenes, are quite\ninsufficient. This insufficiency not only hinders a comprehensive benchmark of\nexisting methods but also caps what could be explored in deep learning-based 3D\nanalysis. To address this critical gap, we present DL3DV-10K, a large-scale\nscene dataset, featuring 51.2 million frames from 10,510 videos captured from\n65 types of point-of-interest (POI) locations, covering both bounded and\nunbounded scenes, with different levels of reflection, transparency, and\nlighting. We conducted a comprehensive benchmark of recent NVS methods on\nDL3DV-10K, which revealed valuable insights for future research in NVS. In\naddition, we have obtained encouraging results in a pilot study to learn\ngeneralizable NeRF from DL3DV-10K, which manifests the necessity of a\nlarge-scale scene-level dataset to forge a path toward a foundation model for\nlearning 3D representation. Our DL3DV-10K dataset, benchmark results, and\nmodels will be publicly accessible at https://dl3dv-10k.github.io/DL3DV-10K/.", "keywords": ["Scene analysis and understanding", "Deep learning architectures and techniques"], "authors_list": ["Lu Ling", "Yichen Sheng", "Zhi Tu", "Wentian Zhao", "Cheng Xin", "Kun Wan", "Lantao Yu", "Qianyu Guo", "Zixun Yu", "Yawen Lu", "Xuanmao Li", "Xingpeng Sun", "Rohan Ashok", "Aniruddha Mukherjee", "Hao Kang", "Xiangrui Kong", "Gang Hua", "Tianyi Zhang", "Bedrich Benes", "Aniket Bera"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0eb"}, "filepath": "data/2311.17918.png", "tags": [], "_media_type": "image", "_rand": 0.9995967222755874, "arXiv_link": "https://arxiv.org/abs/2311.17918", "other_link": "", "title": "Driving into the Future: Multiview Visual Forecasting and Planning with World Model for Autonomous Driving", "abstract": "In autonomous driving, predicting future events in advance and evaluating the\nforeseeable risks empowers autonomous vehicles to better plan their actions,\nenhancing safety and efficiency on the road. 
To this end, we propose Drive-WM,\nthe first driving world model compatible with existing end-to-end planning\nmodels. Through a joint spatial-temporal modeling facilitated by view\nfactorization, our model generates high-fidelity multiview videos in driving\nscenes. Building on its powerful generation ability, we showcase the potential\nof applying the world model for safe driving planning for the first time.\nParticularly, our Drive-WM enables driving into multiple futures based on\ndistinct driving maneuvers, and determines the optimal trajectory according to\nthe image-based rewards. Evaluation on real-world driving datasets verifies\nthat our method could generate high-quality, consistent, and controllable\nmultiview videos, opening up possibilities for real-world simulations and safe\nplanning.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Yuqi Wang", "Jiawei He", "Lue Fan", "Hongxin Li", "Yuntao Chen", "Zhaoxiang Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0ec"}, "filepath": "data/2312.16794.png", "tags": [], "_media_type": "image", "_rand": 0.999687831751484, "arXiv_link": "https://arxiv.org/abs/2312.16794", "other_link": "https://github.com/lsl001006/ZONE.", "title": "ZONE: Zero-Shot Instruction-Guided Local Editing", "abstract": "Recent advances in vision-language models like Stable Diffusion have shown\nremarkable power in creative image synthesis and editing.However, most existing\ntext-to-image editing methods encounter two obstacles: First, the text prompt\nneeds to be carefully crafted to achieve good results, which is not intuitive\nor user-friendly. Second, they are insensitive to local edits and can\nirreversibly affect non-edited regions, leaving obvious editing traces. To\ntackle these problems, we propose a Zero-shot instructiON-guided local image\nEditing approach, termed ZONE. We first convert the editing intent from the\nuser-provided instruction (e.g., \"make his tie blue\") into specific image\nediting regions through InstructPix2Pix. We then propose a Region-IoU scheme\nfor precise image layer extraction from an off-the-shelf segment model. We\nfurther develop an edge smoother based on FFT for seamless blending between the\nlayer and the image.Our method allows for arbitrary manipulation of a specific\nregion with a single instruction while preserving the rest. Extensive\nexperiments demonstrate that our ZONE achieves remarkable local editing results\nand user-friendliness, outperforming state-of-the-art methods. 
Code is\navailable at https://github.com/lsl001006/ZONE.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Shanglin Li", "Bohan Zeng", "Yutang Feng", "Sicheng Gao", "Xuhui Liu", "Jiaming Liu", "Li Lin", "Xu Tang", "Yao Hu", "Jianzhuang Liu", "Baochang Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0ed"}, "filepath": "data/2312.07409.png", "tags": [], "_media_type": "image", "_rand": 0.9992711967368523, "arXiv_link": "https://arxiv.org/abs/2312.07409", "other_link": "", "title": "DiffMorpher: Unleashing the Capability of Diffusion Models for Image Morphing", "abstract": "Diffusion models have achieved remarkable image generation quality surpassing\nprevious generative models. However, a notable limitation of diffusion models,\nin comparison to GANs, is their difficulty in smoothly interpolating between\ntwo image samples, due to their highly unstructured latent space. Such a smooth\ninterpolation is intriguing as it naturally serves as a solution for the image\nmorphing task with many applications. In this work, we present DiffMorpher, the\nfirst approach enabling smooth and natural image interpolation using diffusion\nmodels. Our key idea is to capture the semantics of the two images by fitting\ntwo LoRAs to them respectively, and interpolate between both the LoRA\nparameters and the latent noises to ensure a smooth semantic transition, where\ncorrespondence automatically emerges without the need for annotation. In\naddition, we propose an attention interpolation and injection technique and a\nnew sampling schedule to further enhance the smoothness between consecutive\nimages. Extensive experiments demonstrate that DiffMorpher achieves starkly\nbetter image morphing effects than previous methods across a variety of object\ncategories, bridging a critical functional gap that distinguished diffusion\nmodels from GANs.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Kaiwen Zhang", "Yifan Zhou", "Xudong XU", "Bo Dai", "Xingang Pan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0ee"}, "filepath": "data/2309.03895.png", "tags": [], "_media_type": "image", "_rand": 0.9991583910492695, "arXiv_link": "https://arxiv.org/abs/2309.03895", "other_link": "", "title": "InstructDiffusion: A Generalist Modeling Interface for Vision Tasks", "abstract": "We present InstructDiffusion, a unifying and generic framework for aligning\ncomputer vision tasks with human instructions. Unlike existing approaches that\nintegrate prior knowledge and pre-define the output space (e.g., categories and\ncoordinates) for each vision task, we cast diverse vision tasks into a\nhuman-intuitive image-manipulating process whose output space is a flexible and\ninteractive pixel space. Concretely, the model is built upon the diffusion\nprocess and is trained to predict pixels according to user instructions, such\nas encircling the man's left shoulder in red or applying a blue mask to the\nleft car. 
InstructDiffusion could handle a variety of vision tasks, including\nunderstanding tasks (such as segmentation and keypoint detection) and\ngenerative tasks (such as editing and enhancement). It even exhibits the\nability to handle unseen tasks and outperforms prior methods on novel datasets.\nThis represents a significant step towards a generalist modeling interface for\nvision tasks, advancing artificial general intelligence in the field of\ncomputer vision.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Zigang Geng", "Binxin Yang", "Tiankai Hang", "Chen Li", "Shuyang Gu", "Ting Zhang", "Jianmin Bao", "Zheng Zhang", "Houqiang Li", "Han Hu", "Dong Chen", "Baining Guo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0ef"}, "filepath": "data/2308.16682.png", "tags": [], "_media_type": "image", "_rand": 0.9996156727370176, "arXiv_link": "https://arxiv.org/abs/2308.16682", "other_link": "https://diffusionposer.github.io/.", "title": "Loose Inertial Poser: Motion Capture with IMU-attached Loose-Wear Jacket", "abstract": "Motion capture from a limited number of body-worn sensors, such as inertial\nmeasurement units (IMUs) and pressure insoles, has important applications in\nhealth, human performance, and entertainment. Recent work has focused on\naccurately reconstructing whole-body motion from a specific sensor\nconfiguration using six IMUs. While a common goal across applications is to use\nthe minimal number of sensors to achieve required accuracy, the optimal\narrangement of the sensors might differ from application to application. We\npropose a single diffusion model, DiffusionPoser, which reconstructs human\nmotion in real-time from an arbitrary combination of sensors, including IMUs\nplaced at specified locations, and, pressure insoles. Unlike existing methods,\nour model grants users the flexibility to determine the number and arrangement\nof sensors tailored to the specific activity of interest, without the need for\nretraining. A novel autoregressive inferencing scheme ensures real-time motion\nreconstruction that closely aligns with measured sensor signals. The generative\nnature of DiffusionPoser ensures realistic behavior, even for\ndegrees-of-freedom not directly measured. 
Qualitative results can be found on\nour website: https://diffusionposer.github.io/.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Chengxu Zuo", "Yiming Wang", "Lishuang Zhan", "Shihui Guo", "Xinyu Yi", "Feng Xu", "Yipeng Qin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0f0"}, "filepath": "data/2405.11643.png", "tags": [], "_media_type": "image", "_rand": 0.9994657388914066, "arXiv_link": "https://arxiv.org/abs/2405.11643", "other_link": "", "title": "Morphological Prototyping for Unsupervised Slide Representation Learning in Computational Pathology", "abstract": "Representation learning of pathology whole-slide images (WSIs) has been has\nprimarily relied on weak supervision with Multiple Instance Learning (MIL).\nHowever, the slide representations resulting from this approach are highly\ntailored to specific clinical tasks, which limits their expressivity and\ngeneralization, particularly in scenarios with limited data. Instead, we\nhypothesize that morphological redundancy in tissue can be leveraged to build a\ntask-agnostic slide representation in an unsupervised fashion. To this end, we\nintroduce PANTHER, a prototype-based approach rooted in the Gaussian mixture\nmodel that summarizes the set of WSI patches into a much smaller set of\nmorphological prototypes. Specifically, each patch is assumed to have been\ngenerated from a mixture distribution, where each mixture component represents\na morphological exemplar. Utilizing the estimated mixture parameters, we then\nconstruct a compact slide representation that can be readily used for a wide\nrange of downstream tasks. By performing an extensive evaluation of PANTHER on\nsubtyping and survival tasks using 13 datasets, we show that 1) PANTHER\noutperforms or is on par with supervised MIL baselines and 2) the analysis of\nmorphological prototypes brings new qualitative and quantitative insights into\nmodel interpretability.", "keywords": ["Deep learning architectures and techniques", "Medical imaging and biological vision"], "authors_list": ["Andrew Song", "Richard J. Chen", "Tong Ding", "Drew F. K. Williamson", "Guillaume Jaume", "Faisal Mahmood"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0f1"}, "filepath": "data/2403.19964.png", "tags": [], "_media_type": "image", "_rand": 0.9991234229954823, "arXiv_link": "https://arxiv.org/abs/2403.19964", "other_link": "", "title": "FairRAG: Fair Human Generation via Fair Retrieval Augmentation", "abstract": "Existing text-to-image generative models reflect or even amplify societal\nbiases ingrained in their training data. This is especially concerning for\nhuman image generation where models are biased against certain demographic\ngroups. Existing attempts to rectify this issue are hindered by the inherent\nlimitations of the pre-trained models and fail to substantially improve\ndemographic diversity. In this work, we introduce Fair Retrieval Augmented\nGeneration (FairRAG), a novel framework that conditions pre-trained generative\nmodels on reference images retrieved from an external image database to improve\nfairness in human generation. 
FairRAG enables conditioning through a\nlightweight linear module that projects reference images into the textual\nspace. To enhance fairness, FairRAG applies simple-yet-effective debiasing\nstrategies, providing images from diverse demographic groups during the\ngenerative process. Extensive experiments demonstrate that FairRAG outperforms\nexisting methods in terms of demographic diversity, image-text alignment, and\nimage fidelity while incurring minimal computational overhead during inference.", "keywords": ["Image and video generation and manipulation", "Vision applications for social good and ethics", "Biometrics and human analysis", "Multimodal models and vision-language models"], "authors_list": ["Robik Shrestha", "Yang Zou", "Qiuyu Chen", "Zhiheng Li", "Yusheng Xie", "Siqi Deng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computers and Society", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0f2"}, "filepath": "data/2404.08027.png", "tags": [], "_media_type": "image", "_rand": 0.9992751586917155, "arXiv_link": "https://arxiv.org/abs/2404.08027", "other_link": "", "title": "Modeling Dense Multimodal Interactions Between Biological Pathways and Histology for Survival Prediction", "abstract": "Multi-modal learning that combines pathological images with genomic data has\nsignificantly enhanced the accuracy of survival prediction. Nevertheless,\nexisting methods have not fully utilized the inherent hierarchical structure\nwithin both whole slide images (WSIs) and transcriptomic data, from which\nbetter intra-modal representations and inter-modal integration could be\nderived. Moreover, many existing studies attempt to improve multi-modal\nrepresentations through attention mechanisms, which inevitably lead to high\ncomplexity when processing high-dimensional WSIs and transcriptomic data.\nRecently, a structured state space model named Mamba emerged as a promising\napproach for its superior performance in modeling long sequences with low\ncomplexity. In this study, we propose Mamba with multi-grained multi-modal\ninteraction (SurvMamba) for survival prediction. SurvMamba is implemented with\na Hierarchical Interaction Mamba (HIM) module that facilitates efficient\nintra-modal interactions at different granularities, thereby capturing more\ndetailed local features as well as rich global representations. In addition, an\nInteraction Fusion Mamba (IFM) module is used for cascaded inter-modal\ninteractive fusion, yielding more comprehensive features for survival\nprediction. Comprehensive evaluations on five TCGA datasets demonstrate that\nSurvMamba outperforms other existing methods in terms of performance and\ncomputational cost.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Guillaume Jaume", "Anurag Vaidya", "Richard J. Chen", "Drew F. K. 
Williamson", "Paul Pu Liang", "Faisal Mahmood"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0f3"}, "filepath": "data/2311.17901.png", "tags": [], "_media_type": "image", "_rand": 0.9995968622837036, "arXiv_link": "https://arxiv.org/abs/2311.17901", "other_link": "", "title": "SODA: Bottleneck Diffusion Models for Representation Learning", "abstract": "We introduce SODA, a self-supervised diffusion model, designed for\nrepresentation learning. The model incorporates an image encoder, which\ndistills a source view into a compact representation, that, in turn, guides the\ngeneration of related novel views. We show that by imposing a tight bottleneck\nbetween the encoder and a denoising decoder, and leveraging novel view\nsynthesis as a self-supervised objective, we can turn diffusion models into\nstrong representation learners, capable of capturing visual semantics in an\nunsupervised manner. To the best of our knowledge, SODA is the first diffusion\nmodel to succeed at ImageNet linear-probe classification, and, at the same\ntime, it accomplishes reconstruction, editing and synthesis tasks across a wide\nrange of datasets. Further investigation reveals the disentangled nature of its\nemergent latent space, that serves as an effective interface to control and\nmanipulate the model's produced images. All in all, we aim to shed light on the\nexciting and promising potential of diffusion models, not only for image\ngeneration, but also for learning rich and robust representations.", "keywords": [], "authors_list": ["Drew Hudson", "Daniel Zoran", "Mateusz Malinowski", "Andrew Lampinen", "Andrew Jaegle", "James McClelland", "Loic Matthey", "Felix Hill", "Alexander Lerchner"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0f4"}, "filepath": "data/2312.00869.png", "tags": [], "_media_type": "image", "_rand": 0.9993242718281111, "arXiv_link": "https://arxiv.org/abs/2312.00869", "other_link": "https://xk-huang.github.io/segment-caption-anything/.", "title": "Segment and Caption Anything", "abstract": "We propose a method to efficiently equip the Segment Anything Model (SAM)\nwith the ability to generate regional captions. SAM presents strong\ngeneralizability to segment anything while is short for semantic understanding.\nBy introducing a lightweight query-based feature mixer, we align the\nregion-specific features with the embedding space of language models for later\ncaption generation. As the number of trainable parameters is small (typically\nin the order of tens of millions), it costs less computation, less memory\nusage, and less communication bandwidth, resulting in both fast and scalable\ntraining. To address the scarcity problem of regional caption data, we propose\nto first pre-train our model on objection detection and segmentation tasks. We\ncall this step weak supervision pretraining since the pre-training data only\ncontains category names instead of full-sentence descriptions. The weak\nsupervision pretraining allows us to leverage many publicly available object\ndetection and segmentation datasets. 
We conduct extensive experiments to\ndemonstrate the superiority of our method and validate each design choice. This\nwork serves as a stepping stone towards scaling up regional captioning data and\nsheds light on exploring efficient ways to augment SAM with regional semantics.\nThe project page, along with the associated code, can be accessed via\nhttps://xk-huang.github.io/segment-caption-anything/.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Xiaoke Huang", "Jianfeng Wang", "Yansong Tang", "Zheng Zhang", "Han Hu", "Jiwen Lu", "Lijuan Wang", "Zicheng Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0f5"}, "filepath": "data/2401.00029.png", "tags": [], "_media_type": "image", "_rand": 0.9994829253643555, "arXiv_link": "https://arxiv.org/abs/2401.00029", "other_link": "", "title": "6D-Diff: A Keypoint Diffusion Framework for 6D Object Pose Estimation", "abstract": "Estimating the 6D object pose from a single RGB image often involves noise\nand indeterminacy due to challenges such as occlusions and cluttered\nbackgrounds. Meanwhile, diffusion models have shown appealing performance in\ngenerating high-quality images from random noise with high indeterminacy\nthrough step-by-step denoising. Inspired by their denoising capability, we\npropose a novel diffusion-based framework (6D-Diff) to handle the noise and\nindeterminacy in object pose estimation for better performance. In our\nframework, to establish accurate 2D-3D correspondence, we formulate 2D\nkeypoints detection as a reverse diffusion (denoising) process. To facilitate\nsuch a denoising process, we design a Mixture-of-Cauchy-based forward diffusion\nprocess and condition the reverse process on the object features. Extensive\nexperiments on the LM-O and YCB-V datasets demonstrate the effectiveness of our\nframework.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Li Xu", "Haoxuan Qu", "Yujun Cai", "Jun Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0f6"}, "filepath": "data/2312.11557.png", "tags": [], "_media_type": "image", "_rand": 0.999690328384116, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2312.11557", "other_link": "https://yd-yin.github.io/SAI3D.", "title": "UnScene3D: Unsupervised 3D Instance Segmentation for Indoor Scenes", "abstract": "Advancements in 3D instance segmentation have traditionally been tethered to\nthe availability of annotated datasets, limiting their application to a narrow\nspectrum of object categories. Recent efforts have sought to harness\nvision-language models like CLIP for open-set semantic reasoning, yet these\nmethods struggle to distinguish between objects of the same categories and rely\non specific prompts that are not universally applicable. In this paper, we\nintroduce SAI3D, a novel zero-shot 3D instance segmentation approach that\nsynergistically leverages geometric priors and semantic cues derived from\nSegment Anything Model (SAM). Our method partitions a 3D scene into geometric\nprimitives, which are then progressively merged into 3D instance segmentations\nthat are consistent with the multi-view SAM masks. 
Moreover, we design a\nhierarchical region-growing algorithm with a dynamic thresholding mechanism,\nwhich largely improves the robustness of finegrained 3D scene parsing.Empirical\nevaluations on ScanNet, Matterport3D and the more challenging ScanNet++\ndatasets demonstrate the superiority of our approach. Notably, SAI3D\noutperforms existing open-vocabulary baselines and even surpasses\nfully-supervised methods in class-agnostic segmentation on ScanNet++. Our\nproject page is at https://yd-yin.github.io/SAI3D.", "keywords": ["Scene analysis and understanding"], "authors_list": ["David Rozenberszki", "Or Litany", "Angela Dai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0f7"}, "filepath": "data/2403.08426.png", "tags": [], "_media_type": "image", "_rand": 0.9996811796704593, "arXiv_link": "https://arxiv.org/abs/2403.08426", "other_link": "", "title": "Exploring Regional Clues in CLIP for Zero-Shot Semantic Segmentation", "abstract": "The pre-trained vision-language model, exemplified by CLIP, advances\nzero-shot semantic segmentation by aligning visual features with class\nembeddings through a transformer decoder to generate semantic masks. Despite\nits effectiveness, prevailing methods within this paradigm encounter\nchallenges, including overfitting on seen classes and small fragmentation in\nmasks. To mitigate these issues, we propose a Language-Driven Visual Consensus\n(LDVC) approach, fostering improved alignment of semantic and visual\ninformation.Specifically, we leverage class embeddings as anchors due to their\ndiscrete and abstract nature, steering vision features toward class embeddings.\nMoreover, to circumvent noisy alignments from the vision part due to its\nredundant nature, we introduce route attention into self-attention for finding\nvisual consensus, thereby enhancing semantic consistency within the same\nobject. Equipped with a vision-language prompting strategy, our approach\nsignificantly boosts the generalization capacity of segmentation models for\nunseen classes. Experimental results underscore the effectiveness of our\napproach, showcasing mIoU gains of 4.5 on the PASCAL VOC 2012 and 3.6 on the\nCOCO-Stuff 164k for unseen classes compared with the state-of-the-art methods.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Yi Zhang", "Meng-Hao Guo", "Miao Wang", "Shi-Min Hu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0f8"}, "filepath": "data/2403.09731.png", "tags": [], "_media_type": "image", "_rand": 0.9992328897946742, "arXiv_link": "https://arxiv.org/abs/2403.09731", "other_link": "", "title": "Selective nonlinearities removal from digital signals", "abstract": "Many instruments performing optical and non-optical imaging and sensing, such\nas Optical Coherence Tomography (OCT), Magnetic Resonance Imaging or\nFourier-transform spectrometry, produce digital signals containing modulations,\nsine-like components, which only after Fourier transformation give information\nabout the structure or characteristics of the investigated object. 
Due to the\nfundamental physics-related limitations of such methods, the distribution of\nthese signal components is often nonlinear and, when not properly compensated,\nleads to the resolution, precision or quality drop in the final image. Here, we\npropose an innovative approach that has the potential to allow cleaning of the\nsignal from the nonlinearities but most of all, it now allows to switch the\ngiven order off, leaving all others intact. The latter provides a tool for more\nin-depth analysis of the nonlinearity-inducing properties of the investigated\nobject, which can lead to applications in early disease detection or more\nsensitive sensing of chemical compounds. We consider OCT signals and\nnonlinearities up to the third order. In our approach, we propose two neural\nnetworks: one to remove solely the second-order nonlinearity and the other for\nremoving solely the third-order nonlinearity. The input of the networks is a\nnovel two-dimensional data structure with all the information needed for the\nnetwork to infer a nonlinearity-free signal. We describe the developed networks\nand present the results for second-order and third-order nonlinearity removal\nin OCT data representing the images of various objects: a mirror, glass, and\nfruits.", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Krzysztof Maliszewski", "Magdalena Urbanska", "Varvara Vetrova", "Sylwia Kolenderska"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Unknown", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0f9"}, "filepath": "data/2309.01838.png", "tags": [], "_media_type": "image", "_rand": 0.9996332222864523, "arXiv_link": "https://arxiv.org/abs/2309.01838", "other_link": "", "title": "Efficient Model Stealing Defense with Noise Transition Matrix", "abstract": "Model stealing attacks have become a serious concern for deep learning\nmodels, where an attacker can steal a trained model by querying its black-box\nAPI. This can lead to intellectual property theft and other security and\nprivacy risks. The current state-of-the-art defenses against model stealing\nattacks suggest adding perturbations to the prediction probabilities. However,\nthey suffer from heavy computations and make impracticable assumptions about\nthe adversary. They often require the training of auxiliary models. This can be\ntime-consuming and resource-intensive which hinders the deployment of these\ndefenses in real-world applications. In this paper, we propose a simple yet\neffective and efficient defense alternative. We introduce a heuristic approach\nto perturb the output probabilities. The proposed defense can be easily\nintegrated into models without additional training. We show that our defense is\neffective in defending against three state-of-the-art stealing attacks. We\nevaluate our approach on large and quantized (i.e., compressed) Convolutional\nNeural Networks (CNNs) trained on several vision datasets. Our technique\noutperforms the state-of-the-art defenses with a $\\times37$ faster inference\nlatency without requiring any additional model and with a low impact on the\nmodel's performance. 
We validate that our defense is also effective for\nquantized CNNs targeting edge devices.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Dong-Dong Wu", "Chilin Fu", "Weichang Wu", "Wenwen Xia", "Xiaolu Zhang", "JUN ZHOU", "Min-Ling Zhang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Cryptography and Security"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0fa"}, "filepath": "data/2312.17243.png", "tags": [], "_media_type": "image", "_rand": 0.999004257444581, "arXiv_link": "https://arxiv.org/abs/2312.17243", "other_link": "", "title": "Unsupervised Universal Image Segmentation", "abstract": "Several unsupervised image segmentation approaches have been proposed which\neliminate the need for dense manually-annotated segmentation masks; current\nmodels separately handle either semantic segmentation (e.g., STEGO) or\nclass-agnostic instance segmentation (e.g., CutLER), but not both (i.e.,\npanoptic segmentation). We propose an Unsupervised Universal Segmentation model\n(U2Seg) adept at performing various image segmentation tasks -- instance,\nsemantic and panoptic -- using a novel unified framework. U2Seg generates\npseudo semantic labels for these segmentation tasks via leveraging\nself-supervised models followed by clustering; each cluster represents\ndifferent semantic and/or instance membership of pixels. We then self-train the\nmodel on these pseudo semantic labels, yielding substantial performance gains\nover specialized methods tailored to each task: a +2.6 AP$^{\\text{box}}$ boost\nvs. CutLER in unsupervised instance segmentation on COCO and a +7.0 PixelAcc\nincrease (vs. STEGO) in unsupervised semantic segmentation on COCOStuff.\nMoreover, our method sets up a new baseline for unsupervised panoptic\nsegmentation, which has not been previously explored. U2Seg is also a strong\npretrained model for few-shot segmentation, surpassing CutLER by +5.0\nAP$^{\\text{mask}}$ when trained on a low-data regime, e.g., only 1% COCO\nlabels. We hope our simple yet effective method can inspire more research on\nunsupervised universal image segmentation.", "keywords": ["Scene analysis and understanding", "Deep learning architectures and techniques"], "authors_list": ["XuDong Wang", "Dantong Niu", "Xinyang Han", "Long Lian", "Roei Herzig", "Trevor Darrell"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0fb"}, "filepath": "data/2310.01406.png", "tags": [], "_media_type": "image", "_rand": 0.9996321302491972, "arXiv_link": "https://arxiv.org/abs/2310.01406", "other_link": "https://humannorm.github.io/.", "title": "HumanNorm: Learning Normal Diffusion Model for High-quality and Realistic 3D Human Generation", "abstract": "Recent text-to-3D methods employing diffusion models have made significant\nadvancements in 3D human generation. However, these approaches face challenges\ndue to the limitations of text-to-image diffusion models, which lack an\nunderstanding of 3D structures. Consequently, these methods struggle to achieve\nhigh-quality human generation, resulting in smooth geometry and cartoon-like\nappearances. In this paper, we propose HumanNorm, a novel approach for\nhigh-quality and realistic 3D human generation. 
The main idea is to enhance the\nmodel's 2D perception of 3D geometry by learning a normal-adapted diffusion\nmodel and a normal-aligned diffusion model. The normal-adapted diffusion model\ncan generate high-fidelity normal maps corresponding to user prompts with\nview-dependent and body-aware text. The normal-aligned diffusion model learns\nto generate color images aligned with the normal maps, thereby transforming\nphysical geometry details into realistic appearance. Leveraging the proposed\nnormal diffusion model, we devise a progressive geometry generation strategy\nand a multi-step Score Distillation Sampling (SDS) loss to enhance the\nperformance of 3D human generation. Comprehensive experiments substantiate\nHumanNorm's ability to generate 3D humans with intricate geometry and realistic\nappearances. HumanNorm outperforms existing text-to-3D methods in both geometry\nand texture quality. The project page of HumanNorm is\nhttps://humannorm.github.io/.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Xin Huang", "Ruizhi Shao", "Qi Zhang", "Hongwen Zhang", "Ying Feng", "Yebin Liu", "Qing Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0fc"}, "filepath": "data/2405.18322.png", "tags": [], "_media_type": "image", "_rand": 0.9995095664049671, "arXiv_link": "https://arxiv.org/abs/2405.18322", "other_link": "", "title": "SCE-MAE: Selective Correspondence Enhancement with Masked Autoencoder for Self-Supervised Landmark Estimation", "abstract": "Self-supervised landmark estimation is a challenging task that demands the\nformation of locally distinct feature representations to identify sparse facial\nlandmarks in the absence of annotated data. To tackle this task, existing\nstate-of-the-art (SOTA) methods (1) extract coarse features from backbones that\nare trained with instance-level self-supervised learning (SSL) paradigms, which\nneglect the dense prediction nature of the task, (2) aggregate them into\nmemory-intensive hypercolumn formations, and (3) supervise lightweight\nprojector networks to naively establish full local correspondences among all\npairs of spatial features. In this paper, we introduce SCE-MAE, a framework\nthat (1) leverages the MAE, a region-level SSL method that naturally better\nsuits the landmark prediction task, (2) operates on the vanilla feature map\ninstead of on expensive hypercolumns, and (3) employs a Correspondence\nApproximation and Refinement Block (CARB) that utilizes a simple density peak\nclustering algorithm and our proposed Locality-Constrained Repellence Loss to\ndirectly hone only select local correspondences. We demonstrate through\nextensive experiments that SCE-MAE is highly effective and robust,\noutperforming existing SOTA methods by large margins of approximately 20%-44%\non the landmark matching and approximately 9%-15% on the landmark detection\ntasks.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Kejia Yin", "Varshanth Rao", "Ruowei Jiang", "Xudong Liu", "Parham Aarabi", "David B. 
Lindell"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0fd"}, "filepath": "data/2307.16424.png", "tags": [], "_media_type": "image", "_rand": 0.9994731493863124, "arXiv_link": "https://arxiv.org/abs/2307.16424", "other_link": "", "title": "Beyond Textual Constraints: Learning Novel Diffusion Conditions with Fewer Examples", "abstract": "Equipping a deep model the abaility of few-shot learning, i.e., learning\nquickly from only few examples, is a core challenge for artificial\nintelligence. Gradient-based meta-learning approaches effectively address the\nchallenge by learning how to learn novel tasks. Its key idea is learning a deep\nmodel in a bi-level optimization manner, where the outer-loop process learns a\nshared gradient descent algorithm (i.e., its hyperparameters), while the\ninner-loop process leverage it to optimize a task-specific model by using only\nfew labeled data. Although these existing methods have shown superior\nperformance, the outer-loop process requires calculating second-order\nderivatives along the inner optimization path, which imposes considerable\nmemory burdens and the risk of vanishing gradients. Drawing inspiration from\nrecent progress of diffusion models, we find that the inner-loop gradient\ndescent process can be actually viewed as a reverse process (i.e., denoising)\nof diffusion where the target of denoising is model weights but the origin\ndata. Based on this fact, in this paper, we propose to model the gradient\ndescent optimizer as a diffusion model and then present a novel\ntask-conditional diffusion-based meta-learning, called MetaDiff, that\neffectively models the optimization process of model weights from Gaussion\nnoises to target weights in a denoising manner. Thanks to the training\nefficiency of diffusion models, our MetaDiff do not need to differentiate\nthrough the inner-loop path such that the memory burdens and the risk of\nvanishing gradients can be effectvely alleviated. Experiment results show that\nour MetaDiff outperforms the state-of-the-art gradient-based meta-learning\nfamily in few-shot learning tasks.", "keywords": [], "authors_list": ["Yuyang Yu", "Bangzhen Liu", "Chenxi Zheng", "Xuemiao Xu", "Huaidong Zhang", "Shengfeng He"], "category_name": "Machine Learning", "all_categories": ["Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0fe"}, "filepath": "data/2312.08591.png", "tags": [], "_media_type": "image", "_rand": 0.9994761049358656, "arXiv_link": "https://arxiv.org/abs/2312.08591", "other_link": "http://cic.tju.edu.cn/faculty/likun/projects/Joint2Human.", "title": "Joint2Human: High-quality 3D Human Generation via Compact Spherical Embedding of 3D Joints", "abstract": "3D human generation is increasingly significant in various applications.\nHowever, the direct use of 2D generative methods in 3D generation often results\nin losing local details, while methods that reconstruct geometry from generated\nimages struggle with global view consistency. In this work, we introduce\nJoint2Human, a novel method that leverages 2D diffusion models to generate\ndetailed 3D human geometry directly, ensuring both global structure and local\ndetails. 
To achieve this, we employ the Fourier occupancy field (FOF)\nrepresentation, enabling the direct generation of 3D shapes as preliminary\nresults with 2D generative models. With the proposed high-frequency enhancer\nand the multi-view recarving strategy, our method can seamlessly integrate the\ndetails from different views into a uniform global shape. To better utilize the\n3D human prior and enhance control over the generated geometry, we introduce a\ncompact spherical embedding of 3D joints. This allows for an effective guidance\nof pose during the generation process. Additionally, our method can generate 3D\nhumans guided by textual inputs. Our experimental results demonstrate the\ncapability of our method to ensure global structure, local details, high\nresolution, and low computational cost simultaneously. More results and the\ncode can be found on our project page at\nhttp://cic.tju.edu.cn/faculty/likun/projects/Joint2Human.", "keywords": ["Biometrics and human analysis", "Image and video generation and manipulation", "Efficient and scalable vision"], "authors_list": ["Muxin Zhang", "Qiao Feng", "Zhuo Su", "Chao Wen", "Zhou Xue", "Kun Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f0ff"}, "filepath": "data/2308.09107.png", "tags": [], "_media_type": "image", "_rand": 0.9992595925266036, "arXiv_link": "https://arxiv.org/abs/2308.09107", "other_link": "", "title": "Rethinking Generalizable Face Anti-spoofing via Hierarchical Prototype-guided Distribution Refinement in Hyperbolic Space", "abstract": "Learning generalized face anti-spoofing (FAS) models against presentation\nattacks is essential for the security of face recognition systems. Previous FAS\nmethods usually encourage models to extract discriminative features, of which\nthe distances within the same class (bonafide or attack) are pushed close while\nthose between bonafide and attack are pulled away. However, these methods are\ndesigned based on Euclidean distance, which lacks generalization ability for\nunseen attack detection due to poor hierarchy embedding ability. According to\nthe evidence that different spoofing attacks are intrinsically hierarchical, we\npropose to learn richer hierarchical and discriminative spoofing cues in\nhyperbolic space. Specifically, for unimodal FAS learning, the feature\nembeddings are projected into the Poincar\\'e ball, and then the hyperbolic\nbinary logistic regression layer is cascaded for classification. To further\nimprove generalization, we conduct hyperbolic contrastive learning for the\nbonafide only while relaxing the constraints on diverse spoofing attacks. To\nalleviate the vanishing gradient problem in hyperbolic space, a new feature\nclipping method is proposed to enhance the training stability of hyperbolic\nmodels. Besides, we further design a multimodal FAS framework with Euclidean\nmultimodal feature decomposition and hyperbolic multimodal feature fusion &\nclassification. Extensive experiments on three benchmark datasets (i.e., WMCA,\nPADISI-Face, and SiW-M) with diverse attack types demonstrate that the proposed\nmethod can bring significant improvement compared to the Euclidean baselines on\nunseen attack detection. 
In addition, the proposed framework is also\ngeneralized well on four benchmark datasets (i.e., MSU-MFSD, IDIAP\nREPLAY-ATTACK, CASIA-FASD, and OULU-NPU) with a limited number of attack types.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Chengyang Hu", "Ke-Yue Zhang", "Taiping Yao", "Shouhong Ding", "Lizhuang Ma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f100"}, "filepath": "data/2402.18771v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994962424105397, "arXiv_link": "https://arxiv.org/abs/2402.18771v2", "other_link": "", "title": "NARUTO: Neural Active Reconstruction from Uncertain Target Observations", "abstract": "We present NARUTO, a neural active reconstruction system that combines a\nhybrid neural representation with uncertainty learning, enabling high-fidelity\nsurface reconstruction. Our approach leverages a multi-resolution hash-grid as\nthe mapping backbone, chosen for its exceptional convergence speed and capacity\nto capture high-frequency local features.The centerpiece of our work is the\nincorporation of an uncertainty learning module that dynamically quantifies\nreconstruction uncertainty while actively reconstructing the environment. By\nharnessing learned uncertainty, we propose a novel uncertainty aggregation\nstrategy for goal searching and efficient path planning. Our system\nautonomously explores by targeting uncertain observations and reconstructs\nenvironments with remarkable completeness and fidelity. We also demonstrate the\nutility of this uncertainty-aware approach by enhancing SOTA neural SLAM\nsystems through an active ray sampling strategy. Extensive evaluations of\nNARUTO in various environments, using an indoor scene simulator, confirm its\nsuperior performance and state-of-the-art status in active reconstruction, as\nevidenced by its impressive results on benchmark datasets like Replica and\nMP3D.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Ziyue Feng", "Huangying Zhan", "Zheng Chen", "Qingan Yan", "Xiangyu Xu", "Changjiang Cai", "Bing Li", "Qilun Zhu", "Yi Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f101"}, "filepath": "data/2306.09337.png", "tags": [], "_media_type": "image", "_rand": 0.9991760435499701, "arXiv_link": "https://arxiv.org/abs/2306.09337", "other_link": "", "title": "Generative Proxemics: A Prior for 3D Social Interaction from Images", "abstract": "Social interaction is a fundamental aspect of human behavior and\ncommunication. The way individuals position themselves in relation to others,\nalso known as proxemics, conveys social cues and affects the dynamics of social\ninteraction. Reconstructing such interaction from images presents challenges\nbecause of mutual occlusion and the limited availability of large training\ndatasets. To address this, we present a novel approach that learns a prior over\nthe 3D proxemics two people in close social interaction and demonstrate its use\nfor single-view 3D reconstruction. We start by creating 3D training data of\ninteracting people using image datasets with contact annotations. 
We then model\nthe proxemics using a novel denoising diffusion model called BUDDI that learns\nthe joint distribution over the poses of two people in close social\ninteraction. Sampling from our generative proxemics model produces realistic 3D\nhuman interactions, which we validate through a perceptual study. We use BUDDI\nin reconstructing two people in close proximity from a single image without any\ncontact annotation via an optimization approach that uses the diffusion model\nas a prior. Our approach recovers accurate and plausible 3D social interactions\nfrom noisy initial estimates, outperforming state-of-the-art methods. Our code,\ndata, and model are availableat our project website at: muelea.github.io/buddi.", "keywords": ["Biometrics and human analysis", "Scene analysis and understanding"], "authors_list": ["Vickie Ye", "Vickie Ye", "Georgios Pavlakos", "Michael J. Black", "Angjoo Kanazawa"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f102"}, "filepath": "data/2307.00761v3.png", "tags": [], "_media_type": "image", "_rand": 0.9996134996000203, "arXiv_link": "https://arxiv.org/abs/2307.00761v3", "other_link": "", "title": "Learning Degradation Independent Representations for Camera ISP Pipelines", "abstract": "Image signal processing (ISP) pipeline plays a fundamental role in digital\ncameras, which converts raw Bayer sensor data to RGB images. However,\nISP-generated images usually suffer from imperfections due to the compounded\ndegradations that stem from sensor noises, demosaicing noises, compression\nartifacts, and possibly adverse effects of erroneous ISP hyperparameter\nsettings such as ISO and gamma values. In a general sense, these ISP\nimperfections can be considered as degradations. The highly complex mechanisms\nof ISP degradations, some of which are even unknown, pose great challenges to\nthe generalization capability of deep neural networks (DNN) for image\nrestoration and to their adaptability to downstream tasks. To tackle the\nissues, we propose a novel DNN approach to learn degradation-independent\nrepresentations (DiR) through the refinement of a self-supervised learned\nbaseline representation. The proposed DiR learning technique has remarkable\ndomain generalization capability and consequently, it outperforms\nstate-of-the-art methods across various downstream tasks, including blind image\nrestoration, object detection, and instance segmentation, as verified in our\nexperiments.", "keywords": ["Low-level vision"], "authors_list": ["Yanhui Guo", "Fangzhou Luo", "Xiaolin Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f103"}, "filepath": "data/2308.14710.png", "tags": [], "_media_type": "image", "_rand": 0.9998696820678074, "arXiv_link": "https://arxiv.org/abs/2308.14710", "other_link": "", "title": "VideoCutLER: Surprisingly Simple Unsupervised Video Instance Segmentation", "abstract": "Existing approaches to unsupervised video instance segmentation typically\nrely on motion estimates and experience difficulties tracking small or\ndivergent motions. 
We present VideoCutLER, a simple method for unsupervised\nmulti-instance video segmentation without using motion-based learning signals\nlike optical flow or training on natural videos. Our key insight is that using\nhigh-quality pseudo masks and a simple video synthesis method for model\ntraining is surprisingly sufficient to enable the resulting video model to\neffectively segment and track multiple instances across video frames. We show\nthe first competitive unsupervised learning results on the challenging\nYouTubeVIS-2019 benchmark, achieving 50.7% APvideo^50 , surpassing the previous\nstate-of-the-art by a large margin. VideoCutLER can also serve as a strong\npretrained model for supervised video instance segmentation tasks, exceeding\nDINO by 15.9% on YouTubeVIS-2019 in terms of APvideo.", "keywords": [], "authors_list": ["XuDong Wang", "Ishan Misra", "Ziyun Zeng", "Rohit Girdhar", "Trevor Darrell"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f104"}, "filepath": "data/2310.17569.png", "tags": [], "_media_type": "image", "_rand": 0.9991633041827634, "arXiv_link": "https://arxiv.org/abs/2310.17569", "other_link": "", "title": "SD4Match: Learning to Prompt Stable Diffusion Model for Semantic Matching", "abstract": "In this paper, we address the challenge of matching semantically similar\nkeypoints across image pairs. Existing research indicates that the intermediate\noutput of the UNet within the Stable Diffusion (SD) can serve as robust image\nfeature maps for such a matching task. We demonstrate that by employing a basic\nprompt tuning technique, the inherent potential of Stable Diffusion can be\nharnessed, resulting in a significant enhancement in accuracy over previous\napproaches. We further introduce a novel conditional prompting module that\nconditions the prompt on the local details of the input image pairs, leading to\na further improvement in performance. We designate our approach as SD4Match,\nshort for Stable Diffusion for Semantic Matching. Comprehensive evaluations of\nSD4Match on the PF-Pascal, PF-Willow, and SPair-71k datasets show that it sets\nnew benchmarks in accuracy across all these datasets. Particularly, SD4Match\noutperforms the previous state-of-the-art by a margin of 12 percentage points\non the challenging SPair-71k dataset.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Xinghui Li", "Jingyi Lu", "Kai Han", "Victor Adrian Prisacariu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f105"}, "filepath": "data/2403.12870.png", "tags": [], "_media_type": "image", "_rand": 0.9997556810282066, "arXiv_link": "https://arxiv.org/abs/2403.12870", "other_link": "", "title": "PoNQ: a Neural QEM-based Mesh Representation", "abstract": "Although polygon meshes have been a standard representation in geometry\nprocessing, their irregular and combinatorial nature hinders their suitability\nfor learning-based applications. In this work, we introduce a novel learnable\nmesh representation through a set of local 3D sample Points and their\nassociated Normals and Quadric error metrics (QEM) w.r.t. the underlying shape,\nwhich we denote PoNQ. 
A global mesh is directly derived from PoNQ by\nefficiently leveraging the knowledge of the local quadric errors. Besides\nmarking the first use of QEM within a neural shape representation, our\ncontribution guarantees both topological and geometrical properties by ensuring\nthat a PoNQ mesh does not self-intersect and is always the boundary of a\nvolume. Notably, our representation does not rely on a regular grid, is\nsupervised directly by the target surface alone, and also handles open surfaces\nwith boundaries and/or sharp features. We demonstrate the efficacy of PoNQ\nthrough a learning-based mesh prediction from SDF grids and show that our\nmethod surpasses recent state-of-the-art techniques in terms of both surface\nand edge-based metrics.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Nissim Maruani", "Maks Ovsjanikov", "Pierre Alliez", "Mathieu Desbrun"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f106"}, "filepath": "data/2405.07472.png", "tags": [], "_media_type": "image", "_rand": 0.9990868786603861, "arXiv_link": "https://arxiv.org/abs/2405.07472", "other_link": "", "title": "M&M VTO: Multi-Garment Virtual Try-On and Editing", "abstract": "The increasing prominence of e-commerce has underscored the importance of\nVirtual Try-On (VTON). However, previous studies predominantly focus on the 2D\nrealm and rely heavily on extensive data for training. Research on 3D VTON\nprimarily centers on garment-body shape compatibility, a topic extensively\ncovered in 2D VTON. Thanks to advances in 3D scene editing, a 2D diffusion\nmodel has now been adapted for 3D editing via multi-viewpoint editing. In this\nwork, we propose GaussianVTON, an innovative 3D VTON pipeline integrating\nGaussian Splatting (GS) editing with 2D VTON. To facilitate a seamless\ntransition from 2D to 3D VTON, we propose, for the first time, the use of only\nimages as editing prompts for 3D editing. To further address issues, e.g., face\nblurring, garment inaccuracy, and degraded viewpoint quality during editing, we\ndevise a three-stage refinement strategy to gradually mitigate potential\nissues. Furthermore, we introduce a new editing strategy termed Edit Recall\nReconstruction (ERR) to tackle the limitations of previous editing strategies\nin leading to complex geometric changes. 
Our comprehensive experiments\ndemonstrate the superiority of GaussianVTON, offering a novel perspective on 3D\nVTON while also establishing a novel starting point for image-prompting 3D\nscene editing.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation", "Scene analysis and understanding"], "authors_list": ["Luyang Zhu", "Yingwei Li", "Nan Liu", "Hao Peng", "Dawei Yang", "Ira Kemelmacher-Shlizerman"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f107"}, "filepath": "data/2311.15744.png", "tags": [], "_media_type": "image", "_rand": 0.9994118347165653, "arXiv_link": "https://arxiv.org/abs/2311.15744", "other_link": "", "title": "One More Step: A Versatile Plug-and-Play Module for Rectifying Diffusion Schedule Flaws and Enhancing Low-Frequency Controls", "abstract": "It is well known that many open-released foundational diffusion models have\ndifficulty in generating images that substantially depart from average\nbrightness, despite such images being present in the training data. This is due\nto an inconsistency: while denoising starts from pure Gaussian noise during\ninference, the training noise schedule retains residual data even in the final\ntimestep distribution, due to difficulties in numerical conditioning in\nmainstream formulation, leading to unintended bias during inference. To\nmitigate this issue, certain $\\epsilon$-prediction models are combined with an\nad-hoc offset-noise methodology. In parallel, some contemporary models have\nadopted zero-terminal SNR noise schedules together with\n$\\mathbf{v}$-prediction, which necessitate major alterations to pre-trained\nmodels. However, such changes risk destabilizing a large multitude of\ncommunity-driven applications anchored on these pre-trained models. In light of\nthis, our investigation revisits the fundamental causes, leading to our\nproposal of an innovative and principled remedy, called One More Step (OMS). By\nintegrating a compact network and incorporating an additional simple yet\neffective step during inference, OMS elevates image fidelity and harmonizes the\ndichotomy between training and inference, while preserving original model\nparameters. Once trained, various pre-trained diffusion models with the same\nlatent domain can share the same OMS module.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Minghui Hu", "Jianbin Zheng", "Chuanxia Zheng", "Chaoyue Wang", "Dacheng Tao", "Tat-Jen Cham"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f108"}, "filepath": "data/2311.18113.png", "tags": [], "_media_type": "image", "_rand": 0.9999204615017399, "arXiv_link": "https://arxiv.org/abs/2311.18113", "other_link": "", "title": "Back to 3D: Few-Shot 3D Keypoint Detection with Back-Projected 2D Features", "abstract": "With the immense growth of dataset sizes and computing resources in recent\nyears, so-called foundation models have become popular in NLP and vision tasks.\nIn this work, we propose to explore foundation models for the task of keypoint\ndetection on 3D shapes. 
A unique characteristic of keypoint detection is that\nit requires semantic and geometric awareness while demanding high localization\naccuracy. To address this problem, we propose, first, to back-project features\nfrom large pre-trained 2D vision models onto 3D shapes and employ them for this\ntask. We show that we obtain robust 3D features that contain rich semantic\ninformation and analyze multiple candidate features stemming from different 2D\nfoundation models. Second, we employ a keypoint candidate optimization module\nwhich aims to match the average observed distribution of keypoints on the shape\nand is guided by the back-projected features. The resulting approach achieves a\nnew state of the art for few-shot keypoint detection on the KeyPointNet\ndataset, almost doubling the performance of the previous best methods.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Thomas Wimmer", "Peter Wonka", "Maks Ovsjanikov"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f109"}, "filepath": "data/2404.01547v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998615433163307, "arXiv_link": "https://arxiv.org/abs/2404.01547v1", "other_link": "https://github.com/cschenxiang/NeRD-Rain.", "title": "Bidirectional Multi-Scale Implicit Neural Representations for Image Deraining", "abstract": "How to effectively explore multi-scale representations of rain streaks is\nimportant for image deraining. In contrast to existing Transformer-based\nmethods that depend mostly on single-scale rain appearance, we develop an\nend-to-end multi-scale Transformer that leverages the potentially useful\nfeatures in various scales to facilitate high-quality image reconstruction. To\nbetter explore the common degradation representations from spatially-varying\nrain streaks, we incorporate intra-scale implicit neural representations based\non pixel coordinates with the degraded inputs in a closed-loop design, enabling\nthe learned features to facilitate rain removal and improve the robustness of\nthe model in complex scenarios. To ensure richer collaborative representation\nfrom different scales, we embed a simple yet effective inter-scale\nbidirectional feedback operation into our multi-scale Transformer by performing\ncoarse-to-fine and fine-to-coarse information communication. Extensive\nexperiments demonstrate that our approach, named as NeRD-Rain, performs\nfavorably against the state-of-the-art ones on both synthetic and real-world\nbenchmark datasets. The source code and trained models are available at\nhttps://github.com/cschenxiang/NeRD-Rain.", "keywords": ["Low-level vision", "Image and video generation and manipulation"], "authors_list": ["Xiang Chen", "Jinshan Pan", "Jiangxin Dong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f10a"}, "filepath": "data/2402.03290.png", "tags": [], "_media_type": "image", "_rand": 0.9992527499327721, "arXiv_link": "https://arxiv.org/abs/2402.03290", "other_link": "", "title": "InstanceDiffusion: Instance-level Control for Image Generation", "abstract": "Text-to-image diffusion models produce high quality images but do not offer\ncontrol over individual instances in the image. 
We introduce InstanceDiffusion\nthat adds precise instance-level control to text-to-image diffusion models.\nInstanceDiffusion supports free-form language conditions per instance and\nallows flexible ways to specify instance locations such as simple single\npoints, scribbles, bounding boxes or intricate instance segmentation masks, and\ncombinations thereof. We propose three major changes to text-to-image models\nthat enable precise instance-level control. Our UniFusion block enables\ninstance-level conditions for text-to-image models, the ScaleU block improves\nimage fidelity, and our Multi-instance Sampler improves generations for\nmultiple instances. InstanceDiffusion significantly surpasses specialized\nstate-of-the-art models for each location condition. Notably, on the COCO\ndataset, we outperform previous state-of-the-art by 20.4% AP$_{50}^\\text{box}$\nfor box inputs, and 25.4% IoU for mask inputs.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["XuDong Wang", "Trevor Darrell", "Sai Saketh Rambhatla", "Rohit Girdhar", "Ishan Misra"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f10b"}, "filepath": "data/2404.08951.png", "tags": [], "_media_type": "image", "_rand": 0.9996287422822334, "arXiv_link": "https://arxiv.org/abs/2404.08951", "other_link": "https://github.com/MQinghe/MiDSS", "title": "Constructing and Exploring Intermediate Domains in Mixed Domain Semi-supervised Medical Image Segmentation", "abstract": "Both limited annotation and domain shift are prevalent challenges in medical\nimage segmentation. Traditional semi-supervised segmentation and unsupervised\ndomain adaptation methods address one of these issues separately. However, the\ncoexistence of limited annotation and domain shift is quite common, which\nmotivates us to introduce a novel and challenging scenario: Mixed Domain\nSemi-supervised medical image Segmentation (MiDSS). In this scenario, we handle\ndata from multiple medical centers, with limited annotations available for a\nsingle domain and a large amount of unlabeled data from multiple domains. We\nfound that the key to solving the problem lies in how to generate reliable\npseudo labels for the unlabeled data in the presence of domain shift with\nlabeled data. To tackle this issue, we employ Unified Copy-Paste (UCP) between\nimages to construct intermediate domains, facilitating the knowledge transfer\nfrom the domain of labeled data to the domains of unlabeled data. To fully\nutilize the information within the intermediate domain, we propose a symmetric\nGuidance training strategy (SymGD), which additionally offers direct guidance\nto unlabeled data by merging pseudo labels from intermediate samples.\nSubsequently, we introduce a Training Process aware Random Amplitude MixUp\n(TP-RAM) to progressively incorporate style-transition components into\nintermediate samples. Compared with existing state-of-the-art approaches, our\nmethod achieves a notable 13.57% improvement in Dice score on Prostate dataset,\nas demonstrated on three public datasets. 
Our code is available at\nhttps://github.com/MQinghe/MiDSS .", "keywords": [], "authors_list": ["Qinghe Ma", "Jian Zhang", "Lei Qi", "Qian Yu", "Yinghuan Shi", "Yang Gao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f10c"}, "filepath": "data/2403.03122v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992835123414352, "arXiv_link": "https://arxiv.org/abs/2403.03122v1", "other_link": "", "title": "NRDF: Neural Riemannian Distance Fields for Learning Articulated Pose Priors", "abstract": "Faithfully modeling the space of articulations is a crucial task that allows\nrecovery and generation of realistic poses, and remains a notorious challenge.\nTo this end, we introduce Neural Riemannian Distance Fields (NRDFs),\ndata-driven priors modeling the space of plausible articulations, represented\nas the zero-level-set of a neural field in a high-dimensional\nproduct-quaternion space. To train NRDFs only on positive examples, we\nintroduce a new sampling algorithm, ensuring that the geodesic distances follow\na desired distribution, yielding a principled distance field learning paradigm.\nWe then devise a projection algorithm to map any random pose onto the level-set\nby an adaptive-step Riemannian optimizer, adhering to the product manifold of\njoint rotations at all times. NRDFs can compute the Riemannian gradient via\nbackpropagation and by mathematical analogy, are related to Riemannian flow\nmatching, a recent generative model. We conduct a comprehensive evaluation of\nNRDF against other pose priors in various downstream tasks, i.e., pose\ngeneration, image-based pose estimation, and solving inverse kinematics,\nhighlighting NRDF's superior performance. Besides humans, NRDF's versatility\nextends to hand and animal poses, as it can effectively represent any\narticulation.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Yannan He", "Garvita Tiwari", "Tolga Birdal", "Jan Lenssen", "Gerard Pons-Moll"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f10d"}, "filepath": "data/2405.07784v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994820480770529, "arXiv_link": "https://arxiv.org/html/2405.07784v1", "other_link": "", "title": "Generating Human Motion in 3D Scenes from Text Descriptions", "abstract": "Generating human motions from textual descriptions has gained growing\nresearch interest due to its wide range of applications. However, only a few\nworks consider human-scene interactions together with text conditions, which is\ncrucial for visual and physical realism. This paper focuses on the task of\ngenerating human motions in 3D indoor scenes given text descriptions of the\nhuman-scene interactions. This task presents challenges due to the\nmulti-modality nature of text, scene, and motion, as well as the need for\nspatial reasoning. To address these challenges, we propose a new approach that\ndecomposes the complex problem into two more manageable sub-problems: (1)\nlanguage grounding of the target object and (2) object-centric motion\ngeneration. For language grounding of the target object, we leverage the power\nof large language models. 
For motion generation, we design an object-centric\nscene representation for the generative model to focus on the target object,\nthereby reducing the scene complexity and facilitating the modeling of the\nrelationship between human motions and the object. Experiments demonstrate the\nbetter motion quality of our approach compared to baselines and validate our\ndesign choices.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models", "Biometrics and human analysis"], "authors_list": ["Zhi Cen", "Huaijin Pi", "Sida Peng", "Zehong Shen", "Minghui Yang", "Shuai Zhu", "Hujun Bao", "Xiaowei Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f10e"}, "filepath": "data/2311.17061.png", "tags": [], "_media_type": "image", "_rand": 0.9994891925094802, "arXiv_link": "https://arxiv.org/abs/2311.17061", "other_link": "https://alvinliu0.github.io/projects/HumanGaussian", "title": "HumanGaussian: Text-Driven 3D Human Generation with Gaussian Splatting", "abstract": "Realistic 3D human generation from text prompts is a desirable yet\nchallenging task. Existing methods optimize 3D representations like mesh or\nneural fields via score distillation sampling (SDS), which suffers from\ninadequate fine details or excessive training time. In this paper, we propose\nan efficient yet effective framework, HumanGaussian, that generates\nhigh-quality 3D humans with fine-grained geometry and realistic appearance. Our\nkey insight is that 3D Gaussian Splatting is an efficient renderer with\nperiodic Gaussian shrinkage or growing, where such adaptive density control can\nbe naturally guided by intrinsic human structures. Specifically, 1) we first\npropose a Structure-Aware SDS that simultaneously optimizes human appearance\nand geometry. The multi-modal score function from both RGB and depth space is\nleveraged to distill the Gaussian densification and pruning process. 2)\nMoreover, we devise an Annealed Negative Prompt Guidance by decomposing SDS\ninto a noisier generative score and a cleaner classifier score, which well\naddresses the over-saturation issue. The floating artifacts are further\neliminated based on Gaussian size in a prune-only phase to enhance generation\nsmoothness. Extensive experiments demonstrate the superior efficiency and\ncompetitive quality of our framework, rendering vivid 3D humans under diverse\nscenarios. 
Project Page: https://alvinliu0.github.io/projects/HumanGaussian", "keywords": ["Multimodal models and vision-language models", "Efficient and scalable vision"], "authors_list": ["Xian Liu", "Xiaohang Zhan", "Jiaxiang Tang", "Ying Shan", "Gang Zeng", "Dahua Lin", "Xihui Liu", "Ziwei Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f10f"}, "filepath": "data/2312.08366v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990236087821409, "arXiv_link": "https://arxiv.org/html/2312.08366v1", "other_link": "", "title": "See, Say, and Segment: Correcting False Premises with LMMs", "abstract": "Current open-source Large Multimodal Models (LMMs) excel at tasks such as\nopen-vocabulary language grounding and segmentation but can suffer under false\npremises when queries imply the existence of something that is not actually\npresent in the image. We observe that existing methods that fine-tune an LMM to\nsegment images significantly degrade their ability to reliably determine\n(\"see\") if an object is present and to interact naturally with humans (\"say\"),\na form of catastrophic forgetting. In this work, we propose a cascading and\njoint training approach for LMMs to solve this task, avoiding catastrophic\nforgetting of previous skills. Our resulting model can \"see\" by detecting\nwhether objects are present in an image, \"say\" by telling the user if they are\nnot, proposing alternative queries or correcting semantic errors in the query,\nand finally \"segment\" by outputting the mask of the desired objects if they\nexist. Additionally, we introduce a novel False Premise Correction benchmark\ndataset, an extension of existing RefCOCO(+/g) referring segmentation datasets\n(which we call FP-RefCOCO(+/g)). The results show that our method not only\ndetects false premises up to 55% better than existing approaches, but under\nfalse premise conditions produces relative cIOU improvements of more than 31%\nover baselines, and produces natural language feedback judged helpful up to 67%\nof the time.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Tsung-Han Wu", "Giscard Biamby", "David Chan", "Lisa Dunlap", "Ritwik Gupta", "XuDong Wang", "Trevor Darrell", "Joseph Gonzalez"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f110"}, "filepath": "data/2312.07330.png", "tags": [], "_media_type": "image", "_rand": 0.9999232331396312, "arXiv_link": "https://arxiv.org/abs/2312.07330", "other_link": "", "title": "Learned representation-guided diffusion models for large-image generation", "abstract": "To synthesize high-fidelity samples, diffusion models typically require\nauxiliary data to guide the generation process. However, it is impractical to\nprocure the painstaking patch-level annotation effort required in specialized\ndomains like histopathology and satellite imagery; it is often performed by\ndomain experts and involves hundreds of millions of patches. Modern-day\nself-supervised learning (SSL) representations encode rich semantic and visual\ninformation. In this paper, we posit that such representations are expressive\nenough to act as proxies to fine-grained human labels. 
We introduce a novel\napproach that trains diffusion models conditioned on embeddings from SSL. Our\ndiffusion models successfully project these features back to high-quality\nhistopathology and remote sensing images. In addition, we construct larger\nimages by assembling spatially consistent patches inferred from SSL embeddings,\npreserving long-range dependencies. Augmenting real data by generating\nvariations of real images improves downstream classifier accuracy for\npatch-level and larger, image-scale classification tasks. Our models are\neffective even on datasets not encountered during training, demonstrating their\nrobustness and generalizability. Generating images from learned embeddings is\nagnostic to the source of the embeddings. The SSL embeddings used to generate a\nlarge image can either be extracted from a reference image, or sampled from an\nauxiliary model conditioned on any related modality (e.g. class labels, text,\ngenomic data). As proof of concept, we introduce the text-to-large image\nsynthesis paradigm where we successfully synthesize large pathology and\nsatellite images out of text descriptions.", "keywords": ["Image and video generation and manipulation", "Medical imaging and biological vision", "Remote sensing and photogrammetry"], "authors_list": ["Alexandros Graikos", "Srikar Yellapragada", "Minh-Quan Le", "Saarthak Kapse", "Prateek Prasanna", "Joel Saltz", "Dimitris Samaras"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f111"}, "filepath": "data/2403.01852.png", "tags": [], "_media_type": "image", "_rand": 0.99943615096549, "arXiv_link": "https://arxiv.org/abs/2403.01852", "other_link": "https://github.com/cszy98/PLACE/tree/main.", "title": "PLACE: Adaptive Layout-Semantic Fusion for Semantic Image Synthesis", "abstract": "Recent advancements in large-scale pre-trained text-to-image models have led\nto remarkable progress in semantic image synthesis. Nevertheless, synthesizing\nhigh-quality images with consistent semantics and layout remains a challenge.\nIn this paper, we propose the adaPtive LAyout-semantiC fusion modulE (PLACE)\nthat harnesses pre-trained models to alleviate the aforementioned issues.\nSpecifically, we first employ the layout control map to faithfully represent\nlayouts in the feature space. Subsequently, we combine the layout and semantic\nfeatures in a timestep-adaptive manner to synthesize images with realistic\ndetails. During fine-tuning, we propose the Semantic Alignment (SA) loss to\nfurther enhance layout alignment. Additionally, we introduce the Layout-Free\nPrior Preservation (LFP) loss, which leverages unlabeled data to maintain the\npriors of pre-trained models, thereby improving the visual quality and semantic\nconsistency of synthesized images. Extensive experiments demonstrate that our\napproach performs favorably in terms of visual quality, semantic consistency,\nand layout alignment. The source code and model are available at\nhttps://github.com/cszy98/PLACE/tree/main.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Zhengyao Lv", "Yuxiang Wei", "Wangmeng Zuo", "Kwan-Yee K. 
Wong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f112"}, "filepath": "data/2312.01711v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999372259435173, "arXiv_link": "https://arxiv.org/abs/2312.01711v2", "other_link": "", "title": "Regressor-Segmenter Mutual Prompt Learning for Crowd Counting", "abstract": "Crowd counting has achieved significant progress by training regressors to\npredict instance positions. In heavily crowded scenarios, however, regressors\nare challenged by uncontrollable annotation variance, which causes density map\nbias and context information inaccuracy. In this study, we propose mutual\nprompt learning (mPrompt), which leverages a regressor and a segmenter as\nguidance for each other, solving bias and inaccuracy caused by annotation\nvariance while distinguishing foreground from background. In specific, mPrompt\nleverages point annotations to tune the segmenter and predict pseudo head masks\nin a way of point prompt learning. It then uses the predicted segmentation\nmasks, which serve as spatial constraint, to rectify biased point annotations\nas context prompt learning. mPrompt defines a way of mutual information\nmaximization from prompt learning, mitigating the impact of annotation variance\nwhile improving model accuracy. Experiments show that mPrompt significantly\nreduces the Mean Average Error (MAE), demonstrating the potential to be general\nframework for down-stream vision tasks.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Mingyue Guo", "Li Yuan", "Zhaoyi Yan", "Binghui Chen", "Yaowei Wang", "Qixiang Ye"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f113"}, "filepath": "data/2403.05087.png", "tags": [], "_media_type": "image", "_rand": 0.9991427980274877, "arXiv_link": "https://arxiv.org/abs/2403.05087", "other_link": "", "title": "SplattingAvatar: Realistic Real-Time Human Avatars with Mesh-Embedded Gaussian Splatting", "abstract": "We present SplattingAvatar, a hybrid 3D representation of photorealistic\nhuman avatars with Gaussian Splatting embedded on a triangle mesh, which\nrenders over 300 FPS on a modern GPU and 30 FPS on a mobile device. We\ndisentangle the motion and appearance of a virtual human with explicit mesh\ngeometry and implicit appearance modeling with Gaussian Splatting. The\nGaussians are defined by barycentric coordinates and displacement on a triangle\nmesh as Phong surfaces. We extend lifted optimization to simultaneously\noptimize the parameters of the Gaussians while walking on the triangle mesh.\nSplattingAvatar is a hybrid representation of virtual humans where the mesh\nrepresents low-frequency motion and surface deformation, while the Gaussians\ntake over the high-frequency geometry and detailed appearance. Unlike existing\ndeformation methods that rely on an MLP-based linear blend skinning (LBS) field\nfor motion, we control the rotation and translation of the Gaussians directly\nby mesh, which empowers its compatibility with various animation techniques,\ne.g., skeletal animation, blend shapes, and mesh editing. 
Trainable from\nmonocular videos for both full-body and head avatars, SplattingAvatar shows\nstate-of-the-art rendering quality across multiple datasets.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Zhijing Shao", "Wang Zhaolong", "Zhuang Li", "Duotun Wang", "Xiangru Lin", "Yu Zhang", "Mingming Fan", "Zeyu Wang"], "category_name": "Graphics", "all_categories": ["Graphics", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f114"}, "filepath": "data/2403.08770.png", "tags": [], "_media_type": "image", "_rand": 0.9994961247222254, "arXiv_link": "https://arxiv.org/abs/2403.08770", "other_link": "https://github.com/Forrest-110/FastMAC.", "title": "FastMAC: Stochastic Spectral Sampling of Correspondence Graph", "abstract": "3D correspondence, i.e., a pair of 3D points, is a fundamental concept in\ncomputer vision. A set of 3D correspondences, when equipped with compatibility\nedges, forms a correspondence graph. This graph is a critical component in\nseveral state-of-the-art 3D point cloud registration approaches, e.g., the one\nbased on maximal cliques (MAC). However, its properties have not been well\nunderstood. So we present the first study that introduces graph signal\nprocessing into the domain of correspondence graph. We exploit the generalized\ndegree signal on correspondence graph and pursue sampling strategies that\npreserve high-frequency components of this signal. To address time-consuming\nsingular value decomposition in deterministic sampling, we resort to a\nstochastic approximate sampling strategy. As such, the core of our method is\nthe stochastic spectral sampling of correspondence graph. As an application, we\nbuild a complete 3D registration algorithm termed as FastMAC, that reaches\nreal-time speed while leading to little to none performance drop. Through\nextensive experiments, we validate that FastMAC works for both indoor and\noutdoor benchmarks. For example, FastMAC can accelerate MAC by 80 times while\nmaintaining high registration success rate on KITTI. Codes are publicly\navailable at https://github.com/Forrest-110/FastMAC.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yifei Zhang", "Hao Zhao", "Hongyang Li", "Siheng Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f115"}, "filepath": "data/2312.13834.png", "tags": [], "_media_type": "image", "_rand": 0.9995417074183583, "arXiv_link": "https://arxiv.org/abs/2312.13834", "other_link": "", "title": "Fairy: Fast Parallellized Instruction-Guided Video-to-Video Synthesis", "abstract": "In this paper, we introduce Fairy, a minimalist yet robust adaptation of\nimage-editing diffusion models, enhancing them for video editing applications.\nOur approach centers on the concept of anchor-based cross-frame attention, a\nmechanism that implicitly propagates diffusion features across frames, ensuring\nsuperior temporal coherence and high-fidelity synthesis. Fairy not only\naddresses limitations of previous models, including memory and processing\nspeed. It also improves temporal consistency through a unique data augmentation\nstrategy. 
This strategy renders the model equivariant to affine transformations\nin both source and target images. Remarkably efficient, Fairy generates\n120-frame 512x384 videos (4-second duration at 30 FPS) in just 14 seconds,\noutpacing prior works by at least 44x. A comprehensive user study, involving\n1000 generated samples, confirms that our approach delivers superior quality,\ndecisively outperforming established methods.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Bichen Wu", "Ching-Yao Chuang", "Xiaoyan Wang", "Yichen Jia", "Kapil Krishnakumar", "Tong Xiao", "Feng Liang", "Licheng Yu", "Peter Vajda"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f116"}, "filepath": "data/2405.15684.png", "tags": [], "_media_type": "image", "_rand": 0.9999840117062953, "arXiv_link": "https://arxiv.org/abs/2405.15684", "other_link": "", "title": "MMA: Multi-Modal Adapter for Vision-Language Models", "abstract": "To bridge the gap between vision and language modalities, Multimodal Large\nLanguage Models (MLLMs) usually learn an adapter that converts visual inputs to\nunderstandable tokens for Large Language Models (LLMs). However, most adapters\ngenerate consistent visual tokens, regardless of the specific objects of\ninterest mentioned in the prompt. Since these adapters distribute equal\nattention to every detail in the image and focus on the entire scene, they may\nincrease the cognitive load for LLMs, particularly when processing complex\nscenes. To alleviate this problem, we propose prompt-aware adapters. These\nadapters are designed with the capability to dynamically embed visual inputs\nbased on the specific focus of the prompt. Specifically, prompt-aware adapters\nutilize both global and local textual features to capture the most relevant\nvisual clues from the prompt at both coarse and fine granularity levels. This\napproach significantly enhances the ability of LLMs to understand and interpret\nvisual content. Experiments on various visual question answering tasks, such as\ncounting and position reasoning, demonstrate the effectiveness of prompt-aware\nadapters.", "keywords": ["Large multimodal models and prompting techniques", "Deep learning architectures and techniques"], "authors_list": ["Lingxiao Yang", "Ru-Yuan Zhang", "Yanchen Wang", "Xiaohua Xie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f117"}, "filepath": "data/2403.14442.png", "tags": [], "_media_type": "image", "_rand": 0.9998325739912082, "arXiv_link": "https://arxiv.org/abs/2403.14442", "other_link": "", "title": "RoDLA: Benchmarking the Robustness of Document Layout Analysis Models", "abstract": "Before developing a Document Layout Analysis (DLA) model in real-world\napplications, conducting comprehensive robustness testing is essential.\nHowever, the robustness of DLA models remains underexplored in the literature.\nTo address this, we are the first to introduce a robustness benchmark for DLA\nmodels, which includes 450K document images of three datasets. 
To cover\nrealistic corruptions, we propose a perturbation taxonomy with 36 common\ndocument perturbations inspired by real-world document processing.\nAdditionally, to better understand document perturbation impacts, we propose\ntwo metrics, Mean Perturbation Effect (mPE) for perturbation assessment and\nMean Robustness Degradation (mRD) for robustness evaluation. Furthermore, we\nintroduce a self-titled model, i.e., Robust Document Layout Analyzer (RoDLA),\nwhich improves attention mechanisms to boost extraction of robust features.\nExperiments on the proposed benchmarks (PubLayNet-P, DocLayNet-P, and\nM$^6$Doc-P) demonstrate that RoDLA obtains state-of-the-art mRD scores of\n115.7, 135.4, and 150.4, respectively. Compared to previous methods, RoDLA\nachieves notable improvements in mAP of +3.8%, +7.1% and +12.1%, respectively.", "keywords": [], "authors_list": ["Yufan Chen", "Jiaming Zhang", "Kunyu Peng", "Junwei Zheng", "Ruiping Liu", "Philip H.S. Torr", "Rainer Stiefelhagen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f118"}, "filepath": "data/2404.00292.png", "tags": [], "_media_type": "image", "_rand": 0.9995385791680684, "arXiv_link": "https://arxiv.org/abs/2404.00292", "other_link": "", "title": "LAKE-RED: Camouflaged Images Generation by Latent Background Knowledge Retrieval-Augmented Diffusion", "abstract": "Camouflaged vision perception is an important vision task with numerous\npractical applications. Due to the expensive collection and labeling costs,\nthis community struggles with a major bottleneck that the species category of\nits datasets is limited to a small number of object species. However, the\nexisting camouflaged generation methods require specifying the background\nmanually, thus failing to extend the camouflaged sample diversity in a low-cost\nmanner. In this paper, we propose a Latent Background Knowledge\nRetrieval-Augmented Diffusion (LAKE-RED) for camouflaged image generation. To\nour knowledge, our contributions mainly include: (1) For the first time, we\npropose a camouflaged generation paradigm that does not need to receive any\nbackground inputs. (2) Our LAKE-RED is the first knowledge retrieval-augmented\nmethod with interpretability for camouflaged generation, in which we propose an\nidea that knowledge retrieval and reasoning enhancement are separated\nexplicitly, to alleviate the task-specific challenges. 
Moreover, our method is\nnot restricted to specific foreground targets or backgrounds, offering a\npotential for extending camouflaged vision perception to more diverse domains.\n(3) Experimental results demonstrate that our method outperforms the existing\napproaches, generating more realistic camouflage images.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Pancheng Zhao", "Peng Xu", "Pengda Qin", "Deng-Ping Fan", "Zhicheng Zhang", "Guoli Jia", "Bowen Zhou", "Jufeng Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f119"}, "filepath": "data/2311.08046.png", "tags": [], "_media_type": "image", "_rand": 0.9992274031815295, "arXiv_link": "https://arxiv.org/abs/2311.08046", "other_link": "https://github.com/PKU-YuanGroup/Chat-UniVi.", "title": "Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding", "abstract": "Large language models have demonstrated impressive universal capabilities\nacross a wide range of open-ended tasks and have extended their utility to\nencompass multimodal conversations. However, existing methods encounter\nchallenges in effectively handling both image and video understanding,\nparticularly with limited visual tokens. In this work, we introduce Chat-UniVi,\na Unified Vision-language model capable of comprehending and engaging in\nconversations involving images and videos through a unified visual\nrepresentation. Specifically, we employ a set of dynamic visual tokens to\nuniformly represent images and videos. This representation framework empowers\nthe model to efficiently utilize a limited number of visual tokens to\nsimultaneously capture the spatial details necessary for images and the\ncomprehensive temporal relationship required for videos. Moreover, we leverage\na multi-scale representation, enabling the model to perceive both high-level\nsemantic concepts and low-level visual details. Notably, Chat-UniVi is trained\non a mixed dataset containing both images and videos, allowing direct\napplication to tasks involving both mediums without requiring any\nmodifications. Extensive experimental results demonstrate that Chat-UniVi\nconsistently outperforms even existing methods exclusively designed for either\nimages or videos. Code is available at\nhttps://github.com/PKU-YuanGroup/Chat-UniVi.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Peng Jin", "Ryuichi Takanobu", "Cai Zhang", "Xiaochun Cao", "Li Yuan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f11a"}, "filepath": "data/2311.15672.png", "tags": [], "_media_type": "image", "_rand": 0.9995464738622704, "arXiv_link": "https://arxiv.org/abs/2311.15672", "other_link": "https://seanchenxy.github.io/HaveFunWeb/.", "title": "HAVE-FUN: Human Avatar Reconstruction from Few-Shot Unconstrained Images", "abstract": "As for human avatar reconstruction, contemporary techniques commonly\nnecessitate the acquisition of costly data and struggle to achieve satisfactory\nresults from a small number of casual images. 
In this paper, we investigate\nthis task from a few-shot unconstrained photo album. The reconstruction of\nhuman avatars from such data sources is challenging because of limited data\namount and dynamic articulated poses. For handling dynamic data, we integrate a\nskinning mechanism with deep marching tetrahedra (DMTet) to form a drivable\ntetrahedral representation, which drives arbitrary mesh topologies generated by\nthe DMTet for the adaptation of unconstrained images. To effectively mine\ninstructive information from few-shot data, we devise a two-phase optimization\nmethod with few-shot reference and few-shot guidance. The former focuses on\naligning avatar identity with reference images, while the latter aims to\ngenerate plausible appearances for unseen regions. Overall, our framework,\ncalled HaveFun, can undertake avatar reconstruction, rendering, and animation.\nExtensive experiments on our developed benchmarks demonstrate that HaveFun\nexhibits substantially superior performance in reconstructing the human body\nand hand. Project website: https://seanchenxy.github.io/HaveFunWeb/.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis", "Image and video generation and manipulation"], "authors_list": ["Xihe Yang", "Xingyu Chen", "Daiheng Gao", "Finn Wong", "Xiaoguang Han", "Baoyuan Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f11b"}, "filepath": "data/2311.16194.png", "tags": [], "_media_type": "image", "_rand": 0.9991730839875452, "arXiv_link": "https://arxiv.org/abs/2311.16194", "other_link": "", "title": "BadCLIP: Trigger-Aware Prompt Learning for Backdoor Attacks on CLIP", "abstract": "Contrastive Vision-Language Pre-training, known as CLIP, has shown promising\neffectiveness in addressing downstream image recognition tasks. However, recent\nworks revealed that the CLIP model can be implanted with a downstream-oriented\nbackdoor. On downstream tasks, one victim model performs well on clean samples\nbut predicts a specific target class whenever a specific trigger is present.\nFor injecting a backdoor, existing attacks depend on a large amount of\nadditional data to maliciously fine-tune the entire pre-trained CLIP model,\nwhich makes them inapplicable to data-limited scenarios. In this work,\nmotivated by the recent success of learnable prompts, we address this problem\nby injecting a backdoor into the CLIP model in the prompt learning stage. Our\nmethod named BadCLIP is built on a novel and effective mechanism in backdoor\nattacks on CLIP, i.e., influencing both the image and text encoders with the\ntrigger. It consists of a learnable trigger applied to images and a\ntrigger-aware context generator, such that the trigger can change text features\nvia trigger-aware prompts, resulting in a powerful and generalizable attack.\nExtensive experiments conducted on 11 datasets verify that the clean accuracy\nof BadCLIP is similar to those of advanced prompt learning methods and the\nattack success rate is higher than 99% in most cases. 
BadCLIP is also\ngeneralizable to unseen classes, and shows a strong generalization capability\nunder cross-dataset and cross-domain settings.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Jiawang Bai", "Kuofeng Gao", "Shaobo Min", "Shu-Tao Xia", "Zhifeng Li", "Wei Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f11c"}, "filepath": "data/2403.02781v3.png", "tags": [], "_media_type": "image", "_rand": 0.9996015516602004, "arXiv_link": "https://arxiv.org/abs/2403.02781v3", "other_link": "", "title": "PromptKD: Unsupervised Prompt Distillation for Vision-Language Models", "abstract": "Prompt learning has emerged as a valuable technique in enhancing\nvision-language models (VLMs) such as CLIP for downstream tasks in specific\ndomains. Existing work mainly focuses on designing various learning forms of\nprompts, neglecting the potential of prompts as effective distillers for\nlearning from larger teacher models. In this paper, we introduce an\nunsupervised domain prompt distillation framework, which aims to transfer the\nknowledge of a larger teacher model to a lightweight target model through\nprompt-driven imitation using unlabeled domain images. Specifically, our\nframework consists of two distinct stages. In the initial stage, we pre-train a\nlarge CLIP teacher model using domain (few-shot) labels. After pre-training, we\nleverage the unique decoupled-modality characteristics of CLIP by pre-computing\nand storing the text features as class vectors only once through the teacher\ntext encoder. In the subsequent stage, the stored class vectors are shared\nacross teacher and student image encoders for calculating the predicted logits.\nFurther, we align the logits of both the teacher and student models via KL\ndivergence, encouraging the student image encoder to generate similar\nprobability distributions to the teacher through the learnable prompts. The\nproposed prompt distillation process eliminates the reliance on labeled data,\nenabling the algorithm to leverage a vast amount of unlabeled images within the\ndomain. Finally, the well-trained student image encoders and pre-stored text\nfeatures (class vectors) are utilized for inference. 
To our best knowledge, we\nare the first to (1) perform unsupervised domain-specific prompt-driven\nknowledge distillation for CLIP, and (2) establish a practical pre-storing\nmechanism of text features as shared class vectors between teacher and student.\nExtensive experiments on 11 datasets demonstrate the effectiveness of our\nmethod.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Zheng Li", "Xiang Li", "xinyi fu", "Xin Zhang", "Weiqiang Wang", "Shuo Chen", "Jian Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f11d"}, "filepath": "data/2405.03413v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996722400042021, "arXiv_link": "https://arxiv.org/html/2405.03413v2", "other_link": "https://github.com/zzzzxxxx111/SLslam.", "title": "IBD-SLAM: Learning Image-Based Depth Fusion for Generalizable SLAM", "abstract": "This paper explores how deep learning techniques can improve visual-based\nSLAM performance in challenging environments. By combining deep feature\nextraction and deep matching methods, we introduce a versatile hybrid visual\nSLAM system designed to enhance adaptability in challenging scenarios, such as\nlow-light conditions, dynamic lighting, weak-texture areas, and severe jitter.\nOur system supports multiple modes, including monocular, stereo,\nmonocular-inertial, and stereo-inertial configurations. We also perform\nanalysis how to combine visual SLAM with deep learning methods to enlighten\nother researches. Through extensive experiments on both public datasets and\nself-sampled data, we demonstrate the superiority of the SL-SLAM system over\ntraditional approaches. The experimental results show that SL-SLAM outperforms\nstate-of-the-art SLAM algorithms in terms of localization accuracy and tracking\nrobustness. For the benefit of community, we make public the source code at\nhttps://github.com/zzzzxxxx111/SLslam.", "keywords": [], "authors_list": ["Minghao Yin", "Shangzhe Wu", "Kai Han"], "category_name": "Robotics", "all_categories": ["Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f11e"}, "filepath": "data/2402.10128.png", "tags": [], "_media_type": "image", "_rand": 0.9990682201514712, "arXiv_link": "https://arxiv.org/abs/2402.10128", "other_link": "https://abdullahamdi.com/ges", "title": "GES: Generalized Exponential Splatting for Efficient Radiance Field Rendering", "abstract": "Advancements in 3D Gaussian Splatting have significantly accelerated 3D\nreconstruction and generation. However, it may require a large number of\nGaussians, which creates a substantial memory footprint. This paper introduces\nGES (Generalized Exponential Splatting), a novel representation that employs\nGeneralized Exponential Function (GEF) to model 3D scenes, requiring far fewer\nparticles to represent a scene and thus significantly outperforming Gaussian\nSplatting methods in efficiency with a plug-and-play replacement ability for\nGaussian-based utilities. GES is validated theoretically and empirically in\nboth principled 1D setup and realistic 3D scenes.\n It is shown to represent signals with sharp edges more accurately, which are\ntypically challenging for Gaussians due to their inherent low-pass\ncharacteristics. Our empirical analysis demonstrates that GEF outperforms\nGaussians in fitting natural-occurring signals (e.g. 
squares, triangles, and\nparabolic signals), thereby reducing the need for extensive splitting\noperations that increase the memory footprint of Gaussian Splatting. With the\naid of a frequency-modulated loss, GES achieves competitive performance in\nnovel-view synthesis benchmarks while requiring less than half the memory\nstorage of Gaussian Splatting and increasing the rendering speed by up to 39%.\nThe code is available on the project website https://abdullahamdi.com/ges .", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Abdullah J Hamdi", "Luke Melas-Kyriazi", "Jinjie Mai", "Guocheng Qian", "Ruoshi Liu", "Carl Vondrick", "Bernard Ghanem", "Andrea Vedaldi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f11f"}, "filepath": "data/2312.03678.png", "tags": [], "_media_type": "image", "_rand": 0.9995076252918496, "arXiv_link": "https://arxiv.org/abs/2312.03678", "other_link": "", "title": "Hybrid Functional Maps for Crease-Aware Non-Isometric Shape Matching", "abstract": "Non-isometric shape correspondence remains a fundamental challenge in\ncomputer vision. Traditional methods using Laplace-Beltrami operator (LBO)\neigenmodes face limitations in characterizing high-frequency extrinsic shape\nchanges like bending and creases. We propose a novel approach of combining the\nnon-orthogonal extrinsic basis of eigenfunctions of the elastic thin-shell\nhessian with the intrinsic ones of the LBO, creating a hybrid spectral space in\nwhich we construct functional maps. To this end, we present a theoretical\nframework to effectively integrate non-orthogonal basis functions into\ndescriptor- and learning-based functional map methods. Our approach can be\nincorporated easily into existing functional map pipelines across varying\napplications and is able to handle complex deformations beyond isometries. We\nshow extensive evaluations across various supervised and unsupervised settings\nand demonstrate significant improvements. Notably, our approach achieves up to\n15% better mean geodesic error for non-isometric correspondence settings and up\nto 45% improvement in scenarios with topological noise.", "keywords": [], "authors_list": ["Lennart Bastian", "Yizheng Xie", "Nassir Navab", "Zorah L\u00e4hner"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f120"}, "filepath": "data/2403.01226.png", "tags": [], "_media_type": "image", "_rand": 0.9993960403923204, "arXiv_link": "https://arxiv.org/abs/2403.01226", "other_link": "", "title": "DiffSal: Joint Audio and Video Learning for Diffusion Saliency Prediction", "abstract": "Audio-visual saliency prediction can draw support from diverse modality\ncomplements, but further performance enhancement is still challenged by\ncustomized architectures as well as task-specific loss functions. In recent\nstudies, denoising diffusion models have shown more promising in unifying task\nframeworks owing to their inherent ability of generalization. 
Following this\nmotivation, a novel Diffusion architecture for generalized audio-visual\nSaliency prediction (DiffSal) is proposed in this work, which formulates the\nprediction problem as a conditional generative task of the saliency map by\nutilizing input audio and video as the conditions. Based on the spatio-temporal\naudio-visual features, an extra network Saliency-UNet is designed to perform\nmulti-modal attention modulation for progressive refinement of the ground-truth\nsaliency map from the noisy map. Extensive experiments demonstrate that the\nproposed DiffSal can achieve excellent performance across six challenging\naudio-visual benchmarks, with an average relative improvement of 6.3\\% over the\nprevious state-of-the-art results by six metrics.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Junwen Xiong", "Peng Zhang", "Tao You", "Chuanyue Li", "Wei Huang", "Yufei Zha"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f121"}, "filepath": "data/2404.04956.png", "tags": [], "_media_type": "image", "_rand": 0.9999388877790394, "arXiv_link": "https://arxiv.org/abs/2404.04956", "other_link": "", "title": "Gaussian Shading: Provable Performance-Lossless Image Watermarking for Diffusion Models", "abstract": "Ethical concerns surrounding copyright protection and inappropriate content\ngeneration pose challenges for the practical implementation of diffusion\nmodels. One effective solution involves watermarking the generated images.\nHowever, existing methods often compromise the model performance or require\nadditional training, which is undesirable for operators and users. To address\nthis issue, we propose Gaussian Shading, a diffusion model watermarking\ntechnique that is both performance-lossless and training-free, while serving\nthe dual purpose of copyright protection and tracing of offending content. Our\nwatermark embedding is free of model parameter modifications and thus is\nplug-and-play. We map the watermark to latent representations following a\nstandard Gaussian distribution, which is indistinguishable from latent\nrepresentations obtained from the non-watermarked diffusion model. Therefore we\ncan achieve watermark embedding with lossless performance, for which we also\nprovide theoretical proof. Furthermore, since the watermark is intricately\nlinked with image semantics, it exhibits resilience to lossy processing and\nerasure attempts. The watermark can be extracted by Denoising Diffusion\nImplicit Models (DDIM) inversion and inverse sampling. 
We evaluate Gaussian\nShading on multiple versions of Stable Diffusion, and the results demonstrate\nthat Gaussian Shading not only is performance-lossless but also outperforms\nexisting methods in terms of robustness.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Zijin Yang", "Kai Zeng", "Kejiang Chen", "Han Fang", "Weiming Zhang", "Nenghai Yu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Cryptography and Security"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f122"}, "filepath": "data/2307.16620.png", "tags": [], "_media_type": "image", "_rand": 0.9998741185290877, "arXiv_link": "https://arxiv.org/abs/2307.16620", "other_link": "", "title": "Benchmarking Audio Visual Segmentation for Long-Untrimmed Videos", "abstract": "The audio-visual segmentation (AVS) task aims to segment sounding objects\nfrom a given video. Existing works mainly focus on fusing audio and visual\nfeatures of a given video to achieve sounding object masks. However, we\nobserved that prior arts are prone to segment a certain salient object in a\nvideo regardless of the audio information. This is because sounding objects are\noften the most salient ones in the AVS dataset. Thus, current AVS methods might\nfail to localize genuine sounding objects due to the dataset bias. In this\nwork, we present an audio-visual instance-aware segmentation approach to\novercome the dataset bias. In a nutshell, our method first localizes potential\nsounding objects in a video by an object segmentation network, and then\nassociates the sounding object candidates with the given audio. We notice that\nan object could be a sounding object in one video but a silent one in another\nvideo. This would bring ambiguity in training our object segmentation network\nas only sounding objects have corresponding segmentation masks. We thus propose\na silent object-aware segmentation objective to alleviate the ambiguity.\nMoreover, since the category information of audio is unknown, especially for\nmultiple sounding sources, we propose to explore the audio-visual semantic\ncorrelation and then associate audio with potential objects. Specifically, we\nattend predicted audio category scores to potential instance masks and these\nscores will highlight corresponding sounding instances while suppressing\ninaudible ones. 
When we enforce the attended instance masks to resemble the\nground-truth mask, we are able to establish audio-visual semantics correlation.\nExperimental results on the AVS benchmarks demonstrate that our method can\neffectively segment sounding objects without being biased to salient objects.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Chen Liu", "Peike Li", "Qingtao Yu", "Hongwei Sheng", "Dadong Wang", "Lincheng Li", "Xin Yu"], "category_name": "Sound", "all_categories": ["Sound", "Computer Vision and Pattern Recognition", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f123"}, "filepath": "data/2402.19276.png", "tags": [], "_media_type": "image", "_rand": 0.9998599998034403, "arXiv_link": "https://arxiv.org/abs/2402.19276", "other_link": "", "title": "Modular Blind Video Quality Assessment", "abstract": "Blind video quality assessment (BVQA) plays a pivotal role in evaluating and\nimproving the viewing experience of end-users across a wide range of\nvideo-based platforms and services. Contemporary deep learning-based models\nprimarily analyze video content in its aggressively subsampled format, while\nbeing blind to the impact of the actual spatial resolution and frame rate on\nvideo quality. In this paper, we propose a modular BVQA model and a method of\ntraining it to improve its modularity. Our model comprises a base quality\npredictor, a spatial rectifier, and a temporal rectifier, responding to the\nvisual content and distortion, spatial resolution, and frame rate changes on\nvideo quality, respectively. During training, spatial and temporal rectifiers\nare dropped out with some probabilities to render the base quality predictor a\nstandalone BVQA model, which should work better with the rectifiers. Extensive\nexperiments on both professionally-generated content and user-generated content\nvideo databases show that our quality model achieves superior or comparable\nperformance to current methods. Additionally, the modularity of our model\noffers an opportunity to analyze existing video quality databases in terms of\ntheir spatial and temporal complexity.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Wen Wen", "Mu Li", "Yabin ZHANG", "Yiting Liao", "Junlin Li", "Li zhang", "Kede Ma"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f124"}, "filepath": "data/2404.04231.png", "tags": [], "_media_type": "image", "_rand": 0.9997632154251037, "arXiv_link": "https://arxiv.org/abs/2404.04231", "other_link": "", "title": "Image-Text Co-Decomposition for Text-Supervised Semantic Segmentation", "abstract": "This paper addresses text-supervised semantic segmentation, aiming to learn a\nmodel capable of segmenting arbitrary visual concepts within images by using\nonly image-text pairs without dense annotations. Existing methods have\ndemonstrated that contrastive learning on image-text pairs effectively aligns\nvisual segments with the meanings of texts. We notice that there is a\ndiscrepancy between text alignment and semantic segmentation: A text often\nconsists of multiple semantic concepts, whereas semantic segmentation strives\nto create semantically homogeneous segments. 
To address this issue, we propose\na novel framework, Image-Text Co-Decomposition (CoDe), where the paired image\nand text are jointly decomposed into a set of image regions and a set of word\nsegments, respectively, and contrastive learning is developed to enforce\nregion-word alignment. To work with a vision-language model, we present a\nprompt learning mechanism that derives an extra representation to highlight an\nimage segment or a word segment of interest, with which more effective features\ncan be extracted from that segment. Comprehensive experimental results\ndemonstrate that our method performs favorably against existing text-supervised\nsemantic segmentation methods on six benchmark datasets.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Ji-Jia Wu", "Andy Chia-Hao Chang", "Chieh-Yu Chuang", "Chun-Pei Chen", "Yu-Lun Liu", "Min-Hung Chen", "Hou-Ning Hu", "Yung-Yu Chuang", "Yen-Yu Lin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f125"}, "filepath": "data/2306.15669.png", "tags": [], "_media_type": "image", "_rand": 0.9997966954041222, "arXiv_link": "https://arxiv.org/abs/2306.15669", "other_link": "", "title": "Detector-Free Structure from Motion", "abstract": "We propose a new structure-from-motion framework to recover accurate camera\nposes and point clouds from unordered images. Traditional SfM systems typically\nrely on the successful detection of repeatable keypoints across multiple views\nas the first step, which is difficult for texture-poor scenes, and poor\nkeypoint detection may break down the whole SfM system. We propose a new\ndetector-free SfM framework to draw benefits from the recent success of\ndetector-free matchers to avoid the early determination of keypoints, while\nsolving the multi-view inconsistency issue of detector-free matchers.\nSpecifically, our framework first reconstructs a coarse SfM model from\nquantized detector-free matches. Then, it refines the model by a novel\niterative refinement pipeline, which iterates between an attention-based\nmulti-view matching module to refine feature tracks and a geometry refinement\nmodule to improve the reconstruction accuracy. Experiments demonstrate that the\nproposed framework outperforms existing detector-based SfM systems on common\nbenchmark datasets. We also collect a texture-poor SfM dataset to demonstrate\nthe capability of our framework to reconstruct texture-poor scenes. Based on\nthis framework, we take $\\textit{first place}$ in Image Matching Challenge\n2023.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Xingyi He", "Jiaming Sun", "Yifan Wang", "Sida Peng", "Qixing Huang", "Hujun Bao", "Xiaowei Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f126"}, "filepath": "data/2311.18649.png", "tags": [], "_media_type": "image", "_rand": 0.9996140691587966, "arXiv_link": "https://arxiv.org/abs/2311.18649", "other_link": "https://github.com/zhangdoudou123/SemFew.", "title": "Simple Semantic-Aided Few-Shot Learning", "abstract": "Learning from a limited amount of data, namely Few-Shot Learning, stands out\nas a challenging computer vision task. 
Several works exploit semantics and\ndesign complicated semantic fusion mechanisms to compensate for rare\nrepresentative features within restricted data. However, relying on naive\nsemantics such as class names introduces biases due to their brevity, while\nacquiring extensive semantics from external knowledge takes a huge time and\neffort. This limitation severely constrains the potential of semantics in\nFew-Shot Learning. In this paper, we design an automatic way called Semantic\nEvolution to generate high-quality semantics. The incorporation of high-quality\nsemantics alleviates the need for complex network structures and learning\nalgorithms used in previous works. Hence, we employ a simple two-layer network\ntermed Semantic Alignment Network to transform semantics and visual features\ninto robust class prototypes with rich discriminative features for few-shot\nclassification. The experimental results show our framework outperforms all\nprevious methods on six benchmarks, demonstrating a simple network with\nhigh-quality semantics can beat intricate multi-modal modules on few-shot\nclassification tasks. Code is available at\nhttps://github.com/zhangdoudou123/SemFew.", "keywords": [], "authors_list": ["Hai Zhang", "Junzhe Xu", "Shanlin Jiang", "Zhenan He"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f127"}, "filepath": "data/2306.17618.png", "tags": [], "_media_type": "image", "_rand": 0.9991877073238496, "arXiv_link": "https://arxiv.org/abs/2306.17618", "other_link": "", "title": "iToF-flow-based High Frame Rate Depth Imaging", "abstract": "Indirect time-of-flight (iToF) imaging allows us to capture dense depth\ninformation at a low cost. However, iToF imaging often suffers from multipath\ninterference (MPI) artifacts in the presence of scattering media, resulting in\nsevere depth-accuracy degradation. For instance, iToF cameras cannot measure\ndepth accurately through fog because ToF active illumination scatters back to\nthe sensor before reaching the farther target surface. In this work, we propose\na polarimetric iToF imaging method that can capture depth information robustly\nthrough scattering media. Our observations on the principle of indirect ToF\nimaging and polarization of light allow us to formulate a novel computational\nmodel of scattering-aware polarimetric phase measurements that enables us to\ncorrect MPI errors. We first devise a scattering-aware polarimetric iToF model\nthat can estimate the phase of unpolarized backscattered light. We then combine\nthe optical filtering of polarization and our computational modeling of\nunpolarized backscattered light via scattering analysis of phase and amplitude.\nThis allows us to tackle the MPI problem by estimating the scattering energy\nthrough the participating media. We validate our method on an experimental\nsetup using a customized off-the-shelf iToF camera. 
Our method outperforms\nbaseline methods by a significant margin by means of our scattering model and\npolarimetric phase measurements.", "keywords": ["Computational imaging and physics-based vision"], "authors_list": ["Yu Meng", "Zhou Xue", "Xu Chang", "Xuemei Hu", "Tao Yue"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f128"}, "filepath": "data/2404.06692.png", "tags": [], "_media_type": "image", "_rand": 0.9996052546319057, "arXiv_link": "https://arxiv.org/abs/2404.06692", "other_link": "https://github.com/mulns/PerVFI}", "title": "Perceptual-Oriented Video Frame Interpolation Via Asymmetric Synergistic Blending", "abstract": "Previous methods for Video Frame Interpolation (VFI) have encountered\nchallenges, notably the manifestation of blur and ghosting effects. These\nissues can be traced back to two pivotal factors: unavoidable motion errors and\nmisalignment in supervision. In practice, motion estimates often prove to be\nerror-prone, resulting in misaligned features. Furthermore, the reconstruction\nloss tends to bring blurry results, particularly in misaligned regions. To\nmitigate these challenges, we propose a new paradigm called PerVFI\n(Perception-oriented Video Frame Interpolation). Our approach incorporates an\nAsymmetric Synergistic Blending module (ASB) that utilizes features from both\nsides to synergistically blend intermediate features. One reference frame\nemphasizes primary content, while the other contributes complementary\ninformation. To impose a stringent constraint on the blending process, we\nintroduce a self-learned sparse quasi-binary mask which effectively mitigates\nghosting and blur artifacts in the output. Additionally, we employ a\nnormalizing flow-based generator and utilize the negative log-likelihood loss\nto learn the conditional distribution of the output, which further facilitates\nthe generation of clear and fine details. Experimental results validate the\nsuperiority of PerVFI, demonstrating significant improvements in perceptual\nquality compared to existing methods. Codes are available at\n\\url{https://github.com/mulns/PerVFI}", "keywords": ["Image and video generation and manipulation", "Low-level vision"], "authors_list": ["Guangyang Wu", "Xin Tao", "Changlin Li", "Wenyi Wang", "Xiaohong Liu", "Qingqing Zheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f129"}, "filepath": "data/2312.05247.png", "tags": [], "_media_type": "image", "_rand": 0.9994681139670221, "arXiv_link": "https://arxiv.org/abs/2312.05247", "other_link": "", "title": "Dynamic LiDAR Re-simulation using Compositional Neural Fields", "abstract": "We introduce DyNFL, a novel neural field-based approach for high-fidelity\nre-simulation of LiDAR scans in dynamic driving scenes. DyNFL processes LiDAR\nmeasurements from dynamic environments, accompanied by bounding boxes of moving\nobjects, to construct an editable neural field. This field, comprising\nseparately reconstructed static background and dynamic objects, allows users to\nmodify viewpoints, adjust object positions, and seamlessly add or remove\nobjects in the re-simulated scene. 
A key innovation of our method is the neural\nfield composition technique, which effectively integrates reconstructed neural\nassets from various scenes through a ray drop test, accounting for occlusions\nand transparent surfaces. Our evaluation with both synthetic and real-world\nenvironments demonstrates that DyNFL substantially improves dynamic scene LiDAR\nsimulation, offering a combination of physical fidelity and flexible editing\ncapabilities.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Hanfeng Wu", "Xingxing Zuo", "Stefan Leutenegger", "Or Litany", "Konrad Schindler", "Shengyu Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f12a"}, "filepath": "data/2403.03608.png", "tags": [], "_media_type": "image", "_rand": 0.9998888808921237, "arXiv_link": "https://arxiv.org/abs/2403.03608", "other_link": "", "title": "GSNeRF: Generalizable Semantic Neural Radiance Fields with Enhanced 3D Scene Understanding", "abstract": "Utilizing multi-view inputs to synthesize novel-view images, Neural Radiance\nFields (NeRF) have emerged as a popular research topic in 3D vision. In this\nwork, we introduce a Generalizable Semantic Neural Radiance Field (GSNeRF),\nwhich uniquely takes image semantics into the synthesis process so that both\nnovel view images and the associated semantic maps can be produced for unseen\nscenes. Our GSNeRF is composed of two stages: Semantic Geo-Reasoning and\nDepth-Guided Visual rendering. The former is able to observe multi-view image\ninputs to extract semantic and geometry features from a scene. Guided by the\nresulting image geometry information, the latter performs both image and\nsemantic rendering with improved performances. Our experiments not only confirm\nthat GSNeRF performs favorably against prior works on both novel-view image and\nsemantic segmentation synthesis but the effectiveness of our sampling strategy\nfor visual rendering is further verified.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Zi-Ting Chou", "Sheng-Yu Huang", "I-Jieh Liu", "Yu-Chiang Frank Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f12b"}, "filepath": "data/2311.17005.png", "tags": [], "_media_type": "image", "_rand": 0.9999701515147253, "arXiv_link": "https://arxiv.org/abs/2311.17005", "other_link": "https://github.com/OpenGVLab/Ask-Anything.", "title": "MVBench: A Comprehensive Multi-modal Video Understanding Benchmark", "abstract": "With the rapid development of Multi-modal Large Language Models (MLLMs), a\nnumber of diagnostic benchmarks have recently emerged to evaluate the\ncomprehension capabilities of these models. However, most benchmarks\npredominantly assess spatial understanding in the static image tasks, while\noverlooking temporal understanding in the dynamic video tasks. To alleviate\nthis issue, we introduce a comprehensive Multi-modal Video understanding\nBenchmark, namely MVBench, which covers 20 challenging video tasks that cannot\nbe effectively solved with a single frame. 
Specifically, we first introduce a\nnovel static-to-dynamic method to define these temporal-related tasks. By\ntransforming various static tasks into dynamic ones, we enable the systematic\ngeneration of video tasks that require a broad spectrum of temporal skills,\nranging from perception to cognition. Then, guided by the task definition, we\nautomatically convert public video annotations into multiple-choice QA to\nevaluate each task. On one hand, such a distinct paradigm allows us to build\nMVBench efficiently, without much manual intervention. On the other hand, it\nguarantees evaluation fairness with ground-truth video annotations, avoiding\nthe biased scoring of LLMs. Moreover, we further develop a robust video MLLM\nbaseline, i.e., VideoChat2, by progressive multi-modal training with diverse\ninstruction-tuning data. The extensive results on our MVBench reveal that, the\nexisting MLLMs are far from satisfactory in temporal understanding, while our\nVideoChat2 largely surpasses these leading models by over 15% on MVBench. All\nmodels and data are available at https://github.com/OpenGVLab/Ask-Anything.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Scene analysis and understanding"], "authors_list": ["Kunchang Li", "Yali Wang", "Yinan He", "Yizhuo Li", "Yi Wang", "Yi Liu", "Zun Wang", "Jilan Xu", "Guo Chen", "Ping Luo", "Limin Wang", "Yu Qiao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f12c"}, "filepath": "data/2309.04506.png", "tags": [], "_media_type": "image", "_rand": 0.9997630926165177, "arXiv_link": "https://arxiv.org/abs/2309.04506", "other_link": "", "title": "Unsupervised Gaze Representation Learning from Multi-view Face Images", "abstract": "Appearance-based gaze estimation has shown great promise in many applications\nby using a single general-purpose camera as the input device. However, its\nsuccess is highly depending on the availability of large-scale well-annotated\ngaze datasets, which are sparse and expensive to collect. To alleviate this\nchallenge we propose ConGaze, a contrastive learning-based framework that\nleverages unlabeled facial images to learn generic gaze-aware representations\nacross subjects in an unsupervised way. Specifically, we introduce the\ngaze-specific data augmentation to preserve the gaze-semantic features and\nmaintain the gaze consistency, which are proven to be crucial for effective\ncontrastive gaze representation learning. Moreover, we devise a novel\nsubject-conditional projection module that encourages a share feature extractor\nto learn gaze-aware and generic representations. 
Our experiments on three\npublic gaze estimation datasets show that ConGaze outperforms existing\nunsupervised learning solutions by 6.7% to 22.5%; and achieves 15.1% to 24.6%\nimprovement over its supervised learning-based counterpart in cross-dataset\nevaluations.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Yiwei Bao", "Feng Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f12d"}, "filepath": "data/2311.02633.png", "tags": [], "_media_type": "image", "_rand": 0.9997198151101017, "arXiv_link": "https://arxiv.org/abs/2311.02633", "other_link": "", "title": "DIOD: Self-Distillation Meets Object Discovery", "abstract": "Recent works have shown that objects discovery can largely benefit from the\ninherent motion information in video data. However, these methods lack a proper\nbackground processing, resulting in an over-segmentation of the non-object\nregions into random segments. This is a critical limitation given the\nunsupervised setting, where object segments and noise are not distinguishable.\nTo address this limitation we propose BMOD, a Background-aware Motion-guided\nObjects Discovery method. Concretely, we leverage masks of moving objects\nextracted from optical flow and design a learning mechanism to extend them to\nthe true foreground composed of both moving and static objects. The background,\na complementary concept of the learned foreground class, is then isolated in\nthe object discovery process. This enables a joint learning of the objects\ndiscovery task and the object/non-object separation. The conducted experiments\non synthetic and real-world datasets show that integrating our background\nhandling with various cutting-edge methods brings each time a considerable\nimprovement. Specifically, we improve the objects discovery performance with a\nlarge margin, while establishing a strong baseline for object/non-object\nseparation.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Sandra Kara", "Hejer AMMAR", "Julien Denize", "Florian Chabot", "Quoc Cuong PHAM"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f12e"}, "filepath": "data/2403.17465.png", "tags": [], "_media_type": "image", "_rand": 0.9995896385212802, "arXiv_link": "https://arxiv.org/abs/2403.17465", "other_link": "", "title": "$\\textbf{LaRE}^2$: Latent Reconstruction Error Based Method for Diffusion-Generated Image Detection", "abstract": "The evolution of Diffusion Models has dramatically improved image generation\nquality, making it increasingly difficult to differentiate between real and\ngenerated images. This development, while impressive, also raises significant\nprivacy and security concerns. In response to this, we propose a novel Latent\nREconstruction error guided feature REfinement method (LaRE^2) for detecting\nthe diffusion-generated images. We come up with the Latent Reconstruction Error\n(LaRE), the first reconstruction-error based feature in the latent space for\ngenerated image detection. LaRE surpasses existing methods in terms of feature\nextraction efficiency while preserving crucial cues required to differentiate\nbetween the real and the fake. 
To exploit LaRE, we propose an Error-Guided\nfeature REfinement module (EGRE), which can refine the image feature guided by\nLaRE to enhance the discriminativeness of the feature. Our EGRE utilizes an\nalign-then-refine mechanism, which effectively refines the image feature for\ngenerated-image detection from both spatial and channel perspectives. Extensive\nexperiments on the large-scale GenImage benchmark demonstrate the superiority\nof our LaRE^2, which surpasses the best SoTA method by up to 11.9%/12.1%\naverage ACC/AP across 8 different image generators. LaRE also surpasses\nexisting methods in terms of feature extraction cost, delivering an impressive\nspeed enhancement of 8 times.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Yunpeng Luo", "Junlong Du", "Ke Yan", "Shouhong Ding"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f12f"}, "filepath": "data/2404.07850.png", "tags": [], "_media_type": "image", "_rand": 0.9991021706674084, "arXiv_link": "https://arxiv.org/abs/2404.07850", "other_link": "https://littlepure2333.github.io/MindBridge", "title": "MindBridge: A Cross-Subject Brain Decoding Framework", "abstract": "Brain decoding, a pivotal field in neuroscience, aims to reconstruct stimuli\nfrom acquired brain signals, primarily utilizing functional magnetic resonance\nimaging (fMRI). Currently, brain decoding is confined to a\nper-subject-per-model paradigm, limiting its applicability to the same\nindividual for whom the decoding model is trained. This constraint stems from\nthree key challenges: 1) the inherent variability in input dimensions across\nsubjects due to differences in brain size; 2) the unique intrinsic neural\npatterns, influencing how different individuals perceive and process sensory\ninformation; 3) limited data availability for new subjects in real-world\nscenarios hampers the performance of decoding models. In this paper, we present\na novel approach, MindBridge, that achieves cross-subject brain decoding by\nemploying only one model. Our proposed framework establishes a generic paradigm\ncapable of addressing these challenges by introducing biological-inspired\naggregation function and novel cyclic fMRI reconstruction mechanism for\nsubject-invariant representation learning. Notably, by cycle reconstruction of\nfMRI, MindBridge can enable novel fMRI synthesis, which also can serve as\npseudo data augmentation. Within the framework, we also devise a novel\nreset-tuning method for adapting a pretrained model to a new subject.\nExperimental results demonstrate MindBridge's ability to reconstruct images for\nmultiple subjects, which is competitive with dedicated subject-specific models.\nFurthermore, with limited data for a new subject, we achieve a high level of\ndecoding accuracy, surpassing that of subject-specific models. This advancement\nin cross-subject brain decoding suggests promising directions for wider\napplications in neuroscience and indicates potential for more efficient\nutilization of limited fMRI data in real-world scenarios. 
Project page:\nhttps://littlepure2333.github.io/MindBridge", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Shizun Wang", "Songhua Liu", "Zhenxiong Tan", "Xinchao Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f130"}, "filepath": "data/2404.05490.png", "tags": [], "_media_type": "image", "_rand": 0.9998930282854339, "arXiv_link": "https://arxiv.org/abs/2404.05490", "other_link": "", "title": "Capturing Closely Interacted Two-Person Motions with Reaction Priors", "abstract": "Close and continuous interaction with rich contacts is a crucial aspect of\nhuman activities (e.g. hugging, dancing) and of interest in many domains like\nactivity recognition, motion prediction, character animation, etc. However,\nacquiring such skeletal motion is challenging. While direct motion capture is\nexpensive and slow, motion editing/generation is also non-trivial, as complex\ncontact patterns with topological and geometric constraints have to be\nretained. To this end, we propose a new deep learning method for two-body\nskeletal interaction motion augmentation, which can generate variations of\ncontact-rich interactions with varying body sizes and proportions while\nretaining the key geometric/topological relations between two bodies. Our\nsystem can learn effectively from a relatively small amount of data and\ngeneralize to drastically different skeleton sizes. Through exhaustive\nevaluation and comparison, we show it can generate high-quality motions, has\nstrong generalizability and outperforms traditional optimization-based methods\nand alternative deep learning solutions.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Qi Fang", "Yinghui Fan", "Yanjun Li", "Junting Dong", "Dingwei Wu", "Weidong Zhang", "Kang Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f131"}, "filepath": "data/2402.17210.png", "tags": [], "_media_type": "image", "_rand": 0.9999276611376898, "arXiv_link": "https://arxiv.org/abs/2402.17210", "other_link": "https://github.com/albblgb/PUSNet}", "title": "Purified and Unified Steganographic Network", "abstract": "Steganography is the art of hiding secret data into the cover media for\ncovert communication. In recent years, more and more deep neural network\n(DNN)-based steganographic schemes are proposed to train steganographic\nnetworks for secret embedding and recovery, which are shown to be promising.\nCompared with the handcrafted steganographic tools, steganographic networks\ntend to be large in size. It raises concerns on how to imperceptibly and\neffectively transmit these networks to the sender and receiver to facilitate\nthe covert communication. To address this issue, we propose in this paper a\nPurified and Unified Steganographic Network (PUSNet). It performs an ordinary\nmachine learning task in a purified network, which could be triggered into\nsteganographic networks for secret embedding or recovery using different keys.\nWe formulate the construction of the PUSNet into a sparse weight filling\nproblem to flexibly switch between the purified and steganographic networks. 
We\nfurther instantiate our PUSNet as an image denoising network with two\nsteganographic networks concealed for secret image embedding and recovery.\nComprehensive experiments demonstrate that our PUSNet achieves good performance\non secret image embedding, secret image recovery, and image denoising in a\nsingle architecture. It is also shown to be capable of imperceptibly carrying\nthe steganographic networks in a purified network. Code is available at\n\\url{https://github.com/albblgb/PUSNet}", "keywords": ["Low-level vision", "Image and video generation and manipulation"], "authors_list": ["GuoBiao Li", "Sheng Li", "Zicong Luo", "Zhenxing Qian", "Xinpeng Zhang"], "category_name": "Cryptography and Security", "all_categories": ["Cryptography and Security", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f132"}, "filepath": "data/2405.14855.png", "tags": [], "_media_type": "image", "_rand": 0.9998272543927157, "arXiv_link": "https://arxiv.org/abs/2405.14855", "other_link": "https://paulchhuang.github.io/synchmr", "title": "Synergistic Global-space Camera and Human Reconstruction from Videos", "abstract": "Remarkable strides have been made in reconstructing static scenes or human\nbodies from monocular videos. Yet, the two problems have largely been\napproached independently, without much synergy. Most visual SLAM methods can\nonly reconstruct camera trajectories and scene structures up to scale, while\nmost HMR methods reconstruct human meshes in metric scale but fall short in\nreasoning with cameras and scenes. This work introduces Synergistic Camera and\nHuman Reconstruction (SynCHMR) to marry the best of both worlds. Specifically,\nwe design Human-aware Metric SLAM to reconstruct metric-scale camera poses and\nscene point clouds using camera-frame HMR as a strong prior, addressing depth,\nscale, and dynamic ambiguities. Conditioning on the dense scene recovered, we\nfurther learn a Scene-aware SMPL Denoiser to enhance world-frame HMR by\nincorporating spatio-temporal coherency and dynamic scene constraints.\nTogether, they lead to consistent reconstructions of camera trajectories, human\nmeshes, and dense scene point clouds in a common world frame. Project page:\nhttps://paulchhuang.github.io/synchmr", "keywords": ["Scene analysis and understanding"], "authors_list": ["Yizhou Zhao", "Tuanfeng Y. Wang", "Bhiksha Raj", "Min Xu", "Jimei Yang", "Chun-Hao P. Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f133"}, "filepath": "data/2403.08639.png", "tags": [], "_media_type": "image", "_rand": 0.9998601834029042, "arXiv_link": "https://arxiv.org/abs/2403.08639", "other_link": "", "title": "HIMap: HybrId Representation Learning for End-to-end Vectorized HD Map Construction", "abstract": "Vectorized High-Definition (HD) map construction requires predictions of the\ncategory and point coordinates of map elements (e.g. road boundary, lane\ndivider, pedestrian crossing, etc.). State-of-the-art methods are mainly based\non point-level representation learning for regressing accurate point\ncoordinates. However, this pipeline has limitations in obtaining element-level\ninformation and handling element-level failures, e.g. erroneous element shape\nor entanglement between elements. 
To tackle the above issues, we propose a\nsimple yet effective HybrId framework named HIMap to sufficiently learn and\ninteract both point-level and element-level information. Concretely, we\nintroduce a hybrid representation called HIQuery to represent all map elements,\nand propose a point-element interactor to interactively extract and encode the\nhybrid information of elements, e.g. point position and element shape, into the\nHIQuery. Additionally, we present a point-element consistency constraint to\nenhance the consistency between the point-level and element-level information.\nFinally, the output point-element integrated HIQuery can be directly converted\ninto map elements' class, point coordinates, and mask. We conduct extensive\nexperiments and consistently outperform previous methods on both nuScenes and\nArgoverse2 datasets. Notably, our method achieves $77.8$ mAP on the nuScenes\ndataset, remarkably superior to previous SOTAs by $8.3$ mAP at least.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Yi ZHOU", "Hui Zhang", "Jiaqian Yu", "yifan yang", "Sangil Jung", "Seung-In Park", "ByungIn Yoo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f134"}, "filepath": "data/2404.14034.png", "tags": [], "_media_type": "image", "_rand": 0.9996230906384008, "arXiv_link": "https://arxiv.org/abs/2404.14034", "other_link": "", "title": "Dynamic Cues-Assisted Transformer for Robust Point Cloud Registration", "abstract": "Point cloud registration is a fundamental technique in 3-D computer vision\nwith applications in graphics, autonomous driving, and robotics. However,\nregistration tasks under challenging conditions, under which noise or\nperturbations are prevalent, can be difficult. We propose a robust point cloud\nregistration approach that leverages graph neural partial differential\nequations (PDEs) and heat kernel signatures. Our method first uses graph neural\nPDE modules to extract high dimensional features from point clouds by\naggregating information from the 3-D point neighborhood, thereby enhancing the\nrobustness of the feature representations. Then, we incorporate heat kernel\nsignatures into an attention mechanism to efficiently obtain corresponding\nkeypoints. 
Finally, a singular value decomposition (SVD) module with learnable\nweights is used to predict the transformation between two point clouds.\nEmpirical experiments on a 3-D point cloud dataset demonstrate that our\napproach not only achieves state-of-the-art performance for point cloud\nregistration but also exhibits better robustness to additive noise or 3-D shape\nperturbations.", "keywords": [], "authors_list": ["Hong Chen", "Pei Yan", "sihe xiang", "Yihua Tan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f135"}, "filepath": "data/2403.18383.png", "tags": [], "_media_type": "image", "_rand": 0.9998958040430966, "arXiv_link": "https://arxiv.org/abs/2403.18383", "other_link": "https://github.com/DoubleClass/GMM}.", "title": "Generative Multi-modal Models are Good Class Incremental Learners", "abstract": "In class-incremental learning (CIL) scenarios, the phenomenon of catastrophic\nforgetting caused by the classifier's bias towards the current task has long\nposed a significant challenge. It is mainly caused by the characteristic of\ndiscriminative models. With the growing popularity of the generative\nmulti-modal models, we would explore replacing discriminative models with\ngenerative ones for CIL. However, transitioning from discriminative to\ngenerative models requires addressing two key challenges. The primary challenge\nlies in transferring the generated textual information into the classification\nof distinct categories. Additionally, it requires formulating the task of CIL\nwithin a generative framework. To this end, we propose a novel generative\nmulti-modal model (GMM) framework for class-incremental learning. Our approach\ndirectly generates labels for images using an adapted generative model. After\nobtaining the detailed text, we use a text encoder to extract text features and\nemploy feature matching to determine the most similar label as the\nclassification prediction. In the conventional CIL settings, we achieve\nsignificantly better results in long-sequence task scenarios. Under the\nFew-shot CIL setting, we have improved by at least 14\\% accuracy over all the\ncurrent state-of-the-art methods with significantly less forgetting. Our code\nis available at \\url{https://github.com/DoubleClass/GMM}.", "keywords": ["Multimodal models and vision-language models", "Deep learning architectures and techniques"], "authors_list": ["Xusheng Cao", "Haori Lu", "Linlan Huang", "Xialei Liu", "Ming-Ming Cheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f136"}, "filepath": "data/2404.00906.png", "tags": [], "_media_type": "image", "_rand": 0.9991754546643454, "arXiv_link": "https://arxiv.org/abs/2404.00906", "other_link": "", "title": "From Pixels to Graphs: Open-Vocabulary Scene Graph Generation with Vision-Language Models", "abstract": "Scene graph generation (SGG) aims to parse a visual scene into an\nintermediate graph representation for downstream reasoning tasks. Despite\nrecent advancements, existing methods struggle to generate scene graphs with\nnovel visual relation concepts. To address this challenge, we introduce a new\nopen-vocabulary SGG framework based on sequence generation. 
Our framework\nleverages vision-language pre-trained models (VLM) by incorporating an\nimage-to-graph generation paradigm. Specifically, we generate scene graph\nsequences via image-to-text generation with VLM and then construct scene graphs\nfrom these sequences. By doing so, we harness the strong capabilities of VLM\nfor open-vocabulary SGG and seamlessly integrate explicit relational modeling\nfor enhancing the VL tasks. Experimental results demonstrate that our design\nnot only achieves superior performance with an open vocabulary but also\nenhances downstream vision-language task performance through explicit relation\nmodeling knowledge.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Rongjie Li", "Songyang Zhang", "Dahua Lin", "Kai Chen", "Xuming He"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f137"}, "filepath": "data/2312.02439.png", "tags": [], "_media_type": "image", "_rand": 0.9996756689348398, "arXiv_link": "https://arxiv.org/abs/2312.02439", "other_link": "https://zhongshsh.github.io/CLoT/.", "title": "Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language Models with Creative Humor Generation", "abstract": "Chain-of-Thought (CoT) guides large language models (LLMs) to reason\nstep-by-step, and can motivate their logical reasoning ability. While effective\nfor logical tasks, CoT is not conducive to creative problem-solving which often\nrequires out-of-box thoughts and is crucial for innovation advancements. In\nthis paper, we explore the Leap-of-Thought (LoT) abilities within LLMs -- a\nnon-sequential, creative paradigm involving strong associations and knowledge\nleaps. To this end, we study LLMs on the popular Oogiri game which needs\nparticipants to have good creativity and strong associative thinking for\nresponding unexpectedly and humorously to the given image, text, or both, and\nthus is suitable for LoT study. Then to investigate LLMs' LoT ability in the\nOogiri game, we first build a multimodal and multilingual Oogiri-GO dataset\nwhich contains over 130,000 samples from the Oogiri game, and observe the\ninsufficient LoT ability or failures of most existing LLMs on the Oogiri game.\nAccordingly, we introduce a creative Leap-of-Thought (CLoT) paradigm to improve\nLLM's LoT ability. CLoT first formulates the Oogiri-GO dataset into\nLoT-oriented instruction tuning data to train pretrained LLM for achieving\ncertain LoT humor generation and discrimination abilities. Then CLoT designs an\nexplorative self-refinement that encourages the LLM to generate more creative\nLoT data via exploring parallels between seemingly unrelated concepts and\nselects high-quality data to train itself for self-refinement. CLoT not only\nexcels in humor generation in the Oogiri game but also boosts creative\nabilities in various tasks like cloud guessing game and divergent association\ntask. These findings advance our understanding and offer a pathway to improve\nLLMs' creative capacities for innovative applications across domains. 
The\ndataset, code, and models will be released online.\nhttps://zhongshsh.github.io/CLoT/.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Shanshan Zhong", "Zhongzhan Huang", "Shanghua Gao", "Wushao Wen", "Liang Lin", "Marinka Zitnik", "Pan Zhou"], "category_name": "Artificial Intelligence", "all_categories": ["Artificial Intelligence", "Computation and Language", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f138"}, "filepath": "data/2403.16124.png", "tags": [], "_media_type": "image", "_rand": 0.9994692106109044, "arXiv_link": "https://arxiv.org/abs/2403.16124", "other_link": "", "title": "Enhancing Visual Continual Learning with Language-Guided Supervision", "abstract": "Continual learning (CL) aims to empower models to learn new tasks without\nforgetting previously acquired knowledge. Most prior works concentrate on the\ntechniques of architectures, replay data, regularization, \\etc. However, the\ncategory name of each class is largely neglected. Existing methods commonly\nutilize the one-hot labels and randomly initialize the classifier head. We\nargue that the scarce semantic information conveyed by the one-hot labels\nhampers the effective knowledge transfer across tasks. In this paper, we\nrevisit the role of the classifier head within the CL paradigm and replace the\nclassifier with semantic knowledge from pretrained language models (PLMs).\nSpecifically, we use PLMs to generate semantic targets for each class, which\nare frozen and serve as supervision signals during training. Such targets fully\nconsider the semantic correlation between all classes across tasks. Empirical\nstudies show that our approach mitigates forgetting by alleviating\nrepresentation drifting and facilitating knowledge transfer across tasks. The\nproposed method is simple to implement and can seamlessly be plugged into\nexisting methods with negligible adjustments. Extensive experiments based on\neleven mainstream baselines demonstrate the effectiveness and generalizability\nof our approach to various protocols. For example, under the class-incremental\nlearning setting on ImageNet-100, our method significantly improves the Top-1\naccuracy by 3.2\\% to 6.1\\% while reducing the forgetting rate by 2.6\\% to\n13.1\\%.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Bolin Ni", "Hongbo Zhao", "Chenghao Zhang", "Ke Hu", "Gaofeng Meng", "Zhaoxiang Zhang", "Shiming Xiang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f139"}, "filepath": "data/2401.09721.png", "tags": [], "_media_type": "image", "_rand": 0.9990101494477056, "arXiv_link": "https://arxiv.org/abs/2401.09721", "other_link": "", "title": "Denoising Point Clouds in Latent Space via Graph Convolution and Invertible Neural Network", "abstract": "Point clouds are utilized in various 3D applications such as cross-reality\n(XR) and realistic 3D displays. In some applications, e.g., for live streaming\nusing a 3D point cloud, real-time point cloud denoising methods are required to\nenhance the visual quality. 
However, conventional high-precision denoising\nmethods cannot be executed in real time for large-scale point clouds owing to\nthe complexity of graph constructions with K nearest neighbors and noise level\nestimation. This paper proposes a fast graph-based denoising (FGBD) for a\nlarge-scale point cloud. First, high-speed graph construction is achieved by\nscanning a point cloud in various directions and searching adjacent\nneighborhoods on the scanning lines. Second, we propose a fast noise level\nestimation method using eigenvalues of the covariance matrix on a graph.\nFinally, we also propose a new low-cost filter selection method to enhance\ndenoising accuracy to compensate for the degradation caused by the acceleration\nalgorithms. In our experiments, we succeeded in reducing the processing time\ndramatically while maintaining accuracy relative to conventional denoising\nmethods. Denoising was performed at 30fps, with frames containing approximately\n1 million points.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Aihua Mao", "Biao Yan", "Zijing Ma", "Ying He"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing", "Signal Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f13a"}, "filepath": "data/2309.10911.png", "tags": [], "_media_type": "image", "_rand": 0.9990519767694067, "arXiv_link": "https://arxiv.org/abs/2309.10911", "other_link": "https://3DAPNet.github.io", "title": "LASO: Language-guided Affordance Segmentation on 3D Object", "abstract": "Affordance detection and pose estimation are of great importance in many\nrobotic applications. Their combination helps the robot gain an enhanced\nmanipulation capability, in which the generated pose can facilitate the\ncorresponding affordance task. Previous methods for affodance-pose joint\nlearning are limited to a predefined set of affordances, thus limiting the\nadaptability of robots in real-world environments. In this paper, we propose a\nnew method for language-conditioned affordance-pose joint learning in 3D point\nclouds. Given a 3D point cloud object, our method detects the affordance region\nand generates appropriate 6-DoF poses for any unconstrained affordance label.\nOur method consists of an open-vocabulary affordance detection branch and a\nlanguage-guided diffusion model that generates 6-DoF poses based on the\naffordance text. We also introduce a new high-quality dataset for the task of\nlanguage-driven affordance-pose joint learning. Intensive experimental results\ndemonstrate that our proposed method works effectively on a wide range of\nopen-vocabulary affordances and outperforms other baselines by a large margin.\nIn addition, we illustrate the usefulness of our method in real-world robotic\napplications. 
Our code and dataset are publicly available at\nhttps://3DAPNet.github.io", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yicong Li", "Na Zhao", "Junbin Xiao", "Chun Feng", "Xiang Wang", "Tat-seng Chua"], "category_name": "Robotics", "all_categories": ["Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f13b"}, "filepath": "data/2404.03181v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995667099500533, "arXiv_link": "https://arxiv.org/abs/2404.03181v1", "other_link": "https://github.com/elvintanhust/MonoCD.", "title": "MonoCD: Monocular 3D Object Detection with Complementary Depths", "abstract": "Monocular 3D object detection has attracted widespread attention due to its\npotential to accurately obtain object 3D localization from a single image at a\nlow cost. Depth estimation is an essential but challenging subtask of monocular\n3D object detection due to the ill-posedness of 2D to 3D mapping. Many methods\nexplore multiple local depth clues such as object heights and keypoints and\nthen formulate the object depth estimation as an ensemble of multiple depth\npredictions to mitigate the insufficiency of single-depth information. However,\nthe errors of existing multiple depths tend to have the same sign, which\nhinders them from neutralizing each other and limits the overall accuracy of\ncombined depth. To alleviate this problem, we propose to increase the\ncomplementarity of depths with two novel designs. First, we add a new depth\nprediction branch named complementary depth that utilizes global and efficient\ndepth clues from the entire image rather than the local clues to reduce the\ncorrelation of depth predictions. Second, we propose to fully exploit the\ngeometric relations between multiple depth clues to achieve complementarity in\nform. Benefiting from these designs, our method achieves higher\ncomplementarity. Experiments on the KITTI benchmark demonstrate that our method\nachieves state-of-the-art performance without introducing extra data. In\naddition, complementary depth can also be a lightweight and plug-and-play\nmodule to boost multiple existing monocular 3d object detectors. Code is\navailable at https://github.com/elvintanhust/MonoCD.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Longfei Yan", "Pei Yan", "Shengzhou Xiong", "Xuanyu Xiang", "Yihua Tan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f13c"}, "filepath": "data/2312.05264.png", "tags": [], "_media_type": "image", "_rand": 0.999026906796292, "arXiv_link": "https://arxiv.org/abs/2312.05264", "other_link": "", "title": "All Rivers Run to the Sea: Private Learning with Asymmetric Flows", "abstract": "Data privacy is of great concern in cloud machine-learning service platforms,\nwhen sensitive data are exposed to service providers. While private computing\nenvironments (e.g., secure enclaves), and cryptographic approaches (e.g.,\nhomomorphic encryption) provide strong privacy protection, their computing\nperformance still falls short compared to cloud GPUs. To achieve privacy\nprotection with high computing performance, we propose Delta, a new private\ntraining and inference framework, with comparable model performance as\nnon-private centralized training. 
Delta features two asymmetric data flows: the\nmain information-sensitive flow and the residual flow. The main part flows into\na small model while the residuals are offloaded to a large model. Specifically,\nDelta embeds the information-sensitive representations into a low-dimensional\nspace while pushing the information-insensitive part into high-dimension\nresiduals. To ensure privacy protection, the low-dimensional\ninformation-sensitive part is secured and fed to a small model in a private\nenvironment. On the other hand, the residual part is sent to fast cloud GPUs,\nand processed by a large model. To further enhance privacy and reduce the\ncommunication cost, Delta applies a random binary quantization technique along\nwith a DP-based technique to the residuals before sharing them with the public\nplatform. We theoretically show that Delta guarantees differential privacy in\nthe public environment and greatly reduces the complexity in the private\nenvironment. We conduct empirical analyses on CIFAR-10, CIFAR-100 and ImageNet\ndatasets and ResNet-18 and ResNet-34, showing that Delta achieves strong\nprivacy protection, fast training, and inference without significantly\ncompromising the model utility.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yue Niu", "Ramy E. Ali", "Saurav Prakash", "Salman Avestimehr"], "category_name": "Cryptography and Security", "all_categories": ["Cryptography and Security", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f13d"}, "filepath": "data/2404.01727v1.png", "tags": [], "_media_type": "image", "_rand": 0.999420486241057, "arXiv_link": "https://arxiv.org/abs/2404.01727v1", "other_link": "", "title": "Generalizing 6-DoF Grasp Detection via Domain Prior Knowledge", "abstract": "We focus on the generalization ability of the 6-DoF grasp detection method in\nthis paper. While learning-based grasp detection methods can predict grasp\nposes for unseen objects using the grasp distribution learned from the training\nset, they often exhibit a significant performance drop when encountering\nobjects with diverse shapes and structures. To enhance the grasp detection\nmethods' generalization ability, we incorporate domain prior knowledge of\nrobotic grasping, enabling better adaptation to objects with significant shape\nand structure differences. More specifically, we employ the physical constraint\nregularization during the training phase to guide the model towards predicting\ngrasps that comply with the physical rule on grasping. 
For the unstable grasp\nposes predicted on novel objects, we design a contact-score joint optimization\nusing the projection contact map to refine these poses in cluttered scenarios.\nExtensive experiments conducted on the GraspNet-1billion benchmark demonstrate\na substantial performance gain on the novel object set and the real-world\ngrasping experiments also demonstrate the effectiveness of our generalizing\n6-DoF grasp detection method.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Haoxiang Ma", "Modi Shi", "Boyang GAO", "Di Huang"], "category_name": "Robotics", "all_categories": ["Robotics", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f13e"}, "filepath": "data/2309.14611.png", "tags": [], "_media_type": "image", "_rand": 0.999118170001535, "arXiv_link": "https://arxiv.org/abs/2309.14611", "other_link": "https://github.com/Event-AHU/EventVOT_Benchmark}", "title": "Event Stream-based Visual Object Tracking: A High-Resolution Benchmark Dataset and A Novel Baseline", "abstract": "Tracking using bio-inspired event cameras has drawn more and more attention\nin recent years. Existing works either utilize aligned RGB and event data for\naccurate tracking or directly learn an event-based tracker. The first category\nneeds more cost for inference and the second one may be easily influenced by\nnoisy events or sparse spatial resolution. In this paper, we propose a novel\nhierarchical knowledge distillation framework that can fully utilize\nmulti-modal / multi-view information during training to facilitate knowledge\ntransfer, enabling us to achieve high-speed and low-latency visual tracking\nduring testing by using only event signals. Specifically, a teacher\nTransformer-based multi-modal tracking framework is first trained by feeding\nthe RGB frame and event stream simultaneously. Then, we design a new\nhierarchical knowledge distillation strategy which includes pairwise\nsimilarity, feature representation, and response maps-based knowledge\ndistillation to guide the learning of the student Transformer network.\nMoreover, since existing event-based tracking datasets are all low-resolution\n($346 \\times 260$), we propose the first large-scale high-resolution ($1280\n\\times 720$) dataset named EventVOT. It contains 1141 videos and covers a wide\nrange of categories such as pedestrians, vehicles, UAVs, ping pongs, etc.\nExtensive experiments on both low-resolution (FE240hz, VisEvent, COESOT), and\nour newly proposed high-resolution EventVOT dataset fully validated the\neffectiveness of our proposed method. 
The dataset, evaluation toolkit, and\nsource code are available on\n\\url{https://github.com/Event-AHU/EventVOT_Benchmark}", "keywords": ["Efficient and scalable vision"], "authors_list": ["Xiao Wang", "Shiao Wang", "Chuanming Tang", "Lin Zhu", "Bo Jiang", "Yonghong Tian", "Jin Tang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Neural and Evolutionary Computing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f13f"}, "filepath": "data/2401.00028.png", "tags": [], "_media_type": "image", "_rand": 0.9990393280075315, "arXiv_link": "https://arxiv.org/abs/2401.00028", "other_link": "https://github.com/large-ocr-model/large-ocr-model.github.io.", "title": "An Empirical Study of Scaling Law for Scene Text Recognition", "abstract": "The laws of model size, data volume, computation and model performance have\nbeen extensively studied in the field of Natural Language Processing (NLP).\nHowever, the scaling laws in Optical Character Recognition (OCR) have not yet\nbeen investigated. To address this, we conducted comprehensive studies that\ninvolved examining the correlation between performance and the scale of models,\ndata volume and computation in the field of text recognition.Conclusively, the\nstudy demonstrates smooth power laws between performance and model size, as\nwell as training data volume, when other influencing factors are held constant.\nAdditionally, we have constructed a large-scale dataset called REBU-Syn, which\ncomprises 6 million real samples and 18 million synthetic samples. Based on our\nscaling law and new dataset, we have successfully trained a scene text\nrecognition model, achieving a new state-ofthe-art on 6 common test benchmarks\nwith a top-1 average accuracy of 97.42%. The models and dataset are publicly\navailable at https://github.com/large-ocr-model/large-ocr-model.github.io.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Miao Rang", "Zhenni Bi", "Chuanjian Liu", "Yunhe Wang", "Kai Han"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f140"}, "filepath": "data/2404.05001.png", "tags": [], "_media_type": "image", "_rand": 0.9997276075272445, "arXiv_link": "https://arxiv.org/abs/2404.05001", "other_link": "https://github.com/Gang-Qu/HATNet-SPI.", "title": "Dual-scale Transformer for Large-scale Single-Pixel Imaging", "abstract": "Single-pixel imaging (SPI) is a potential computational imaging technique\nwhich produces image by solving an illposed reconstruction problem from few\nmeasurements captured by a single-pixel detector. Deep learning has achieved\nimpressive success on SPI reconstruction. However, previous poor reconstruction\nperformance and impractical imaging model limit its real-world applications. In\nthis paper, we propose a deep unfolding network with hybrid-attention\nTransformer on Kronecker SPI model, dubbed HATNet, to improve the imaging\nquality of real SPI cameras. Specifically, we unfold the computation graph of\nthe iterative shrinkagethresholding algorithm (ISTA) into two alternative\nmodules: efficient tensor gradient descent and hybrid-attention multiscale\ndenoising. By virtue of Kronecker SPI, the gradient descent module can avoid\nhigh computational overheads rooted in previous gradient descent modules based\non vectorized SPI. 
The denoising module is an encoder-decoder architecture\npowered by dual-scale spatial attention for high- and low-frequency aggregation\nand channel attention for global information recalibration. Moreover, we build\na SPI prototype to verify the effectiveness of the proposed method. Extensive\nexperiments on synthetic and real data demonstrate that our method achieves the\nstate-of-the-art performance. The source code and pre-trained models are\navailable at https://github.com/Gang-Qu/HATNet-SPI.", "keywords": ["Efficient and scalable vision", "Low-level vision"], "authors_list": ["Gang Qu", "Ping Wang", "Xin Yuan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f141"}, "filepath": "data/2402.19270.png", "tags": [], "_media_type": "image", "_rand": 0.9998249756130785, "arXiv_link": "https://arxiv.org/abs/2402.19270", "other_link": "", "title": "Learning Intra-view and Cross-view Geometric Knowledge for Stereo Matching", "abstract": "Geometric knowledge has been shown to be beneficial for the stereo matching\ntask. However, prior attempts to integrate geometric insights into stereo\nmatching algorithms have largely focused on geometric knowledge from single\nimages while crucial cross-view factors such as occlusion and matching\nuniqueness have been overlooked. To address this gap, we propose a novel\nIntra-view and Cross-view Geometric knowledge learning Network (ICGNet),\nspecifically crafted to assimilate both intra-view and cross-view geometric\nknowledge. ICGNet harnesses the power of interest points to serve as a channel\nfor intra-view geometric understanding. Simultaneously, it employs the\ncorrespondences among these points to capture cross-view geometric\nrelationships. This dual incorporation empowers the proposed ICGNet to leverage\nboth intra-view and cross-view geometric knowledge in its learning process,\nsubstantially improving its ability to estimate disparities. Our extensive\nexperiments demonstrate the superiority of the ICGNet over contemporary leading\nmodels.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Rui Gong", "Weide Liu", "ZAIWANG GU", "Xulei Yang", "Jun Cheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f142"}, "filepath": "data/2306.14525.png", "tags": [], "_media_type": "image", "_rand": 0.999912208641388, "arXiv_link": "https://arxiv.org/abs/2306.14525", "other_link": "https://parameternet.github.io/}.", "title": "ParameterNet: Parameters Are All You Need for Large-scale Visual Pretraining of Mobile Networks", "abstract": "The large-scale visual pretraining has significantly improve the performance\nof large vision models. However, we observe the \\emph{low FLOPs pitfall} that\nthe existing low-FLOPs models cannot benefit from large-scale pretraining. In\nthis paper, we introduce a novel design principle, termed ParameterNet, aimed\nat augmenting the number of parameters in large-scale visual pretraining models\nwhile minimizing the increase in FLOPs. We leverage dynamic convolutions to\nincorporate additional parameters into the networks with only a marginal rise\nin FLOPs. The ParameterNet approach allows low-FLOPs networks to take advantage\nof large-scale visual pretraining. 
Furthermore, we extend the ParameterNet\nconcept to the language domain to enhance inference results while preserving\ninference speed. Experiments on the large-scale ImageNet-22K have shown the\nsuperiority of our ParameterNet scheme. For example, ParameterNet-600M can\nachieve higher accuracy on ImageNet than the widely-used Swin Transformer\n(81.6\\% \\emph{vs.} 80.9\\%) and has much lower FLOPs (0.6G \\emph{vs.} 4.5G). In\nthe language domain, LLaMA-1B enhanced with ParameterNet achieves 2\\% higher\naccuracy over vanilla LLaMA. The code will be released at\n\\url{https://parameternet.github.io/}.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Kai Han", "Yunhe Wang", "Jianyuan Guo", "Enhua Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f143"}, "filepath": "data/2311.10339.png", "tags": [], "_media_type": "image", "_rand": 0.9993514151867118, "arXiv_link": "https://arxiv.org/abs/2311.10339", "other_link": "", "title": "A2XP: Towards Private Domain Generalization", "abstract": "Deep Neural Networks (DNNs) have become pivotal in various fields, especially\nin computer vision, outperforming previous methodologies. A critical challenge\nin their deployment is the bias inherent in data across different domains, such\nas image style and environmental conditions, leading to domain gaps. This\nnecessitates techniques for learning general representations from biased\ntraining data, known as domain generalization. This paper presents Attend to\neXpert Prompts (A2XP), a novel approach for domain generalization that\npreserves the privacy and integrity of the network architecture. A2XP consists\nof two phases: Expert Adaptation and Domain Generalization. In the first phase,\nprompts for each source domain are optimized to guide the model towards the\noptimal direction. In the second phase, two embedder networks are trained to\neffectively amalgamate these expert prompts, aiming for an optimal output. Our\nextensive experiments demonstrate that A2XP achieves state-of-the-art results\nover existing non-private domain generalization methods. The experimental\nresults validate that the proposed approach not only tackles the domain\ngeneralization challenge in DNNs but also offers a privacy-preserving,\nefficient solution to the broader field of computer vision.", "keywords": ["Efficient and scalable vision", "Large multimodal models and prompting techniques"], "authors_list": ["Geunhyeok Yu", "Hyoseok Hwang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f144"}, "filepath": "data/2311.15206.png", "tags": [], "_media_type": "image", "_rand": 0.9999133521349238, "arXiv_link": "https://arxiv.org/abs/2311.15206", "other_link": "", "title": "Insect-Foundation: A Foundation Model and Large-scale 1M Dataset for Visual Insect Understanding", "abstract": "In precision agriculture, the detection and recognition of insects play an\nessential role in the ability of crops to grow healthy and produce a\nhigh-quality yield. The current machine vision model requires a large volume of\ndata to achieve high performance. However, there are approximately 5.5 million\ndifferent insect species in the world. 
None of the existing insect datasets can\ncover even a fraction of them due to varying geographic locations and\nacquisition costs. In this paper, we introduce a novel \"Insect-1M\" dataset, a\ngame-changing resource poised to revolutionize insect-related foundation model\ntraining. Covering a vast spectrum of insect species, our dataset, including 1\nmillion images with dense identification labels of taxonomy hierarchy and\ninsect descriptions, offers a panoramic view of entomology, enabling foundation\nmodels to comprehend visual and semantic information about insects like never\nbefore. Then, to efficiently establish an Insect Foundation Model, we develop a\nmicro-feature self-supervised learning method with a Patch-wise Relevant\nAttention mechanism capable of discerning the subtle differences among insect\nimages. In addition, we introduce Description Consistency loss to improve\nmicro-feature modeling via insect descriptions. Through our experiments, we\nillustrate the effectiveness of our proposed approach in insect modeling and\nachieve State-of-the-Art performance on standard benchmarks of insect-related\ntasks. Our Insect Foundation Model and Dataset promise to empower the next\ngeneration of insect-related vision models, bringing them closer to the\nultimate goal of precision agriculture.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Hoang-Quan Nguyen", "Thanh-Dat Truong", "Xuan-Bac Nguyen", "Ashley Dowling", "Xin Li", "Khoa Luu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f145"}, "filepath": "data/2403.12473.png", "tags": [], "_media_type": "image", "_rand": 0.9992473740905997, "arXiv_link": "https://arxiv.org/abs/2403.12473", "other_link": "", "title": "PostureHMR: Posture Transformation for 3D Human Mesh Recovery", "abstract": "With the recent advancements in single-image-based human mesh recovery, there\nis a growing interest in enhancing its performance in certain extreme\nscenarios, such as occlusion, while maintaining overall model accuracy.\nAlthough obtaining accurately annotated 3D human poses under occlusion is\nchallenging, there is still a wealth of rich and precise 2D pose annotations\nthat can be leveraged. However, existing works mostly focus on directly\nleveraging 2D pose coordinates to estimate 3D pose and mesh. In this paper, we\npresent PostoMETRO($\\textbf{Pos}$e $\\textbf{to}$ken enhanced $\\textbf{ME}$sh\n$\\textbf{TR}$ansf$\\textbf{O}$rmer), which integrates occlusion-resilient 2D\npose representation into transformers in a token-wise manner. Utilizing a\nspecialized pose tokenizer, we efficiently condense 2D pose data to a compact\nsequence of pose tokens and feed them to the transformer together with the\nimage tokens. This process not only ensures a rich depiction of texture from\nthe image but also fosters a robust integration of pose and image information.\nSubsequently, these combined tokens are queried by vertex and joint tokens to\ndecode 3D coordinates of mesh vertices and human joints. Facilitated by the\nrobust pose token representation and the effective combination, we are able to\nproduce more precise 3D coordinates, even under extreme scenarios like\nocclusion. Experiments on both standard and occlusion-specific benchmarks\ndemonstrate the effectiveness of PostoMETRO. 
Qualitative results further\nillustrate the clarity of how 2D pose can help 3D reconstruction. Code will be\nmade available.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Yu-Pei Song", "Xiao WU", "Zhaoquan Yuan", "Jian-Jun Qiao", "Qiang Peng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f146"}, "filepath": "data/2403.11284v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992370769607511, "arXiv_link": "https://arxiv.org/html/2403.11284v1", "other_link": "", "title": "InstantBooth: Personalized Text-to-Image Generation without Test-Time Finetuning", "abstract": "Currently, personalized image generation methods mostly require considerable\ntime to finetune and often overfit the concept resulting in generated images\nthat are similar to custom concepts but difficult to edit by prompts. We\npropose an effective and fast approach that could balance the text-image\nconsistency and identity consistency of the generated image and reference\nimage. Our method can generate personalized images without any fine-tuning\nwhile maintaining the inherent text-to-image generation ability of diffusion\nmodels. Given a prompt and a reference image, we merge the custom concept into\ngenerated images by manipulating cross-attention and self-attention layers of\nthe original diffusion model to generate personalized images that match the\ntext description. Comprehensive experiments highlight the superiority of our\nmethod.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Jing Shi", "Wei Xiong", "Zhe Lin", "HyunJoon Jung"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f147"}, "filepath": "data/2307.14638v1.png", "tags": [], "_media_type": "image", "_rand": 0.999665723011133, "arXiv_link": "https://arxiv.org/abs/2307.14638v1", "other_link": "", "title": "Exact Fusion via Feature Distribution Matching for Few-shot Image Generation", "abstract": "Due to the absence of fine structure and texture information, existing\nfusion-based few-shot image generation methods suffer from unsatisfactory\ngeneration quality and diversity. To address this problem, we propose a novel\nfeature Equalization fusion Generative Adversarial Network (EqGAN) for few-shot\nimage generation. Unlike existing fusion strategies that rely on either deep\nfeatures or local representations, we design two separate branches to fuse\nstructures and textures by disentangling encoded features into shallow and deep\ncontents. To refine image contents at all feature levels, we equalize the fused\nstructure and texture semantics at different scales and supplement the decoder\nwith richer information by skip connections. Since the fused structures and\ntextures may be inconsistent with each other, we devise a consistent\nequalization loss between the equalized features and the intermediate output of\nthe decoder to further align the semantics. 
Comprehensive experiments on three\npublic datasets demonstrate that, EqGAN not only significantly improves\ngeneration performance with FID score (by up to 32.7%) and LPIPS score (by up\nto 4.19%), but also outperforms the state-of-the-arts in terms of accuracy (by\nup to 1.97%) for downstream classification tasks.", "keywords": [], "authors_list": ["Yingbo Zhou", "Yutong Ye", "Pengyu Zhang", "Xian Wei", "Mingsong Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f148"}, "filepath": "data/2403.17782.png", "tags": [], "_media_type": "image", "_rand": 0.9992492617000835, "arXiv_link": "https://arxiv.org/abs/2403.17782", "other_link": "", "title": "GenesisTex: Adapting Image Denoising Diffusion to Texture Space", "abstract": "We present GenesisTex, a novel method for synthesizing textures for 3D\ngeometries from text descriptions. GenesisTex adapts the pretrained image\ndiffusion model to texture space by texture space sampling. Specifically, we\nmaintain a latent texture map for each viewpoint, which is updated with\npredicted noise on the rendering of the corresponding viewpoint. The sampled\nlatent texture maps are then decoded into a final texture map. During the\nsampling process, we focus on both global and local consistency across multiple\nviewpoints: global consistency is achieved through the integration of style\nconsistency mechanisms within the noise prediction network, and low-level\nconsistency is achieved by dynamically aligning latent textures. Finally, we\napply reference-based inpainting and img2img on denser views for texture\nrefinement. Our approach overcomes the limitations of slow optimization in\ndistillation-based methods and instability in inpainting-based methods.\nExperiments on meshes from various sources demonstrate that our method\nsurpasses the baseline methods quantitatively and qualitatively.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Chenjian Gao", "Boyan Jiang", "Xinghui Li", "YingPeng Zhang", "Qian Yu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f149"}, "filepath": "data/2312.07533.png", "tags": [], "_media_type": "image", "_rand": 0.9996428586098721, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2312.07533", "other_link": "", "title": "On Scaling up a Multilingual Vision and Language Model", "abstract": "Visual language models (VLMs) rapidly progressed with the recent success of\nlarge language models. There have been growing efforts on visual instruction\ntuning to extend the LLM with visual inputs, but lacks an in-depth study of the\nvisual language pre-training process, where the model learns to perform joint\nmodeling on both modalities. In this work, we examine the design options for\nVLM pre-training by augmenting LLM towards VLM through step-by-step\ncontrollable comparisons. 
We introduce three main findings: (1) freezing LLMs\nduring pre-training can achieve decent zero-shot performance, but lack\nin-context learning capability, which requires unfreezing the LLM; (2)\ninterleaved pre-training data is beneficial whereas image-text pairs alone are\nnot optimal; (3) re-blending text-only instruction data to image-text data\nduring instruction fine-tuning not only remedies the degradation of text-only\ntasks, but also boosts VLM task accuracy. With an enhanced pre-training recipe\nwe build VILA, a Visual Language model family that consistently outperforms the\nstate-of-the-art models, e.g., LLaVA-1.5, across main benchmarks without bells\nand whistles. Multi-modal pre-training also helps unveil appealing properties\nof VILA, including multi-image reasoning, enhanced in-context learning, and\nbetter world knowledge.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Efficient and scalable vision"], "authors_list": ["Xi Chen", "Josip Djolonga", "Piotr Padlewski", "Basil Mustafa", "Soravit Changpinyo", "Jialin Wu", "Carlos Riquelme Ruiz", "Sebastian Goodman", "Xiao Wang", "Yi Tay", "Siamak Shakeri", "Mostafa Dehghani", "Daniel Salz", "Mario Lu\u010di\u0107", "Michael Tschannen", "Arsha Nagrani", "Hexiang Hu", "Mandar Joshi", "Bo Pang", "Ceslee Montgomery", "Paulina Pietrzyk", "Marvin Ritter", "AJ Piergiovanni", "Matthias Minderer", "Filip Pavetic", "Austin Waters", "Gang Li", "Ibrahim Alabdulmohsin", "Lucas Beyer", "Julien Amelot", "Kenton Lee", "Andreas Steiner", "Yang Li", "Daniel Keysers", "Anurag Arnab", "Yuanzhong Xu", "Keran Rong", "Alexander Kolesnikov", "Mojtaba Seyedhosseini", "Anelia Angelova", "Xiaohua Zhai", "Neil Houlsby", "Radu Soricut"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f14a"}, "filepath": "data/2403.06213.png", "tags": [], "_media_type": "image", "_rand": 0.9997008690043768, "arXiv_link": "https://arxiv.org/abs/2403.06213", "other_link": "https://github.com/roymiles/vkd", "title": "$V_kD:$ Improving knowledge distillation using orthogonal projections", "abstract": "Knowledge distillation is an effective method for training small and\nefficient deep learning models. However, the efficacy of a single method can\ndegenerate when transferring to other tasks, modalities, or even other\narchitectures. To address this limitation, we propose a novel constrained\nfeature distillation method. This method is derived from a small set of core\nprinciples, which results in two emerging components: an orthogonal projection\nand a task-specific normalisation. Equipped with both of these components, our\ntransformer models can outperform all previous methods on ImageNet and reach up\nto a 4.4% relative improvement over the previous state-of-the-art methods. To\nfurther demonstrate the generality of our method, we apply it to object\ndetection and image generation, whereby we obtain consistent and substantial\nperformance improvements over state-of-the-art. 
Code and models are publicly\navailable: https://github.com/roymiles/vkd", "keywords": ["Efficient and scalable vision"], "authors_list": ["Roy Miles", "Ismail Elezi", "Jiankang Deng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f14b"}, "filepath": "data/2309.01858.png", "tags": [], "_media_type": "image", "_rand": 0.99924862395269, "arXiv_link": "https://arxiv.org/abs/2309.01858", "other_link": "https://cmp.felk.cvut.cz/univ_emb/", "title": "Towards Modern Image Manipulation Localization: A Large-Scale Dataset and Novel Methods", "abstract": "Fine-grained and instance-level recognition methods are commonly trained and\nevaluated on specific domains, in a model per domain scenario. Such an\napproach, however, is impractical in real large-scale applications. In this\nwork, we address the problem of universal image embedding, where a single\nuniversal model is trained and used in multiple domains. First, we leverage\nexisting domain-specific datasets to carefully construct a new large-scale\npublic benchmark for the evaluation of universal image embeddings, with 241k\nquery images, 1.4M index images and 2.8M training images across 8 different\ndomains and 349k classes. We define suitable metrics, training and evaluation\nprotocols to foster future research in this area. Second, we provide a\ncomprehensive experimental evaluation on the new dataset, demonstrating that\nexisting approaches and simplistic extensions lead to worse performance than an\nassembly of models trained for each domain separately. Finally, we conducted a\npublic research competition on this topic, leveraging industrial datasets,\nwhich attracted the participation of more than 1k teams worldwide. This\nexercise generated many interesting research ideas and findings which we\npresent in detail. Project webpage: https://cmp.felk.cvut.cz/univ_emb/", "keywords": ["Efficient and scalable vision"], "authors_list": ["Chenfan Qu", "Yiwu Zhong", "Chongyu Liu", "Guitao Xu", "Dezhi Peng", "Fengjun Guo", "Lianwen Jin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f14c"}, "filepath": "data/2403.05842.png", "tags": [], "_media_type": "image", "_rand": 0.999488348471628, "arXiv_link": "https://arxiv.org/abs/2403.05842", "other_link": "", "title": "Permutation Equivariance of Transformers and Its Applications", "abstract": "With the blossom of deep learning models and services, it has become an\nimperative concern to safeguard the valuable model parameters from being\nstolen. Watermarking is considered an important tool for ownership\nverification. However, current watermarking schemes are customized for\ndifferent models and tasks, hard to be integrated as an integrated intellectual\nprotection service. We propose Hufu, a modality-agnostic watermarking system\nfor pre-trained Transformer-based models, relying on the permutation\nequivariance property of Transformers. 
Hufu embeds watermark by fine-tuning the\npre-trained model on a set of data samples specifically permuted, and the\nembedded model essentially contains two sets of weights -- one for normal use\nand the other for watermark extraction which is triggered on permuted inputs.\nThe permutation equivariance ensures minimal interference between these two\nsets of model weights and thus high fidelity on downstream tasks. Since our\nmethod only depends on the model itself, it is naturally modality-agnostic,\ntask-independent, and trigger-sample-free. Extensive experiments on the\nstate-of-the-art vision Transformers, BERT, and GPT2 have demonstrated Hufu's\nsuperiority in meeting watermarking requirements including effectiveness,\nefficiency, fidelity, and robustness, showing its great potential to be\ndeployed as a uniform ownership verification service for various Transformers.", "keywords": [], "authors_list": ["Hengyuan Xu", "Liyao Xiang", "Hangyu Ye", "Dixi Yao", "Pengzhi Chu", "Baochun Li"], "category_name": "Cryptography and Security", "all_categories": ["Cryptography and Security", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f14d"}, "filepath": "data/2403.09050.png", "tags": [], "_media_type": "image", "_rand": 0.9994694447088875, "arXiv_link": "https://arxiv.org/abs/2403.09050", "other_link": "", "title": "CLOAF: CoLlisiOn-Aware Human Flow", "abstract": "Even the best current algorithms for estimating body 3D shape and pose yield\nresults that include body self-intersections. In this paper, we present CLOAF,\nwhich exploits the diffeomorphic nature of Ordinary Differential Equations to\neliminate such self-intersections while still imposing body shape constraints.\nWe show that, unlike earlier approaches to addressing this issue, ours\ncompletely eliminates the self-intersections without compromising the accuracy\nof the reconstructions. Being differentiable, CLOAF can be used to fine-tune\npose and shape estimation baselines to improve their overall performance and\neliminate self-intersections in their predictions. Furthermore, we demonstrate\nhow our CLOAF strategy can be applied to practically any motion field induced\nby the user. CLOAF also makes it possible to edit motion to interact with the\nenvironment without worrying about potential collision or loss of body-shape\nprior.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Andrey Davydov", "Martin Engilberge", "Mathieu Salzmann", "Pascal Fua"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f14e"}, "filepath": "data/2310.09528.png", "tags": [], "_media_type": "image", "_rand": 0.9991027565908468, "arXiv_link": "https://arxiv.org/abs/2310.09528", "other_link": "", "title": "A Physics-informed Low-rank Deep Neural Network for Blind and Universal Lens Aberration Correction", "abstract": "In various engineering and applied science applications, repetitive numerical\nsimulations of partial differential equations (PDEs) for varying input\nparameters are often required (e.g., aircraft shape optimization over many\ndesign parameters) and solvers are required to perform rapid execution. In this\nstudy, we suggest a path that potentially opens up a possibility for\nphysics-informed neural networks (PINNs), emerging deep-learning-based solvers,\nto be considered as one such solver. 
Although PINNs have pioneered a proper\nintegration of deep-learning and scientific computing, they require repetitive\ntime-consuming training of neural networks, which is not suitable for\nmany-query scenarios. To address this issue, we propose a lightweight low-rank\nPINNs containing only hundreds of model parameters and an associated\nhypernetwork-based meta-learning algorithm, which allows efficient\napproximation of solutions of PDEs for varying ranges of PDE input parameters.\nMoreover, we show that the proposed method is effective in overcoming a\nchallenging issue, known as \"failure modes\" of PINNs.", "keywords": [], "authors_list": ["Jin Gong", "Runzhao Yang", "Weihang Zhang", "Jinli Suo", "Qionghai Dai"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Numerical Analysis", "Unknown", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f14f"}, "filepath": "data/2403.03346.png", "tags": [], "_media_type": "image", "_rand": 0.9990362634394278, "arXiv_link": "https://arxiv.org/abs/2403.03346", "other_link": "", "title": "Pre-training Vision Models with Mandelbulb Variations", "abstract": "We propose Strongly Supervised pre-training with ScreenShots (S4) - a novel\npre-training paradigm for Vision-Language Models using data from large-scale\nweb screenshot rendering. Using web screenshots unlocks a treasure trove of\nvisual and textual cues that are not present in using image-text pairs. In S4,\nwe leverage the inherent tree-structured hierarchy of HTML elements and the\nspatial localization to carefully design 10 pre-training tasks with large scale\nannotated data. These tasks resemble downstream tasks across different domains\nand the annotations are cheap to obtain. We demonstrate that, compared to\ncurrent screenshot pre-training objectives, our innovative pre-training method\nsignificantly enhances performance of image-to-text model in nine varied and\npopular downstream tasks - up to 76.1% improvements on Table Detection, and at\nleast 1% on Widget Captioning.", "keywords": ["Multimodal models and vision-language models", "Document analysis and understanding"], "authors_list": ["Benjamin N. Chiche", "Yuto Horikawa", "Ryo Fujita"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f150"}, "filepath": "data/2311.17002.png", "tags": [], "_media_type": "image", "_rand": 0.9992515367424591, "arXiv_link": "https://arxiv.org/abs/2311.17002", "other_link": "https://ranni-t2i.github.io/Ranni.", "title": "Ranni: Taming Text-to-Image Diffusion for Accurate Instruction Following", "abstract": "Existing text-to-image (T2I) diffusion models usually struggle in\ninterpreting complex prompts, especially those with quantity, object-attribute\nbinding, and multi-subject descriptions. In this work, we introduce a semantic\npanel as the middleware in decoding texts to images, supporting the generator\nto better follow instructions. The panel is obtained through arranging the\nvisual concepts parsed from the input text by the aid of large language models,\nand then injected into the denoising network as a detailed control signal to\ncomplement the text condition. To facilitate text-to-panel learning, we come up\nwith a carefully designed semantic formatting protocol, accompanied by a\nfully-automatic data preparation pipeline. 
Thanks to such a design, our\napproach, which we call Ranni, manages to enhance a pre-trained T2I generator\nregarding its textual controllability. More importantly, the introduction of\nthe generative middleware brings a more convenient form of interaction (i.e.,\ndirectly adjusting the elements in the panel or using language instructions)\nand further allows users to finely customize their generation, based on which\nwe develop a practical system and showcase its potential in continuous\ngeneration and chatting-based editing. Our project page is at\nhttps://ranni-t2i.github.io/Ranni.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Yutong Feng", "Biao Gong", "Di Chen", "Yujun Shen", "Yu Liu", "Jingren Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f151"}, "filepath": "data/2404.01828.png", "tags": [], "_media_type": "image", "_rand": 0.9992109249724563, "arXiv_link": "https://arxiv.org/abs/2404.01828", "other_link": "", "title": "Defense without Forgetting: Continual Adversarial Defense with Anisotropic & Isotropic Pseudo Replay", "abstract": "Deep neural networks have demonstrated susceptibility to adversarial attacks.\nAdversarial defense techniques often focus on one-shot setting to maintain\nrobustness against attack. However, new attacks can emerge in sequences in\nreal-world deployment scenarios. As a result, it is crucial for a defense model\nto constantly adapt to new attacks, but the adaptation process can lead to\ncatastrophic forgetting of previously defended against attacks. In this paper,\nwe discuss for the first time the concept of continual adversarial defense\nunder a sequence of attacks, and propose a lifelong defense baseline called\nAnisotropic \\& Isotropic Replay (AIR), which offers three advantages: (1)\nIsotropic replay ensures model consistency in the neighborhood distribution of\nnew data, indirectly aligning the output preference between old and new tasks.\n(2) Anisotropic replay enables the model to learn a compromise data manifold\nwith fresh mixed semantics for further replay constraints and potential future\nattacks. (3) A straightforward regularizer mitigates the 'plasticity-stability'\ntrade-off by aligning model output between new and old tasks. Experiment\nresults demonstrate that AIR can approximate or even exceed the empirical\nperformance upper bounds achieved by Joint Training.", "keywords": [], "authors_list": ["Yuhang Zhou", "Zhongyun Hua"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f152"}, "filepath": "data/2405.07648.png", "tags": [], "_media_type": "image", "_rand": 0.9994362227844276, "arXiv_link": "https://arxiv.org/abs/2405.07648", "other_link": "https://github.com/I2-Multimedia-Lab/CDFormer}{https://github.com/I2-Multimedia-Lab/CDFormer}.", "title": "CDFormer: When Degradation Prediction Embraces Diffusion Model for Blind Image Super-Resolution", "abstract": "Existing Blind image Super-Resolution (BSR) methods focus on estimating\neither kernel or degradation information, but have long overlooked the\nessential content details. 
In this paper, we propose a novel BSR approach,\nContent-aware Degradation-driven Transformer (CDFormer), to capture both\ndegradation and content representations. However, low-resolution images cannot\nprovide enough content details, and thus we introduce a diffusion-based module\n$CDFormer_{diff}$ to first learn Content Degradation Prior (CDP) in both low-\nand high-resolution images, and then approximate the real distribution given\nonly low-resolution information. Moreover, we apply an adaptive SR network\n$CDFormer_{SR}$ that effectively utilizes CDP to refine features. Compared to\nprevious diffusion-based SR methods, we treat the diffusion model as an\nestimator that can overcome the limitations of expensive sampling time and\nexcessive diversity. Experiments show that CDFormer can outperform existing\nmethods, establishing a new state-of-the-art performance on various benchmarks\nunder blind settings. Codes and models will be available at\n\\href{https://github.com/I2-Multimedia-Lab/CDFormer}{https://github.com/I2-Multimedia-Lab/CDFormer}.", "keywords": ["Low-level vision", "Image and video generation and manipulation"], "authors_list": ["Qingguo Liu", "Chenyi Zhuang", "Pan Gao", "Jie Qin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f153"}, "filepath": "data/2402.18933.png", "tags": [], "_media_type": "image", "_rand": 0.999519014419519, "arXiv_link": "https://arxiv.org/abs/2402.18933", "other_link": "", "title": "Modality-Agnostic Structural Image Representation Learning for Deformable Multi-Modality Medical Image Registration", "abstract": "Establishing dense anatomical correspondence across distinct imaging\nmodalities is a foundational yet challenging procedure for numerous medical\nimage analysis studies and image-guided radiotherapy. Existing multi-modality\nimage registration algorithms rely on statistical-based similarity measures or\nlocal structural image representations. However, the former is sensitive to\nlocally varying noise, while the latter is not discriminative enough to cope\nwith complex anatomical structures in multimodal scans, causing ambiguity in\ndetermining the anatomical correspondence across scans with different\nmodalities. In this paper, we propose a modality-agnostic structural\nrepresentation learning method, which leverages Deep Neighbourhood\nSelf-similarity (DNS) and anatomy-aware contrastive learning to learn\ndiscriminative and contrast-invariance deep structural image representations\n(DSIR) without the need for anatomical delineations or pre-aligned training\nimages. We evaluate our method on multiphase CT, abdomen MR-CT, and brain MR\nT1w-T2w registration. Comprehensive results demonstrate that our method is\nsuperior to the conventional local structural representation and\nstatistical-based similarity measures in terms of discriminability and\naccuracy.", "keywords": [], "authors_list": ["Tony C. W. 
MOK", "Zi Li", "Yunhao Bai", "Jianpeng Zhang", "Wei Liu", "Yan-Jie Zhou", "Ke Yan", "Dakai Jin", "Yu Shi", "Xiaoli Yin", "Le Lu", "Ling Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f154"}, "filepath": "data/2405.05502.png", "tags": [], "_media_type": "image", "_rand": 0.9990794957728601, "arXiv_link": "https://arxiv.org/abs/2405.05502", "other_link": "", "title": "Towards Accurate and Robust Architectures via Neural Architecture Search", "abstract": "To defend deep neural networks from adversarial attacks, adversarial training\nhas been drawing increasing attention for its effectiveness. However, the\naccuracy and robustness resulting from the adversarial training are limited by\nthe architecture, because adversarial training improves accuracy and robustness\nby adjusting the weight connection affiliated to the architecture. In this\nwork, we propose ARNAS to search for accurate and robust architectures for\nadversarial training. First we design an accurate and robust search space, in\nwhich the placement of the cells and the proportional relationship of the\nfilter numbers are carefully determined. With the design, the architectures can\nobtain both accuracy and robustness by deploying accurate and robust structures\nto their sensitive positions, respectively. Then we propose a differentiable\nmulti-objective search strategy, performing gradient descent towards directions\nthat are beneficial for both natural loss and adversarial loss, thus the\naccuracy and robustness can be guaranteed at the same time. We conduct\ncomprehensive experiments in terms of white-box attacks, black-box attacks, and\ntransferability. Experimental results show that the searched architecture has\nthe strongest robustness with the competitive accuracy, and breaks the\ntraditional idea that NAS-based architectures cannot transfer well to complex\ntasks in robustness scenarios. By analyzing outstanding architectures searched,\nwe also conclude that accurate and robust neural architectures tend to deploy\ndifferent structures near the input and output, which has great practical\nsignificance on both hand-crafting and automatically designing of accurate and\nrobust architectures.", "keywords": [], "authors_list": ["Yuwei Ou", "Yuqi Feng", "Yanan Sun"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Cryptography and Security", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f155"}, "filepath": "data/2405.05216.png", "tags": [], "_media_type": "image", "_rand": 0.9991046079218431, "arXiv_link": "https://arxiv.org/abs/2405.05216", "other_link": "https://github.com/PKU-ICST-MIPL/FinePOSE_CVPR2024.", "title": "Fast Adaptation for Human Pose Estimation via Meta-Optimization", "abstract": "The 3D Human Pose Estimation (3D HPE) task uses 2D images or videos to\npredict human joint coordinates in 3D space. Despite recent advancements in\ndeep learning-based methods, they mostly ignore the capability of coupling\naccessible texts and naturally feasible knowledge of humans, missing out on\nvaluable implicit supervision to guide the 3D HPE task. Moreover, previous\nefforts often study this task from the perspective of the whole human body,\nneglecting fine-grained guidance hidden in different body parts. 
To this end,\nwe present a new Fine-Grained Prompt-Driven Denoiser based on a diffusion model\nfor 3D HPE, named \\textbf{FinePOSE}. It consists of three core blocks enhancing\nthe reverse process of the diffusion model: (1) Fine-grained Part-aware Prompt\nlearning (FPP) block constructs fine-grained part-aware prompts via coupling\naccessible texts and naturally feasible knowledge of body parts with learnable\nprompts to model implicit guidance. (2) Fine-grained Prompt-pose Communication\n(FPC) block establishes fine-grained communications between learned part-aware\nprompts and poses to improve the denoising quality. (3) Prompt-driven Timestamp\nStylization (PTS) block integrates learned prompt embedding and temporal\ninformation related to the noise level to enable adaptive adjustment at each\ndenoising step. Extensive experiments on public single-human pose estimation\ndatasets show that FinePOSE outperforms state-of-the-art methods. We further\nextend FinePOSE to multi-human pose estimation. Achieving 34.3mm average MPJPE\non the EgoHumans dataset demonstrates the potential of FinePOSE to deal with\ncomplex multi-human scenarios. Code is available at\nhttps://github.com/PKU-ICST-MIPL/FinePOSE_CVPR2024.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis", "Multimodal models and vision-language models"], "authors_list": ["Shengxiang Hu", "Huaijiang Sun", "Bin Li", "Dong Wei", "Weiqing Li", "Jianfeng Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f156"}, "filepath": "data/2402.19286.png", "tags": [], "_media_type": "image", "_rand": 0.9997733637081843, "arXiv_link": "https://arxiv.org/abs/2402.19286", "other_link": "", "title": "PrPSeg: Universal Proposition Learning for Panoramic Renal Pathology Segmentation", "abstract": "Understanding the anatomy of renal pathology is crucial for advancing disease\ndiagnostics, treatment evaluation, and clinical research. The complex kidney\nsystem comprises various components across multiple levels, including regions\n(cortex, medulla), functional units (glomeruli, tubules), and cells (podocytes,\nmesangial cells in glomerulus). Prior studies have predominantly overlooked the\nintricate spatial interrelations among objects from clinical knowledge. 
In this\nresearch, we introduce a novel universal proposition learning approach, called\npanoramic renal pathology segmentation (PrPSeg), designed to segment\ncomprehensively panoramic structures within kidney by integrating extensive\nknowledge of kidney anatomy.\n In this paper, we propose (1) the design of a comprehensive universal\nproposition matrix for renal pathology, facilitating the incorporation of\nclassification and spatial relationships into the segmentation process; (2) a\ntoken-based dynamic head single network architecture, with the improvement of\nthe partial label image segmentation and capability for future data\nenlargement; and (3) an anatomy loss function, quantifying the inter-object\nrelationships across the kidney.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Ruining Deng", "Quan Liu", "Can Cui", "Tianyuan Yao", "Jialin Yue", "Juming Xiong", "Lining yu", "Yifei Wu", "Mengmeng Yin", "Yu Wang", "Shilin Zhao", "Yucheng Tang", "Haichun Yang", "Yuankai Huo"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f157"}, "filepath": "data/2312.02696.png", "tags": [], "_media_type": "image", "_rand": 0.9994018216625754, "arXiv_link": "https://arxiv.org/abs/2312.02696", "other_link": "", "title": "Analyzing and Improving the Training Dynamics of Diffusion Models", "abstract": "Diffusion models currently dominate the field of data-driven image synthesis\nwith their unparalleled scaling to large datasets. In this paper, we identify\nand rectify several causes for uneven and ineffective training in the popular\nADM diffusion model architecture, without altering its high-level structure.\nObserving uncontrolled magnitude changes and imbalances in both the network\nactivations and weights over the course of training, we redesign the network\nlayers to preserve activation, weight, and update magnitudes on expectation. We\nfind that systematic application of this philosophy eliminates the observed\ndrifts and imbalances, resulting in considerably better networks at equal\ncomputational complexity. Our modifications improve the previous record FID of\n2.41 in ImageNet-512 synthesis to 1.81, achieved using fast deterministic\nsampling.\n As an independent contribution, we present a method for setting the\nexponential moving average (EMA) parameters post-hoc, i.e., after completing\nthe training run. 
This allows precise tuning of EMA length without the cost of\nperforming several training runs, and reveals its surprising interactions with\nnetwork architecture, training time, and guidance.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Tero Karras", "Miika Aittala", "Jaakko Lehtinen", "Janne Hellsten", "Timo Aila", "Samuli Laine"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning", "Neural and Evolutionary Computing", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f158"}, "filepath": "data/2401.14758.png", "tags": [], "_media_type": "image", "_rand": 0.9990919642800455, "arXiv_link": "https://arxiv.org/abs/2401.14758", "other_link": "https://github.com/ZifanWu/CAL.", "title": "POCE: Primal Policy Optimization with Conservative Estimation for Multi-constraint Offline Reinforcement Learning", "abstract": "Primal-dual safe RL methods commonly perform iterations between the primal\nupdate of the policy and the dual update of the Lagrange Multiplier. Such a\ntraining paradigm is highly susceptible to the error in cumulative cost\nestimation since this estimation serves as the key bond connecting the primal\nand dual update processes. We show that this problem causes significant\nunderestimation of cost when using off-policy methods, leading to the failure\nto satisfy the safety constraint. To address this issue, we propose\nconservative policy optimization, which learns a policy in a\nconstraint-satisfying area by considering the uncertainty in cost estimation.\nThis improves constraint satisfaction but also potentially hinders reward\nmaximization. We then introduce local policy convexification to help eliminate\nsuch suboptimality by gradually reducing the estimation uncertainty. We provide\ntheoretical interpretations of the joint coupling effect of these two\ningredients and further verify them by extensive experiments. Results on\nbenchmark tasks show that our method not only achieves an asymptotic\nperformance comparable to state-of-the-art on-policy methods while using much\nfewer samples, but also significantly reduces constraint violation during\ntraining. Our code is available at https://github.com/ZifanWu/CAL.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Jiayi Guan", "Li Shen", "Ao Zhou", "Lusong Li", "Han Hu", "Xiaodong He", "Guang Chen", "Changjun Jiang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f159"}, "filepath": "data/2309.00696.png", "tags": [], "_media_type": "image", "_rand": 0.9990987590403, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2309.00696", "other_link": "", "title": "VicTR: Video-conditioned Text Representations for Activity Recognition", "abstract": "The challenge of long-term video understanding remains constrained by the\nefficient extraction of object semantics and the modelling of their\nrelationships for downstream tasks. Although the CLIP visual features exhibit\ndiscriminative properties for various vision tasks, particularly in object\nencoding, they are suboptimal for long-term video understanding. To address\nthis issue, we present the Attributes-Aware Network (AAN), which consists of\ntwo key components: the Attributes Extractor and a Graph Reasoning block. 
These\ncomponents facilitate the extraction of object-centric attributes and the\nmodelling of their relationships within the video. By leveraging CLIP features,\nAAN outperforms state-of-the-art approaches on two popular action detection\ndatasets: Charades and Toyota Smarthome Untrimmed datasets.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Kumara Kahatapitiya", "Anurag Arnab", "Arsha Nagrani", "Michael Ryoo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f15a"}, "filepath": "data/2402.17200.png", "tags": [], "_media_type": "image", "_rand": 0.9993882382103494, "arXiv_link": "https://arxiv.org/abs/2402.17200", "other_link": "", "title": "Enhancing Quality of Compressed Images by Mitigating Enhancement Bias Towards Compression Domain", "abstract": "Existing quality enhancement methods for compressed images focus on aligning\nthe enhancement domain with the raw domain to yield realistic images. However,\nthese methods exhibit a pervasive enhancement bias towards the compression\ndomain, inadvertently regarding it as more realistic than the raw domain. This\nbias makes enhanced images closely resemble their compressed counterparts, thus\ndegrading their perceptual quality. In this paper, we propose a simple yet\neffective method to mitigate this bias and enhance the quality of compressed\nimages. Our method employs a conditional discriminator with the compressed\nimage as a key condition, and then incorporates a domain-divergence\nregularization to actively distance the enhancement domain from the compression\ndomain. Through this dual strategy, our method enables the discrimination\nagainst the compression domain, and brings the enhancement domain closer to the\nraw domain. Comprehensive quality evaluations confirm the superiority of our\nmethod over other state-of-the-art methods without incurring inference\noverheads.", "keywords": ["Low-level vision", "Image and video generation and manipulation"], "authors_list": ["Qunliang Xing", "Mai Xu", "Shengxi Li", "Xin Deng", "Meisong Zheng", "huaida liu", "Ying Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f15b"}, "filepath": "data/2310.05370.png", "tags": [], "_media_type": "image", "_rand": 0.9997467671706322, "arXiv_link": "https://arxiv.org/abs/2310.05370", "other_link": "", "title": "SocialCircle: Learning the Angle-based Social Interaction Representation for Pedestrian Trajectory Prediction", "abstract": "Analyzing and forecasting trajectories of agents like pedestrians and cars in\ncomplex scenes has become more and more significant in many intelligent systems\nand applications. The diversity and uncertainty in socially interactive\nbehaviors among a rich variety of agents make this task more challenging than\nother deterministic computer vision tasks. Researchers have made a lot of\nefforts to quantify the effects of these interactions on future trajectories\nthrough different mathematical models and network structures, but this problem\nhas not been well solved. 
Inspired by marine animals that localize the\npositions of their companions underwater through echoes, we build a new\nanglebased trainable social interaction representation, named SocialCircle, for\ncontinuously reflecting the context of social interactions at different angular\norientations relative to the target agent. We validate the effect of the\nproposed SocialCircle by training it along with several newly released\ntrajectory prediction models, and experiments show that the SocialCircle not\nonly quantitatively improves the prediction performance, but also qualitatively\nhelps better simulate social interactions when forecasting pedestrian\ntrajectories in a way that is consistent with human intuitions.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Conghao Wong", "Beihao Xia", "Ziqian Zou", "Yulong Wang", "Xinge You"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f15c"}, "filepath": "data/2403.17496.png", "tags": [], "_media_type": "image", "_rand": 0.9992883983219222, "arXiv_link": "https://arxiv.org/abs/2403.17496", "other_link": "", "title": "Dr.Hair: Reconstructing Scalp-Connected Hair Strands without Pre-training via Differentiable Rendering of Line Segments", "abstract": "In the film and gaming industries, achieving a realistic hair appearance\ntypically involves the use of strands originating from the scalp. However,\nreconstructing these strands from observed surface images of hair presents\nsignificant challenges. The difficulty in acquiring Ground Truth (GT) data has\nled state-of-the-art learning-based methods to rely on pre-training with\nmanually prepared synthetic CG data. This process is not only labor-intensive\nand costly but also introduces complications due to the domain gap when\ncompared to real-world data. In this study, we propose an optimization-based\napproach that eliminates the need for pre-training. Our method represents hair\nstrands as line segments growing from the scalp and optimizes them using a\nnovel differentiable rendering algorithm. To robustly optimize a substantial\nnumber of slender explicit geometries, we introduce 3D orientation estimation\nutilizing global optimization, strand initialization based on Laplace's\nequation, and reparameterization that leverages geometric connectivity and\nspatial proximity. Unlike existing optimization-based methods, our method is\ncapable of reconstructing internal hair flow in an absolute direction. 
Our\nmethod exhibits robust and accurate inverse rendering, surpassing the quality\nof existing methods and significantly improving processing speed.", "keywords": ["Computational imaging and physics-based vision"], "authors_list": ["Yusuke Takimoto", "Hikari Takehara", "Hiroyuki Sato", "Zihao Zhu", "Bo Zheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f15d"}, "filepath": "data/2404.02257.png", "tags": [], "_media_type": "image", "_rand": 0.999093030655552, "arXiv_link": "https://arxiv.org/abs/2404.02257", "other_link": "", "title": "SnAG: Scalable and Accurate Video Grounding", "abstract": "Temporal grounding of text descriptions in videos is a central problem in\nvision-language learning and video understanding. Existing methods often\nprioritize accuracy over scalability -- they have been optimized for grounding\nonly a few text queries within short videos, and fail to scale up to long\nvideos with hundreds of queries. In this paper, we study the effect of\ncross-modal fusion on the scalability of video grounding models. Our analysis\nestablishes late fusion as a more cost-effective fusion scheme for long-form\nvideos with many text queries. Moreover, it leads us to a novel, video-centric\nsampling scheme for efficient training. Based on these findings, we present\nSnAG, a simple baseline for scalable and accurate video grounding. Without\nbells and whistles, SnAG is 43% more accurate and 1.5x faster than CONE, a\nstate of the art for long-form video grounding on the challenging MAD dataset,\nwhile achieving highly competitive results on short videos.", "keywords": ["Multimodal models and vision-language models", "Scene analysis and understanding"], "authors_list": ["Fangzhou Mu", "Sicheng Mo", "Yin Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f15e"}, "filepath": "data/2403.05247.png", "tags": [], "_media_type": "image", "_rand": 0.999081838391567, "arXiv_link": "https://arxiv.org/abs/2403.05247", "other_link": "https://github.com/TRLou/HiT-ADV.", "title": "Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds", "abstract": "Adversarial attack methods based on point manipulation for 3D point cloud\nclassification have revealed the fragility of 3D models, yet the adversarial\nexamples they produce are easily perceived or defended against. The trade-off\nbetween the imperceptibility and adversarial strength leads most point attack\nmethods to inevitably introduce easily detectable outlier points upon a\nsuccessful attack. Another promising strategy, shape-based attack, can\neffectively eliminate outliers, but existing methods often suffer significant\nreductions in imperceptibility due to irrational deformations. We find that\nconcealing deformation perturbations in areas insensitive to human eyes can\nachieve a better trade-off between imperceptibility and adversarial strength,\nspecifically in parts of the object surface that are complex and exhibit\ndrastic curvature changes. 
Therefore, we propose a novel shape-based\nadversarial attack method, HiT-ADV, which initially conducts a two-stage search\nfor attack regions based on saliency and imperceptibility scores, and then adds\ndeformation perturbations in each attack region using Gaussian kernel\nfunctions. Additionally, HiT-ADV is extendable to physical attack. We propose\nthat by employing benign resampling and benign rigid transformations, we can\nfurther enhance physical adversarial strength with little sacrifice to\nimperceptibility. Extensive experiments have validated the superiority of our\nmethod in terms of adversarial and imperceptible properties in both digital and\nphysical spaces. Our code is avaliable at: https://github.com/TRLou/HiT-ADV.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Tianrui Lou", "Xiaojun Jia", "Jindong Gu", "Li Liu", "Siyuan Liang", "Bangyan He", "Xiaochun Cao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f15f"}, "filepath": "data/2404.09011.png", "tags": [], "_media_type": "image", "_rand": 0.9993621643350482, "arXiv_link": "https://arxiv.org/abs/2404.09011", "other_link": "", "title": "PracticalDG: Perturbation Distillation on Vision-Language Models for Hybrid Domain Generalization", "abstract": "Domain Generalization (DG) aims to resolve distribution shifts between source\nand target domains, and current DG methods are default to the setting that data\nfrom source and target domains share identical categories. Nevertheless, there\nexists unseen classes from target domains in practical scenarios. To address\nthis issue, Open Set Domain Generalization (OSDG) has emerged and several\nmethods have been exclusively proposed. However, most existing methods adopt\ncomplex architectures with slight improvement compared with DG methods.\nRecently, vision-language models (VLMs) have been introduced in DG following\nthe fine-tuning paradigm, but consume huge training overhead with large vision\nmodels. Therefore, in this paper, we innovate to transfer knowledge from VLMs\nto lightweight vision models and improve the robustness by introducing\nPerturbation Distillation (PD) from three perspectives, including Score, Class\nand Instance (SCI), named SCI-PD. Moreover, previous methods are oriented by\nthe benchmarks with identical and fixed splits, ignoring the divergence between\nsource domains. These methods are revealed to suffer from sharp performance\ndecay with our proposed new benchmark Hybrid Domain Generalization (HDG) and a\nnovel metric $H^{2}$-CV, which construct various splits to comprehensively\nassess the robustness of algorithms. 
Extensive experiments demonstrate that our\nmethod outperforms state-of-the-art algorithms on multiple datasets, especially\nimproving the robustness when confronting data scarcity.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Zining Chen", "Weiqiu Wang", "Zhicheng Zhao", "Fei Su", "Aidong Men", "Hongying Meng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f160"}, "filepath": "data/2401.11704v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997121680889589, "arXiv_link": "https://arxiv.org/html/2401.11704v1", "other_link": "", "title": "Kernel Adaptive Convolution for Scene Text Detection via Distance Map Prediction", "abstract": "Recently, scene text detection has received significant attention due to its\nwide application. However, accurate detection in complex scenes of multiple\nscales, orientations, and curvature remains a challenge. Numerous detection\nmethods adopt the Vatti clipping (VC) algorithm for multiple-instance training\nto address the issue of arbitrary-shaped text. Yet we identify several biased\nresults from these approaches, called the \"shrinked kernel\". Specifically, it\nrefers to a decrease in accuracy resulting from an output that overly favors\nthe text kernel. In this paper, we propose a new approach named Expand Kernel\nNetwork (EK-Net) with expand kernel distance to compensate for the previous\ndeficiency, which includes three-stage regression to complete instance\ndetection. Moreover, EK-Net not only realizes the precise positioning of\narbitrary-shaped text, but also achieves a trade-off between performance and\nspeed. Evaluation results demonstrate that EK-Net achieves state-of-the-art or\ncompetitive performance compared to other advanced methods, e.g., F-measure of\n85.72% at 35.42 FPS on ICDAR 2015, F-measure of 85.75% at 40.13 FPS on CTW1500.", "keywords": ["Scene analysis and understanding", "Document analysis and understanding"], "authors_list": ["Jinzhi Zheng", "Heng Fan", "Libo Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f161"}, "filepath": "data/2403.11162.png", "tags": [], "_media_type": "image", "_rand": 0.999806084867298, "arXiv_link": "https://arxiv.org/abs/2403.11162", "other_link": "https://github.com/Nicholas0228/Revelio.", "title": "CGI-DM: Digital Copyright Authentication for Diffusion Models via Contrasting Gradient Inversion", "abstract": "Diffusion Models (DMs) have evolved into advanced image generation tools,\nespecially for few-shot generation where a pretrained model is fine-tuned on a\nsmall set of images to capture a specific style or object. Despite their\nsuccess, concerns exist about potential copyright violations stemming from the\nuse of unauthorized data in this process. In response, we present Contrasting\nGradient Inversion for Diffusion Models (CGI-DM), a novel method featuring\nvivid visual representations for digital copyright authentication. Our approach\ninvolves removing partial information of an image and recovering missing\ndetails by exploiting conceptual differences between the pretrained and\nfine-tuned models. 
We formulate the differences as KL divergence between latent\nvariables of the two models when given the same input image, which can be\nmaximized through Monte Carlo sampling and Projected Gradient Descent (PGD).\nThe similarity between original and recovered images serves as a strong\nindicator of potential infringements. Extensive experiments on the WikiArt and\nDreambooth datasets demonstrate the high accuracy of CGI-DM in digital\ncopyright authentication, surpassing alternative validation techniques. Code\nimplementation is available at https://github.com/Nicholas0228/Revelio.", "keywords": ["Image and video generation and manipulation", "Vision applications for social good and ethics"], "authors_list": ["Xiaoyu Wu", "Yang Hua", "Chumeng Liang", "Jiaru Zhang", "Hao Wang", "Tao Song", "Haibing Guan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Cryptography and Security", "Computers and Society", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f162"}, "filepath": "data/2402.05746.png", "tags": [], "_media_type": "image", "_rand": 0.9992146380089914, "arXiv_link": "https://arxiv.org/abs/2402.05746", "other_link": "", "title": "Editable Scene Simulation for Autonomous Driving via LLM-Agent Collaboration", "abstract": "Scene simulation in autonomous driving has gained significant attention\nbecause of its huge potential for generating customized data. However, existing\neditable scene simulation approaches face limitations in terms of user\ninteraction efficiency, multi-camera photo-realistic rendering and external\ndigital assets integration. To address these challenges, this paper introduces\nChatSim, the first system that enables editable photo-realistic 3D driving\nscene simulations via natural language commands with external digital assets.\nTo enable editing with high command flexibility,~ChatSim leverages a large\nlanguage model (LLM) agent collaboration framework. To generate photo-realistic\noutcomes, ChatSim employs a novel multi-camera neural radiance field method.\nFurthermore, to unleash the potential of extensive high-quality digital assets,\nChatSim employs a novel multi-camera lighting estimation method to achieve\nscene-consistent assets' rendering. 
Our experiments on Waymo Open Dataset\ndemonstrate that ChatSim can handle complex language commands and generate\ncorresponding photo-realistic scene videos.", "keywords": ["Image and video generation and manipulation", "Scene analysis and understanding"], "authors_list": ["Yuxi Wei", "Zi Wang", "Yifan Lu", "Chenxin Xu", "Changxing Liu", "Hao Zhao", "Siheng Chen", "Yanfeng Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f163"}, "filepath": "data/2403.15760.png", "tags": [], "_media_type": "image", "_rand": 0.9999721275245613, "arXiv_link": "https://arxiv.org/abs/2403.15760", "other_link": "https://github.com/TsingZ0/FedKTL", "title": "An Upload-Efficient Scheme for Transferring Knowledge From a Server-Side Pre-trained Generator to Clients in Heterogeneous Federated Learning", "abstract": "Heterogeneous Federated Learning (HtFL) enables collaborative learning on\nmultiple clients with different model architectures while preserving privacy.\nDespite recent research progress, knowledge sharing in HtFL is still difficult\ndue to data and model heterogeneity. To tackle this issue, we leverage the\nknowledge stored in pre-trained generators and propose a new upload-efficient\nknowledge transfer scheme called Federated Knowledge-Transfer Loop (FedKTL).\nOur FedKTL can produce client-task-related prototypical image-vector pairs via\nthe generator's inference on the server. With these pairs, each client can\ntransfer pre-existing knowledge from the generator to its local model through\nan additional supervised local task. We conduct extensive experiments on four\ndatasets under two types of data heterogeneity with 14 kinds of models\nincluding CNNs and ViTs. Results show that our upload-efficient FedKTL\nsurpasses seven state-of-the-art methods by up to 7.31% in accuracy. Moreover,\nour knowledge transfer scheme is applicable in scenarios with only one edge\nclient. Code: https://github.com/TsingZ0/FedKTL", "keywords": ["Efficient and scalable vision"], "authors_list": ["Jianqing Zhang", "Yang Liu", "Yang Hua", "Jian Cao"], "category_name": "Artificial Intelligence", "all_categories": ["Artificial Intelligence", "Distributed, Parallel, and Cluster Computing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f164"}, "filepath": "data/2403.11074.png", "tags": [], "_media_type": "image", "_rand": 0.9998951492303584, "arXiv_link": "https://arxiv.org/abs/2403.11074", "other_link": "", "title": "Audio-Visual Segmentation via Unlabeled Frame Exploitation", "abstract": "Audio-visual segmentation (AVS) aims to segment the sounding objects in video\nframes. Although great progress has been witnessed, we experimentally reveal\nthat current methods reach marginal performance gain within the use of the\nunlabeled frames, leading to the underutilization issue. To fully explore the\npotential of the unlabeled frames for AVS, we explicitly divide them into two\ncategories based on their temporal characteristics, i.e., neighboring frame\n(NF) and distant frame (DF). NFs, temporally adjacent to the labeled frame,\noften contain rich motion information that assists in the accurate localization\nof sounding objects. 
Contrary to NFs, DFs have long temporal distances from the\nlabeled frame, which share semantic-similar objects with appearance variations.\nConsidering their unique characteristics, we propose a versatile framework that\neffectively leverages them to tackle AVS. Specifically, for NFs, we exploit the\nmotion cues as the dynamic guidance to improve the objectness localization.\nBesides, we exploit the semantic cues in DFs by treating them as valid\naugmentations to the labeled frames, which are then used to enrich data\ndiversity in a self-training manner. Extensive experimental results demonstrate\nthe versatility and superiority of our method, unleashing the power of the\nabundant unlabeled frames.", "keywords": [], "authors_list": ["Jinxiang Liu", "Yikun Liu", "Ferenas", "Chen Ju", "Ya Zhang", "Yanfeng Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Multimedia", "Sound", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f165"}, "filepath": "data/2309.16421.png", "tags": [], "_media_type": "image", "_rand": 0.9995022129003648, "arXiv_link": "https://arxiv.org/abs/2309.16421", "other_link": "", "title": "Distilling ODE Solvers of Diffusion Models into Smaller Steps", "abstract": "Abstract Diffusion models have recently gained prominence as a novel category\nof generative models. Despite their success, these models face a notable\ndrawback in terms of slow sampling speeds, requiring a high number of function\nevaluations (NFE) in the order of hundreds or thousands. In response, both\nlearning-free and learning-based sampling strategies have been explored to\nexpedite the sampling process. Learning-free sampling employs various ordinary\ndifferential equation (ODE) solvers based on the formulation of diffusion ODEs.\nHowever, it encounters challenges in faithfully tracking the true sampling\ntrajectory, particularly for small NFE. Conversely, learning-based sampling\nmethods, such as knowledge distillation, demand extensive additional training,\nlimiting their practical applicability. To overcome these limitations, we\nintroduce Distilled-ODE solvers (D-ODE solvers), a straightforward distillation\napproach grounded in ODE solver formulations. Our method seamlessly integrates\nthe strengths of both learning-free and learning-based sampling. D-ODE solvers\nare constructed by introducing a single parameter adjustment to existing ODE\nsolvers. Furthermore, we optimize D-ODE solvers with smaller steps using\nknowledge distillation from ODE solvers with larger steps across a batch of\nsamples. Comprehensive experiments demonstrate the superior performance of\nD-ODE solvers compared to existing ODE solvers, including DDIM, PNDM,\nDPM-Solver, DEIS, and EDM, particularly in scenarios with fewer NFE. Notably,\nour method incurs negligible computational overhead compared to previous\ndistillation techniques, facilitating straightforward and rapid integration\nwith existing samplers. 
Qualitative analysis reveals that D-ODE solvers not\nonly enhance image quality but also faithfully follow the target ODE\ntrajectory.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Sanghwan Kim", "Hao Tang", "Fisher Yu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f166"}, "filepath": "data/2404.07933.png", "tags": [], "_media_type": "image", "_rand": 0.9990528111647149, "arXiv_link": "https://arxiv.org/abs/2404.07933", "other_link": "", "title": "Boosting Self-Supervision for Single-View Scene Completion via Knowledge Distillation", "abstract": "Inferring scene geometry from images via Structure from Motion is a\nlong-standing and fundamental problem in computer vision. While classical\napproaches and, more recently, depth map predictions only focus on the visible\nparts of a scene, the task of scene completion aims to reason about geometry\neven in occluded regions. With the popularity of neural radiance fields\n(NeRFs), implicit representations also became popular for scene completion by\npredicting so-called density fields. Unlike explicit approaches, e.g.,\nvoxel-based methods, density fields also allow for accurate depth prediction\nand novel-view synthesis via image-based rendering. In this work, we propose to\nfuse the scene reconstruction from multiple images and distill this knowledge\ninto a more accurate single-view scene reconstruction. To this end, we propose\nMulti-View Behind the Scenes (MVBTS) to fuse density fields from multiple posed\nimages, trained fully self-supervised only from image data. Using knowledge\ndistillation, we use MVBTS to train a single-view scene completion network via\ndirect supervision called KDBTS. It achieves state-of-the-art performance on\noccupancy prediction, especially in occluded regions.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Keonhee Han", "Dominik Muhle", "Felix Wimbauer", "Daniel Cremers"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f167"}, "filepath": "data/2403.15194.png", "tags": [], "_media_type": "image", "_rand": 0.9992590329700987, "arXiv_link": "https://arxiv.org/abs/2403.15194", "other_link": "", "title": "Your Image is My Video: Reshaping the Receptive Field via Image-To-Video Differentiable AutoAugmentation and Fusion", "abstract": "The landscape of deep learning research is moving towards innovative\nstrategies to harness the true potential of data. Traditionally, emphasis has\nbeen on scaling model architectures, resulting in large and complex neural\nnetworks, which can be difficult to train with limited computational resources.\nHowever, independently of the model size, data quality (i.e. amount and\nvariability) is still a major factor that affects model generalization. In this\nwork, we propose a novel technique to exploit available data through the use of\nautomatic data augmentation for the tasks of image classification and semantic\nsegmentation. 
We introduce the first Differentiable Augmentation Search method\n(DAS) to generate variations of images that can be processed as videos.\nCompared to previous approaches, DAS is extremely fast and flexible, allowing\nthe search on very large search spaces in less than a GPU day. Our intuition is\nthat the increased receptive field in the temporal dimension provided by DAS\ncould lead to benefits also to the spatial receptive field. More specifically,\nwe leverage DAS to guide the reshaping of the spatial receptive field by\nselecting task-dependant transformations. As a result, compared to standard\naugmentation alternatives, we improve in terms of accuracy on ImageNet,\nCifar10, Cifar100, Tiny-ImageNet, Pascal-VOC-2012 and CityScapes datasets when\nplugging-in our DAS over different light-weight video backbones.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Sofia Casarin", "Cynthia Ugwu", "Sergio Escalera", "Oswald Lanz"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f168"}, "filepath": "data/2404.15620.png", "tags": [], "_media_type": "image", "_rand": 0.9992937583984404, "arXiv_link": "https://arxiv.org/abs/2404.15620", "other_link": "https://github.com/XYLGroup/DKP.", "title": "A Dynamic Kernel Prior Model for Unsupervised Blind Image Super-Resolution", "abstract": "Deep learning-based methods have achieved significant successes on solving\nthe blind super-resolution (BSR) problem. However, most of them request\nsupervised pre-training on labelled datasets. This paper proposes an\nunsupervised kernel estimation model, named dynamic kernel prior (DKP), to\nrealize an unsupervised and pre-training-free learning-based algorithm for\nsolving the BSR problem. DKP can adaptively learn dynamic kernel priors to\nrealize real-time kernel estimation, and thereby enables superior HR image\nrestoration performances. This is achieved by a Markov chain Monte Carlo\nsampling process on random kernel distributions. The learned kernel prior is\nthen assigned to optimize a blur kernel estimation network, which entails a\nnetwork-based Langevin dynamic optimization strategy. These two techniques\nensure the accuracy of the kernel estimation. DKP can be easily used to replace\nthe kernel estimation models in the existing methods, such as Double-DIP and\nFKP-DIP, or be added to the off-the-shelf image restoration model, such as\ndiffusion model. In this paper, we incorporate our DKP model with DIP and\ndiffusion model, referring to DIP-DKP and Diff-DKP, for validations. 
Extensive\nsimulations on Gaussian and motion kernel scenarios demonstrate that the\nproposed DKP model can significantly improve the kernel estimation with\ncomparable runtime and memory usage, leading to state-of-the-art BSR results.\nThe code is available at https://github.com/XYLGroup/DKP.", "keywords": ["Low-level vision"], "authors_list": ["Zhixiong Yang", "Jingyuan Xia", "Shengxi Li", "Xinghua Huang", "Shuanghui Zhang", "Zhen Liu", "Yaowen Fu", "Yongxiang Liu"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f169"}, "filepath": "data/2308.09905.png", "tags": [], "_media_type": "image", "_rand": 0.9995219112504407, "arXiv_link": "https://arxiv.org/abs/2308.09905", "other_link": "", "title": "DiffusionTrack: Point Set Diffusion Model for Visual Object Tracking", "abstract": "Multi-object tracking (MOT) is a challenging vision task that aims to detect\nindividual objects within a single frame and associate them across multiple\nframes. Recent MOT approaches can be categorized into two-stage\ntracking-by-detection (TBD) methods and one-stage joint detection and tracking\n(JDT) methods. Despite the success of these approaches, they also suffer from\ncommon problems, such as harmful global or local inconsistency, poor trade-off\nbetween robustness and model complexity, and lack of flexibility in different\nscenes within the same video. In this paper we propose a simple but robust\nframework that formulates object detection and association jointly as a\nconsistent denoising diffusion process from paired noise boxes to paired\nground-truth boxes. This novel progressive denoising diffusion strategy\nsubstantially augments the tracker's effectiveness, enabling it to discriminate\nbetween various objects. During the training stage, paired object boxes diffuse\nfrom paired ground-truth boxes to random distribution, and the model learns\ndetection and tracking simultaneously by reversing this noising process. In\ninference, the model refines a set of paired randomly generated boxes to the\ndetection and tracking results in a flexible one-step or multi-step denoising\ndiffusion process. Extensive experiments on three widely used MOT benchmarks,\nincluding MOT17, MOT20, and Dancetrack, demonstrate that our approach achieves\ncompetitive performance compared to the current state-of-the-art methods.", "keywords": [], "authors_list": ["Fei Xie", "Zhongdao Wang", "Chao Ma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f16a"}, "filepath": "data/2403.14366.png", "tags": [], "_media_type": "image", "_rand": 0.9990260439753709, "arXiv_link": "https://arxiv.org/abs/2403.14366", "other_link": "", "title": "SurroundSDF: Implicit 3D Scene Understanding Based on Signed Distance Field", "abstract": "Vision-centric 3D environment understanding is both vital and challenging for\nautonomous driving systems. Recently, object-free methods have attracted\nconsiderable attention. Such methods perceive the world by predicting the\nsemantics of discrete voxel grids but fail to construct continuous and accurate\nobstacle surfaces. To this end, in this paper, we propose SurroundSDF to\nimplicitly predict the signed distance field (SDF) and semantic field for the\ncontinuous perception from surround images. 
Specifically, we introduce a\nquery-based approach and utilize SDF constrained by the Eikonal formulation to\naccurately describe the surfaces of obstacles. Furthermore, considering the\nabsence of precise SDF ground truth, we propose a novel weakly supervised\nparadigm for SDF, referred to as the Sandwich Eikonal formulation, which\nemphasizes applying correct and dense constraints on both sides of the surface,\nthereby enhancing the perceptual accuracy of the surface. Experiments suggest\nthat our method achieves SOTA for both occupancy prediction and 3D scene\nreconstruction tasks on the nuScenes dataset.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Lizhe Liu", "Bohua Wang", "Hongwei Xie", "Daqi Liu", "Li Liu", "Kuiyuan Yang", "Bing Wang", "Zhiqiang Tian"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f16b"}, "filepath": "data/2312.06439.png", "tags": [], "_media_type": "image", "_rand": 0.9998125304641488, "arXiv_link": "https://arxiv.org/abs/2312.06439", "other_link": "https://github.com/tyhuang0428/DreamControl.", "title": "DreamControl: Control-Based Text-to-3D Generation with 3D Self-Prior", "abstract": "3D generation has raised great attention in recent years. With the success of\ntext-to-image diffusion models, the 2D-lifting technique becomes a promising\nroute to controllable 3D generation. However, these methods tend to present\ninconsistent geometry, which is also known as the Janus problem. We observe\nthat the problem is caused mainly by two aspects, i.e., viewpoint bias in 2D\ndiffusion models and overfitting of the optimization objective. To address it,\nwe propose a two-stage 2D-lifting framework, namely DreamControl, which\noptimizes coarse NeRF scenes as 3D self-prior and then generates fine-grained\nobjects with control-based score distillation. Specifically, adaptive viewpoint\nsampling and boundary integrity metric are proposed to ensure the consistency\nof generated priors. The priors are then regarded as input conditions to\nmaintain reasonable geometries, in which conditional LoRA and weighted score\nare further proposed to optimize detailed textures. DreamControl can generate\nhigh-quality 3D content in terms of both geometry consistency and texture\nfidelity. Moreover, our control-based optimization guidance is applicable to\nmore downstream tasks, including user-guided generation and 3D animation. The\nproject page is available at https://github.com/tyhuang0428/DreamControl.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Tianyu Huang", "Yihan Zeng", "Zhilu Zhang", "Wan Xu", "Hang Xu", "Songcen Xu", "Rynson W.H. Lau", "Wangmeng Zuo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f16c"}, "filepath": "data/2309.03185.png", "tags": [], "_media_type": "image", "_rand": 0.999252802778875, "arXiv_link": "https://arxiv.org/abs/2309.03185", "other_link": "https://bayesrays.github.io.", "title": "Bayes' Rays: Uncertainty Quantification for Neural Radiance Fields", "abstract": "Neural Radiance Fields (NeRFs) have shown promise in applications like view\nsynthesis and depth estimation, but learning from multiview images faces\ninherent uncertainties. 
Current methods to quantify them are either heuristic\nor computationally demanding. We introduce BayesRays, a post-hoc framework to\nevaluate uncertainty in any pre-trained NeRF without modifying the training\nprocess. Our method establishes a volumetric uncertainty field using spatial\nperturbations and a Bayesian Laplace approximation. We derive our algorithm\nstatistically and show its superior performance in key metrics and\napplications. Additional results available at: https://bayesrays.github.io.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Leili Goli", "Cody Reading", "Silvia Sell\u00e1n", "Alec Jacobson", "Andrea Tagliasacchi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f16d"}, "filepath": "data/2306.07831.png", "tags": [], "_media_type": "image", "_rand": 0.9991796449929539, "arXiv_link": "https://arxiv.org/abs/2306.07831", "other_link": "https://github.com/mahmoodlab/MI-Zero.", "title": "CPLIP: Zero-Shot Learning for Histopathology with Comprehensive Vision-Language Alignment", "abstract": "Contrastive visual language pretraining has emerged as a powerful method for\neither training new language-aware image encoders or augmenting existing\npretrained models with zero-shot visual recognition capabilities. However,\nexisting works typically train on large datasets of image-text pairs and have\nbeen designed to perform downstream tasks involving only small to medium\nsized-images, neither of which are applicable to the emerging field of\ncomputational pathology where there are limited publicly available paired\nimage-text datasets and each image can span up to 100,000 x 100,000 pixels. In\nthis paper we present MI-Zero, a simple and intuitive framework for unleashing\nthe zero-shot transfer capabilities of contrastively aligned image and text\nmodels on gigapixel histopathology whole slide images, enabling multiple\ndownstream diagnostic tasks to be carried out by pretrained encoders without\nrequiring any additional labels. MI-Zero reformulates zero-shot transfer under\nthe framework of multiple instance learning to overcome the computational\nchallenge of inference on extremely large images. We used over 550k pathology\nreports and other available in-domain text corpora to pre-train our text\nencoder. By effectively leveraging strong pre-trained encoders, our best model\npretrained on over 33k histopathology image-caption pairs achieves an average\nmedian zero-shot accuracy of 70.2% across three different real-world cancer\nsubtyping tasks. 
Our code is available at:\nhttps://github.com/mahmoodlab/MI-Zero.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Sajid Javed", "Arif Mahmood", "IYYAKUTTI IYAPPAN GANAPATHI", "Fayaz Ali", "Naoufel Werghi", "Mohammed Bennamoun"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f16e"}, "filepath": "data/2402.18573.png", "tags": [], "_media_type": "image", "_rand": 0.9990188385266342, "arXiv_link": "https://arxiv.org/abs/2402.18573", "other_link": "", "title": "UniMODE: Unified Monocular 3D Object Detection", "abstract": "Realizing unified monocular 3D object detection, including both indoor and\noutdoor scenes, holds great importance in applications like robot navigation.\nHowever, involving various scenarios of data to train models poses challenges\ndue to their significantly different characteristics, e.g., diverse geometry\nproperties and heterogeneous domain distributions. To address these challenges,\nwe build a detector based on the bird's-eye-view (BEV) detection paradigm,\nwhere the explicit feature projection is beneficial to addressing the geometry\nlearning ambiguity when employing multiple scenarios of data to train\ndetectors. Then, we split the classical BEV detection architecture into two\nstages and propose an uneven BEV grid design to handle the convergence\ninstability caused by the aforementioned challenges. Moreover, we develop a\nsparse BEV feature projection strategy to reduce computational cost and a\nunified domain alignment method to handle heterogeneous domains. Combining\nthese techniques, a unified detector UniMODE is derived, which surpasses the\nprevious state-of-the-art on the challenging Omni3D dataset (a large-scale\ndataset including both indoor and outdoor scenes) by 4.9% AP_3D, revealing the\nfirst successful generalization of a BEV detector to unified 3D object\ndetection.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Zhuoling Li", "Xiaogang Xu", "Ser-Nam Lim", "Hengshuang Zhao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f16f"}, "filepath": "data/2310.12877v4.png", "tags": [], "_media_type": "image", "_rand": 0.999104703166217, "arXiv_link": "https://arxiv.org/abs/2310.12877v4", "other_link": "", "title": "Perceptual Assessment and Optimization of HDR Image Rendering", "abstract": "High dynamic range (HDR) rendering has the ability to faithfully reproduce\nthe wide luminance ranges in natural scenes, but how to accurately assess the\nrendering quality is relatively underexplored. Existing quality models are\nmostly designed for low dynamic range (LDR) images, and do not align well with\nhuman perception of HDR image quality. To fill this gap, we propose a family of\nHDR quality metrics, in which the key step is employing a simple inverse\ndisplay model to decompose an HDR image into a stack of LDR images with varying\nexposures. Subsequently, these decomposed images are assessed through\nwell-established LDR quality metrics. Our HDR quality models present three\ndistinct benefits. First, they directly inherit the recent advancements of LDR\nquality metrics. 
Second, they do not rely on human perceptual data of HDR image\nquality for re-calibration. Third, they facilitate the alignment and\nprioritization of specific luminance ranges for more accurate and detailed\nquality assessment. Experimental results show that our HDR quality metrics\nconsistently outperform existing models in terms of quality assessment on four\nHDR image quality datasets and perceptual optimization of HDR novel view\nsynthesis.", "keywords": [], "authors_list": ["Peibei Cao", "Rafal Mantiuk", "Kede Ma"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f170"}, "filepath": "data/2312.10118.png", "tags": [], "_media_type": "image", "_rand": 0.9995290651535702, "arXiv_link": "https://arxiv.org/abs/2312.10118", "other_link": "", "title": "From-Ground-To-Objects: Coarse-to-Fine Self-supervised Monocular Depth Estimation of Dynamic Objects with Ground Contact Prior", "abstract": "Self-supervised monocular depth estimation (DE) is an approach to learning\ndepth without costly depth ground truths. However, it often struggles with\nmoving objects that violate the static scene assumption during training. To\naddress this issue, we introduce a coarse-to-fine training strategy leveraging\nthe ground contacting prior based on the observation that most moving objects\nin outdoor scenes contact the ground. In the coarse training stage, we exclude\nthe objects in dynamic classes from the reprojection loss calculation to avoid\ninaccurate depth learning. To provide precise supervision on the depth of the\nobjects, we present a novel Ground-contacting-prior Disparity Smoothness Loss\n(GDS-Loss) that encourages a DE network to align the depth of the objects with\ntheir ground-contacting points. Subsequently, in the fine training stage, we\nrefine the DE network to learn the detailed depth of the objects from the\nreprojection loss, while ensuring accurate DE on the moving object regions by\nemploying our regularization loss with a cost-volume-based weighting factor.\nOur overall coarse-to-fine training strategy can easily be integrated with\nexisting DE methods without any modifications, significantly enhancing DE\nperformance on challenging Cityscapes and KITTI datasets, especially in the\nmoving object regions.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Jaeho Moon", "Juan Luis Gonzalez Bello", "Byeongjun Kwon", "Munchurl Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f171"}, "filepath": "data/2405.14881.png", "tags": [], "_media_type": "image", "_rand": 0.9999838162162161, "arXiv_link": "https://arxiv.org/abs/2405.14881", "other_link": "https://diffusemix.github.io/", "title": "DiffuseMix: Label-Preserving Data Augmentation with Diffusion Models", "abstract": "Recently, a number of image-mixing-based augmentation techniques have been\nintroduced to improve the generalization of deep neural networks. In these\ntechniques, two or more randomly selected natural images are mixed together to\ngenerate an augmented image. Such methods may not only omit important portions\nof the input images but also introduce label ambiguities by mixing images\nacross labels resulting in misleading supervisory signals. 
To address these\nlimitations, we propose DiffuseMix, a novel data augmentation technique that\nleverages a diffusion model to reshape training images, supervised by our\nbespoke conditional prompts. First, concatenation of a partial natural image\nand its generated counterpart is obtained, which helps in avoiding the\ngeneration of unrealistic images or label ambiguities. Then, to enhance\nresilience against adversarial attacks and improve safety measures, a randomly\nselected structural pattern from a set of fractal images is blended into the\nconcatenated image to form the final augmented image for training. Our\nempirical results on seven different datasets reveal that DiffuseMix achieves\nsuperior performance compared to existing state-of-the-art methods on tasks\nincluding general classification, fine-grained classification, fine-tuning, data\nscarcity, and adversarial robustness. Augmented datasets and codes are\navailable here: https://diffusemix.github.io/", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Khawar Islam", "Muhammad Zaigham Zaheer", "Arif Mahmood", "Karthik Nandakumar"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f172"}, "filepath": "data/2405.16038.png", "tags": [], "_media_type": "image", "_rand": 0.9995299221953289, "arXiv_link": "https://arxiv.org/abs/2405.16038", "other_link": "https://github.com/XueZ-phd/Efficient-RGB-T-Early-Fusion-Detection}.", "title": "Neural Exposure Fusion for High-Dynamic Range Object Detection", "abstract": "Most recent multispectral object detectors employ a two-branch structure to\nextract features from RGB and thermal images. While the two-branch structure\nachieves better performance than a single-branch structure, it overlooks\ninference efficiency. This conflict is increasingly aggressive, as recent works\nsolely pursue higher performance rather than both performance and efficiency.\nIn this paper, we address this issue by improving the performance of efficient\nsingle-branch structures. We revisit the reasons causing the performance gap\nbetween these structures. For the first time, we reveal the information\ninterference problem in the naive early-fusion strategy adopted by previous\nsingle-branch structures. Besides, we find that the domain gap between\nmultispectral images, and weak feature representation of the single-branch\nstructure are also key obstacles for performance. Focusing on these three\nproblems, we propose corresponding solutions, including a novel shape-priority\nearly-fusion strategy, a weakly supervised learning method, and a core\nknowledge distillation technique. Experiments demonstrate that single-branch\nnetworks equipped with these three contributions achieve significant\nperformance enhancements while retaining high efficiency. 
Our code will be\navailable at\n\\url{https://github.com/XueZ-phd/Efficient-RGB-T-Early-Fusion-Detection}.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Emmanuel Onzon", "Maximilian B\u00f6mer", "Fahim Mannan", "Felix Heide"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f173"}, "filepath": "data/2311.09104.png", "tags": [], "_media_type": "image", "_rand": 0.9996459006921145, "arXiv_link": "https://arxiv.org/abs/2311.09104", "other_link": "", "title": "Cross-view and Cross-pose Completion for 3D Human Understanding", "abstract": "Human perception and understanding is a major domain of computer vision\nwhich, like many other vision subdomains recently, stands to gain from the use\nof large models pre-trained on large datasets. We hypothesize that the most\ncommon pre-training strategy of relying on general purpose, object-centric\nimage datasets such as ImageNet, is limited by an important domain shift. On\nthe other hand, collecting domain-specific ground truth such as 2D or 3D labels\ndoes not scale well. Therefore, we propose a pre-training approach based on\nself-supervised learning that works on human-centric data using only images.\nOur method uses pairs of images of humans: the first is partially masked and\nthe model is trained to reconstruct the masked parts given the visible ones and\na second image. It relies on both stereoscopic (cross-view) pairs, and temporal\n(cross-pose) pairs taken from videos, in order to learn priors about 3D as well\nas human motion. We pre-train a model for body-centric tasks and one for\nhand-centric tasks. With a generic transformer architecture, these models\noutperform existing self-supervised pre-training methods on a wide set of\nhuman-centric downstream tasks, and obtain state-of-the-art performance for\ninstance when fine-tuning for model-based and model-free human mesh recovery.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Matthieu Armando", "Salma Galaaoui", "Fabien Baradel", "Thomas Lucas", "Vincent Leroy", "Romain BR\u00c9GIER", "Philippe Weinzaepfel", "Gr\u00e9gory Rogez"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f174"}, "filepath": "data/2312.04483.png", "tags": [], "_media_type": "image", "_rand": 0.9998007097067746, "arXiv_link": "https://arxiv.org/abs/2312.04483", "other_link": "", "title": "Hierarchical Spatio-temporal Decoupling for Text-to-Video Generation", "abstract": "Despite diffusion models having shown powerful abilities to generate\nphotorealistic images, generating videos that are realistic and diverse still\nremains in its infancy. One of the key reasons is that current methods\nintertwine spatial content and temporal dynamics together, leading to a notably\nincreased complexity of text-to-video generation (T2V). In this work, we\npropose HiGen, a diffusion model-based method that improves performance by\ndecoupling the spatial and temporal factors of videos from two perspectives,\ni.e., structure level and content level. At the structure level, we decompose\nthe T2V task into two steps, including spatial reasoning and temporal\nreasoning, using a unified denoiser. 
Specifically, we generate spatially\ncoherent priors using text during spatial reasoning and then generate\ntemporally coherent motions from these priors during temporal reasoning. At the\ncontent level, we extract two subtle cues from the content of the input video\nthat can express motion and appearance changes, respectively. These two cues\nthen guide the model's training for generating videos, enabling flexible\ncontent variations and enhancing temporal stability. Through the decoupled\nparadigm, HiGen can effectively reduce the complexity of this task and generate\nrealistic videos with semantics accuracy and motion stability. Extensive\nexperiments demonstrate the superior performance of HiGen over the\nstate-of-the-art T2V methods.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models", "Deep learning architectures and techniques"], "authors_list": ["Zhiwu Qing", "Shiwei Zhang", "Jiayu Wang", "Xiang Wang", "Yujie Wei", "Yingya Zhang", "Changxin Gao", "Nong Sang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f175"}, "filepath": "data/2403.14302.png", "tags": [], "_media_type": "image", "_rand": 0.9993654989965542, "arXiv_link": "https://arxiv.org/abs/2403.14302", "other_link": "", "title": "SpikingResformer: Bridging ResNet and Vision Transformer in Spiking Neural Networks", "abstract": "The remarkable success of Vision Transformers in Artificial Neural Networks\n(ANNs) has led to a growing interest in incorporating the self-attention\nmechanism and transformer-based architecture into Spiking Neural Networks\n(SNNs). While existing methods propose spiking self-attention mechanisms that\nare compatible with SNNs, they lack reasonable scaling methods, and the overall\narchitectures proposed by these methods suffer from a bottleneck in effectively\nextracting local features. To address these challenges, we propose a novel\nspiking self-attention mechanism named Dual Spike Self-Attention (DSSA) with a\nreasonable scaling method. Based on DSSA, we propose a novel spiking Vision\nTransformer architecture called SpikingResformer, which combines the\nResNet-based multi-stage architecture with our proposed DSSA to improve both\nperformance and energy efficiency while reducing parameters. Experimental\nresults show that SpikingResformer achieves higher accuracy with fewer\nparameters and lower energy consumption than other spiking Vision Transformer\ncounterparts. Notably, our SpikingResformer-L achieves 79.40% top-1 accuracy on\nImageNet with 4 time-steps, which is the state-of-the-art result in the SNN\nfield.", "keywords": [], "authors_list": ["Xinyu Shi", "Zecheng Hao", "Zhaofei Yu"], "category_name": "Neural and Evolutionary Computing", "all_categories": ["Neural and Evolutionary Computing", "Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f176"}, "filepath": "data/2312.17648.png", "tags": [], "_media_type": "image", "_rand": 0.9999846827667133, "arXiv_link": "https://arxiv.org/abs/2312.17648", "other_link": "", "title": "C$^2$KD: Bridging the Modality Gap for Cross-Modal Knowledge Distillation", "abstract": "Visual grounding aims to align visual information of specific regions of\nimages with corresponding natural language expressions. 
Current visual\ngrounding methods leverage pre-trained visual and language backbones separately\nto obtain visual features and linguistic features. Although these two types of\nfeatures are then fused via delicately designed networks, the heterogeneity of\nthe features makes them inapplicable for multi-modal reasoning. This problem\narises from the domain gap between the single-modal pre-training backbone used\nin current visual grounding methods, which can hardly be overcome by the\ntraditional end-to-end training method. To alleviate this, our work proposes an\nEmpowering pre-trained model for Visual Grounding (EpmVG) framework, which\ndistills a multimodal pre-trained model to guide the visual grounding task.\nEpmVG is based on a novel cross-modal distillation mechanism, which can\neffectively introduce the consistency information of images and texts in the\npre-trained model, to reduce the domain gap existing in the backbone networks,\nthereby improving the performance of the model in the visual grounding task.\nExtensive experiments are carried out on five conventionally used datasets, and\nresults demonstrate that our method achieves better performance than\nstate-of-the-art methods.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Fushuo Huo", "Wenchao Xu", "Jingcai Guo", "Haozhao Wang", "Song Guo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f177"}, "filepath": "data/2404.09884.png", "tags": [], "_media_type": "image", "_rand": 0.9999873123712718, "arXiv_link": "https://arxiv.org/abs/2404.09884", "other_link": "https://nianticlabs.github.io/marepo", "title": "Map-Relative Pose Regression for Visual Re-Localization", "abstract": "Pose regression networks predict the camera pose of a query image relative to\na known environment. Within this family of methods, absolute pose regression\n(APR) has recently shown promising accuracy in the range of a few centimeters\nin position error. APR networks encode the scene geometry implicitly in their\nweights. To achieve high accuracy, they require vast amounts of training data\nthat, realistically, can only be created using novel view synthesis in a\ndays-long process. This process has to be repeated for each new scene again and\nagain. We present a new approach to pose regression, map-relative pose\nregression (marepo), that satisfies the data hunger of the pose regression\nnetwork in a scene-agnostic fashion. We condition the pose regressor on a\nscene-specific map representation such that its pose predictions are relative\nto the scene map. This allows us to train the pose regressor across hundreds of\nscenes to learn the generic relation between a scene-specific map\nrepresentation and the camera pose. Our map-relative pose regressor can be\napplied to new map representations immediately or after mere minutes of\nfine-tuning for the highest accuracy. Our approach outperforms previous pose\nregression methods by far on two public datasets, indoor and outdoor. 
Code is\navailable: https://nianticlabs.github.io/marepo", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Shuai Chen", "Tommaso Cavallari", "Victor Adrian Prisacariu", "Eric Brachmann"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f178"}, "filepath": "data/2404.01943.png", "tags": [], "_media_type": "image", "_rand": 0.9996117974263833, "arXiv_link": "https://arxiv.org/abs/2404.01943", "other_link": "", "title": "Lookahead Exploration with Neural Radiance Representation for Continuous Vision-Language Navigation", "abstract": "Vision-and-language navigation (VLN) enables the agent to navigate to a\nremote location following the natural language instruction in 3D environments.\nAt each navigation step, the agent selects from possible candidate locations\nand then makes the move. For better navigation planning, the lookahead\nexploration strategy aims to effectively evaluate the agent's next action by\naccurately anticipating the future environment of candidate locations. To this\nend, some existing works predict RGB images for future environments, while this\nstrategy suffers from image distortion and high computational cost. To address\nthese issues, we propose the pre-trained hierarchical neural radiance\nrepresentation model (HNR) to produce multi-level semantic features for future\nenvironments, which are more robust and efficient than pixel-wise RGB\nreconstruction. Furthermore, with the predicted future environmental\nrepresentations, our lookahead VLN model is able to construct the navigable\nfuture path tree and select the optimal path via efficient parallel evaluation.\nExtensive experiments on the VLN-CE datasets confirm the effectiveness of our\nmethod.", "keywords": ["Efficient and scalable vision", "Deep learning architectures and techniques"], "authors_list": ["Zihan Wang", "Xiangyang Li", "Jiahao Yang", "Yeqi Liu", "Junjie Hu", "Ming Jiang", "Shuqiang Jiang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f179"}, "filepath": "data/2312.09138.png", "tags": [], "_media_type": "image", "_rand": 0.999559715488127, "arXiv_link": "https://arxiv.org/abs/2312.09138", "other_link": "", "title": "Living Scenes: Multi-object Relocalization and Reconstruction in Changing 3D Environments", "abstract": "Research into dynamic 3D scene understanding has primarily focused on\nshort-term change tracking from dense observations, while little attention has\nbeen paid to long-term changes with sparse observations. We address this gap\nwith MoRE, a novel approach for multi-object relocalization and reconstruction\nin evolving environments. We view these environments as \"living scenes\" and\nconsider the problem of transforming scans taken at different points in time\ninto a 3D reconstruction of the object instances, whose accuracy and\ncompleteness increase over time. At the core of our method lies an\nSE(3)-equivariant representation in a single encoder-decoder network, trained\non synthetic data. This representation enables us to seamlessly tackle instance\nmatching, registration, and reconstruction. 
We also introduce a joint\noptimization algorithm that facilitates the accumulation of point clouds\noriginating from the same instance across multiple scans taken at different\npoints in time. We validate our method on synthetic and real-world data and\ndemonstrate state-of-the-art performance in both end-to-end performance and\nindividual subtasks.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Liyuan Zhu", "Shengyu Huang", "Konrad Schindler", "Iro Armeni"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f17a"}, "filepath": "data/2310.01407.png", "tags": [], "_media_type": "image", "_rand": 0.9997601763389505, "arXiv_link": "https://arxiv.org/abs/2310.01407", "other_link": "", "title": "CoDi: Conditional Diffusion Distillation for Higher-Fidelity and Faster Image Generation", "abstract": "Large generative diffusion models have revolutionized text-to-image\ngeneration and offer immense potential for conditional generation tasks such as\nimage enhancement, restoration, editing, and compositing. However, their\nwidespread adoption is hindered by the high computational cost, which limits\ntheir real-time application. To address this challenge, we introduce a novel\nmethod dubbed CoDi, that adapts a pre-trained latent diffusion model to accept\nadditional image conditioning inputs while significantly reducing the sampling\nsteps required to achieve high-quality results. Our method can leverage\narchitectures such as ControlNet to incorporate conditioning inputs without\ncompromising the model's prior knowledge gained during large scale\npre-training. Additionally, a conditional consistency loss enforces consistent\npredictions across diffusion steps, effectively compelling the model to\ngenerate high-quality images with conditions in a few steps. Our\nconditional-task learning and distillation approach outperforms previous\ndistillation methods, achieving a new state-of-the-art in producing\nhigh-quality images with very few steps (e.g., 1-4) across multiple tasks,\nincluding super-resolution, text-guided image editing, and depth-to-image\ngeneration.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Kangfu Mei", "Mauricio Delbracio", "Hossein Talebi", "Zhengzhong Tu", "Vishal M. Patel", "Peyman Milanfar"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f17b"}, "filepath": "data/2405.05010.png", "tags": [], "_media_type": "image", "_rand": 0.999820080003294, "arXiv_link": "https://arxiv.org/abs/2405.05010", "other_link": "", "title": "NeRFDeformer: NeRF Transformation from a Single View via 3D Scene Flows", "abstract": "Neural fields (NeRF) have emerged as a promising approach for representing\ncontinuous 3D scenes. Nevertheless, the lack of semantic encoding in NeRFs\nposes a significant challenge for scene decomposition. To address this\nchallenge, we present a single model, Multi-Modal Decomposition NeRF\n(${M^2D}$NeRF), that is capable of both text-based and visual patch-based\nedits. 
Specifically, we use multi-modal feature distillation to integrate\nteacher features from pretrained visual and language models into 3D semantic\nfeature volumes, thereby facilitating consistent 3D editing. To enforce\nconsistency between the visual and language features in our 3D feature volumes,\nwe introduce a multi-modal similarity constraint. We also introduce a\npatch-based joint contrastive loss that helps to encourage object-regions to\ncoalesce in the 3D feature space, resulting in more precise boundaries.\nExperiments on various real-world scenes show superior performance in 3D scene\ndecomposition tasks compared to prior NeRF-based methods.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Zhenggang Tang", "Jason Ren", "Xiaoming Zhao", "Bowen Wen", "Jonathan Tremblay", "Stan Birchfield", "Alexander G. Schwing"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f17c"}, "filepath": "data/2405.02581.png", "tags": [], "_media_type": "image", "_rand": 0.9991523141858294, "arXiv_link": "https://arxiv.org/abs/2405.02581", "other_link": "https://github.com/miccunifi/iamcl2r.", "title": "Stationary Representations: Optimally Approximating Compatibility and Implications for Improved Model Replacements", "abstract": "Learning compatible representations enables the interchangeable use of\nsemantic features as models are updated over time. This is particularly\nrelevant in search and retrieval systems where it is crucial to avoid\nreprocessing of the gallery images with the updated model. While recent\nresearch has shown promising empirical evidence, there is still a lack of\ncomprehensive theoretical understanding about learning compatible\nrepresentations. In this paper, we demonstrate that the stationary\nrepresentations learned by the $d$-Simplex fixed classifier optimally\napproximate compatibility representation according to the two inequality\nconstraints of its formal definition. This not only establishes a solid\nfoundation for future works in this line of research but also presents\nimplications that can be exploited in practical learning scenarios. An\nexemplary application is the now-standard practice of downloading and\nfine-tuning new pre-trained models. Specifically, we show the strengths and\ncritical issues of stationary representations in the case in which a model\nundergoing sequential fine-tuning is asynchronously replaced by downloading a\nbetter-performing model pre-trained elsewhere. Such a representation enables\nseamless delivery of retrieval service (i.e., no reprocessing of gallery\nimages) and offers improved performance without operational disruptions during\nmodel replacement. 
Code available at: https://github.com/miccunifi/iamcl2r.", "keywords": [], "authors_list": ["Niccol\u00f2 Biondi", "Federico Pernici", "Simone Ricci", "Alberto Del Bimbo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f17d"}, "filepath": "data/2403.10988.png", "tags": [], "_media_type": "image", "_rand": 0.9992640575502741, "arXiv_link": "https://arxiv.org/abs/2403.10988", "other_link": "https://github.com/liyuantsao/BFSR", "title": "Boosting Flow-based Generative Super-Resolution Models via Learned Prior", "abstract": "Flow-based super-resolution (SR) models have demonstrated astonishing\ncapabilities in generating high-quality images. However, these methods\nencounter several challenges during image generation, such as grid artifacts,\nexploding inverses, and suboptimal results due to a fixed sampling temperature.\nTo overcome these issues, this work introduces a conditional learned prior to\nthe inference phase of a flow-based SR model. This prior is a latent code\npredicted by our proposed latent module conditioned on the low-resolution\nimage, which is then transformed by the flow model into an SR image. Our\nframework is designed to seamlessly integrate with any contemporary flow-based\nSR model without modifying its architecture or pre-trained weights. We evaluate\nthe effectiveness of our proposed framework through extensive experiments and\nablation analyses. The proposed framework successfully addresses all the\ninherent issues in flow-based SR models and enhances their performance in\nvarious SR scenarios. Our code is available at:\nhttps://github.com/liyuantsao/BFSR", "keywords": ["Low-level vision"], "authors_list": ["Li-Yuan Tsao", "Yi-Chen Lo", "Chia-Che Chang", "Hao-Wei Chen", "Roy Tseng", "Chien Feng", "Chun-Yi Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f17e"}, "filepath": "data/2404.18156.png", "tags": [], "_media_type": "image", "_rand": 0.9996406092326809, "arXiv_link": "https://arxiv.org/abs/2404.18156", "other_link": "", "title": "Video Frame Interpolation via Direct Synthesis with the Event-based Reference", "abstract": "Video frame interpolation, the process of synthesizing intermediate frames\nbetween sequential video frames, has made remarkable progress with the use of\nevent cameras. These sensors, with microsecond-level temporal resolution, fill\ninformation gaps between frames by providing precise motion cues. However,\ncontemporary Event-Based Video Frame Interpolation (E-VFI) techniques often\nneglect the fact that event data primarily supply high-confidence features at\nscene edges during multi-modal feature fusion, thereby diminishing the role of\nevent signals in optical flow (OF) estimation and warping refinement. To\naddress this overlooked aspect, we introduce an end-to-end E-VFI learning\nmethod (referred to as EGMR) to efficiently utilize edge features from event\nsignals for motion flow and warping enhancement. Our method incorporates an\nEdge Guided Attentive (EGA) module, which rectifies estimated video motion\nthrough attentive aggregation based on the local correlation of multi-modal\nfeatures in a coarse-to-fine strategy. 
Moreover, given that event data can\nprovide accurate visual references at scene edges between consecutive frames,\nwe introduce a learned visibility map derived from event data to adaptively\nmitigate the occlusion problem in the warping refinement process. Extensive\nexperiments on both synthetic and real datasets show the effectiveness of the\nproposed approach, demonstrating its potential for higher quality video frame\ninterpolation.", "keywords": ["Low-level vision", "Image and video generation and manipulation"], "authors_list": ["Yuhan Liu", "Yongjian Deng", "Hao Chen", "Zhen Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f17f"}, "filepath": "data/2405.14934.png", "tags": [], "_media_type": "image", "_rand": 0.9998291813221325, "arXiv_link": "https://arxiv.org/abs/2405.14934", "other_link": "", "title": "Universal Robustness via Median Random Smoothing for Real-World Super-Resolution", "abstract": "Most of the recent literature on image Super-Resolution (SR) can be\nclassified into two main approaches. The first one involves learning a\ncorruption model tailored to a specific dataset, aiming to mimic the noise and\ncorruption in low-resolution images, such as sensor noise. However, this\napproach is data-specific, tends to lack adaptability, and its accuracy\ndiminishes when faced with unseen types of image corruptions. A second and more\nrecent approach, referred to as Robust Super-Resolution (RSR), proposes to\nimprove real-world SR by harnessing the generalization capabilities of a model\nby making it robust to adversarial attacks. To delve further into this second\napproach, our paper explores the universality of various methods for enhancing\nthe robustness of deep learning SR models. In other words, we inquire: \"Which\nrobustness method exhibits the highest degree of adaptability when dealing with\na wide range of adversarial attacks ?\". Our extensive experimentation on both\nsynthetic and real-world images empirically demonstrates that median randomized\nsmoothing (MRS) is more general in terms of robustness compared to adversarial\nlearning techniques, which tend to focus on specific types of attacks.\nFurthermore, as expected, we also illustrate that the proposed universal robust\nmethod enables the SR model to handle standard corruptions more effectively,\nsuch as blur and Gaussian noise, and notably, corruptions naturally present in\nreal-world images. 
These results support the significance of shifting the\nparadigm in the development of real-world SR methods towards RSR, especially\nvia MRS.", "keywords": ["Low-level vision", "Efficient and scalable vision"], "authors_list": ["Zakariya Chaouai", "Mohamed Tamaazousti"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f180"}, "filepath": "data/2401.06146.png", "tags": [], "_media_type": "image", "_rand": 0.9994837979073548, "arXiv_link": "https://arxiv.org/abs/2401.06146", "other_link": "", "title": "AAMDM: Accelerated Auto-regressive Motion Diffusion Model", "abstract": "Interactive motion synthesis is essential in creating immersive experiences\nin entertainment applications, such as video games and virtual reality.\nHowever, generating animations that are both high-quality and contextually\nresponsive remains a challenge. Traditional techniques in the game industry can\nproduce high-fidelity animations but suffer from high computational costs and\npoor scalability. Trained neural network models alleviate the memory and speed\nissues, yet fall short on generating diverse motions. Diffusion models offer\ndiverse motion synthesis with low memory usage, but require expensive reverse\ndiffusion processes. This paper introduces the Accelerated Auto-regressive\nMotion Diffusion Model (AAMDM), a novel motion synthesis framework designed to\nachieve quality, diversity, and efficiency all together. AAMDM integrates\nDenoising Diffusion GANs as a fast Generation Module, and an Auto-regressive\nDiffusion Model as a Polishing Module. Furthermore, AAMDM operates in a\nlower-dimensional embedded space rather than the full-dimensional pose space,\nwhich reduces the training complexity as well as further improves the\nperformance. We show that AAMDM outperforms existing methods in motion quality,\ndiversity, and runtime efficiency, through comprehensive quantitative analyses\nand visual comparisons. We also demonstrate the effectiveness of each\nalgorithmic component through ablation studies.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Tianyu Li", "Calvin Zhuhan Qiao", "Ren Guanqiao", "KangKang Yin", "Sehoon Ha"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f181"}, "filepath": "data/2311.15855.png", "tags": [], "_media_type": "image", "_rand": 0.9990337953158817, "arXiv_link": "https://arxiv.org/abs/2311.15855", "other_link": "https://ait.ethz.ch/sith", "title": "SiTH: Single-view Textured Human Reconstruction with Image-Conditioned Diffusion", "abstract": "A long-standing goal of 3D human reconstruction is to create lifelike and\nfully detailed 3D humans from single-view images. The main challenge lies in\ninferring unknown body shapes, appearances, and clothing details in areas not\nvisible in the images. To address this, we propose SiTH, a novel pipeline that\nuniquely integrates an image-conditioned diffusion model into a 3D mesh\nreconstruction workflow. At the core of our method lies the decomposition of\nthe challenging single-view reconstruction problem into generative\nhallucination and reconstruction subproblems. 
For the former, we employ a\npowerful generative diffusion model to hallucinate unseen back-view appearance\nbased on the input images. For the latter, we leverage skinned body meshes as\nguidance to recover full-body texture meshes from the input and back-view\nimages. SiTH requires as few as 500 3D human scans for training while\nmaintaining its generality and robustness to diverse images. Extensive\nevaluations on two 3D human benchmarks, including our newly created one,\nhighlighted our method's superior accuracy and perceptual quality in 3D\ntextured human reconstruction. Our code and evaluation benchmark are available\nat https://ait.ethz.ch/sith", "keywords": ["Biometrics and human analysis"], "authors_list": ["Hsuan-I Ho", "Jie Song", "Otmar Hilliges"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f182"}, "filepath": "data/2311.17910v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993662417053107, "arXiv_link": "https://arxiv.org/abs/2311.17910v1", "other_link": "https://github.com/apple/ml-hugs", "title": "HUGS: Human Gaussian Splatting", "abstract": "Recent advances in neural rendering have improved both training and rendering\ntimes by orders of magnitude. While these methods demonstrate state-of-the-art\nquality and speed, they are designed for photogrammetry of static scenes and do\nnot generalize well to freely moving humans in the environment. In this work,\nwe introduce Human Gaussian Splats (HUGS) that represents an animatable human\ntogether with the scene using 3D Gaussian Splatting (3DGS). Our method takes\nonly a monocular video with a small number of (50-100) frames, and it\nautomatically learns to disentangle the static scene and a fully animatable\nhuman avatar within 30 minutes. We utilize the SMPL body model to initialize\nthe human Gaussians. To capture details that are not modeled by SMPL (e.g.\ncloth, hairs), we allow the 3D Gaussians to deviate from the human body model.\nUtilizing 3D Gaussians for animated humans brings new challenges, including the\nartifacts created when articulating the Gaussians. We propose to jointly\noptimize the linear blend skinning weights to coordinate the movements of\nindividual Gaussians during animation. Our approach enables novel-pose\nsynthesis of human and novel view synthesis of both the human and the scene. We\nachieve state-of-the-art rendering quality with a rendering speed of 60 FPS\nwhile being ~100x faster to train over previous work. Our code will be\nannounced here: https://github.com/apple/ml-hugs", "keywords": ["Efficient and scalable vision", "Biometrics and human analysis"], "authors_list": ["Muhammed Kocabas", "Jen-Hao Rick Chang", "James Gabriel", "Oncel Tuzel", "Anurag Ranjan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f183"}, "filepath": "data/2310.08230.png", "tags": [], "_media_type": "image", "_rand": 0.9991759777681865, "arXiv_link": "https://arxiv.org/abs/2310.08230", "other_link": "", "title": "SpiderMatch: 3D Shape Matching with Global Optimality and Geometric Consistency", "abstract": "In this work we propose to combine the advantages of learning-based and\ncombinatorial formalisms for 3D shape matching. 
While learning-based shape\nmatching solutions lead to state-of-the-art matching performance, they do not\nensure geometric consistency, so that obtained matchings are locally unsmooth.\nOn the contrary, axiomatic methods allow to take geometric consistency into\naccount by explicitly constraining the space of valid matchings. However,\nexisting axiomatic formalisms are impractical since they do not scale to\npractically relevant problem sizes, or they require user input for the\ninitialisation of non-convex optimisation problems. In this work we aim to\nclose this gap by proposing a novel combinatorial solver that combines a unique\nset of favourable properties: our approach is (i) initialisation free, (ii)\nmassively parallelisable powered by a quasi-Newton method, (iii) provides\noptimality gaps, and (iv) delivers decreased runtime and globally optimal\nresults for many instances.", "keywords": [], "authors_list": ["Paul Roetzer", "Florian Bernard"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f184"}, "filepath": "data/2403.13293.png", "tags": [], "_media_type": "image", "_rand": 0.999749096409446, "arXiv_link": "https://arxiv.org/abs/2403.13293", "other_link": "https://github.com/Ascend-Research/AutoBuild", "title": "Building Optimal Neural Architectures using Interpretable Knowledge", "abstract": "Neural Architecture Search is a costly practice. The fact that a search space\ncan span a vast number of design choices with each architecture evaluation\ntaking nontrivial overhead makes it hard for an algorithm to sufficiently\nexplore candidate networks. In this paper, we propose AutoBuild, a scheme which\nlearns to align the latent embeddings of operations and architecture modules\nwith the ground-truth performance of the architectures they appear in. By doing\nso, AutoBuild is capable of assigning interpretable importance scores to\narchitecture modules, such as individual operation features and larger macro\noperation sequences such that high-performance neural networks can be\nconstructed without any need for search. Through experiments performed on\nstate-of-the-art image classification, segmentation, and Stable Diffusion\nmodels, we show that by mining a relatively small set of evaluated\narchitectures, AutoBuild can learn to build high-quality architectures directly\nor help to reduce search space to focus on relevant areas, finding better\narchitectures that outperform both the original labeled ones and ones found by\nsearch baselines. 
Code available at\nhttps://github.com/Ascend-Research/AutoBuild", "keywords": [], "authors_list": ["Keith Mills", "Fred Han", "Mohammad Salameh", "Shengyao Lu", "CHUNHUA ZHOU", "Jiao He", "Fengyu Sun", "Di Niu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f185"}, "filepath": "data/2311.13231.png", "tags": [], "_media_type": "image", "_rand": 0.9992782827132245, "arXiv_link": "https://arxiv.org/abs/2311.13231", "other_link": "https://github.com/yk7333/D3PO.", "title": "Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model", "abstract": "Using reinforcement learning with human feedback (RLHF) has shown significant\npromise in fine-tuning diffusion models. Previous methods start by training a\nreward model that aligns with human preferences, then leverage RL techniques to\nfine-tune the underlying models. However, crafting an efficient reward model\ndemands extensive datasets, optimal architecture, and manual hyperparameter\ntuning, making the process both time and cost-intensive. The direct preference\noptimization (DPO) method, effective in fine-tuning large language models,\neliminates the necessity for a reward model. However, the extensive GPU memory\nrequirement of the diffusion model's denoising process hinders the direct\napplication of the DPO method. To address this issue, we introduce the Direct\nPreference for Denoising Diffusion Policy Optimization (D3PO) method to\ndirectly fine-tune diffusion models. The theoretical analysis demonstrates that\nalthough D3PO omits training a reward model, it effectively functions as the\noptimal reward model trained using human feedback data to guide the learning\nprocess. This approach requires no training of a reward model, proving to be\nmore direct, cost-effective, and minimizing computational overhead. In\nexperiments, our method uses the relative scale of objectives as a proxy for\nhuman preference, delivering comparable results to methods using ground-truth\nrewards. Moreover, D3PO demonstrates the ability to reduce image distortion\nrates and generate safer images, overcoming challenges lacking robust reward\nmodels. Our code is publicly available at https://github.com/yk7333/D3PO.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Kai Yang", "Jian Tao", "Jiafei Lyu", "Chunjiang Ge", "Jiaxin Chen", "Weihan Shen", "Xiaolong Zhu", "Xiu Li"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f097da8041005727f186"}, "filepath": "data/2403.19811.png", "tags": [], "_media_type": "image", "_rand": 0.9997456682847943, "arXiv_link": "https://arxiv.org/abs/2403.19811", "other_link": "https://github.com/annusha/xmic", "title": "X-MIC: Cross-Modal Instance Conditioning for Egocentric Action Generalization", "abstract": "Lately, there has been growing interest in adapting vision-language models\n(VLMs) to image and third-person video classification due to their success in\nzero-shot recognition. However, the adaptation of these models to egocentric\nvideos has been largely unexplored. To address this gap, we propose a simple\nyet effective cross-modal adaptation framework, which we call X-MIC. 
Using a\nvideo adapter, our pipeline learns to align frozen text embeddings to each\negocentric video directly in the shared embedding space. Our novel adapter\narchitecture retains and improves generalization of the pre-trained VLMs by\ndisentangling learnable temporal modeling and frozen visual encoder. This\nresults in an enhanced alignment of text embeddings to each egocentric video,\nleading to a significant improvement in cross-dataset generalization. We\nevaluate our approach on the Epic-Kitchens, Ego4D, and EGTEA datasets for\nfine-grained cross-dataset action generalization, demonstrating the\neffectiveness of our method. Code is available at\nhttps://github.com/annusha/xmic", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Anna Kukleva", "Fadime Sener", "Edoardo Remelli", "Bugra Tekin", "Eric Sauser", "Bernt Schiele", "Shugao Ma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f187"}, "filepath": "data/2306.15876.png", "tags": [], "_media_type": "image", "_rand": 0.99984157974567, "arXiv_link": "https://arxiv.org/abs/2306.15876", "other_link": "", "title": "MART: Masked Affective RepresenTation Learning via Masked Temporal Distribution Distillation", "abstract": "Representation learning has been evolving from traditional supervised\ntraining to Contrastive Learning (CL) and Masked Image Modeling (MIM). Previous\nworks have demonstrated their pros and cons in specific scenarios, i.e., CL and\nsupervised pre-training excel at capturing longer-range global patterns and\nenabling better feature discrimination, while MIM can introduce more local and\ndiverse attention across all transformer layers. In this paper, we explore how\nto obtain a model that combines their strengths. We start by examining previous\nfeature distillation and mask feature reconstruction methods and identify their\nlimitations. We find that their increasing diversity mainly derives from the\nasymmetric designs, but these designs may in turn compromise the discrimination\nability. In order to better obtain both discrimination and diversity, we\npropose a simple but effective Hybrid Distillation strategy, which utilizes\nboth the supervised/CL teacher and the MIM teacher to jointly guide the student\nmodel. Hybrid Distill imitates the token relations of the MIM teacher to\nalleviate attention collapse, as well as distills the feature maps of the\nsupervised/CL teacher to enable discrimination. Furthermore, a progressive\nredundant token masking strategy is also utilized to reduce the distilling\ncosts and avoid falling into local optima. 
Experiment results prove that Hybrid\nDistill can achieve superior performance on different benchmarks.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Zhicheng Zhang", "Pancheng Zhao", "Eunil Park", "Jufeng Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f188"}, "filepath": "data/2404.00301.png", "tags": [], "_media_type": "image", "_rand": 0.9998494228096761, "arXiv_link": "https://arxiv.org/abs/2404.00301", "other_link": "https://xingyuren.github.io/id2reflectance/.", "title": "Monocular Identity-Conditioned Facial Reflectance Reconstruction", "abstract": "Recent 3D face reconstruction methods have made remarkable advancements, yet\nthere remain huge challenges in monocular high-quality facial reflectance\nreconstruction. Existing methods rely on a large amount of light-stage captured\ndata to learn facial reflectance models. However, the lack of subject diversity\nposes challenges in achieving good generalization and widespread applicability.\nIn this paper, we learn the reflectance prior in image space rather than UV\nspace and present a framework named ID2Reflectance. Our framework can directly\nestimate the reflectance maps of a single image while using limited reflectance\ndata for training. Our key insight is that reflectance data shares facial\nstructures with RGB faces, which enables obtaining expressive facial prior from\ninexpensive RGB data thus reducing the dependency on reflectance data. We first\nlearn a high-quality prior for facial reflectance. Specifically, we pretrain\nmulti-domain facial feature codebooks and design a codebook fusion method to\nalign the reflectance and RGB domains. Then, we propose an identity-conditioned\nswapping module that injects facial identity from the target image into the\npre-trained autoencoder to modify the identity of the source reflectance image.\nFinally, we stitch multi-view swapped reflectance images to obtain renderable\nassets. Extensive experiments demonstrate that our method exhibits excellent\ngeneralization capability and achieves state-of-the-art facial reflectance\nreconstruction results for in-the-wild faces. Our project page is\nhttps://xingyuren.github.io/id2reflectance/.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Xingyu Ren", "Jiankang Deng", "Yuhao Cheng", "Jia Guo", "Chao Ma", "Yichao Yan", "Wenhan Zhu", "Xiaokang Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f189"}, "filepath": "data/2403.14333.png", "tags": [], "_media_type": "image", "_rand": 0.9997475842004477, "arXiv_link": "https://arxiv.org/abs/2403.14333", "other_link": "", "title": "CFPL-FAS: Class Free Prompt Learning for Generalizable Face Anti-spoofing", "abstract": "Domain generalization (DG) based Face Anti-Spoofing (FAS) aims to improve the\nmodel's performance on unseen domains. Existing methods either rely on domain\nlabels to align domain-invariant feature spaces, or disentangle generalizable\nfeatures from the whole sample, which inevitably lead to the distortion of\nsemantic feature structures and achieve limited generalization. 
In this work,\nwe make use of large-scale VLMs like CLIP and leverage the textual feature to\ndynamically adjust the classifier's weights for exploring generalizable visual\nfeatures. Specifically, we propose a novel Class Free Prompt Learning (CFPL)\nparadigm for DG FAS, which utilizes two lightweight transformers, namely\nContent Q-Former (CQF) and Style Q-Former (SQF), to learn the different\nsemantic prompts conditioned on content and style features by using a set of\nlearnable query vectors, respectively. Thus, the generalizable prompt can be\nlearned by two improvements: (1) A Prompt-Text Matched (PTM) supervision is\nintroduced to ensure CQF learns visual representation that is most informative\nof the content description. (2) A Diversified Style Prompt (DSP) technology is\nproposed to diversify the learning of style prompts by mixing feature\nstatistics between instance-specific styles. Finally, the learned text features\nmodulate visual features to generalization through the designed Prompt\nModulation (PM). Extensive experiments show that the CFPL is effective and\noutperforms the state-of-the-art methods on several cross-domain datasets.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Ajian Liu", "Shuai Xue", "Gan Jianwen", "Jun Wan", "Yanyan Liang", "Jiankang Deng", "Sergio Escalera", "Zhen Lei"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f18a"}, "filepath": "data/2404.01179.png", "tags": [], "_media_type": "image", "_rand": 0.9995413645865603, "arXiv_link": "https://arxiv.org/abs/2404.01179", "other_link": "", "title": "BEM: Balanced and Entropy-based Mix for Long-Tailed Semi-Supervised Learning", "abstract": "Data mixing methods play a crucial role in semi-supervised learning (SSL),\nbut their application is unexplored in long-tailed semi-supervised learning\n(LTSSL). The primary reason is that the in-batch mixing manner fails to address\nclass imbalance. Furthermore, existing LTSSL methods mainly focus on\nre-balancing data quantity but ignore class-wise uncertainty, which is also\nvital for class balance. For instance, some classes with sufficient samples\nmight still exhibit high uncertainty due to indistinguishable features. To this\nend, this paper introduces the Balanced and Entropy-based Mix (BEM), a\npioneering mixing approach to re-balance the class distribution of both data\nquantity and uncertainty. Specifically, we first propose a class balanced mix\nbank to store data of each class for mixing. This bank samples data based on\nthe estimated quantity distribution, thus re-balancing data quantity. Then, we\npresent an entropy-based learning approach to re-balance class-wise\nuncertainty, including entropy-based sampling strategy, entropy-based selection\nmodule, and entropy-based class balanced loss. Our BEM first leverages data\nmixing for improving LTSSL, and it can also serve as a complement to the\nexisting re-balancing methods. 
Experimental results show that BEM significantly\nenhances various LTSSL frameworks and achieves state-of-the-art performances\nacross multiple benchmarks.", "keywords": [], "authors_list": ["Hongwei Zheng", "Linyuan Zhou", "Han Li", "Jinming Su", "Xiaoming Wei", "Xu Xiaoming"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f18b"}, "filepath": "data/2312.03704.png", "tags": [], "_media_type": "image", "_rand": 0.9997512173799344, "arXiv_link": "https://arxiv.org/abs/2312.03704", "other_link": "", "title": "Relightable Gaussian Codec Avatars", "abstract": "The fidelity of relighting is bounded by both geometry and appearance\nrepresentations. For geometry, both mesh and volumetric approaches have\ndifficulty modeling intricate structures like 3D hair geometry. For appearance,\nexisting relighting models are limited in fidelity and often too slow to render\nin real-time with high-resolution continuous environments. In this work, we\npresent Relightable Gaussian Codec Avatars, a method to build high-fidelity\nrelightable head avatars that can be animated to generate novel expressions.\nOur geometry model based on 3D Gaussians can capture 3D-consistent\nsub-millimeter details such as hair strands and pores on dynamic face\nsequences. To support diverse materials of human heads such as the eyes, skin,\nand hair in a unified manner, we present a novel relightable appearance model\nbased on learnable radiance transfer. Together with global illumination-aware\nspherical harmonics for the diffuse components, we achieve real-time relighting\nwith all-frequency reflections using spherical Gaussians. This appearance model\ncan be efficiently relit under both point light and continuous illumination. We\nfurther improve the fidelity of eye reflections and enable explicit gaze\ncontrol by introducing relightable explicit eye models. Our method outperforms\nexisting approaches without compromising real-time performance. We also\ndemonstrate real-time relighting of avatars on a tethered consumer VR headset,\nshowcasing the efficiency and fidelity of our avatars.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation", "Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Shunsuke Saito", "Gabriel Schwartz", "Tomas Simon", "Junxuan Li", "Giljoo Nam"], "category_name": "Graphics", "all_categories": ["Graphics", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f18c"}, "filepath": "data/2404.18630.png", "tags": [], "_media_type": "image", "_rand": 0.9991940081392955, "arXiv_link": "https://arxiv.org/abs/2404.18630", "other_link": "https://ait.ethz.ch/4d-dress.", "title": "4D-DRESS: A 4D Dataset of Real-World Human Clothing With Semantic Annotations", "abstract": "The studies of human clothing for digital avatars have predominantly relied\non synthetic datasets. While easy to collect, synthetic data often fall short\nin realism and fail to capture authentic clothing dynamics. Addressing this\ngap, we introduce 4D-DRESS, the first real-world 4D dataset advancing human\nclothing research with its high-quality 4D textured scans and garment meshes.\n4D-DRESS captures 64 outfits in 520 human motion sequences, amounting to 78k\ntextured scans. 
Creating a real-world clothing dataset is challenging,\nparticularly in annotating and segmenting the extensive and complex 4D human\nscans. To address this, we develop a semi-automatic 4D human parsing pipeline.\nWe efficiently combine a human-in-the-loop process with automation to\naccurately label 4D scans in diverse garments and body movements. Leveraging\nprecise annotations and high-quality garment meshes, we establish several\nbenchmarks for clothing simulation and reconstruction. 4D-DRESS offers\nrealistic and challenging data that complements synthetic sources, paving the\nway for advancements in research of lifelike human clothing. Website:\nhttps://ait.ethz.ch/4d-dress.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Wenbo Wang", "Hsuan-I Ho", "Chen Guo", "Boxiang Rong", "Artur Grigorev", "Jie Song", "Juan Jose Zarate", "Otmar Hilliges"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f18d"}, "filepath": "data/2311.10081.png", "tags": [], "_media_type": "image", "_rand": 0.9996807499110671, "arXiv_link": "https://arxiv.org/abs/2311.10081", "other_link": "", "title": "DRESS: Instructing Large Vision-Language Models to Align and Interact with Humans via Natural Language Feedback", "abstract": "We present DRESS, a large vision language model (LVLM) that innovatively\nexploits Natural Language feedback (NLF) from Large Language Models to enhance\nits alignment and interactions by addressing two key limitations in the\nstate-of-the-art LVLMs. First, prior LVLMs generally rely only on the\ninstruction finetuning stage to enhance alignment with human preferences.\nWithout incorporating extra feedback, they are still prone to generate\nunhelpful, hallucinated, or harmful responses. Second, while the visual\ninstruction tuning data is generally structured in a multi-turn dialogue\nformat, the connections and dependencies among consecutive conversational turns\nare weak. This reduces the capacity for effective multi-turn interactions. To\ntackle these, we propose a novel categorization of the NLF into two key types:\ncritique and refinement. The critique NLF identifies the strengths and\nweaknesses of the responses and is used to align the LVLMs with human\npreferences. The refinement NLF offers concrete suggestions for improvement and\nis adopted to improve the interaction ability of the LVLMs-- which focuses on\nLVLMs' ability to refine responses by incorporating feedback in multi-turn\ninteractions. To address the non-differentiable nature of NLF, we generalize\nconditional reinforcement learning for training. 
Our experimental results\ndemonstrate that DRESS can generate more helpful (9.76%), honest (11.52%), and\nharmless (21.03%) responses, and more effectively learn from feedback during\nmulti-turn interactions compared to SOTA LVMLs.", "keywords": ["Multimodal models and vision-language models", "Deep learning architectures and techniques"], "authors_list": ["Yangyi Chen", "Karan Sikka", "Michael Cogswell", "Heng Ji", "Ajay Divakaran"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f18e"}, "filepath": "data/2401.06116v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999340276682331, "arXiv_link": "https://arxiv.org/abs/2401.06116v1", "other_link": "", "title": "Gaussian Shadow Casting for Neural Characters", "abstract": "Neural character models can now reconstruct detailed geometry and texture\nfrom video, but they lack explicit shadows and shading, leading to artifacts\nwhen generating novel views and poses or during relighting. It is particularly\ndifficult to include shadows as they are a global effect and the required\ncasting of secondary rays is costly. We propose a new shadow model using a\nGaussian density proxy that replaces sampling with a simple analytic formula.\nIt supports dynamic motion and is tailored for shadow computation, thereby\navoiding the affine projection approximation and sorting required by the\nclosely related Gaussian splatting. Combined with a deferred neural rendering\nmodel, our Gaussian shadows enable Lambertian shading and shadow casting with\nminimal overhead. We demonstrate improved reconstructions, with better\nseparation of albedo, shading, and shadows in challenging outdoor scenes with\ndirect sun light and hard shadows. Our method is able to optimize the light\ndirection without any input from the user. As a result, novel poses have fewer\nshadow artifacts and relighting in novel scenes is more realistic compared to\nthe state-of-the-art methods, providing new ways to pose neural characters in\nnovel environments, increasing their applicability.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Luis Bolanos", "Shih-Yang Su", "Helge Rhodin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f18f"}, "filepath": "data/2312.10908.png", "tags": [], "_media_type": "image", "_rand": 0.9994776728579386, "arXiv_link": "https://arxiv.org/abs/2312.10908", "other_link": "", "title": "CLOVA: A Closed-Loop Visual Assistant with Tool Usage and Update", "abstract": "Utilizing large language models (LLMs) to compose off-the-shelf visual tools\nrepresents a promising avenue of research for developing robust visual\nassistants capable of addressing diverse visual tasks. However, these methods\noften overlook the potential for continual learning, typically by freezing the\nutilized tools, thus limiting their adaptation to environments requiring new\nknowledge. To tackle this challenge, we propose CLOVA, a Closed-Loop Visual\nAssistant, which operates within a framework encompassing inference,\nreflection, and learning phases. 
During the inference phase, LLMs generate\nprograms and execute corresponding tools to complete assigned tasks. In the\nreflection phase, a multimodal global-local reflection scheme analyzes human\nfeedback to determine which tools require updating. Lastly, the learning phase\nemploys three flexible approaches to automatically gather training data and\nintroduces a novel prompt tuning scheme to update the tools, allowing CLOVA to\nefficiently acquire new knowledge. Experimental findings demonstrate that CLOVA\nsurpasses existing tool-usage methods by 5% in visual question answering and\nmultiple-image reasoning, by 10% in knowledge tagging, and by 20% in image\nediting. These results underscore the significance of the continual learning\ncapability in general visual assistants.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Zhi Gao", "Yuntao Du.", "Xintong Zhang", "Xiaojian Ma", "Wenjuan Han", "Song-Chun Zhu", "Qing Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f190"}, "filepath": "data/2403.02899.png", "tags": [], "_media_type": "image", "_rand": 0.9993834630743363, "arXiv_link": "https://arxiv.org/abs/2403.02899", "other_link": "", "title": "Domain-Agnostic Mutual Prompting for Unsupervised Domain Adaptation", "abstract": "Conventional Unsupervised Domain Adaptation (UDA) strives to minimize\ndistribution discrepancy between domains, which neglects to harness rich\nsemantics from data and struggles to handle complex domain shifts. A promising\ntechnique is to leverage the knowledge of large-scale pre-trained\nvision-language models for more guided adaptation. Despite some endeavors,\ncurrent methods often learn textual prompts to embed domain semantics for\nsource and target domains separately and perform classification within each\ndomain, limiting cross-domain knowledge transfer. Moreover, prompting only the\nlanguage branch lacks flexibility to adapt both modalities dynamically. To\nbridge this gap, we propose Domain-Agnostic Mutual Prompting (DAMP) to exploit\ndomain-invariant semantics by mutually aligning visual and textual embeddings.\nSpecifically, the image contextual information is utilized to prompt the\nlanguage branch in a domain-agnostic and instance-conditioned way. Meanwhile,\nvisual prompts are imposed based on the domain-agnostic textual prompt to\nelicit domain-invariant visual embeddings. 
These two branches of prompts are\nlearned mutually with a cross-attention module and regularized with a\nsemantic-consistency loss and an instance-discrimination contrastive loss.\nExperiments on three UDA benchmarks demonstrate the superiority of DAMP over\nstate-of-the-art approaches.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Zhekai Du", "Xinyao Li", "Fengling Li", "Ke Lu", "Lei Zhu", "Jingjing Li"], "category_name": "Artificial Intelligence", "all_categories": ["Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f191"}, "filepath": "data/2403.02075.png", "tags": [], "_media_type": "image", "_rand": 0.9990762959084618, "arXiv_link": "https://arxiv.org/abs/2403.02075", "other_link": "", "title": "DiffMOT: A Real-time Diffusion-based Multiple Object Tracker with Non-linear Prediction", "abstract": "In Multiple Object Tracking, objects often exhibit non-linear motion of\nacceleration and deceleration, with irregular direction changes.\nTacking-by-detection (TBD) trackers with Kalman Filter motion prediction work\nwell in pedestrian-dominant scenarios but fall short in complex situations when\nmultiple objects perform non-linear and diverse motion simultaneously. To\ntackle the complex non-linear motion, we propose a real-time diffusion-based\nMOT approach named DiffMOT. Specifically, for the motion predictor component,\nwe propose a novel Decoupled Diffusion-based Motion Predictor (D$^2$MP). It\nmodels the entire distribution of various motion presented by the data as a\nwhole. It also predicts an individual object's motion conditioning on an\nindividual's historical motion information. Furthermore, it optimizes the\ndiffusion process with much fewer sampling steps. As a MOT tracker, the DiffMOT\nis real-time at 22.7FPS, and also outperforms the state-of-the-art on\nDanceTrack and SportsMOT datasets with $62.3\\%$ and $76.2\\%$ in HOTA metrics,\nrespectively. To the best of our knowledge, DiffMOT is the first to introduce a\ndiffusion probabilistic model into the MOT to tackle non-linear motion\nprediction.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Weiyi Lv", "Yuhang Huang", "NING Zhang", "Ruei-Sung Lin", "Mei Han", "Dan Zeng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f192"}, "filepath": "data/2311.11013.png", "tags": [], "_media_type": "image", "_rand": 0.9990112746440272, "arXiv_link": "https://arxiv.org/abs/2311.11013", "other_link": "https://delinqu.github.io/EN-SLAM.", "title": "Implicit Event-RGBD Neural SLAM", "abstract": "Implicit neural SLAM has achieved remarkable progress recently. Nevertheless,\nexisting methods face significant challenges in non-ideal scenarios, such as\nmotion blur or lighting variation, which often leads to issues like convergence\nfailures, localization drifts, and distorted mapping. To address these\nchallenges, we propose EN-SLAM, the first event-RGBD implicit neural SLAM\nframework, which effectively leverages the high rate and high dynamic range\nadvantages of event data for tracking and mapping. 
Specifically, EN-SLAM\nproposes a differentiable CRF (Camera Response Function) rendering technique to\ngenerate distinct RGB and event camera data via a shared radiance field, which\nis optimized by learning a unified implicit representation with the captured\nevent and RGBD supervision. Moreover, based on the temporal difference property\nof events, we propose a temporal aggregating optimization strategy for the\nevent joint tracking and global bundle adjustment, capitalizing on the\nconsecutive difference constraints of events, significantly enhancing tracking\naccuracy and robustness. Finally, we construct the simulated dataset\nDEV-Indoors and real captured dataset DEV-Reals containing 6 scenes, 17\nsequences with practical motion blur and lighting changes for evaluations.\nExperimental results show that our method outperforms the SOTA methods in both\ntracking ATE and mapping ACC with a real-time 17 FPS in various challenging\nenvironments. Project page: https://delinqu.github.io/EN-SLAM.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Delin Qu", "Chi Yan", "Dong Wang", "Jie Yin", "Qizhi Chen", "Dan Xu", "Yiting Zhang", "Bin Zhao", "Xuelong Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f193"}, "filepath": "data/2312.06734.png", "tags": [], "_media_type": "image", "_rand": 0.9996918149234789, "arXiv_link": "https://arxiv.org/abs/2312.06734", "other_link": "https://github.com/DeminYu98/DiffCast.", "title": "DiffCast: A Unified Framework via Residual Diffusion for Precipitation Nowcasting", "abstract": "Precipitation nowcasting is an important spatio-temporal prediction task to\npredict the radar echoes sequences based on current observations, which can\nserve both meteorological science and smart city applications. Due to the\nchaotic evolution nature of the precipitation systems, it is a very challenging\nproblem. Previous studies address the problem either from the perspectives of\ndeterministic modeling or probabilistic modeling. However, their predictions\nsuffer from the blurry, high-value echoes fading away and position inaccurate\nissues. The root reason of these issues is that the chaotic evolutionary\nprecipitation systems are not appropriately modeled. Inspired by the nature of\nthe systems, we propose to decompose and model them from the perspective of\nglobal deterministic motion and local stochastic variations with residual\nmechanism. A unified and flexible framework that can equip any type of\nspatio-temporal models is proposed based on residual diffusion, which\neffectively tackles the shortcomings of previous methods. Extensive\nexperimental results on four publicly available radar datasets demonstrate the\neffectiveness and superiority of the proposed framework, compared to\nstate-of-the-art techniques. 
Our code is publicly available at\nhttps://github.com/DeminYu98/DiffCast.", "keywords": ["Remote sensing and photogrammetry"], "authors_list": ["Demin Yu", "Xutao Li", "Yunming Ye", "Baoquan Zhang", "Luo Chuyao", "Kuai Dai", "wangrui", "Chenxunlai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f194"}, "filepath": "data/2311.17048.png", "tags": [], "_media_type": "image", "_rand": 0.9996969810923381, "arXiv_link": "https://arxiv.org/abs/2311.17048", "other_link": "https://github.com/Show-han/Zeroshot_REC.", "title": "Zero-shot Referring Expression Comprehension via Structural Similarity Between Images and Captions", "abstract": "Zero-shot referring expression comprehension aims at localizing bounding\nboxes in an image corresponding to provided textual prompts, which requires:\n(i) a fine-grained disentanglement of complex visual scene and textual context,\nand (ii) a capacity to understand relationships among disentangled entities.\nUnfortunately, existing large vision-language alignment (VLA) models, e.g.,\nCLIP, struggle with both aspects so cannot be directly used for this task. To\nmitigate this gap, we leverage large foundation models to disentangle both\nimages and texts into triplets in the format of (subject, predicate, object).\nAfter that, grounding is accomplished by calculating the structural similarity\nmatrix between visual and textual triplets with a VLA model, and subsequently\npropagate it to an instance-level similarity matrix. Furthermore, to equip VLA\nmodels with the ability of relationship understanding, we design a\ntriplet-matching objective to fine-tune the VLA models on a collection of\ncurated dataset containing abundant entity relationships. Experiments\ndemonstrate that our visual grounding performance increase of up to 19.5% over\nthe SOTA zero-shot model on RefCOCO/+/g. On the more challenging Who's Waldo\ndataset, our zero-shot approach achieves comparable accuracy to the fully\nsupervised model. Code is available at\nhttps://github.com/Show-han/Zeroshot_REC.", "keywords": ["Large multimodal models and prompting techniques", "Scene analysis and understanding"], "authors_list": ["Zeyu Han", "Fangrui Zhu", "Qianru Lao", "Huaizu Jiang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f195"}, "filepath": "data/2404.05136.png", "tags": [], "_media_type": "image", "_rand": 0.9996415763876108, "arXiv_link": "https://arxiv.org/abs/2404.05136", "other_link": "", "title": "Self-Supervised Multi-Object Tracking with Path Consistency", "abstract": "In this paper, we propose a novel concept of path consistency to learn robust\nobject matching without using manual object identity supervision. Our key idea\nis that, to track a object through frames, we can obtain multiple different\nassociation results from a model by varying the frames it can observe, i.e.,\nskipping frames in observation. As the differences in observations do not alter\nthe identities of objects, the obtained association results should be\nconsistent. 
Based on this rationale, we generate multiple observation paths,\neach specifying a different set of frames to be skipped, and formulate the Path\nConsistency Loss that enforces the association results are consistent across\ndifferent observation paths. We use the proposed loss to train our object\nmatching model with only self-supervision. By extensive experiments on three\ntracking datasets (MOT17, PersonPath22, KITTI), we demonstrate that our method\noutperforms existing unsupervised methods with consistent margins on various\nevaluation metrics, and even achieves performance close to supervised methods.", "keywords": [], "authors_list": ["Zijia Lu", "Bing Shuai", "Yanbei Chen", "Zhenlin Xu", "Davide Modolo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f196"}, "filepath": "data/2312.06038.png", "tags": [], "_media_type": "image", "_rand": 0.999416833570661, "arXiv_link": "https://arxiv.org/abs/2312.06038", "other_link": "https://github.com/UCSB-NLP-Chang/diffusion_resampling.git.", "title": "Correcting Diffusion Generation through Resampling", "abstract": "Despite diffusion models' superior capabilities in modeling complex\ndistributions, there are still non-trivial distributional discrepancies between\ngenerated and ground-truth images, which has resulted in several notable\nproblems in image generation, including missing object errors in text-to-image\ngeneration and low image quality. Existing methods that attempt to address\nthese problems mostly do not tend to address the fundamental cause behind these\nproblems, which is the distributional discrepancies, and hence achieve\nsub-optimal results. In this paper, we propose a particle filtering framework\nthat can effectively address both problems by explicitly reducing the\ndistributional discrepancies. Specifically, our method relies on a set of\nexternal guidance, including a small set of real images and a pre-trained\nobject detector, to gauge the distribution gap, and then design the resampling\nweight accordingly to correct the gap. Experiments show that our methods can\neffectively correct missing object errors and improve image quality in various\nimage generation tasks. Notably, our method outperforms the existing strongest\nbaseline by 5% in object occurrence and 1.0 in FID on MS-COCO. Our code is\npublicly available at\nhttps://github.com/UCSB-NLP-Chang/diffusion_resampling.git.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Yujian Liu", "Yang Zhang", "Tommi Jaakkola", "Shiyu Chang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f197"}, "filepath": "data/2402.18162.png", "tags": [], "_media_type": "image", "_rand": 0.9996195580892024, "arXiv_link": "https://arxiv.org/abs/2402.18162", "other_link": "", "title": "YolOOD: Utilizing Object Detection Concepts for Multi-Label Out-of-Distribution Detection", "abstract": "Out-of-distribution detection (OOD) is a crucial technique for deploying\nmachine learning models in the real world to handle the unseen scenarios. In\nthis paper, we first propose a simple yet effective Neural Activation Prior\n(NAP) for OOD detection. 
Our neural activation prior is based on a key\nobservation that, for a channel before the global pooling layer of a fully\ntrained neural network, the probability of a few neurons being activated with a\nlarge response by an in-distribution (ID) sample is significantly higher than\nthat by an OOD sample. An intuitive explanation is that for a model fully\ntrained on ID dataset, each channel would play a role in detecting a certain\npattern in the ID dataset, and a few neurons can be activated with a large\nresponse when the pattern is detected in an input sample. Then, a new scoring\nfunction based on this prior is proposed to highlight the role of these\nstrongly activated neurons in OOD detection. Our approach is plug-and-play and\ndoes not lead to any performance degradation on ID data classification and\nrequires no extra training or statistics from training or external datasets.\nNotice that previous methods primarily rely on post-global-pooling features of\nthe neural networks, while the within-channel distribution information we\nleverage would be discarded by the global pooling operator. Consequently, our\nmethod is orthogonal to existing approaches and can be effectively combined\nwith them in various applications. Experimental results show that our method\nachieves the state-of-the-art performance on CIFAR benchmark and ImageNet\ndataset, which demonstrates the power of the proposed prior. Finally, we extend\nour method to Transformers and the experimental findings indicate that NAP can\nalso significantly enhance the performance of OOD detection on Transformers,\nthereby demonstrating the broad applicability of this prior knowledge.", "keywords": [], "authors_list": ["Alon Zolfi", "Guy AmiT", "Amit Baras", "Satoru Koda", "Ikuya Morikawa", "Yuval Elovici", "Asaf Shabtai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f198"}, "filepath": "data/2402.18146.png", "tags": [], "_media_type": "image", "_rand": 0.9993559312561558, "arXiv_link": "https://arxiv.org/abs/2402.18146", "other_link": "", "title": "3DSFLabelling: Boosting 3D Scene Flow Estimation by Pseudo Auto-labelling", "abstract": "Learning 3D scene flow from LiDAR point clouds presents significant\ndifficulties, including poor generalization from synthetic datasets to real\nscenes, scarcity of real-world 3D labels, and poor performance on real sparse\nLiDAR point clouds. We present a novel approach from the perspective of\nauto-labelling, aiming to generate a large number of 3D scene flow pseudo\nlabels for real-world LiDAR point clouds. Specifically, we employ the\nassumption of rigid body motion to simulate potential object-level rigid\nmovements in autonomous driving scenarios. By updating different motion\nattributes for multiple anchor boxes, the rigid motion decomposition is\nobtained for the whole scene. Furthermore, we developed a novel 3D scene flow\ndata augmentation method for global and local motion. By perfectly synthesizing\ntarget point clouds based on augmented motion parameters, we easily obtain lots\nof 3D scene flow labels in point clouds highly consistent with real scenarios.\nOn multiple real-world datasets including LiDAR KITTI, nuScenes, and Argoverse,\nour method outperforms all previous supervised and unsupervised methods without\nrequiring manual labelling. 
Impressively, our method achieves a tenfold\nreduction in EPE3D metric on the LiDAR KITTI dataset, reducing it from $0.190m$\nto a mere $0.008m$ error.", "keywords": [], "authors_list": ["Chaokang Jiang", "Guangming Wang", "Jiuming Liu", "Hesheng Wang", "Zhuang Ma", "Zhenqiang Liu", "LIANG", "Yi Shan", "Dalong Du"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f199"}, "filepath": "data/2309.12378.png", "tags": [], "_media_type": "image", "_rand": 0.9994242190637, "arXiv_link": "https://arxiv.org/abs/2309.12378", "other_link": "", "title": "Unsupervised Semantic Segmentation Through Depth-Guided Feature Correlation and Sampling", "abstract": "Traditionally, training neural networks to perform semantic segmentation\nrequired expensive human-made annotations. But more recently, advances in the\nfield of unsupervised learning have made significant progress on this issue and\ntowards closing the gap to supervised algorithms. To achieve this, semantic\nknowledge is distilled by learning to correlate randomly sampled features from\nimages across an entire dataset. In this work, we build upon these advances by\nincorporating information about the structure of the scene into the training\nprocess through the use of depth information. We achieve this by (1) learning\ndepth-feature correlation by spatially correlate the feature maps with the\ndepth maps to induce knowledge about the structure of the scene and (2)\nimplementing farthest-point sampling to more effectively select relevant\nfeatures by utilizing 3D sampling techniques on depth information of the scene.\nFinally, we demonstrate the effectiveness of our technical contributions\nthrough extensive experimentation and present significant improvements in\nperformance across multiple benchmark datasets.", "keywords": [], "authors_list": ["Leon Sick", "Dominik Engel", "Pedro Hermosilla", "Timo Ropinski"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f19a"}, "filepath": "data/2402.07183.png", "tags": [], "_media_type": "image", "_rand": 0.9992264444739103, "arXiv_link": "https://arxiv.org/abs/2402.07183", "other_link": "", "title": "Random Entangled Tokens for Adversarially Robust Vision Transformer", "abstract": "Deep neural networks (DNNs) are well known to be vulnerable to adversarial\nexamples (AEs). In previous studies, the use of models encrypted with a secret\nkey was demonstrated to be robust against white-box attacks, but not against\nblack-box ones. In this paper, we propose a novel method using the vision\ntransformer (ViT) that is a random ensemble of encrypted models for enhancing\nrobustness against both white-box and black-box attacks. In addition, a\nbenchmark attack method, called AutoAttack, is applied to models to test\nadversarial robustness objectively. In experiments, the method was demonstrated\nto be robust against not only white-box attacks but also black-box ones in an\nimage classification task on the CIFAR-10 and ImageNet datasets. 
The method was\nalso compared with the state-of-the-art in a standardized benchmark for\nadversarial robustness, RobustBench, and it was verified to outperform\nconventional defenses in terms of clean accuracy and robust accuracy.", "keywords": [], "authors_list": ["Huihui Gong", "Minjing Dong", "Siqi Ma", "Seyit Camtepe", "Surya Nepal", "Chang Xu"], "category_name": "Artificial Intelligence", "all_categories": ["Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f19b"}, "filepath": "data/2312.04461.png", "tags": [], "_media_type": "image", "_rand": 0.999768501645326, "arXiv_link": "https://arxiv.org/abs/2312.04461", "other_link": "https://photo-maker.github.io/", "title": "PhotoMaker: Customizing Realistic Human Photos via Stacked ID Embedding", "abstract": "Recent advances in text-to-image generation have made remarkable progress in\nsynthesizing realistic human photos conditioned on given text prompts. However,\nexisting personalized generation methods cannot simultaneously satisfy the\nrequirements of high efficiency, promising identity (ID) fidelity, and flexible\ntext controllability. In this work, we introduce PhotoMaker, an efficient\npersonalized text-to-image generation method, which mainly encodes an arbitrary\nnumber of input ID images into a stack ID embedding for preserving ID\ninformation. Such an embedding, serving as a unified ID representation, can not\nonly encapsulate the characteristics of the same input ID comprehensively, but\nalso accommodate the characteristics of different IDs for subsequent\nintegration. This paves the way for more intriguing and practically valuable\napplications. Besides, to drive the training of our PhotoMaker, we propose an\nID-oriented data construction pipeline to assemble the training data. Under the\nnourishment of the dataset constructed through the proposed pipeline, our\nPhotoMaker demonstrates better ID preservation ability than test-time\nfine-tuning based methods, yet provides significant speed improvements,\nhigh-quality generation results, strong generalization capabilities, and a wide\nrange of applications. Our project page is available at\nhttps://photo-maker.github.io/", "keywords": ["Image and video generation and manipulation", "Efficient and scalable vision", "Biometrics and human analysis"], "authors_list": ["Zhen Li", "Mingdeng Cao", "Xintao Wang", "Zhongang Qi", "Ming-Ming Cheng", "Ying Shan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f19c"}, "filepath": "data/2402.03587.png", "tags": [], "_media_type": "image", "_rand": 0.9998958387796801, "arXiv_link": "https://arxiv.org/abs/2402.03587", "other_link": "", "title": "Hierarchical Correlation Clustering and Tree Preserving Embedding", "abstract": "We study correlation clustering where the pairwise similarities are not known\nin advance. For this purpose, we employ active learning to query pairwise\nsimilarities in a cost-efficient way. We propose a number of effective\ninformation-theoretic acquisition functions based on entropy and information\ngain. 
We extensively investigate the performance of our methods in different\nsettings and demonstrate their superior performance compared to the\nalternatives.", "keywords": [], "authors_list": ["Morteza Haghir Chehreghani", "Mostafa Haghir Chehreghani"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f19d"}, "filepath": "data/2310.08092.png", "tags": [], "_media_type": "image", "_rand": 0.999929091802772, "arXiv_link": "https://arxiv.org/abs/2310.08092", "other_link": "", "title": "CORE-MPI: Consistency Object Removal with Embedding MultiPlane Image", "abstract": "Large image diffusion models enable novel view synthesis with high quality\nand excellent zero-shot capability. However, such models based on\nimage-to-image translation have no guarantee of view consistency, limiting the\nperformance for downstream tasks like 3D reconstruction and image-to-3D\ngeneration. To empower consistency, we propose Consistent123 to synthesize\nnovel views simultaneously by incorporating additional cross-view attention\nlayers and the shared self-attention mechanism. The proposed attention\nmechanism improves the interaction across all synthesized views, as well as the\nalignment between the condition view and novel views. In the sampling stage,\nsuch architecture supports simultaneously generating an arbitrary number of\nviews while training at a fixed length. We also introduce a progressive\nclassifier-free guidance strategy to achieve the trade-off between texture and\ngeometry for synthesized object views. Qualitative and quantitative experiments\nshow that Consistent123 outperforms baselines in view consistency by a large\nmargin. Furthermore, we demonstrate a significant improvement of Consistent123\non varying downstream tasks, showing its great potential in the 3D generation\nfield. The project page is available at consistent-123.github.io.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Donggeun Yoon", "Donghyeon Cho"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f19e"}, "filepath": "data/2306.04744.png", "tags": [], "_media_type": "image", "_rand": 0.9991146623755987, "arXiv_link": "https://arxiv.org/abs/2306.04744", "other_link": "https://github.com/kylemin/WOUAF}.", "title": "WOUAF: Weight Modulation for User Attribution and Fingerprinting in Text-to-Image Diffusion Models", "abstract": "The rapid advancement of generative models, facilitating the creation of\nhyper-realistic images from textual descriptions, has concurrently escalated\ncritical societal concerns such as misinformation. Although providing some\nmitigation, traditional fingerprinting mechanisms fall short in attributing\nresponsibility for the malicious use of synthetic images. This paper introduces\na novel approach to model fingerprinting that assigns responsibility for the\ngenerated images, thereby serving as a potential countermeasure to model\nmisuse. Our method modifies generative models based on each user's unique\ndigital fingerprint, imprinting a unique identifier onto the resultant content\nthat can be traced back to the user. 
This approach, incorporating fine-tuning\ninto Text-to-Image (T2I) tasks using the Stable Diffusion Model, demonstrates\nnear-perfect attribution accuracy with a minimal impact on output quality.\nThrough extensive evaluation, we show that our method outperforms baseline\nmethods with an average improvement of 11\\% in handling image post-processes.\nOur method presents a promising and novel avenue for accountable model\ndistribution and responsible use. Our code is available in\n\\url{https://github.com/kylemin/WOUAF}.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Changhoon Kim", "Kyle Min", "Maitreya Patel", "Sheng Cheng", "'YZ' Yezhou Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f19f"}, "filepath": "data/2404.01976.png", "tags": [], "_media_type": "image", "_rand": 0.9995424574990502, "arXiv_link": "https://arxiv.org/abs/2404.01976", "other_link": "", "title": "Joint-Task Regularization for Partially Labeled Multi-Task Learning", "abstract": "Multi-task learning has become increasingly popular in the machine learning\nfield, but its practicality is hindered by the need for large, labeled\ndatasets. Most multi-task learning methods depend on fully labeled datasets\nwherein each input example is accompanied by ground-truth labels for all target\ntasks. Unfortunately, curating such datasets can be prohibitively expensive and\nimpractical, especially for dense prediction tasks which require per-pixel\nlabels for each image. With this in mind, we propose Joint-Task Regularization\n(JTR), an intuitive technique which leverages cross-task relations to\nsimultaneously regularize all tasks in a single joint-task latent space to\nimprove learning when data is not fully labeled for all tasks. JTR stands out\nfrom existing approaches in that it regularizes all tasks jointly rather than\nseparately in pairs -- therefore, it achieves linear complexity relative to the\nnumber of tasks while previous methods scale quadratically. To demonstrate the\nvalidity of our approach, we extensively benchmark our method across a wide\nvariety of partially labeled scenarios based on NYU-v2, Cityscapes, and\nTaskonomy.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Kento Nishi", "Junsik Kim", "Wanhua Li", "Hanspeter Pfister"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1a0"}, "filepath": "data/2403.03561.png", "tags": [], "_media_type": "image", "_rand": 0.9991842851039696, "arXiv_link": "https://arxiv.org/abs/2403.03561", "other_link": "https://pico-ai-team.github.io/hmd-poser", "title": "HMD-Poser: On-Device Real-time Human Motion Tracking from Scalable Sparse Observations", "abstract": "It is especially challenging to achieve real-time human motion tracking on a\nstandalone VR Head-Mounted Display (HMD) such as Meta Quest and PICO. In this\npaper, we propose HMD-Poser, the first unified approach to recover full-body\nmotions using scalable sparse observations from HMD and body-worn IMUs. In\nparticular, it can support a variety of input scenarios, such as HMD,\nHMD+2IMUs, HMD+3IMUs, etc. 
The scalability of inputs may accommodate users'\nchoices for both high tracking accuracy and easy-to-wear. A lightweight\ntemporal-spatial feature learning network is proposed in HMD-Poser to guarantee\nthat the model runs in real-time on HMDs. Furthermore, HMD-Poser presents\nonline body shape estimation to improve the position accuracy of body joints.\nExtensive experimental results on the challenging AMASS dataset show that\nHMD-Poser achieves new state-of-the-art results in both accuracy and real-time\nperformance. We also build a new free-dancing motion dataset to evaluate\nHMD-Poser's on-device performance and investigate the performance gap between\nsynthetic data and real-captured sensor data. Finally, we demonstrate our\nHMD-Poser with a real-time Avatar-driving application on a commercial HMD. Our\ncode and free-dancing motion dataset are available\nhttps://pico-ai-team.github.io/hmd-poser", "keywords": ["Biometrics and human analysis"], "authors_list": ["Peng Dai", "Yang Zhang", "Tao Liu", "ZhenFan", "Tianyuan Du", "Zhuo Su", "Xiaozheng Zheng", "Zeming Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1a1"}, "filepath": "data/2401.02317.png", "tags": [], "_media_type": "image", "_rand": 0.9996066967302576, "arXiv_link": "https://arxiv.org/abs/2401.02317", "other_link": "https://github.com/zongzi13545329/BA-SAM", "title": "BA-SAM: Scalable Bias-Mode Attention Mask for Segment Anything Model", "abstract": "In this paper, we address the challenge of image resolution variation for the\nSegment Anything Model (SAM). SAM, known for its zero-shot generalizability,\nexhibits a performance degradation when faced with datasets with varying image\nsizes. Previous approaches tend to resize the image to a fixed size or adopt\nstructure modifications, hindering the preservation of SAM's rich prior\nknowledge. Besides, such task-specific tuning necessitates a complete\nretraining of the model, which is cost-expensive and unacceptable for\ndeployment in the downstream tasks. In this paper, we reformulate this issue as\na length extrapolation problem, where token sequence length varies while\nmaintaining a consistent patch size for images of different sizes. To this end,\nwe propose Scalable Bias-Mode Attention Mask (BA-SAM) to enhance SAM's\nadaptability to varying image resolutions while eliminating the need for\nstructure modifications. Firstly, we introduce a new scaling factor to ensure\nconsistent magnitude in the attention layer's dot product values when the token\nsequence length changes. Secondly, we present a bias-mode attention mask that\nallows each token to prioritize neighboring information, mitigating the impact\nof untrained distant information. Our BA-SAM demonstrates efficacy in two\nscenarios: zero-shot and fine-tuning. Extensive evaluation on diverse datasets,\nincluding DIS5K, DUTS, ISIC, COD10K, and COCO, reveals its ability to\nsignificantly mitigate performance degradation in the zero-shot setting and\nachieve state-of-the-art performance with minimal fine-tuning. Furthermore, we\npropose a generalized model and benchmark, showcasing BA-SAM's generalizability\nacross all four datasets simultaneously. 
Code is available at\nhttps://github.com/zongzi13545329/BA-SAM", "keywords": ["Efficient and scalable vision"], "authors_list": ["song yiran", "Qianyu Zhou", "Xiangtai Li", "Deng-Ping Fan", "Xuequan Lu", "Lizhuang Ma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1a2"}, "filepath": "data/2403.18554.png", "tags": [], "_media_type": "image", "_rand": 0.9990250495221887, "arXiv_link": "https://arxiv.org/abs/2403.18554", "other_link": "", "title": "CosalPure: Learning Concept from Group Images for Robust Co-Saliency Detection", "abstract": "Co-salient object detection (CoSOD) aims to identify the common and salient\n(usually in the foreground) regions across a given group of images. Although\nachieving significant progress, state-of-the-art CoSODs could be easily\naffected by some adversarial perturbations, leading to substantial accuracy\nreduction. The adversarial perturbations can mislead CoSODs but do not change\nthe high-level semantic information (e.g., concept) of the co-salient objects.\nIn this paper, we propose a novel robustness enhancement framework by first\nlearning the concept of the co-salient objects based on the input group images\nand then leveraging this concept to purify adversarial perturbations, which are\nsubsequently fed to CoSODs for robustness enhancement. Specifically, we propose\nCosalPure containing two modules, i.e., group-image concept learning and\nconcept-guided diffusion purification. For the first module, we adopt a\npre-trained text-to-image diffusion model to learn the concept of co-salient\nobjects within group images where the learned concept is robust to adversarial\nexamples. For the second module, we map the adversarial image to the latent\nspace and then perform diffusion generation by embedding the learned concept\ninto the noise prediction function as an extra condition. Our method can\neffectively alleviate the influence of the SOTA adversarial attack containing\ndifferent adversarial patterns, including exposure and noise. The extensive\nresults demonstrate that our method could enhance the robustness of CoSODs\nsignificantly.", "keywords": [], "authors_list": ["Jiayi Zhu", "Qing Guo", "Felix Juefei Xu", "Yihao Huang", "Yang Liu", "Geguang Pu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1a3"}, "filepath": "data/2405.00378.png", "tags": [], "_media_type": "image", "_rand": 0.9997821279909499, "arXiv_link": "https://arxiv.org/abs/2405.00378", "other_link": "https://github.com/chy-upc/ABD.", "title": "Adaptive Bidirectional Displacement for Semi-Supervised Medical Image Segmentation", "abstract": "Consistency learning is a central strategy to tackle unlabeled data in\nsemi-supervised medical image segmentation (SSMIS), which enforces the model to\nproduce consistent predictions under the perturbation. However, most current\napproaches solely focus on utilizing a specific single perturbation, which can\nonly cope with limited cases, while employing multiple perturbations\nsimultaneously is hard to guarantee the quality of consistency learning. In\nthis paper, we propose an Adaptive Bidirectional Displacement (ABD) approach to\nsolve the above challenge. 
Specifically, we first design a bidirectional patch\ndisplacement based on reliable prediction confidence for unlabeled data to\ngenerate new samples, which can effectively suppress uncontrollable regions and\nstill retain the influence of input perturbations. Meanwhile, to enforce the\nmodel to learn the potentially uncontrollable content, a bidirectional\ndisplacement operation with inverse confidence is proposed for the labeled\nimages, which generates samples with more unreliable information to facilitate\nmodel learning. Extensive experiments show that ABD achieves new\nstate-of-the-art performances for SSMIS, significantly improving different\nbaselines. Source code is available at https://github.com/chy-upc/ABD.", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Hanyang Chi", "Jian Pang", "Bingfeng Zhang", "Weifeng Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1a4"}, "filepath": "data/2404.05145.png", "tags": [], "_media_type": "image", "_rand": 0.9997835271298751, "arXiv_link": "https://arxiv.org/abs/2404.05145", "other_link": "", "title": "UniMix: Towards Domain Adaptive and Generalizable LiDAR Semantic Segmentation in Adverse Weather", "abstract": "LiDAR semantic segmentation (LSS) is a critical task in autonomous driving\nand has achieved promising progress. However, prior LSS methods are\nconventionally investigated and evaluated on datasets within the same domain in\nclear weather. The robustness of LSS models in unseen scenes and all weather\nconditions is crucial for ensuring safety and reliability in real applications.\nTo this end, we propose UniMix, a universal method that enhances the\nadaptability and generalizability of LSS models. UniMix first leverages\nphysically valid adverse weather simulation to construct a Bridge Domain, which\nserves to bridge the domain gap between the clear weather scenes and the\nadverse weather scenes. Then, a Universal Mixing operator is defined regarding\nspatial, intensity, and semantic distributions to create the intermediate\ndomain with mixed samples from given domains. Integrating the proposed two\ntechniques into a teacher-student framework, UniMix efficiently mitigates the\ndomain gap and enables LSS models to learn weather-robust and domain-invariant\nrepresentations. We devote UniMix to two main setups: 1) unsupervised domain\nadaption, adapting the model from the clear weather source domain to the\nadverse weather target domain; 2) domain generalization, learning a model that\ngeneralizes well to unseen scenes in adverse weather. Extensive experiments\nvalidate the effectiveness of UniMix across different tasks and datasets, all\nachieving superior performance over state-of-the-art methods. 
The code will be\nreleased.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Haimei Zhao", "Jing Zhang", "Zhuo Chen", "Shanshan Zhao", "Dacheng Tao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1a5"}, "filepath": "data/2311.12198.png", "tags": [], "_media_type": "image", "_rand": 0.999337839492638, "arXiv_link": "https://arxiv.org/abs/2311.12198", "other_link": "https://xpandora.github.io/PhysGaussian/", "title": "PhysGaussian: Physics-Integrated 3D Gaussians for Generative Dynamics", "abstract": "We introduce PhysGaussian, a new method that seamlessly integrates physically\ngrounded Newtonian dynamics within 3D Gaussians to achieve high-quality novel\nmotion synthesis. Employing a custom Material Point Method (MPM), our approach\nenriches 3D Gaussian kernels with physically meaningful kinematic deformation\nand mechanical stress attributes, all evolved in line with continuum mechanics\nprinciples. A defining characteristic of our method is the seamless integration\nbetween physical simulation and visual rendering: both components utilize the\nsame 3D Gaussian kernels as their discrete representations. This negates the\nnecessity for triangle/tetrahedron meshing, marching cubes, \"cage meshes,\" or\nany other geometry embedding, highlighting the principle of \"what you see is\nwhat you simulate (WS$^2$).\" Our method demonstrates exceptional versatility\nacross a wide variety of materials--including elastic entities, metals,\nnon-Newtonian fluids, and granular materials--showcasing its strong\ncapabilities in creating diverse visual content with novel viewpoints and\nmovements. Our project page is at: https://xpandora.github.io/PhysGaussian/", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Tianyi Xie", "Zeshun Zong", "Yuxing Qiu", "Xuan Li", "Yutao Feng", "Yin Yang", "Chenfanfu Jiang"], "category_name": "Graphics", "all_categories": ["Graphics", "Artificial Intelligence", "Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1a6"}, "filepath": "data/2308.00353.png", "tags": [], "_media_type": "image", "_rand": 0.9996408385025807, "arXiv_link": "https://arxiv.org/abs/2308.00353", "other_link": "", "title": "RegionPLC: Regional Point-Language Contrastive Learning for Open-World 3D Scene Understanding", "abstract": "Open-world instance-level scene understanding aims to locate and recognize\nunseen object categories that are not present in the annotated dataset. This\ntask is challenging because the model needs to both localize novel 3D objects\nand infer their semantic categories. A key factor for the recent progress in 2D\nopen-world perception is the availability of large-scale image-text pairs from\nthe Internet, which cover a wide range of vocabulary concepts. However, this\nsuccess is hard to replicate in 3D scenarios due to the scarcity of 3D-text\npairs. To address this challenge, we propose to harness pre-trained\nvision-language (VL) foundation models that encode extensive knowledge from\nimage-text pairs to generate captions for multi-view images of 3D scenes. This\nallows us to establish explicit associations between 3D shapes and\nsemantic-rich captions. 
Moreover, to enhance the fine-grained visual-semantic\nrepresentation learning from captions for object-level categorization, we\ndesign hierarchical point-caption association methods to learn semantic-aware\nembeddings that exploit the 3D geometry between 3D points and multi-view\nimages. In addition, to tackle the localization challenge for novel classes in\nthe open-world setting, we develop debiased instance localization, which\ninvolves training object grouping modules on unlabeled data using\ninstance-level pseudo supervision. This significantly improves the\ngeneralization capabilities of instance grouping and thus the ability to\naccurately locate novel objects. We conduct extensive experiments on 3D\nsemantic, instance, and panoptic segmentation tasks, covering indoor and\noutdoor scenes across three datasets. Our method outperforms baseline methods\nby a significant margin in semantic segmentation (e.g. 34.5%$\\sim$65.3%),\ninstance segmentation (e.g. 21.8%$\\sim$54.0%) and panoptic segmentation (e.g.\n14.7%$\\sim$43.3%). Code will be available.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Jihan Yang", "Runyu Ding", "Weipeng DENG", "Zhe Wang", "Xiaojuan Qi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1a7"}, "filepath": "data/2401.00889.png", "tags": [], "_media_type": "image", "_rand": 0.9990756998324539, "arXiv_link": "https://arxiv.org/abs/2401.00889", "other_link": "", "title": "3D Human Pose Perception from Egocentric Stereo Videos", "abstract": "While head-mounted devices are becoming more compact, they provide egocentric\nviews with significant self-occlusions of the device user. Hence, existing\nmethods often fail to accurately estimate complex 3D poses from egocentric\nviews. In this work, we propose a new transformer-based framework to improve\negocentric stereo 3D human pose estimation, which leverages the scene\ninformation and temporal context of egocentric stereo videos. Specifically, we\nutilize 1) depth features from our 3D scene reconstruction module with\nuniformly sampled windows of egocentric stereo frames, and 2) human joint\nqueries enhanced by temporal features of the video inputs. Our method is able\nto accurately estimate human poses even in challenging scenarios, such as\ncrouching and sitting. Furthermore, we introduce two new benchmark datasets,\ni.e., UnrealEgo2 and UnrealEgo-RW (RealWorld). The proposed datasets offer a\nmuch larger number of egocentric stereo views with a wider variety of human\nmotions than the existing datasets, allowing comprehensive evaluation of\nexisting and upcoming methods. Our extensive experiments show that the proposed\napproach significantly outperforms previous methods. 
We will release\nUnrealEgo2, UnrealEgo-RW, and trained models on our project page.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Hiroyasu Akada", "Jian Wang", "Vladislav Golyanik", "Christian Theobalt"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1a8"}, "filepath": "data/2403.15139.png", "tags": [], "_media_type": "image", "_rand": 0.9994099541231117, "arXiv_link": "https://arxiv.org/abs/2403.15139", "other_link": "", "title": "Deep Generative Model based Rate-Distortion for Image Downscaling Assessment", "abstract": "In this paper, we propose Image Downscaling Assessment by Rate-Distortion\n(IDA-RD), a novel measure to quantitatively evaluate image downscaling\nalgorithms. In contrast to image-based methods that measure the quality of\ndownscaled images, ours is process-based that draws ideas from rate-distortion\ntheory to measure the distortion incurred during downscaling. Our main idea is\nthat downscaling and super-resolution (SR) can be viewed as the encoding and\ndecoding processes in the rate-distortion model, respectively, and that a\ndownscaling algorithm that preserves more details in the resulting\nlow-resolution (LR) images should lead to less distorted high-resolution (HR)\nimages in SR. In other words, the distortion should increase as the downscaling\nalgorithm deteriorates. However, it is non-trivial to measure this distortion\nas it requires the SR algorithm to be blind and stochastic. Our key insight is\nthat such requirements can be met by recent SR algorithms based on deep\ngenerative models that can find all matching HR images for a given LR image on\ntheir learned image manifolds. Extensive experimental results show the\neffectiveness of our IDA-RD measure.", "keywords": ["Low-level vision"], "authors_list": ["yuanbang liang", "Bhavesh Garg", "Paul L. Rosin", "Yipeng Qin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1a9"}, "filepath": "data/2312.03031.png", "tags": [], "_media_type": "image", "_rand": 0.9998770439376499, "arXiv_link": "https://arxiv.org/abs/2312.03031", "other_link": "https://github.com/NVlabs/BEV-Planner}", "title": "Is Ego Status All You Need for Open-Loop End-to-End Autonomous Driving?", "abstract": "End-to-end autonomous driving recently emerged as a promising research\ndirection to target autonomy from a full-stack perspective. Along this line,\nmany of the latest works follow an open-loop evaluation setting on nuScenes to\nstudy the planning behavior. In this paper, we delve deeper into the problem by\nconducting thorough analyses and demystifying more devils in the details. We\ninitially observed that the nuScenes dataset, characterized by relatively\nsimple driving scenarios, leads to an under-utilization of perception\ninformation in end-to-end models incorporating ego status, such as the ego\nvehicle's velocity. These models tend to rely predominantly on the ego\nvehicle's status for future path planning. Beyond the limitations of the\ndataset, we also note that current metrics do not comprehensively assess the\nplanning quality, leading to potentially biased conclusions drawn from existing\nbenchmarks. 
To address this issue, we introduce a new metric to evaluate\nwhether the predicted trajectories adhere to the road. We further propose a\nsimple baseline able to achieve competitive results without relying on\nperception annotations. Given the current limitations on the benchmark and\nmetrics, we suggest the community reassess relevant prevailing research and be\ncautious whether the continued pursuit of state-of-the-art would yield\nconvincing and universal conclusions. Code and models are available at\n\\url{https://github.com/NVlabs/BEV-Planner}", "keywords": [], "authors_list": ["Zhiqi Li", "Zhiding Yu", "Shiyi Lan", "Jiahan Li", "Jan Kautz", "Tong Lu", "Jose M. Alvarez"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1aa"}, "filepath": "data/2312.02214.png", "tags": [], "_media_type": "image", "_rand": 0.9991948217499534, "arXiv_link": "https://arxiv.org/abs/2312.02214", "other_link": "https://ustc3dv.github.io/FlashAvatar/", "title": "FlashAvatar: High-fidelity Head Avatar with Efficient Gaussian Embedding", "abstract": "We propose FlashAvatar, a novel and lightweight 3D animatable avatar\nrepresentation that could reconstruct a digital avatar from a short monocular\nvideo sequence in minutes and render high-fidelity photo-realistic images at\n300FPS on a consumer-grade GPU. To achieve this, we maintain a uniform 3D\nGaussian field embedded in the surface of a parametric face model and learn\nextra spatial offset to model non-surface regions and subtle facial details.\nWhile full use of geometric priors can capture high-frequency facial details\nand preserve exaggerated expressions, proper initialization can help reduce the\nnumber of Gaussians, thus enabling super-fast rendering speed. Extensive\nexperimental results demonstrate that FlashAvatar outperforms existing works\nregarding visual quality and personalized details and is almost an order of\nmagnitude faster in rendering speed. Project page:\nhttps://ustc3dv.github.io/FlashAvatar/", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Jun Xiang", "Xuan Gao", "Yudong Guo", "Juyong Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1ab"}, "filepath": "data/2401.10224.png", "tags": [], "_media_type": "image", "_rand": 0.9993127065726977, "arXiv_link": "https://arxiv.org/abs/2401.10224", "other_link": "https://github.com/ragavsachdeva/magi.", "title": "The Manga Whisperer: Automatically Generating Transcriptions for Comics", "abstract": "In the past few decades, Japanese comics, commonly referred to as Manga, have\ntranscended both cultural and linguistic boundaries to become a true worldwide\nsensation. Yet, the inherent reliance on visual cues and illustration within\nmanga renders it largely inaccessible to individuals with visual impairments.\nIn this work, we seek to address this substantial barrier, with the aim of\nensuring that manga can be appreciated and actively engaged by everyone.\nSpecifically, we tackle the problem of diarisation i.e. 
generating a\ntranscription of who said what and when, in a fully automatic way.\n To this end, we make the following contributions: (1) we present a unified\nmodel, Magi, that is able to (a) detect panels, text boxes and character boxes,\n(b) cluster characters by identity (without knowing the number of clusters\napriori), and (c) associate dialogues to their speakers; (2) we propose a novel\napproach that is able to sort the detected text boxes in their reading order\nand generate a dialogue transcript; (3) we annotate an evaluation benchmark for\nthis task using publicly available [English] manga pages. The code, evaluation\ndatasets and the pre-trained model can be found at:\nhttps://github.com/ragavsachdeva/magi.", "keywords": [], "authors_list": ["Ragav Sachdeva", "Andrew Zisserman"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1ac"}, "filepath": "data/2401.11140.png", "tags": [], "_media_type": "image", "_rand": 0.999863332027646, "arXiv_link": "https://arxiv.org/abs/2401.11140", "other_link": "", "title": "SNIDA: Unlocking Few-Shot Object Detection with Non-linear Semantic Decoupling Augmentation", "abstract": "Few-shot object detection(FSOD) aims to design methods to adapt object\ndetectors efficiently with only few annotated samples. Fine-tuning has been\nshown to be an effective and practical approach. However, previous works often\ntake the classical base-novel two stage fine-tuning procedure but ignore the\nimplicit stability-plasticity contradiction among different modules.\nSpecifically, the random re-initialized classifiers need more plasticity to\nadapt to novel samples. The other modules inheriting pre-trained weights demand\nmore stability to reserve their class-agnostic knowledge. Regular fine-tuning\nwhich couples the optimization of these two parts hurts the model\ngeneralization in FSOD scenarios. In this paper, we find that this problem is\nprominent in the end-to-end object detector Sparse R-CNN for its\nmulti-classifier cascaded architecture. We propose to mitigate this\ncontradiction by a new three-stage fine-tuning procedure by introducing an\naddtional plasticity classifier fine-tuning(PCF) stage. We further design the\nmulti-source ensemble(ME) technique to enhance the generalization of the model\nin the final fine-tuning stage. Extensive experiments verify that our method is\neffective in regularizing Sparse R-CNN, outperforming previous methods in the\nFSOD benchmark.", "keywords": [], "authors_list": ["Yanjie Wang", "Xu Zou", "Luxin Yan", "Sheng Zhong", "Jiahuan Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1ad"}, "filepath": "data/2403.16131.png", "tags": [], "_media_type": "image", "_rand": 0.9995360726471784, "arXiv_link": "https://arxiv.org/abs/2403.16131", "other_link": "https://github.com/xiuqhou/Salience-DETR.", "title": "Salience DETR: Enhancing Detection Transformer with Hierarchical Salience Filtering Refinement", "abstract": "DETR-like methods have significantly increased detection performance in an\nend-to-end manner. 
The mainstream two-stage frameworks of them perform dense\nself-attention and select a fraction of queries for sparse cross-attention,\nwhich is proven effective for improving performance but also introduces a heavy\ncomputational burden and high dependence on stable query selection. This paper\ndemonstrates that suboptimal two-stage selection strategies result in scale\nbias and redundancy due to the mismatch between selected queries and objects in\ntwo-stage initialization. To address these issues, we propose hierarchical\nsalience filtering refinement, which performs transformer encoding only on\nfiltered discriminative queries, for a better trade-off between computational\nefficiency and precision. The filtering process overcomes scale bias through a\nnovel scale-independent salience supervision. To compensate for the semantic\nmisalignment among queries, we introduce elaborate query refinement modules for\nstable two-stage initialization. Based on above improvements, the proposed\nSalience DETR achieves significant improvements of +4.0% AP, +0.2% AP, +4.4% AP\non three challenging task-specific detection datasets, as well as 49.2% AP on\nCOCO 2017 with less FLOPs. The code is available at\nhttps://github.com/xiuqhou/Salience-DETR.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Xiuquan Hou", "Meiqin Liu", "Senlin Zhang", "Ping Wei", "Badong Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1ae"}, "filepath": "data/2403.17301.png", "tags": [], "_media_type": "image", "_rand": 0.9996357677528199, "arXiv_link": "https://arxiv.org/abs/2403.17301", "other_link": "", "title": "Physical 3D Adversarial Attacks against Monocular Depth Estimation in Autonomous Driving", "abstract": "Deep learning-based monocular depth estimation (MDE), extensively applied in\nautonomous driving, is known to be vulnerable to adversarial attacks. Previous\nphysical attacks against MDE models rely on 2D adversarial patches, so they\nonly affect a small, localized region in the MDE map but fail under various\nviewpoints. To address these limitations, we propose 3D Depth Fool\n(3D$^2$Fool), the first 3D texture-based adversarial attack against MDE models.\n3D$^2$Fool is specifically optimized to generate 3D adversarial textures\nagnostic to model types of vehicles and to have improved robustness in bad\nweather conditions, such as rain and fog. Experimental results validate the\nsuperior performance of our 3D$^2$Fool across various scenarios, including\nvehicles, MDE models, weather conditions, and viewpoints. 
Real-world\nexperiments with printed 3D textures on physical vehicle models further\ndemonstrate that our 3D$^2$Fool can cause an MDE error of over 10 meters.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Junhao Zheng", "Chenhao Lin", "Jiahao Sun", "Zhengyu Zhao", "Qian Li", "Chao Shen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Cryptography and Security"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1af"}, "filepath": "data/2312.10540.png", "tags": [], "_media_type": "image", "_rand": 0.9996229162852994, "arXiv_link": "https://arxiv.org/abs/2312.10540", "other_link": "", "title": "VecFusion: Vector Font Generation with Diffusion", "abstract": "We present VecFusion, a new neural architecture that can generate vector\nfonts with varying topological structures and precise control point positions.\nOur approach is a cascaded diffusion model which consists of a raster diffusion\nmodel followed by a vector diffusion model. The raster model generates\nlow-resolution, rasterized fonts with auxiliary control point information,\ncapturing the global style and shape of the font, while the vector model\nsynthesizes vector fonts conditioned on the low-resolution raster fonts from\nthe first stage. To synthesize long and complex curves, our vector diffusion\nmodel uses a transformer architecture and a novel vector representation that\nenables the modeling of diverse vector geometry and the precise prediction of\ncontrol points. Our experiments show that, in contrast to previous generative\nmodels for vector graphics, our new cascaded vector diffusion model generates\nhigher quality vector fonts, with complex structures and diverse styles.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Vikas Thamizharasan", "Difan Liu", "Shantanu Agarwal", "Matthew Fisher", "Micha\u00ebl Gharbi", "Oliver Wang", "Alec Jacobson", "Evangelos Kalogerakis"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1b0"}, "filepath": "data/2401.13856.png", "tags": [], "_media_type": "image", "_rand": 0.9993800058372524, "arXiv_link": "https://arxiv.org/abs/2401.13856", "other_link": "https://github.com/10Ring/LAA-Net.", "title": "LAA-Net: Localized Artifact Attention Network for Quality-Agnostic and Generalizable Deepfake Detection", "abstract": "This paper introduces a novel approach for high-quality deepfake detection\ncalled Localized Artifact Attention Network (LAA-Net). Existing methods for\nhigh-quality deepfake detection are mainly based on a supervised binary\nclassifier coupled with an implicit attention mechanism. As a result, they do\nnot generalize well to unseen manipulations. To handle this issue, two main\ncontributions are made. First, an explicit attention mechanism within a\nmulti-task learning framework is proposed. By combining heatmap-based and\nself-consistency attention strategies, LAA-Net is forced to focus on a few\nsmall artifact-prone vulnerable regions. Second, an Enhanced Feature Pyramid\nNetwork (E-FPN) is proposed as a simple and effective mechanism for spreading\ndiscriminative low-level features into the final feature output, with the\nadvantage of limiting redundancy. 
Experiments performed on several benchmarks\nshow the superiority of our approach in terms of Area Under the Curve (AUC) and\nAverage Precision (AP). The code is available at\nhttps://github.com/10Ring/LAA-Net.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Dat NGUYEN", "Nesryne Mejri", "Inder Pal Singh", "Polina Kuleshova", "Marcella Astrid", "Anis Kacem", "Enjie Ghorbel", "Djamila Aouada"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1b1"}, "filepath": "data/2312.05849.png", "tags": [], "_media_type": "image", "_rand": 0.9997837681468299, "arXiv_link": "https://arxiv.org/abs/2312.05849", "other_link": "https://jiuntian.github.io/interactdiffusion.", "title": "InteractDiffusion: Interaction Control in Text-to-Image Diffusion Models", "abstract": "Large-scale text-to-image (T2I) diffusion models have showcased incredible\ncapabilities in generating coherent images based on textual descriptions,\nenabling vast applications in content generation. While recent advancements\nhave introduced control over factors such as object localization, posture, and\nimage contours, a crucial gap remains in our ability to control the\ninteractions between objects in the generated content. Well-controlling\ninteractions in generated images could yield meaningful applications, such as\ncreating realistic scenes with interacting characters. In this work, we study\nthe problems of conditioning T2I diffusion models with Human-Object Interaction\n(HOI) information, consisting of a triplet label (person, action, object) and\ncorresponding bounding boxes. We propose a pluggable interaction control model,\ncalled InteractDiffusion that extends existing pre-trained T2I diffusion models\nto enable them being better conditioned on interactions. Specifically, we\ntokenize the HOI information and learn their relationships via interaction\nembeddings. A conditioning self-attention layer is trained to map HOI tokens to\nvisual tokens, thereby conditioning the visual tokens better in existing T2I\ndiffusion models. Our model attains the ability to control the interaction and\nlocation on existing T2I diffusion models, which outperforms existing baselines\nby a large margin in HOI detection score, as well as fidelity in FID and KID.\nProject page: https://jiuntian.github.io/interactdiffusion.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Jiun Tian Hoe", "Xudong Jiang", "Chee Seng Chan", "Yap-peng Tan", "Weipeng Hu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1b2"}, "filepath": "data/2403.00939.png", "tags": [], "_media_type": "image", "_rand": 0.9994535125590427, "arXiv_link": "https://arxiv.org/abs/2403.00939", "other_link": "https://github.com/preddy5/G3DR", "title": "G3DR: Generative 3D Reconstruction in ImageNet", "abstract": "We introduce a novel 3D generative method, Generative 3D Reconstruction\n(G3DR) in ImageNet, capable of generating diverse and high-quality 3D objects\nfrom single images, addressing the limitations of existing methods. 
At the\nheart of our framework is a novel depth regularization technique that enables\nthe generation of scenes with high-geometric fidelity. G3DR also leverages a\npretrained language-vision model, such as CLIP, to enable reconstruction in\nnovel views and improve the visual realism of generations. Additionally, G3DR\ndesigns a simple but effective sampling procedure to further improve the\nquality of generations. G3DR offers diverse and efficient 3D asset generation\nbased on class or text conditioning. Despite its simplicity, G3DR is able to\nbeat state-of-theart methods, improving over them by up to 22% in perceptual\nmetrics and 90% in geometry scores, while needing only half of the training\ntime. Code is available at https://github.com/preddy5/G3DR", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Pradyumna Reddy", "Ismail Elezi", "Jiankang Deng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1b3"}, "filepath": "data/2312.09249.png", "tags": [], "_media_type": "image", "_rand": 0.9997404839186227, "arXiv_link": "https://arxiv.org/abs/2312.09249", "other_link": "https://sarahweiii.github.io/zerorf/", "title": "ZeroRF: Fast Sparse View 360\u00b0 Reconstruction with Zero Pretraining", "abstract": "We present ZeroRF, a novel per-scene optimization method addressing the\nchallenge of sparse view 360{\\deg} reconstruction in neural field\nrepresentations. Current breakthroughs like Neural Radiance Fields (NeRF) have\ndemonstrated high-fidelity image synthesis but struggle with sparse input\nviews. Existing methods, such as Generalizable NeRFs and per-scene optimization\napproaches, face limitations in data dependency, computational cost, and\ngeneralization across diverse scenarios. To overcome these challenges, we\npropose ZeroRF, whose key idea is to integrate a tailored Deep Image Prior into\na factorized NeRF representation. Unlike traditional methods, ZeroRF\nparametrizes feature grids with a neural network generator, enabling efficient\nsparse view 360{\\deg} reconstruction without any pretraining or additional\nregularization. Extensive experiments showcase ZeroRF's versatility and\nsuperiority in terms of both quality and speed, achieving state-of-the-art\nresults on benchmark datasets. ZeroRF's significance extends to applications in\n3D content generation and editing. Project page:\nhttps://sarahweiii.github.io/zerorf/", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Ruoxi Shi", "Xinyue Wei", "Cheng Wang", "Hao Su"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1b4"}, "filepath": "data/2311.13614.png", "tags": [], "_media_type": "image", "_rand": 0.9995621123766975, "arXiv_link": "https://arxiv.org/abs/2311.13614", "other_link": "https://github.com/Yuqifan1117/HalluciDoctor}.", "title": "HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data", "abstract": "Multi-modal Large Language Models (MLLMs) tuned on machine-generated\ninstruction-following data have demonstrated remarkable performance in various\nmulti-modal understanding and generation tasks. 
However, the hallucinations\ninherent in machine-generated data, which could lead to hallucinatory outputs\nin MLLMs, remain under-explored. This work aims to investigate various\nhallucinations (i.e., object, relation, attribute hallucinations) and mitigate\nthose hallucinatory toxicities in large-scale machine-generated visual\ninstruction datasets. Drawing on the human ability to identify factual errors,\nwe present a novel hallucination detection and elimination framework,\nHalluciDoctor, based on the cross-checking paradigm. We use our framework to\nidentify and eliminate hallucinations in the training data automatically.\nInterestingly, HalluciDoctor also indicates that spurious correlations arising\nfrom long-tail object co-occurrences contribute to hallucinations. Based on\nthat, we execute counterfactual visual instruction expansion to balance data\ndistribution, thereby enhancing MLLMs' resistance to hallucinations.\nComprehensive experiments on hallucination evaluation benchmarks show that our\nmethod successfully mitigates 44.6% hallucinations relatively and maintains\ncompetitive performance compared to LLaVA. The data and code for this paper are\npublicly available. \\url{https://github.com/Yuqifan1117/HalluciDoctor}.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Qifan Yu", "Juncheng Li", "Longhui Wei", "Liang Pang", "Wentao Ye", "Bosheng Qin", "Siliang Tang", "Qi Tian", "Yueting Zhuang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1b5"}, "filepath": "data/2311.15939.png", "tags": [], "_media_type": "image", "_rand": 0.9990663832854206, "arXiv_link": "https://arxiv.org/abs/2311.15939", "other_link": "", "title": "Mudslide: A Universal Nuclear Instance Segmentation Method", "abstract": "Nucleus instance segmentation in histology images is crucial for a broad\nspectrum of clinical applications. Current dominant algorithms rely on\nregression of nuclear proxy maps. Distinguishing nucleus instances from the\nestimated maps requires carefully curated post-processing, which is error-prone\nand parameter-sensitive. Recently, the Segment Anything Model (SAM) has earned\nhuge attention in medical image segmentation, owing to its impressive\ngeneralization ability and promptable property. Nevertheless, its potential on\nnucleus instance segmentation remains largely underexplored. In this paper, we\npresent a novel prompt-driven framework that consists of a nucleus prompter and\nSAM for automatic nucleus instance segmentation. Specifically, the prompter\nlearns to generate a unique point prompt for each nucleus while the SAM is\nfine-tuned to output the corresponding mask for the prompted nucleus.\nFurthermore, we propose the inclusion of adjacent nuclei as negative prompts to\nenhance the model's capability to identify overlapping nuclei. Without\ncomplicated post-processing, our proposed method sets a new state-of-the-art\nperformance on three challenging benchmarks. 
Code is available at\n\\url{github.com/windygoo/PromptNucSeg}", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Jun Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1b6"}, "filepath": "data/2403.19080.png", "tags": [], "_media_type": "image", "_rand": 0.9998556441885772, "arXiv_link": "https://arxiv.org/abs/2403.19080", "other_link": "", "title": "MMCert: Provable Defense against Adversarial Attacks to Multi-modal Models", "abstract": "Different from a unimodal model whose input is from a single modality, the\ninput (called multi-modal input) of a multi-modal model is from multiple\nmodalities such as image, 3D points, audio, text, etc. Similar to unimodal\nmodels, many existing studies show that a multi-modal model is also vulnerable\nto adversarial perturbation, where an attacker could add small perturbation to\nall modalities of a multi-modal input such that the multi-modal model makes\nincorrect predictions for it. Existing certified defenses are mostly designed\nfor unimodal models, which achieve sub-optimal certified robustness guarantees\nwhen extended to multi-modal models as shown in our experimental results. In\nour work, we propose MMCert, the first certified defense against adversarial\nattacks to a multi-modal model. We derive a lower bound on the performance of\nour MMCert under arbitrary adversarial attacks with bounded perturbations to\nboth modalities (e.g., in the context of auto-driving, we bound the number of\nchanged pixels in both RGB image and depth image). We evaluate our MMCert using\ntwo benchmark datasets: one for the multi-modal road segmentation task and the\nother for the multi-modal emotion recognition task. Moreover, we compare our\nMMCert with a state-of-the-art certified defense extended from unimodal models.\nOur experimental results show that our MMCert outperforms the baseline.", "keywords": [], "authors_list": ["Yanting Wang", "Hongye Fu", "Wei Zou", "Jinyuan Jia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Cryptography and Security"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1b7"}, "filepath": "data/2309.12790.png", "tags": [], "_media_type": "image", "_rand": 0.9999648044609397, "arXiv_link": "https://arxiv.org/abs/2309.12790", "other_link": "https://github.com/ucwxb/NTO3D.", "title": "NTO3D: Neural Target Object 3D Reconstruction with Segment Anything", "abstract": "Neural 3D reconstruction from multi-view images has recently attracted\nincreasing attention from the community. Existing methods normally learn a\nneural field for the whole scene, while it is still under-explored how to\nreconstruct a target object indicated by users. Considering the Segment\nAnything Model (SAM) has shown effectiveness in segmenting any 2D images, in\nthis paper, we propose NTO3D, a novel high-quality Neural Target Object 3D\n(NTO3D) reconstruction method, which leverages the benefits of both neural\nfield and SAM. We first propose a novel strategy to lift the multi-view 2D\nsegmentation masks of SAM into a unified 3D occupancy field. The 3D occupancy\nfield is then projected into 2D space and generates the new prompts for SAM.\nThis process is iterative until convergence to separate the target object from\nthe scene. 
After this, we lift the 2D features of the SAM encoder into a\n3D feature field in order to improve the reconstruction quality of the target\nobject. NTO3D lifts the 2D masks and features of SAM into the 3D neural field\nfor high-quality neural target object 3D reconstruction. We conduct detailed\nexperiments on several benchmark datasets to demonstrate the advantages of our\nmethod. The code will be available at: https://github.com/ucwxb/NTO3D.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Xiaobao Wei", "Renrui Zhang", "Jiarui Wu", "Jiaming Liu", "Ming Lu", "Yandong Guo", "Shanghang Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1b8"}, "filepath": "data/2403.07277v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999680215533836, "arXiv_link": "https://arxiv.org/abs/2403.07277v1", "other_link": "", "title": "A Bayesian Approach to OOD Robustness in Image Classification", "abstract": "An important and unsolved problem in computer vision is to ensure that the\nalgorithms are robust to changes in image domains. We address this problem in\nthe scenario where we have access to images from the target domains but no\nannotations. Motivated by the challenges of the OOD-CV benchmark where we\nencounter real world Out-of-Domain (OOD) nuisances and occlusion, we introduce\na novel Bayesian approach to OOD robustness for object classification. Our work\nextends Compositional Neural Networks (CompNets), which have been shown to be\nrobust to occlusion but degrade badly when tested on OOD data. We exploit the\nfact that CompNets contain a generative head defined over feature vectors\nrepresented by von Mises-Fisher (vMF) kernels, which correspond roughly to\nobject parts, and can be learned without supervision. We observe that some vMF\nkernels are similar between different domains, while others are not. This\nenables us to learn a transitional dictionary of vMF kernels that are\nintermediate between the source and target domains and train the generative\nmodel on this dictionary using the annotations on the source domain, followed\nby iterative refinement. This approach, termed Unsupervised Generative\nTransition (UGT), performs very well in OOD scenarios even when occlusion is\npresent. UGT is evaluated on different OOD benchmarks including the OOD-CV\ndataset, several popular datasets (e.g., ImageNet-C [9]), artificial image\ncorruptions (including adding occluders), and synthetic-to-real domain\ntransfer, and does well in all scenarios, outperforming SOTA alternatives (e.g.\nby up to 10% top-1 accuracy on the Occluded OOD-CV dataset).", "keywords": [], "authors_list": ["Prakhar Kaushik", "Adam Kortylewski", "Alan L. 
Yuille"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1b9"}, "filepath": "data/2311.11016.png", "tags": [], "_media_type": "image", "_rand": 0.9991979678103918, "arXiv_link": "https://arxiv.org/abs/2311.11016", "other_link": "", "title": "SNI-SLAM: Semantic Neural Implicit SLAM", "abstract": "We propose SNI-SLAM, a semantic SLAM system utilizing neural implicit\nrepresentation, that simultaneously performs accurate semantic mapping,\nhigh-quality surface reconstruction, and robust camera tracking. In this\nsystem, we introduce hierarchical semantic representation to allow multi-level\nsemantic comprehension for top-down structured semantic mapping of the scene.\nIn addition, to fully utilize the correlation between multiple attributes of\nthe environment, we integrate appearance, geometry and semantic features\nthrough cross-attention for feature collaboration. This strategy enables a more\nmultifaceted understanding of the environment, thereby allowing SNI-SLAM to\nremain robust even when a single attribute is defective. Then, we design an\ninternal fusion-based decoder to obtain semantic, RGB, Truncated Signed\nDistance Field (TSDF) values from multi-level features for accurate decoding.\nFurthermore, we propose a feature loss to update the scene representation at\nthe feature level. Compared with low-level losses such as RGB loss and depth\nloss, our feature loss is capable of guiding the network optimization at a\nhigher level. Our SNI-SLAM method demonstrates superior performance over all\nrecent NeRF-based SLAM methods in terms of mapping and tracking accuracy on\nReplica and ScanNet datasets, while also showing excellent capabilities in\naccurate semantic segmentation and real-time semantic mapping.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Siting Zhu", "Guangming Wang", "Hermann Blum", "Jiuming Liu", "LiangSong", "Marc Pollefeys", "Hesheng Wang"], "category_name": "Robotics", "all_categories": ["Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1ba"}, "filepath": "data/2311.12062.png", "tags": [], "_media_type": "image", "_rand": 0.9992074861593901, "arXiv_link": "https://arxiv.org/abs/2311.12062", "other_link": "", "title": "PBWR: Parametric Building Wireframe Reconstruction from Aerial LiDAR Point Clouds", "abstract": "In this paper, we present an end-to-end 3D building wireframe reconstruction\nmethod to regress edges directly from aerial LiDAR point clouds. Our method,\nnamed Parametric Building Wireframe Reconstruction (PBWR), takes aerial LiDAR\npoint clouds and initial edge entities as input, and fully uses the self-attention\nmechanism of transformers to regress edge parameters without any intermediate\nsteps such as corner prediction. We propose an edge non-maximum suppression\n(E-NMS) module based on edge similarity to remove redundant edges. Additionally,\na dedicated edge loss function is utilized to guide the PBWR in regressing\nedge parameters, where a simple edge distance loss is not suitable. 
In our\nexperiments, we demonstrate state-of-the-art results on the Building3D dataset,\nachieving an improvement of approximately 36% in entry-level dataset edge\naccuracy and around 42% improvement in the Tallinn dataset.", "keywords": ["Remote sensing and photogrammetry", "Deep learning architectures and techniques"], "authors_list": ["Shangfeng Huang", "Ruisheng Wang", "Bo Guo", "Hongxin Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1bb"}, "filepath": "data/2311.16096.png", "tags": [], "_media_type": "image", "_rand": 0.9994230126129189, "arXiv_link": "https://arxiv.org/abs/2311.16096", "other_link": "", "title": "Animatable Gaussians: Learning Pose-dependent Gaussian Maps for High-fidelity Human Avatar Modeling", "abstract": "Modeling animatable human avatars from RGB videos is a long-standing and\nchallenging problem. Recent works usually adopt MLP-based neural radiance\nfields (NeRF) to represent 3D humans, but it remains difficult for pure MLPs to\nregress pose-dependent garment details. To this end, we introduce Animatable\nGaussians, a new avatar representation that leverages powerful 2D CNNs and 3D\nGaussian splatting to create high-fidelity avatars. To associate 3D Gaussians\nwith the animatable avatar, we learn a parametric template from the input\nvideos, and then parameterize the template on two front & back canonical\nGaussian maps where each pixel represents a 3D Gaussian. The learned template\nis adaptive to the wearing garments for modeling looser clothes like dresses.\nSuch template-guided 2D parameterization enables us to employ a powerful\nStyleGAN-based CNN to learn the pose-dependent Gaussian maps for modeling\ndetailed dynamic appearances. Furthermore, we introduce a pose projection\nstrategy for better generalization given novel poses. To tackle the realistic\nrelighting of animatable avatars, we introduce physically-based rendering into\nthe avatar representation for decomposing avatar materials and environment\nillumination. Overall, our method can create lifelike avatars with dynamic,\nrealistic, generalized and relightable appearances. Experiments show that our\nmethod outperforms other state-of-the-art approaches.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Zhe Li", "Zerong Zheng", "Lizhen Wang", "Yebin Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1bc"}, "filepath": "data/2403.07684.png", "tags": [], "_media_type": "image", "_rand": 0.9996634752761739, "arXiv_link": "https://arxiv.org/abs/2403.07684", "other_link": "", "title": "Genuine Knowledge from Practice: Diffusion Test-Time Adaptation for Video Adverse Weather Removal", "abstract": "Real-world vision tasks frequently suffer from the appearance of unexpected\nadverse weather conditions, including rain, haze, snow, and raindrops. In the\nlast decade, convolutional neural networks and vision transformers have yielded\noutstanding results in single-weather video removal. However, due to the\nabsence of appropriate adaptation, most of them fail to generalize to other\nweather conditions. 
Although ViWS-Net is proposed to remove adverse weather\nconditions in videos with a single set of pre-trained weights, it is heavily\nbiased toward the weather seen at train time and degrades when facing unseen\nweather at test time. In this work, we introduce test-time adaptation into\nadverse weather removal in videos, and propose the first framework that\nintegrates test-time adaptation into the iterative diffusion reverse process.\nSpecifically, we devise a diffusion-based network with a novel temporal noise\nmodel to efficiently explore frame-correlated information in degraded video\nclips at the training stage. During the inference stage, we introduce a proxy task\nnamed Diffusion Tubelet Self-Calibration to learn the primer distribution of\nthe test video stream and optimize the model by approximating the temporal noise\nmodel for online adaptation. Experimental results on benchmark datasets\ndemonstrate that our Test-Time Adaptation method with Diffusion-based\nnetwork (Diff-TTA) outperforms state-of-the-art methods in terms of restoring\nvideos degraded by seen weather conditions. Its generalization capability is\nalso validated with unseen weather conditions in both synthesized and\nreal-world videos.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Yijun Yang", "Hongtao Wu", "Angelica I. Aviles-Rivero", "Yulun Zhang", "Jing Qin", "Lei Zhu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1bd"}, "filepath": "data/2404.13541.png", "tags": [], "_media_type": "image", "_rand": 0.9991872348867119, "arXiv_link": "https://arxiv.org/abs/2404.13541", "other_link": "", "title": "Generalizable Novel-View Synthesis using a Stereo Camera", "abstract": "In this paper, we propose the first generalizable view synthesis approach\nthat specifically targets multi-view stereo-camera images. Since recent stereo\nmatching has demonstrated accurate geometry prediction, we introduce stereo\nmatching into novel-view synthesis for high-quality geometry reconstruction. To\nthis end, this paper proposes a novel framework, dubbed StereoNeRF, which\nintegrates stereo matching into a NeRF-based generalizable view synthesis\napproach. StereoNeRF is equipped with three key components to effectively\nexploit stereo matching in novel-view synthesis: a stereo feature extractor, a\ndepth-guided plane-sweeping, and a stereo depth loss. Moreover, we propose the\nStereoNVS dataset, the first multi-view dataset of stereo-camera images,\nencompassing a wide variety of both real and synthetic scenes. 
Our experimental\nresults demonstrate that StereoNeRF surpasses previous approaches in\ngeneralizable view synthesis.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Haechan Lee", "Wonjoon Jin", "Seung-Hwan Baek", "Sunghyun Cho"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1be"}, "filepath": "data/2404.04430.png", "tags": [], "_media_type": "image", "_rand": 0.999145061609117, "arXiv_link": "https://arxiv.org/abs/2404.04430", "other_link": "", "title": "PhysPT: Physics-aware Pretrained Transformer for Estimating Human Dynamics from Monocular Videos", "abstract": "While current methods have shown promising progress on estimating 3D human\nmotion from monocular videos, their motion estimates are often physically\nunrealistic because they mainly consider kinematics. In this paper, we\nintroduce Physics-aware Pretrained Transformer (PhysPT), which improves\nkinematics-based motion estimates and infers motion forces. PhysPT exploits a\nTransformer encoder-decoder backbone to effectively learn human dynamics in a\nself-supervised manner. Moreover, it incorporates physics principles governing\nhuman motion. Specifically, we build a physics-based body representation and\ncontact force model. We leverage them to impose novel physics-inspired training\nlosses (i.e., force loss, contact loss, and Euler-Lagrange loss), enabling\nPhysPT to capture physical properties of the human body and the forces it\nexperiences. Experiments demonstrate that, once trained, PhysPT can be directly\napplied to kinematics-based estimates to significantly enhance their physical\nplausibility and generate favourable motion forces. Furthermore, we show that\nthese physically meaningful quantities translate into improved accuracy of an\nimportant downstream task: human action recognition.", "keywords": ["Biometrics and human analysis", "Computational imaging and physics-based vision"], "authors_list": ["Yufei Zhang", "Jeffrey Kephart", "Zijun Cui", "Qiang Ji"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1bf"}, "filepath": "data/2306.12041v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993668544364516, "arXiv_link": "https://arxiv.org/abs/2306.12041v2", "other_link": "https://github.com/ristea/aed-mae.", "title": "Self-Distilled Masked Auto-Encoders are Efficient Video Anomaly Detectors", "abstract": "We propose an efficient abnormal event detection model based on a lightweight\nmasked auto-encoder (AE) applied at the video frame level. The novelty of the\nproposed model is threefold. First, we introduce an approach to weight tokens\nbased on motion gradients, thus shifting the focus from the static background\nscene to the foreground objects. Second, we integrate a teacher decoder and a\nstudent decoder into our architecture, leveraging the discrepancy between the\noutputs given by the two decoders to improve anomaly detection. Third, we\ngenerate synthetic abnormal events to augment the training videos, and task the\nmasked AE model to jointly reconstruct the original frames (without anomalies)\nand the corresponding pixel-level anomaly maps. 
Our design leads to an\nefficient and effective model, as demonstrated by the extensive experiments\ncarried out on four benchmarks: Avenue, ShanghaiTech, UBnormal and UCSD Ped2.\nThe empirical results show that our model achieves an excellent trade-off\nbetween speed and accuracy, obtaining competitive AUC scores, while processing\n1655 FPS. Hence, our model is between 8 and 70 times faster than competing\nmethods. We also conduct an ablation study to justify our design. Our code is\nfreely available at: https://github.com/ristea/aed-mae.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Nicolae Ristea", "Florinel Croitoru", "Radu Tudor Ionescu", "Marius Popescu", "Fahad Shahbaz Khan", "Mubarak Shah"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1c0"}, "filepath": "data/2402.18447.png", "tags": [], "_media_type": "image", "_rand": 0.9995952030680613, "arXiv_link": "https://arxiv.org/abs/2402.18447", "other_link": "", "title": "Prompt-Driven Dynamic Object-Centric Learning for Single Domain Generalization", "abstract": "Single-domain generalization aims to learn a model from single source domain\ndata to achieve generalized performance on other unseen target domains.\nExisting works primarily focus on improving the generalization ability of\nstatic networks. However, static networks are unable to dynamically adapt to\nthe diverse variations in different image scenes, leading to limited\ngeneralization capability. Different scenes exhibit varying levels of\ncomplexity, and the complexity of images further varies significantly in\ncross-domain scenarios. In this paper, we propose a dynamic object-centric\nperception network based on prompt learning, aiming to adapt to the variations\nin image complexity. Specifically, we propose an object-centric gating module\nbased on prompt learning to focus attention on the object-centric features\nguided by the various scene prompts. Then, with the object-centric gating\nmasks, the dynamic selective module dynamically selects highly correlated\nfeature regions in both spatial and channel dimensions enabling the model to\nadaptively perceive object-centric relevant features, thereby enhancing the\ngeneralization capability. Extensive experiments were conducted on\nsingle-domain generalization tasks in image classification and object\ndetection. 
The experimental results demonstrate that our approach outperforms\nstate-of-the-art methods, which validates the effectiveness and generality of\nour proposed method.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Deng Li", "Aming Wu", "Yaowei Wang", "Yahong Han"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1c1"}, "filepath": "data/2404.04960.png", "tags": [], "_media_type": "image", "_rand": 0.9996705452924403, "arXiv_link": "https://arxiv.org/abs/2404.04960", "other_link": "https://github.com/YtongXie/PairAug}.", "title": "PairAug: What Can Augmented Image-Text Pairs Do for Radiology?", "abstract": "Current vision-language pre-training (VLP) methodologies predominantly depend\non paired image-text datasets, a resource that is challenging to acquire in\nradiology due to privacy considerations and labelling complexities. Data\naugmentation provides a practical solution to overcome the issue of data\nscarcity; however, most augmentation methods exhibit a limited focus,\nprioritising either image or text augmentation exclusively. Acknowledging this\nlimitation, our objective is to devise a framework capable of concurrently\naugmenting medical image and text data. We design a Pairwise Augmentation\n(PairAug) approach that contains an Inter-patient Augmentation (InterAug)\nbranch and an Intra-patient Augmentation (IntraAug) branch. Specifically, the\nInterAug branch of our approach generates radiology images using synthesised\nyet plausible reports derived from a Large Language Model (LLM). The generated\npairs can be considered a collection of new patient cases since they are\nartificially created and may not exist in the original dataset. In contrast,\nthe IntraAug branch uses newly generated reports to manipulate images. This\nprocess allows us to create new paired data for each individual with diverse\nmedical conditions. Our extensive experiments on various downstream tasks\ncovering medical image classification zero-shot and fine-tuning analysis\ndemonstrate that our PairAug, concurrently expanding both image and text data,\nsubstantially outperforms image-/text-only expansion baselines and advanced\nmedical VLP baselines. Our code is released at\n\\url{https://github.com/YtongXie/PairAug}.", "keywords": ["Multimodal models and vision-language models", "Medical imaging and biological vision"], "authors_list": ["Yutong Xie", "Qi Chen", "Sinuo Wang", "Minh-Son To", "Iris Lee", "Ee Win Khoo", "Kerolos Hendy", "Daniel Koh", "Yong Xia", "Qi Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1c2"}, "filepath": "data/2309.16650.png", "tags": [], "_media_type": "image", "_rand": 0.999353743179594, "arXiv_link": "https://arxiv.org/abs/2309.16650", "other_link": "https://concept-graphs.github.io/", "title": "CLIP-Driven Open-Vocabulary 3D Scene Graph Generation via Cross-Modality Contrastive Learning", "abstract": "For robots to perform a wide variety of tasks, they require a 3D\nrepresentation of the world that is semantically rich, yet compact and\nefficient for task-driven perception and planning. Recent approaches have\nattempted to leverage features from large vision-language models to encode\nsemantics in 3D representations. 
However, these approaches tend to produce maps\nwith per-point feature vectors, which do not scale well in larger environments,\nnor do they contain semantic spatial relationships between entities in the\nenvironment, which are useful for downstream planning. In this work, we propose\nConceptGraphs, an open-vocabulary graph-structured representation for 3D\nscenes. ConceptGraphs is built by leveraging 2D foundation models and fusing\ntheir output to 3D by multi-view association. The resulting representations\ngeneralize to novel semantic classes, without the need to collect large 3D\ndatasets or finetune models. We demonstrate the utility of this representation\nthrough a number of downstream planning tasks that are specified through\nabstract (language) prompts and require complex reasoning over spatial and\nsemantic concepts. (Project page: https://concept-graphs.github.io/ Explainer\nvideo: https://youtu.be/mRhNkQwRYnc )", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Lianggangxu Chen", "Xuejiao Wang", "Jiale Lu", "Shaohui Lin", "Changbo Wang", "Gaoqi He"], "category_name": "Robotics", "all_categories": ["Robotics", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1c3"}, "filepath": "data/2312.05716.png", "tags": [], "_media_type": "image", "_rand": 0.9993402728822728, "arXiv_link": "https://arxiv.org/abs/2312.05716", "other_link": "https://github.com/DongXzz/RoLI}.", "title": "Initialization Matters for Adversarial Transfer Learning", "abstract": "With the prevalence of the Pretraining-Finetuning paradigm in transfer\nlearning, the robustness of downstream tasks has become a critical concern. In\nthis work, we delve into adversarial robustness in transfer learning and reveal\nthe critical role of initialization, including both the pretrained model and\nthe linear head. First, we discover the necessity of an adversarially robust\npretrained model. Specifically, we reveal that with a standard pretrained\nmodel, Parameter-Efficient Finetuning (PEFT) methods either fail to be\nadversarially robust or continue to exhibit significantly degraded adversarial\nrobustness on downstream tasks, even with adversarial training during\nfinetuning. Leveraging a robust pretrained model, surprisingly, we observe that\na simple linear probing can outperform full finetuning and other PEFT methods\nwith random initialization on certain datasets. We further identify that linear\nprobing excels in preserving robustness from the robust pretraining. Based on\nthis, we propose Robust Linear Initialization (RoLI) for adversarial\nfinetuning, which initializes the linear head with the weights obtained by\nadversarial linear probing to maximally inherit the robustness from\npretraining. 
Across five different image classification datasets, we\ndemonstrate the effectiveness of RoLI and achieve new state-of-the-art results.\nOur code is available at \\url{https://github.com/DongXzz/RoLI}.", "keywords": [], "authors_list": ["Andong Hua", "Jindong Gu", "Zhiyu Xue", "Nicholas Carlini", "Eric Wong", "Yao Qin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1c4"}, "filepath": "data/2402.10636.png", "tags": [], "_media_type": "image", "_rand": 0.9996373685857924, "arXiv_link": "https://arxiv.org/abs/2402.10636", "other_link": "", "title": "PEGASUS: Personalized Generative 3D Avatars with Composable Attributes", "abstract": "We present PEGASUS, a method for constructing a personalized generative 3D\nface avatar from monocular video sources. Our generative 3D avatar enables\ndisentangled controls to selectively alter the facial attributes (e.g., hair or\nnose) while preserving the identity. Our approach consists of two stages:\nsynthetic database generation and constructing a personalized generative\navatar. We generate a synthetic video collection of the target identity with\nvarying facial attributes, where the videos are synthesized by borrowing the\nattributes from monocular videos of diverse identities. Then, we build a\nperson-specific generative 3D avatar that can modify its attributes\ncontinuously while preserving its identity. Through extensive experiments, we\ndemonstrate that our method of generating a synthetic database and creating a\n3D generative avatar is the most effective in preserving identity while\nachieving high realism. Subsequently, we introduce a zero-shot approach to\nachieve the same goal of generative modeling more efficiently by leveraging a\npreviously constructed personalized generative model.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Hyunsoo Cha", "Byungjun Kim", "Hanbyul Joo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1c5"}, "filepath": "data/2311.13250v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994462139939907, "arXiv_link": "https://arxiv.org/abs/2311.13250v2", "other_link": "", "title": "FedHCA$^2$: Towards Hetero-Client Federated Multi-Task Learning", "abstract": "Federated Learning (FL) enables joint training across distributed clients\nusing their local data privately. Federated Multi-Task Learning (FMTL) builds\non FL to handle multiple tasks, assuming model congruity that identical model\narchitecture is deployed in each client. To relax this assumption and thus\nextend real-world applicability, we introduce a novel problem setting,\nHetero-Client Federated Multi-Task Learning (HC-FMTL), to accommodate diverse\ntask setups. The main challenge of HC-FMTL is the model incongruity issue that\ninvalidates conventional aggregation methods. It also escalates the\ndifficulties in accurate model aggregation to deal with data and task\nheterogeneity inherent in FMTL. To address these challenges, we propose the\nFedHCA$^2$ framework, which allows for federated training of personalized\nmodels by modeling relationships among heterogeneous clients. 
Drawing on our\ntheoretical insights into the difference between multi-task and federated\noptimization, we propose the Hyper Conflict-Averse Aggregation scheme to\nmitigate conflicts during encoder updates. Additionally, inspired by task\ninteraction in MTL, the Hyper Cross Attention Aggregation scheme uses\nlayer-wise cross attention to enhance decoder interactions while alleviating\nmodel incongruity. Moreover, we employ learnable Hyper Aggregation Weights for\neach client to customize personalized parameter updates. Extensive experiments\ndemonstrate the superior performance of FedHCA$^2$ in various HC-FMTL scenarios\ncompared to representative methods. Our code will be made publicly available.", "keywords": [], "authors_list": ["Yuxiang Lu", "Suizhi Huang", "Yuwen Yang", "Shalayiding Sirejiding", "Yue Ding", "Hongtao Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1c6"}, "filepath": "data/2402.17614.png", "tags": [], "_media_type": "image", "_rand": 0.999554140769872, "arXiv_link": "https://arxiv.org/abs/2402.17614", "other_link": "", "title": "Adapt Before Comparison: A New Perspective on Cross-Domain Few-Shot Segmentation", "abstract": "Few-shot segmentation performance declines substantially when facing images\nfrom a domain different than the training domain, effectively limiting\nreal-world use cases. To alleviate this, recently cross-domain few-shot\nsegmentation (CD-FSS) has emerged. Works that address this task mainly\nattempted to learn segmentation on a source domain in a manner that generalizes\nacross domains. Surprisingly, we can outperform these approaches while\neliminating the training stage and removing their main segmentation network. We\nshow test-time task-adaption is the key for successful CD-FSS instead.\nTask-adaption is achieved by appending small networks to the feature pyramid of\na conventionally classification-pretrained backbone. To avoid overfitting to\nthe few labeled samples in supervised fine-tuning, consistency across augmented\nviews of input images serves as guidance while learning the parameters of the\nattached layers. Despite our self-restriction not to use any images other than\nthe few labeled samples at test time, we achieve new state-of-the-art\nperformance in CD-FSS, evidencing the need to rethink approaches for the task.", "keywords": [], "authors_list": ["Jonas Herzog"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1c7"}, "filepath": "data/2404.16306.png", "tags": [], "_media_type": "image", "_rand": 0.9996895616815528, "arXiv_link": "https://arxiv.org/abs/2404.16306", "other_link": "", "title": "TI2V-Zero: Zero-Shot Image Conditioning for Text-to-Video Diffusion Models", "abstract": "Text-conditioned image-to-video generation (TI2V) aims to synthesize a\nrealistic video starting from a given image (e.g., a woman's photo) and a text\ndescription (e.g., \"a woman is drinking water.\"). Existing TI2V frameworks\noften require costly training on video-text datasets and specific model designs\nfor text and image conditioning. 
In this paper, we propose TI2V-Zero, a\nzero-shot, tuning-free method that empowers a pretrained text-to-video (T2V)\ndiffusion model to be conditioned on a provided image, enabling TI2V generation\nwithout any optimization, fine-tuning, or introducing external modules. Our\napproach leverages a pretrained T2V diffusion foundation model as the\ngenerative prior. To guide video generation with the additional image input, we\npropose a \"repeat-and-slide\" strategy that modulates the reverse denoising\nprocess, allowing the frozen diffusion model to synthesize a video\nframe-by-frame starting from the provided image. To ensure temporal continuity,\nwe employ a DDPM inversion strategy to initialize Gaussian noise for each newly\nsynthesized frame and a resampling technique to help preserve visual details.\nWe conduct comprehensive experiments on both domain-specific and open-domain\ndatasets, where TI2V-Zero consistently outperforms a recent open-domain TI2V\nmodel. Furthermore, we show that TI2V-Zero can seamlessly extend to other tasks\nsuch as video infilling and prediction when provided with more images. Its\nautoregressive design also supports long video generation.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Haomiao Ni", "Bernhard Egger", "Suhas Lohit", "Anoop Cherian", "Ye Wang", "Toshiaki Koike-Akino", "Sharon X. Huang", "Tim Marks"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1c8"}, "filepath": "data/2404.01743.png", "tags": [], "_media_type": "image", "_rand": 0.9999197637294686, "arXiv_link": "https://arxiv.org/abs/2404.01743", "other_link": "", "title": "Atom-Level Optical Chemical Structure Recognition with Limited Supervision", "abstract": "Identifying the chemical structure from a graphical representation, or image,\nof a molecule is a challenging pattern recognition task that would greatly\nbenefit drug development. Yet, existing methods for chemical structure\nrecognition do not typically generalize well, and show diminished effectiveness\nwhen confronted with domains where data is sparse, or costly to generate, such\nas hand-drawn molecule images. To address this limitation, we propose a new\nchemical structure recognition tool that delivers state-of-the-art performance\nand can adapt to new domains with a limited number of data samples and\nsupervision. Unlike previous approaches, our method provides atom-level\nlocalization, and can therefore segment the image into the different atoms and\nbonds. Our model is the first model to perform OCSR with atom-level entity\ndetection with only SMILES supervision. 
Through rigorous and extensive\nbenchmarking, we demonstrate the preeminence of our chemical structure\nrecognition approach in terms of data efficiency, accuracy, and atom-level\nentity prediction.", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Martijn Oldenhof", "Edward De Brouwer", "Adam Arany", "Yves Moreau"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1c9"}, "filepath": "data/2307.07607.png", "tags": [], "_media_type": "image", "_rand": 0.9993473557969808, "arXiv_link": "https://arxiv.org/abs/2307.07607", "other_link": "", "title": "SubT-MRS Datasets: Pushing SLAM Towards All-weather Environments", "abstract": "Simultaneous localization and mapping (SLAM) is a fundamental task for\nnumerous applications such as autonomous navigation and exploration. Although\nmany SLAM datasets have been released, current SLAM solutions still struggle to\nachieve sustained and resilient performance. One major issue is the absence of\nhigh-quality datasets including diverse all-weather conditions and a reliable\nmetric for assessing robustness. This limitation significantly restricts the\nscalability and generalizability of SLAM technologies, impacting their\ndevelopment, validation, and deployment. To address this problem, we present\nSubT-MRS, an extremely challenging real-world dataset designed to push SLAM\ntowards all-weather environments to pursue the most robust SLAM performance. It\ncontains multi-degraded environments including over 30 diverse scenes such as\nstructureless corridors, varying lighting conditions, and perceptual obscurants\nlike smoke and dust; multimodal sensors such as LiDAR, fisheye camera, IMU, and\nthermal camera; and multiple locomotions like aerial, legged, and wheeled\nrobots. We develop accuracy and robustness evaluation tracks for SLAM and\nintroduce novel robustness metrics. Comprehensive studies are performed,\nrevealing new observations, challenges, and opportunities for future research.", "keywords": [], "authors_list": ["Shibo Zhao", "Yuanjun Gao", "Tianhao Wu", "Damanpreet Singh", "Rushan Jiang", "Haoxiang Sun", "Mansi Sarawata", "Warren Whittaker", "Ian Higgins", "Shaoshu Su", "Yi Du", "Can Xu", "John Keller", "Jay Karhade", "Lucas Nogueira", "Sourojit Saha", "Yuheng Qiu", "Ji Zhang", "Wenshan Wang", "Chen Wang", "Sebastian Scherer"], "category_name": "Robotics", "all_categories": ["Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1ca"}, "filepath": "data/2306.17560.png", "tags": [], "_media_type": "image", "_rand": 0.9995924339509072, "arXiv_link": "https://arxiv.org/abs/2306.17560", "other_link": "", "title": "Class Incremental Learning with Multi-Teacher Distillation", "abstract": "Class-incremental learning aims to learn new classes in an incremental\nfashion without forgetting the previously learned ones. Several research works\nhave shown how additional data can be used by incremental models to help\nmitigate catastrophic forgetting. In this work, following the recent\nbreakthrough in text-to-image generative models and their wide distribution, we\npropose the use of a pretrained Stable Diffusion model as a source of\nadditional data for class-incremental learning. 
Compared to competitive methods\nthat rely on external, often unlabeled, datasets of real images, our approach\ncan generate synthetic samples belonging to the same classes as the previously\nencountered images. This allows us to use those additional data samples not\nonly in the distillation loss but also for replay in the classification loss.\nExperiments on the competitive benchmarks CIFAR100, ImageNet-Subset, and\nImageNet demonstrate how this new approach can be used to further improve the\nperformance of state-of-the-art methods for class-incremental learning on large-scale\ndatasets.", "keywords": [], "authors_list": ["Haitao Wen", "Lili Pan", "Yu Dai", "Heqian Qiu", "Lanxiao Wang", "Qingbo Wu", "Hongliang Li"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1cb"}, "filepath": "data/2306.17843.png", "tags": [], "_media_type": "image", "_rand": 0.9991903862331946, "arXiv_link": "https://arxiv.org/abs/2306.17843", "other_link": "https://github.com/guochengqian/Magic123.", "title": "MPOD123: One Image to 3D Content Generation Using Mask-enhanced Progressive Outline-to-Detail Optimization", "abstract": "We present Magic123, a two-stage coarse-to-fine approach for high-quality,\ntextured 3D mesh generation from a single unposed image in the wild using\nboth 2D and 3D priors. In the first stage, we optimize a neural radiance field\nto produce a coarse geometry. In the second stage, we adopt a memory-efficient\ndifferentiable mesh representation to yield a high-resolution mesh with a\nvisually appealing texture. In both stages, the 3D content is learned through\nreference view supervision and novel views guided by a combination of 2D and 3D\ndiffusion priors. We introduce a single trade-off parameter between the 2D and\n3D priors to control exploration (more imaginative) and exploitation (more\nprecise) of the generated geometry. Additionally, we employ textual inversion\nand monocular depth regularization to encourage consistent appearances across\nviews and to prevent degenerate solutions, respectively. Magic123 demonstrates\na significant improvement over previous image-to-3D techniques, as validated\nthrough extensive experiments on synthetic benchmarks and diverse real-world\nimages. Our code, models, and generated 3D assets are available at\nhttps://github.com/guochengqian/Magic123.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Jimin Xu", "Tianbao Wang", "Tao Jin", "Shengyu Zhang", "Dongjie Fu", "Zhe Wang", "Jiangjing Lyu", "Chengfei Lv", "Chaoyue Niu", "Zhou Yu", "Zhou Zhao", "Fei Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1cc"}, "filepath": "data/2404.18150.png", "tags": [], "_media_type": "image", "_rand": 0.9995029426588763, "arXiv_link": "https://arxiv.org/abs/2404.18150", "other_link": "", "title": "RadSimReal: Bridging the Gap Between Synthetic and Real Data in Radar Object Detection With Simulation", "abstract": "Object detection in radar imagery with neural networks shows great potential\nfor improving autonomous driving. 
However, obtaining annotated datasets from\nreal radar images, crucial for training these networks, is challenging,\nespecially in scenarios with long-range detection and adverse weather and\nlighting conditions where radar performance excels. To address this challenge,\nwe present RadSimReal, an innovative physical radar simulation capable of\ngenerating synthetic radar images with accompanying annotations for various\nradar types and environmental conditions, all without the need for real data\ncollection. Remarkably, our findings demonstrate that training object detection\nmodels on RadSimReal data and subsequently evaluating them on real-world data\nproduces performance levels comparable to models trained and tested on real data\nfrom the same dataset, and even achieves better performance when testing across\ndifferent real datasets. RadSimReal offers advantages over other physical radar\nsimulations in that it does not require knowledge of the radar design details,\nwhich are often not disclosed by radar suppliers, and it has a faster run-time. This\ninnovative tool has the potential to advance the development of computer vision\nalgorithms for radar-based autonomous driving applications.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Oded Bialer", "Yuval Haitman"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1cd"}, "filepath": "data/2401.17879.png", "tags": [], "_media_type": "image", "_rand": 0.9998802293240486, "arXiv_link": "https://arxiv.org/abs/2401.17879", "other_link": "https://github.com/jonasricker/aeroblade", "title": "AEROBLADE: Training-Free Detection of Latent Diffusion Images Using Autoencoder Reconstruction Error", "abstract": "With recent text-to-image models, anyone can generate deceptively realistic\nimages with arbitrary contents, fueling the growing threat of visual\ndisinformation. A key enabler for generating high-resolution images with low\ncomputational cost has been the development of latent diffusion models (LDMs).\nIn contrast to conventional diffusion models, LDMs perform the denoising\nprocess in the low-dimensional latent space of a pre-trained autoencoder (AE)\ninstead of the high-dimensional image space. Despite their relevance, the\nforensic analysis of LDMs is still in its infancy. In this work we propose\nAEROBLADE, a novel detection method which exploits an inherent component of\nLDMs: the AE used to transform images between image and latent space. We find\nthat generated images can be more accurately reconstructed by the AE than real\nimages, allowing for a simple detection approach based on the reconstruction\nerror. Most importantly, our method is easy to implement and does not require\nany training, yet nearly matches the performance of detectors that rely on\nextensive training. We empirically demonstrate that AEROBLADE is effective\nagainst state-of-the-art LDMs, including Stable Diffusion and Midjourney.\nBeyond detection, our approach allows for the qualitative analysis of images,\nwhich can be leveraged for identifying inpainted regions. 
We release our code\nand data at https://github.com/jonasricker/aeroblade .", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Jonas Ricker", "Denis Lukovnikov", "Asja Fischer"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1ce"}, "filepath": "data/2403.18036.png", "tags": [], "_media_type": "image", "_rand": 0.9999477798543406, "arXiv_link": "https://arxiv.org/abs/2403.18036", "other_link": "", "title": "Move as You Say, Interact as You Can: Language-guided Human Motion Generation with Scene Affordance", "abstract": "Despite significant advancements in text-to-motion synthesis, generating\nlanguage-guided human motion within 3D environments poses substantial\nchallenges. These challenges stem primarily from (i) the absence of powerful\ngenerative models capable of jointly modeling natural language, 3D scenes, and\nhuman motion, and (ii) the generative models' intensive data requirements\ncontrasted with the scarcity of comprehensive, high-quality,\nlanguage-scene-motion datasets. To tackle these issues, we introduce a novel\ntwo-stage framework that employs scene affordance as an intermediate\nrepresentation, effectively linking 3D scene grounding and conditional motion\ngeneration. Our framework comprises an Affordance Diffusion Model (ADM) for\npredicting explicit affordance map and an Affordance-to-Motion Diffusion Model\n(AMDM) for generating plausible human motions. By leveraging scene affordance\nmaps, our method overcomes the difficulty in generating human motion under\nmultimodal condition signals, especially when training with limited data\nlacking extensive language-scene-motion pairs. Our extensive experiments\ndemonstrate that our approach consistently outperforms all baselines on\nestablished benchmarks, including HumanML3D and HUMANISE. Additionally, we\nvalidate our model's exceptional generalization capabilities on a specially\ncurated evaluation set featuring previously unseen descriptions and scenes.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Zan Wang", "Yixin Chen", "Baoxiong Jia", "Puhao Li", "Jinlu Zhang", "Jingze Zhang", "Tengyu Liu", "Yixin Zhu", "Wei Liang", "Siyuan Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1cf"}, "filepath": "data/2311.12886.png", "tags": [], "_media_type": "image", "_rand": 0.9995188718212554, "arXiv_link": "https://arxiv.org/abs/2311.12886", "other_link": "https://animationai.github.io/AnimateAnything.", "title": "Animating General Image with Large Visual Motion Model", "abstract": "Image animation is a key task in computer vision which aims to generate\ndynamic visual content from static image. Recent image animation methods employ\nneural based rendering technique to generate realistic animations. Despite\nthese advancements, achieving fine-grained and controllable image animation\nguided by text remains challenging, particularly for open-domain images\ncaptured in diverse real environments. In this paper, we introduce an open\ndomain image animation method that leverages the motion prior of video\ndiffusion model. 
Our approach introduces targeted motion area guidance and\nmotion strength guidance, enabling precise control of the movable area and its\nmotion speed. This results in enhanced alignment between the animated visual\nelements and the prompting text, thereby facilitating a fine-grained and\ninteractive animation generation process for intricate motion sequences. We\nvalidate the effectiveness of our method through rigorous experiments on an\nopen-domain dataset, with the results showcasing its superior performance.\nThe project page can be found at https://animationai.github.io/AnimateAnything.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Dengsheng Chen", "Xiaoming Wei", "Xiaolin Wei"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1d0"}, "filepath": "data/2403.10103.png", "tags": [], "_media_type": "image", "_rand": 0.9992916875944047, "arXiv_link": "https://arxiv.org/abs/2403.10103", "other_link": "", "title": "DyBluRF: Dynamic Neural Radiance Fields from Blurry Monocular Video", "abstract": "Recent advancements in dynamic neural radiance field methods have yielded\nremarkable outcomes. However, these approaches rely on the assumption of sharp\ninput images. When faced with motion blur, existing dynamic NeRF methods often\nstruggle to generate high-quality novel views. In this paper, we propose\nDyBluRF, a dynamic radiance field approach that synthesizes sharp novel views\nfrom a monocular video affected by motion blur. To account for motion blur in\ninput images, we simultaneously capture the camera trajectory and object\nDiscrete Cosine Transform (DCT) trajectories within the scene. Additionally, we\nemploy a global cross-time rendering approach to ensure consistent temporal\ncoherence across the entire scene. We curate a dataset comprising diverse\ndynamic scenes that are specifically tailored for our task. Experimental\nresults on our dataset demonstrate that our method outperforms existing\napproaches in generating sharp novel views from motion-blurred inputs while\nmaintaining spatial-temporal consistency of the scene.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Huiqiang Sun", "Xingyi Li", "Liao Shen", "Xinyi Ye", "Ke Xian", "Zhiguo Cao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1d1"}, "filepath": "data/2403.07939.png", "tags": [], "_media_type": "image", "_rand": 0.9999717938723152, "arXiv_link": "https://arxiv.org/abs/2403.07939", "other_link": "", "title": "Dynamic Policy-Driven Adaptive Multi-Instance Learning for Whole Slide Image Classification", "abstract": "Multi-Instance Learning (MIL) has shown impressive performance for\nhistopathology whole slide image (WSI) analysis using bags or pseudo-bags. 
It\ninvolves instance sampling, feature representation, and decision-making.\nHowever, existing MIL-based technologies suffer from at least one of\nthe following problems: 1) requiring high storage and intensive pre-processing\nfor numerous instances (sampling); 2) potential over-fitting with limited\nknowledge to predict bag labels (feature representation); 3) pseudo-bag counts\nand prior biases affect model robustness and generalizability\n(decision-making). Inspired by clinical diagnostics, we note that leveraging past sampling\ninstances can facilitate the final WSI analysis, but this is barely explored in\nprior technologies. To break free from these limitations, we integrate dynamic\ninstance sampling and reinforcement learning into a unified framework to\nimprove the instance selection and feature aggregation, forming a novel Dynamic\nPolicy Instance Selection (DPIS) scheme for better and more credible\ndecision-making. Specifically, the measurement of feature distance and a reward\nfunction are employed to boost continuous instance sampling. To alleviate the\nover-fitting, we explore the latent global relations among instances for more\nrobust and discriminative feature representation while establishing reward and\npunishment mechanisms to correct biases in pseudo-bags using contrastive\nlearning. These strategies form the final Dynamic Policy-Driven Adaptive\nMulti-Instance Learning (PAMIL) method for WSI tasks. Extensive experiments\nreveal that our PAMIL method outperforms the state-of-the-art by 3.8\\% on\nCAMELYON16 and 4.4\\% on TCGA lung cancer datasets.", "keywords": ["Efficient and scalable vision", "Medical imaging and biological vision"], "authors_list": ["Tingting Zheng", "Kui Jiang", "Hongxun Yao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1d2"}, "filepath": "data/2404.00676.png", "tags": [], "_media_type": "image", "_rand": 0.9992142570769261, "arXiv_link": "https://arxiv.org/abs/2404.00676", "other_link": "", "title": "OmniLocalRF: Omnidirectional Local Radiance Fields from Dynamic Videos", "abstract": "Omnidirectional cameras are extensively used in various applications to\nprovide a wide field of vision. However, they face a challenge in synthesizing\nnovel views due to the inevitable presence of dynamic objects, including the\nphotographer, in their wide field of view. In this paper, we introduce a new\napproach called Omnidirectional Local Radiance Fields (OmniLocalRF) that can\nrender static-only scene views, removing and inpainting dynamic objects\nsimultaneously. Our approach combines the principles of local radiance fields\nwith the bidirectional optimization of omnidirectional rays. Our input is an\nomnidirectional video, and we evaluate the mutual observations of the entire\nangle between the previous and current frames. To reduce ghosting artifacts of\ndynamic objects and inpaint occlusions, we devise a multi-resolution motion\nmask prediction module. Unlike existing methods that primarily separate dynamic\ncomponents through the temporal domain, our method uses multi-resolution neural\nfeature planes for precise segmentation, which is more suitable for long\n360-degree videos. Our experiments validate that OmniLocalRF outperforms\nexisting methods in both qualitative and quantitative metrics, especially in\nscenarios with complex real-world scenes. 
In particular, our approach\neliminates the need for manual interaction, such as drawing motion masks by\nhand and additional pose estimation, making it a highly effective and efficient\nsolution.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Dongyoung Choi", "Hyeonjoong Jang", "Min H. Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1d3"}, "filepath": "data/2311.17982.png", "tags": [], "_media_type": "image", "_rand": 0.9995603341822761, "arXiv_link": "https://arxiv.org/abs/2311.17982", "other_link": "", "title": "VBench: Comprehensive Benchmark Suite for Video Generative Models", "abstract": "Video generation has witnessed significant advancements, yet evaluating these\nmodels remains a challenge. A comprehensive evaluation benchmark for video\ngeneration is indispensable for two reasons: 1) Existing metrics do not fully\nalign with human perceptions; 2) An ideal evaluation system should provide\ninsights to inform future developments of video generation. To this end, we\npresent VBench, a comprehensive benchmark suite that dissects \"video generation\nquality\" into specific, hierarchical, and disentangled dimensions, each with\ntailored prompts and evaluation methods. VBench has three appealing properties:\n1) Comprehensive Dimensions: VBench comprises 16 dimensions in video generation\n(e.g., subject identity inconsistency, motion smoothness, temporal flickering,\nand spatial relationship, etc). The evaluation metrics with fine-grained levels\nreveal individual models' strengths and weaknesses. 2) Human Alignment: We also\nprovide a dataset of human preference annotations to validate our benchmarks'\nalignment with human perception, for each evaluation dimension respectively. 3)\nValuable Insights: We look into current models' ability across various\nevaluation dimensions, and various content types. We also investigate the gaps\nbetween video and image generation models. We will open-source VBench,\nincluding all prompts, evaluation methods, generated videos, and human\npreference annotations, and also include more video generation models in VBench\nto drive forward the field of video generation.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Ziqi Huang", "Yinan He", "Jiashuo Yu", "Fan Zhang", "Chenyang Si", "Yuming Jiang", "Yuanhan Zhang", "Tianxing Wu", "Jin Qingyang", "Nattapol Chanpaisit", "Yaohui Wang", "Xinyuan Chen", "Limin Wang", "Dahua Lin", "Yu Qiao", "Ziwei Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1d4"}, "filepath": "data/2404.00777.png", "tags": [], "_media_type": "image", "_rand": 0.9990291262073957, "arXiv_link": "https://arxiv.org/abs/2404.00777", "other_link": "", "title": "Privacy-preserving Optics for Enhancing Protection in Face De-identification", "abstract": "The modern surge in camera usage alongside widespread computer vision\ntechnology applications poses significant privacy and security concerns.\nCurrent artificial intelligence (AI) technologies aid in recognizing relevant\nevents and assisting in daily tasks in homes, offices, hospitals, etc. 
The need\nto access or process personal information for these purposes raises privacy\nconcerns. While software-level solutions like face de-identification provide a\ngood privacy/utility trade-off, they present vulnerabilities to sniffing\nattacks. In this paper, we propose a hardware-level face de-identification\nmethod to solve this vulnerability. Specifically, our approach first learns an\noptical encoder along with a regression model to obtain a face heatmap while\nhiding the face identity from the source image. We also propose an\nanonymization framework that generates a new face using the privacy-preserving\nimage, face heatmap, and a reference face image from a public dataset as input.\nWe validate our approach with extensive simulations and hardware experiments.", "keywords": ["Computational imaging and physics-based vision"], "authors_list": ["Jhon Lopez", "Carlos Hinojosa", "Henry Arguello", "Bernard Ghanem"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Cryptography and Security", "Machine Learning", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1d5"}, "filepath": "data/2311.16464.png", "tags": [], "_media_type": "image", "_rand": 0.9993692410139903, "arXiv_link": "https://arxiv.org/abs/2311.16464", "other_link": "", "title": "Bridging the Gap: A Unified Video Comprehension Framework for Moment Retrieval and Highlight Detection", "abstract": "Video Moment Retrieval (MR) and Highlight Detection (HD) have attracted\nsignificant attention due to the growing demand for video analysis. Recent\napproaches treat MR and HD as similar video grounding problems and address them\ntogether with transformer-based architecture. However, we observe that the\nemphasis of MR and HD differs, with one necessitating the perception of local\nrelationships and the other prioritizing the understanding of global contexts.\nConsequently, the lack of task-specific design will inevitably lead to\nlimitations in associating the intrinsic specialty of two tasks. To tackle the\nissue, we propose a Unified Video COMprehension framework (UVCOM) to bridge the\ngap and jointly solve MR and HD effectively. By performing progressive\nintegration on intra and inter-modality across multi-granularity, UVCOM\nachieves the comprehensive understanding in processing a video. 
Moreover, we\npresent multi-aspect contrastive learning to consolidate the local relation\nmodeling and global knowledge accumulation via well aligned multi-modal space.\nExtensive experiments on QVHighlights, Charades-STA, TACoS , YouTube Highlights\nand TVSum datasets demonstrate the effectiveness and rationality of UVCOM which\noutperforms the state-of-the-art methods by a remarkable margin.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Yicheng Xiao", "Zhuoyan Luo", "Yong Liu", "Yue Ma", "Hengwei Bian", "Yatai Ji", "Yujiu Yang", "Xiu Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1d6"}, "filepath": "data/2404.05016.png", "tags": [], "_media_type": "image", "_rand": 0.9997741369214722, "arXiv_link": "https://arxiv.org/abs/2404.05016", "other_link": "", "title": "Hyperbolic Learning with Synthetic Captions for Open-World Detection", "abstract": "Open-world detection poses significant challenges, as it requires the\ndetection of any object using either object class labels or free-form texts.\nExisting related works often use large-scale manual annotated caption datasets\nfor training, which are extremely expensive to collect. Instead, we propose to\ntransfer knowledge from vision-language models (VLMs) to enrich the\nopen-vocabulary descriptions automatically. Specifically, we bootstrap dense\nsynthetic captions using pre-trained VLMs to provide rich descriptions on\ndifferent regions in images, and incorporate these captions to train a novel\ndetector that generalizes to novel concepts. To mitigate the noise caused by\nhallucination in synthetic captions, we also propose a novel hyperbolic\nvision-language learning approach to impose a hierarchy between visual and\ncaption embeddings. We call our detector ``HyperLearner''. We conduct extensive\nexperiments on a wide variety of open-world detection benchmarks (COCO, LVIS,\nObject Detection in the Wild, RefCOCO) and our results show that our model\nconsistently outperforms existing state-of-the-art methods, such as GLIP,\nGLIPv2 and Grounding DINO, when using the same backbone.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Fanjie Kong", "Yanbei Chen", "Jiarui Cai", "Davide Modolo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1d7"}, "filepath": "data/2403.19225.png", "tags": [], "_media_type": "image", "_rand": 0.9991266211475738, "arXiv_link": "https://arxiv.org/abs/2403.19225", "other_link": "", "title": "Efficient and Effective Weakly-Supervised Action Segmentation via Action-Transition-Aware Boundary Alignment", "abstract": "Weakly-supervised action segmentation is a task of learning to partition a\nlong video into several action segments, where training videos are only\naccompanied by transcripts (ordered list of actions). Most of existing methods\nneed to infer pseudo segmentation for training by serial alignment between all\nframes and the transcript, which is time-consuming and hard to be parallelized\nwhile training. 
In this work, we aim to escape from this inefficient alignment\nwith massive but redundant frames, and instead to directly localize a few\naction transitions for pseudo segmentation generation, where a transition\nrefers to the change from an action segment to its next adjacent one in the\ntranscript. As the true transitions are submerged in noisy boundaries due to\nintra-segment visual variation, we propose a novel Action-Transition-Aware\nBoundary Alignment (ATBA) framework to efficiently and effectively filter out\nnoisy boundaries and detect transitions. In addition, to boost the semantic\nlearning in the case that noise is inevitably present in the pseudo\nsegmentation, we also introduce video-level losses to utilize the trusted\nvideo-level supervision. Extensive experiments show the effectiveness of our\napproach on both performance and training speed.", "keywords": [], "authors_list": ["Angchi Xu", "Wei-Shi Zheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1d8"}, "filepath": "data/2405.11481.png", "tags": [], "_media_type": "image", "_rand": 0.9990053509657304, "arXiv_link": "https://arxiv.org/abs/2405.11481", "other_link": "", "title": "Physics-aware Hand-object Interaction Denoising", "abstract": "The credibility and practicality of a reconstructed hand-object interaction\nsequence depend largely on its physical plausibility. However, due to high\nocclusions during hand-object interaction, physical plausibility remains a\nchallenging criterion for purely vision-based tracking methods. To address this\nissue and enhance the results of existing hand trackers, this paper proposes a\nnovel physically-aware hand motion de-noising method. Specifically, we\nintroduce two learned loss terms that explicitly capture two crucial aspects of\nphysical plausibility: grasp credibility and manipulation feasibility. These\nterms are used to train a physically-aware de-noising network. Qualitative and\nquantitative experiments demonstrate that our approach significantly improves\nboth fine-grained physical plausibility and overall pose accuracy, surpassing\ncurrent state-of-the-art de-noising methods.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Haowen Luo", "Yunze Liu", "Li Yi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1d9"}, "filepath": "data/2404.13103.png", "tags": [], "_media_type": "image", "_rand": 0.9990444838270658, "arXiv_link": "https://arxiv.org/abs/2404.13103", "other_link": "", "title": "ToNNO: Tomographic Reconstruction of a Neural Network\u2019s Output for Weakly Supervised Segmentation of 3D Medical Images", "abstract": "Annotating lots of 3D medical images for training segmentation models is\ntime-consuming. The goal of weakly supervised semantic segmentation is to train\nsegmentation models without using any ground truth segmentation masks. Our work\naddresses the case where only image-level categorical labels, indicating the\npresence or absence of a particular region of interest (such as tumours or\nlesions), are available. Most existing methods rely on class activation mapping\n(CAM). We propose a novel approach, ToNNO, which is based on the Tomographic\nreconstruction of a Neural Network's Output. 
Our technique extracts stacks of\nslices with different angles from the input 3D volume, feeds these slices to a\n2D encoder, and applies the inverse Radon transform in order to reconstruct a\n3D heatmap of the encoder's predictions. This generic method allows to perform\ndense prediction tasks on 3D volumes using any 2D image encoder. We apply it to\nweakly supervised medical image segmentation by training the 2D encoder to\noutput high values for slices containing the regions of interest. We test it on\nfour large scale medical image datasets and outperform 2D CAM methods. We then\nextend ToNNO by combining tomographic reconstruction with CAM methods,\nproposing Averaged CAM and Tomographic CAM, which obtain even better results.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Marius Schmidt-Mengin", "Alexis Benichoux", "Shibeshih Belachew", "Nikos Komodakis", "Nikos Paragios"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1da"}, "filepath": "data/2404.18962.png", "tags": [], "_media_type": "image", "_rand": 0.9998022845829504, "arXiv_link": "https://arxiv.org/abs/2404.18962", "other_link": "", "title": "An Aggregation-Free Federated Learning for Tackling Data Heterogeneity", "abstract": "The performance of Federated Learning (FL) hinges on the effectiveness of\nutilizing knowledge from distributed datasets. Traditional FL methods adopt an\naggregate-then-adapt framework, where clients update local models based on a\nglobal model aggregated by the server from the previous training round. This\nprocess can cause client drift, especially with significant cross-client data\nheterogeneity, impacting model performance and convergence of the FL algorithm.\nTo address these challenges, we introduce FedAF, a novel aggregation-free FL\nalgorithm. In this framework, clients collaboratively learn condensed data by\nleveraging peer knowledge, the server subsequently trains the global model\nusing the condensed data and soft labels received from the clients. FedAF\ninherently avoids the issue of client drift, enhances the quality of condensed\ndata amid notable data heterogeneity, and improves the global model\nperformance. Extensive numerical studies on several popular benchmark datasets\nshow FedAF surpasses various state-of-the-art FL algorithms in handling\nlabel-skew and feature-skew data heterogeneity, leading to superior global\nmodel accuracy and faster convergence.", "keywords": [], "authors_list": ["Yuan Wang", "Huazhu Fu", "Renuga Kanagavelu", "Qingsong Wei", "Yong Liu", "Rick Goh"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1db"}, "filepath": "data/2403.02640.png", "tags": [], "_media_type": "image", "_rand": 0.9992825068844694, "arXiv_link": "https://arxiv.org/abs/2403.02640", "other_link": "", "title": "HoloVIC: Large-scale Dataset and Benchmark for Multi-Sensor Holographic Intersection and Vehicle-Infrastructure Cooperative", "abstract": "Vehicle-to-everything (V2X) is a popular topic in the field of Autonomous\nDriving in recent years. Vehicle-infrastructure cooperation (VIC) becomes one\nof the important research area. 
Due to the complexity of traffic conditions\nsuch as blind spots and occlusion, it greatly limits the perception\ncapabilities of single-view roadside sensing systems. To further enhance the\naccuracy of roadside perception and provide better information to the vehicle\nside, in this paper, we constructed holographic intersections with various\nlayouts to build a large-scale multi-sensor holographic vehicle-infrastructure\ncooperation dataset, called HoloVIC. Our dataset includes 3 different types of\nsensors (Camera, Lidar, Fisheye) and employs 4 sensor-layouts based on the\ndifferent intersections. Each intersection is equipped with 6-18 sensors to\ncapture synchronous data. While autonomous vehicles pass through these\nintersections for collecting VIC data. HoloVIC contains in total on 100k+\nsynchronous frames from different sensors. Additionally, we annotated 3D\nbounding boxes based on Camera, Fisheye, and Lidar. We also associate the IDs\nof the same objects across different devices and consecutive frames in\nsequence. Based on HoloVIC, we formulated four tasks to facilitate the\ndevelopment of related research. We also provide benchmarks for these tasks.", "keywords": ["Scene analysis and understanding"], "authors_list": ["CONG MA", "Qiao Lei", "Chengkai Zhu", "Kai Liu", "Zelong Kong", "Liqing", "Xueqi Zhou", "Yuheng KAN", "Wei Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1dc"}, "filepath": "data/2311.14405.png", "tags": [], "_media_type": "image", "_rand": 0.9992868390700995, "arXiv_link": "https://arxiv.org/abs/2311.14405", "other_link": "", "title": "OneFormer3D: One Transformer for Unified Point Cloud Segmentation", "abstract": "Semantic, instance, and panoptic segmentation of 3D point clouds have been\naddressed using task-specific models of distinct design. Thereby, the\nsimilarity of all segmentation tasks and the implicit relationship between them\nhave not been utilized effectively. This paper presents a unified, simple, and\neffective model addressing all these tasks jointly. The model, named\nOneFormer3D, performs instance and semantic segmentation consistently, using a\ngroup of learnable kernels, where each kernel is responsible for generating a\nmask for either an instance or a semantic category. These kernels are trained\nwith a transformer-based decoder with unified instance and semantic queries\npassed as an input. Such a design enables training a model end-to-end in a\nsingle run, so that it achieves top performance on all three segmentation tasks\nsimultaneously. Specifically, our OneFormer3D ranks 1st and sets a new\nstate-of-the-art (+2.1 mAP50) in the ScanNet test leaderboard. 
We also\ndemonstrate the state-of-the-art results in semantic, instance, and panoptic\nsegmentation of ScanNet (+21 PQ), ScanNet200 (+3.8 mAP50), and S3DIS (+0.8\nmIoU) datasets.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Maksim Kolodiazhnyi", "Anna Vorontsova", "Anton Konushin", "Danila Rukhovich"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1dd"}, "filepath": "data/2405.14873.png", "tags": [], "_media_type": "image", "_rand": 0.9993618916583502, "arXiv_link": "http://export.arxiv.org/abs/2405.14873", "other_link": "", "title": "Federated Online Adaptation for Deep Stereo", "abstract": "We introduce a novel approach for adapting deep stereo networks in a\ncollaborative manner. By building over principles of federated learning, we\ndevelop a distributed framework allowing for demanding the optimization process\nto a number of clients deployed in different environments. This makes it\npossible, for a deep stereo network running on resourced-constrained devices,\nto capitalize on the adaptation process carried out by other instances of the\nsame architecture, and thus improve its accuracy in challenging environments\neven when it cannot carry out adaptation on its own. Experimental results show\nhow federated adaptation performs equivalently to on-device adaptation, and\neven better when dealing with challenging environments.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Matteo Poggi", "Fabio Tosi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1de"}, "filepath": "data/2404.04458.png", "tags": [], "_media_type": "image", "_rand": 0.9999261817910221, "arXiv_link": "https://arxiv.org/abs/2404.04458", "other_link": "", "title": "JRDB-Social: A Multifaceted Robotic Dataset for Understanding of Context and Dynamics of Human Interactions Within Social Groups", "abstract": "Understanding human social behaviour is crucial in computer vision and\nrobotics. Micro-level observations like individual actions fall short,\nnecessitating a comprehensive approach that considers individual behaviour,\nintra-group dynamics, and social group levels for a thorough understanding. To\naddress dataset limitations, this paper introduces JRDB-Social, an extension of\nJRDB. Designed to fill gaps in human understanding across diverse indoor and\noutdoor social contexts, JRDB-Social provides annotations at three levels:\nindividual attributes, intra-group interactions, and social group context. This\ndataset aims to enhance our grasp of human social dynamics for robotic\napplications. 
Utilizing the recent cutting-edge multi-modal large language\nmodels, we evaluated our benchmark to explore their capacity to decipher social\nhuman behaviour.", "keywords": ["Scene analysis and understanding", "Biometrics and human analysis"], "authors_list": ["Simindokht Jahangard", "Zhixi Cai", "Shiki Wen", "Hamid Rezatofighi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1df"}, "filepath": "data/2402.02352.png", "tags": [], "_media_type": "image", "_rand": 0.999392375966859, "arXiv_link": "https://arxiv.org/abs/2402.02352", "other_link": "", "title": "Region-Based Representations Revisited", "abstract": "We investigate whether region-based representations are effective for\nrecognition. Regions were once a mainstay in recognition approaches, but pixel\nand patch-based features are now used almost exclusively. We show that recent\nclass-agnostic segmenters like SAM can be effectively combined with strong\nunsupervised representations like DINOv2 and used for a wide variety of tasks,\nincluding semantic segmentation, object-based image retrieval, and multi-image\nanalysis. Once the masks and features are extracted, these representations,\neven with linear decoders, enable competitive performance, making them well\nsuited to applications that require custom queries. The compactness of the\nrepresentation also makes it well-suited to video analysis and other problems\nrequiring inference across many images.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Michal Shlapentokh-Rothman", "Ansel Blume", "Yao Xiao", "Yuqun Wu", "Sethuraman T V", "Heyi Tao", "Jae Yong Lee", "Wilfredo Torres-Calderon", "Yu-Xiong Wang", "Derek Hoiem"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1e0"}, "filepath": "data/2404.04878.png", "tags": [], "_media_type": "image", "_rand": 0.9996753020990015, "arXiv_link": "https://arxiv.org/abs/2404.04878", "other_link": "", "title": "CycleINR: Cycle Implicit Neural Representation for Arbitrary-Scale Volumetric Super-Resolution of Medical Data", "abstract": "In the realm of medical 3D data, such as CT and MRI images, prevalent\nanisotropic resolution is characterized by high intra-slice but diminished\ninter-slice resolution. The lowered resolution between adjacent slices poses\nchallenges, hindering optimal viewing experiences and impeding the development\nof robust downstream analysis algorithms. Various volumetric super-resolution\nalgorithms aim to surmount these challenges, enhancing inter-slice resolution\nand overall 3D medical imaging quality. However, existing approaches confront\ninherent challenges: 1) often tailored to specific upsampling factors, lacking\nflexibility for diverse clinical scenarios; 2) newly generated slices\nfrequently suffer from over-smoothing, degrading fine details, and leading to\ninter-slice inconsistency. In response, this study presents CycleINR, a novel\nenhanced Implicit Neural Representation model for 3D medical data volumetric\nsuper-resolution. Leveraging the continuity of the learned implicit function,\nthe CycleINR model can achieve results with arbitrary up-sampling rates,\neliminating the need for separate training. 
Additionally, we enhance the grid\nsampling in CycleINR with a local attention mechanism and mitigate\nover-smoothing by integrating cycle-consistent loss. We introduce a new metric,\nSlice-wise Noise Level Inconsistency (SNLI), to quantitatively assess\ninter-slice noise level inconsistency. The effectiveness of our approach is\ndemonstrated through image quality evaluations on an in-house dataset and a\ndownstream task analysis on the Medical Segmentation Decathlon liver tumor\ndataset.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Low-level vision"], "authors_list": ["Wei Fang", "Yuxing Tang", "Heng Guo", "Mingze Yuan", "Tony C. W. MOK", "Ke Yan", "Jiawen Yao", "Xin Chen", "Zaiyi Liu", "Le Lu", "Ling Zhang", "Minfeng Xu"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1e1"}, "filepath": "data/2404.09833v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994193470669506, "arXiv_link": "https://arxiv.org/abs/2404.09833v1", "other_link": "", "title": "Video2Game: Real-time, Interactive, Realistic and Browser-Compatible Environment from a Single Video", "abstract": "Creating high-quality and interactive virtual environments, such as games and\nsimulators, often involves complex and costly manual modeling processes. In\nthis paper, we present Video2Game, a novel approach that automatically converts\nvideos of real-world scenes into realistic and interactive game environments.\nAt the heart of our system are three core components:(i) a neural radiance\nfields (NeRF) module that effectively captures the geometry and visual\nappearance of the scene; (ii) a mesh module that distills the knowledge from\nNeRF for faster rendering; and (iii) a physics module that models the\ninteractions and physical dynamics among the objects. By following the\ncarefully designed pipeline, one can construct an interactable and actionable\ndigital replica of the real world. We benchmark our system on both indoor and\nlarge-scale outdoor scenes. We show that we can not only produce\nhighly-realistic renderings in real-time, but also build interactive games on\ntop.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Image and video generation and manipulation", "Scene analysis and understanding"], "authors_list": ["Hongchi Xia", "Chih-Hao Lin", "Wei-Chiu Ma", "Shenlong Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1e2"}, "filepath": "data/2404.09263.png", "tags": [], "_media_type": "image", "_rand": 0.999073472800819, "arXiv_link": "https://arxiv.org/abs/2404.09263", "other_link": "https://github.com/EdenGabriel/TaskWeave.", "title": "Task-Driven Exploration: Decoupling and Inter-Task Feedback for Joint Moment Retrieval and Highlight Detection", "abstract": "Video moment retrieval and highlight detection are two highly valuable tasks\nin video understanding, but until recently they have been jointly studied.\nAlthough existing studies have made impressive advancement recently, they\npredominantly follow the data-driven bottom-up paradigm. Such paradigm\noverlooks task-specific and inter-task effects, resulting in poor model\nperformance. 
In this paper, we propose a novel task-driven top-down framework\nTaskWeave for joint moment retrieval and highlight detection. The framework\nintroduces a task-decoupled unit to capture task-specific and common\nrepresentations. To investigate the interplay between the two tasks, we propose\nan inter-task feedback mechanism, which transforms the results of one task as\nguiding masks to assist the other task. Different from existing methods, we\npresent a task-dependent joint loss function to optimize the model.\nComprehensive experiments and in-depth ablation studies on QVHighlights, TVSum,\nand Charades-STA datasets corroborate the effectiveness and flexibility of the\nproposed framework. Codes are available at\nhttps://github.com/EdenGabriel/TaskWeave.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Jin Yang", "Ping Wei", "Huan Li", "Ziyang Ren"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1e3"}, "filepath": "data/2311.16495.png", "tags": [], "_media_type": "image", "_rand": 0.9993111236742012, "arXiv_link": "https://arxiv.org/abs/2311.16495", "other_link": "", "title": "Egocentric Full Body Motion Capture with FisheyeViT and Diffusion-Based Motion Refinement", "abstract": "In this work, we explore egocentric whole-body motion capture using a single\nfisheye camera, which simultaneously estimates human body and hand motion. This\ntask presents significant challenges due to three factors: the lack of\nhigh-quality datasets, fisheye camera distortion, and human body\nself-occlusion. To address these challenges, we propose a novel approach that\nleverages FisheyeViT to extract fisheye image features, which are subsequently\nconverted into pixel-aligned 3D heatmap representations for 3D human body pose\nprediction. For hand tracking, we incorporate dedicated hand detection and hand\npose estimation networks for regressing 3D hand poses. Finally, we develop a\ndiffusion-based whole-body motion prior model to refine the estimated\nwhole-body motion while accounting for joint uncertainties. To train these\nnetworks, we collect a large synthetic dataset, EgoWholeBody, comprising\n840,000 high-quality egocentric images captured across a diverse range of\nwhole-body motion sequences. Quantitative and qualitative evaluations\ndemonstrate the effectiveness of our method in producing high-quality\nwhole-body motion estimates from a single egocentric camera.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Jian Wang", "Zhe Cao", "Diogo Luvizon", "Lingjie Liu", "Kripasindhu Sarkar", "Danhang Tang", "Thabo Beeler", "Christian Theobalt"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1e4"}, "filepath": "data/2405.06586.png", "tags": [], "_media_type": "image", "_rand": 0.9997515088032647, "arXiv_link": "https://arxiv.org/abs/2405.06586", "other_link": "", "title": "PSDPM: Prototype-based Secondary Discriminative Pixels Mining for Weakly Supervised Semantic Segmentation", "abstract": "Semantic segmentation is a core computer vision problem, but the high costs\nof data annotation have hindered its wide application. 
Weakly-Supervised\nSemantic Segmentation (WSSS) offers a cost-efficient workaround to extensive\nlabeling in comparison to fully-supervised methods by using partial or\nincomplete labels. Existing WSSS methods have difficulties in learning the\nboundaries of objects leading to poor segmentation results. We propose a novel\nand effective framework that addresses these issues by leveraging visual\nfoundation models inside the bounding box. Adopting a two-stage WSSS framework,\nour proposed network consists of a pseudo-label generation module and a\nsegmentation module. The first stage leverages Segment Anything Model (SAM) to\ngenerate high-quality pseudo-labels. To alleviate the problem of delineating\nprecise boundaries, we adopt SAM inside the bounding box with the help of\nanother pre-trained foundation model (e.g., Grounding-DINO). Furthermore, we\neliminate the necessity of using the supervision of image labels, by employing\nCLIP in classification. Then in the second stage, the generated high-quality\npseudo-labels are used to train an off-the-shelf segmenter that achieves the\nstate-of-the-art performance on PASCAL VOC 2012 and MS COCO 2014.", "keywords": [], "authors_list": ["Xinqiao Zhao", "Ziqian Yang", "Tianhong Dai", "Bingfeng Zhang", "Jimin Xiao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1e5"}, "filepath": "data/2312.03502.png", "tags": [], "_media_type": "image", "_rand": 0.9997876372070944, "arXiv_link": "https://arxiv.org/abs/2312.03502", "other_link": "", "title": "Improving the Generalization of Segmentation Foundation Model under Distribution Shift via Weakly Supervised Adaptation", "abstract": "The success of large language models has inspired the computer vision\ncommunity to explore image segmentation foundation model that is able to\nzero/few-shot generalize through prompt engineering. Segment-Anything(SAM),\namong others, is the state-of-the-art image segmentation foundation model\ndemonstrating strong zero/few-shot generalization. Despite the success, recent\nstudies reveal the weakness of SAM under strong distribution shift. In\nparticular, SAM performs awkwardly on corrupted natural images, camouflaged\nimages, medical images, etc. Motivated by the observations, we aim to develop a\nself-training based strategy to adapt SAM to target distribution. Given the\nunique challenges of large source dataset, high computation cost and incorrect\npseudo label, we propose a weakly supervised self-training architecture with\nanchor regularization and low-rank finetuning to improve the robustness and\ncomputation efficiency of adaptation. We validate the effectiveness on 5 types\nof downstream segmentation tasks including natural clean/corrupted images,\nmedical images, camouflaged images and robotic images. 
Our proposed method is\ntask-agnostic in nature and outperforms pre-trained SAM and state-of-the-art\ndomain adaptation methods on almost all downstream tasks with the same testing\nprompt inputs.", "keywords": ["Efficient and scalable vision", "Medical imaging and biological vision"], "authors_list": ["Haojie Zhang", "Yongyi Su", "Xun Xu", "Kui Jia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1e6"}, "filepath": "data/2401.12168.png", "tags": [], "_media_type": "image", "_rand": 0.999128340658778, "arXiv_link": "https://arxiv.org/abs/2401.12168", "other_link": "https://spatial-vlm.github.io/", "title": "SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities", "abstract": "Understanding and reasoning about spatial relationships is a fundamental\ncapability for Visual Question Answering (VQA) and robotics. While Vision\nLanguage Models (VLM) have demonstrated remarkable performance in certain VQA\nbenchmarks, they still lack capabilities in 3D spatial reasoning, such as\nrecognizing quantitative relationships of physical objects like distances or\nsize differences. We hypothesize that VLMs' limited spatial reasoning\ncapability is due to the lack of 3D spatial knowledge in training data and aim\nto solve this problem by training VLMs with Internet-scale spatial reasoning\ndata. To this end, we present a system to facilitate this approach. We first\ndevelop an automatic 3D spatial VQA data generation framework that scales up to\n2 billion VQA examples on 10 million real-world images. We then investigate\nvarious factors in the training recipe, including data quality, training\npipeline, and VLM architecture. Our work features the first internet-scale 3D\nspatial reasoning dataset in metric space. By training a VLM on such data, we\nsignificantly enhance its ability on both qualitative and quantitative spatial\nVQA. Finally, we demonstrate that this VLM unlocks novel downstream\napplications in chain-of-thought spatial reasoning and robotics due to its\nquantitative estimation capability. Project website:\nhttps://spatial-vlm.github.io/", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Boyuan Chen", "Zhuo Xu", "Sean Kirmani", "brian ichter", "Dorsa Sadigh", "Leonidas Guibas", "Fei Xia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language", "Machine Learning", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1e7"}, "filepath": "data/2405.14077.png", "tags": [], "_media_type": "image", "_rand": 0.9996627180325027, "arXiv_link": "https://arxiv.org/abs/2405.14077", "other_link": "https://github.com/RongyiZhu/L2T.", "title": "Learning to Transform Dynamically for Better Adversarial Transferability", "abstract": "Adversarial examples, crafted by adding perturbations imperceptible to\nhumans, can deceive neural networks. Recent studies identify the adversarial\ntransferability across various models, \\textit{i.e.}, the cross-model attack\nability of adversarial samples. To enhance such adversarial transferability,\nexisting input transformation-based methods diversify input data with\ntransformation augmentation. 
However, their effectiveness is limited by the\nfinite number of available transformations. In our study, we introduce a novel\napproach named Learning to Transform (L2T). L2T increases the diversity of\ntransformed images by selecting the optimal combination of operations from a\npool of candidates, consequently improving adversarial transferability. We\nconceptualize the selection of optimal transformation combinations as a\ntrajectory optimization problem and employ a reinforcement learning strategy to\neffectively solve the problem. Comprehensive experiments on the ImageNet\ndataset, as well as practical tests with Google Vision and GPT-4V, reveal that\nL2T surpasses current methodologies in enhancing adversarial transferability,\nthereby confirming its effectiveness and practical significance. The code is\navailable at https://github.com/RongyiZhu/L2T.", "keywords": [], "authors_list": ["Rongyi Zhu", "Zeliang Zhang", "Susan Liang", "Zhuo Liu", "Chenliang Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1e8"}, "filepath": "data/2405.20305.png", "tags": [], "_media_type": "image", "_rand": 0.9991604658487638, "arXiv_link": "https://arxiv.org/abs/2405.20305", "other_link": "", "title": "Can\u2019t make an Omelette without Breaking some Eggs: Plausible Action Anticipation using Large Video-Language Models", "abstract": "We introduce PlausiVL, a large video-language model for anticipating action\nsequences that are plausible in the real-world. While significant efforts have\nbeen made towards anticipating future actions, prior approaches do not take\ninto account the aspect of plausibility in an action sequence. To address this\nlimitation, we explore the generative capability of a large video-language\nmodel in our work and further, develop the understanding of plausibility in an\naction sequence by introducing two objective functions, a counterfactual-based\nplausible action sequence learning loss and a long-horizon action repetition\nloss. We utilize temporal logical constraints as well as verb-noun action pair\nlogical constraints to create implausible/counterfactual action sequences and\nuse them to train the model with plausible action sequence learning loss. This\nloss helps the model to differentiate between plausible and not plausible\naction sequences and also helps the model to learn implicit temporal cues\ncrucial for the task of action anticipation. The long-horizon action repetition\nloss puts a higher penalty on the actions that are more prone to repetition\nover a longer temporal window. With this penalization, the model is able to\ngenerate diverse, plausible action sequences. 
We evaluate our approach on two\nlarge-scale datasets, Ego4D and EPIC-Kitchens-100, and show improvements on the\ntask of action anticipation.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Himangi Mittal", "Nakul Agarwal", "Shao-Yuan Lo", "Kwonjoon Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1e9"}, "filepath": "data/2404.00742.png", "tags": [], "_media_type": "image", "_rand": 0.9996212845316178, "arXiv_link": "https://arxiv.org/abs/2404.00742", "other_link": "", "title": "Adapting to Length Shift: FlexiLength Network for Trajectory Prediction", "abstract": "Trajectory prediction plays an important role in various applications,\nincluding autonomous driving, robotics, and scene understanding. Existing\napproaches mainly focus on developing compact neural networks to increase\nprediction precision on public datasets, typically employing a standardized\ninput duration. However, a notable issue arises when these models are evaluated\nwith varying observation lengths, leading to a significant performance drop, a\nphenomenon we term the Observation Length Shift. To address this issue, we\nintroduce a general and effective framework, the FlexiLength Network (FLN), to\nenhance the robustness of existing trajectory prediction techniques against\nvarying observation periods. Specifically, FLN integrates trajectory data with\ndiverse observation lengths, incorporates FlexiLength Calibration (FLC) to\nacquire temporal invariant representations, and employs FlexiLength Adaptation\n(FLA) to further refine these representations for more accurate future\ntrajectory predictions. Comprehensive experiments on multiple datasets, ie,\nETH/UCY, nuScenes, and Argoverse 1, demonstrate the effectiveness and\nflexibility of our proposed FLN framework.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Yi Xu", "Yun Fu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1ea"}, "filepath": "data/2403.02753.png", "tags": [], "_media_type": "image", "_rand": 0.9996978771253244, "arXiv_link": "https://arxiv.org/abs/2403.02753", "other_link": "https://github.com/chihina/GAFL-CVPR2024.", "title": "Learning Group Activity Features Through Person Attribute Prediction", "abstract": "This paper proposes Group Activity Feature (GAF) learning in which features\nof multi-person activity are learned as a compact latent vector. Unlike prior\nwork in which the manual annotation of group activities is required for\nsupervised learning, our method learns the GAF through person attribute\nprediction without group activity annotations. By learning the whole network in\nan end-to-end manner so that the GAF is required for predicting the person\nattributes of people in a group, the GAF is trained as the features of\nmulti-person activity. As a person attribute, we propose to use a person's\naction class and appearance features because the former is easy to annotate due\nto its simpleness, and the latter requires no manual annotation. 
In addition,\nwe introduce a location-guided attribute prediction to disentangle the complex\nGAF for extracting the features of each target person properly. Various\nexperimental results validate that our method outperforms SOTA methods\nquantitatively and qualitatively on two public datasets. Visualization of our\nGAF also demonstrates that our method learns the GAF representing fined-grained\ngroup activity classes. Code: https://github.com/chihina/GAFL-CVPR2024.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Chihiro Nakatani", "Hiroaki Kawashima", "Norimichi Ukita"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1eb"}, "filepath": "data/2311.17532.png", "tags": [], "_media_type": "image", "_rand": 0.999611747418612, "arXiv_link": "https://arxiv.org/abs/2311.17532", "other_link": "https://xingqunqi-lab.github.io/Emo-Transition-Gesture/.", "title": "Weakly-Supervised Emotion Transition Learning for Diverse 3D Co-speech Gesture Generation", "abstract": "Generating vivid and emotional 3D co-speech gestures is crucial for virtual\navatar animation in human-machine interaction applications. While the existing\nmethods enable generating the gestures to follow a single emotion label, they\noverlook that long gesture sequence modeling with emotion transition is more\npractical in real scenes. In addition, the lack of large-scale available\ndatasets with emotional transition speech and corresponding 3D human gestures\nalso limits the addressing of this task. To fulfill this goal, we first\nincorporate the ChatGPT-4 and an audio inpainting approach to construct the\nhigh-fidelity emotion transition human speeches. Considering obtaining the\nrealistic 3D pose annotations corresponding to the dynamically inpainted\nemotion transition audio is extremely difficult, we propose a novel weakly\nsupervised training strategy to encourage authority gesture transitions.\nSpecifically, to enhance the coordination of transition gestures w.r.t\ndifferent emotional ones, we model the temporal association representation\nbetween two different emotional gesture sequences as style guidance and infuse\nit into the transition generation. We further devise an emotion mixture\nmechanism that provides weak supervision based on a learnable mixed emotion\nlabel for transition gestures. Last, we present a keyframe sampler to supply\neffective initial posture cues in long sequences, enabling us to generate\ndiverse gestures. Extensive experiments demonstrate that our method outperforms\nthe state-of-the-art models constructed by adapting single emotion-conditioned\ncounterparts on our newly defined emotion transition task and datasets. 
Our\ncode and dataset will be released on the project page:\nhttps://xingqunqi-lab.github.io/Emo-Transition-Gesture/.", "keywords": [], "authors_list": ["Xingqun Qi", "Jiahao Pan", "Peng Li", "Ruibin Yuan", "Xiaowei Chi", "Mengfei Li", "Wenhan Luo", "Wei Xue", "Shanghang Zhang", "Qifeng Liu", "Yike Guo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1ec"}, "filepath": "data/2403.08848.png", "tags": [], "_media_type": "image", "_rand": 0.9998945566441862, "arXiv_link": "https://arxiv.org/abs/2403.08848", "other_link": "https://gbc-iitd.github.io/focusmae", "title": "FocusMAE: Gallbladder Cancer Detection from Ultrasound Videos with Focused Masked Autoencoders", "abstract": "In recent years, automated Gallbladder Cancer (GBC) detection has gained the\nattention of researchers. Current state-of-the-art (SOTA) methodologies relying\non ultrasound sonography (US) images exhibit limited generalization,\nemphasizing the need for transformative approaches. We observe that individual\nUS frames may lack sufficient information to capture disease manifestation.\nThis study advocates for a paradigm shift towards video-based GBC detection,\nleveraging the inherent advantages of spatiotemporal representations. Employing\nthe Masked Autoencoder (MAE) for representation learning, we address\nshortcomings in conventional image-based methods. We propose a novel design\ncalled FocusMAE to systematically bias the selection of masking tokens from\nhigh-information regions, fostering a more refined representation of\nmalignancy. Additionally, we contribute the most extensive US video dataset for\nGBC detection. We also note that, this is the first study on US video-based GBC\ndetection. We validate the proposed methods on the curated dataset, and report\na new state-of-the-art (SOTA) accuracy of 96.4% for the GBC detection problem,\nagainst an accuracy of 84% by current Image-based SOTA - GBCNet, and RadFormer,\nand 94.7% by Video-based SOTA - AdaMAE. We further demonstrate the generality\nof the proposed FocusMAE on a public CT-based Covid detection dataset,\nreporting an improvement in accuracy by 3.3% over current baselines. The source\ncode and pretrained models are available at:\nhttps://gbc-iitd.github.io/focusmae", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Soumen Basu", "Mayuna Gupta", "Chetan Madan", "Pankaj Gupta", "Chetan Arora"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1ed"}, "filepath": "data/2405.15160.png", "tags": [], "_media_type": "image", "_rand": 0.9991696706186884, "arXiv_link": "https://arxiv.org/abs/2405.15160", "other_link": "", "title": "Learning to Predict Activity Progress by Self-Supervised Video Alignment", "abstract": "This paper presents a new self-supervised video representation learning\nframework, ARVideo, which autoregressively predicts the next video token in a\ntailored sequence order. Two key designs are included. First, we organize\nautoregressive video tokens into clusters that span both spatially and\ntemporally, thereby enabling a richer aggregation of contextual information\ncompared to the standard spatial-only or temporal-only clusters. 
Second, we\nadopt a randomized spatiotemporal prediction order to facilitate learning from\nmulti-dimensional data, addressing the limitations of a handcrafted\nspatial-first or temporal-first sequence order. Extensive experiments establish\nARVideo as an effective paradigm for self-supervised video representation\nlearning. For example, when trained with the ViT-B backbone, ARVideo\ncompetitively attains 81.2% on Kinetics-400 and 70.9% on Something-Something\nV2, which are on par with the strong benchmark set by VideoMAE. Importantly,\nARVideo also demonstrates higher training efficiency, i.e., it trains 14%\nfaster and requires 58% less GPU memory compared to VideoMAE.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Gerard Donahue", "Ehsan Elhamifar"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1ee"}, "filepath": "data/2403.14118.png", "tags": [], "_media_type": "image", "_rand": 0.9998675083366572, "arXiv_link": "https://arxiv.org/abs/2403.14118", "other_link": "", "title": "Revisiting Global Translation Estimation with Feature Tracks", "abstract": "Machine Translation Quality Estimation (MTQE) is the task of estimating the\nquality of machine-translated text in real time without the need for reference\ntranslations, which is of great importance for the development of MT. After two\ndecades of evolution, QE has yielded a wealth of results. This article provides\na comprehensive overview of QE datasets, annotation methods, shared tasks,\nmethodologies, challenges, and future research directions. It begins with an\nintroduction to the background and significance of QE, followed by an\nexplanation of the concepts and evaluation metrics for word-level QE,\nsentence-level QE, document-level QE, and explainable QE. The paper categorizes\nthe methods developed throughout the history of QE into those based on\nhandcrafted features, deep learning, and Large Language Models (LLMs), with a\nfurther division of deep learning-based methods into classic deep learning and\nthose incorporating pre-trained language models (LMs). Additionally, the\narticle details the advantages and limitations of each method and offers a\nstraightforward comparison of different approaches. Finally, the paper\ndiscusses the current challenges in QE research and provides an outlook on\nfuture research directions.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Peilin Tao", "Hainan Cui", "Mengqi Rong", "Shuhan Shen"], "category_name": "Computation and Language", "all_categories": ["Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1ef"}, "filepath": "data/2405.17876.png", "tags": [], "_media_type": "image", "_rand": 0.9995688896651617, "arXiv_link": "https://arxiv.org/abs/2405.17876", "other_link": "", "title": "Directed Decentralized Collaboration for Personalized Federated Learning", "abstract": "Personalized Federated Learning (PFL) is proposed to find the greatest\npersonalized models for each client. To avoid the central failure and\ncommunication bottleneck in the server-based FL, we concentrate on the\nDecentralized Personalized Federated Learning (DPFL) that performs distributed\nmodel training in a Peer-to-Peer (P2P) manner. 
Most personalized works in DPFL\nare based on undirected and symmetric topologies, however, the data,\ncomputation and communication resources heterogeneity result in large variances\nin the personalized models, which lead the undirected aggregation to suboptimal\npersonalized performance and unguaranteed convergence. To address these issues,\nwe propose a directed collaboration DPFL framework by incorporating stochastic\ngradient push and partial model personalized, called \\textbf{D}ecentralized\n\\textbf{Fed}erated \\textbf{P}artial \\textbf{G}radient \\textbf{P}ush\n(\\textbf{DFedPGP}). It personalizes the linear classifier in the modern deep\nmodel to customize the local solution and learns a consensus representation in\na fully decentralized manner. Clients only share gradients with a subset of\nneighbors based on the directed and asymmetric topologies, which guarantees\nflexible choices for resource efficiency and better convergence. Theoretically,\nwe show that the proposed DFedPGP achieves a superior convergence rate of\n$\\mathcal{O}(\\frac{1}{\\sqrt{T}})$ in the general non-convex setting, and prove\nthe tighter connectivity among clients will speed up the convergence. The\nproposed method achieves state-of-the-art (SOTA) accuracy in both data and\ncomputation heterogeneity scenarios, demonstrating the efficiency of the\ndirected collaboration and partial gradient push.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yingqi Liu", "Yifan Shi", "Qinglun Li", "Baoyuan Wu", "Xueqian Wang", "Li Shen"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Distributed, Parallel, and Cluster Computing", "Optimization and Control"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1f0"}, "filepath": "data/2308.14610.png", "tags": [], "_media_type": "image", "_rand": 0.9996836872289024, "arXiv_link": "https://arxiv.org/abs/2308.14610", "other_link": "", "title": "PolarRec: Improving Radio Interferometric Data Reconstruction Using Polar Coordinates", "abstract": "In radio astronomy, visibility data, which are measurements of wave signals\nfrom radio telescopes, are transformed into images for observation of distant\ncelestial objects. However, these resultant images usually contain both real\nsources and artifacts, due to signal sparsity and other factors. One way to\nobtain cleaner images is to reconstruct samples into dense forms before\nimaging. Unfortunately, existing reconstruction methods often miss some\ncomponents of visibility in frequency domain, so blurred object edges and\npersistent artifacts remain in the images. Furthermore, the computation\noverhead is high on irregular visibility samples due to the data skew. To\naddress these problems, we propose PolarRec, a transformer-encoder-conditioned\nreconstruction pipeline with visibility samples converted into the polar\ncoordinate representation. This representation matches the way in which radio\ntelescopes observe a celestial area as the Earth rotates. As a result,\nvisibility samples distribute in the polar system more uniformly than in the\nCartesian space. Therefore, we propose to use radial distance in the loss\nfunction, to help reconstruct complete visibility effectively. Also, we group\nvisibility samples by their polar angles and propose a group-based encoding\nscheme to improve the efficiency. 
Our experiments demonstrate that PolarRec\nmarkedly improves imaging results by faithfully reconstructing all frequency\ncomponents in the visibility domain while significantly reducing the\ncomputation cost in visibility data encoding. We believe this high-quality and\nhigh-efficiency imaging of PolarRec will better facilitate astronomers to\nconduct their research.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Ruoqi Wang", "Zhuoyang Chen", "Jiayi Zhu", "Qiong Luo", "Feng Wang"], "category_name": "", "all_categories": ["Unknown", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1f1"}, "filepath": "data/2311.16518.png", "tags": [], "_media_type": "image", "_rand": 0.9990631745031948, "arXiv_link": "https://arxiv.org/abs/2311.16518", "other_link": "", "title": "SeeSR: Towards Semantics-Aware Real-World Image Super-Resolution", "abstract": "Owe to the powerful generative priors, the pre-trained text-to-image (T2I)\ndiffusion models have become increasingly popular in solving the real-world\nimage super-resolution problem. However, as a consequence of the heavy quality\ndegradation of input low-resolution (LR) images, the destruction of local\nstructures can lead to ambiguous image semantics. As a result, the content of\nreproduced high-resolution image may have semantic errors, deteriorating the\nsuper-resolution performance. To address this issue, we present a\nsemantics-aware approach to better preserve the semantic fidelity of generative\nreal-world image super-resolution. First, we train a degradation-aware prompt\nextractor, which can generate accurate soft and hard semantic prompts even\nunder strong degradation. The hard semantic prompts refer to the image tags,\naiming to enhance the local perception ability of the T2I model, while the soft\nsemantic prompts compensate for the hard ones to provide additional\nrepresentation information. These semantic prompts can encourage the T2I model\nto generate detailed and semantically accurate results. Furthermore, during the\ninference process, we integrate the LR images into the initial sampling noise\nto mitigate the diffusion model's tendency to generate excessive random\ndetails. The experiments show that our method can reproduce more realistic\nimage details and hold better the semantics.", "keywords": ["Image and video generation and manipulation", "Low-level vision"], "authors_list": ["Rongyuan Wu", "Tao Yang", "Lingchen Sun", "Zhengqiang ZHANG", "Shuai Li", "Lei Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1f2"}, "filepath": "data/2312.07378v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999979764775169, "arXiv_link": "https://arxiv.org/abs/2312.07378v1", "other_link": "http://www.hoi4d.top/}.},", "title": "PanoContext-Former: Panoramic Total Scene Understanding with a Transformer", "abstract": "The field of 4D point cloud understanding is rapidly developing with the goal\nof analyzing dynamic 3D point cloud sequences. However, it remains a\nchallenging task due to the sparsity and lack of texture in point clouds.\nMoreover, the irregularity of point cloud poses a difficulty in aligning\ntemporal information within video sequences. 
To address these issues, we\npropose a novel cross-modal knowledge transfer framework, called\nX4D-SceneFormer. This framework enhances 4D-Scene understanding by transferring\ntexture priors from RGB sequences using a Transformer architecture with\ntemporal relationship mining. Specifically, the framework is designed with a\ndual-branch architecture, consisting of an 4D point cloud transformer and a\nGradient-aware Image Transformer (GIT). During training, we employ multiple\nknowledge transfer techniques, including temporal consistency losses and masked\nself-attention, to strengthen the knowledge transfer between modalities. This\nleads to enhanced performance during inference using single-modal 4D point\ncloud inputs. Extensive experiments demonstrate the superior performance of our\nframework on various 4D point cloud video understanding tasks, including action\nrecognition, action segmentation and semantic segmentation. The results achieve\n1st places, i.e., 85.3% (+7.9%) accuracy and 47.3% (+5.0%) mIoU for 4D action\nsegmentation and semantic segmentation, on the HOI4D\nchallenge\\footnote{\\url{http://www.hoi4d.top/}.}, outperforming previous\nstate-of-the-art by a large margin. We release the code at\nhttps://github.com/jinglinglingling/X4D", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Yuan Dong", "Chuan Fang", "Liefeng Bo", "Zilong Dong", "Ping Tan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1f3"}, "filepath": "data/2310.00031.png", "tags": [], "_media_type": "image", "_rand": 0.9994913252662482, "arXiv_link": "https://arxiv.org/abs/2310.00031", "other_link": "https://www.vision.caltech.edu/tadp/.", "title": "Text-image Alignment for Diffusion-based Perception", "abstract": "Diffusion models are generative models with impressive text-to-image\nsynthesis capabilities and have spurred a new wave of creative methods for\nclassical machine learning tasks. However, the best way to harness the\nperceptual knowledge of these generative models for visual tasks is still an\nopen question. Specifically, it is unclear how to use the prompting interface\nwhen applying diffusion backbones to vision tasks. We find that automatically\ngenerated captions can improve text-image alignment and significantly enhance a\nmodel's cross-attention maps, leading to better perceptual performance. Our\napproach improves upon the current state-of-the-art (SOTA) in diffusion-based\nsemantic segmentation on ADE20K and the current overall SOTA for depth\nestimation on NYUv2. Furthermore, our method generalizes to the cross-domain\nsetting. We use model personalization and caption modifications to align our\nmodel to the target domain and find improvements over unaligned baselines. Our\ncross-domain object detection model, trained on Pascal VOC, achieves SOTA\nresults on Watercolor2K. Our cross-domain segmentation method, trained on\nCityscapes, achieves SOTA results on Dark Zurich-val and Nighttime Driving.\nProject page: https://www.vision.caltech.edu/tadp/. 
Code:\nhttps://github.com/damaggu/TADP.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Neehar Kondapaneni", "Markus Marks", "Manuel Knott", "Rog\u00e9rio Guimar\u00e3es", "Pietro Perona"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1f4"}, "filepath": "data/2311.17456.png", "tags": [], "_media_type": "image", "_rand": 0.9994746783706923, "arXiv_link": "https://arxiv.org/abs/2311.17456", "other_link": "https://github.com/IRMVLab/DifFlow3D.", "title": "DifFlow3D: Toward Robust Uncertainty-Aware Scene Flow Estimation with Iterative Diffusion-Based Refinement", "abstract": "Scene flow estimation, which aims to predict per-point 3D displacements of\ndynamic scenes, is a fundamental task in the computer vision field. However,\nprevious works commonly suffer from unreliable correlation caused by locally\nconstrained searching ranges, and struggle with accumulated inaccuracy arising\nfrom the coarse-to-fine structure. To alleviate these problems, we propose a\nnovel uncertainty-aware scene flow estimation network (DifFlow3D) with the\ndiffusion probabilistic model. Iterative diffusion-based refinement is designed\nto enhance the correlation robustness and resilience to challenging cases, e.g.\ndynamics, noisy inputs, repetitive patterns, etc. To restrain the generation\ndiversity, three key flow-related features are leveraged as conditions in our\ndiffusion model. Furthermore, we also develop an uncertainty estimation module\nwithin diffusion to evaluate the reliability of estimated scene flow. Our\nDifFlow3D achieves state-of-the-art performance, with 24.0% and 29.1% EPE3D\nreduction respectively on FlyingThings3D and KITTI 2015 datasets. Notably, our\nmethod achieves an unprecedented millimeter-level accuracy (0.0078m in EPE3D)\non the KITTI dataset. Additionally, our diffusion-based refinement paradigm can\nbe readily integrated as a plug-and-play module into existing scene flow\nnetworks, significantly increasing their estimation accuracy. Codes are\nreleased at https://github.com/IRMVLab/DifFlow3D.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Jiuming Liu", "Guangming Wang", "Weicai Ye", "Chaokang Jiang", "Jinru Han", "Zhe Liu", "Guofeng Zhang", "Dalong Du", "Hesheng Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1f5"}, "filepath": "data/2309.15729.png", "tags": [], "_media_type": "image", "_rand": 0.9995569352155145, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2309.15729", "other_link": "https://github.com/JxuanC/MindGPT.", "title": "Mind Artist: Creating Artistic Snapshots with Human Thought", "abstract": "Decoding of seen visual contents with non-invasive brain recordings has\nimportant scientific and practical values. Efforts have been made to recover\nthe seen images from brain signals. However, most existing approaches cannot\nfaithfully reflect the visual contents due to insufficient image quality or\nsemantic mismatches. 
Compared with reconstructing pixel-level visual images,\nspeaking is a more efficient and effective way to explain visual information.\nHere we introduce a non-invasive neural decoder, termed as MindGPT, which\ninterprets perceived visual stimuli into natural languages from fMRI signals.\nSpecifically, our model builds upon a visually guided neural encoder with a\ncross-attention mechanism, which permits us to guide latent neural\nrepresentations towards a desired language semantic direction in an end-to-end\nmanner by the collaborative use of the large language model GPT. By doing so,\nwe found that the neural representations of the MindGPT are explainable, which\ncan be used to evaluate the contributions of visual properties to language\nsemantics. Our experiments show that the generated word sequences truthfully\nrepresented the visual information (with essential details) conveyed in the\nseen stimuli. The results also suggested that with respect to language decoding\ntasks, the higher visual cortex (HVC) is more semantically informative than the\nlower visual cortex (LVC), and using only the HVC can recover most of the\nsemantic information. The code of the MindGPT model will be publicly available\nat https://github.com/JxuanC/MindGPT.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Jiaxuan Chen", "Yu Qi", "Yueming Wang", "Gang Pan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1f6"}, "filepath": "data/2401.04350v3.png", "tags": [], "_media_type": "image", "_rand": 0.9991905326487888, "arXiv_link": "https://arxiv.org/html/2401.04350v3", "other_link": "https://github.com/serendipity1122/Pre-trained-Model-Guided-Fine-Tuning-for-Zero-Shot-Adversarial-Robustness.", "title": "Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness", "abstract": "Large-scale pre-trained vision-language models like CLIP have demonstrated\nimpressive performance across various tasks, and exhibit remarkable zero-shot\ngeneralization capability, while they are also vulnerable to imperceptible\nadversarial examples. Existing works typically employ adversarial training\n(fine-tuning) as a defense method against adversarial examples. However, direct\napplication to the CLIP model may result in overfitting, compromising the\nmodel's capacity for generalization. In this paper, we propose Pre-trained\nModel Guided Adversarial Fine-Tuning (PMG-AFT) method, which leverages\nsupervision from the original pre-trained model by carefully designing an\nauxiliary branch, to enhance the model's zero-shot adversarial robustness.\nSpecifically, PMG-AFT minimizes the distance between the features of\nadversarial examples in the target model and those in the pre-trained model,\naiming to preserve the generalization features already captured by the\npre-trained model. Extensive Experiments on 15 zero-shot datasets demonstrate\nthat PMG-AFT significantly outperforms the state-of-the-art method, improving\nthe top-1 robust accuracy by an average of 4.99%. Furthermore, our approach\nconsistently improves clean accuracy by an average of 8.72%. 
Our code is\navailable at\nhttps://github.com/serendipity1122/Pre-trained-Model-Guided-Fine-Tuning-for-Zero-Shot-Adversarial-Robustness.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Sibo Wang", "Jie Zhang", "Zheng Yuan", "Shiguang Shan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1f7"}, "filepath": "data/2403.10357.png", "tags": [], "_media_type": "image", "_rand": 0.9995981812987873, "arXiv_link": "https://arxiv.org/abs/2403.10357", "other_link": "", "title": "ANIM: Accurate Neural Implicit Model for Human Reconstruction from a single RGB-D image", "abstract": "Recent progress in human shape learning, shows that neural implicit models\nare effective in generating 3D human surfaces from limited number of views, and\neven from a single RGB image. However, existing monocular approaches still\nstruggle to recover fine geometric details such as face, hands or cloth\nwrinkles. They are also easily prone to depth ambiguities that result in\ndistorted geometries along the camera optical axis. In this paper, we explore\nthe benefits of incorporating depth observations in the reconstruction process\nby introducing ANIM, a novel method that reconstructs arbitrary 3D human shapes\nfrom single-view RGB-D images with an unprecedented level of accuracy. Our\nmodel learns geometric details from both multi-resolution pixel-aligned and\nvoxel-aligned features to leverage depth information and enable spatial\nrelationships, mitigating depth ambiguities. We further enhance the quality of\nthe reconstructed shape by introducing a depth-supervision strategy, which\nimproves the accuracy of the signed distance field estimation of points that\nlie on the reconstructed surface. Experiments demonstrate that ANIM outperforms\nstate-of-the-art works that use RGB, surface normals, point cloud or RGB-D data\nas input. In addition, we introduce ANIM-Real, a new multi-modal dataset\ncomprising high-quality scans paired with consumer-grade RGB-D camera, and our\nprotocol to fine-tune ANIM, enabling high-quality reconstruction from\nreal-world human capture.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Marco Pesavento", "Yuanlu Xu", "Nikolaos Sarafianos", "Robert Maier", "Ziyan Wang", "Chun-Han Yao", "Marco Volino", "Edmond Boyer", "Adrian Hilton", "Tony Tung"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1f8"}, "filepath": "data/2311.16682.png", "tags": [], "_media_type": "image", "_rand": 0.9995235277148183, "arXiv_link": "https://arxiv.org/abs/2311.16682", "other_link": "", "title": "ContextSeg: Sketch Semantic Segmentation by Querying the Context with Attention", "abstract": "Sketch semantic segmentation is a well-explored and pivotal problem in\ncomputer vision involving the assignment of pre-defined part labels to\nindividual strokes. This paper presents ContextSeg - a simple yet highly\neffective approach to tackling this problem with two stages. In the first\nstage, to better encode the shape and positional information of strokes, we\npropose to predict an extra dense distance field in an autoencoder network to\nreinforce structural information learning. 
In the second stage, we treat an\nentire stroke as a single entity and label a group of strokes within the same\nsemantic part using an auto-regressive Transformer with the default attention\nmechanism. By group-based labeling, our method can fully leverage the context\ninformation when making decisions for the remaining groups of strokes. Our\nmethod achieves the best segmentation accuracy compared with state-of-the-art\napproaches on two representative datasets and has been extensively evaluated\ndemonstrating its superior performance. Additionally, we offer insights into\nsolving part imbalance in training data and the preliminary experiment on\ncross-category training, which can inspire future research in this field.", "keywords": [], "authors_list": ["Jiawei Wang", "Changjian Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1f9"}, "filepath": "data/2404.01758.png", "tags": [], "_media_type": "image", "_rand": 0.9998155743442897, "arXiv_link": "https://arxiv.org/abs/2404.01758", "other_link": "", "title": "GEARS: Local Geometry-aware Hand-object Interaction Synthesis", "abstract": "Generating realistic hand motion sequences in interaction with objects has\ngained increasing attention with the growing interest in digital humans. Prior\nwork has illustrated the effectiveness of employing occupancy-based or\ndistance-based virtual sensors to extract hand-object interaction features.\nNonetheless, these methods show limited generalizability across object\ncategories, shapes and sizes. We hypothesize that this is due to two reasons:\n1) the limited expressiveness of employed virtual sensors, and 2) scarcity of\navailable training data. To tackle this challenge, we introduce a novel\njoint-centered sensor designed to reason about local object geometry near\npotential interaction regions. The sensor queries for object surface points in\nthe neighbourhood of each hand joint. As an important step towards mitigating\nthe learning complexity, we transform the points from global frame to hand\ntemplate frame and use a shared module to process sensor features of each\nindividual joint. This is followed by a spatio-temporal transformer network\naimed at capturing correlation among the joints in different dimensions.\nMoreover, we devise simple heuristic rules to augment the limited training\nsequences with vast static hand grasping samples. This leads to a broader\nspectrum of grasping types observed during training, in turn enhancing our\nmodel's generalization capability. 
We evaluate on two public datasets, GRAB and\nInterCap, where our method shows superiority over baselines both quantitatively\nand perceptually.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Keyang Zhou", "Bharat Lal Bhatnagar", "Jan Lenssen", "Gerard Pons-Moll"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1fa"}, "filepath": "data/2403.13351v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994919841340142, "arXiv_link": "https://arxiv.org/abs/2403.13351v1", "other_link": "", "title": "OrthCaps: An Orthogonal CapsNet with Sparse Attention Routing and Pruning", "abstract": "Redundancy is a persistent challenge in Capsule Networks (CapsNet),leading to\nhigh computational costs and parameter counts. Although previous works have\nintroduced pruning after the initial capsule layer, dynamic routing's fully\nconnected nature and non-orthogonal weight matrices reintroduce redundancy in\ndeeper layers. Besides, dynamic routing requires iterating to converge, further\nincreasing computational demands. In this paper, we propose an Orthogonal\nCapsule Network (OrthCaps) to reduce redundancy, improve routing performance\nand decrease parameter counts. Firstly, an efficient pruned capsule layer is\nintroduced to discard redundant capsules. Secondly, dynamic routing is replaced\nwith orthogonal sparse attention routing, eliminating the need for iterations\nand fully connected structures. Lastly, weight matrices during routing are\northogonalized to sustain low capsule similarity, which is the first approach\nto introduce orthogonality into CapsNet as far as we know. Our experiments on\nbaseline datasets affirm the efficiency and robustness of OrthCaps in\nclassification tasks, in which ablation studies validate the criticality of\neach component. Remarkably, OrthCaps-Shallow outperforms other Capsule Network\nbenchmarks on four datasets, utilizing only 110k parameters, which is a mere\n1.25% of a standard Capsule Network's total. To the best of our knowledge, it\nachieves the smallest parameter count among existing Capsule Networks.\nSimilarly, OrthCaps-Deep demonstrates competitive performance across four\ndatasets, utilizing only 1.2% of the parameters required by its counterparts.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Geng Xinyu", "Jiaming Wang", "Jiawei Gong", "yuerong xue", "Jun Xu", "Fanglin Chen", "Xiaolin Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1fb"}, "filepath": "data/2312.04521.png", "tags": [], "_media_type": "image", "_rand": 0.9991636992252987, "arXiv_link": "https://arxiv.org/abs/2312.04521", "other_link": "", "title": "Multimodal Industrial Anomaly Detection by Crossmodal Feature Mapping", "abstract": "The paper explores the industrial multimodal Anomaly Detection (AD) task,\nwhich exploits point clouds and RGB images to localize anomalies. We introduce\na novel light and fast framework that learns to map features from one modality\nto the other on nominal samples. At test time, anomalies are detected by\npinpointing inconsistencies between observed and mapped features. 
Extensive\nexperiments show that our approach achieves state-of-the-art detection and\nsegmentation performance in both the standard and few-shot settings on the\nMVTec 3D-AD dataset while achieving faster inference and occupying less memory\nthan previous multimodal AD methods. Moreover, we propose a layer-pruning\ntechnique to improve memory and time efficiency with a marginal sacrifice in\nperformance.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Alex Costanzino", "Pierluigi Zama Ramirez", "Giuseppe Lisanti", "Luigi Di Stefano"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1fc"}, "filepath": "data/2403.12777.png", "tags": [], "_media_type": "image", "_rand": 0.999065072869279, "arXiv_link": "https://arxiv.org/abs/2403.12777", "other_link": "https://github.com/ZhangAIPI/DIM.", "title": "Discover and Mitigate Multiple Biased Subgroups in Image Classifiers", "abstract": "Machine learning models can perform well on in-distribution data but often\nfail on biased subgroups that are underrepresented in the training data,\nhindering the robustness of models for reliable applications. Such subgroups\nare typically unknown due to the absence of subgroup labels. Discovering biased\nsubgroups is the key to understanding models' failure modes and further\nimproving models' robustness. Most previous works of subgroup discovery make an\nimplicit assumption that models only underperform on a single biased subgroup,\nwhich does not hold on in-the-wild data where multiple biased subgroups exist.\n In this work, we propose Decomposition, Interpretation, and Mitigation (DIM),\na novel method to address a more challenging but also more practical problem of\ndiscovering multiple biased subgroups in image classifiers. Our approach\ndecomposes the image features into multiple components that represent multiple\nsubgroups. This decomposition is achieved via a bilinear dimension reduction\nmethod, Partial Least Square (PLS), guided by useful supervision from the image\nclassifier. We further interpret the semantic meaning of each subgroup\ncomponent by generating natural language descriptions using vision-language\nfoundation models. Finally, DIM mitigates multiple biased subgroups\nsimultaneously via two strategies, including the data- and model-centric\nstrategies. Extensive experiments on CIFAR-100 and Breeds datasets demonstrate\nthe effectiveness of DIM in discovering and mitigating multiple biased\nsubgroups. Furthermore, DIM uncovers the failure modes of the classifier on\nHard ImageNet, showcasing its broader applicability to understanding model bias\nin image classifiers. 
The code is available at\nhttps://github.com/ZhangAIPI/DIM.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Zeliang Zhang", "Mingqian Feng", "Zhiheng Li", "Chenliang Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1fd"}, "filepath": "data/2309.11523.png", "tags": [], "_media_type": "image", "_rand": 0.9993913488609264, "arXiv_link": "https://arxiv.org/abs/2309.11523", "other_link": "https://github.com/qhfan/RMT", "title": "RMT: Retentive Networks Meet Vision Transformers", "abstract": "Vision Transformer (ViT) has gained increasing attention in the computer\nvision community in recent years. However, the core component of ViT,\nSelf-Attention, lacks explicit spatial priors and bears a quadratic\ncomputational complexity, thereby constraining the applicability of ViT. To\nalleviate these issues, we draw inspiration from the recent Retentive Network\n(RetNet) in the field of NLP, and propose RMT, a strong vision backbone with\nexplicit spatial prior for general purposes. Specifically, we extend the\nRetNet's temporal decay mechanism to the spatial domain, and propose a spatial\ndecay matrix based on the Manhattan distance to introduce the explicit spatial\nprior to Self-Attention. Additionally, an attention decomposition form that\nadeptly adapts to explicit spatial prior is proposed, aiming to reduce the\ncomputational burden of modeling global information without disrupting the\nspatial decay matrix. Based on the spatial decay matrix and the attention\ndecomposition form, we can flexibly integrate explicit spatial prior into the\nvision backbone with linear complexity. Extensive experiments demonstrate that\nRMT exhibits exceptional performance across various vision tasks. Specifically,\nwithout extra training data, RMT achieves **84.8%** and **86.1%** top-1 acc on\nImageNet-1k with **27M/4.5GFLOPs** and **96M/18.2GFLOPs**. For downstream\ntasks, RMT achieves **54.5** box AP and **47.2** mask AP on the COCO detection\ntask, and **52.8** mIoU on the ADE20K semantic segmentation task. Code is\navailable at https://github.com/qhfan/RMT", "keywords": ["Efficient and scalable vision"], "authors_list": ["Qihang Fan", "Huaibo Huang", "Mingrui Chen", "Hongmin Liu", "Ran He"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1fe"}, "filepath": "data/2404.09993.png", "tags": [], "_media_type": "image", "_rand": 0.9997213640220937, "arXiv_link": "https://arxiv.org/abs/2404.09993", "other_link": "https://liagm.github.io/Bi_Layout/", "title": "No More Ambiguity in 360$^\\circ$ Room Layout via Bi-Layout Estimation", "abstract": "Inherent ambiguity in layout annotations poses significant challenges to\ndeveloping accurate 360{\\deg} room layout estimation models. To address this\nissue, we propose a novel Bi-Layout model capable of predicting two distinct\nlayout types. One stops at ambiguous regions, while the other extends to\nencompass all visible areas. Our model employs two global context embeddings,\nwhere each embedding is designed to capture specific contextual information for\neach layout type. 
With our novel feature guidance module, the image feature\nretrieves relevant context from these embeddings, generating layout-aware\nfeatures for precise bi-layout predictions. A unique property of our Bi-Layout\nmodel is its ability to inherently detect ambiguous regions by comparing the\ntwo predictions. To circumvent the need for manual correction of ambiguous\nannotations during testing, we also introduce a new metric for disambiguating\nground truth layouts. Our method demonstrates superior performance on benchmark\ndatasets, notably outperforming leading approaches. Specifically, on the\nMatterportLayout dataset, it improves 3DIoU from 81.70% to 82.57% across the\nfull test set and notably from 54.80% to 59.97% in subsets with significant\nambiguity. Project page: https://liagm.github.io/Bi_Layout/", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Yu-Ju Tsai", "Jin-Cheng Jhang", "JINGJING ZHENG", "Wei Wang", "Albert Chen", "Min Sun", "Cheng-Hao Kuo", "Ming-Hsuan Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f1ff"}, "filepath": "data/2312.03816.png", "tags": [], "_media_type": "image", "_rand": 0.9996326129337268, "arXiv_link": "https://arxiv.org/abs/2312.03816", "other_link": "https://zhang-zx.github.io/AVID/", "title": "AVID: Any-Length Video Inpainting with Diffusion Model", "abstract": "Recent advances in diffusion models have successfully enabled text-guided\nimage inpainting. While it seems straightforward to extend such editing\ncapability into the video domain, there have been fewer works regarding\ntext-guided video inpainting. Given a video, a masked region at its initial\nframe, and an editing prompt, it requires a model to do infilling at each frame\nfollowing the editing guidance while keeping the out-of-mask region intact.\nThere are three main challenges in text-guided video inpainting: ($i$) temporal\nconsistency of the edited video, ($ii$) supporting different inpainting types\nat different structural fidelity levels, and ($iii$) dealing with variable\nvideo length. To address these challenges, we introduce Any-Length Video\nInpainting with Diffusion Model, dubbed as AVID. At its core, our model is\nequipped with effective motion modules and adjustable structure guidance, for\nfixed-length video inpainting. Building on top of that, we propose a novel\nTemporal MultiDiffusion sampling pipeline with a middle-frame attention\nguidance mechanism, facilitating the generation of videos with any desired\nduration. Our comprehensive experiments show our model can robustly deal with\nvarious inpainting types at different video duration ranges, with high quality.\nMore visualization results are made publicly available at\nhttps://zhang-zx.github.io/AVID/ .", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Zhixing Zhang", "Bichen Wu", "Xiaoyan Wang", "Yaqiao Luo", "Luxin Zhang", "Yinan Zhao", "Peter Vajda", "Dimitris N. 
Metaxas", "Licheng Yu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f200"}, "filepath": "data/2405.08609.png", "tags": [], "_media_type": "image", "_rand": 0.9990525705056269, "arXiv_link": "https://arxiv.org/abs/2405.08609", "other_link": "", "title": "PaReNeRF: Toward Fast Large-scale Dynamic NeRF with Patch-based Reference", "abstract": "Neural Radiance Field(NeRF) is an novel implicit method to achieve the 3D\nreconstruction and representation with a high resolution. After the first\nresearch of NeRF is proposed, NeRF has gained a robust developing power and is\nbooming in the 3D modeling, representation and reconstruction areas. However\nthe first and most of the followed research projects based on NeRF is static,\nwhich are weak in the practical applications. Therefore, more researcher are\ninterested and focused on the study of dynamic NeRF that is more feasible and\nuseful in practical applications or situations. Compared with the static NeRF,\nimplementing the Dynamic NeRF is more difficult and complex. But Dynamic is\nmore potential in the future even is the basic of Editable NeRF. In this\nreview, we made a detailed and abundant statement for the development and\nimportant implementation principles of Dynamci NeRF. The analysis of main\nprinciple and development of Dynamic NeRF is from 2021 to 2023, including the\nmost of the Dynamic NeRF projects. What is more, with colorful and novel\nspecial designed figures and table, We also made a detailed comparison and\nanalysis of different features of various of Dynamic. Besides, we analyzed and\ndiscussed the key methods to implement a Dynamic NeRF. The volume of the\nreference papers is large. The statements and comparisons are multidimensional.\nWith a reading of this review, the whole development history and most of the\nmain design method or principles of Dynamic NeRF can be easy understood and\ngained.", "keywords": ["Efficient and scalable vision", "Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Xiao Tang", "Min Yang", "Penghui Sun", "Hui Li", "Yuchao Dai", "feng zhu", "Hojae Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f201"}, "filepath": "data/2301.08237.png", "tags": [], "_media_type": "image", "_rand": 0.999644819869412, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2301.08237", "other_link": "https://github.com/SJTUwxz/LoCoNet_ASD.", "title": "LoCoNet: Long-Short Context Network for Active Speaker Detection", "abstract": "Active Speaker Detection (ASD) aims to identify who is speaking in each frame\nof a video. ASD reasons from audio and visual information from two contexts:\nlong-term intra-speaker context and short-term inter-speaker context. Long-term\nintra-speaker context models the temporal dependencies of the same speaker,\nwhile short-term inter-speaker context models the interactions of speakers in\nthe same scene. These two contexts are complementary to each other and can help\ninfer the active speaker. Motivated by these observations, we propose LoCoNet,\na simple yet effective Long-Short Context Network that models the long-term\nintra-speaker context and short-term inter-speaker context. 
We use\nself-attention to model long-term intra-speaker context due to its\neffectiveness in modeling long-range dependencies, and convolutional blocks\nthat capture local patterns to model short-term inter-speaker context.\nExtensive experiments show that LoCoNet achieves state-of-the-art performance\non multiple datasets, achieving an mAP of 95.2%(+1.1%) on AVA-ActiveSpeaker,\n68.1%(+22%) on Columbia dataset, 97.2%(+2.8%) on Talkies dataset and\n59.7%(+8.0%) on Ego4D dataset. Moreover, in challenging cases where multiple\nspeakers are present, or face of active speaker is much smaller than other\nfaces in the same scene, LoCoNet outperforms previous state-of-the-art methods\nby 3.4% on the AVA-ActiveSpeaker dataset. The code will be released at\nhttps://github.com/SJTUwxz/LoCoNet_ASD.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Xizi Wang", "Feng Cheng", "Gedas Bertasius"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f202"}, "filepath": "data/2403.01444.png", "tags": [], "_media_type": "image", "_rand": 0.999944898304576, "arXiv_link": "https://arxiv.org/abs/2403.01444", "other_link": "", "title": "3DGStream: On-the-Fly Training of 3D Gaussians for Efficient Streaming of Photo-Realistic Free-Viewpoint Videos", "abstract": "Constructing photo-realistic Free-Viewpoint Videos (FVVs) of dynamic scenes\nfrom multi-view videos remains a challenging endeavor. Despite the remarkable\nadvancements achieved by current neural rendering techniques, these methods\ngenerally require complete video sequences for offline training and are not\ncapable of real-time rendering. To address these constraints, we introduce\n3DGStream, a method designed for efficient FVV streaming of real-world dynamic\nscenes. Our method achieves fast on-the-fly per-frame reconstruction within 12\nseconds and real-time rendering at 200 FPS. Specifically, we utilize 3D\nGaussians (3DGs) to represent the scene. Instead of the na\\\"ive approach of\ndirectly optimizing 3DGs per-frame, we employ a compact Neural Transformation\nCache (NTC) to model the translations and rotations of 3DGs, markedly reducing\nthe training time and storage required for each FVV frame. Furthermore, we\npropose an adaptive 3DG addition strategy to handle emerging objects in dynamic\nscenes. Experiments demonstrate that 3DGStream achieves competitive performance\nin terms of rendering speed, image quality, training time, and model storage\nwhen compared with state-of-the-art methods.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Jiakai Sun", "Han Jiao", "Guangyuan Li", "Zhanjie Zhang", "Lei Zhao", "Wei Xing"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f203"}, "filepath": "data/2403.11812.png", "tags": [], "_media_type": "image", "_rand": 0.9991169170453719, "arXiv_link": "https://arxiv.org/abs/2403.11812", "other_link": "", "title": "Aerial Lifting: Neural Urban Semantic and Building Instance Lifting from Aerial Imagery", "abstract": "We present a neural radiance field method for urban-scale semantic and\nbuilding-level instance segmentation from aerial images by lifting noisy 2D\nlabels to 3D. 
This is a challenging problem due to two primary reasons.\nFirstly, objects in urban aerial images exhibit substantial variations in size,\nincluding buildings, cars, and roads, which pose a significant challenge for\naccurate 2D segmentation. Secondly, the 2D labels generated by existing\nsegmentation methods suffer from the multi-view inconsistency problem,\nespecially in the case of aerial images, where each image captures only a small\nportion of the entire scene. To overcome these limitations, we first introduce\na scale-adaptive semantic label fusion strategy that enhances the segmentation\nof objects of varying sizes by combining labels predicted from different\naltitudes, harnessing the novel-view synthesis capabilities of NeRF. We then\nintroduce a novel cross-view instance label grouping strategy based on the 3D\nscene representation to mitigate the multi-view inconsistency problem in the 2D\ninstance labels. Furthermore, we exploit multi-view reconstructed depth priors\nto improve the geometric quality of the reconstructed radiance field, resulting\nin enhanced segmentation results. Experiments on multiple real-world\nurban-scale datasets demonstrate that our approach outperforms existing\nmethods, highlighting its effectiveness.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Yuqi Zhang", "Guanying Chen", "Jiaxing Chen", "Shuguang Cui"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f204"}, "filepath": "data/2403.11186.png", "tags": [], "_media_type": "image", "_rand": 0.999249041282028, "arXiv_link": "https://arxiv.org/abs/2403.11186", "other_link": "https://george-zhuang.github.io/nettrack/.", "title": "NetTrack: Tracking Highly Dynamic Objects with a Net", "abstract": "The complex dynamicity of open-world objects presents non-negligible\nchallenges for multi-object tracking (MOT), often manifested as severe\ndeformations, fast motion, and occlusions. Most methods that solely depend on\ncoarse-grained object cues, such as boxes and the overall appearance of the\nobject, are susceptible to degradation due to distorted internal relationships\nof dynamic objects. To address this problem, this work proposes NetTrack, an\nefficient, generic, and affordable tracking framework to introduce fine-grained\nlearning that is robust to dynamicity. Specifically, NetTrack constructs a\ndynamicity-aware association with a fine-grained Net, leveraging point-level\nvisual cues. Correspondingly, a fine-grained sampler and matching method have\nbeen incorporated. Furthermore, NetTrack learns object-text correspondence for\nfine-grained localization. To evaluate MOT in extremely dynamic open-world\nscenarios, a bird flock tracking (BFT) dataset is constructed, which exhibits\nhigh dynamicity with diverse species and open-world scenarios. Comprehensive\nevaluation on BFT validates the effectiveness of fine-grained learning on\nobject dynamicity, and thorough transfer experiments on challenging open-world\nbenchmarks, i.e., TAO, TAO-OW, AnimalTrack, and GMOT-40, validate the strong\ngeneralization ability of NetTrack even without finetuning. 
Project page:\nhttps://george-zhuang.github.io/nettrack/.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Guangze Zheng", "Shijie Lin", "Haobo Zuo", "Changhong Fu", "Jia Pan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f205"}, "filepath": "data/2311.13608.png", "tags": [], "_media_type": "image", "_rand": 0.9996170391040808, "arXiv_link": "https://arxiv.org/abs/2311.13608", "other_link": "", "title": "Breathing Life Into Sketches Using Text-to-Video Priors", "abstract": "A sketch is one of the most intuitive and versatile tools humans use to\nconvey their ideas visually. An animated sketch opens another dimension to the\nexpression of ideas and is widely used by designers for a variety of purposes.\nAnimating sketches is a laborious process, requiring extensive experience and\nprofessional design skills. In this work, we present a method that\nautomatically adds motion to a single-subject sketch (hence, \"breathing life\ninto it\"), merely by providing a text prompt indicating the desired motion. The\noutput is a short animation provided in vector representation, which can be\neasily edited. Our method does not require extensive training, but instead\nleverages the motion prior of a large pretrained text-to-video diffusion model\nusing a score-distillation loss to guide the placement of strokes. To promote\nnatural and smooth motion and to better preserve the sketch's appearance, we\nmodel the learned motion through two components. The first governs small local\ndeformations and the second controls global affine transformations.\nSurprisingly, we find that even models that struggle to generate sketch videos\non their own can still serve as a useful backbone for animating abstract\nrepresentations.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Rinon Gal", "Yael Vinker", "Yuval Alaluf", "Amit H. Bermano", "Daniel Cohen-Or", "Ariel Shamir", "Gal Chechik"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f206"}, "filepath": "data/2311.11995.png", "tags": [], "_media_type": "image", "_rand": 0.9996075189219121, "arXiv_link": "https://arxiv.org/abs/2311.11995", "other_link": "", "title": "BrainWash: A Poisoning Attack to Forget in Continual Learning", "abstract": "Continual learning has gained substantial attention within the deep learning\ncommunity, offering promising solutions to the challenging problem of\nsequential learning. Yet, a largely unexplored facet of this paradigm is its\nsusceptibility to adversarial attacks, especially with the aim of inducing\nforgetting. In this paper, we introduce \"BrainWash,\" a novel data poisoning\nmethod tailored to impose forgetting on a continual learner. By adding the\nBrainWash noise to a variety of baselines, we demonstrate how a trained\ncontinual learner can be induced to forget its previously learned tasks\ncatastrophically, even when using these continual learning baselines. An\nimportant feature of our approach is that the attacker requires no access to\nprevious tasks' data and is armed merely with the model's current parameters\nand the data belonging to the most recent task. 
Our extensive experiments\nhighlight the efficacy of BrainWash, showcasing degradation in performance\nacross various regularization-based continual learning methods.", "keywords": [], "authors_list": ["Ali Abbasi", "Parsa Nooralinejad", "Hamed Pirsiavash", "Soheil Kolouri"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Cryptography and Security"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f207"}, "filepath": "data/2311.17123.png", "tags": [], "_media_type": "image", "_rand": 0.9991014638772391, "arXiv_link": "https://arxiv.org/abs/2311.17123", "other_link": "", "title": "ConTex-Human: Free-View Rendering of Human from a Single Image with Texture-Consistent Synthesis", "abstract": "In this work, we propose a method to address the challenge of rendering a 3D\nhuman from a single image in a free-view manner. Some existing approaches could\nachieve this by using generalizable pixel-aligned implicit fields to\nreconstruct a textured mesh of a human or by employing a 2D diffusion model as\nguidance with the Score Distillation Sampling (SDS) method, to lift the 2D\nimage into 3D space. However, a generalizable implicit field often results in\nan over-smooth texture field, while the SDS method tends to lead to a\ntexture-inconsistent novel view with the input image. In this paper, we\nintroduce a texture-consistent back view synthesis module that could transfer\nthe reference image content to the back view through depth and text-guided\nattention injection. Moreover, to alleviate the color distortion that occurs in\nthe side region, we propose a visibility-aware patch consistency regularization\nfor texture mapping and refinement combined with the synthesized back view\ntexture. With the above techniques, we could achieve high-fidelity and\ntexture-consistent human rendering from a single image. Experiments conducted\non both real and synthetic data demonstrate the effectiveness of our method and\nshow that our approach outperforms previous baseline methods.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Xiangjun Gao", "Xiaoyu Li", "Chaopeng Zhang", "Qi Zhang", "Yan-Pei Cao", "Ying Shan", "Long Quan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f208"}, "filepath": "data/2403.03485.png", "tags": [], "_media_type": "image", "_rand": 0.9991709480037009, "arXiv_link": "https://arxiv.org/abs/2403.03485", "other_link": "https://github.com/univ-esuty/noisecollage.", "title": "NoiseCollage: A Layout-Aware Text-to-Image Diffusion Model Based on Noise Cropping and Merging", "abstract": "Layout-aware text-to-image generation is a task to generate multi-object\nimages that reflect layout conditions in addition to text conditions. The\ncurrent layout-aware text-to-image diffusion models still have several issues,\nincluding mismatches between the text and layout conditions and quality\ndegradation of generated images. This paper proposes a novel layout-aware\ntext-to-image diffusion model called NoiseCollage to tackle these issues.\nDuring the denoising process, NoiseCollage independently estimates noises for\nindividual objects and then crops and merges them into a single noise. 
This\noperation helps avoid condition mismatches; in other words, it can put the\nright objects in the right places. Qualitative and quantitative evaluations\nshow that NoiseCollage outperforms several state-of-the-art models. These\nsuccessful results indicate that the crop-and-merge operation of noises is a\nreasonable strategy to control image generation. We also show that NoiseCollage\ncan be integrated with ControlNet to use edges, sketches, and pose skeletons as\nadditional conditions. Experimental results show that this integration boosts\nthe layout accuracy of ControlNet. The code is available at\nhttps://github.com/univ-esuty/noisecollage.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Takahiro Shirakawa", "Seiichi Uchida"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f209"}, "filepath": "data/2312.11598.png", "tags": [], "_media_type": "image", "_rand": 0.9998626325283375, "arXiv_link": "https://arxiv.org/abs/2312.11598", "other_link": "", "title": "SkillDiffuser: Interpretable Hierarchical Planning via Skill Abstractions in Diffusion-Based Task Execution", "abstract": "Diffusion models have demonstrated strong potential for robotic trajectory\nplanning. However, generating coherent trajectories from high-level\ninstructions remains challenging, especially for long-range composition tasks\nrequiring multiple sequential skills. We propose SkillDiffuser, an end-to-end\nhierarchical planning framework integrating interpretable skill learning with\nconditional diffusion planning to address this problem. At the higher level,\nthe skill abstraction module learns discrete, human-understandable skill\nrepresentations from visual observations and language instructions. These\nlearned skill embeddings are then used to condition the diffusion model to\ngenerate customized latent trajectories aligned with the skills. This allows\ngenerating diverse state trajectories that adhere to the learnable skills. By\nintegrating skill learning with conditional trajectory generation,\nSkillDiffuser produces coherent behavior following abstract instructions across\ndiverse tasks. Experiments on multi-task robotic manipulation benchmarks like\nMeta-World and LOReL demonstrate state-of-the-art performance and\nhuman-interpretable skill representations from SkillDiffuser. More\nvisualization results and information could be found on our website.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models"], "authors_list": ["Zhixuan Liang", "Yao Mu", "Hengbo Ma", "Masayoshi Tomizuka", "Mingyu Ding", "Ping Luo"], "category_name": "Robotics", "all_categories": ["Robotics", "Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f20a"}, "filepath": "data/2312.12743.png", "tags": [], "_media_type": "image", "_rand": 0.9990892111445989, "arXiv_link": "https://arxiv.org/abs/2312.12743", "other_link": "", "title": "CurveCloudNet: Processing Point Clouds with 1D Structure", "abstract": "Current methodologies in point cloud analysis predominantly explore 3D\ngeometries, often achieved through the introduction of intricate learnable\ngeometric extractors in the encoder or by deepening networks with repeated\nblocks. 
However, these approaches inevitably lead to a significant number of\nlearnable parameters, resulting in substantial computational costs and imposing\nmemory burdens on CPU/GPU. Additionally, the existing strategies are primarily\ntailored for object-level point cloud classification and segmentation tasks,\nwith limited extensions to crucial scene-level applications, such as autonomous\ndriving. In response to these limitations, we introduce PointeNet, an efficient\nnetwork designed specifically for point cloud analysis. PointeNet distinguishes\nitself with its lightweight architecture, low training cost, and plug-and-play\ncapability, effectively capturing representative features. The network consists\nof a Multivariate Geometric Encoding (MGE) module and an optional\nDistance-aware Semantic Enhancement (DSE) module. The MGE module employs\noperations of sampling, grouping, and multivariate geometric aggregation to\nlightweightly capture and adaptively aggregate multivariate geometric features,\nproviding a comprehensive depiction of 3D geometries. The DSE module, designed\nfor real-world autonomous driving scenarios, enhances the semantic perception\nof point clouds, particularly for distant points. Our method demonstrates\nflexibility by seamlessly integrating with a classification/segmentation head\nor embedding into off-the-shelf 3D object detection networks, achieving notable\nperformance improvements at a minimal cost. Extensive experiments on\nobject-level datasets, including ModelNet40, ScanObjectNN, ShapeNetPart, and\nthe scene-level dataset KITTI, demonstrate the superior performance of\nPointeNet over state-of-the-art methods in point cloud analysis.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Colton Stearns", "Alex Fu", "Jiateng Liu", "Jeong Joon Park", "Davis Rempe", "Despoina Paschalidou", "Leonidas Guibas"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f20b"}, "filepath": "data/2403.15132.png", "tags": [], "_media_type": "image", "_rand": 0.9999236564582122, "arXiv_link": "https://arxiv.org/abs/2403.15132", "other_link": "", "title": "LAN: Learning to Adapt Noise for Image Denoising", "abstract": "Image denoising is a fundamental task in computer vision. While prevailing\ndeep learning-based supervised and self-supervised methods have excelled in\neliminating in-distribution noise, their susceptibility to out-of-distribution\n(OOD) noise remains a significant challenge. The recent emergence of\ncontrastive language-image pre-training (CLIP) model has showcased exceptional\ncapabilities in open-world image recognition and segmentation. Yet, the\npotential for leveraging CLIP to enhance the robustness of low-level tasks\nremains largely unexplored. This paper uncovers that certain dense features\nextracted from the frozen ResNet image encoder of CLIP exhibit\ndistortion-invariant and content-related properties, which are highly desirable\nfor generalizable denoising. Leveraging these properties, we devise an\nasymmetrical encoder-decoder denoising network, which incorporates dense\nfeatures including the noisy image and its multi-scale features from the frozen\nResNet encoder of CLIP into a learnable image decoder to achieve generalizable\ndenoising. 
The progressive feature augmentation strategy is further proposed to\nmitigate feature overfitting and improve the robustness of the learnable\ndecoder. Extensive experiments and comparisons conducted across diverse OOD\nnoises, including synthetic noise, real-world sRGB noise, and low-dose CT image\nnoise, demonstrate the superior generalization ability of our method.", "keywords": ["Low-level vision", "Multimodal models and vision-language models"], "authors_list": ["Changjin Kim", "Tae Hyun Kim", "Sungyong Baik"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f20c"}, "filepath": "data/2404.04647.png", "tags": [], "_media_type": "image", "_rand": 0.9991355607861379, "arXiv_link": "https://arxiv.org/abs/2404.04647", "other_link": "", "title": "Structured Gradient-based Interpretations via Norm-Regularized Adversarial Training", "abstract": "Gradient-based saliency maps have been widely used to explain the decisions\nof deep neural network classifiers. However, standard gradient-based\ninterpretation maps, including the simple gradient and integrated gradient\nalgorithms, often lack desired structures such as sparsity and connectedness in\ntheir application to real-world computer vision models. A frequently used\napproach to inducing sparsity structures into gradient-based saliency maps is\nto alter the simple gradient scheme using sparsification or norm-based\nregularization. A drawback with such post-processing methods is their\nfrequently-observed significant loss in fidelity to the original simple\ngradient map. In this work, we propose to apply adversarial training as an\nin-processing scheme to train neural networks with structured simple gradient\nmaps. We show a duality relation between the regularized norms of the\nadversarial perturbations and gradient-based maps, based on which we design\nadversarial training loss functions promoting sparsity and group-sparsity\nproperties in simple gradient maps. We present several numerical results to\nshow the influence of our proposed norm-based adversarial training methods on\nthe standard gradient-based maps of standard neural network architectures on\nbenchmark image datasets.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Shizhan Gong", "Qi Dou", "Farzan Farnia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f20d"}, "filepath": "data/2312.13091v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994280617808273, "arXiv_link": "https://arxiv.org/abs/2312.13091v2", "other_link": "https://ubisoft-laforge.github.io/character/mosar/", "title": "MoSAR: Monocular Semi-Supervised Model for Avatar Reconstruction using Differentiable Shading", "abstract": "Reconstructing an avatar from a portrait image has many applications in\nmultimedia, but remains a challenging research problem. Extracting reflectance\nmaps and geometry from one image is ill-posed: recovering geometry is a\none-to-many mapping problem and reflectance and light are difficult to\ndisentangle. Accurate geometry and reflectance can be captured under the\ncontrolled conditions of a light stage, but it is costly to acquire large\ndatasets in this fashion. 
Moreover, training solely with this type of data\nleads to poor generalization with in-the-wild images. This motivates the\nintroduction of MoSAR, a method for 3D avatar generation from monocular images.\nWe propose a semi-supervised training scheme that improves generalization by\nlearning from both light stage and in-the-wild datasets. This is achieved using\na novel differentiable shading formulation. We show that our approach\neffectively disentangles the intrinsic face parameters, producing relightable\navatars. As a result, MoSAR estimates a richer set of skin reflectance maps,\nand generates more realistic avatars than existing state-of-the-art methods. We\nalso introduce a new dataset, named FFHQ-UV-Intrinsics, the first public\ndataset providing intrinsic face attributes at scale (diffuse, specular,\nambient occlusion and translucency maps) for a total of 10k subjects. The\nproject website and the dataset are available on the following link:\nhttps://ubisoft-laforge.github.io/character/mosar/", "keywords": ["Biometrics and human analysis"], "authors_list": ["Abdallah Dib", "Luiz Gustavo Hafemann", "Emeline Got", "Trevor Anderson", "Amin Fadaeinejad", "Rafael M. O. Cruz", "Marc-Andr\u00e9 Carbonneau"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Machine Learning", "Unknown", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f20e"}, "filepath": "data/2311.17754.png", "tags": [], "_media_type": "image", "_rand": 0.9990311917190646, "arXiv_link": "https://arxiv.org/abs/2311.17754", "other_link": "", "title": "Cinematic Behavior Transfer via NeRF-based Differentiable Filming", "abstract": "In the evolving landscape of digital media and video production, the precise\nmanipulation and reproduction of visual elements like camera movements and\ncharacter actions are highly desired. Existing SLAM methods face limitations in\ndynamic scenes and human pose estimation often focuses on 2D projections,\nneglecting 3D statuses. To address these issues, we first introduce a reverse\nfilming behavior estimation technique. It optimizes camera trajectories by\nleveraging NeRF as a differentiable renderer and refining SMPL tracks. We then\nintroduce a cinematic transfer pipeline that is able to transfer various shot\ntypes to a new 2D video or a 3D virtual environment. 
The incorporation of 3D\nengine workflow enables superior rendering and control abilities, which also\nachieves a higher rating in the user study.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis", "Vision systems and graphics integration"], "authors_list": ["Xuekun Jiang", "Anyi Rao", "Jingbo Wang", "Dahua Lin", "Bo Dai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Human-Computer Interaction", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f20f"}, "filepath": "data/2402.17587.png", "tags": [], "_media_type": "image", "_rand": 0.9996549537479315, "arXiv_link": "https://arxiv.org/abs/2402.17587", "other_link": "", "title": "Instance-aware Exploration-Verification-Exploitation for Instance ImageGoal Navigation", "abstract": "As a new embodied vision task, Instance ImageGoal Navigation (IIN) aims to\nnavigate to a specified object depicted by a goal image in an unexplored\nenvironment.\n The main challenge of this task lies in identifying the target object from\ndifferent viewpoints while rejecting similar distractors.\n Existing ImageGoal Navigation methods usually adopt the simple\nExploration-Exploitation framework and ignore the identification of specific\ninstance during navigation.\n In this work, we propose to imitate the human behaviour of ``getting closer\nto confirm\" when distinguishing objects from a distance.\n Specifically, we design a new modular navigation framework named\nInstance-aware Exploration-Verification-Exploitation (IEVE) for instance-level\nimage goal navigation.\n Our method allows for active switching among the exploration, verification,\nand exploitation actions, thereby facilitating the agent in making reasonable\ndecisions under different situations.\n On the challenging HabitatMatterport 3D semantic (HM3D-SEM) dataset, our\nmethod surpasses previous state-of-the-art work, with a classical segmentation\nmodel (0.684 vs. 0.561 success) or a robust model (0.702 vs. 0.561 success)", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Xiaohan Lei", "Min Wang", "Wengang Zhou", "Li Li", "Houqiang Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f210"}, "filepath": "data/2403.01325.png", "tags": [], "_media_type": "image", "_rand": 0.9996749300376188, "arXiv_link": "https://arxiv.org/abs/2403.01325", "other_link": "https://github.com/Freedomcls/NeRF-VPT}.", "title": "TextNeRF: A Novel Scene-Text Image Synthesis Method based on Neural Radiance Fields", "abstract": "Neural Radiance Fields (NeRF) have garnered remarkable success in novel view\nsynthesis. Nonetheless, the task of generating high-quality images for novel\nviews persists as a critical challenge. While the existing efforts have\nexhibited commendable progress, capturing intricate details, enhancing\ntextures, and achieving superior Peak Signal-to-Noise Ratio (PSNR) metrics\nwarrant further focused attention and advancement. In this work, we propose\nNeRF-VPT, an innovative method for novel view synthesis to address these\nchallenges. 
Our proposed NeRF-VPT employs a cascading view prompt tuning\nparadigm, wherein RGB information gained from preceding rendering outcomes\nserves as instructive visual prompts for subsequent rendering stages, with the\naspiration that the prior knowledge embedded in the prompts can facilitate the\ngradual enhancement of rendered image quality. NeRF-VPT only requires sampling\nRGB data from previous stage renderings as priors at each training stage,\nwithout relying on extra guidance or complex techniques. Thus, our NeRF-VPT is\nplug-and-play and can be readily integrated into existing methods. By\nconducting comparative analyses of our NeRF-VPT against several NeRF-based\napproaches on demanding real-scene benchmarks, such as Realistic Synthetic 360,\nReal Forward-Facing, Replica dataset, and a user-captured dataset, we\nsubstantiate that our NeRF-VPT significantly elevates baseline performance and\nproficiently generates more high-quality novel view images than all the\ncompared state-of-the-art methods. Furthermore, the cascading learning of\nNeRF-VPT introduces adaptability to scenarios with sparse inputs, resulting in\na significant enhancement of accuracy for sparse-view novel view synthesis. The\nsource code and dataset are available at\n\\url{https://github.com/Freedomcls/NeRF-VPT}.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Jialei Cui", "Jianwei Du", "Wenzhuo Liu", "Zhouhui Lian"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f211"}, "filepath": "data/2404.06913.png", "tags": [], "_media_type": "image", "_rand": 0.9992677012157024, "arXiv_link": "https://arxiv.org/abs/2404.06913", "other_link": "", "title": "Sparse Global Matching for Video Frame Interpolation with Large Motion", "abstract": "Large motion poses a critical challenge in Video Frame Interpolation (VFI)\ntask. Existing methods are often constrained by limited receptive fields,\nresulting in sub-optimal performance when handling scenarios with large motion.\nIn this paper, we introduce a new pipeline for VFI, which can effectively\nintegrate global-level information to alleviate issues associated with large\nmotion. Specifically, we first estimate a pair of initial intermediate flows\nusing a high-resolution feature map for extracting local details. Then, we\nincorporate a sparse global matching branch to compensate for flow estimation,\nwhich consists of identifying flaws in initial flows and generating sparse flow\ncompensation with a global receptive field. Finally, we adaptively merge the\ninitial flow estimation with global flow compensation, yielding a more accurate\nintermediate flow. To evaluate the effectiveness of our method in handling\nlarge motion, we carefully curate a more challenging subset from commonly used\nbenchmarks. 
Our method demonstrates the state-of-the-art performance on these\nVFI subsets with large motion.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Chunxu Liu", "Guozhen Zhang", "Rui Zhao", "Limin Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f212"}, "filepath": "data/2405.08322.png", "tags": [], "_media_type": "image", "_rand": 0.9991505856906789, "arXiv_link": "https://arxiv.org/abs/2405.08322", "other_link": "https://github.com/ddsediri/StraightPCF.", "title": "StraightPCF: Straight Point Cloud Filtering", "abstract": "Point cloud filtering is a fundamental 3D vision task, which aims to remove\nnoise while recovering the underlying clean surfaces. State-of-the-art methods\nremove noise by moving noisy points along stochastic trajectories to the clean\nsurfaces. These methods often require regularization within the training\nobjective and/or during post-processing, to ensure fidelity. In this paper, we\nintroduce StraightPCF, a new deep learning based method for point cloud\nfiltering. It works by moving noisy points along straight paths, thus reducing\ndiscretization errors while ensuring faster convergence to the clean surfaces.\nWe model noisy patches as intermediate states between high noise patch variants\nand their clean counterparts, and design the VelocityModule to infer a constant\nflow velocity from the former to the latter. This constant flow leads to\nstraight filtering trajectories. In addition, we introduce a DistanceModule\nthat scales the straight trajectory using an estimated distance scalar to\nattain convergence near the clean surface. Our network is lightweight and only\nhas $\\sim530K$ parameters, being 17% of IterativePFN (a most recent point cloud\nfiltering network). Extensive experiments on both synthetic and real-world data\nshow our method achieves state-of-the-art results. Our method also demonstrates\nnice distributions of filtered points without the need for regularization. The\nimplementation code can be found at: https://github.com/ddsediri/StraightPCF.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Dasith de Silva Edirimuni", "Xuequan Lu", "Gang Li", "Lei Wei", "Antonio Robles-Kelly", "Hongdong Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f213"}, "filepath": "data/2403.14497.png", "tags": [], "_media_type": "image", "_rand": 0.9999022149653323, "arXiv_link": "https://arxiv.org/abs/2403.14497", "other_link": "", "title": "MULDE: Multiscale Log-Density Estimation via Denoising Score Matching for Video Anomaly Detection", "abstract": "We propose a novel approach to video anomaly detection: we treat feature\nvectors extracted from videos as realizations of a random variable with a fixed\ndistribution and model this distribution with a neural network. This lets us\nestimate the likelihood of test videos and detect video anomalies by\nthresholding the likelihood estimates. We train our video anomaly detector\nusing a modification of denoising score matching, a method that injects\ntraining data with noise to facilitate modeling its distribution. 
To eliminate\nhyperparameter selection, we model the distribution of noisy video features\nacross a range of noise levels and introduce a regularizer that tends to align\nthe models for different levels of noise. At test time, we combine anomaly\nindications at multiple noise scales with a Gaussian mixture model. Running our\nvideo anomaly detector induces minimal delays as inference requires merely\nextracting the features and forward-propagating them through a shallow neural\nnetwork and a Gaussian mixture model. Our experiments on five popular video\nanomaly detection benchmarks demonstrate state-of-the-art performance, both in\nthe object-centric and in the frame-centric setup.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Jakub Micorek", "Horst Possegger", "Dominik Narnhofer", "Horst Bischof", "Mateusz Kozinski"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f214"}, "filepath": "data/2404.09401.png", "tags": [], "_media_type": "image", "_rand": 0.9995587334030277, "arXiv_link": "https://arxiv.org/abs/2404.09401", "other_link": "", "title": "Watermark-embedded Adversarial Examples for Copyright Protection against Diffusion Models", "abstract": "Diffusion Models (DMs) have shown remarkable capabilities in various\nimage-generation tasks. However, there are growing concerns that DMs could be\nused to imitate unauthorized creations and thus raise copyright issues. To\naddress this issue, we propose a novel framework that embeds personal\nwatermarks in the generation of adversarial examples. Such examples can force\nDMs to generate images with visible watermarks and prevent DMs from imitating\nunauthorized images. We construct a generator based on conditional adversarial\nnetworks and design three losses (adversarial loss, GAN loss, and perturbation\nloss) to generate adversarial examples that have subtle perturbation but can\neffectively attack DMs to prevent copyright violations. Training a generator\nfor a personal watermark by our method only requires 5-10 samples within 2-3\nminutes, and once the generator is trained, it can generate adversarial\nexamples with that watermark significantly fast (0.2s per image). We conduct\nextensive experiments in various conditional image-generation scenarios.\nCompared to existing methods that generate images with chaotic textures, our\nmethod adds visible watermarks on the generated images, which is a more\nstraightforward way to indicate copyright violations. We also observe that our\nadversarial examples exhibit good transferability across unknown generative\nmodels. Therefore, this work provides a simple yet powerful way to protect\ncopyright from DM-based imitation.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Peifei Zhu", "Tsubasa Takahashi", "Hirokatsu Kataoka"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f215"}, "filepath": "data/2308.08843.png", "tags": [], "_media_type": "image", "_rand": 0.9992482277226711, "arXiv_link": "https://arxiv.org/abs/2308.08843", "other_link": "", "title": "Dr. 
Bokeh: DiffeRentiable Occlusion-aware Bokeh Rendering", "abstract": "Bokeh is widely used in photography to draw attention to the subject while\neffectively isolating distractions in the background. Computational methods\nsimulate bokeh effects without relying on a physical camera lens. However, in\nthe realm of digital bokeh synthesis, the two main challenges for bokeh\nsynthesis are color bleeding and partial occlusion at object boundaries. Our\nprimary goal is to overcome these two major challenges using physics principles\nthat define bokeh formation. To achieve this, we propose a novel and accurate\nfiltering-based bokeh rendering equation and a physically-based occlusion-aware\nbokeh renderer, dubbed Dr.Bokeh, which addresses the aforementioned challenges\nduring the rendering stage without the need of post-processing or data-driven\napproaches. Our rendering algorithm first preprocesses the input RGBD to obtain\na layered scene representation. Dr.Bokeh then takes the layered representation\nand user-defined lens parameters to render photo-realistic lens blur. By\nsoftening non-differentiable operations, we make Dr.Bokeh differentiable such\nthat it can be plugged into a machine-learning framework. We perform\nquantitative and qualitative evaluations on synthetic and real-world images to\nvalidate the effectiveness of the rendering quality and the differentiability\nof our method. We show Dr.Bokeh not only outperforms state-of-the-art bokeh\nrendering algorithms in terms of photo-realism but also improves the depth\nquality from depth-from-defocus.", "keywords": ["Image and video generation and manipulation", "Low-level vision"], "authors_list": ["Yichen Sheng", "Zixun Yu", "Lu Ling", "Zhiwen Cao", "Xuaner Zhang", "Xin Lu", "Ke Xian", "Haiting Lin", "Bedrich Benes"], "category_name": "Graphics", "all_categories": ["Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f216"}, "filepath": "data/2403.19517.png", "tags": [], "_media_type": "image", "_rand": 0.999737609555458, "arXiv_link": "https://arxiv.org/abs/2403.19517", "other_link": "", "title": "XScale-NVS: Cross-Scale Novel View Synthesis with Hash Featurized Manifold", "abstract": "We propose XScale-NVS for high-fidelity cross-scale novel view synthesis of\nreal-world large-scale scenes. Existing representations based on explicit\nsurface suffer from discretization resolution or UV distortion, while implicit\nvolumetric representations lack scalability for large scenes due to the\ndispersed weight distribution and surface ambiguity. In light of the above\nchallenges, we introduce hash featurized manifold, a novel hash-based\nfeaturization coupled with a deferred neural rendering framework. This approach\nfully unlocks the expressivity of the representation by explicitly\nconcentrating the hash entries on the 2D manifold, thus effectively\nrepresenting highly detailed contents independent of the discretization\nresolution. We also introduce a novel dataset, namely GigaNVS, to benchmark\ncross-scale, high-resolution novel view synthesis of realworld large-scale\nscenes. Our method significantly outperforms competing baselines on various\nreal-world scenes, yielding an average LPIPS that is 40% lower than prior\nstate-of-the-art on the challenging GigaNVS benchmark. 
Please see our project\npage at: xscalenvs.github.io.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Guangyu Wang", "Jinzhi Zhang", "Fan Wang", "Ruqi Huang", "Lu Fang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f217"}, "filepath": "data/2401.02460.png", "tags": [], "_media_type": "image", "_rand": 0.9990084975634649, "arXiv_link": "https://arxiv.org/abs/2401.02460", "other_link": "https://github.com/cvl-umass/AdaptCLIPZS", "title": "Improved Zero-Shot Classification by Adapting VLMs with Text Descriptions", "abstract": "The zero-shot performance of existing vision-language models (VLMs) such as\nCLIP is limited by the availability of large-scale, aligned image and text\ndatasets in specific domains. In this work, we leverage two complementary\nsources of information -- descriptions of categories generated by large\nlanguage models (LLMs) and abundant, fine-grained image classification datasets\n-- to improve the zero-shot classification performance of VLMs across\nfine-grained domains. On the technical side, we develop methods to train VLMs\nwith this \"bag-level\" image-text supervision. We find that simply using these\nattributes at test-time does not improve performance, but our training\nstrategy, for example, on the iNaturalist dataset, leads to an average\nimprovement of 4-5% in zero-shot classification accuracy for novel categories\nof birds and flowers. Similar improvements are observed in domains where a\nsubset of the categories was used to fine-tune the model. By prompting LLMs in\nvarious ways, we generate descriptions that capture visual appearance, habitat,\nand geographic regions and pair them with existing attributes such as the\ntaxonomic structure of the categories. We systematically evaluate their ability\nto improve zero-shot categorization in natural domains. Our findings suggest\nthat geographic priors can be just as effective and are complementary to visual\nappearance. Our method also outperforms prior work on prompt-based tuning of\nVLMs. We release the benchmark, consisting of 14 datasets at\nhttps://github.com/cvl-umass/AdaptCLIPZS , which will contribute to future\nresearch in zero-shot recognition.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Oindrila Saha", "Grant Horn", "Subhransu Maji"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f218"}, "filepath": "data/2401.01448.png", "tags": [], "_media_type": "image", "_rand": 0.9998822969338855, "arXiv_link": "https://arxiv.org/abs/2401.01448", "other_link": "", "title": "Contrastive Learning for DeepFake Classification and Localization via Multi-Label Ranking", "abstract": "Multi-label image classification presents a challenging task in many domains,\nincluding computer vision and medical imaging. Recent advancements have\nintroduced graph-based and transformer-based methods to improve performance and\ncapture label dependencies. However, these methods often include complex\nmodules that entail heavy computation and lack interpretability. In this paper,\nwe propose Probabilistic Multi-label Contrastive Learning (ProbMCL), a novel\nframework to address these challenges in multi-label image classification\ntasks. 
Our simple yet effective approach employs supervised contrastive\nlearning, in which samples that share enough labels with an anchor image based\non a decision threshold are introduced as a positive set. This structure\ncaptures label dependencies by pulling positive pair embeddings together and\npushing away negative samples that fall below the threshold. We enhance\nrepresentation learning by incorporating a mixture density network into\ncontrastive learning and generating Gaussian mixture distributions to explore\nthe epistemic uncertainty of the feature encoder. We validate the effectiveness\nof our framework through experimentation with datasets from the computer vision\nand medical imaging domains. Our method outperforms the existing\nstate-of-the-art methods while achieving a low computational footprint on both\ndatasets. Visualization analyses also demonstrate that ProbMCL-learned\nclassifiers maintain a meaningful semantic topology.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Cheng-Yao Hong", "Yen-Chi Hsu", "Tyng-Luh Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f219"}, "filepath": "data/2312.02528.png", "tags": [], "_media_type": "image", "_rand": 0.9998313703147151, "arXiv_link": "https://arxiv.org/abs/2312.02528", "other_link": "https://github.com/Xiaoqi-Zhao-DLUT/X-ray-PBD}{X-ray", "title": "Towards Automatic Power Battery Detection: New Challenge, Benchmark Dataset and Baseline", "abstract": "We conduct a comprehensive study on a new task named power battery detection\n(PBD), which aims to localize the dense cathode and anode plates endpoints from\nX-ray images to evaluate the quality of power batteries. Existing manufacturers\nusually rely on human eye observation to complete PBD, which makes it difficult\nto balance the accuracy and efficiency of detection. To address this issue and\ndrive more attention into this meaningful task, we first elaborately collect a\ndataset, called X-ray PBD, which has $1,500$ diverse X-ray images selected from\nthousands of power batteries of $5$ manufacturers, with $7$ different visual\ninterference. Then, we propose a novel segmentation-based solution for PBD,\ntermed multi-dimensional collaborative network (MDCNet). With the help of line\nand counting predictors, the representation of the point segmentation branch\ncan be improved at both semantic and detail aspects.Besides, we design an\neffective distance-adaptive mask generation strategy, which can alleviate the\nvisual challenge caused by the inconsistent distribution density of plates to\nprovide MDCNet with stable supervision. 
Without any bells and whistles, our\nsegmentation-based MDCNet consistently outperforms various other corner\ndetection, crowd counting and general/tiny object detection-based solutions,\nmaking it a strong baseline that can help facilitate future research in PBD.\nFinally, we share some potential difficulties and works for future researches.\nThe source code and datasets will be publicly available at\n\\href{https://github.com/Xiaoqi-Zhao-DLUT/X-ray-PBD}{X-ray PBD}.", "keywords": [], "authors_list": ["Xiaoqi Zhao", "Youwei Pang", "Zhenyu Chen", "Qian Yu", "Lihe Zhang", "Hanqi Liu", "Jiaming Zuo", "Huchuan Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f21a"}, "filepath": "data/2402.04356.png", "tags": [], "_media_type": "image", "_rand": 0.9990029700748091, "arXiv_link": "https://arxiv.org/abs/2402.04356", "other_link": "", "title": "Bidirectional Autoregessive Diffusion Model for Dance Generation", "abstract": "Dance serves as a powerful medium for expressing human emotions, but the\nlifelike generation of dance is still a considerable challenge. Recently,\ndiffusion models have showcased remarkable generative abilities across various\ndomains. They hold promise for human motion generation due to their adaptable\nmany-to-many nature. Nonetheless, current diffusion-based motion generation\nmodels often create entire motion sequences directly and unidirectionally,\nlacking focus on the motion with local and bidirectional enhancement. When\nchoreographing high-quality dance movements, people need to take into account\nnot only the musical context but also the nearby music-aligned dance motions.\nTo authentically capture human behavior, we propose a Bidirectional\nAutoregressive Diffusion Model (BADM) for music-to-dance generation, where a\nbidirectional encoder is built to enforce that the generated dance is\nharmonious in both the forward and backward directions. To make the generated\ndance motion smoother, a local information decoder is built for local motion\nenhancement. The proposed framework is able to generate new motions based on\nthe input conditions and nearby motions, which foresees individual motion\nslices iteratively and consolidates all predictions. 
To further refine the\nsynchronicity between the generated dance and the beat, the beat information is\nincorporated as an input to generate better music-aligned dance movements.\nExperimental results demonstrate that the proposed model achieves\nstate-of-the-art performance compared to existing unidirectional approaches on\nthe prominent benchmark for music-to-dance generation.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis", "Deep learning architectures and techniques"], "authors_list": ["Canyu Zhang", "Youbao Tang", "NING Zhang", "Ruei-Sung Lin", "Mei Han", "Jing Xiao", "Song Wang"], "category_name": "Sound", "all_categories": ["Sound", "Computer Vision and Pattern Recognition", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f21b"}, "filepath": "data/2404.19250.png", "tags": [], "_media_type": "image", "_rand": 0.9990522213241045, "arXiv_link": "https://arxiv.org/abs/2404.19250", "other_link": "", "title": "Enhancing Intrinsic Features for Debiasing via Investigating Class-Discerning Common Attributes in Bias-Contrastive Pair", "abstract": "In the image classification task, deep neural networks frequently rely on\nbias attributes that are spuriously correlated with a target class in the\npresence of dataset bias, resulting in degraded performance when applied to\ndata without bias attributes. The task of debiasing aims to compel classifiers\nto learn intrinsic attributes that inherently define a target class rather than\nfocusing on bias attributes. While recent approaches mainly focus on\nemphasizing the learning of data samples without bias attributes (i.e.,\nbias-conflicting samples) compared to samples with bias attributes (i.e.,\nbias-aligned samples), they fall short of directly guiding models where to\nfocus for learning intrinsic features. To address this limitation, this paper\nproposes a method that provides the model with explicit spatial guidance that\nindicates the region of intrinsic features. We first identify the intrinsic\nfeatures by investigating the class-discerning common features between a\nbias-aligned (BA) sample and a bias-conflicting (BC) sample (i.e.,\nbias-contrastive pair). Next, we enhance the intrinsic features in the BA\nsample that are relatively under-exploited for prediction compared to the BC\nsample. To construct the bias-contrastive pair without using bias information,\nwe introduce a bias-negative score that distinguishes BC samples from BA\nsamples employing a biased model. The experiments demonstrate that our method\nachieves state-of-the-art performance on synthetic and real-world datasets with\nvarious levels of bias severity.", "keywords": [], "authors_list": ["Jeonghoon Park", "Chaeyeon Chung", "Jaegul Choo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f21c"}, "filepath": "data/2312.10998v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990701398041936, "arXiv_link": "https://arxiv.org/abs/2312.10998v1", "other_link": "", "title": "ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation", "abstract": "Image deblurring aims to remove undesired blurs from an image captured in a\ndynamic scene. Much research has been dedicated to improving deblurring\nperformance through model architectural designs. 
However, there is little work\non data augmentation for image deblurring. Since continuous motion causes\nblurred artifacts during image exposure, we aspire to develop a groundbreaking\nblur augmentation method to generate diverse blurred images by simulating\nmotion trajectories in a continuous space. This paper proposes Implicit\nDiffusion-based reBLurring AUgmentation (ID-Blau), utilizing a sharp image\npaired with a controllable blur condition map to produce a corresponding\nblurred image. We parameterize the blur patterns of a blurred image with their\norientations and magnitudes as a pixel-wise blur condition map to simulate\nmotion trajectories and implicitly represent them in a continuous space. By\nsampling diverse blur conditions, ID-Blau can generate various blurred images\nunseen in the training set. Experimental results demonstrate that ID-Blau can\nproduce realistic blurred images for training and thus significantly improve\nperformance for state-of-the-art deblurring models.", "keywords": [], "authors_list": ["Jia-Hao Wu", "Fu-Jen Tsai", "Yan-Tsung Peng", "Charles Tsai", "Chia-Wen Lin", "Yen-Yu Lin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f21d"}, "filepath": "data/2403.17742.png", "tags": [], "_media_type": "image", "_rand": 0.999021796560886, "arXiv_link": "https://arxiv.org/abs/2403.17742", "other_link": "", "title": "SLICE: Stabilized LIME for Consistent Explanations for Image Classification", "abstract": "We investigate the use of a stratified sampling approach for LIME Image, a\npopular model-agnostic explainable AI method for computer vision tasks, in\norder to reduce the artifacts generated by typical Monte Carlo sampling. Such\nartifacts are due to the undersampling of the dependent variable in the\nsynthetic neighborhood around the image being explained, which may result in\ninadequate explanations due to the impossibility of fitting a linear regressor\non the sampled data. We then highlight a connection with the Shapley theory,\nwhere similar arguments about undersampling and sample relevance were suggested\nin the past. We derive all the formulas and adjustment factors required for an\nunbiased stratified sampling estimator. Experiments show the efficacy of the\nproposed approach.", "keywords": [], "authors_list": ["Revoti Prasad Bora", "Kiran Raja", "Philipp Terh\u00f6rst", "Raymond Veldhuis", "Raghavendra Ramachandra"], "category_name": "Artificial Intelligence", "all_categories": ["Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f21e"}, "filepath": "data/2403.18144.png", "tags": [], "_media_type": "image", "_rand": 0.9992545875889223, "arXiv_link": "https://arxiv.org/abs/2403.18144", "other_link": "", "title": "Leak and Learn: An Attacker's Cookbook to Train Using Leaked Data from Federated Learning", "abstract": "Federated learning is a decentralized learning paradigm introduced to\npreserve privacy of client data. Despite this, prior work has shown that an\nattacker at the server can still reconstruct the private training data using\nonly the client updates. These attacks are known as data reconstruction attacks\nand fall into two major categories: gradient inversion (GI) and linear layer\nleakage attacks (LLL). 
However, despite demonstrating the effectiveness of\nthese attacks in breaching privacy, prior work has not investigated the\nusefulness of the reconstructed data for downstream tasks. In this work, we\nexplore data reconstruction attacks through the lens of training and improving\nmodels with leaked data. We demonstrate the effectiveness of both GI and LLL\nattacks in maliciously training models using the leaked data more accurately\nthan a benign federated learning strategy. Counter-intuitively, this bump in\ntraining quality can occur despite limited reconstruction quality or a small\ntotal number of leaked images. Finally, we show the limitations of these\nattacks for downstream training, individually for GI attacks and for LLL\nattacks.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Joshua C. Zhao", "Ahaan Dabholkar", "Atul Sharma", "Saurabh Bagchi"], "category_name": "Cryptography and Security", "all_categories": ["Cryptography and Security", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f21f"}, "filepath": "data/2403.05924v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997369106217346, "arXiv_link": "https://arxiv.org/html/2403.05924v1", "other_link": "", "title": "Beyond Seen Primitive Concepts and Attribute-Object Compositional Learning", "abstract": "Attribute and object (A-O) disentanglement is a fundamental and critical\nproblem for Compositional Zero-shot Learning (CZSL), whose aim is to recognize\nnovel A-O compositions based on foregone knowledge. Existing methods based on\ndisentangled representation learning lose sight of the contextual dependency\nbetween the A-O primitive pairs. Inspired by this, we propose a novel A-O\ndisentangled framework for CZSL, namely Class-specified Cascaded Network\n(CSCNet). The key insight is to firstly classify one primitive and then\nspecifies the predicted class as a priori for guiding another primitive\nrecognition in a cascaded fashion. To this end, CSCNet constructs\nAttribute-to-Object and Object-to-Attribute cascaded branches, in addition to a\ncomposition branch modeling the two primitives as a whole. Notably, we devise a\nparametric classifier (ParamCls) to improve the matching between visual and\nsemantic embeddings. By improving the A-O disentanglement, our framework\nachieves superior results than previous competitive methods.", "keywords": [], "authors_list": ["Nirat Saini", "Khoi Pham", "Abhinav Shrivastava"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f220"}, "filepath": "data/2403.01807.png", "tags": [], "_media_type": "image", "_rand": 0.9992972889609599, "arXiv_link": "https://arxiv.org/abs/2403.01807", "other_link": "", "title": "ViewDiff: 3D-Consistent Image Generation with Text-To-Image Models", "abstract": "3D asset generation is getting massive amounts of attention, inspired by the\nrecent success of text-guided 2D content creation. Existing text-to-3D methods\nuse pretrained text-to-image diffusion models in an optimization problem or\nfine-tune them on synthetic data, which often results in non-photorealistic 3D\nobjects without backgrounds. In this paper, we present a method that leverages\npretrained text-to-image models as a prior, and learn to generate multi-view\nimages in a single denoising process from real-world data. 
Concretely, we\npropose to integrate 3D volume-rendering and cross-frame-attention layers into\neach block of the existing U-Net network of the text-to-image model. Moreover,\nwe design an autoregressive generation that renders more 3D-consistent images\nat any viewpoint. We train our model on real-world datasets of objects and\nshowcase its capabilities to generate instances with a variety of high-quality\nshapes and textures in authentic surroundings. Compared to the existing\nmethods, the results generated by our method are consistent, and have favorable\nvisual quality (-30% FID, -37% KID).", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Lukas H\u00f6llein", "Alja\u017e Bo\u017ei\u010d", "Norman M\u00fcller", "David Novotny", "Hung-Yu Tseng", "Christian Richardt", "Michael Zollhoefer", "Matthias Nie\u00dfner"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f221"}, "filepath": "data/2308.14900.png", "tags": [], "_media_type": "image", "_rand": 0.9996115969320677, "arXiv_link": "https://arxiv.org/abs/2308.14900", "other_link": "", "title": "FACT: Frame-Action Cross-Attention Temporal Modeling for Efficient Action Segmentation", "abstract": "We address the task of supervised action segmentation which aims to partition\na video into non-overlapping segments, each representing a different action.\nRecent works apply transformers to perform temporal modeling at the\nframe-level, which suffer from high computational cost and cannot well capture\naction dependencies over long temporal horizons. To address these issues, we\npropose an efficient BI-level Temporal modeling (BIT) framework that learns\nexplicit action tokens to represent action segments, in parallel performs\ntemporal modeling on frame and action levels, while maintaining a low\ncomputational cost. Our model contains (i) a frame branch that uses convolution\nto learn frame-level relationships, (ii) an action branch that uses transformer\nto learn action-level dependencies with a small set of action tokens and (iii)\ncross-attentions to allow communication between the two branches. We apply and\nextend a set-prediction objective to allow each action token to represent one\nor multiple action segments, thus can avoid learning a large number of tokens\nover long videos with many segments. Thanks to the design of our action branch,\nwe can also seamlessly leverage textual transcripts of videos (when available)\nto help action segmentation by using them to initialize the action tokens. 
We\nevaluate our model on four video datasets (two egocentric and two third-person)\nfor action segmentation with and without transcripts, showing that BIT\nsignificantly improves the state-of-the-art accuracy with much lower\ncomputational cost (30 times faster) compared to existing transformer-based\nmethods.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Zijia Lu", "Ehsan Elhamifar"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f222"}, "filepath": "data/2402.18192v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997136116285559, "arXiv_link": "https://arxiv.org/html/2402.18192v1", "other_link": "https://github.com/eezkni/FDL", "title": "Misalignment-Robust Frequency Distribution Loss for Image Transformation", "abstract": "This paper aims to address a common challenge in deep learning-based image\ntransformation methods, such as image enhancement and super-resolution, which\nheavily rely on precisely aligned paired datasets with pixel-level alignments.\nHowever, creating precisely aligned paired images presents significant\nchallenges and hinders the advancement of methods trained on such data. To\novercome this challenge, this paper introduces a novel and simple Frequency\nDistribution Loss (FDL) for computing distribution distance within the\nfrequency domain. Specifically, we transform image features into the frequency\ndomain using Discrete Fourier Transformation (DFT). Subsequently, frequency\ncomponents (amplitude and phase) are processed separately to form the FDL loss\nfunction. Our method is empirically proven effective as a training constraint\ndue to the thoughtful utilization of global information in the frequency\ndomain. Extensive experimental evaluations, focusing on image enhancement and\nsuper-resolution tasks, demonstrate that FDL outperforms existing\nmisalignment-robust loss functions. Furthermore, we explore the potential of\nour FDL for image style transfer that relies solely on completely misaligned\ndata. Our code is available at: https://github.com/eezkni/FDL", "keywords": ["Low-level vision", "Image and video generation and manipulation"], "authors_list": ["Zhangkai Ni", "Juncheng Wu", "Zian Wang", "Wenhan Yang", "Hanli Wang", "Lin Ma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f223"}, "filepath": "data/2403.18447.png", "tags": [], "_media_type": "image", "_rand": 0.9994704549396748, "arXiv_link": "https://arxiv.org/abs/2403.18447", "other_link": "https://github.com/inhwanbae/LMTrajectory", "title": "Can Language Beat Numerical Regression? Language-Based Multimodal Trajectory Prediction", "abstract": "Language models have demonstrated impressive ability in context understanding\nand generative performance. Inspired by the recent success of language\nfoundation models, in this paper, we propose LMTraj (Language-based Multimodal\nTrajectory predictor), which recasts the trajectory prediction task into a sort\nof question-answering problem. Departing from traditional numerical regression\nmodels, which treat the trajectory coordinate sequence as continuous signals,\nwe consider them as discrete signals like text prompts. 
Specially, we first\ntransform an input space for the trajectory coordinate into the natural\nlanguage space. Here, the entire time-series trajectories of pedestrians are\nconverted into a text prompt, and scene images are described as text\ninformation through image captioning. The transformed numerical and image data\nare then wrapped into the question-answering template for use in a language\nmodel. Next, to guide the language model in understanding and reasoning\nhigh-level knowledge, such as scene context and social relationships between\npedestrians, we introduce an auxiliary multi-task question and answering. We\nthen train a numerical tokenizer with the prompt data. We encourage the\ntokenizer to separate the integer and decimal parts well, and leverage it to\ncapture correlations between the consecutive numbers in the language model.\nLastly, we train the language model using the numerical tokenizer and all of\nthe question-answer prompts. Here, we propose a beam-search-based most-likely\nprediction and a temperature-based multimodal prediction to implement both\ndeterministic and stochastic inferences. Applying our LMTraj, we show that the\nlanguage-based model can be a powerful pedestrian trajectory predictor, and\noutperforms existing numerical-based predictor methods. Code is publicly\navailable at https://github.com/inhwanbae/LMTrajectory .", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Inhwan Bae", "Junoh Lee", "Hae-Gon Jeon"], "category_name": "Computation and Language", "all_categories": ["Computation and Language", "Computer Vision and Pattern Recognition", "Machine Learning", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f224"}, "filepath": "data/2312.02196.png", "tags": [], "_media_type": "image", "_rand": 0.9995907872788763, "arXiv_link": "https://arxiv.org/abs/2312.02196", "other_link": "https://github.com/dx118/dynaip}}.", "title": "Dynamic Inertial Poser (DynaIP): Part-Based Motion Dynamics Learning for Enhanced Human Pose Estimation with Sparse Inertial Sensors", "abstract": "This paper introduces a novel human pose estimation approach using sparse\ninertial sensors, addressing the shortcomings of previous methods reliant on\nsynthetic data. It leverages a diverse array of real inertial motion capture\ndata from different skeleton formats to improve motion diversity and model\ngeneralization. This method features two innovative components: a\npseudo-velocity regression model for dynamic motion capture with inertial\nsensors, and a part-based model dividing the body and sensor data into three\nregions, each focusing on their unique characteristics. The approach\ndemonstrates superior performance over state-of-the-art models across five\npublic datasets, notably reducing pose error by 19\\% on the DIP-IMU dataset,\nthus representing a significant improvement in inertial sensor-based human pose\nestimation. 
Our codes are available at {\\url{https://github.com/dx118/dynaip}}.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Yu Zhang", "Songpengcheng Xia", "Lei Chu", "Jiarui Yang", "Qi Wu", "Ling Pei"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f225"}, "filepath": "data/2403.04368v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996654496222276, "arXiv_link": "https://arxiv.org/abs/2403.04368v1", "other_link": "https://github.com/jqtangust/FilmRemoval}.", "title": "Learning to Remove Wrinkled Transparent Film with Polarized Prior", "abstract": "In this paper, we study a new problem, Film Removal (FR), which attempts to\nremove the interference of wrinkled transparent films and reconstruct the\noriginal information under films for industrial recognition systems. We first\nphysically model the imaging of industrial materials covered by the film.\nConsidering the specular highlight from the film can be effectively recorded by\nthe polarized camera, we build a practical dataset with polarization\ninformation containing paired data with and without transparent film. We aim to\nremove interference from the film (specular highlights and other degradations)\nwith an end-to-end framework. To locate the specular highlight, we use an angle\nestimation network to optimize the polarization angle with the minimized\nspecular highlight. The image with minimized specular highlight is set as a\nprior for supporting the reconstruction network. Based on the prior and the\npolarized images, the reconstruction network can decouple all degradations from\nthe film. Extensive experiments show that our framework achieves SOTA\nperformance in both image reconstruction and industrial downstream tasks. Our\ncode will be released at \\url{https://github.com/jqtangust/FilmRemoval}.", "keywords": ["Low-level vision"], "authors_list": ["Jiaqi Tang", "RUIZHENG WU", "Xiaogang Xu", "Sixing Hu", "Ying-Cong Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f226"}, "filepath": "data/2312.12722.png", "tags": [], "_media_type": "image", "_rand": 0.9998420553858756, "arXiv_link": "https://arxiv.org/abs/2312.12722", "other_link": "https://github.com/scok30/vit-cil.", "title": "FCS: Feature Calibration and Separation for Non-Exemplar Class Incremental Learning", "abstract": "Non-exemplar class incremental learning aims to learn both the new and old\ntasks without accessing any training data from the past. This strict\nrestriction enlarges the difficulty of alleviating catastrophic forgetting\nsince all techniques can only be applied to current task data. Considering this\nchallenge, we propose a novel framework of fine-grained knowledge selection and\nrestoration. The conventional knowledge distillation-based methods place too\nstrict constraints on the network parameters and features to prevent\nforgetting, which limits the training of new tasks. To loose this constraint,\nwe proposed a novel fine-grained selective patch-level distillation to\nadaptively balance plasticity and stability. Some task-agnostic patches can be\nused to preserve the decision boundary of the old task. 
While some patches\ncontaining the important foreground are favorable for learning the new task.\n Moreover, we employ a task-agnostic mechanism to generate more realistic\nprototypes of old tasks with the current task sample for reducing classifier\nbias for fine-grained knowledge restoration. Extensive experiments on CIFAR100,\nTinyImageNet and ImageNet-Subset demonstrate the effectiveness of our method.\nCode is available at https://github.com/scok30/vit-cil.", "keywords": [], "authors_list": ["Qiwei Li", "Yuxin Peng", "Jiahuan Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f227"}, "filepath": "data/2403.19926.png", "tags": [], "_media_type": "image", "_rand": 0.9992174582445545, "arXiv_link": "https://arxiv.org/abs/2403.19926", "other_link": "https://github.com/zgspose/DSTA.", "title": "Video-Based Human Pose Regression via Decoupled Space-Time Aggregation", "abstract": "By leveraging temporal dependency in video sequences, multi-frame human pose\nestimation algorithms have demonstrated remarkable results in complicated\nsituations, such as occlusion, motion blur, and video defocus. These algorithms\nare predominantly based on heatmaps, resulting in high computation and storage\nrequirements per frame, which limits their flexibility and real-time\napplication in video scenarios, particularly on edge devices. In this paper, we\ndevelop an efficient and effective video-based human pose regression method,\nwhich bypasses intermediate representations such as heatmaps and instead\ndirectly maps the input to the output joint coordinates. Despite the inherent\nspatial correlation among adjacent joints of the human pose, the temporal\ntrajectory of each individual joint exhibits relative independence. In light of\nthis, we propose a novel Decoupled Space-Time Aggregation network (DSTA) to\nseparately capture the spatial contexts between adjacent joints and the\ntemporal cues of each individual joint, thereby avoiding the conflation of\nspatiotemporal dimensions. Concretely, DSTA learns a dedicated feature token\nfor each joint to facilitate the modeling of their spatiotemporal dependencies.\nWith the proposed joint-wise local-awareness attention mechanism, our method is\ncapable of efficiently and flexibly utilizing the spatial dependency of\nadjacent joints and the temporal dependency of each joint itself. Extensive\nexperiments demonstrate the superiority of our method. 
Compared to previous\nregression-based single-frame human pose estimation methods, DSTA significantly\nenhances performance, achieving an 8.9 mAP improvement on PoseTrack2017.\nFurthermore, our approach either surpasses or is on par with the\nstate-of-the-art heatmap-based multi-frame human pose estimation methods.\nProject page: https://github.com/zgspose/DSTA.", "keywords": ["Efficient and scalable vision", "Biometrics and human analysis"], "authors_list": ["Jijie He", "Wenwu Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f228"}, "filepath": "data/2311.07044.png", "tags": [], "_media_type": "image", "_rand": 0.9999304051357535, "arXiv_link": "https://arxiv.org/abs/2311.07044", "other_link": "https://ustc3dv.github.io/L0-Sampler/", "title": "$L_0$-Sampler: An $L_{0}$ Model Guided Volume Sampling for NeRF", "abstract": "Since being proposed, Neural Radiance Fields (NeRF) have achieved great\nsuccess in related tasks, mainly adopting the hierarchical volume sampling\n(HVS) strategy for volume rendering. However, the HVS of NeRF approximates\ndistributions using piecewise constant functions, which provides a relatively\nrough estimation. Based on the observation that a well-trained weight function\n$w(t)$ and the $L_0$ distance between points and the surface have very high\nsimilarity, we propose $L_0$-Sampler by incorporating the $L_0$ model into\n$w(t)$ to guide the sampling process. Specifically, we propose to use piecewise\nexponential functions rather than piecewise constant functions for\ninterpolation, which can not only approximate quasi-$L_0$ weight distributions\nalong rays quite well but also can be easily implemented with few lines of code\nwithout additional computational burden. Stable performance improvements can be\nachieved by applying $L_0$-Sampler to NeRF and its related tasks like 3D\nreconstruction. Code is available at https://ustc3dv.github.io/L0-Sampler/ .", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Liangchen Li", "Juyong Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f229"}, "filepath": "data/2303.06346v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994298572761721, "arXiv_link": "https://arxiv.org/html/2303.06346v2", "other_link": "https://github.com/sitzikbs/3dincaction.", "title": "3DInAction: Understanding Human Actions in 3D Point Clouds", "abstract": "We propose a novel method for 3D point cloud action recognition.\nUnderstanding human actions in RGB videos has been widely studied in recent\nyears, however, its 3D point cloud counterpart remains under-explored. This is\nmostly due to the inherent limitation of the point cloud data modality -- lack\nof structure, permutation invariance, and varying number of points -- which\nmakes it difficult to learn a spatio-temporal representation. To address this\nlimitation, we propose the 3DinAction pipeline that first estimates patches\nmoving in time (t-patches) as a key building block, alongside a hierarchical\narchitecture that learns an informative spatio-temporal representation. We show\nthat our method achieves improved performance on existing datasets, including\nDFAUST and IKEA ASM. 
Code is publicly available at\nhttps://github.com/sitzikbs/3dincaction.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yizhak Ben-Shabat", "Oren Shrout", "Stephen Gould"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f22a"}, "filepath": "data/2403.06258.png", "tags": [], "_media_type": "image", "_rand": 0.9995680600359892, "arXiv_link": "https://arxiv.org/abs/2403.06258", "other_link": "", "title": "Poly Kernel Inception Network for Remote Sensing Detection", "abstract": "Object detection in remote sensing images (RSIs) often suffers from several\nincreasing challenges, including the large variation in object scales and the\ndiverse-ranging context. Prior methods tried to address these challenges by\nexpanding the spatial receptive field of the backbone, either through\nlarge-kernel convolution or dilated convolution. However, the former typically\nintroduces considerable background noise, while the latter risks generating\noverly sparse feature representations. In this paper, we introduce the Poly\nKernel Inception Network (PKINet) to handle the above challenges. PKINet\nemploys multi-scale convolution kernels without dilation to extract object\nfeatures of varying scales and capture local context. In addition, a Context\nAnchor Attention (CAA) module is introduced in parallel to capture long-range\ncontextual information. These two components work jointly to advance the\nperformance of PKINet on four challenging remote sensing detection benchmarks,\nnamely DOTA-v1.0, DOTA-v1.5, HRSC2016, and DIOR-R.", "keywords": ["Remote sensing and photogrammetry"], "authors_list": ["Xinhao Cai", "Qiuxia Lai", "Yuwei Wang", "Wenguan Wang", "Zeren Sun", "Yazhou Yao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f22b"}, "filepath": "data/2405.16996.png", "tags": [], "_media_type": "image", "_rand": 0.9994251372198381, "arXiv_link": "https://arxiv.org/abs/2405.16996", "other_link": "", "title": "Mitigating Noisy Correspondence by Geometrical Structure Consistency Learning", "abstract": "Noisy correspondence that refers to mismatches in cross-modal data pairs, is\nprevalent on human-annotated or web-crawled datasets. Prior approaches to\nleverage such data mainly consider the application of uni-modal noisy label\nlearning without amending the impact on both cross-modal and intra-modal\ngeometrical structures in multimodal learning. Actually, we find that both\nstructures are effective to discriminate noisy correspondence through\nstructural differences when being well-established. Inspired by this\nobservation, we introduce a Geometrical Structure Consistency (GSC) method to\ninfer the true correspondence. Specifically, GSC ensures the preservation of\ngeometrical structures within and between modalities, allowing for the accurate\ndiscrimination of noisy samples based on structural differences. Utilizing\nthese inferred true correspondence labels, GSC refines the learning of\ngeometrical structures by filtering out the noisy samples. 
Experiments across\nfour cross-modal datasets confirm that GSC effectively identifies noisy samples\nand significantly outperforms the current leading methods.", "keywords": [], "authors_list": ["Zihua Zhao", "Mengxi Chen", "Tianjie Dai", "Jiangchao Yao", "Bo Han", "Ya Zhang", "Yanfeng Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f22c"}, "filepath": "data/2404.04785.png", "tags": [], "_media_type": "image", "_rand": 0.999561562522773, "arXiv_link": "https://arxiv.org/abs/2404.04785", "other_link": "", "title": "Rethinking Diffusion Model for Multi-Contrast MRI Super-Resolution", "abstract": "Recently, diffusion models (DM) have been applied in magnetic resonance\nimaging (MRI) super-resolution (SR) reconstruction, exhibiting impressive\nperformance, especially with regard to detailed reconstruction. However, the\ncurrent DM-based SR reconstruction methods still face the following issues: (1)\nThey require a large number of iterations to reconstruct the final image, which\nis inefficient and consumes a significant amount of computational resources.\n(2) The results reconstructed by these methods are often misaligned with the\nreal high-resolution images, leading to remarkable distortion in the\nreconstructed MR images. To address the aforementioned issues, we propose an\nefficient diffusion model for multi-contrast MRI SR, named as DiffMSR.\nSpecifically, we apply DM in a highly compact low-dimensional latent space to\ngenerate prior knowledge with high-frequency detail information. The highly\ncompact latent space ensures that DM requires only a few simple iterations to\nproduce accurate prior knowledge. In addition, we design the Prior-Guide Large\nWindow Transformer (PLWformer) as the decoder for DM, which can extend the\nreceptive field while fully utilizing the prior knowledge generated by DM to\nensure that the reconstructed MR image remains undistorted. Extensive\nexperiments on public and clinical datasets demonstrate that our DiffMSR\noutperforms state-of-the-art methods.", "keywords": ["Efficient and scalable vision", "Low-level vision"], "authors_list": ["Guangyuan Li", "Chen Rao", "Juncheng Mo", "Zhanjie Zhang", "Wei Xing", "Lei Zhao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f22d"}, "filepath": "data/2402.15865.png", "tags": [], "_media_type": "image", "_rand": 0.9991165461912563, "arXiv_link": "https://arxiv.org/abs/2402.15865", "other_link": "https://github.com/LiPang/HIRDiff.", "title": "HIR-Diff: Unsupervised Hyperspectral Image Restoration Via Improved Diffusion Models", "abstract": "Hyperspectral image (HSI) restoration aims at recovering clean images from\ndegraded observations and plays a vital role in downstream tasks. Existing\nmodel-based methods have limitations in accurately modeling the complex image\ncharacteristics with handcraft priors, and deep learning-based methods suffer\nfrom poor generalization ability. 
To alleviate these issues, this paper\nproposes an unsupervised HSI restoration framework with pre-trained diffusion\nmodel (HIR-Diff), which restores the clean HSIs from the product of two\nlow-rank components, i.e., the reduced image and the coefficient matrix.\nSpecifically, the reduced image, which has a low spectral dimension, lies in\nthe image field and can be inferred from our improved diffusion model where a\nnew guidance function with total variation (TV) prior is designed to ensure\nthat the reduced image can be well sampled. The coefficient matrix can be\neffectively pre-estimated based on singular value decomposition (SVD) and\nrank-revealing QR (RRQR) factorization. Furthermore, a novel exponential noise\nschedule is proposed to accelerate the restoration process (about 5$\\times$\nacceleration for denoising) with little performance decrease. Extensive\nexperimental results validate the superiority of our method in both performance\nand speed on a variety of HSI restoration tasks, including HSI denoising, noisy\nHSI super-resolution, and noisy HSI inpainting. The code is available at\nhttps://github.com/LiPang/HIRDiff.", "keywords": ["Low-level vision"], "authors_list": ["Li Pang", "Xiangyu Rui", "Long Cui", "Hongzhong Wang", "Deyu Meng", "Xiangyong Cao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f22e"}, "filepath": "data/2309.16975.png", "tags": [], "_media_type": "image", "_rand": 0.9996808118038591, "arXiv_link": "https://arxiv.org/abs/2309.16975", "other_link": "", "title": "Zero-Shot Structure-Preserving Diffusion Model for High Dynamic Range Tone Mapping", "abstract": "One of the key challenges in tone mapping is to preserve the perceptual\nquality of high dynamic range (HDR) images when mapping them to standard\ndynamic range (SDR) displays. Traditional tone mapping operators (TMOs)\ncompress the luminance of HDR images without considering the surround and\ndisplay conditions emanating into suboptimal results. Current research\naddresses this challenge by incorporating perceptual color appearance\nattributes. In this work, we propose a TMO (TMOz) that leverages CIECAM16\nperceptual attributes, i.e., brightness, colorfulness, and hue. TMOz accounts\nfor the effects of both the surround and the display conditions to achieve more\noptimal colorfulness reproduction. The perceptual brightness is compressed, and\nthe perceptual color scales, i.e., colorfulness and hue are derived from HDR\nimages by employing CIECAM16 color adaptation equations. A psychophysical\nexperiment was conducted to automate the brightness compression parameter. The\nmodel employs fully automatic and adaptive approach, obviating the requirement\nfor manual parameter selection. TMOz was evaluated in terms of contrast,\ncolorfulness and overall image quality. 
The objective and subjective evaluation\nmethods revealed that the proposed model outperformed the state-of-the-art\nTMOs.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Ruoxi Zhu", "Shusong Xu", "Peiye Liu", "Sicheng Li", "Yanheng Lu", "Dimin Niu", "Zihao Liu", "Zihao Meng", "Li Zhiyong", "Xinhua Chen", "Yibo Fan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f22f"}, "filepath": "data/2404.14044.png", "tags": [], "_media_type": "image", "_rand": 0.999927846261042, "arXiv_link": "https://arxiv.org/abs/2404.14044", "other_link": "https://jiahao-ma.github.io/hashpoint/.", "title": "HashPoint: Accelerated Point Searching and Sampling for Neural Rendering", "abstract": "In this paper, we address the problem of efficient point searching and\nsampling for volume neural rendering. Within this realm, two typical approaches\nare employed: rasterization and ray tracing. The rasterization-based methods\nenable real-time rendering at the cost of increased memory and lower fidelity.\nIn contrast, the ray-tracing-based methods yield superior quality but demand\nlonger rendering time. We solve this problem by our HashPoint method combining\nthese two strategies, leveraging rasterization for efficient point searching\nand sampling, and ray marching for rendering. Our method optimizes point\nsearching by rasterizing points within the camera's view, organizing them in a\nhash table, and facilitating rapid searches. Notably, we accelerate the\nrendering process by adaptive sampling on the primary surface encountered by\nthe ray. Our approach yields substantial speed-up for a range of\nstate-of-the-art ray-tracing-based methods, maintaining equivalent or superior\naccuracy across synthetic and real test datasets. The code will be available at\nhttps://jiahao-ma.github.io/hashpoint/.", "keywords": ["Deep learning architectures and techniques", "Computational imaging and physics-based vision"], "authors_list": ["Jiahao Ma", "Miaomiao Liu", "David Ahmedt-Aristizabal", "Chuong Nguyen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f230"}, "filepath": "data/2310.08956.png", "tags": [], "_media_type": "image", "_rand": 0.9994043359643842, "arXiv_link": "https://arxiv.org/abs/2310.08956", "other_link": "https://npucvr.github.io/LRRU/.", "title": "Improving Depth Completion via Depth Feature Upsampling", "abstract": "Existing deep learning-based depth completion methods generally employ\nmassive stacked layers to predict the dense depth map from sparse input data.\nAlthough such approaches greatly advance this task, their accompanied huge\ncomputational complexity hinders their practical applications. To accomplish\ndepth completion more efficiently, we propose a novel lightweight deep network\nframework, the Long-short Range Recurrent Updating (LRRU) network. Without\nlearning complex feature representations, LRRU first roughly fills the sparse\ninput to obtain an initial dense depth map, and then iteratively updates it\nthrough learned spatially-variant kernels. 
Our iterative update process is\ncontent-adaptive and highly flexible, where the kernel weights are learned by\njointly considering the guidance RGB images and the depth map to be updated,\nand large-to-small kernel scopes are dynamically adjusted to capture\nlong-to-short range dependencies. Our initial depth map has coarse but complete\nscene depth information, which helps relieve the burden of directly regressing\nthe dense depth from sparse ones, while our proposed method can effectively\nrefine it to an accurate depth map with less learnable parameters and inference\ntime. Experimental results demonstrate that our proposed LRRU variants achieve\nstate-of-the-art performance across different parameter regimes. In particular,\nthe LRRU-Base model outperforms competing approaches on the NYUv2 dataset, and\nranks 1st on the KITTI depth completion benchmark at the time of submission.\nProject page: https://npucvr.github.io/LRRU/.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yufei Wang", "Ge Zhang", "Shaoqian Wang", "Bo Li", "Qi Liu", "Le Hui", "Yuchao Dai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f231"}, "filepath": "data/2403.01773.png", "tags": [], "_media_type": "image", "_rand": 0.9993933231982193, "arXiv_link": "https://arxiv.org/abs/2403.01773", "other_link": "", "title": "Improving Out-of-Distribution Generalization in Graphs via Hierarchical Semantic Environments", "abstract": "Out-of-distribution (OOD) generalization in the graph domain is challenging\ndue to complex distribution shifts and a lack of environmental contexts. Recent\nmethods attempt to enhance graph OOD generalization by generating flat\nenvironments. However, such flat environments come with inherent limitations to\ncapture more complex data distributions. Considering the DrugOOD dataset, which\ncontains diverse training environments (e.g., scaffold, size, etc.), flat\ncontexts cannot sufficiently address its high heterogeneity. Thus, a new\nchallenge is posed to generate more semantically enriched environments to\nenhance graph invariant learning for handling distribution shifts. In this\npaper, we propose a novel approach to generate hierarchical semantic\nenvironments for each graph. Firstly, given an input graph, we explicitly\nextract variant subgraphs from the input graph to generate proxy predictions on\nlocal environments. Then, stochastic attention mechanisms are employed to\nre-extract the subgraphs for regenerating global environments in a hierarchical\nmanner. In addition, we introduce a new learning objective that guides our\nmodel to learn the diversity of environments within the same hierarchy while\nmaintaining consistency across different hierarchies. This approach enables our\nmodel to consider the relationships between environments and facilitates robust\ngraph invariant learning. Extensive experiments on real-world graph data have\ndemonstrated the effectiveness of our framework. 
Particularly, in the\nchallenging dataset DrugOOD, our method achieves up to 1.29% and 2.83%\nimprovement over the best baselines on IC50 and EC50 prediction tasks,\nrespectively.", "keywords": [], "authors_list": ["Yinhua Piao", "Sangseon Lee", "Yijingxiu Lu", "Sun Kim"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f232"}, "filepath": "data/2403.05105.png", "tags": [], "_media_type": "image", "_rand": 0.9995999019676787, "arXiv_link": "https://arxiv.org/abs/2403.05105", "other_link": "https://github.com/hhc1997/L2RM.", "title": "Learning to Rematch Mismatched Pairs for Robust Cross-Modal Retrieval", "abstract": "Collecting well-matched multimedia datasets is crucial for training\ncross-modal retrieval models. However, in real-world scenarios, massive\nmultimodal data are harvested from the Internet, which inevitably contains\nPartially Mismatched Pairs (PMPs). Undoubtedly, such semantical irrelevant data\nwill remarkably harm the cross-modal retrieval performance. Previous efforts\ntend to mitigate this problem by estimating a soft correspondence to\ndown-weight the contribution of PMPs. In this paper, we aim to address this\nchallenge from a new perspective: the potential semantic similarity among\nunpaired samples makes it possible to excavate useful knowledge from mismatched\npairs. To achieve this, we propose L2RM, a general framework based on Optimal\nTransport (OT) that learns to rematch mismatched pairs. In detail, L2RM aims to\ngenerate refined alignments by seeking a minimal-cost transport plan across\ndifferent modalities. To formalize the rematching idea in OT, first, we propose\na self-supervised cost function that automatically learns from explicit\nsimilarity-cost mapping relation. Second, we present to model a partial OT\nproblem while restricting the transport among false positives to further boost\nrefined alignments. Extensive experiments on three benchmarks demonstrate our\nL2RM significantly improves the robustness against PMPs for existing models.\nThe code is available at https://github.com/hhc1997/L2RM.", "keywords": [], "authors_list": ["Haochen Han", "Qinghua Zheng", "Guang Dai", "Minnan Luo", "Jingdong Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f233"}, "filepath": "data/2401.04244.png", "tags": [], "_media_type": "image", "_rand": 0.9990507381525618, "arXiv_link": "https://arxiv.org/abs/2401.04244", "other_link": "https://xg416.github.io/DATUM.", "title": "Spatio-Temporal Turbulence Mitigation: A Translational Perspective", "abstract": "Recovering images distorted by atmospheric turbulence is a challenging\ninverse problem due to the stochastic nature of turbulence. Although numerous\nturbulence mitigation (TM) algorithms have been proposed, their efficiency and\ngeneralization to real-world dynamic scenarios remain severely limited.\nBuilding upon the intuitions of classical TM algorithms, we present the Deep\nAtmospheric TUrbulence Mitigation network (DATUM). DATUM aims to overcome major\nchallenges when transitioning from classical to deep learning approaches. 
By\ncarefully integrating the merits of classical multi-frame TM methods into a\ndeep network structure, we demonstrate that DATUM can efficiently perform\nlong-range temporal aggregation using a recurrent fashion, while deformable\nattention and temporal-channel attention seamlessly facilitate pixel\nregistration and lucky imaging. With additional supervision, tilt and blur\ndegradation can be jointly mitigated. These inductive biases empower DATUM to\nsignificantly outperform existing methods while delivering a tenfold increase\nin processing speed. A large-scale training dataset, ATSyn, is presented as a\nco-invention to enable generalization in real turbulence. Our code and datasets\nare available at https://xg416.github.io/DATUM.", "keywords": ["Efficient and scalable vision", "Low-level vision", "Image and video generation and manipulation"], "authors_list": ["Xingguang Zhang", "Nicholas M Chimitt", "Yiheng Chi", "Zhiyuan Mao", "Stanley H. Chan"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f234"}, "filepath": "data/2402.15509.png", "tags": [], "_media_type": "image", "_rand": 0.9998075708931952, "arXiv_link": "https://arxiv.org/abs/2402.15509", "other_link": "", "title": "Seamless Human Motion Composition with Blended Positional Encodings", "abstract": "Conditional human motion generation is an important topic with many\napplications in virtual reality, gaming, and robotics. While prior works have\nfocused on generating motion guided by text, music, or scenes, these typically\nresult in isolated motions confined to short durations. Instead, we address the\ngeneration of long, continuous sequences guided by a series of varying textual\ndescriptions. In this context, we introduce FlowMDM, the first diffusion-based\nmodel that generates seamless Human Motion Compositions (HMC) without any\npostprocessing or redundant denoising steps. For this, we introduce the Blended\nPositional Encodings, a technique that leverages both absolute and relative\npositional encodings in the denoising chain. More specifically, global motion\ncoherence is recovered at the absolute stage, whereas smooth and realistic\ntransitions are built at the relative stage. As a result, we achieve\nstate-of-the-art results in terms of accuracy, realism, and smoothness on the\nBabel and HumanML3D datasets. 
FlowMDM excels when trained with only a single\ndescription per motion sequence thanks to its Pose-Centric Cross-ATtention,\nwhich makes it robust against varying text descriptions at inference time.\nFinally, to address the limitations of existing HMC metrics, we propose two new\nmetrics: the Peak Jerk and the Area Under the Jerk, to detect abrupt\ntransitions.", "keywords": ["Biometrics and human analysis", "Multimodal models and vision-language models"], "authors_list": ["German Barquero", "Sergio Escalera", "Cristina Palmero"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f235"}, "filepath": "data/2403.19104.png", "tags": [], "_media_type": "image", "_rand": 0.999119244825919, "arXiv_link": "https://arxiv.org/abs/2403.19104", "other_link": "https://song-jingyu.github.io/CRKD.", "title": "CRKD: Enhanced Camera-Radar Object Detection with Cross-modality Knowledge Distillation", "abstract": "In the field of 3D object detection for autonomous driving, LiDAR-Camera (LC)\nfusion is the top-performing sensor configuration. Still, LiDAR is relatively\nhigh cost, which hinders adoption of this technology for consumer automobiles.\nAlternatively, camera and radar are commonly deployed on vehicles already on\nthe road today, but performance of Camera-Radar (CR) fusion falls behind LC\nfusion. In this work, we propose Camera-Radar Knowledge Distillation (CRKD) to\nbridge the performance gap between LC and CR detectors with a novel\ncross-modality KD framework. We use the Bird's-Eye-View (BEV) representation as\nthe shared feature space to enable effective knowledge distillation. To\naccommodate the unique cross-modality KD path, we propose four distillation\nlosses to help the student learn crucial features from the teacher model. We\npresent extensive evaluations on the nuScenes dataset to demonstrate the\neffectiveness of the proposed CRKD framework. The project page for CRKD is\nhttps://song-jingyu.github.io/CRKD.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Lingjun Zhao", "Jingyu Song", "Katherine Skinner"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f236"}, "filepath": "data/2403.10145.png", "tags": [], "_media_type": "image", "_rand": 0.9994143947164563, "arXiv_link": "https://arxiv.org/abs/2403.10145", "other_link": "https://github.com/AIR-THU/DAIR-RCooper.", "title": "RCooper: A Real-world Large-scale Dataset for Roadside Cooperative Perception", "abstract": "The value of roadside perception, which could extend the boundaries of\nautonomous driving and traffic management, has gradually become more prominent\nand acknowledged in recent years. However, existing roadside perception\napproaches only focus on the single-infrastructure sensor system, which cannot\nrealize a comprehensive understanding of a traffic area because of the limited\nsensing range and blind spots. Orienting high-quality roadside perception, we\nneed Roadside Cooperative Perception (RCooper) to achieve practical\narea-coverage roadside perception for restricted traffic areas. Rcooper has its\nown domain-specific challenges, but further exploration is hindered due to the\nlack of datasets. 
We hence release the first real-world, large-scale RCooper\ndataset to bloom the research on practical roadside cooperative perception,\nincluding detection and tracking. The manually annotated dataset comprises 50k\nimages and 30k point clouds, including two representative traffic scenes (i.e.,\nintersection and corridor). The constructed benchmarks prove the effectiveness\nof roadside cooperation perception and demonstrate the direction of further\nresearch. Codes and dataset can be accessed at:\nhttps://github.com/AIR-THU/DAIR-RCooper.", "keywords": [], "authors_list": ["Ruiyang Hao", "Siqi Fan", "Yingru Dai", "Zhenlin Zhang", "Chenxi Li", "YuntianWang", "Haibao Yu", "Wenxian Yang", "Jirui Yuan", "Zaiqing Nie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f237"}, "filepath": "data/2404.01882.png", "tags": [], "_media_type": "image", "_rand": 0.999891362235459, "arXiv_link": "https://arxiv.org/abs/2404.01882", "other_link": "https://github.com/Peterande/SAST", "title": "Scene Adaptive Sparse Transformer for Event-based Object Detection", "abstract": "While recent Transformer-based approaches have shown impressive performances\non event-based object detection tasks, their high computational costs still\ndiminish the low power consumption advantage of event cameras. Image-based\nworks attempt to reduce these costs by introducing sparse Transformers.\nHowever, they display inadequate sparsity and adaptability when applied to\nevent-based object detection, since these approaches cannot balance the fine\ngranularity of token-level sparsification and the efficiency of window-based\nTransformers, leading to reduced performance and efficiency. Furthermore, they\nlack scene-specific sparsity optimization, resulting in information loss and a\nlower recall rate. To overcome these limitations, we propose the Scene Adaptive\nSparse Transformer (SAST). SAST enables window-token co-sparsification,\nsignificantly enhancing fault tolerance and reducing computational overhead.\nLeveraging the innovative scoring and selection modules, along with the Masked\nSparse Window Self-Attention, SAST showcases remarkable scene-aware\nadaptability: It focuses only on important objects and dynamically optimizes\nsparsity level according to scene complexity, maintaining a remarkable balance\nbetween performance and computational cost. The evaluation results show that\nSAST outperforms all other dense and sparse networks in both performance and\nefficiency on two large-scale event-based object detection datasets (1Mpx and\nGen1). Code: https://github.com/Peterande/SAST", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yansong Peng", "Li Hebei", "Yueyi Zhang", "Xiaoyan Sun", "Feng Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f238"}, "filepath": "data/2403.11113.png", "tags": [], "_media_type": "image", "_rand": 0.9996363547247962, "arXiv_link": "https://arxiv.org/abs/2403.11113", "other_link": "https://github.com/wdttt/LocoTrans.", "title": "Local-consistent Transformation Learning for Rotation-invariant Point Cloud Analysis", "abstract": "Rotation invariance is an important requirement for point shape analysis. 
To\nachieve this, current state-of-the-art methods attempt to construct the local\nrotation-invariant representation through learning or defining the local\nreference frame (LRF). Although efficient, these LRF-based methods suffer from\nperturbation of local geometric relations, resulting in suboptimal local\nrotation invariance. To alleviate this issue, we propose a Local-consistent\nTransformation (LocoTrans) learning strategy. Specifically, we first construct\nthe local-consistent reference frame (LCRF) by considering the symmetry of the\ntwo axes in LRF. In comparison with previous LRFs, our LCRF is able to preserve\nlocal geometric relationships better through performing local-consistent\ntransformation. However, as the consistency only exists in local regions, the\nrelative pose information is still lost in the intermediate layers of the\nnetwork. We mitigate such a relative pose issue by developing a relative pose\nrecovery (RPR) module. RPR aims to restore the relative pose between adjacent\ntransformed patches. Equipped with LCRF and RPR, our LocoTrans is capable of\nlearning local-consistent transformation and preserving local geometry, which\nbenefits rotation invariance learning. Competitive performance under arbitrary\nrotations on both shape classification and part segmentation tasks and\nablations can demonstrate the effectiveness of our method. Code will be\navailable publicly at https://github.com/wdttt/LocoTrans.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yiyang Chen", "Lunhao Duan", "Shanshan Zhao", "Changxing Ding", "Dacheng Tao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f239"}, "filepath": "data/2402.17251.png", "tags": [], "_media_type": "image", "_rand": 0.9994064053804487, "arXiv_link": "https://arxiv.org/abs/2402.17251", "other_link": "", "title": "Context-based and Diversity-driven Specificity in Compositional Zero-Shot Learning", "abstract": "Compositional Zero-Shot Learning (CZSL) aims to recognize unseen\nattribute-object pairs based on a limited set of observed examples. Current\nCZSL methodologies, despite their advancements, tend to neglect the distinct\nspecificity levels present in attributes. For instance, given images of sliced\nstrawberries, they may fail to prioritize `Sliced-Strawberry' over a generic\n`Red-Strawberry', despite the former being more informative. They also suffer\nfrom ballooning search space when shifting from Close-World (CW) to Open-World\n(OW) CZSL. To address the issues, we introduce the Context-based and\nDiversity-driven Specificity learning framework for CZSL (CDS-CZSL). Our\nframework evaluates the specificity of attributes by considering the diversity\nof objects they apply to and their related context. This novel approach allows\nfor more accurate predictions by emphasizing specific attribute-object pairs\nand improves composition filtering in OW-CZSL. 
We conduct experiments in both\nCW and OW scenarios, and our model achieves state-of-the-art results across\nthree datasets.", "keywords": [], "authors_list": ["Yun Li", "Zhe Liu", "Hang Chen", "Lina Yao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f23a"}, "filepath": "data/2404.04808.png", "tags": [], "_media_type": "image", "_rand": 0.9995991944821183, "arXiv_link": "https://arxiv.org/abs/2404.04808", "other_link": "https://dqiaole.github.io/MemFlow/.", "title": "MemFlow: Optical Flow Estimation and Prediction with Memory", "abstract": "Optical flow is a classical task that is important to the vision community.\nClassical optical flow estimation uses two frames as input, whilst some recent\nmethods consider multiple frames to explicitly model long-range information.\nThe former ones limit their ability to fully leverage temporal coherence along\nthe video sequence; and the latter ones incur heavy computational overhead,\ntypically not possible for real-time flow estimation. Some multi-frame-based\napproaches even necessitate unseen future frames for current estimation,\ncompromising real-time applicability in safety-critical scenarios. To this end,\nwe present MemFlow, a real-time method for optical flow estimation and\nprediction with memory. Our method enables memory read-out and update modules\nfor aggregating historical motion information in real-time. Furthermore, we\nintegrate resolution-adaptive re-scaling to accommodate diverse video\nresolutions. Besides, our approach seamlessly extends to the future prediction\nof optical flow based on past observations. Leveraging effective historical\nmotion aggregation, our method outperforms VideoFlow with fewer parameters and\nfaster inference speed on Sintel and KITTI-15 datasets in terms of\ngeneralization performance. At the time of submission, MemFlow also leads in\nperformance on the 1080p Spring dataset. Codes and models will be available at:\nhttps://dqiaole.github.io/MemFlow/.", "keywords": ["Low-level vision", "Deep learning architectures and techniques"], "authors_list": ["Qiaole Dong", "Yanwei Fu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f23b"}, "filepath": "data/2306.05688.png", "tags": [], "_media_type": "image", "_rand": 0.9992428628897732, "arXiv_link": "https://arxiv.org/abs/2306.05688", "other_link": "https://github.com/ZAX130/SmileCode.", "title": "H-ViT: A Hierarchical Vision Transformer for Deformable Image Registration", "abstract": "The Transformer structures have been widely used in computer vision and have\nrecently made an impact in the area of medical image registration. However, the\nuse of Transformer in most registration networks is straightforward. These\nnetworks often merely use the attention mechanism to boost the feature learning\nas the segmentation networks do, but do not sufficiently design to be adapted\nfor the registration task. In this paper, we propose a novel motion\ndecomposition Transformer (ModeT) to explicitly model multiple motion\nmodalities by fully exploiting the intrinsic capability of the Transformer\nstructure for deformation estimation. 
The proposed ModeT naturally transforms\nthe multi-head neighborhood attention relationship into the multi-coordinate\nrelationship to model multiple motion modes. Then the competitive weighting\nmodule (CWM) fuses multiple deformation sub-fields to generate the resulting\ndeformation field. Extensive experiments on two public brain magnetic resonance\nimaging (MRI) datasets show that our method outperforms current\nstate-of-the-art registration networks and Transformers, demonstrating the\npotential of our ModeT for the challenging non-rigid deformation estimation\nproblem. The benchmarks and our code are publicly available at\nhttps://github.com/ZAX130/SmileCode.", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Morteza Ghahremani", "Mohammad Khateri", "Bailiang Jian", "Benedikt Wiestler", "Ehsan Adeli", "Christian Wachinger"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f23c"}, "filepath": "data/2311.11845.png", "tags": [], "_media_type": "image", "_rand": 0.9993596023059265, "arXiv_link": "https://arxiv.org/abs/2311.11845", "other_link": "https://github.com/tatakai1/EVENeRF.", "title": "Entangled View-Epipolar Information Aggregation for Generalizable Neural Radiance Fields", "abstract": "Generalizable NeRF can directly synthesize novel views across new scenes,\neliminating the need for scene-specific retraining in vanilla NeRF. A critical\nenabling factor in these approaches is the extraction of a generalizable 3D\nrepresentation by aggregating source-view features. In this paper, we propose\nan Entangled View-Epipolar Information Aggregation method dubbed EVE-NeRF.\nDifferent from existing methods that consider cross-view and along-epipolar\ninformation independently, EVE-NeRF conducts the view-epipolar feature\naggregation in an entangled manner by injecting the scene-invariant appearance\ncontinuity and geometry consistency priors to the aggregation process. Our\napproach effectively mitigates the potential lack of inherent geometric and\nappearance constraint resulting from one-dimensional interactions, thus further\nboosting the 3D representation generalizablity. EVE-NeRF attains\nstate-of-the-art performance across various evaluation scenarios. Extensive\nexperiments demonstate that, compared to prevailing single-dimensional\naggregation, the entangled network excels in the accuracy of 3D scene geometry\nand appearance reconstruction. Our code is publicly available at\nhttps://github.com/tatakai1/EVENeRF.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Zhiyuan Min", "Yawei Luo", "Wei Yang", "Yuesong Wang", "Yi Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f23d"}, "filepath": "data/2403.20236.png", "tags": [], "_media_type": "image", "_rand": 0.9995661154318488, "arXiv_link": "https://arxiv.org/abs/2403.20236", "other_link": "https://zenodo.org/records/10854201", "title": "Hyperbolic Anomaly Detection", "abstract": "Anomaly detection (AD) aims to identify defective images and localize their\ndefects (if any). 
Ideally, AD models should be able to detect defects over many\nimage classes; without relying on hard-coded class names that can be\nuninformative or inconsistent across datasets; learn without anomaly\nsupervision; and be robust to the long-tailed distributions of real-world\napplications. To address these challenges, we formulate the problem of\nlong-tailed AD by introducing several datasets with different levels of class\nimbalance and metrics for performance evaluation. We then propose a novel\nmethod, LTAD, to detect defects from multiple and long-tailed classes, without\nrelying on dataset class names. LTAD combines AD by reconstruction and semantic\nAD modules. AD by reconstruction is implemented with a transformer-based\nreconstruction module. Semantic AD is implemented with a binary classifier,\nwhich relies on learned pseudo class names and a pretrained foundation model.\nThese modules are learned over two phases. Phase 1 learns the pseudo-class\nnames and a variational autoencoder (VAE) for feature synthesis that augments\nthe training data to combat long-tails. Phase 2 then learns the parameters of\nthe reconstruction and classification modules of LTAD. Extensive experiments\nusing the proposed long-tailed datasets show that LTAD substantially\noutperforms the state-of-the-art methods for most forms of dataset imbalance.\nThe long-tailed dataset split is available at\nhttps://zenodo.org/records/10854201 .", "keywords": [], "authors_list": ["Huimin Li", "Zhentao Chen", "Yunhao Xu", "Junlin Hu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f23e"}, "filepath": "data/2310.14566.png", "tags": [], "_media_type": "image", "_rand": 0.9990152500910386, "arXiv_link": "https://arxiv.org/abs/2310.14566", "other_link": "https://github.com/tianyi-lab/HallusionBench.", "title": "HallusionBench: An Advanced Diagnostic Suite for Entangled Language Hallucination and Visual Illusion in Large Vision-Language Models", "abstract": "We introduce HallusionBench, a comprehensive benchmark designed for the\nevaluation of image-context reasoning. This benchmark presents significant\nchallenges to advanced large visual-language models (LVLMs), such as\nGPT-4V(Vision), Gemini Pro Vision, Claude 3, and LLaVA-1.5, by emphasizing\nnuanced understanding and interpretation of visual data. The benchmark\ncomprises 346 images paired with 1129 questions, all meticulously crafted by\nhuman experts. We introduce a novel structure for these visual questions\ndesigned to establish control groups. This structure enables us to conduct a\nquantitative analysis of the models' response tendencies, logical consistency,\nand various failure modes. In our evaluation on HallusionBench, we benchmarked\n15 different models, highlighting a 31.42% question-pair accuracy achieved by\nthe state-of-the-art GPT-4V. Notably, all other evaluated models achieve\naccuracy below 16%. Moreover, our analysis not only highlights the observed\nfailure modes, including language hallucination and visual illusion, but also\ndeepens an understanding of these pitfalls. Our comprehensive case studies\nwithin HallusionBench shed light on the challenges of hallucination and\nillusion in LVLMs. Based on these insights, we suggest potential pathways for\ntheir future improvement. 
The benchmark and codebase can be accessed at\nhttps://github.com/tianyi-lab/HallusionBench.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Tianrui Guan", "Fuxiao Liu", "Xiyang Wu", "Ruiqi Xian", "Zongxia Li", "Xiaoyu Liu", "Xijun Wang", "Lichang Chen", "Furong Huang", "Yaser Yacoob", "Dinesh Manocha", "Tianyi Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f23f"}, "filepath": "data/2312.17686.png", "tags": [], "_media_type": "image", "_rand": 0.999608706162005, "arXiv_link": "https://arxiv.org/abs/2312.17686", "other_link": "https://github.com/IoannaNti/BMViT}{https://github.com/IoannaNti/BMViT}", "title": "Multiscale Vision Transformers meet Bipartite Matching for efficient single-stage Action Localization", "abstract": "Action Localization is a challenging problem that combines detection and\nrecognition tasks, which are often addressed separately. State-of-the-art\nmethods rely on off-the-shelf bounding box detections pre-computed at high\nresolution, and propose transformer models that focus on the classification\ntask alone. Such two-stage solutions are prohibitive for real-time deployment.\nOn the other hand, single-stage methods target both tasks by devoting part of\nthe network (generally the backbone) to sharing the majority of the workload,\ncompromising performance for speed. These methods build on adding a DETR head\nwith learnable queries that after cross- and self-attention can be sent to\ncorresponding MLPs for detecting a person's bounding box and action. However,\nDETR-like architectures are challenging to train and can incur in big\ncomplexity.\n In this paper, we observe that \\textbf{a straight bipartite matching loss can\nbe applied to the output tokens of a vision transformer}. This results in a\nbackbone + MLP architecture that can do both tasks without the need of an extra\nencoder-decoder head and learnable queries. We show that a single MViTv2-S\narchitecture trained with bipartite matching to perform both tasks surpasses\nthe same MViTv2-S when trained with RoI align on pre-computed bounding boxes.\nWith a careful design of token pooling and the proposed training pipeline, our\nBipartite-Matching Vision Transformer model, \\textbf{BMViT}, achieves +3 mAP on\nAVA2.2. w.r.t. the two-stage MViTv2-S counterpart. Code is available at\n\\href{https://github.com/IoannaNti/BMViT}{https://github.com/IoannaNti/BMViT}", "keywords": ["Efficient and scalable vision"], "authors_list": ["Ioanna Ntinou", "Enrique Sanchez", "Georgios Tzimiropoulos"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f240"}, "filepath": "data/2308.01471.png", "tags": [], "_media_type": "image", "_rand": 0.9998322805392759, "arXiv_link": "https://arxiv.org/abs/2308.01471", "other_link": "https://waabi.ai/research/implicito.", "title": "UnO: Unsupervised Occupancy Fields for Perception and Forecasting", "abstract": "A self-driving vehicle (SDV) must be able to perceive its surroundings and\npredict the future behavior of other traffic participants. 
Existing works\neither perform object detection followed by trajectory forecasting of the\ndetected objects, or predict dense occupancy and flow grids for the whole\nscene. The former poses a safety concern as the number of detections needs to\nbe kept low for efficiency reasons, sacrificing object recall. The latter is\ncomputationally expensive due to the high-dimensionality of the output grid,\nand suffers from the limited receptive field inherent to fully convolutional\nnetworks. Furthermore, both approaches employ many computational resources\npredicting areas or objects that might never be queried by the motion planner.\nThis motivates our unified approach to perception and future prediction that\nimplicitly represents occupancy and flow over time with a single neural\nnetwork. Our method avoids unnecessary computation, as it can be directly\nqueried by the motion planner at continuous spatio-temporal locations.\nMoreover, we design an architecture that overcomes the limited receptive field\nof previous explicit occupancy prediction methods by adding an efficient yet\neffective global attention mechanism. Through extensive experiments in both\nurban and highway settings, we demonstrate that our implicit model outperforms\nthe current state-of-the-art. For more information, visit the project website:\nhttps://waabi.ai/research/implicito.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Ben Agro", "Quinlan Sykora", "Sergio Casas", "Thomas Gilles", "Raquel Urtasun"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f241"}, "filepath": "data/2401.05224.png", "tags": [], "_media_type": "image", "_rand": 0.9992270395146847, "arXiv_link": "https://arxiv.org/abs/2401.05224", "other_link": "", "title": "Do Vision and Language Encoders Represent the World Similarly?", "abstract": "Aligned text-image encoders such as CLIP have become the de facto model for\nvision-language tasks. Furthermore, modality-specific encoders achieve\nimpressive performances in their respective domains. This raises a central\nquestion: does an alignment exist between uni-modal vision and language\nencoders since they fundamentally represent the same physical world? Analyzing\nthe latent spaces structure of vision and language models on image-caption\nbenchmarks using the Centered Kernel Alignment (CKA), we find that the\nrepresentation spaces of unaligned and aligned encoders are semantically\nsimilar. In the absence of statistical similarity in aligned encoders like\nCLIP, we show that a possible matching of unaligned encoders exists without any\ntraining. We frame this as a seeded graph-matching problem exploiting the\nsemantic similarity between graphs and propose two methods - a Fast Quadratic\nAssignment Problem optimization, and a novel localized CKA metric-based\nmatching/retrieval. We demonstrate the effectiveness of this on several\ndownstream tasks including cross-lingual, cross-domain caption matching and\nimage classification. 
Code available at github.com/mayug/0-shot-llm-vision.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Mayug Maniparambil", "Raiymbek Akshulakov", "YASSER ABDELAZIZ DAHOU DJILALI", "Mohamed El Amine Seddik", "Sanath Narayan", "Karttikeya Mangalam", "Noel O'Connor"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f242"}, "filepath": "data/2312.04364v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995322961353661, "arXiv_link": "https://arxiv.org/abs/2312.04364v1", "other_link": "", "title": "DemoCaricature: Democratising Caricature Generation with a Rough Sketch", "abstract": "In this paper, we democratise caricature generation, empowering individuals\nto effortlessly craft personalised caricatures with just a photo and a\nconceptual sketch. Our objective is to strike a delicate balance between\nabstraction and identity, while preserving the creativity and subjectivity\ninherent in a sketch. To achieve this, we present Explicit Rank-1 Model Editing\nalongside single-image personalisation, selectively applying nuanced edits to\ncross-attention layers for a seamless merge of identity and style.\nAdditionally, we propose Random Mask Reconstruction to enhance robustness,\ndirecting the model to focus on distinctive identity and style features.\nCrucially, our aim is not to replace artists but to eliminate accessibility\nbarriers, allowing enthusiasts to engage in the artistry.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Dar-Yen Chen", "Ayan Kumar Bhunia", "Subhadeep Koley", "Aneeshan Sain", "Pinaki Nath Chowdhury", "Yi-Zhe Song"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f243"}, "filepath": "data/2312.04302.png", "tags": [], "_media_type": "image", "_rand": 0.9993073785759473, "arXiv_link": "https://arxiv.org/abs/2312.04302", "other_link": "https://github.com/dvlab-research/Prompt-Highlighter/", "title": "Prompt Highlighter: Interactive Control for Multi-Modal LLMs", "abstract": "This study targets a critical aspect of multi-modal LLMs' (LLMs&VLMs)\ninference: explicit controllable text generation. Multi-modal LLMs empower\nmulti-modality understanding with the capability of semantic generation yet\nbring less explainability and heavier reliance on prompt contents due to their\nautoregressive generative nature. While manipulating prompt formats could\nimprove outputs, designing specific and precise prompts per task can be\nchallenging and ineffective. To tackle this issue, we introduce a novel\ninference method, Prompt Highlighter, which enables users to highlight specific\nprompt spans to interactively control the focus during generation. Motivated by\nthe classifier-free diffusion guidance, we form regular and unconditional\ncontext pairs based on highlighted tokens, demonstrating that the\nautoregressive generation in models can be guided in a classifier-free way.\nNotably, we find that, during inference, guiding the models with highlighted\ntokens through the attention weights leads to more desired outputs. 
Our\napproach is compatible with current LLMs and VLMs, achieving impressive\ncustomized generation results without training. Experiments confirm its\neffectiveness in focusing on input contexts and generating reliable content.\nWithout tuning on LLaVA-v1.5, our method secured 70.7 in the MMBench test and\n1552.5 in MME-perception. The code is available at:\nhttps://github.com/dvlab-research/Prompt-Highlighter/", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Deep learning architectures and techniques"], "authors_list": ["Yuechen Zhang", "Shengju Qian", "Bohao Peng", "Shu Liu", "Jiaya Jia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f244"}, "filepath": "data/2403.18186.png", "tags": [], "_media_type": "image", "_rand": 0.9995624862872926, "arXiv_link": "https://arxiv.org/abs/2403.18186", "other_link": "", "title": "Don't Look into the Dark: Latent Codes for Pluralistic Image Inpainting", "abstract": "We present a method for large-mask pluralistic image inpainting based on the\ngenerative framework of discrete latent codes. Our method learns latent priors,\ndiscretized as tokens, by only performing computations at the visible locations\nof the image. This is realized by a restrictive partial encoder that predicts\nthe token label for each visible block, a bidirectional transformer that infers\nthe missing labels by only looking at these tokens, and a dedicated synthesis\nnetwork that couples the tokens with the partial image priors to generate\ncoherent and pluralistic complete image even under extreme mask settings.\nExperiments on public benchmarks validate our design choices as the proposed\nmethod outperforms strong baselines in both visual quality and diversity\nmetrics.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Haiwei Chen", "Yajie Zhao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f245"}, "filepath": "data/2402.17664.png", "tags": [], "_media_type": "image", "_rand": 0.9990309325829473, "arXiv_link": "https://arxiv.org/abs/2402.17664", "other_link": "https://github.com/realcrane/Bayesian-Differentiable-Physics-for-Cloth-Digitalization", "title": "Bayesian Differentiable Physics for Cloth Digitalization", "abstract": "We propose a new method for cloth digitalization. Deviating from existing\nmethods which learn from data captured under relatively casual settings, we\npropose to learn from data captured in strictly tested measuring protocols, and\nfind plausible physical parameters of the cloths. However, such data is\ncurrently absent, so we first propose a new dataset with accurate cloth\nmeasurements. Further, the data size is considerably smaller than the ones in\ncurrent deep learning, due to the nature of the data capture process. To learn\nfrom small data, we propose a new Bayesian differentiable cloth model to\nestimate the complex material heterogeneity of real cloths. It can provide\nhighly accurate digitalization from very limited data samples. 
Through\nexhaustive evaluation and comparison, we show our method is accurate in cloth\ndigitalization, efficient in learning from limited data samples, and general in\ncapturing material variations. Code and data are available\nhttps://github.com/realcrane/Bayesian-Differentiable-Physics-for-Cloth-Digitalization", "keywords": ["Efficient and scalable vision"], "authors_list": ["Deshan Gong", "Ningtao Mao", "He Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f246"}, "filepath": "data/2312.14494.png", "tags": [], "_media_type": "image", "_rand": 0.9998263409935815, "arXiv_link": "https://arxiv.org/abs/2312.14494", "other_link": "https://github.com/anishmadan23/foundational_fsod", "title": "Few-Shot Object Detection with Foundation Models", "abstract": "Few-shot object detection (FSOD) benchmarks have advanced techniques for\ndetecting new categories with limited annotations. Existing benchmarks\nrepurpose well-established datasets like COCO by partitioning categories into\nbase and novel classes for pre-training and fine-tuning respectively. However,\nthese benchmarks do not reflect how FSOD is deployed in practice. Rather than\nonly pre-training on a small number of base categories, we argue that it is\nmore practical to fine-tune a foundation model (e.g., a vision-language model\n(VLM) pre-trained on web-scale data) for a target domain. Surprisingly, we find\nthat zero-shot inference from VLMs like GroundingDINO significantly outperforms\nthe state-of-the-art (48.3 vs. 33.1 AP) on COCO. However, such zero-shot models\ncan still be misaligned to target concepts of interest. For example, trailers\non the web may be different from trailers in the context of autonomous\nvehicles. In this work, we propose Foundational FSOD, a new benchmark protocol\nthat evaluates detectors pre-trained on any external datasets and fine-tuned on\nK-shots per target class. Further, we note that current FSOD benchmarks are\nactually federated datasets containing exhaustive annotations for each category\non a subset of the data. We leverage this insight to propose simple strategies\nfor fine-tuning VLMs with federated losses. We demonstrate the effectiveness of\nour approach on LVIS and nuImages, improving over prior work by 5.9 AP. Our\ncode is available at https://github.com/anishmadan23/foundational_fsod", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Guangxing Han", "Ser-Nam Lim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f247"}, "filepath": "data/2403.18356.png", "tags": [], "_media_type": "image", "_rand": 0.9997048900366082, "arXiv_link": "https://arxiv.org/abs/2403.18356", "other_link": "https://keyuwu-cs.github.io/MonoHair/.", "title": "MonoHair: High-Fidelity Hair Modeling from a Monocular Video", "abstract": "Undoubtedly, high-fidelity 3D hair is crucial for achieving realism, artistic\nexpression, and immersion in computer graphics. 
While existing 3D hair modeling\nmethods have achieved impressive performance, the challenge of achieving\nhigh-quality hair reconstruction persists: they either require strict capture\nconditions, making practical applications difficult, or heavily rely on learned\nprior data, obscuring fine-grained details in images. To address these\nchallenges, we propose MonoHair,a generic framework to achieve high-fidelity\nhair reconstruction from a monocular video, without specific requirements for\nenvironments. Our approach bifurcates the hair modeling process into two main\nstages: precise exterior reconstruction and interior structure inference. The\nexterior is meticulously crafted using our Patch-based Multi-View Optimization\n(PMVO). This method strategically collects and integrates hair information from\nmultiple views, independent of prior data, to produce a high-fidelity exterior\n3D line map. This map not only captures intricate details but also facilitates\nthe inference of the hair's inner structure. For the interior, we employ a\ndata-driven, multi-view 3D hair reconstruction method. This method utilizes 2D\nstructural renderings derived from the reconstructed exterior, mirroring the\nsynthetic 2D inputs used during training. This alignment effectively bridges\nthe domain gap between our training data and real-world data, thereby enhancing\nthe accuracy and reliability of our interior structure inference. Lastly, we\ngenerate a strand model and resolve the directional ambiguity by our hair\ngrowth algorithm. Our experiments demonstrate that our method exhibits\nrobustness across diverse hairstyles and achieves state-of-the-art performance.\nFor more results, please refer to our project page\nhttps://keyuwu-cs.github.io/MonoHair/.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Keyu Wu", "LINGCHEN YANG", "Zhiyi Kuang", "Yao Feng", "Xutao Han", "Yuefan Shen", "Hongbo Fu", "Kun Zhou", "Youyi Zheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f248"}, "filepath": "data/2404.07292.png", "tags": [], "_media_type": "image", "_rand": 0.9994676176190782, "arXiv_link": "https://arxiv.org/abs/2404.07292", "other_link": "", "title": "Solving Masked Jigsaw Puzzles with Diffusion Transformers", "abstract": "Solving image and video jigsaw puzzles poses the challenging task of\nrearranging image fragments or video frames from unordered sequences to restore\nmeaningful images and video sequences. Existing approaches often hinge on\ndiscriminative models tasked with predicting either the absolute positions of\npuzzle elements or the permutation actions applied to the original data.\nUnfortunately, these methods face limitations in effectively solving puzzles\nwith a large number of elements. In this paper, we propose JPDVT, an innovative\napproach that harnesses diffusion transformers to address this challenge.\nSpecifically, we generate positional information for image patches or video\nframes, conditioned on their underlying visual content. This information is\nthen employed to accurately assemble the puzzle pieces in their correct\npositions, even in scenarios involving missing pieces. 
Our method achieves\nstate-of-the-art performance on several datasets.", "keywords": [], "authors_list": ["Jinyang Liu", "Wondmgezahu Teshome", "Sandesh Ghimire", "Mario Sznaier", "Octavia Camps"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f249"}, "filepath": "data/2204.08563v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993842272916378, "arXiv_link": "https://arxiv.org/html/2204.08563v2", "other_link": "https://github.com/KangLiao929/Cylin-Painting}.", "title": "Shadow-Enlightened Image Outpainting", "abstract": "Image outpainting gains increasing attention since it can generate the\ncomplete scene from a partial view, providing a valuable solution to construct\n{360\\textdegree} panoramic images. As image outpainting suffers from the\nintrinsic issue of unidirectional completion flow, previous methods convert the\noriginal problem into inpainting, which allows a bidirectional flow. However,\nwe find that inpainting has its own limitations and is inferior to outpainting\nin certain situations. The question of how they may be combined for the best of\nboth has as yet remained under-explored. In this paper, we provide a deep\nanalysis of the differences between inpainting and outpainting, which\nessentially depends on how the source pixels contribute to the unknown regions\nunder different spatial arrangements. Motivated by this analysis, we present a\nCylin-Painting framework that involves meaningful collaborations between\ninpainting and outpainting and efficiently fuses the different arrangements,\nwith a view to leveraging their complementary benefits on a seamless cylinder.\nNevertheless, straightforwardly applying the cylinder-style convolution often\ngenerates visually unpleasing results as it discards important positional\ninformation. To address this issue, we further present a learnable positional\nembedding strategy to incorporate the missing component of positional encoding\ninto the cylinder convolution, which significantly improves the panoramic\nresults. It is noted that while developed for image outpainting, the proposed\nalgorithm can be effectively extended to other panoramic vision tasks, such as\nobject detection, depth estimation, and image super-resolution. Code will be\nmade available at \\url{https://github.com/KangLiao929/Cylin-Painting}.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Hang Yu", "Ruilin Li", "Shaorong Xie", "Jiayan Qiu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f24a"}, "filepath": "data/2401.06614.png", "tags": [], "_media_type": "image", "_rand": 0.9994782760529481, "arXiv_link": "https://arxiv.org/abs/2401.06614", "other_link": "https://vveicao.github.io/projects/Motion2VecSets/.", "title": "Motion2VecSets: 4D Latent Vector Set Diffusion for Non-rigid Shape Reconstruction and Tracking", "abstract": "We introduce Motion2VecSets, a 4D diffusion model for dynamic surface\nreconstruction from point cloud sequences. 
While existing state-of-the-art\nmethods have demonstrated success in reconstructing non-rigid objects using\nneural field representations, conventional feed-forward networks encounter\nchallenges with ambiguous observations from noisy, partial, or sparse point\nclouds. To address these challenges, we introduce a diffusion model that\nexplicitly learns the shape and motion distribution of non-rigid objects\nthrough an iterative denoising process of compressed latent representations.\nThe diffusion-based priors enable more plausible and probabilistic\nreconstructions when handling ambiguous inputs. We parameterize 4D dynamics\nwith latent sets instead of using global latent codes. This novel 4D\nrepresentation allows us to learn local shape and deformation patterns, leading\nto more accurate non-linear motion capture and significantly improving\ngeneralizability to unseen motions and identities. For more temporally-coherent\nobject tracking, we synchronously denoise deformation latent sets and exchange\ninformation across multiple frames. To avoid computational overhead, we\ndesigned an interleaved space and time attention block to alternately aggregate\ndeformation latents along spatial and temporal domains. Extensive comparisons\nagainst state-of-the-art methods demonstrate the superiority of our\nMotion2VecSets in 4D reconstruction from various imperfect observations. More\ndetailed information can be found at\nhttps://vveicao.github.io/projects/Motion2VecSets/.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Wei Cao", "Chang Luo", "Biao Zhang", "Matthias Nie\u00dfner", "Jiapeng Tang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f24b"}, "filepath": "data/2311.16420.png", "tags": [], "_media_type": "image", "_rand": 0.9991328005800134, "arXiv_link": "https://arxiv.org/abs/2311.16420", "other_link": "", "title": "Test-Time Linear Out-of-Distribution Detection", "abstract": "Out-of-distribution (OOD) detection is essential for the reliability of ML\nmodels. Most existing methods for OOD detection learn a fixed decision\ncriterion from a given in-distribution dataset and apply it universally to\ndecide if a data point is OOD. Recent work~\\cite{fang2022is} shows that given\nonly in-distribution data, it is impossible to reliably detect OOD data without\nextra assumptions. Motivated by the theoretical result and recent exploration\nof test-time adaptation methods, we propose a Non-Parametric Test Time\n\\textbf{Ada}ptation framework for \\textbf{O}ut-Of-\\textbf{D}istribution\n\\textbf{D}etection (\\abbr). Unlike conventional methods, \\abbr utilizes online\ntest samples for model adaptation during testing, enhancing adaptability to\nchanging data distributions. The framework incorporates detected OOD instances\ninto decision-making, reducing false positive rates, particularly when ID and\nOOD distributions overlap significantly. We demonstrate the effectiveness of\n\\abbr through comprehensive experiments on multiple OOD detection benchmarks,\nextensive empirical studies show that \\abbr significantly improves the\nperformance of OOD detection over state-of-the-art methods. 
Specifically, \\abbr\nreduces the false positive rate (FPR95) by $23.23\\%$ on the CIFAR-10 benchmarks\nand $38\\%$ on the ImageNet-1k benchmarks compared to the advanced methods.\nLastly, we theoretically verify the effectiveness of \\abbr.", "keywords": [], "authors_list": ["Ke Fan", "Tong Liu", "Xingyu Qiu", "Yikai Wang", "Lian Huai", "Zeyu Shangguan", "Shuang Gou", "FENGJIAN LIU", "Yuqian Fu", "Yanwei Fu", "Xingqun Jiang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f24c"}, "filepath": "data/2403.07535.png", "tags": [], "_media_type": "image", "_rand": 0.999737511503806, "arXiv_link": "https://arxiv.org/abs/2403.07535", "other_link": "https://github.com/Junda24/AFNet/.", "title": "Adaptive Fusion of Single-View and Multi-View Depth for Autonomous Driving", "abstract": "Multi-view depth estimation has achieved impressive performance over various\nbenchmarks. However, almost all current multi-view systems rely on given ideal\ncamera poses, which are unavailable in many real-world scenarios, such as\nautonomous driving. In this work, we propose a new robustness benchmark to\nevaluate the depth estimation system under various noisy pose settings.\nSurprisingly, we find current multi-view depth estimation methods or\nsingle-view and multi-view fusion methods will fail when given noisy pose\nsettings. To address this challenge, we propose a single-view and multi-view\nfused depth estimation system, which adaptively integrates high-confident\nmulti-view and single-view results for both robust and accurate depth\nestimations. The adaptive fusion module performs fusion by dynamically\nselecting high-confidence regions between two branches based on a wrapping\nconfidence map. Thus, the system tends to choose the more reliable branch when\nfacing textureless scenes, inaccurate calibration, dynamic objects, and other\ndegradation or challenging conditions. Our method outperforms state-of-the-art\nmulti-view and fusion methods under robustness testing. Furthermore, we achieve\nstate-of-the-art performance on challenging benchmarks (KITTI and DDAD) when\ngiven accurate pose estimations. Project website:\nhttps://github.com/Junda24/AFNet/.", "keywords": [], "authors_list": ["JunDa Cheng", "Wei Yin", "Kaixuan Wang", "Xiaozhi Chen", "Shijie Wang", "Xin Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f24d"}, "filepath": "data/2311.15243.png", "tags": [], "_media_type": "image", "_rand": 0.9992015383343387, "arXiv_link": "https://arxiv.org/abs/2311.15243", "other_link": "https://github.com/ycfate/ID-like.", "title": "ID-like Prompt Learning for Few-Shot Out-of-Distribution Detection", "abstract": "Out-of-distribution (OOD) detection methods often exploit auxiliary outliers\nto train model identifying OOD samples, especially discovering challenging\noutliers from auxiliary outliers dataset to improve OOD detection. However,\nthey may still face limitations in effectively distinguishing between the most\nchallenging OOD samples that are much like in-distribution (ID) data, i.e.,\n\\idlike samples. 
To this end, we propose a novel OOD detection framework that\ndiscovers \\idlike outliers using CLIP \\cite{DBLP:conf/icml/RadfordKHRGASAM21}\nfrom the vicinity space of the ID samples, thus helping to identify these most\nchallenging OOD samples. Then a prompt learning framework is proposed that\nutilizes the identified \\idlike outliers to further leverage the capabilities\nof CLIP for OOD detection. Benefiting from the powerful CLIP, we only need a\nsmall number of ID samples to learn the prompts of the model without exposing\nother auxiliary outlier datasets. By focusing on the most challenging \\idlike\nOOD samples and elegantly exploiting the capabilities of CLIP, our method\nachieves superior few-shot learning performance on various real-world image\ndatasets (e.g., in 4-shot OOD detection on the ImageNet-1k dataset, our method\nreduces the average FPR95 by 12.16\\% and improves the average AUROC by 2.76\\%,\ncompared to state-of-the-art methods). Code is available at\nhttps://github.com/ycfate/ID-like.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Yichen Bai", "Zongbo Han", "Bing Cao", "Xiaoheng Jiang", "Qinghua Hu", "Changqing Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f24e"}, "filepath": "data/2312.17681.png", "tags": [], "_media_type": "image", "_rand": 0.9992893398069469, "arXiv_link": "https://arxiv.org/abs/2312.17681", "other_link": "", "title": "FlowVid: Taming Imperfect Optical Flows for Consistent Video-to-Video Synthesis", "abstract": "Diffusion models have transformed the image-to-image (I2I) synthesis and are\nnow permeating into videos. However, the advancement of video-to-video (V2V)\nsynthesis has been hampered by the challenge of maintaining temporal\nconsistency across video frames. This paper proposes a consistent V2V synthesis\nframework by jointly leveraging spatial conditions and temporal optical flow\nclues within the source video. Contrary to prior methods that strictly adhere\nto optical flow, our approach harnesses its benefits while handling the\nimperfection in flow estimation. We encode the optical flow via warping from\nthe first frame and serve it as a supplementary reference in the diffusion\nmodel. This enables our model for video synthesis by editing the first frame\nwith any prevalent I2I models and then propagating edits to successive frames.\nOur V2V model, FlowVid, demonstrates remarkable properties: (1) Flexibility:\nFlowVid works seamlessly with existing I2I models, facilitating various\nmodifications, including stylization, object swaps, and local edits. (2)\nEfficiency: Generation of a 4-second video with 30 FPS and 512x512 resolution\ntakes only 1.5 minutes, which is 3.1x, 7.2x, and 10.5x faster than CoDeF,\nRerender, and TokenFlow, respectively. 
(3) High-quality: In user studies, our\nFlowVid is preferred 45.7% of the time, outperforming CoDeF (3.5%), Rerender\n(10.2%), and TokenFlow (40.4%).", "keywords": ["Efficient and scalable vision"], "authors_list": ["Feng Liang", "Bichen Wu", "Jialiang Wang", "Licheng Yu", "Kunpeng Li", "Yinan Zhao", "Ishan Misra", "Jia-Bin Huang", "Peizhao Zhang", "Peter Vajda", "Diana Marculescu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f24f"}, "filepath": "data/2306.14451.png", "tags": [], "_media_type": "image", "_rand": 0.9994576031019935, "arXiv_link": "https://arxiv.org/abs/2306.14451", "other_link": "https://github.com/yujiangpu20/PEL4VAD.", "title": "Prompt-enhanced Multiple Instance Learning for Weakly Supervised Anomaly Detection", "abstract": "Video anomaly detection under weak supervision presents significant\nchallenges, particularly due to the lack of frame-level annotations during\ntraining. While prior research has utilized graph convolution networks and\nself-attention mechanisms alongside multiple instance learning (MIL)-based\nclassification loss to model temporal relations and learn discriminative\nfeatures, these methods often employ multi-branch architectures to capture\nlocal and global dependencies separately, resulting in increased parameters and\ncomputational costs. Moreover, the coarse-grained interclass separability\nprovided by the binary constraint of MIL-based loss neglects the fine-grained\ndiscriminability within anomalous classes. In response, this paper introduces a\nweakly supervised anomaly detection framework that focuses on efficient context\nmodeling and enhanced semantic discriminability. We present a Temporal Context\nAggregation (TCA) module that captures comprehensive contextual information by\nreusing the similarity matrix and implementing adaptive fusion. Additionally,\nwe propose a Prompt-Enhanced Learning (PEL) module that integrates semantic\npriors using knowledge-based prompts to boost the discriminative capacity of\ncontext features while ensuring separability between anomaly sub-classes.\nExtensive experiments validate the effectiveness of our method's components,\ndemonstrating competitive performance with reduced parameters and computational\neffort on three challenging benchmarks: UCF-Crime, XD-Violence, and\nShanghaiTech datasets. Notably, our approach significantly improves the\ndetection accuracy of certain anomaly sub-classes, underscoring its practical\nvalue and efficacy. 
Our code is available at:\nhttps://github.com/yujiangpu20/PEL4VAD.", "keywords": ["Efficient and scalable vision", "Large multimodal models and prompting techniques"], "authors_list": ["Junxi Chen", "Liang Li", "Li Su", "Zheng-Jun Zha", "Qingming Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f250"}, "filepath": "data/2311.15529v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998212454925188, "arXiv_link": "https://arxiv.org/abs/2311.15529v1", "other_link": "https://github.com/vimar-gu/MinimaxDiffusion.", "title": "Efficient Dataset Distillation via Minimax Diffusion", "abstract": "Dataset distillation reduces the storage and computational consumption of\ntraining a network by generating a small surrogate dataset that encapsulates\nrich information of the original large-scale one. However, previous\ndistillation methods heavily rely on the sample-wise iterative optimization\nscheme. As the images-per-class (IPC) setting or image resolution grows larger,\nthe necessary computation will demand overwhelming time and resources. In this\nwork, we intend to incorporate generative diffusion techniques for computing\nthe surrogate dataset. Observing that key factors for constructing an effective\nsurrogate dataset are representativeness and diversity, we design additional\nminimax criteria in the generative training to enhance these facets for the\ngenerated images of diffusion models. We present a theoretical model of the\nprocess as hierarchical diffusion control demonstrating the flexibility of the\ndiffusion process to target these criteria without jeopardizing the\nfaithfulness of the sample to the desired distribution. The proposed method\nachieves state-of-the-art validation performance while demanding much less\ncomputational resources. Under the 100-IPC setting on ImageWoof, our method\nrequires less than one-twentieth the distillation time of previous methods, yet\nyields even better performance. Source code available in\nhttps://github.com/vimar-gu/MinimaxDiffusion.", "keywords": [], "authors_list": ["Jianyang Gu", "Saeed Vahidian", "Vyacheslav Kungurtsev", "Haonan Wang", "Wei Jiang", "Yang You", "Yiran Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f251"}, "filepath": "data/2402.15584.png", "tags": [], "_media_type": "image", "_rand": 0.9993211515569338, "arXiv_link": "https://arxiv.org/abs/2402.15584", "other_link": "", "title": "State Space Models for Event Cameras", "abstract": "Today, state-of-the-art deep neural networks that process event-camera data\nfirst convert a temporal window of events into dense, grid-like input\nrepresentations. As such, they exhibit poor generalizability when deployed at\nhigher inference frequencies (i.e., smaller temporal windows) than the ones\nthey were trained on. We address this challenge by introducing state-space\nmodels (SSMs) with learnable timescale parameters to event-based vision. This\ndesign adapts to varying frequencies without the need to retrain the network at\ndifferent frequencies. Additionally, we investigate two strategies to\ncounteract aliasing effects when deploying the model at higher frequencies. 
We\ncomprehensively evaluate our approach against existing methods based on RNN and\nTransformer architectures across various benchmarks, including Gen1 and 1 Mpx\nevent camera datasets. Our results demonstrate that SSM-based models train 33%\nfaster and also exhibit minimal performance degradation when tested at higher\nfrequencies than the training input. Traditional RNN and Transformer models\nexhibit performance drops of more than 20 mAP, with SSMs having a drop of 3.76\nmAP, highlighting the effectiveness of SSMs in event-based vision tasks.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Nikola Zubic", "Mathias Gehrig", "Davide Scaramuzza"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f252"}, "filepath": "data/2312.09056v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997511009319249, "arXiv_link": "https://arxiv.org/abs/2312.09056v1", "other_link": "", "title": "ReCoRe: Regularized Contrastive Representation Learning of World Model", "abstract": "While recent model-free Reinforcement Learning (RL) methods have demonstrated\nhuman-level effectiveness in gaming environments, their success in everyday\ntasks like visual navigation has been limited, particularly under significant\nappearance variations. This limitation arises from (i) poor sample efficiency\nand (ii) over-fitting to training scenarios. To address these challenges, we\npresent a world model that learns invariant features using (i) contrastive\nunsupervised learning and (ii) an intervention-invariant regularizer. Learning\nan explicit representation of the world dynamics i.e. a world model, improves\nsample efficiency while contrastive learning implicitly enforces learning of\ninvariant features, which improves generalization. However, the naive\nintegration of contrastive loss to world models fails due to a lack of\nsupervisory signals to the visual encoder, as world-model-based RL methods\nindependently optimize representation learning and agent policy. To overcome\nthis issue, we propose an intervention-invariant regularizer in the form of an\nauxiliary task such as depth prediction, image denoising, etc., that explicitly\nenforces invariance to style-interventions. Our method outperforms current\nstate-of-the-art model-based and model-free RL methods and significantly on\nout-of-distribution point navigation task evaluated on the iGibson benchmark.\nWe further demonstrate that our approach, with only visual observations,\noutperforms recent language-guided foundation models for point navigation,\nwhich is essential for deployment on robots with limited computation\ncapabilities. Finally, we demonstrate that our proposed model excels at the\nsim-to-real transfer of its perception module on Gibson benchmark.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Rudra P,K. 
Poudel", "Harit Pandya", "Stephan Liwicki", "Roberto Cipolla"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Computer Vision and Pattern Recognition", "Robotics", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f253"}, "filepath": "data/2403.02561.png", "tags": [], "_media_type": "image", "_rand": 0.9994111930046936, "arXiv_link": "https://arxiv.org/abs/2403.02561", "other_link": "", "title": "Semantic Human Mesh Reconstruction with Textures", "abstract": "The field of 3D detailed human mesh reconstruction has made significant\nprogress in recent years. However, current methods still face challenges when\nused in industrial applications due to unstable results, low-quality meshes,\nand a lack of UV unwrapping and skinning weights. In this paper, we present\nSHERT, a novel pipeline that can reconstruct semantic human meshes with\ntextures and high-precision details. SHERT applies semantic- and normal-based\nsampling between the detailed surface (e.g. mesh and SDF) and the corresponding\nSMPL-X model to obtain a partially sampled semantic mesh and then generates the\ncomplete semantic mesh by our specifically designed self-supervised completion\nand refinement networks. Using the complete semantic mesh as a basis, we employ\na texture diffusion model to create human textures that are driven by both\nimages and texts. Our reconstructed meshes have stable UV unwrapping,\nhigh-quality triangle meshes, and consistent semantic information. The given\nSMPL-X model provides semantic information and shape priors, allowing SHERT to\nperform well even with incorrect and incomplete inputs. The semantic\ninformation also makes it easy to substitute and animate different body parts\nsuch as the face, body, and hands. Quantitative and qualitative experiments\ndemonstrate that SHERT is capable of producing high-fidelity and robust\nsemantic meshes that outperform state-of-the-art methods.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["xiaoyu zhan", "Jianxin Yang", "Yuanqi Li", "Jie Guo", "Yanwen Guo", "Wenping Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f254"}, "filepath": "data/2403.19366.png", "tags": [], "_media_type": "image", "_rand": 0.9996788789966213, "arXiv_link": "https://arxiv.org/abs/2403.19366", "other_link": "https://github.com/ying-fu/MSHNet.", "title": "Infrared Small Target Detection with Scale and Location Sensitivity", "abstract": "Recently, infrared small target detection (IRSTD) has been dominated by\ndeep-learning-based methods. However, these methods mainly focus on the design\nof complex model structures to extract discriminative features, leaving the\nloss functions for IRSTD under-explored. For example, the widely used\nIntersection over Union (IoU) and Dice losses lack sensitivity to the scales\nand locations of targets, limiting the detection performance of detectors. In\nthis paper, we focus on boosting detection performance with a more effective\nloss but a simpler model structure. 
Specifically, we first propose a novel\nScale and Location Sensitive (SLS) loss to handle the limitations of existing\nlosses: 1) for scale sensitivity, we compute a weight for the IoU loss based on\ntarget scales to help the detector distinguish targets with different scales:\n2) for location sensitivity, we introduce a penalty term based on the center\npoints of targets to help the detector localize targets more precisely. Then,\nwe design a simple Multi-Scale Head to the plain U-Net (MSHNet). By applying\nSLS loss to each scale of the predictions, our MSHNet outperforms existing\nstate-of-the-art methods by a large margin. In addition, the detection\nperformance of existing detectors can be further improved when trained with our\nSLS loss, demonstrating the effectiveness and generalization of our SLS loss.\nThe code is available at https://github.com/ying-fu/MSHNet.", "keywords": ["Remote sensing and photogrammetry"], "authors_list": ["Qiankun Liu", "Rui Liu", "Bolun Zheng", "Hongkui Wang", "Ying Fu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f255"}, "filepath": "data/2402.12259.png", "tags": [], "_media_type": "image", "_rand": 0.9993409327697159, "arXiv_link": "https://arxiv.org/abs/2402.12259", "other_link": "", "title": "Open3DSG: Open-Vocabulary 3D Scene Graphs from Point Clouds with Queryable Objects and Open-Set Relationships", "abstract": "Current approaches for 3D scene graph prediction rely on labeled datasets to\ntrain models for a fixed set of known object classes and relationship\ncategories. We present Open3DSG, an alternative approach to learn 3D scene\ngraph prediction in an open world without requiring labeled scene graph data.\nWe co-embed the features from a 3D scene graph prediction backbone with the\nfeature space of powerful open world 2D vision language foundation models. This\nenables us to predict 3D scene graphs from 3D point clouds in a zero-shot\nmanner by querying object classes from an open vocabulary and predicting the\ninter-object relationships from a grounded LLM with scene graph features and\nqueried object classes as context. Open3DSG is the first 3D point cloud method\nto predict not only explicit open-vocabulary object classes, but also open-set\nrelationships that are not limited to a predefined label set, making it\npossible to express rare as well as specific objects and relationships in the\npredicted 3D scene graph. 
Our experiments show that Open3DSG is effective at\npredicting arbitrary object classes as well as their complex inter-object\nrelationships describing spatial, supportive, semantic and comparative\nrelationships.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Sebastian Koch", "Narunas Vaskevicius", "Mirco Colosi", "Pedro Hermosilla", "Timo Ropinski"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f256"}, "filepath": "data/2311.15537.png", "tags": [], "_media_type": "image", "_rand": 0.9990284146632992, "arXiv_link": "https://arxiv.org/abs/2311.15537", "other_link": "https://github.com/xb534/SED.git}.", "title": "SED: A Simple Encoder-Decoder for Open-Vocabulary Semantic Segmentation", "abstract": "Open-vocabulary semantic segmentation strives to distinguish pixels into\ndifferent semantic groups from an open set of categories. Most existing methods\nexplore utilizing pre-trained vision-language models, in which the key is to\nadopt the image-level model for pixel-level segmentation task. In this paper,\nwe propose a simple encoder-decoder, named SED, for open-vocabulary semantic\nsegmentation, which comprises a hierarchical encoder-based cost map generation\nand a gradual fusion decoder with category early rejection. The hierarchical\nencoder-based cost map generation employs hierarchical backbone, instead of\nplain transformer, to predict pixel-level image-text cost map. Compared to\nplain transformer, hierarchical backbone better captures local spatial\ninformation and has linear computational complexity with respect to input size.\nOur gradual fusion decoder employs a top-down structure to combine cost map and\nthe feature maps of different backbone levels for segmentation. To accelerate\ninference speed, we introduce a category early rejection scheme in the decoder\nthat rejects many no-existing categories at the early layer of decoder,\nresulting in at most 4.7 times acceleration without accuracy degradation.\nExperiments are performed on multiple open-vocabulary semantic segmentation\ndatasets, which demonstrates the efficacy of our SED method. When using\nConvNeXt-B, our SED method achieves mIoU score of 31.6\\% on ADE20K with 150\ncategories at 82 millisecond ($ms$) per image on a single A6000. We will\nrelease it at \\url{https://github.com/xb534/SED.git}.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Bin Xie", "Jiale Cao", "Jin Xie", "Fahad Shahbaz Khan", "Yanwei Pang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f257"}, "filepath": "data/2405.12724.png", "tags": [], "_media_type": "image", "_rand": 0.999282895717205, "arXiv_link": "https://arxiv.org/abs/2405.12724", "other_link": "https://wanghongsheng01.github.io/RemoCap/.", "title": "Choose What You Need: Disentangled Representation Learning for Scene Text Recognition, Removal and Editing", "abstract": "Reconstructing 3D human bodies from realistic motion sequences remains a\nchallenge due to pervasive and complex occlusions. Current methods struggle to\ncapture the dynamics of occluded body parts, leading to model penetration and\ndistorted motion. 
RemoCap leverages Spatial Disentanglement (SD) and Motion\nDisentanglement (MD) to overcome these limitations. SD addresses occlusion\ninterference between the target human body and surrounding objects. It achieves\nthis by disentangling target features along the dimension axis. By aligning\nfeatures based on their spatial positions in each dimension, SD isolates the\ntarget object's response within a global window, enabling accurate capture\ndespite occlusions. The MD module employs a channel-wise temporal shuffling\nstrategy to simulate diverse scene dynamics. This process effectively\ndisentangles motion features, allowing RemoCap to reconstruct occluded parts\nwith greater fidelity. Furthermore, this paper introduces a sequence velocity\nloss that promotes temporal coherence. This loss constrains inter-frame\nvelocity errors, ensuring the predicted motion exhibits realistic consistency.\nExtensive comparisons with state-of-the-art (SOTA) methods on benchmark\ndatasets demonstrate RemoCap's superior performance in 3D human body\nreconstruction. On the 3DPW dataset, RemoCap surpasses all competitors,\nachieving the best results in MPVPE (81.9), MPJPE (72.7), and PA-MPJPE (44.1)\nmetrics. Codes are available at https://wanghongsheng01.github.io/RemoCap/.", "keywords": [], "authors_list": ["Boqiang Zhang", "Hongtao Xie", "Zuan Gao", "Yuxin Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f258"}, "filepath": "data/2312.13102.png", "tags": [], "_media_type": "image", "_rand": 0.9995562651361228, "arXiv_link": "https://arxiv.org/abs/2312.13102", "other_link": "", "title": "SpecNeRF: Gaussian Directional Encoding for Specular Reflections", "abstract": "Neural radiance fields have achieved remarkable performance in modeling the\nappearance of 3D scenes. However, existing approaches still struggle with the\nview-dependent appearance of glossy surfaces, especially under complex lighting\nof indoor environments. Unlike existing methods, which typically assume distant\nlighting like an environment map, we propose a learnable Gaussian directional\nencoding to better model the view-dependent effects under near-field lighting\nconditions. Importantly, our new directional encoding captures the\nspatially-varying nature of near-field lighting and emulates the behavior of\nprefiltered environment maps. As a result, it enables the efficient evaluation\nof preconvolved specular color at any 3D location with varying roughness\ncoefficients. We further introduce a data-driven geometry prior that helps\nalleviate the shape radiance ambiguity in reflection modeling. We show that our\nGaussian directional encoding and geometry prior significantly improve the\nmodeling of challenging specular reflections in neural radiance fields, which\nhelps decompose appearance into more physically meaningful components.", "keywords": ["Deep learning architectures and techniques", "Computational imaging and physics-based vision"], "authors_list": ["Li Ma", "Vasu Agrawal", "Haithem Turki", "Changil Kim", "Chen Gao", "Pedro V. 
Sander", "Michael Zollhoefer", "Christian Richardt"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f259"}, "filepath": "data/2312.12480.png", "tags": [], "_media_type": "image", "_rand": 0.9999677633948986, "arXiv_link": "https://arxiv.org/abs/2312.12480", "other_link": "https://sites.google.com/view/continual-mae/home.", "title": "Continual-MAE: Adaptive Distribution Masked Autoencoders for Continual Test-Time Adaptation", "abstract": "Continual Test-Time Adaptation (CTTA) is proposed to migrate a source\npre-trained model to continually changing target distributions, addressing\nreal-world dynamism. Existing CTTA methods mainly rely on entropy minimization\nor teacher-student pseudo-labeling schemes for knowledge extraction in\nunlabeled target domains. However, dynamic data distributions cause\nmiscalibrated predictions and noisy pseudo-labels in existing self-supervised\nlearning methods, hindering the effective mitigation of error accumulation and\ncatastrophic forgetting problems during the continual adaptation process. To\ntackle these issues, we propose a continual self-supervised method, Adaptive\nDistribution Masked Autoencoders (ADMA), which enhances the extraction of\ntarget domain knowledge while mitigating the accumulation of distribution\nshifts. Specifically, we propose a Distribution-aware Masking (DaM) mechanism\nto adaptively sample masked positions, followed by establishing consistency\nconstraints between the masked target samples and the original target samples.\nAdditionally, for masked tokens, we utilize an efficient decoder to reconstruct\na hand-crafted feature descriptor (e.g., Histograms of Oriented Gradients),\nleveraging its invariant properties to boost task-relevant representations.\nThrough conducting extensive experiments on four widely recognized benchmarks,\nour proposed method attains state-of-the-art performance in both classification\nand segmentation CTTA tasks. Our project page:\nhttps://sites.google.com/view/continual-mae/home.", "keywords": [], "authors_list": ["Jiaming Liu", "Ran Xu", "Senqiao Yang", "Renrui Zhang", "Qizhe Zhang", "Zehui Chen", "Yandong Guo", "Shanghang Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f25a"}, "filepath": "data/2404.09819.png", "tags": [], "_media_type": "image", "_rand": 0.999068746298473, "arXiv_link": "https://arxiv.org/abs/2404.09819", "other_link": "", "title": "3D Face Tracking from 2D Video through Iterative Dense UV to Image Flow", "abstract": "When working with 3D facial data, improving fidelity and avoiding the uncanny\nvalley effect is critically dependent on accurate 3D facial performance\ncapture. Because such methods are expensive and due to the widespread\navailability of 2D videos, recent methods have focused on how to perform\nmonocular 3D face tracking. However, these methods often fall short in\ncapturing precise facial movements due to limitations in their network\narchitecture, training, and evaluation processes. Addressing these challenges,\nwe propose a novel face tracker, FlowFace, that introduces an innovative 2D\nalignment network for dense per-vertex alignment. 
Unlike prior work, FlowFace\nis trained on high-quality 3D scan annotations rather than weak supervision or\nsynthetic data. Our 3D model fitting module jointly fits a 3D face model from\none or many observations, integrating existing neutral shape priors for\nenhanced identity and expression disentanglement and per-vertex deformations\nfor detailed facial feature reconstruction. Additionally, we propose a novel\nmetric and benchmark for assessing tracking accuracy. Our method exhibits\nsuperior performance on both custom and publicly available benchmarks. We\nfurther validate the effectiveness of our tracker by generating high-quality 3D\ndata from 2D videos, which leads to performance gains on downstream tasks.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Felix Taubner", "Prashant Raina", "Mathieu Tuli", "Eu Wern Teh", "Chul Lee", "Jinmiao Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f25b"}, "filepath": "data/2404.00417.png", "tags": [], "_media_type": "image", "_rand": 0.9997873809478514, "arXiv_link": "https://arxiv.org/abs/2404.00417", "other_link": "", "title": "Orchestrate Latent Expertise: Advancing Online Continual Learning with Multi-Level Supervision and Reverse Self-Distillation", "abstract": "To accommodate real-world dynamics, artificial intelligence systems need to\ncope with sequentially arriving content in an online manner. Beyond regular\nContinual Learning (CL) attempting to address catastrophic forgetting with\noffline training of each task, Online Continual Learning (OCL) is a more\nchallenging yet realistic setting that performs CL in a one-pass data stream.\nCurrent OCL methods primarily rely on memory replay of old training samples.\nHowever, a notable gap from CL to OCL stems from the additional\noverfitting-underfitting dilemma associated with the use of rehearsal buffers:\nthe inadequate learning of new training samples (underfitting) and the repeated\nlearning of a few old training samples (overfitting). To this end, we introduce\na novel approach, Multi-level Online Sequential Experts (MOSE), which\ncultivates the model as stacked sub-experts, integrating multi-level\nsupervision and reverse self-distillation. Supervision signals across multiple\nstages facilitate appropriate convergence of the new task while gathering\nvarious strengths from experts by knowledge distillation mitigates the\nperformance decline of old tasks. 
MOSE demonstrates remarkable efficacy in\nlearning new samples and preserving past knowledge through multi-level experts,\nthereby significantly advancing OCL performance over state-of-the-art baselines\n(e.g., up to 7.3% on Split CIFAR-100 and 6.1% on Split Tiny-ImageNet).", "keywords": ["Efficient and scalable vision"], "authors_list": ["Hongwei Yan", "Liyuan Wang", "Kaisheng Ma", "Yi Zhong"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f25c"}, "filepath": "data/2402.17562v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995408059292173, "arXiv_link": "https://arxiv.org/abs/2402.17562v1", "other_link": "", "title": "An Empirical Study of the Generalization Ability of Lidar 3D Object Detectors to Unseen Domains", "abstract": "3D Object Detectors (3D-OD) are crucial for understanding the environment in\nmany robotic tasks, especially autonomous driving. Including 3D information via\nLidar sensors improves accuracy greatly. However, such detectors perform poorly\non domains they were not trained on, i.e. different locations, sensors,\nweather, etc., limiting their reliability in safety-critical applications.\nThere exist methods to adapt 3D-ODs to these domains; however, these methods\ntreat 3D-ODs as a black box, neglecting underlying architectural decisions and\nsource-domain training strategies. Instead, we dive deep into the details of\n3D-ODs, focusing our efforts on fundamental factors that influence robustness\nprior to domain adaptation.\n We systematically investigate four design choices (and the interplay between\nthem) often overlooked in 3D-OD robustness and domain adaptation: architecture,\nvoxel encoding, data augmentations, and anchor strategies. We assess their\nimpact on the robustness of nine state-of-the-art 3D-ODs across six benchmarks\nencompassing three types of domain gaps - sensor type, weather, and location.\n Our main findings are: (1) transformer backbones with local point features\nare more robust than 3D CNNs, (2) test-time anchor size adjustment is crucial\nfor adaptation across geographical locations, significantly boosting scores\nwithout retraining, (3) source-domain augmentations allow the model to\ngeneralize to low-resolution sensors, and (4) surprisingly, robustness to bad\nweather is improved when training directly on more clean weather data than on\ntraining with bad weather data. We outline our main conclusions and findings to\nprovide practical guidance on developing more robust 3D-ODs.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["George Eskandar"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f25d"}, "filepath": "data/2311.12079.png", "tags": [], "_media_type": "image", "_rand": 0.9991033883977589, "arXiv_link": "https://arxiv.org/abs/2311.12079", "other_link": "", "title": "FreeKD: Knowledge Distillation via Semantic Frequency Prompt", "abstract": "Knowledge distillation (KD) has been applied to various tasks successfully,\nand mainstream methods typically boost the student model via spatial imitation\nlosses. 
However, the consecutive downsamplings induced in the spatial domain of\nteacher model is a type of corruption, hindering the student from analyzing\nwhat specific information needs to be imitated, which results in accuracy\ndegradation. To better understand the underlying pattern of corrupted feature\nmaps, we shift our attention to the frequency domain. During frequency\ndistillation, we encounter a new challenge: the low-frequency bands convey\ngeneral but minimal context, while the high are more informative but also\nintroduce noise. Not each pixel within the frequency bands contributes equally\nto the performance. To address the above problem: (1) We propose the Frequency\nPrompt plugged into the teacher model, absorbing the semantic frequency context\nduring finetuning. (2) During the distillation period, a pixel-wise frequency\nmask is generated via Frequency Prompt, to localize those pixel of interests\n(PoIs) in various frequency bands. Additionally, we employ a position-aware\nrelational frequency loss for dense prediction tasks, delivering a high-order\nspatial enhancement to the student model. We dub our Frequency Knowledge\nDistillation method as FreeKD, which determines the optimal localization and\nextent for the frequency distillation. Extensive experiments demonstrate that\nFreeKD not only outperforms spatial-based distillation methods consistently on\ndense prediction tasks (e.g., FreeKD brings 3.8 AP gains for RepPoints-R50 on\nCOCO2017 and 4.55 mIoU gains for PSPNet-R18 on Cityscapes), but also conveys\nmore robustness to the student. Notably, we also validate the generalization of\nour approach on large-scale vision models (e.g., DINO and SAM).", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yuan Zhang", "Tao Huang", "Jiaming Liu", "Tao Jiang", "Kuan Cheng", "Shanghang Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f25e"}, "filepath": "data/2405.04356.png", "tags": [], "_media_type": "image", "_rand": 0.9993424002291605, "arXiv_link": "https://arxiv.org/abs/2405.04356", "other_link": "https://github.com/1211sh/Diffusion-driven_GAN-Inversion/.", "title": "Diffusion-driven GAN Inversion for Multi-Modal Face Image Generation", "abstract": "We present a new multi-modal face image generation method that converts a\ntext prompt and a visual input, such as a semantic mask or scribble map, into a\nphoto-realistic face image. To do this, we combine the strengths of Generative\nAdversarial networks (GANs) and diffusion models (DMs) by employing the\nmulti-modal features in the DM into the latent space of the pre-trained GANs.\nWe present a simple mapping and a style modulation network to link two models\nand convert meaningful representations in feature maps and attention maps into\nlatent codes. With GAN inversion, the estimated latent codes can be used to\ngenerate 2D or 3D-aware facial images. We further present a multi-step training\nstrategy that reflects textual and structural representations into the\ngenerated image. Our proposed network produces realistic 2D, multi-view, and\nstylized face images, which align well with inputs. 
We validate our method by\nusing pre-trained 2D and 3D GANs, and our results outperform existing methods.\nOur project page is available at\nhttps://github.com/1211sh/Diffusion-driven_GAN-Inversion/.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models", "Deep learning architectures and techniques"], "authors_list": ["Jihyun Kim", "Changjae Oh", "Hoseok Do", "Soohyun Kim", "Kwanghoon Sohn"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f25f"}, "filepath": "data/2404.04624.png", "tags": [], "_media_type": "image", "_rand": 0.9991407378593179, "arXiv_link": "https://arxiv.org/abs/2404.04624", "other_link": "https://github.com/mxin262/Bridging-Text-Spotting.", "title": "Bridging the Gap Between End-to-End and Two-Step Text Spotting", "abstract": "Modularity plays a crucial role in the development and maintenance of complex\nsystems. While end-to-end text spotting efficiently mitigates the issues of\nerror accumulation and sub-optimal performance seen in traditional two-step\nmethodologies, the two-step methods continue to be favored in many competitions\nand practical settings due to their superior modularity. In this paper, we\nintroduce Bridging Text Spotting, a novel approach that resolves the error\naccumulation and suboptimal performance issues in two-step methods while\nretaining modularity. To achieve this, we adopt a well-trained detector and\nrecognizer that are developed and trained independently and then lock their\nparameters to preserve their already acquired capabilities. Subsequently, we\nintroduce a Bridge that connects the locked detector and recognizer through a\nzero-initialized neural network. This zero-initialized neural network,\ninitialized with weights set to zeros, ensures seamless integration of the\nlarge receptive field features in detection into the locked recognizer.\nFurthermore, since the fixed detector and recognizer cannot naturally acquire\nend-to-end optimization features, we adopt the Adapter to facilitate their\nefficient learning of these features. We demonstrate the effectiveness of the\nproposed method through extensive experiments: Connecting the latest detector\nand recognizer through Bridging Text Spotting, we achieved an accuracy of 83.3%\non Total-Text, 69.8% on CTW1500, and 89.5% on ICDAR 2015. The code is available\nat https://github.com/mxin262/Bridging-Text-Spotting.", "keywords": ["Document analysis and understanding"], "authors_list": ["Mingxin Huang", "Hongliang Li", "Yuliang Liu", "Xiang Bai", "Lianwen Jin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f260"}, "filepath": "data/2405.16925.png", "tags": [], "_media_type": "image", "_rand": 0.9990947523036023, "arXiv_link": "https://arxiv.org/abs/2405.16925", "other_link": "https://github.com/guanw-pku/OED}.", "title": "OED: Towards One-stage End-to-End Dynamic Scene Graph Generation", "abstract": "Dynamic Scene Graph Generation (DSGG) focuses on identifying visual\nrelationships within the spatial-temporal domain of videos. 
Conventional\napproaches often employ multi-stage pipelines, which typically consist of\nobject detection, temporal association, and multi-relation classification.\nHowever, these methods exhibit inherent limitations due to the separation of\nmultiple stages, and independent optimization of these sub-problems may yield\nsub-optimal solutions. To remedy these limitations, we propose a one-stage\nend-to-end framework, termed OED, which streamlines the DSGG pipeline. This\nframework reformulates the task as a set prediction problem and leverages\npair-wise features to represent each subject-object pair within the scene\ngraph. Moreover, another challenge of DSGG is capturing temporal dependencies,\nwe introduce a Progressively Refined Module (PRM) for aggregating temporal\ncontext without the constraints of additional trackers or handcrafted\ntrajectories, enabling end-to-end optimization of the network. Extensive\nexperiments conducted on the Action Genome benchmark demonstrate the\neffectiveness of our design. The code and models are available at\n\\url{https://github.com/guanw-pku/OED}.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Guan Wang", "Zhimin Li", "Qingchao Chen", "Yang Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f261"}, "filepath": "data/2403.18271.png", "tags": [], "_media_type": "image", "_rand": 0.9999021770180689, "arXiv_link": "https://arxiv.org/abs/2403.18271", "other_link": "https://github.com/Cccccczh404/H-SAM.", "title": "Unleashing the Potential of SAM for Medical Adaptation via Hierarchical Decoding", "abstract": "The Segment Anything Model (SAM) has garnered significant attention for its\nversatile segmentation abilities and intuitive prompt-based interface. However,\nits application in medical imaging presents challenges, requiring either\nsubstantial training costs and extensive medical datasets for full model\nfine-tuning or high-quality prompts for optimal performance. This paper\nintroduces H-SAM: a prompt-free adaptation of SAM tailored for efficient\nfine-tuning of medical images via a two-stage hierarchical decoding procedure.\nIn the initial stage, H-SAM employs SAM's original decoder to generate a prior\nprobabilistic mask, guiding a more intricate decoding process in the second\nstage. Specifically, we propose two key designs: 1) A class-balanced,\nmask-guided self-attention mechanism addressing the unbalanced label\ndistribution, enhancing image embedding; 2) A learnable mask cross-attention\nmechanism spatially modulating the interplay among different image regions\nbased on the prior mask. Moreover, the inclusion of a hierarchical pixel\ndecoder in H-SAM enhances its proficiency in capturing fine-grained and\nlocalized details. This approach enables SAM to effectively integrate learned\nmedical priors, facilitating enhanced adaptation for medical image segmentation\nwith limited samples. Our H-SAM demonstrates a 4.78% improvement in average\nDice compared to existing prompt-free SAM variants for multi-organ segmentation\nusing only 10% of 2D slices. Notably, without using any unlabeled data, H-SAM\neven outperforms state-of-the-art semi-supervised models relying on extensive\nunlabeled training data across various medical datasets. 
Our code is available\nat https://github.com/Cccccczh404/H-SAM.", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Zhiheng Cheng", "Qingyue Wei", "Hongru Zhu", "Yan Wang", "Liangqiong Qu", "Wei Shao", "Yuyin Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f262"}, "filepath": "data/2403.10574.png", "tags": [], "_media_type": "image", "_rand": 0.9997116456262625, "arXiv_link": "https://arxiv.org/abs/2403.10574", "other_link": "", "title": "Autoregressive Queries for Adaptive Tracking with Spatio-Temporal Transformers", "abstract": "The rich spatio-temporal information is crucial to capture the complicated\ntarget appearance variations in visual tracking. However, most top-performing\ntracking algorithms rely on many hand-crafted components for spatio-temporal\ninformation aggregation. Consequently, the spatio-temporal information is far\naway from being fully explored. To alleviate this issue, we propose an adaptive\ntracker with spatio-temporal transformers (named AQATrack), which adopts simple\nautoregressive queries to effectively learn spatio-temporal information without\nmany hand-designed components. Firstly, we introduce a set of learnable and\nautoregressive queries to capture the instantaneous target appearance changes\nin a sliding window fashion. Then, we design a novel attention mechanism for\nthe interaction of existing queries to generate a new query in current frame.\nFinally, based on the initial target template and learnt autoregressive\nqueries, a spatio-temporal information fusion module (STM) is designed for\nspatiotemporal formation aggregation to locate a target object. Benefiting from\nthe STM, we can effectively combine the static appearance and instantaneous\nchanges to guide robust tracking. Extensive experiments show that our method\nsignificantly improves the tracker's performance on six popular tracking\nbenchmarks: LaSOT, LaSOText, TrackingNet, GOT-10k, TNL2K, and UAV123.", "keywords": [], "authors_list": ["Jinxia Xie", "Bineng Zhong", "Zhiyi Mo", "Shengping Zhang", "Liangtao Shi", "Shuxiang Song", "Rongrong Ji"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f263"}, "filepath": "data/2401.03411.png", "tags": [], "_media_type": "image", "_rand": 0.9995207342073708, "arXiv_link": "https://arxiv.org/abs/2401.03411", "other_link": "", "title": "GRAM: Global Reasoning for Multi-Page VQA", "abstract": "The increasing use of transformer-based large language models brings forward\nthe challenge of processing long sequences. In document visual question\nanswering (DocVQA), leading methods focus on the single-page setting, while\ndocuments can span hundreds of pages. We present GRAM, a method that seamlessly\nextends pre-trained single-page models to the multi-page setting, without\nrequiring computationally-heavy pretraining. To do so, we leverage a\nsingle-page encoder for local page-level understanding, and enhance it with\ndocument-level designated layers and learnable tokens, facilitating the flow of\ninformation across pages for global reasoning. To enforce our model to utilize\nthe newly introduced document tokens, we propose a tailored bias adaptation\nmethod. 
For additional computational savings during decoding, we introduce an\noptional compression stage using our compression-transformer\n(C-Former),reducing the encoded sequence length, thereby allowing a tradeoff\nbetween quality and latency. Extensive experiments showcase GRAM's\nstate-of-the-art performance on the benchmarks for multi-page DocVQA,\ndemonstrating the effectiveness of our approach.", "keywords": ["Efficient and scalable vision", "Document analysis and understanding"], "authors_list": ["Itshak Blau", "Sharon Fogel", "Roi Ronen", "Alona Golts", "Shahar Tsiper", "Elad Ben Avraham", "Aviad Aberdam", "Roy Ganz", "Ron Litman"], "category_name": "Computation and Language", "all_categories": ["Computation and Language", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f264"}, "filepath": "data/2302.09778.png", "tags": [], "_media_type": "image", "_rand": 0.999785604168799, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2302.09778", "other_link": "", "title": "AnyScene: Customized Image Synthesis with Composited Foreground", "abstract": "Recent large-scale generative models learned on big data are capable of\nsynthesizing incredible images yet suffer from limited controllability. This\nwork offers a new generation paradigm that allows flexible control of the\noutput image, such as spatial layout and palette, while maintaining the\nsynthesis quality and model creativity. With compositionality as the core idea,\nwe first decompose an image into representative factors, and then train a\ndiffusion model with all these factors as the conditions to recompose the\ninput. At the inference stage, the rich intermediate representations work as\ncomposable elements, leading to a huge design space (i.e., exponentially\nproportional to the number of decomposed factors) for customizable content\ncreation. It is noteworthy that our approach, which we call Composer, supports\nvarious levels of conditions, such as text description as the global\ninformation, depth map and sketch as the local guidance, color histogram for\nlow-level details, etc. Besides improving controllability, we confirm that\nComposer serves as a general framework and facilitates a wide range of\nclassical generative tasks without retraining. Code and models will be made\navailable.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Ruidong Chen", "Lanjun Wang", "Weizhi Nie", "Yongdong Zhang", "An-An Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f265"}, "filepath": "data/2404.00429.png", "tags": [], "_media_type": "image", "_rand": 0.9994984578172275, "arXiv_link": "https://arxiv.org/abs/2404.00429", "other_link": "https://github.com/jinsz/Multiway-Point-Cloud-Mosaicking-with-Diffusion-and-Global-Optimization.", "title": "Multiway Point Cloud Mosaicking with Diffusion and Global Optimization", "abstract": "We introduce a novel framework for multiway point cloud mosaicking (named\nWednesday), designed to co-align sets of partially overlapping point clouds --\ntypically obtained from 3D scanners or moving RGB-D cameras -- into a unified\ncoordinate system. 
At the core of our approach is ODIN, a learned pairwise\nregistration algorithm that iteratively identifies overlaps and refines\nattention scores, employing a diffusion-based process for denoising pairwise\ncorrelation matrices to enhance matching accuracy. Further steps include\nconstructing a pose graph from all point clouds, performing rotation averaging,\na novel robust algorithm for re-estimating translations optimally in terms of\nconsensus maximization and translation optimization. Finally, the point cloud\nrotations and positions are optimized jointly by a diffusion-based approach.\nTested on four diverse, large-scale datasets, our method achieves\nstate-of-the-art pairwise and multiway registration results by a large margin\non all benchmarks. Our code and models are available at\nhttps://github.com/jinsz/Multiway-Point-Cloud-Mosaicking-with-Diffusion-and-Global-Optimization.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Shengze Jin", "Iro Armeni", "Marc Pollefeys", "Daniel Barath"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f266"}, "filepath": "data/2404.18135.png", "tags": [], "_media_type": "image", "_rand": 0.9996002951312293, "arXiv_link": "https://arxiv.org/abs/2404.18135", "other_link": "https://github.com/iSEE-Laboratory/DGTR", "title": "Dexterous Grasp Transformer", "abstract": "In this work, we propose a novel discriminative framework for dexterous grasp\ngeneration, named Dexterous Grasp TRansformer (DGTR), capable of predicting a\ndiverse set of feasible grasp poses by processing the object point cloud with\nonly one forward pass. We formulate dexterous grasp generation as a set\nprediction task and design a transformer-based grasping model for it. However,\nwe identify that this set prediction paradigm encounters several optimization\nchallenges in the field of dexterous grasping and results in restricted\nperformance. To address these issues, we propose progressive strategies for\nboth the training and testing phases. First, the dynamic-static matching\ntraining (DSMT) strategy is presented to enhance the optimization stability\nduring the training phase. Second, we introduce the adversarial-balanced\ntest-time adaptation (AB-TTA) with a pair of adversarial losses to improve\ngrasping quality during the testing phase. Experimental results on the\nDexGraspNet dataset demonstrate the capability of DGTR to predict dexterous\ngrasp poses with both high quality and diversity. 
Notably, while keeping high\nquality, the diversity of grasp poses predicted by DGTR significantly\noutperforms previous works in multiple metrics without any data pre-processing.\nCodes are available at https://github.com/iSEE-Laboratory/DGTR .", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Guo-Hao Xu", "Yi-Lin Wei", "Dian Zheng", "Xiao-Ming Wu", "Wei-Shi Zheng"], "category_name": "Robotics", "all_categories": ["Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f267"}, "filepath": "data/2404.06842.png", "tags": [], "_media_type": "image", "_rand": 0.9998804903947914, "arXiv_link": "https://arxiv.org/abs/2404.06842", "other_link": "https://github.com/ZYangChen/MoCha-Stereo.", "title": "MoCha-Stereo: Motif Channel Attention Network for Stereo Matching", "abstract": "Learning-based stereo matching techniques have made significant progress.\nHowever, existing methods inevitably lose geometrical structure information\nduring the feature channel generation process, resulting in edge detail\nmismatches. In this paper, the Motif Channel Attention Stereo Matching Network\n(MoCha-Stereo) is designed to address this problem. We provide the Motif\nChannel Correlation Volume (MCCV) to determine more accurate edge matching\ncosts. MCCV is achieved by projecting motif channels, which capture common\ngeometric structures in feature channels, onto feature maps and cost volumes.\nIn addition, edge variations in potential feature channels of the\nreconstruction error map also affect details matching, we propose the\nReconstruction Error Motif Penalty (REMP) module to further refine the\nfull-resolution disparity estimation. REMP integrates the frequency information\nof typical channel features from the reconstruction error. MoCha-Stereo ranks\n1st on the KITTI-2015 and KITTI-2012 Reflective leaderboards. Our structure\nalso shows excellent performance in Multi-View Stereo. Code is available at\nhttps://github.com/ZYangChen/MoCha-Stereo.", "keywords": [], "authors_list": ["Ziyang Chen", "Wei Long", "He Yao", "Yongjun Zhang", "Bingshu Wang", "Yongbin Qin", "Jia Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f268"}, "filepath": "data/2312.16812.png", "tags": [], "_media_type": "image", "_rand": 0.9992596661695525, "arXiv_link": "https://arxiv.org/abs/2312.16812", "other_link": "https://github.com/oppo-us-research/SpacetimeGaussians.", "title": "Spacetime Gaussian Feature Splatting for Real-Time Dynamic View Synthesis", "abstract": "Novel view synthesis of dynamic scenes has been an intriguing yet challenging\nproblem. Despite recent advancements, simultaneously achieving high-resolution\nphotorealistic results, real-time rendering, and compact storage remains a\nformidable task. To address these challenges, we propose Spacetime Gaussian\nFeature Splatting as a novel dynamic scene representation, composed of three\npivotal components. First, we formulate expressive Spacetime Gaussians by\nenhancing 3D Gaussians with temporal opacity and parametric motion/rotation.\nThis enables Spacetime Gaussians to capture static, dynamic, as well as\ntransient content within a scene. Second, we introduce splatted feature\nrendering, which replaces spherical harmonics with neural features.
These\nfeatures facilitate the modeling of view- and time-dependent appearance while\nmaintaining small size. Third, we leverage the guidance of training error and\ncoarse depth to sample new Gaussians in areas that are challenging to converge\nwith existing pipelines. Experiments on several established real-world datasets\ndemonstrate that our method achieves state-of-the-art rendering quality and\nspeed, while retaining compact storage. At 8K resolution, our lite-version\nmodel can render at 60 FPS on an Nvidia RTX 4090 GPU. Our code is available at\nhttps://github.com/oppo-us-research/SpacetimeGaussians.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Zhan Li", "Zhang Chen", "Zhong Li", "Yi Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f269"}, "filepath": "data/2404.06511.png", "tags": [], "_media_type": "image", "_rand": 0.9991510854910037, "arXiv_link": "https://arxiv.org/abs/2404.06511", "other_link": "", "title": "MoReVQA: Exploring Modular Reasoning Models for Video Question Answering", "abstract": "This paper addresses the task of video question answering (videoQA) via a\ndecomposed multi-stage, modular reasoning framework. Previous modular methods\nhave shown promise with a single planning stage ungrounded in visual content.\nHowever, through a simple and effective baseline, we find that such systems can\nlead to brittle behavior in practice for challenging videoQA settings. Thus,\nunlike traditional single-stage planning methods, we propose a multi-stage\nsystem consisting of an event parser, a grounding stage, and a final reasoning\nstage in conjunction with an external memory. All stages are training-free, and\nperformed using few-shot prompting of large models, creating interpretable\nintermediate outputs at each stage. By decomposing the underlying planning and\ntask complexity, our method, MoReVQA, improves over prior work on standard\nvideoQA benchmarks (NExT-QA, iVQA, EgoSchema, ActivityNet-QA) with\nstate-of-the-art results, and extensions to related tasks (grounded videoQA,\nparagraph captioning).", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Juhong Min", "Shyamal Buch", "Arsha Nagrani", "Minsu Cho", "Cordelia Schmid"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f26a"}, "filepath": "data/2311.12194.png", "tags": [], "_media_type": "image", "_rand": 0.9993979337610348, "arXiv_link": "https://arxiv.org/abs/2311.12194", "other_link": "https://people.csail.mit.edu/liyifei/publication/diffavatar/", "title": "DiffAvatar: Simulation-Ready Garment Optimization with Differentiable Simulation", "abstract": "The realism of digital avatars is crucial in enabling telepresence\napplications with self-expression and customization. While physical simulations\ncan produce realistic motions for clothed humans, they require high-quality\ngarment assets with associated physical parameters for cloth simulations.\nHowever, manually creating these assets and calibrating their parameters is\nlabor-intensive and requires specialized expertise. 
Current methods focus on\nreconstructing geometry, but don't generate complete assets for physics-based\napplications. To address this gap, we propose DiffAvatar, a novel approach that\nperforms body and garment co-optimization using differentiable simulation. By\nintegrating physical simulation into the optimization loop and accounting for\nthe complex nonlinear behavior of cloth and its intricate interaction with the\nbody, our framework recovers body and garment geometry and extracts important\nmaterial parameters in a physically plausible way. Our experiments demonstrate\nthat our approach generates realistic clothing and body shape suitable for\ndownstream applications. We provide additional insights and results on our\nwebpage: https://people.csail.mit.edu/liyifei/publication/diffavatar/", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Yifei Li", "Hsiaoyu Chen", "Egor Larionov", "Nikolaos Sarafianos", "Wojciech Matusik", "Tuur Stuyck"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f26b"}, "filepath": "data/2312.00648.png", "tags": [], "_media_type": "image", "_rand": 0.9995377751622041, "arXiv_link": "https://arxiv.org/abs/2312.00648", "other_link": "https://github.com/gkakogeorgiou/spot", "title": "SPOT: Self-Training with Patch-Order Permutation for Object-Centric Learning with Autoregressive Transformers", "abstract": "Unsupervised object-centric learning aims to decompose scenes into\ninterpretable object entities, termed slots. Slot-based auto-encoders stand out\nas a prominent method for this task. Within them, crucial aspects include\nguiding the encoder to generate object-specific slots and ensuring the decoder\nutilizes them during reconstruction. This work introduces two novel techniques,\n(i) an attention-based self-training approach, which distills superior\nslot-based attention masks from the decoder to the encoder, enhancing object\nsegmentation, and (ii) an innovative patch-order permutation strategy for\nautoregressive transformers that strengthens the role of slot vectors in\nreconstruction. The effectiveness of these strategies is showcased\nexperimentally. The combined approach significantly surpasses prior slot-based\nautoencoder methods in unsupervised object segmentation, especially with\ncomplex real-world images. We provide the implementation code at\nhttps://github.com/gkakogeorgiou/spot .", "keywords": [], "authors_list": ["Ioannis Kakogeorgiou", "Spyros Gidaris", "Konstantinos Karantzalos", "Nikos Komodakis"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f26c"}, "filepath": "data/2405.17725.png", "tags": [], "_media_type": "image", "_rand": 0.9999231247399498, "arXiv_link": "https://arxiv.org/abs/2405.17725", "other_link": "https://github.com/yiyulics/CSEC.", "title": "Color Shift Estimation-and-Correction for Image Enhancement", "abstract": "Images captured under sub-optimal illumination conditions may contain both\nover- and under-exposures. Current approaches mainly focus on adjusting image\nbrightness, which may exacerbate the color tone distortion in under-exposed\nareas and fail to restore accurate colors in over-exposed regions.
We observe\nthat over- and under-exposed regions display opposite color tone distribution\nshifts with respect to each other, which may not be easily normalized in joint\nmodeling as they usually do not have ``normal-exposed'' regions/pixels as\nreference. In this paper, we propose a novel method to enhance images with both\nover- and under-exposures by learning to estimate and correct such color\nshifts. Specifically, we first derive the color feature maps of the brightened\nand darkened versions of the input image via a UNet-based network, followed by\na pseudo-normal feature generator to produce pseudo-normal color feature maps.\nWe then propose a novel COlor Shift Estimation (COSE) module to estimate the\ncolor shifts between the derived brightened (or darkened) color feature maps\nand the pseudo-normal color feature maps. The COSE module corrects the\nestimated color shifts of the over- and under-exposed regions separately. We\nfurther propose a novel COlor MOdulation (COMO) module to modulate the\nseparately corrected colors in the over- and under-exposed regions to produce\nthe enhanced image. Comprehensive experiments show that our method outperforms\nexisting approaches. Project webpage: https://github.com/yiyulics/CSEC.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yiyu Li", "Ke Xu", "Gerhard Hancke", "Rynson W.H. Lau"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f26d"}, "filepath": "data/2311.17113.png", "tags": [], "_media_type": "image", "_rand": 0.9994349348156859, "arXiv_link": "https://arxiv.org/abs/2311.17113", "other_link": "", "title": "Human Gaussian Splatting : Real-time Rendering of Animatable Avatars", "abstract": "This work addresses the problem of real-time rendering of photorealistic\nhuman body avatars learned from multi-view videos. While the classical\napproaches to model and render virtual humans generally use a textured mesh,\nrecent research has developed neural body representations that achieve\nimpressive visual quality. However, these models are difficult to render in\nreal-time and their quality degrades when the character is animated with body\nposes different than the training observations. We propose an animatable human\nmodel based on 3D Gaussian Splatting, that has recently emerged as a very\nefficient alternative to neural radiance fields. The body is represented by a\nset of gaussian primitives in a canonical space which is deformed with a coarse\nto fine approach that combines forward skinning and local non-rigid refinement.\nWe describe how to learn our Human Gaussian Splatting (HuGS) model in an\nend-to-end fashion from multi-view observations, and evaluate it against the\nstate-of-the-art approaches for novel pose synthesis of clothed body. 
Our\nmethod achieves 1.5 dB PSNR improvement over the state-of-the-art on THuman4\ndataset while being able to render in real-time (80 fps for 512x512\nresolution).", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Arthur Moreau", "Jifei Song", "Helisa Dhamo", "Richard Shaw", "Yiren Zhou", "Eduardo P\u00e9rez-Pellitero"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f26e"}, "filepath": "data/2303.11684.png", "tags": [], "_media_type": "image", "_rand": 0.9997984592844004, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2303.11684", "other_link": "https://openi.pcl.ac.cn/Cordium/SpikeCV}", "title": "Boosting Spike Camera Image Reconstruction from a Perspective of Dealing with Spike Fluctuations", "abstract": "SpikeCV is a new open-source computer vision platform for the spike camera,\nwhich is a neuromorphic visual sensor that has developed rapidly in recent\nyears. In the spike camera, each pixel position directly accumulates the light\nintensity and asynchronously fires spikes. The output binary spikes can reach a\nfrequency of 40,000 Hz. As a new type of visual expression, spike sequence has\nhigh spatiotemporal completeness and preserves the continuous visual\ninformation of the external world. Taking advantage of the low latency and high\ndynamic range of the spike camera, many spike-based algorithms have made\nsignificant progress, such as high-quality imaging and ultra-high-speed target\ndetection.\n To build up a community ecology for the spike vision to facilitate more users\nto take advantage of the spike camera, SpikeCV provides a variety of\nultra-high-speed scene datasets, hardware interfaces, and an easy-to-use\nmodules library. SpikeCV focuses on encapsulation for spike data,\nstandardization for dataset interfaces, modularization for vision tasks, and\nreal-time applications for challenging scenes. With the advent of the\nopen-source Python ecosystem, modules of SpikeCV can be used as a Python\nlibrary to fulfilled most of the numerical analysis needs of researchers. We\ndemonstrate the efficiency of the SpikeCV on offline inference and real-time\napplications. The project repository address are\n\\url{https://openi.pcl.ac.cn/Cordium/SpikeCV} and\n\\url{https://github.com/Zyj061/SpikeCV", "keywords": ["Low-level vision"], "authors_list": ["Rui Zhao", "Ruiqin Xiong", "Jing Zhao", "Jian Zhang", "Xiaopeng Fan", "Zhaofei Yu", "Tiejun Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f26f"}, "filepath": "data/2404.00485.png", "tags": [], "_media_type": "image", "_rand": 0.9997811806868742, "arXiv_link": "https://arxiv.org/abs/2404.00485", "other_link": "", "title": "DiffHuman: Probabilistic Photorealistic 3D Reconstruction of Humans", "abstract": "We present DiffHuman, a probabilistic method for photorealistic 3D human\nreconstruction from a single RGB image. Despite the ill-posed nature of this\nproblem, most methods are deterministic and output a single solution, often\nresulting in a lack of geometric detail and blurriness in unseen or uncertain\nregions. 
In contrast, DiffHuman predicts a probability distribution over 3D\nreconstructions conditioned on an input 2D image, which allows us to sample\nmultiple detailed 3D avatars that are consistent with the image. DiffHuman is\nimplemented as a conditional diffusion model that denoises pixel-aligned 2D\nobservations of an underlying 3D shape representation. During inference, we may\nsample 3D avatars by iteratively denoising 2D renders of the predicted 3D\nrepresentation. Furthermore, we introduce a generator neural network that\napproximates rendering with considerably reduced runtime (55x speed up),\nresulting in a novel dual-branch diffusion framework. Our experiments show that\nDiffHuman can produce diverse and detailed reconstructions for the parts of the\nperson that are unseen or uncertain in the input image, while remaining\ncompetitive with the state-of-the-art when reconstructing visible surfaces.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Akash Sengupta", "Thiemo Alldieck", "NIKOS KOLOTOUROS", "Enric Corona", "Andrei Zanfir", "Cristian Sminchisescu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f270"}, "filepath": "data/2403.17601.png", "tags": [], "_media_type": "image", "_rand": 0.9991864639885257, "arXiv_link": "https://arxiv.org/abs/2403.17601", "other_link": "", "title": "LASIL: Learner-Aware Supervised Imitation Learning For Long-term Microscopic Traffic Simulation", "abstract": "Microscopic traffic simulation plays a crucial role in transportation\nengineering by providing insights into individual vehicle behavior and overall\ntraffic flow. However, creating a realistic simulator that accurately\nreplicates human driving behaviors in various traffic conditions presents\nsignificant challenges. Traditional simulators relying on heuristic models\noften fail to deliver accurate simulations due to the complexity of real-world\ntraffic environments. Due to the covariate shift issue, existing imitation\nlearning-based simulators often fail to generate stable long-term simulations.\nIn this paper, we propose a novel approach called learner-aware supervised\nimitation learning to address the covariate shift problem in multi-agent\nimitation learning. By leveraging a variational autoencoder simultaneously\nmodeling the expert and learner state distribution, our approach augments\nexpert states such that the augmented state is aware of learner state\ndistribution. 
Our method, applied to urban traffic simulation, demonstrates\nsignificant improvements over existing state-of-the-art baselines in both\nshort-term microscopic and long-term macroscopic realism when evaluated on the\nreal-world dataset pNEUMA.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Ke Guo", "Zhenwei Miao", "Wei Jing", "Weiwei Liu", "Weizi Li", "Dayang Hao", "Jia Pan"], "category_name": "Artificial Intelligence", "all_categories": ["Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f271"}, "filepath": "data/2404.01819.png", "tags": [], "_media_type": "image", "_rand": 0.9991970823363453, "arXiv_link": "https://arxiv.org/abs/2404.01819", "other_link": "", "title": "Sparse Semi-Detr: Sparse Learnable Queries for Semi-Supervised Object Detection", "abstract": "In this paper, we address the limitations of the DETR-based semi-supervised\nobject detection (SSOD) framework, particularly focusing on the challenges\nposed by the quality of object queries. In DETR-based SSOD, the one-to-one\nassignment strategy provides inaccurate pseudo-labels, while the one-to-many\nassignments strategy leads to overlapping predictions. These issues compromise\ntraining efficiency and degrade model performance, especially in detecting\nsmall or occluded objects. We introduce Sparse Semi-DETR, a novel\ntransformer-based, end-to-end semi-supervised object detection solution to\novercome these challenges. Sparse Semi-DETR incorporates a Query Refinement\nModule to enhance the quality of object queries, significantly improving\ndetection capabilities for small and partially obscured objects. Additionally,\nwe integrate a Reliable Pseudo-Label Filtering Module that selectively filters\nhigh-quality pseudo-labels, thereby enhancing detection accuracy and\nconsistency. On the MS-COCO and Pascal VOC object detection benchmarks, Sparse\nSemi-DETR achieves a significant improvement over current state-of-the-art\nmethods that highlight Sparse Semi-DETR's effectiveness in semi-supervised\nobject detection, particularly in challenging scenarios involving small or\npartially obscured objects.", "keywords": [], "authors_list": ["Tahira Shehzadi", "Khurram Azeem Hashmi", "Didier Stricker", "Muhammad Zeshan Afzal"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f272"}, "filepath": "data/2309.00610.png", "tags": [], "_media_type": "image", "_rand": 0.9991799476758282, "arXiv_link": "https://arxiv.org/abs/2309.00610", "other_link": "", "title": "CityDreamer: Compositional Generative Model of Unbounded 3D Cities", "abstract": "3D city generation is a desirable yet challenging task, since humans are more\nsensitive to structural distortions in urban environments. Additionally,\ngenerating 3D cities is more complex than 3D natural scenes since buildings, as\nobjects of the same class, exhibit a wider range of appearances compared to the\nrelatively consistent appearance of objects like trees in natural scenes. To\naddress these challenges, we propose \\textbf{CityDreamer}, a compositional\ngenerative model designed specifically for unbounded 3D cities. Our key insight\nis that 3D city generation should be a composition of different types of neural\nfields: 1) various building instances, and 2) background stuff, such as roads\nand green lands. 
Specifically, we adopt the bird's eye view scene\nrepresentation and employ a volumetric render for both instance-oriented and\nstuff-oriented neural fields. The generative hash grid and periodic positional\nembedding are tailored as scene parameterization to suit the distinct\ncharacteristics of building instances and background stuff. Furthermore, we\ncontribute a suite of CityGen Datasets, including OSM and GoogleEarth, which\ncomprises a vast amount of real-world city imagery to enhance the realism of\nthe generated 3D cities both in their layouts and appearances. CityDreamer\nachieves state-of-the-art performance not only in generating realistic 3D\ncities but also in localized editing within the generated cities.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Haozhe Xie", "Zhaoxi Chen", "Fangzhou Hong", "Ziwei Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f273"}, "filepath": "data/2312.16145.png", "tags": [], "_media_type": "image", "_rand": 0.9990706177459024, "arXiv_link": "https://arxiv.org/abs/2312.16145", "other_link": "https://lyumengyao.github.io/projects/spm.", "title": "One-dimensional Adapter to Rule Them All: Concepts, Diffusion Models and Erasing Applications", "abstract": "The prevalent use of commercial and open-source diffusion models (DMs) for\ntext-to-image generation prompts risk mitigation to prevent undesired\nbehaviors. Existing concept erasing methods in academia are all based on full\nparameter or specification-based fine-tuning, from which we observe the\nfollowing issues: 1) Generation alternation towards erosion: Parameter drift\nduring target elimination causes alternations and potential deformations across\nall generations, even eroding other concepts at varying degrees, which is more\nevident with multi-concept erased; 2) Transfer inability & deployment\ninefficiency: Previous model-specific erasure impedes the flexible combination\nof concepts and the training-free transfer towards other models, resulting in\nlinear cost growth as the deployment scenarios increase. To achieve\nnon-invasive, precise, customizable, and transferable elimination, we ground\nour erasing framework on one-dimensional adapters to erase multiple concepts\nfrom most DMs at once across versatile erasing applications. The\nconcept-SemiPermeable structure is injected as a Membrane (SPM) into any DM to\nlearn targeted erasing, and meantime the alteration and erosion phenomenon is\neffectively mitigated via a novel Latent Anchoring fine-tuning strategy. Once\nobtained, SPMs can be flexibly combined and plug-and-play for other DMs without\nspecific re-tuning, enabling timely and efficient adaptation to diverse\nscenarios. During generation, our Facilitated Transport mechanism dynamically\nregulates the permeability of each SPM to respond to different input prompts,\nfurther minimizing the impact on other concepts. Quantitative and qualitative\nresults across ~40 concepts, 7 DMs and 4 erasing applications have demonstrated\nthe superior erasing of SPM. 
Our code and pre-tuned SPMs are available on the\nproject page https://lyumengyao.github.io/projects/spm.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Mengyao Lyu", "Yuhong Yang", "Haiwen Hong", "Hui Chen", "Xuan Jin", "Yuan He", "Hui Xue", "Jungong Han", "Guiguang Ding"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f274"}, "filepath": "data/2405.06284.png", "tags": [], "_media_type": "image", "_rand": 0.9995963476445003, "arXiv_link": "https://arxiv.org/abs/2405.06284", "other_link": "", "title": "Modality-agnostic Domain Generalizable Medical Image Segmentation by Multi-Frequency in Multi-Scale Attention", "abstract": "Generalizability in deep neural networks plays a pivotal role in medical\nimage segmentation. However, deep learning-based medical image analyses tend to\noverlook the importance of frequency variance, which is critical element for\nachieving a model that is both modality-agnostic and domain-generalizable.\nAdditionally, various models fail to account for the potential information loss\nthat can arise from multi-task learning under deep supervision, a factor that\ncan impair the model representation ability. To address these challenges, we\npropose a Modality-agnostic Domain Generalizable Network (MADGNet) for medical\nimage segmentation, which comprises two key components: a Multi-Frequency in\nMulti-Scale Attention (MFMSA) block and Ensemble Sub-Decoding Module (E-SDM).\nThe MFMSA block refines the process of spatial feature extraction, particularly\nin capturing boundary features, by incorporating multi-frequency and\nmulti-scale features, thereby offering informative cues for tissue outline and\nanatomical structures. Moreover, we propose E-SDM to mitigate information loss\nin multi-task learning with deep supervision, especially during substantial\nupsampling from low resolution. We evaluate the segmentation performance of\nMADGNet across six modalities and fifteen datasets. Through extensive\nexperiments, we demonstrate that MADGNet consistently outperforms\nstate-of-the-art models across various modalities, showcasing superior\nsegmentation performance. This affirms MADGNet as a robust solution for medical\nimage segmentation that excels in diverse imaging scenarios. Our MADGNet code\nis available in GitHub Link.", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Ju-Hyeon Nam", "Nur Suriza Syazwany", "Su Jung Kim", "Sang-Chul Lee"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f275"}, "filepath": "data/2312.17269.png", "tags": [], "_media_type": "image", "_rand": 0.9992195314386655, "arXiv_link": "https://arxiv.org/abs/2312.17269", "other_link": "", "title": "CoG-DQA: Chain-of-Guiding Learning with Large Language Models for Diagram Question Answering", "abstract": "Conversational question answering (convQA) over knowledge graphs (KGs)\ninvolves answering multi-turn natural language questions about information\ncontained in a KG. State-of-the-art methods of ConvQA often struggle with\ninexplicit question-answer pairs. 
These inputs are easy for human beings to\nunderstand given a conversation history, but hard for a machine to interpret,\nwhich can degrade ConvQA performance. To address this problem, we propose a\nreinforcement learning (RL) based model, CornNet, which utilizes question\nreformulations generated by large language models (LLMs) to improve ConvQA\nperformance. CornNet adopts a teacher-student architecture where a teacher\nmodel learns question representations using human writing reformulations, and a\nstudent model to mimic the teacher model's output via reformulations generated\nby LLMs. The learned question representation is then used by an RL model to\nlocate the correct answer in a KG. Extensive experimental results show that\nCornNet outperforms state-of-the-art convQA models.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Shaowei Wang", "Lingling Zhang", "Longji Zhu", "Tao Qin", "Kim-Hui Yap", "Xinyu Zhang", "Jun Liu"], "category_name": "Computation and Language", "all_categories": ["Computation and Language", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f276"}, "filepath": "data/2402.17172.png", "tags": [], "_media_type": "image", "_rand": 0.9990286260587389, "arXiv_link": "https://arxiv.org/abs/2402.17172", "other_link": "", "title": "Lane2Seq: Towards Unified Lane Detection via Sequence Generation", "abstract": "In this paper, we present a novel sequence generation-based framework for\nlane detection, called Lane2Seq. It unifies various lane detection formats by\ncasting lane detection as a sequence generation task. This is different from\nprevious lane detection methods, which depend on well-designed task-specific\nhead networks and corresponding loss functions. Lane2Seq only adopts a plain\ntransformer-based encoder-decoder architecture with a simple cross-entropy\nloss. Additionally, we propose a new multi-format model tuning based on\nreinforcement learning to incorporate the task-specific knowledge into\nLane2Seq. Experimental results demonstrate that such a simple sequence\ngeneration paradigm not only unifies lane detection but also achieves\ncompetitive performance on benchmarks. For example, Lane2Seq gets 97.95\\% and\n97.42\\% F1 score on Tusimple and LLAMAS datasets, establishing a new\nstate-of-the-art result for two benchmarks.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Kunyang Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f277"}, "filepath": "data/2403.08629.png", "tags": [], "_media_type": "image", "_rand": 0.9990315715781406, "arXiv_link": "https://arxiv.org/abs/2403.08629", "other_link": "", "title": "Scaling Up Dynamic 3D Human-Scene Interaction Modelling", "abstract": "Confronting the challenges of data scarcity and advanced motion synthesis in\nhuman-scene interaction modeling, we introduce the TRUMANS dataset alongside a\nnovel HSI motion synthesis method. TRUMANS stands as the most comprehensive\nmotion-captured HSI dataset currently available, encompassing over 15 hours of\nhuman interactions across 100 indoor scenes. It intricately captures whole-body\nhuman motions and part-level object dynamics, focusing on the realism of\ncontact. 
This dataset is further scaled up by transforming physical\nenvironments into exact virtual models and applying extensive augmentations to\nappearance and motion for both humans and objects while maintaining interaction\nfidelity. Utilizing TRUMANS, we devise a diffusion-based autoregressive model\nthat efficiently generates HSI sequences of any length, taking into account\nboth scene context and intended actions. In experiments, our approach shows\nremarkable zero-shot generalizability on a range of 3D scene datasets (e.g.,\nPROX, Replica, ScanNet, ScanNet++), producing motions that closely mimic\noriginal motion-captured sequences, as confirmed by quantitative experiments\nand human studies.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Nan Jiang", "Zhiyuan Zhang", "Hongjie Li", "Xiaoxuan Ma", "Zan Wang", "Yixin Chen", "Tengyu Liu", "Yixin Zhu", "Siyuan Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f278"}, "filepath": "data/2309.15785.png", "tags": [], "_media_type": "image", "_rand": 0.9991923390176511, "arXiv_link": "http://export.arxiv.org/abs/2309.15785", "other_link": "", "title": "BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning", "abstract": "The recent progress in Large Language Models (LLM) has spurred various\nadvancements in image-language conversation agents, while how to build a\nproficient video-based dialogue system is still under exploration. Considering\nthe extensive scale of LLM and visual backbone, minimal GPU memory is left for\nfacilitating effective temporal modeling, which is crucial for comprehending\nand providing feedback on videos. To this end, we propose Branching Temporal\nAdapter (BT-Adapter), a novel method for extending image-language pretrained\nmodels into the video domain. Specifically, BT-Adapter serves as a plug-and-use\ntemporal modeling branch alongside the pretrained visual encoder, which is\ntuned while keeping the backbone frozen. Just pretrained once, BT-Adapter can\nbe seamlessly integrated into all image conversation models using this version\nof CLIP, enabling video conversations without the need for video instructions.\nBesides, we develop a unique asymmetric token masking strategy inside the\nbranch with tailor-made training tasks for BT-Adapter, facilitating faster\nconvergence and better results. Thanks to BT-Adapter, we are able to empower\nexisting multimodal dialogue models with strong video understanding\ncapabilities without incurring excessive GPU costs. Without bells and whistles,\nBT-Adapter achieves (1) state-of-the-art zero-shot results on various video\ntasks using thousands of fewer GPU hours. (2) better performance than current\nvideo chatbots without any video instruction tuning. (3) state-of-the-art\nresults of video chatting using video instruction tuning, outperforming\nprevious SOTAs by a large margin.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Ruyang Liu", "Chen Li", "Yixiao Ge", "Thomas H. 
Li", "Ying Shan", "Ge Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f279"}, "filepath": "data/2404.06194.png", "tags": [], "_media_type": "image", "_rand": 0.99904164475701, "arXiv_link": "https://arxiv.org/abs/2404.06194", "other_link": "https://github.com/ltttpku/CMD-SE-release.", "title": "Exploring the Potential of Large Foundation Models for Open-Vocabulary HOI Detection", "abstract": "Open-vocabulary human-object interaction (HOI) detection, which is concerned\nwith the problem of detecting novel HOIs guided by natural language, is crucial\nfor understanding human-centric scenes. However, prior zero-shot HOI detectors\noften employ the same levels of feature maps to model HOIs with varying\ndistances, leading to suboptimal performance in scenes containing human-object\npairs with a wide range of distances. In addition, these detectors primarily\nrely on category names and overlook the rich contextual information that\nlanguage can provide, which is essential for capturing open vocabulary concepts\nthat are typically rare and not well-represented by category names alone. In\nthis paper, we introduce a novel end-to-end open vocabulary HOI detection\nframework with conditional multi-level decoding and fine-grained semantic\nenhancement (CMD-SE), harnessing the potential of Visual-Language Models\n(VLMs). Specifically, we propose to model human-object pairs with different\ndistances with different levels of feature maps by incorporating a soft\nconstraint during the bipartite matching process. Furthermore, by leveraging\nlarge language models (LLMs) such as GPT models, we exploit their extensive\nworld knowledge to generate descriptions of human body part states for various\ninteractions. Then we integrate the generalizable and fine-grained semantics of\nhuman body parts to improve interaction recognition. Experimental results on\ntwo datasets, SWIG-HOI and HICO-DET, demonstrate that our proposed method\nachieves state-of-the-art results in open vocabulary HOI detection. The code\nand models are available at https://github.com/ltttpku/CMD-SE-release.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Ting Lei", "Shaofeng Yin", "Yang Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f27a"}, "filepath": "data/2312.06704.png", "tags": [], "_media_type": "image", "_rand": 0.9995618061522421, "arXiv_link": "https://arxiv.org/abs/2312.06704", "other_link": "https://river-zhang.github.io/SIFU-projectpage/", "title": "SIFU: Side-view Conditioned Implicit Function for Real-world Usable Clothed Human Reconstruction", "abstract": "Creating high-quality 3D models of clothed humans from single images for\nreal-world applications is crucial. Despite recent advancements, accurately\nreconstructing humans in complex poses or with loose clothing from in-the-wild\nimages, along with predicting textures for unseen areas, remains a significant\nchallenge. A key limitation of previous methods is their insufficient prior\nguidance in transitioning from 2D to 3D and in texture prediction. 
In response,\nwe introduce SIFU (Side-view Conditioned Implicit Function for Real-world\nUsable Clothed Human Reconstruction), a novel approach combining a Side-view\nDecoupling Transformer with a 3D Consistent Texture Refinement pipeline.SIFU\nemploys a cross-attention mechanism within the transformer, using SMPL-X\nnormals as queries to effectively decouple side-view features in the process of\nmapping 2D features to 3D. This method not only improves the precision of the\n3D models but also their robustness, especially when SMPL-X estimates are not\nperfect. Our texture refinement process leverages text-to-image diffusion-based\nprior to generate realistic and consistent textures for invisible views.\nThrough extensive experiments, SIFU surpasses SOTA methods in both geometry and\ntexture reconstruction, showcasing enhanced robustness in complex scenarios and\nachieving an unprecedented Chamfer and P2S measurement. Our approach extends to\npractical applications such as 3D printing and scene building, demonstrating\nits broad utility in real-world scenarios. Project page\nhttps://river-zhang.github.io/SIFU-projectpage/ .", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Zechuan Zhang", "Zongxin Yang", "Yi Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f27b"}, "filepath": "data/2403.12570.png", "tags": [], "_media_type": "image", "_rand": 0.9996903984105852, "arXiv_link": "https://arxiv.org/abs/2403.12570", "other_link": "https://github.com/MediaBrain-SJTU/MVFA-AD", "title": "Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images", "abstract": "Recent advancements in large-scale visual-language pre-trained models have\nled to significant progress in zero-/few-shot anomaly detection within natural\nimage domains. However, the substantial domain divergence between natural and\nmedical images limits the effectiveness of these methodologies in medical\nanomaly detection. This paper introduces a novel lightweight multi-level\nadaptation and comparison framework to repurpose the CLIP model for medical\nanomaly detection. Our approach integrates multiple residual adapters into the\npre-trained visual encoder, enabling a stepwise enhancement of visual features\nacross different levels. This multi-level adaptation is guided by multi-level,\npixel-wise visual-language feature alignment loss functions, which recalibrate\nthe model's focus from object semantics in natural imagery to anomaly\nidentification in medical images. The adapted features exhibit improved\ngeneralization across various medical data types, even in zero-shot scenarios\nwhere the model encounters unseen medical modalities and anatomical regions\nduring training. Our experiments on medical anomaly detection benchmarks\ndemonstrate that our method significantly surpasses current state-of-the-art\nmodels, with an average AUC improvement of 6.24% and 7.33% for anomaly\nclassification, 2.03% and 2.37% for anomaly segmentation, under the zero-shot\nand few-shot settings, respectively. 
Source code is available at:\nhttps://github.com/MediaBrain-SJTU/MVFA-AD", "keywords": ["Multimodal models and vision-language models", "Medical imaging and biological vision"], "authors_list": ["Chaoqin Huang", "Aofan Jiang", "Jinghao Feng", "Ya Zhang", "Xinchao Wang", "Yanfeng Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f27c"}, "filepath": "data/2311.16813.png", "tags": [], "_media_type": "image", "_rand": 0.9992846419218145, "arXiv_link": "https://arxiv.org/abs/2311.16813", "other_link": "", "title": "Panacea: Panoramic and Controllable Video Generation for Autonomous Driving", "abstract": "The field of autonomous driving increasingly demands high-quality annotated\ntraining data. In this paper, we propose Panacea, an innovative approach to\ngenerate panoramic and controllable videos in driving scenarios, capable of\nyielding an unlimited numbers of diverse, annotated samples pivotal for\nautonomous driving advancements. Panacea addresses two critical challenges:\n'Consistency' and 'Controllability.' Consistency ensures temporal and\ncross-view coherence, while Controllability ensures the alignment of generated\ncontent with corresponding annotations. Our approach integrates a novel 4D\nattention and a two-stage generation pipeline to maintain coherence,\nsupplemented by the ControlNet framework for meticulous control by the\nBird's-Eye-View (BEV) layouts. Extensive qualitative and quantitative\nevaluations of Panacea on the nuScenes dataset prove its effectiveness in\ngenerating high-quality multi-view driving-scene videos. This work notably\npropels the field of autonomous driving by effectively augmenting the training\ndataset used for advanced BEV perception techniques.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Yuqing Wen", "Yucheng Zhao", "Yingfei Liu", "Fan Jia", "Yanhui Wang", "Chong Luo", "Chi Zhang", "Tiancai Wang", "Xiaoyan Sun", "Xiangyu Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f27d"}, "filepath": "data/2404.04050.png", "tags": [], "_media_type": "image", "_rand": 0.9997586713662882, "arXiv_link": "https://arxiv.org/abs/2404.04050", "other_link": "", "title": "No Time to Train: Empowering Non-Parametric Networks for Few-shot 3D Scene Segmentation", "abstract": "To reduce the reliance on large-scale datasets, recent works in 3D\nsegmentation resort to few-shot learning. Current 3D few-shot segmentation\nmethods first pre-train models on 'seen' classes, and then evaluate their\ngeneralization performance on 'unseen' classes. However, the prior pre-training\nstage not only introduces excessive time overhead but also incurs a significant\ndomain gap on 'unseen' classes. To tackle these issues, we propose a\nNon-parametric Network for few-shot 3D Segmentation, Seg-NN, and its Parametric\nvariant, Seg-PN. Without training, Seg-NN extracts dense representations by\nhand-crafted filters and achieves comparable performance to existing parametric\nmodels. Due to the elimination of pre-training, Seg-NN can alleviate the domain\ngap issue and save a substantial amount of time. 
Based on Seg-NN, Seg-PN only\nrequires training a lightweight QUEry-Support Transferring (QUEST) module,\nwhich enhances the interaction between the support set and query set.\nExperiments suggest that Seg-PN outperforms previous state-of-the-art method by\n+4.19% and +7.71% mIoU on S3DIS and ScanNet datasets respectively, while\nreducing training time by -90%, indicating its effectiveness and efficiency.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Xiangyang Zhu", "Renrui Zhang", "Bowei He", "Ziyu Guo", "Jiaming Liu", "Han Xiao", "Chaoyou Fu", "Hao Dong", "Peng Gao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f27e"}, "filepath": "data/2403.06225.png", "tags": [], "_media_type": "image", "_rand": 0.9997044326361348, "arXiv_link": "https://arxiv.org/abs/2403.06225", "other_link": "https://github.com/Boeun-Kim/MoST.", "title": "MoST: Motion Style Transformer between Diverse Action Contents", "abstract": "While existing motion style transfer methods are effective between two\nmotions with identical content, their performance significantly diminishes when\ntransferring style between motions with different contents. This challenge lies\nin the lack of clear separation between content and style of a motion. To\ntackle this challenge, we propose a novel motion style transformer that\neffectively disentangles style from content and generates a plausible motion\nwith transferred style from a source motion. Our distinctive approach to\nachieving the goal of disentanglement is twofold: (1) a new architecture for\nmotion style transformer with `part-attentive style modulator across body\nparts' and `Siamese encoders that encode style and content features\nseparately'; (2) style disentanglement loss. Our method outperforms existing\nmethods and demonstrates exceptionally high quality, particularly in motion\npairs with different contents, without the need for heuristic post-processing.\nCodes are available at https://github.com/Boeun-Kim/MoST.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Boeun Kim", "Jungho Kim", "Hyung Jin Chang", "Jin Young Choi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f27f"}, "filepath": "data/2308.13287.png", "tags": [], "_media_type": "image", "_rand": 0.9999685399117104, "arXiv_link": "https://arxiv.org/abs/2308.13287", "other_link": "", "title": "Learned Lossless Image Compression based on Bit Plane Slicing", "abstract": "JPEG is one of the most popular image compression methods. It is beneficial\nto compress those existing JPEG files without introducing additional\ndistortion. In this paper, we propose a deep learning based method to further\ncompress JPEG images losslessly. Specifically, we propose a Multi-Level\nParallel Conditional Modeling (ML-PCM) architecture, which enables parallel\ndecoding in different granularities. First, luma and chroma are processed\nindependently to allow parallel coding. Second, we propose pipeline parallel\ncontext model (PPCM) and compressed checkerboard context model (CCCM) for the\neffective conditional modeling and efficient decoding within luma and chroma\ncomponents. 
Our method has much lower latency while achieves better compression\nratio compared with previous SOTA. After proper software optimization, we can\nobtain a good throughput of 57 FPS for 1080P images on NVIDIA T4 GPU.\nFurthermore, combined with quantization, our approach can also act as a lossy\nJPEG codec which has obvious advantage over SOTA lossy compression methods in\nhigh bit rate (bpp$>0.9$).", "keywords": ["Efficient and scalable vision"], "authors_list": ["Zhe Zhang", "Huairui Wang", "Zhenzhong Chen", "Shan Liu"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f280"}, "filepath": "data/2311.10959.png", "tags": [], "_media_type": "image", "_rand": 0.9992622447644447, "arXiv_link": "https://arxiv.org/abs/2311.10959", "other_link": "https://github.com/caiyuanhao1998/SAX-NeRF", "title": "Structure-Aware Sparse-View X-ray 3D Reconstruction", "abstract": "X-ray, known for its ability to reveal internal structures of objects, is\nexpected to provide richer information for 3D reconstruction than visible\nlight. Yet, existing neural radiance fields (NeRF) algorithms overlook this\nimportant nature of X-ray, leading to their limitations in capturing structural\ncontents of imaged objects. In this paper, we propose a framework,\nStructure-Aware X-ray Neural Radiodensity Fields (SAX-NeRF), for sparse-view\nX-ray 3D reconstruction. Firstly, we design a Line Segment-based Transformer\n(Lineformer) as the backbone of SAX-NeRF. Lineformer captures internal\nstructures of objects in 3D space by modeling the dependencies within each line\nsegment of an X-ray. Secondly, we present a Masked Local-Global (MLG) ray\nsampling strategy to extract contextual and geometric information in 2D\nprojection. Plus, we collect a larger-scale dataset X3D covering wider X-ray\napplications. Experiments on X3D show that SAX-NeRF surpasses previous\nNeRF-based methods by 12.56 and 2.49 dB on novel view synthesis and CT\nreconstruction. Code, models, and data are released at\nhttps://github.com/caiyuanhao1998/SAX-NeRF", "keywords": ["Deep learning architectures and techniques", "Medical imaging and biological vision"], "authors_list": ["Yuanhao Cai", "Jiahao Wang", "Alan L. Yuille", "Zongwei Zhou", "Angtian Wang"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f281"}, "filepath": "data/2311.14960.png", "tags": [], "_media_type": "image", "_rand": 0.9993031055954781, "arXiv_link": "https://arxiv.org/abs/2311.14960", "other_link": "", "title": "Point Cloud Pre-training with Diffusion Models", "abstract": "Pre-training a model and then fine-tuning it on downstream tasks has\ndemonstrated significant success in the 2D image and NLP domains. However, due\nto the unordered and non-uniform density characteristics of point clouds, it is\nnon-trivial to explore the prior knowledge of point clouds and pre-train a\npoint cloud backbone. In this paper, we propose a novel pre-training method\ncalled Point cloud Diffusion pre-training (PointDif). We consider the point\ncloud pre-training task as a conditional point-to-point generation problem and\nintroduce a conditional point generator. 
This generator aggregates the features\nextracted by the backbone and employs them as the condition to guide the\npoint-to-point recovery from the noisy point cloud, thereby assisting the\nbackbone in capturing both local and global geometric priors as well as the\nglobal point density distribution of the object. We also present a recurrent\nuniform sampling optimization strategy, which enables the model to uniformly\nrecover from various noise levels and learn from balanced supervision. Our\nPointDif achieves substantial improvement across various real-world datasets\nfor diverse downstream tasks such as classification, segmentation and\ndetection. Specifically, PointDif attains 70.0% mIoU on S3DIS Area 5 for the\nsegmentation task and achieves an average improvement of 2.4% on ScanObjectNN\nfor the classification task compared to TAP. Furthermore, our pre-training\nframework can be flexibly applied to diverse point cloud backbones and bring\nconsiderable gains.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["xiao zheng", "Xiaoshui Huang", "Guofeng Mei", "Zhaoyang Lyu", "Yuenan Hou", "Wanli Ouyang", "Bo Dai", "Yongshun Gong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f282"}, "filepath": "data/2311.00399.png", "tags": [], "_media_type": "image", "_rand": 0.9995228865368081, "arXiv_link": "https://arxiv.org/abs/2311.00399", "other_link": "", "title": "Instance-level Expert Knowledge and Aggregate Discriminative Attention for Radiology Report Generation", "abstract": "Automatic generation of radiology reports holds crucial clinical value, as it\ncan alleviate substantial workload on radiologists and remind less experienced\nones of potential anomalies. Despite the remarkable performance of various\nimage captioning methods in the natural image field, generating accurate\nreports for medical images still faces challenges, i.e., disparities in visual\nand textual data, and lack of accurate domain knowledge. To address these\nissues, we propose an enhanced knowledge injection framework, which utilizes\ntwo branches to extract different types of knowledge. The Weighted Concept\nKnowledge (WCK) branch is responsible for introducing clinical medical concepts\nweighted by TF-IDF scores. The Multimodal Retrieval Knowledge (MRK) branch\nextracts triplets from similar reports, emphasizing crucial clinical\ninformation related to entity positions and existence. By integrating this\nfiner-grained and well-structured knowledge with the current image, we are able\nto leverage the multi-source knowledge gain to ultimately facilitate more\naccurate report generation. Extensive experiments have been conducted on two\npublic benchmarks, demonstrating that our method achieves superior performance\nover other state-of-the-art methods. 
Ablation studies further validate the\neffectiveness of two extracted knowledge sources.", "keywords": ["Multimodal models and vision-language models", "Deep learning architectures and techniques"], "authors_list": ["Shenshen Bu", "Taiji Li", "Zhiming Dai", "Yuedong Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f283"}, "filepath": "data/2309.06255v3.png", "tags": [], "_media_type": "image", "_rand": 0.9991489983832926, "arXiv_link": "https://arxiv.org/html/2309.06255v3", "other_link": "https://github.com/GeWu-Lab/Valuate-and-Enhance-Multimodal-Cooperation}.", "title": "Enhancing Multimodal Cooperation via Sample-level Modality Valuation", "abstract": "One primary topic of multimodal learning is to jointly incorporate\nheterogeneous information from different modalities. However, most models often\nsuffer from unsatisfactory multimodal cooperation, which cannot jointly utilize\nall modalities well. Some methods are proposed to identify and enhance the\nworse learnt modality, but they are often hard to provide the fine-grained\nobservation of multimodal cooperation at sample-level with theoretical support.\nHence, it is essential to reasonably observe and improve the fine-grained\ncooperation between modalities, especially when facing realistic scenarios\nwhere the modality discrepancy could vary across different samples. To this\nend, we introduce a sample-level modality valuation metric to evaluate the\ncontribution of each modality for each sample. Via modality valuation, we\nobserve that modality discrepancy indeed could be different at sample-level,\nbeyond the global contribution discrepancy at dataset-level. We further analyze\nthis issue and improve cooperation between modalities at sample-level by\nenhancing the discriminative ability of low-contributing modalities in a\ntargeted manner. Overall, our methods reasonably observe the fine-grained\nuni-modal contribution and achieve considerable improvement. The source code\nand dataset are available at\n\\url{https://github.com/GeWu-Lab/Valuate-and-Enhance-Multimodal-Cooperation}.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Yake Wei", "Ruoxuan Feng", "Zihe Wang", "Di Hu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f284"}, "filepath": "data/2312.10144.png", "tags": [], "_media_type": "image", "_rand": 0.9999280036806145, "arXiv_link": "https://arxiv.org/abs/2312.10144", "other_link": "https://github.com/layer6ai-labs/fusemix.", "title": "Data-Efficient Multimodal Fusion on a Single GPU", "abstract": "The goal of multimodal alignment is to learn a single latent space that is\nshared between multimodal inputs. The most powerful models in this space have\nbeen trained using massive datasets of paired inputs and large-scale\ncomputational resources, making them prohibitively expensive to train in many\npractical scenarios. We surmise that existing unimodal encoders pre-trained on\nlarge amounts of unimodal data should provide an effective bootstrap to create\nmultimodal models from unimodal ones at much lower costs. 
We therefore propose\nFuseMix, a multimodal augmentation scheme that operates on the latent spaces of\narbitrary pre-trained unimodal encoders. Using FuseMix for multimodal\nalignment, we achieve competitive performance -- and in certain cases\noutperform state-of-the art methods -- in both image-text and audio-text\nretrieval, with orders of magnitude less compute and data: for example, we\noutperform CLIP on the Flickr30K text-to-image retrieval task with $\\sim \\!\n600\\times$ fewer GPU days and $\\sim \\! 80\\times$ fewer image-text pairs.\nAdditionally, we show how our method can be applied to convert pre-trained\ntext-to-image generative models into audio-to-image ones. Code is available at:\nhttps://github.com/layer6ai-labs/fusemix.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["No\u00ebl Vouitsis", "Zhaoyan Liu", "Satya Krishna Gorti", "Valentin Villecroze", "Jesse C. Cresswell", "Guangwei Yu", "Gabriel Loaiza-Ganem", "Maksims Volkovs"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f285"}, "filepath": "data/2308.09710.png", "tags": [], "_media_type": "image", "_rand": 0.999294516443962, "arXiv_link": "https://arxiv.org/abs/2308.09710", "other_link": "", "title": "SimDA: Simple Diffusion Adapter for Efficient Video Generation", "abstract": "The recent wave of AI-generated content has witnessed the great development\nand success of Text-to-Image (T2I) technologies. By contrast, Text-to-Video\n(T2V) still falls short of expectations though attracting increasing interests.\nExisting works either train from scratch or adapt large T2I model to videos,\nboth of which are computation and resource expensive. In this work, we propose\na Simple Diffusion Adapter (SimDA) that fine-tunes only 24M out of 1.1B\nparameters of a strong T2I model, adapting it to video generation in a\nparameter-efficient way. In particular, we turn the T2I model for T2V by\ndesigning light-weight spatial and temporal adapters for transfer learning.\nBesides, we change the original spatial attention to the proposed Latent-Shift\nAttention (LSA) for temporal consistency. With similar model architecture, we\nfurther train a video super-resolution model to generate high-definition\n(1024x1024) videos. In addition to T2V generation in the wild, SimDA could also\nbe utilized in one-shot video editing with only 2 minutes tuning. Doing so, our\nmethod could minimize the training effort with extremely few tunable parameters\nfor model adaptation.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Zhen Xing", "Qi Dai", "Han Hu", "Zuxuan Wu", "Yu-Gang Jiang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f286"}, "filepath": "data/2401.11085.png", "tags": [], "_media_type": "image", "_rand": 0.9998626731424322, "arXiv_link": "https://arxiv.org/abs/2401.11085", "other_link": "", "title": "Learning Adaptive Spatial Coherent Correlations for Speech-Preserving Facial Expression Manipulation", "abstract": "Domain shift poses a significant challenge in Cross-Domain Facial Expression\nRecognition (CD-FER) due to the distribution variation across different\ndomains. 
Current works mainly focus on learning domain-invariant features\nthrough global feature adaptation, while neglecting the transferability of\nlocal features. Additionally, these methods lack discriminative supervision\nduring training on target datasets, resulting in deteriorated feature\nrepresentation in target domain. To address these limitations, we propose an\nAdaptive Global-Local Representation Learning and Selection (AGLRLS) framework.\nThe framework incorporates global-local adversarial adaptation and\nsemantic-aware pseudo label generation to enhance the learning of\ndomain-invariant and discriminative feature during training. Meanwhile, a\nglobal-local prediction consistency learning is introduced to improve\nclassification results during inference. Specifically, the framework consists\nof separate global-local adversarial learning modules that learn\ndomain-invariant global and local features independently. We also design a\nsemantic-aware pseudo label generation module, which computes semantic labels\nbased on global and local features. Moreover, a novel dynamic threshold\nstrategy is employed to learn the optimal thresholds by leveraging independent\nprediction of global and local features, ensuring filtering out the unreliable\npseudo labels while retaining reliable ones. These labels are utilized for\nmodel optimization through the adversarial learning process in an end-to-end\nmanner. During inference, a global-local prediction consistency module is\ndeveloped to automatically learn an optimal result from multiple predictions.\nWe conduct comprehensive experiments and analysis based on a fair evaluation\nbenchmark. The results demonstrate that the proposed framework outperforms the\ncurrent competing methods by a substantial margin.", "keywords": [], "authors_list": ["Tianshui Chen", "Jianman Lin", "Zhijing Yang", "Chunmei Qing", "Liang Lin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f287"}, "filepath": "data/2403.07432.png", "tags": [], "_media_type": "image", "_rand": 0.999624656830343, "arXiv_link": "https://arxiv.org/abs/2403.07432", "other_link": "", "title": "Bring Event into RGB and LiDAR: Hierarchical Visual-Motion Fusion for Scene Flow", "abstract": "Single RGB or LiDAR is the mainstream sensor for the challenging scene flow,\nwhich relies heavily on visual features to match motion features. Compared with\nsingle modality, existing methods adopt a fusion strategy to directly fuse the\ncross-modal complementary knowledge in motion space. However, these direct\nfusion methods may suffer the modality gap due to the visual intrinsic\nheterogeneous nature between RGB and LiDAR, thus deteriorating motion features.\nWe discover that event has the homogeneous nature with RGB and LiDAR in both\nvisual and motion spaces. In this work, we bring the event as a bridge between\nRGB and LiDAR, and propose a novel hierarchical visual-motion fusion framework\nfor scene flow, which explores a homogeneous space to fuse the cross-modal\ncomplementary knowledge for physical interpretation. In visual fusion, we\ndiscover that event has a complementarity (relative v.s. absolute) in luminance\nspace with RGB for high dynamic imaging, and has a complementarity (local\nboundary v.s. global shape) in scene structure space with LiDAR for structure\nintegrity. 
In motion fusion, we figure out that RGB, event and LiDAR are\ncomplementary (spatial-dense, temporal-dense v.s. spatiotemporal-sparse) to\neach other in correlation space, which motivates us to fuse their motion\ncorrelations for motion continuity. The proposed hierarchical fusion can\nexplicitly fuse the multimodal knowledge to progressively improve scene flow\nfrom visual space to motion space. Extensive experiments have been performed to\nverify the superiority of the proposed method.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Hanyu Zhou", "Yi Chang", "Zhiwei Shi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f288"}, "filepath": "data/2405.08815.png", "tags": [], "_media_type": "image", "_rand": 0.9993633636322059, "arXiv_link": "https://arxiv.org/abs/2405.08815", "other_link": "", "title": "Efficient Vision-Language Pre-training by Cluster Masking", "abstract": "We propose a simple strategy for masking image patches during visual-language\ncontrastive learning that improves the quality of the learned representations\nand the training speed. During each iteration of training, we randomly mask\nclusters of visually similar image patches, as measured by their raw pixel\nintensities. This provides an extra learning signal, beyond the contrastive\ntraining itself, since it forces a model to predict words for masked visual\nstructures solely from context. It also speeds up training by reducing the\namount of data used in each image. We evaluate the effectiveness of our model\nby pre-training on a number of benchmarks, finding that it outperforms other\nmasking strategies, such as FLIP, on the quality of the learned representation.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Zihao Wei", "Zixuan Pan", "Andrew Owens"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f289"}, "filepath": "data/2311.18695v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999506711790154, "arXiv_link": "https://arxiv.org/abs/2311.18695v1", "other_link": "", "title": "Seg2Reg: Differentiable 2D Segmentation to 1D Regression Rendering for 360 Room Layout Reconstruction", "abstract": "State-of-the-art single-view 360-degree room layout reconstruction methods\nformulate the problem as a high-level 1D (per-column) regression task. On the\nother hand, traditional low-level 2D layout segmentation is simpler to learn\nand can represent occluded regions, but it requires complex post-processing for\nthe targeting layout polygon and sacrifices accuracy. We present Seg2Reg to\nrender 1D layout depth regression from the 2D segmentation map in a\ndifferentiable and occlusion-aware way, marrying the merits of both sides.\nSpecifically, our model predicts floor-plan density for the input\nequirectangular 360-degree image. Formulating the 2D layout representation as a\ndensity field enables us to employ `flattened' volume rendering to form 1D\nlayout depth regression. In addition, we propose a novel 3D warping\naugmentation on layout to improve generalization. 
Finally, we re-implement\nrecent room layout reconstruction methods into our codebase for benchmarking\nand explore modern backbones and training techniques to serve as the strong\nbaseline. Our model significantly outperforms previous arts. The code will be\nmade available upon publication.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Cheng Sun", "Wei-En Tai", "Yu-Lin Shih", "Kuan-Wei Chen", "Yong-Jing Syu", "Kent Selwyn The", "Yu-Chiang Frank Wang", "Hwann-Tzong Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f28a"}, "filepath": "data/2404.15010.png", "tags": [], "_media_type": "image", "_rand": 0.9997308558121539, "arXiv_link": "https://arxiv.org/abs/2404.15010", "other_link": "https://github.com/sunshuofeng/X-3D}{https://github.com/sunshuofeng/X-3D}.", "title": "X-3D: Explicit 3D Structure Modeling for Point Cloud Recognition", "abstract": "Numerous prior studies predominantly emphasize constructing relation vectors\nfor individual neighborhood points and generating dynamic kernels for each\nvector and embedding these into high-dimensional spaces to capture implicit\nlocal structures. However, we contend that such implicit high-dimensional\nstructure modeling approach inadequately represents the local geometric\nstructure of point clouds due to the absence of explicit structural\ninformation. Hence, we introduce X-3D, an explicit 3D structure modeling\napproach. X-3D functions by capturing the explicit local structural information\nwithin the input 3D space and employing it to produce dynamic kernels with\nshared weights for all neighborhood points within the current local region.\nThis modeling approach introduces effective geometric prior and significantly\ndiminishes the disparity between the local structure of the embedding space and\nthe original input point cloud, thereby improving the extraction of local\nfeatures. Experiments show that our method can be used on a variety of methods\nand achieves state-of-the-art performance on segmentation, classification,\ndetection tasks with lower extra computational cost, such as \\textbf{90.7\\%} on\nScanObjectNN for classification, \\textbf{79.2\\%} on S3DIS 6 fold and\n\\textbf{74.3\\%} on S3DIS Area 5 for segmentation, \\textbf{76.3\\%} on ScanNetV2\nfor segmentation and \\textbf{64.5\\%} mAP , \\textbf{46.9\\%} mAP on SUN RGB-D and\n\\textbf{69.0\\%} mAP , \\textbf{51.1\\%} mAP on ScanNetV2 . Our code is available\nat\n\\href{https://github.com/sunshuofeng/X-3D}{https://github.com/sunshuofeng/X-3D}.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Shuofeng Sun", "Yongming Rao", "Jiwen Lu", "Haibin Yan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f28b"}, "filepath": "data/2404.02185.png", "tags": [], "_media_type": "image", "_rand": 0.9994871172518616, "arXiv_link": "https://arxiv.org/abs/2404.02185", "other_link": "", "title": "NeRFCodec: Neural Feature Compression Meets Neural Radiance Fields for Memory-Efficient Scene Representation", "abstract": "The emergence of Neural Radiance Fields (NeRF) has greatly impacted 3D scene\nmodeling and novel-view synthesis. 
As a kind of visual media for 3D scene\nrepresentation, compression with high rate-distortion performance is an eternal\ntarget. Motivated by advances in neural compression and neural field\nrepresentation, we propose NeRFCodec, an end-to-end NeRF compression framework\nthat integrates non-linear transform, quantization, and entropy coding for\nmemory-efficient scene representation. Since training a non-linear transform\ndirectly on a large scale of NeRF feature planes is impractical, we discover\nthat pre-trained neural 2D image codec can be utilized for compressing the\nfeatures when adding content-specific parameters. Specifically, we reuse neural\n2D image codec but modify its encoder and decoder heads, while keeping the\nother parts of the pre-trained decoder frozen. This allows us to train the full\npipeline via supervision of rendering loss and entropy loss, yielding the\nrate-distortion balance by updating the content-specific parameters. At test\ntime, the bitstreams containing latent code, feature decoder head, and other\nside information are transmitted for communication. Experimental results\ndemonstrate our method outperforms existing NeRF compression methods, enabling\nhigh-quality novel view synthesis with a memory budget of 0.5 MB.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Sicheng Li", "Hao Li", "Yiyi Liao", "Lu Yu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f28c"}, "filepath": "data/2403.09093v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998345512631798, "arXiv_link": "https://arxiv.org/html/2403.09093v1", "other_link": "https://whaohan.github.io/desigen.", "title": "Desigen: A Pipeline for Controllable Design Template Generation", "abstract": "Templates serve as a good starting point to implement a design (e.g., banner,\nslide) but it takes great effort from designers to manually create. In this\npaper, we present Desigen, an automatic template creation pipeline which\ngenerates background images as well as harmonious layout elements over the\nbackground. Different from natural images, a background image should preserve\nenough non-salient space for the overlaying layout elements. To equip existing\nadvanced diffusion-based models with stronger spatial control, we propose two\nsimple but effective techniques to constrain the saliency distribution and\nreduce the attention weight in desired regions during the background generation\nprocess. Then conditioned on the background, we synthesize the layout with a\nTransformer-based autoregressive generator. To achieve a more harmonious\ncomposition, we propose an iterative inference strategy to adjust the\nsynthesized background and layout in multiple rounds. We constructed a design\ndataset with more than 40k advertisement banners to verify our approach.\nExtensive experiments demonstrate that the proposed pipeline generates\nhigh-quality templates comparable to human designers. More than a single-page\ndesign, we further show an application of presentation generation that outputs\na set of theme-consistent slides. 
The data and code are available at\nhttps://whaohan.github.io/desigen.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Haohan Weng", "Danqing Huang", "YU QIAO", "Hu Zheng", "Chin-Yew Lin", "Tong Zhang", "C. L. Philip Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f28d"}, "filepath": "data/2404.00098.png", "tags": [], "_media_type": "image", "_rand": 0.9999424365611326, "arXiv_link": "https://arxiv.org/abs/2404.00098", "other_link": "", "title": "Sparse views, Near light: A practical paradigm for uncalibrated point-light photometric stereo", "abstract": "Neural approaches have shown a significant progress on camera-based\nreconstruction. But they require either a fairly dense sampling of the viewing\nsphere, or pre-training on an existing dataset, thereby limiting their\ngeneralizability. In contrast, photometric stereo (PS) approaches have shown\ngreat potential for achieving high-quality reconstruction under sparse\nviewpoints. Yet, they are impractical because they typically require tedious\nlaboratory conditions, are restricted to dark rooms, and often multi-staged,\nmaking them subject to accumulated errors. To address these shortcomings, we\npropose an end-to-end uncalibrated multi-view PS framework for reconstructing\nhigh-resolution shapes acquired from sparse viewpoints in a real-world\nenvironment. We relax the dark room assumption, and allow a combination of\nstatic ambient lighting and dynamic near LED lighting, thereby enabling easy\ndata capture outside the lab. Experimental validation confirms that it\noutperforms existing baseline approaches in the regime of sparse viewpoints by\na large margin. This allows to bring high-accuracy 3D reconstruction from the\ndark room to the real world, while maintaining a reasonable data capture\ncomplexity.", "keywords": ["Computational imaging and physics-based vision", "Efficient and scalable vision"], "authors_list": ["Mohammed Brahimi", "Bjoern Haefner", "Zhenzhang Ye", "Bastian Goldluecke", "Daniel Cremers"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f28e"}, "filepath": "data/2403.15227.png", "tags": [], "_media_type": "image", "_rand": 0.9990046232273037, "arXiv_link": "https://arxiv.org/abs/2403.15227", "other_link": "", "title": "LeGO: Leveraging a Surface Deformation Network for Animatable Stylized Face Generation with One Example", "abstract": "Recent advances in 3D face stylization have made significant strides in few\nto zero-shot settings. However, the degree of stylization achieved by existing\nmethods is often not sufficient for practical applications because they are\nmostly based on statistical 3D Morphable Models (3DMM) with limited variations.\nTo this end, we propose a method that can produce a highly stylized 3D face\nmodel with desired topology. 
Our methods train a surface deformation network\nwith 3DMM and translate its domain to the target style using a paired exemplar.\nThe network achieves stylization of the 3D face mesh by mimicking the style of\nthe target using a differentiable renderer and directional CLIP losses.\nAdditionally, during the inference process, we utilize a Mesh Agnostic Encoder\n(MAGE) that takes deformation target, a mesh of diverse topologies as input to\nthe stylization process and encodes its shape into our latent space. The\nresulting stylized face model can be animated by commonly used 3DMM blend\nshapes. A set of quantitative and qualitative evaluations demonstrate that our\nmethod can produce highly stylized face meshes according to a given style and\noutput them in a desired topology. We also demonstrate example applications of\nour method including image-based stylized avatar generation, linear\ninterpolation of geometric styles, and facial animation of stylized avatars.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Soyeon Yoon", "Kwan Yun", "Kwanggyoon Seo", "Sihun Cha", "Jung Eun Yoo", "Junyong Noh"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Unknown", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f28f"}, "filepath": "data/2312.03884.png", "tags": [], "_media_type": "image", "_rand": 0.9997167826546595, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2312.03884", "other_link": "https://kovenyu.com/WonderJourney/", "title": "Intelligent Grimm - Open-ended Visual Storytelling via Latent Diffusion Models", "abstract": "We introduce WonderJourney, a modularized framework for perpetual 3D scene\ngeneration. Unlike prior work on view generation that focuses on a single type\nof scenes, we start at any user-provided location (by a text description or an\nimage) and generate a journey through a long sequence of diverse yet coherently\nconnected 3D scenes. We leverage an LLM to generate textual descriptions of the\nscenes in this journey, a text-driven point cloud generation pipeline to make a\ncompelling and coherent sequence of 3D scenes, and a large VLM to verify the\ngenerated scenes. We show compelling, diverse visual results across various\nscene types and styles, forming imaginary \"wonderjourneys\". Project website:\nhttps://kovenyu.com/WonderJourney/", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Chang Liu", "Haoning Wu", "Yujie Zhong", "Xiaoyun Zhang", "Yanfeng Wang", "Weidi Xie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f290"}, "filepath": "data/2405.11905.png", "tags": [], "_media_type": "image", "_rand": 0.9992541077238621, "arXiv_link": "https://arxiv.org/abs/2405.11905", "other_link": "https://github.com/thswodnjs3/CSTA.", "title": "CSTA: CNN-based Spatiotemporal Attention for Video Summarization", "abstract": "Video summarization aims to generate a concise representation of a video,\ncapturing its essential content and key moments while reducing its overall\nlength. 
Although several methods employ attention mechanisms to handle\nlong-term dependencies, they often fail to capture the visual significance\ninherent in frames. To address this limitation, we propose a CNN-based\nSpatioTemporal Attention (CSTA) method that stacks each feature of frames from\na single video to form image-like frame representations and applies 2D CNN to\nthese frame features. Our methodology relies on CNN to comprehend the inter and\nintra-frame relations and to find crucial attributes in videos by exploiting\nits ability to learn absolute positions within images. In contrast to previous\nwork compromising efficiency by designing additional modules to focus on\nspatial importance, CSTA requires minimal computational overhead as it uses CNN\nas a sliding window. Extensive experiments on two benchmark datasets (SumMe and\nTVSum) demonstrate that our proposed approach achieves state-of-the-art\nperformance with fewer MACs compared to previous methods. Codes are available\nat https://github.com/thswodnjs3/CSTA.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Jaewon Son", "Jaehun Park", "Kwangsu Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f291"}, "filepath": "data/2404.02285.png", "tags": [], "_media_type": "image", "_rand": 0.999764761779078, "arXiv_link": "https://arxiv.org/abs/2404.02285", "other_link": "https://github.com/FereshteShakeri/FewShot-CLIP-Strong-Baseline.git}.", "title": "LP++: A Surprisingly Strong Linear Probe for Few-Shot CLIP", "abstract": "In a recent, strongly emergent literature on few-shot CLIP adaptation, Linear\nProbe (LP) has been often reported as a weak baseline. This has motivated\nintensive research building convoluted prompt learning or feature adaptation\nstrategies. In this work, we propose and examine from convex-optimization\nperspectives a generalization of the standard LP baseline, in which the linear\nclassifier weights are learnable functions of the text embedding, with\nclass-wise multipliers blending image and text knowledge. As our objective\nfunction depends on two types of variables, i.e., the class visual prototypes\nand the learnable blending parameters, we propose a computationally efficient\nblock coordinate Majorize-Minimize (MM) descent algorithm. In our full-batch MM\noptimizer, which we coin LP++, step sizes are implicit, unlike standard\ngradient descent practices where learning rates are intensively searched over\nvalidation sets. By examining the mathematical properties of our loss (e.g.,\nLipschitz gradient continuity), we build majorizing functions yielding\ndata-driven learning rates and derive approximations of the loss's minima,\nwhich provide data-informed initialization of the variables. Our image-language\nobjective function, along with these non-trivial optimization insights and\ningredients, yields, surprisingly, highly competitive few-shot CLIP\nperformances. Furthermore, LP++ operates in black-box, relaxes intensive\nvalidation searches for the optimization hyper-parameters, and runs\norders-of-magnitudes faster than state-of-the-art few-shot CLIP adaptation\nmethods. 
Our code is available at:\n\\url{https://github.com/FereshteShakeri/FewShot-CLIP-Strong-Baseline.git}.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Yunshi HUANG", "Fereshteh Shakeri", "Jose Dolz", "Malik Boudiaf", "Houda Bahig", "Ismail Ben Ayed"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f292"}, "filepath": "data/2403.06758.png", "tags": [], "_media_type": "image", "_rand": 0.9993609563923475, "arXiv_link": "https://arxiv.org/abs/2403.06758", "other_link": "https://github.com/gmberton/EarthLoc", "title": "EarthLoc: Astronaut Photography Localization by Indexing Earth from Space", "abstract": "Astronaut photography, spanning six decades of human spaceflight, presents a\nunique Earth observations dataset with immense value for both scientific\nresearch and disaster response. Despite its significance, accurately localizing\nthe geographical extent of these images, crucial for effective utilization,\nposes substantial challenges. Current manual localization efforts are\ntime-consuming, motivating the need for automated solutions. We propose a novel\napproach - leveraging image retrieval - to address this challenge efficiently.\nWe introduce innovative training techniques, including Year-Wise Data\nAugmentation and a Neutral-Aware Multi-Similarity Loss, which contribute to the\ndevelopment of a high-performance model, EarthLoc. We develop six evaluation\ndatasets and perform a comprehensive benchmark comparing EarthLoc to existing\nmethods, showcasing its superior efficiency and accuracy. Our approach marks a\nsignificant advancement in automating the localization of astronaut\nphotography, which will help bridge a critical gap in Earth observations data.\nCode and datasets are available at https://github.com/gmberton/EarthLoc", "keywords": ["Efficient and scalable vision"], "authors_list": ["Gabriele Berton", "Alex Stoken", "Barbara Caputo", "Carlo Masone"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f293"}, "filepath": "data/2403.07719.png", "tags": [], "_media_type": "image", "_rand": 0.999013816875616, "arXiv_link": "https://arxiv.org/abs/2403.07719", "other_link": "https://github.com/WonderLandxD/WiKG.", "title": "Dynamic Graph Representation with Knowledge-aware Attention for Histopathology Whole Slide Image Analysis", "abstract": "Histopathological whole slide images (WSIs) classification has become a\nfoundation task in medical microscopic imaging processing. Prevailing\napproaches involve learning WSIs as instance-bag representations, emphasizing\nsignificant instances but struggling to capture the interactions between\ninstances. Additionally, conventional graph representation methods utilize\nexplicit spatial positions to construct topological structures but restrict the\nflexible interaction capabilities between instances at arbitrary locations,\nparticularly when spatially distant. In response, we propose a novel dynamic\ngraph representation algorithm that conceptualizes WSIs as a form of the\nknowledge graph structure. Specifically, we dynamically construct neighbors and\ndirected edge embeddings based on the head and tail relationships between\ninstances. 
Then, we devise a knowledge-aware attention mechanism that can\nupdate the head node features by learning the joint attention score of each\nneighbor and edge. Finally, we obtain a graph-level embedding through the\nglobal pooling process of the updated head, serving as an implicit\nrepresentation for the WSI classification. Our end-to-end graph representation\nlearning approach has outperformed the state-of-the-art WSI analysis methods on\nthree TCGA benchmark datasets and in-house test sets. Our code is available at\nhttps://github.com/WonderLandxD/WiKG.", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Jiawen Li", "Yuxuan Chen", "Hongbo Chu", "Sun Qiehe", "Tian Guan", "Anjia Han", "Yonghong He"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f294"}, "filepath": "data/2401.08209.png", "tags": [], "_media_type": "image", "_rand": 0.9993319986149061, "arXiv_link": "https://arxiv.org/abs/2401.08209", "other_link": "", "title": "Transcending the Limit of Local Window: Advanced Super-Resolution Transformer with Adaptive Token Dictionary", "abstract": "Single Image Super-Resolution is a classic computer vision problem that\ninvolves estimating high-resolution (HR) images from low-resolution (LR) ones.\nAlthough deep neural networks (DNNs), especially Transformers for\nsuper-resolution, have seen significant advancements in recent years,\nchallenges still remain, particularly in limited receptive field caused by\nwindow-based self-attention. To address these issues, we introduce a group of\nauxiliary Adaptive Token Dictionary to SR Transformer and establish an ATD-SR\nmethod. The introduced token dictionary could learn prior information from\ntraining data and adapt the learned prior to specific testing image through an\nadaptive refinement step. The refinement strategy could not only provide global\ninformation to all input tokens but also group image tokens into categories.\nBased on category partitions, we further propose a category-based\nself-attention mechanism designed to leverage distant but similar tokens for\nenhancing input features. The experimental results show that our method\nachieves the best performance on various single image super-resolution\nbenchmarks.", "keywords": ["Low-level vision"], "authors_list": ["Leheng Zhang", "Yawei Li", "Xingyu Zhou", "Xiaorui Zhao", "Shuhang Gu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f295"}, "filepath": "data/2403.19326.png", "tags": [], "_media_type": "image", "_rand": 0.9995349662606873, "arXiv_link": "https://arxiv.org/abs/2403.19326", "other_link": "", "title": "MedBN: Robust Test-Time Adaptation against Malicious Test Samples", "abstract": "Test-time adaptation (TTA) has emerged as a promising solution to address\nperformance decay due to unforeseen distribution shifts between training and\ntest data. While recent TTA methods excel in adapting to test data variations,\nsuch adaptability exposes a model to vulnerability against malicious examples,\nan aspect that has received limited attention. Previous studies have uncovered\nsecurity vulnerabilities within TTA even when a small proportion of the test\nbatch is maliciously manipulated. 
In response to the emerging threat, we\npropose median batch normalization (MedBN), leveraging the robustness of the\nmedian for statistics estimation within the batch normalization layer during\ntest-time inference. Our method is algorithm-agnostic, thus allowing seamless\nintegration with existing TTA frameworks. Our experimental results on benchmark\ndatasets, including CIFAR10-C, CIFAR100-C and ImageNet-C, consistently\ndemonstrate that MedBN outperforms existing approaches in maintaining robust\nperformance across different attack scenarios, encompassing both instant and\ncumulative attacks. Through extensive experiments, we show that our approach\nsustains the performance even in the absence of attacks, achieving a practical\nbalance between robustness and performance.", "keywords": [], "authors_list": ["Hyejin Park", "Jeongyeon Hwang", "Sunung Mun", "Sangdon Park", "Jungseul Ok"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Cryptography and Security", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f296"}, "filepath": "data/2405.07481.png", "tags": [], "_media_type": "image", "_rand": 0.9990657788095259, "arXiv_link": "https://arxiv.org/abs/2405.07481", "other_link": "", "title": "Text Grouping Adapter: Adapting Pre-trained Text Detector for Layout Analysis", "abstract": "Significant progress has been made in scene text detection models since the\nrise of deep learning, but scene text layout analysis, which aims to group\ndetected text instances as paragraphs, has not kept pace. Previous works either\ntreated text detection and grouping using separate models, or train a model\nfrom scratch while using a unified one. All of them have not yet made full use\nof the already well-trained text detectors and easily obtainable detection\ndatasets. In this paper, we present Text Grouping Adapter (TGA), a module that\ncan enable the utilization of various pre-trained text detectors to learn\nlayout analysis, allowing us to adopt a well-trained text detector right off\nthe shelf or just fine-tune it efficiently. Designed to be compatible with\nvarious text detector architectures, TGA takes detected text regions and image\nfeatures as universal inputs to assemble text instance features. To capture\nbroader contextual information for layout analysis, we propose to predict text\ngroup masks from text instance features by one-to-many assignment. Our\ncomprehensive experiments demonstrate that, even with frozen pre-trained\nmodels, incorporating our TGA into various pre-trained text detectors and text\nspotters can achieve superior layout analysis performance, simultaneously\ninheriting generalized text detection ability from pre-training. 
In the case of\nfull parameter fine-tuning, we can further improve layout analysis performance.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Tianci Bi", "Xiaoyi Zhang", "Zhizheng Zhang", "Wenxuan Xie", "Cuiling Lan", "Yan Lu", "Nanning Zheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f297"}, "filepath": "data/2404.19294.png", "tags": [], "_media_type": "image", "_rand": 0.9993366766768121, "arXiv_link": "https://arxiv.org/abs/2404.19294", "other_link": "", "title": "Masked Spatial Propagation Network for Sparsity-Adaptive Depth Refinement", "abstract": "The main function of depth completion is to compensate for an insufficient\nand unpredictable number of sparse depth measurements of hardware sensors.\nHowever, existing research on depth completion assumes that the sparsity -- the\nnumber of points or LiDAR lines -- is fixed for training and testing. Hence,\nthe completion performance drops severely when the number of sparse depths\nchanges significantly. To address this issue, we propose the sparsity-adaptive\ndepth refinement (SDR) framework, which refines monocular depth estimates using\nsparse depth points. For SDR, we propose the masked spatial propagation network\n(MSPN) to perform SDR with a varying number of sparse depths effectively by\ngradually propagating sparse depth information throughout the entire depth map.\nExperimental results demonstrate that MSPN achieves state-of-the-art\nperformance on both SDR and conventional depth completion scenarios.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Jinyoung Jun", "Jae-Han Lee", "Chang-Su Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f298"}, "filepath": "data/2403.04258.png", "tags": [], "_media_type": "image", "_rand": 0.9990875748034049, "arXiv_link": "https://arxiv.org/abs/2403.04258", "other_link": "https://nifangbaage.github.io/DATTT.", "title": "Depth-aware Test-Time Training for Zero-shot Video Object Segmentation", "abstract": "Zero-shot Video Object Segmentation (ZSVOS) aims at segmenting the primary\nmoving object without any human annotations. Mainstream solutions mainly focus\non learning a single model on large-scale video datasets, which struggle to\ngeneralize to unseen videos. In this work, we introduce a test-time training\n(TTT) strategy to address the problem. Our key insight is to enforce the model\nto predict consistent depth during the TTT process. In detail, we first train a\nsingle network to perform both segmentation and depth prediction tasks. This\ncan be effectively learned with our specifically designed depth modulation\nlayer. Then, for the TTT process, the model is updated by predicting consistent\ndepth maps for the same frame under different data augmentations. In addition,\nwe explore different TTT weight updating strategies. Our empirical results\nsuggest that the momentum-based weight initialization and looping-based\ntraining scheme lead to more stable improvements. Experiments show that the\nproposed method achieves clear improvements on ZSVOS. 
Our proposed video TTT\nstrategy provides significant superiority over state-of-the-art TTT methods.\nOur code is available at: https://nifangbaage.github.io/DATTT.", "keywords": [], "authors_list": ["Weihuang Liu", "Xi Shen", "Haolun Li", "Xiuli Bi", "Bo Liu", "Chi-Man Pun", "Xiaodong Cun"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f299"}, "filepath": "data/2403.03077.png", "tags": [], "_media_type": "image", "_rand": 0.9999213106506083, "arXiv_link": "https://arxiv.org/abs/2403.03077", "other_link": "", "title": "Viewpoint-Aware Visual Grounding in 3D Scenes", "abstract": "3D visual grounding involves matching natural language descriptions with\ntheir corresponding objects in 3D spaces. Existing methods often face\nchallenges with accuracy in object recognition and struggle in interpreting\ncomplex linguistic queries, particularly with descriptions that involve\nmultiple anchors or are view-dependent. In response, we present the MiKASA\n(Multi-Key-Anchor Scene-Aware) Transformer. Our novel end-to-end trained model\nintegrates a self-attention-based scene-aware object encoder and an original\nmulti-key-anchor technique, enhancing object recognition accuracy and the\nunderstanding of spatial relationships. Furthermore, MiKASA improves the\nexplainability of decision-making, facilitating error diagnosis. Our model\nachieves the highest overall accuracy in the Referit3D challenge for both the\nSr3D and Nr3D datasets, particularly excelling by a large margin in categories\nthat require viewpoint-dependent descriptions.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Xiangxi Shi", "Zhonghua Wu", "Stefan Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f29a"}, "filepath": "data/2403.02767.png", "tags": [], "_media_type": "image", "_rand": 0.9995222764873868, "arXiv_link": "https://arxiv.org/abs/2403.02767", "other_link": "", "title": "DeconfuseTrack: Dealing with Confusion for Multi-Object Tracking", "abstract": "Accurate data association is crucial in reducing confusion, such as ID\nswitches and assignment errors, in multi-object tracking (MOT). However,\nexisting advanced methods often overlook the diversity among trajectories and\nthe ambiguity and conflicts present in motion and appearance cues, leading to\nconfusion among detections, trajectories, and associations when performing\nsimple global data association. To address this issue, we propose a simple,\nversatile, and highly interpretable data association approach called Decomposed\nData Association (DDA). DDA decomposes the traditional association problem into\nmultiple sub-problems using a series of non-learning-based modules and\nselectively addresses the confusion in each sub-problem by incorporating\ntargeted exploitation of new cues. Additionally, we introduce Occlusion-aware\nNon-Maximum Suppression (ONMS) to retain more occluded detections, thereby\nincreasing opportunities for association with trajectories and indirectly\nreducing the confusion caused by missed detections. 
Finally, based on DDA and\nONMS, we design a powerful multi-object tracker named DeconfuseTrack,\nspecifically focused on resolving confusion in MOT. Extensive experiments\nconducted on the MOT17 and MOT20 datasets demonstrate that our proposed DDA and\nONMS significantly enhance the performance of several popular trackers.\nMoreover, DeconfuseTrack achieves state-of-the-art performance on the MOT17 and\nMOT20 test sets, significantly outperforms the baseline tracker ByteTrack in\nmetrics such as HOTA, IDF1, AssA. This validates that our tracking design\neffectively reduces confusion caused by simple global association.", "keywords": [], "authors_list": ["Cheng Huang", "Shoudong Han", "Mengyu He", "Wenbo Zheng", "Yuhao Wei"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f29b"}, "filepath": "data/2311.17590v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993053460058643, "arXiv_link": "https://arxiv.org/html/2311.17590v2", "other_link": "https://ziqiaopeng.github.io/synctalk", "title": "SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis", "abstract": "Achieving high synchronization in the synthesis of realistic, speech-driven\ntalking head videos presents a significant challenge. Traditional Generative\nAdversarial Networks (GAN) struggle to maintain consistent facial identity,\nwhile Neural Radiance Fields (NeRF) methods, although they can address this\nissue, often produce mismatched lip movements, inadequate facial expressions,\nand unstable head poses. A lifelike talking head requires synchronized\ncoordination of subject identity, lip movements, facial expressions, and head\nposes. The absence of these synchronizations is a fundamental flaw, leading to\nunrealistic and artificial outcomes. To address the critical issue of\nsynchronization, identified as the \"devil\" in creating realistic talking heads,\nwe introduce SyncTalk. This NeRF-based method effectively maintains subject\nidentity, enhancing synchronization and realism in talking head synthesis.\nSyncTalk employs a Face-Sync Controller to align lip movements with speech and\ninnovatively uses a 3D facial blendshape model to capture accurate facial\nexpressions. Our Head-Sync Stabilizer optimizes head poses, achieving more\nnatural head movements. The Portrait-Sync Generator restores hair details and\nblends the generated head with the torso for a seamless visual experience.\nExtensive experiments and user studies demonstrate that SyncTalk outperforms\nstate-of-the-art methods in synchronization and realism. 
We recommend watching\nthe supplementary video: https://ziqiaopeng.github.io/synctalk", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Ziqiao Peng", "Wentao Hu", "Yue Shi", "Xiangyu Zhu", "Xiaomei Zhang", "Hao Zhao", "Jun He", "Hongyan Liu", "Zhaoxin Fan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f29c"}, "filepath": "data/2403.18293.png", "tags": [], "_media_type": "image", "_rand": 0.9994923597360669, "arXiv_link": "https://arxiv.org/abs/2403.18293", "other_link": "https://kdiaaa.github.io/tda/}.", "title": "Efficient Test-Time Adaptation of Vision-Language Models", "abstract": "Test-time adaptation with pre-trained vision-language models has attracted\nincreasing attention for tackling distribution shifts during the test time.\nThough prior studies have achieved very promising performance, they involve\nintensive computation which is severely unaligned with test-time adaptation. We\ndesign TDA, a training-free dynamic adapter that enables effective and\nefficient test-time adaptation with vision-language models. TDA works with a\nlightweight key-value cache that maintains a dynamic queue with few-shot pseudo\nlabels as values and the corresponding test-sample features as keys. Leveraging\nthe key-value cache, TDA allows adapting to test data gradually via progressive\npseudo label refinement which is super-efficient without incurring any\nbackpropagation. In addition, we introduce negative pseudo labeling that\nalleviates the adverse impact of pseudo label noises by assigning pseudo labels\nto certain negative classes when the model is uncertain about its pseudo label\npredictions. Extensive experiments over two benchmarks demonstrate TDA's\nsuperior effectiveness and efficiency as compared with the state-of-the-art.\nThe code has been released in \\url{https://kdiaaa.github.io/tda/}.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Adilbek Karmanov", "Dayan Guan", "Shijian Lu", "Abdulmotaleb El Saddik", "Eric P. Xing"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f29d"}, "filepath": "data/2307.08919v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999930788022476, "arXiv_link": "https://arxiv.org/abs/2307.08919v2", "other_link": "", "title": "Systematic comparison of semi-supervised and self-supervised learning for medical image classification", "abstract": "In many medical image classification problems, labeled data is scarce while\nunlabeled data is more available. Semi-supervised learning and self-supervised\nlearning are two different research directions that can improve accuracy by\nlearning from extra unlabeled data. Recent methods from both directions have\nreported significant gains on traditional benchmarks. Yet past benchmarks do\nnot focus on medical tasks and rarely compare self- and semi- methods together\non equal footing. Furthermore, past benchmarks often handle hyperparameter\ntuning suboptimally. First, they may not tune hyperparameters at all, leading\nto underfitting. Second, when tuning does occur, it often unrealistically uses\na labeled validation set much larger than the train set. 
Both cases make\npreviously published rankings of methods difficult to translate to practical\nsettings. This study contributes a systematic evaluation of self- and semi-\nmethods with a unified experimental protocol intended to guide a practitioner\nwith scarce overall labeled data and a limited compute budget. We answer two\nkey questions: Can hyperparameter tuning be effective with realistic-sized\nvalidation sets? If so, when all methods are tuned well, which self- or\nsemi-supervised methods reach the best accuracy? Our study compares 13\nrepresentative semi- and self-supervised methods to strong labeled-set-only\nbaselines on 4 medical datasets. From 20000+ total GPU hours of computation, we\nprovide valuable best practices to resource-constrained, results-focused\npractitioners.", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Zhe Huang", "Ruijie Jiang", "Shuchin Aeron", "Michael C. Hughes"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f29e"}, "filepath": "data/2306.05427.png", "tags": [], "_media_type": "image", "_rand": 0.9997752104421671, "arXiv_link": "https://arxiv.org/abs/2306.05427", "other_link": "", "title": "Grounded Text-to-Image Synthesis with Attention Refocusing", "abstract": "Driven by the scalable diffusion models trained on large-scale datasets,\ntext-to-image synthesis methods have shown compelling results. However, these\nmodels still fail to precisely follow the text prompt involving multiple\nobjects, attributes, or spatial compositions. In this paper, we reveal the\npotential causes in the diffusion model's cross-attention and self-attention\nlayers. We propose two novel losses to refocus attention maps according to a\ngiven spatial layout during sampling. Creating the layouts manually requires\nadditional effort and can be tedious. Therefore, we explore using large\nlanguage models (LLM) to produce these layouts for our method. We conduct\nextensive experiments on the DrawBench, HRS, and TIFA benchmarks to evaluate\nour proposed method. We show that our proposed attention refocusing effectively\nimproves the controllability of existing approaches.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Quynh Phung", "Songwei Ge", "Jia-Bin Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f29f"}, "filepath": "data/2403.11234.png", "tags": [], "_media_type": "image", "_rand": 0.9998092894020127, "arXiv_link": "https://arxiv.org/abs/2403.11234", "other_link": "", "title": "Universal Semi-Supervised Domain Adaptation by Mitigating Common-Class Bias", "abstract": "Domain adaptation is a critical task in machine learning that aims to improve\nmodel performance on a target domain by leveraging knowledge from a related\nsource domain. In this work, we introduce Universal Semi-Supervised Domain\nAdaptation (UniSSDA), a practical yet challenging setting where the target\ndomain is partially labeled, and the source and target label space may not\nstrictly match. 
UniSSDA is at the intersection of Universal Domain Adaptation\n(UniDA) and Semi-Supervised Domain Adaptation (SSDA): the UniDA setting does\nnot allow for fine-grained categorization of target private classes not\nrepresented in the source domain, while SSDA focuses on the restricted\nclosed-set setting where source and target label spaces match exactly. Existing\nUniDA and SSDA methods are susceptible to common-class bias in UniSSDA\nsettings, where models overfit to data distributions of classes common to both\ndomains at the expense of private classes. We propose a new prior-guided\npseudo-label refinement strategy to reduce the reinforcement of common-class\nbias due to pseudo-labeling, a common label propagation strategy in domain\nadaptation. We demonstrate the effectiveness of the proposed strategy on\nbenchmark datasets Office-Home, DomainNet, and VisDA. The proposed strategy\nattains the best performance across UniSSDA adaptation settings and establishes\na new baseline for UniSSDA.", "keywords": [], "authors_list": ["Wenyu Zhang", "Qingmu Liu", "Felix Ong", "Mohamed Ragab", "Chuan-Sheng Foo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2a0"}, "filepath": "data/2403.14418.png", "tags": [], "_media_type": "image", "_rand": 0.9999274289692687, "arXiv_link": "https://arxiv.org/abs/2403.14418", "other_link": "", "title": "OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation", "abstract": "The booming of 3D recognition in the 2020s began with the introduction of\npoint cloud transformers. They quickly overwhelmed sparse CNNs and became\nstate-of-the-art models, especially in 3D semantic segmentation. However,\nsparse CNNs are still valuable networks, due to their efficiency treasure, and\nease of application. In this work, we reexamine the design distinctions and\ntest the limits of what a sparse CNN can achieve. We discover that the key\ncredit to the performance difference is adaptivity. Specifically, we propose\ntwo key components, i.e., adaptive receptive fields (spatially) and adaptive\nrelation, to bridge the gap. This exploration led to the creation of\nOmni-Adaptive 3D CNNs (OA-CNNs), a family of networks that integrates a\nlightweight module to greatly enhance the adaptivity of sparse CNNs at minimal\ncomputational cost. Without any self-attention modules, OA-CNNs favorably\nsurpass point transformers in terms of accuracy in both indoor and outdoor\nscenes, with much less latency and memory cost. Notably, it achieves 76.1%,\n78.9%, and 70.6% mIoU on ScanNet v2, nuScenes, and SemanticKITTI validation\nbenchmarks respectively, while maintaining at most 5x better speed than\ntransformer counterparts. 
This revelation highlights the potential of pure\nsparse CNNs to outperform transformer-related networks.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Bohao Peng", "Xiaoyang Wu", "Li Jiang", "Yukang Chen", "Hengshuang Zhao", "Zhuotao Tian", "Jiaya Jia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2a1"}, "filepath": "data/2404.07155.png", "tags": [], "_media_type": "image", "_rand": 0.9995926309938845, "arXiv_link": "https://arxiv.org/abs/2404.07155", "other_link": "https://senqiaoyang.com/project/ULDA", "title": "Unified Language-driven Zero-shot Domain Adaptation", "abstract": "This paper introduces Unified Language-driven Zero-shot Domain Adaptation\n(ULDA), a novel task setting that enables a single model to adapt to diverse\ntarget domains without explicit domain-ID knowledge. We identify the\nconstraints in the existing language-driven zero-shot domain adaptation task,\nparticularly the requirement for domain IDs and domain-specific models, which\nmay restrict flexibility and scalability. To overcome these issues, we propose\na new framework for ULDA, consisting of Hierarchical Context Alignment (HCA),\nDomain Consistent Representation Learning (DCRL), and Text-Driven Rectifier\n(TDR). These components work synergistically to align simulated features with\ntarget text across multiple visual levels, retain semantic correlations between\ndifferent regional representations, and rectify biases between simulated and\nreal target visual features, respectively. Our extensive empirical evaluations\ndemonstrate that this framework achieves competitive performance in both\nsettings, surpassing even the model that requires domain-ID, showcasing its\nsuperiority and generalization ability. The proposed method is not only\neffective but also maintains practicality and efficiency, as it does not\nintroduce additional computational costs during inference. Our project page is\nhttps://senqiaoyang.com/project/ULDA .", "keywords": ["Efficient and scalable vision"], "authors_list": ["Senqiao Yang", "Zhuotao Tian", "Li Jiang", "Jiaya Jia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2a2"}, "filepath": "data/2403.04700.png", "tags": [], "_media_type": "image", "_rand": 0.9997389498453098, "arXiv_link": "https://arxiv.org/abs/2403.04700", "other_link": "https://github.com/chen-si-jia/Trajectory-Long-tail-Distribution-for-MOT.", "title": "Delving into the Trajectory Long-tail Distribution for Muti-object Tracking", "abstract": "Multiple Object Tracking (MOT) is a critical area within computer vision,\nwith a broad spectrum of practical implementations. Current research has\nprimarily focused on the development of tracking algorithms and enhancement of\npost-processing techniques. Yet, there has been a lack of thorough examination\nconcerning the nature of tracking data it self. In this study, we pioneer an\nexploration into the distribution patterns of tracking data and identify a\npronounced long-tail distribution issue within existing MOT datasets. 
We note a\nsignificant imbalance in the distribution of trajectory lengths across\ndifferent pedestrians, a phenomenon we refer to as ``pedestrians trajectory\nlong-tail distribution''. Addressing this challenge, we introduce a bespoke\nstrategy designed to mitigate the effects of this skewed distribution.\nSpecifically, we propose two data augmentation strategies, including Stationary\nCamera View Data Augmentation (SVA) and Dynamic Camera View Data Augmentation\n(DVA) , designed for viewpoint states and the Group Softmax (GS) module for\nRe-ID. SVA is to backtrack and predict the pedestrian trajectory of tail\nclasses, and DVA is to use diffusion model to change the background of the\nscene. GS divides the pedestrians into unrelated groups and performs softmax\noperation on each group individually. Our proposed strategies can be integrated\ninto numerous existing tracking systems, and extensive experimentation\nvalidates the efficacy of our method in reducing the influence of long-tail\ndistribution on multi-object tracking performance. The code is available at\nhttps://github.com/chen-si-jia/Trajectory-Long-tail-Distribution-for-MOT.", "keywords": [], "authors_list": ["Sijia Chen", "En Yu", "Jinyang Li", "Wenbing Tao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2a3"}, "filepath": "data/2403.02769.png", "tags": [], "_media_type": "image", "_rand": 0.9995843539444272, "arXiv_link": "https://arxiv.org/abs/2403.02769", "other_link": "", "title": "HUNTER: Unsupervised Human-centric 3D Detection via Transferring Knowledge from Synthetic Instances to Real Scenes", "abstract": "Human-centric 3D scene understanding has recently drawn increasing attention,\ndriven by its critical impact on robotics. However, human-centric real-life\nscenarios are extremely diverse and complicated, and humans have intricate\nmotions and interactions. With limited labeled data, supervised methods are\ndifficult to generalize to general scenarios, hindering real-life applications.\nMimicking human intelligence, we propose an unsupervised 3D detection method\nfor human-centric scenarios by transferring the knowledge from synthetic human\ninstances to real scenes. To bridge the gap between the distinct data\nrepresentations and feature distributions of synthetic models and real point\nclouds, we introduce novel modules for effective instance-to-scene\nrepresentation transfer and synthetic-to-real feature alignment. Remarkably,\nour method exhibits superior performance compared to current state-of-the-art\ntechniques, achieving 87.8% improvement in mAP and closely approaching the\nperformance of fully supervised methods (62.15 mAP vs. 
69.02 mAP) on HuCenLife\nDataset.", "keywords": ["Scene analysis and understanding", "Biometrics and human analysis"], "authors_list": ["Yichen Yao", "Zimo Jiang", "YUJING SUN", "Zhencai Zhu", "Xinge Zhu", "Runnan Chen", "Yuexin Ma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2a4"}, "filepath": "data/2312.12198.png", "tags": [], "_media_type": "image", "_rand": 0.9999132685368385, "arXiv_link": "https://arxiv.org/abs/2312.12198", "other_link": "", "title": "Mask Grounding for Referring Image Segmentation", "abstract": "Referring Image Segmentation (RIS) is a challenging task that requires an\nalgorithm to segment objects referred by free-form language expressions.\nDespite significant progress in recent years, most state-of-the-art (SOTA)\nmethods still suffer from considerable language-image modality gap at the pixel\nand word level. These methods generally 1) rely on sentence-level language\nfeatures for language-image alignment and 2) lack explicit training supervision\nfor fine-grained visual grounding. Consequently, they exhibit weak object-level\ncorrespondence between visual and language features. Without well-grounded\nfeatures, prior methods struggle to understand complex expressions that require\nstrong reasoning over relationships among multiple objects, especially when\ndealing with rarely used or ambiguous clauses. To tackle this challenge, we\nintroduce a novel Mask Grounding auxiliary task that significantly improves\nvisual grounding within language features, by explicitly teaching the model to\nlearn fine-grained correspondence between masked textual tokens and their\nmatching visual objects. Mask Grounding can be directly used on prior RIS\nmethods and consistently bring improvements. Furthermore, to holistically\naddress the modality gap, we also design a cross-modal alignment loss and an\naccompanying alignment module. These additions work synergistically with Mask\nGrounding. With all these techniques, our comprehensive approach culminates in\nMagNet (Mask-grounded Network), an architecture that significantly outperforms\nprior arts on three key benchmarks (RefCOCO, RefCOCO+ and G-Ref), demonstrating\nour method's effectiveness in addressing current limitations of RIS algorithms.\nOur code and pre-trained weights will be released.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Yong Xien Chng", "Henry Zheng", "Yizeng Han", "Xuchong QIU", "Gao Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2a5"}, "filepath": "data/2312.04802.png", "tags": [], "_media_type": "image", "_rand": 0.9993392394256673, "arXiv_link": "https://arxiv.org/abs/2312.04802", "other_link": "", "title": "MimicDiffusion: Purifying Adversarial Perturbation via Mimicking Clean Diffusion Model", "abstract": "Deep neural networks (DNNs) are vulnerable to adversarial perturbation, where\nan imperceptible perturbation is added to the image that can fool the DNNs.\nDiffusion-based adversarial purification focuses on using the diffusion model\nto generate a clean image against such adversarial attacks. 
Unfortunately, the\ngenerative process of the diffusion model is also inevitably affected by\nadversarial perturbation since the diffusion model is also a deep network where\nits input has adversarial perturbation. In this work, we propose\nMimicDiffusion, a new diffusion-based adversarial purification technique, that\ndirectly approximates the generative process of the diffusion model with the\nclean image as input. Concretely, we analyze the differences between the guided\nterms using the clean image and the adversarial sample. After that, we first\nimplement MimicDiffusion based on Manhattan distance. Then, we propose two\nguidance to purify the adversarial perturbation and approximate the clean\ndiffusion model. Extensive experiments on three image datasets including\nCIFAR-10, CIFAR-100, and ImageNet with three classifier backbones including\nWideResNet-70-16, WideResNet-28-10, and ResNet50 demonstrate that\nMimicDiffusion significantly performs better than the state-of-the-art\nbaselines. On CIFAR-10, CIFAR-100, and ImageNet, it achieves 92.67\\%, 61.35\\%,\nand 61.53\\% average robust accuracy, which are 18.49\\%, 13.23\\%, and 17.64\\%\nhigher, respectively. The code is available in the supplementary material.", "keywords": [], "authors_list": ["Kaiyu Song", "Hanjiang Lai", "Yan Pan", "Jian Yin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2a6"}, "filepath": "data/2311.18608.png", "tags": [], "_media_type": "image", "_rand": 0.9991770972959341, "arXiv_link": "https://arxiv.org/abs/2311.18608", "other_link": "https://hyelinnam.github.io/CDS/", "title": "Contrastive Denoising Score for Text-guided Latent Diffusion Image Editing", "abstract": "With the remarkable advent of text-to-image diffusion models, image editing\nmethods have become more diverse and continue to evolve. A promising recent\napproach in this realm is Delta Denoising Score (DDS) - an image editing\ntechnique based on Score Distillation Sampling (SDS) framework that leverages\nthe rich generative prior of text-to-image diffusion models. However, relying\nsolely on the difference between scoring functions is insufficient for\npreserving specific structural elements from the original image, a crucial\naspect of image editing. To address this, here we present an embarrassingly\nsimple yet very powerful modification of DDS, called Contrastive Denoising\nScore (CDS), for latent diffusion models (LDM). Inspired by the similarities\nand differences between DDS and the contrastive learning for unpaired\nimage-to-image translation(CUT), we introduce a straightforward approach using\nCUT loss within the DDS framework. Rather than employing auxiliary networks as\nin the original CUT approach, we leverage the intermediate features of LDM,\nspecifically those from the self-attention layers, which possesses rich spatial\ninformation. Our approach enables zero-shot image-to-image translation and\nneural radiance field (NeRF) editing, achieving structural correspondence\nbetween the input and output while maintaining content controllability.\nQualitative results and comparisons demonstrates the effectiveness of our\nproposed method. 
Project page: https://hyelinnam.github.io/CDS/", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Hyelin Nam", "Gihyun Kwon", "Geon Yeong Park", "Jong Chul Ye"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2a7"}, "filepath": "data/2405.04309.png", "tags": [], "_media_type": "image", "_rand": 0.999023013375046, "arXiv_link": "https://arxiv.org/abs/2405.04309", "other_link": "", "title": "Non-rigid Structure-from-Motion: Temporally-smooth Procrustean Alignment and Spatially-variant Deformation Modeling", "abstract": "Even though Non-rigid Structure-from-Motion (NRSfM) has been extensively\nstudied and great progress has been made, there are still key challenges that\nhinder their broad real-world applications: 1) the inherent motion/rotation\nambiguity requires either explicit camera motion recovery with extra constraint\nor complex Procrustean Alignment; 2) existing low-rank modeling of the global\nshape can over-penalize drastic deformations in the 3D shape sequence. This\npaper proposes to resolve the above issues from a spatial-temporal modeling\nperspective. First, we propose a novel Temporally-smooth Procrustean Alignment\nmodule that estimates 3D deforming shapes and adjusts the camera motion by\naligning the 3D shape sequence consecutively. Our new alignment module remedies\nthe requirement of complex reference 3D shape during alignment, which is more\nconductive to non-isotropic deformation modeling. Second, we propose a\nspatial-weighted approach to enforce the low-rank constraint adaptively at\ndifferent locations to accommodate drastic spatially-variant deformation\nreconstruction better. Our modeling outperform existing low-rank based methods,\nand extensive experiments across different datasets validate the effectiveness\nof our method.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Jiawei Shi", "Hui Deng", "Yuchao Dai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2a8"}, "filepath": "data/2403.11397.png", "tags": [], "_media_type": "image", "_rand": 0.9995473852489911, "arXiv_link": "https://arxiv.org/abs/2403.11397", "other_link": "", "title": "Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization", "abstract": "The task of No-Reference Image Quality Assessment (NR-IQA) is to estimate the\nquality score of an input image without additional information. NR-IQA models\nplay a crucial role in the media industry, aiding in performance evaluation and\noptimization guidance. However, these models are found to be vulnerable to\nadversarial attacks, which introduce imperceptible perturbations to input\nimages, resulting in significant changes in predicted scores. In this paper, we\npropose a defense method to improve the stability in predicted scores when\nattacked by small perturbations, thus enhancing the adversarial robustness of\nNR-IQA models. To be specific, we present theoretical evidence showing that the\nmagnitude of score changes is related to the $\\ell_1$ norm of the model's\ngradient with respect to the input image. 
Building upon this theoretical\nfoundation, we propose a norm regularization training strategy aimed at\nreducing the $\\ell_1$ norm of the gradient, thereby boosting the robustness of\nNR-IQA models. Experiments conducted on four NR-IQA baseline models demonstrate\nthe effectiveness of our strategy in reducing score changes in the presence of\nadversarial attacks. To the best of our knowledge, this work marks the first\nattempt to defend against adversarial attacks on NR-IQA models. Our study\noffers valuable insights into the adversarial robustness of NR-IQA models and\nprovides a foundation for future research in this area.", "keywords": [], "authors_list": ["Yujia Liu", "Chenxi Yang", "Dingquan Li", "Jianhao Ding", "Tingting Jiang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2a9"}, "filepath": "data/2403.20031.png", "tags": [], "_media_type": "image", "_rand": 0.9992061832788957, "arXiv_link": "https://arxiv.org/abs/2403.20031", "other_link": "", "title": "A Unified Framework for Human-centric Point Cloud Video Understanding", "abstract": "Human-centric Point Cloud Video Understanding (PVU) is an emerging field\nfocused on extracting and interpreting human-related features from sequences of\nhuman point clouds, further advancing downstream human-centric tasks and\napplications. Previous works usually focus on tackling one specific task and\nrely on huge labeled data, which has poor generalization capability.\nConsidering that human has specific characteristics, including the structural\nsemantics of human body and the dynamics of human motions, we propose a unified\nframework to make full use of the prior knowledge and explore the inherent\nfeatures in the data itself for generalized human-centric point cloud video\nunderstanding. Extensive experiments demonstrate that our method achieves\nstate-of-the-art performance on various human-related tasks, including action\nrecognition and 3D pose estimation. All datasets and code will be released\nsoon.", "keywords": ["Biometrics and human analysis", "Scene analysis and understanding", "Deep learning architectures and techniques"], "authors_list": ["Yiteng Xu", "Kecheng Ye", "xiao han", "yiming ren", "Xinge Zhu", "Yuexin Ma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2aa"}, "filepath": "data/2312.16245.png", "tags": [], "_media_type": "image", "_rand": 0.9991023149849428, "arXiv_link": "https://arxiv.org/abs/2312.16245", "other_link": "https://github.com/dyhBUPT/iKUN.", "title": "iKUN: Speak to Trackers without Retraining", "abstract": "Referring multi-object tracking (RMOT) aims to track multiple objects based\non input textual descriptions. Previous works realize it by simply integrating\nan extra textual module into the multi-object tracker. However, they typically\nneed to retrain the entire framework and have difficulties in optimization. In\nthis work, we propose an insertable Knowledge Unification Network, termed iKUN,\nto enable communication with off-the-shelf trackers in a plug-and-play manner.\nConcretely, a knowledge unification module (KUM) is designed to adaptively\nextract visual features based on textual guidance. 
Meanwhile, to improve the\nlocalization accuracy, we present a neural version of Kalman filter (NKF) to\ndynamically adjust process noise and observation noise based on the current\nmotion status. Moreover, to address the problem of open-set long-tail\ndistribution of textual descriptions, a test-time similarity calibration method\nis proposed to refine the confidence score with pseudo frequency. Extensive\nexperiments on Refer-KITTI dataset verify the effectiveness of our framework.\nFinally, to speed up the development of RMOT, we also contribute a more\nchallenging dataset, Refer-Dance, by extending public DanceTrack dataset with\nmotion and dressing descriptions. The codes and dataset are available at\nhttps://github.com/dyhBUPT/iKUN.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yunhao Du", "Cheng Lei", "Zhicheng Zhao", "Fei Su"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2ab"}, "filepath": "data/2402.17376.png", "tags": [], "_media_type": "image", "_rand": 0.9993394463543697, "arXiv_link": "https://arxiv.org/abs/2402.17376", "other_link": "", "title": "Accelerating Diffusion Sampling with Optimized Time Steps", "abstract": "Diffusion probabilistic models (DPMs) have shown remarkable performance in\nhigh-resolution image synthesis, but their sampling efficiency is still to be\ndesired due to the typically large number of sampling steps. Recent\nadvancements in high-order numerical ODE solvers for DPMs have enabled the\ngeneration of high-quality images with much fewer sampling steps. While this is\na significant development, most sampling methods still employ uniform time\nsteps, which is not optimal when using a small number of steps. To address this\nissue, we propose a general framework for designing an optimization problem\nthat seeks more appropriate time steps for a specific numerical ODE solver for\nDPMs. This optimization problem aims to minimize the distance between the\nground-truth solution to the ODE and an approximate solution corresponding to\nthe numerical solver. It can be efficiently solved using the constrained trust\nregion method, taking less than $15$ seconds. 
Our extensive experiments on both\nunconditional and conditional sampling using pixel- and latent-space DPMs\ndemonstrate that, when combined with the state-of-the-art sampling method\nUniPC, our optimized time steps significantly improve image generation\nperformance in terms of FID scores for datasets such as CIFAR-10 and ImageNet,\ncompared to using uniform time steps.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Shuchen Xue", "Zhaoqiang Liu", "Fei Chen", "Shifeng Zhang", "Tianyang Hu", "Enze Xie", "Zhenguo Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2ac"}, "filepath": "data/2311.16973.png", "tags": [], "_media_type": "image", "_rand": 0.9996161604558504, "arXiv_link": "https://arxiv.org/abs/2311.16973", "other_link": "", "title": "DemoFusion: Democratising High-Resolution Image Generation With No $$$", "abstract": "High-resolution image generation with Generative Artificial Intelligence\n(GenAI) has immense potential but, due to the enormous capital investment\nrequired for training, it is increasingly centralised to a few large\ncorporations, and hidden behind paywalls. This paper aims to democratise\nhigh-resolution GenAI by advancing the frontier of high-resolution generation\nwhile remaining accessible to a broad audience. We demonstrate that existing\nLatent Diffusion Models (LDMs) possess untapped potential for higher-resolution\nimage generation. Our novel DemoFusion framework seamlessly extends open-source\nGenAI models, employing Progressive Upscaling, Skip Residual, and Dilated\nSampling mechanisms to achieve higher-resolution image generation. The\nprogressive nature of DemoFusion requires more passes, but the intermediate\nresults can serve as \"previews\", facilitating rapid prompt iteration.", "keywords": ["Image and video generation and manipulation", "Efficient and scalable vision"], "authors_list": ["Ruoyi DU", "Dongliang Chang", "Timothy Hospedales", "Yi-Zhe Song", "Zhanyu Ma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2ad"}, "filepath": "data/2312.01897.png", "tags": [], "_media_type": "image", "_rand": 0.9995013284939072, "arXiv_link": "https://arxiv.org/abs/2312.01897", "other_link": "", "title": "Adapting Short-Term Transformers for Action Detection in Untrimmed Videos", "abstract": "Vision Transformer (ViT) has shown high potential in video recognition, owing\nto its flexible design, adaptable self-attention mechanisms, and the efficacy\nof masked pre-training. Yet, it remains unclear how to adapt these pre-trained\nshort-term ViTs for temporal action detection (TAD) in untrimmed videos. The\nexisting works treat them as off-the-shelf feature extractors for each\nshort-trimmed snippet without capturing the fine-grained relation among\ndifferent snippets in a broader temporal context. 
To mitigate this issue, this\npaper focuses on designing a new mechanism for adapting these pre-trained ViT\nmodels as a unified long-form video transformer to fully unleash its modeling\npower in capturing inter-snippet relation, while still keeping low computation\noverhead and memory consumption for efficient TAD. To this end, we design\neffective cross-snippet propagation modules to gradually exchange short-term\nvideo information among different snippets from two levels. For inner-backbone\ninformation propagation, we introduce a cross-snippet propagation strategy to\nenable multi-snippet temporal feature interaction inside the backbone.For\npost-backbone information propagation, we propose temporal transformer layers\nfor further clip-level modeling. With the plain ViT-B pre-trained with\nVideoMAE, our end-to-end temporal action detector (ViT-TAD) yields a very\ncompetitive performance to previous temporal action detectors, riching up to\n69.5 average mAP on THUMOS14, 37.40 average mAP on ActivityNet-1.3 and 17.20\naverage mAP on FineAction.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Min Yang", "gaohuan", "Ping Guo", "Limin Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2ae"}, "filepath": "data/2402.17171.png", "tags": [], "_media_type": "image", "_rand": 0.9999467335481214, "arXiv_link": "https://arxiv.org/abs/2402.17171", "other_link": "", "title": "LiveHPS: LiDAR-based Scene-level Human Pose and Shape Estimation in Free Environment", "abstract": "For human-centric large-scale scenes, fine-grained modeling for 3D human\nglobal pose and shape is significant for scene understanding and can benefit\nmany real-world applications. In this paper, we present LiveHPS, a novel\nsingle-LiDAR-based approach for scene-level human pose and shape estimation\nwithout any limitation of light conditions and wearable devices. In particular,\nwe design a distillation mechanism to mitigate the distribution-varying effect\nof LiDAR point clouds and exploit the temporal-spatial geometric and dynamic\ninformation existing in consecutive frames to solve the occlusion and noise\ndisturbance. LiveHPS, with its efficient configuration and high-quality output,\nis well-suited for real-world applications. Moreover, we propose a huge human\nmotion dataset, named FreeMotion, which is collected in various scenarios with\ndiverse human poses, shapes and translations. It consists of multi-modal and\nmulti-view acquisition data from calibrated and synchronized LiDARs, cameras,\nand IMUs. Extensive experiments on our new dataset and other public datasets\ndemonstrate the SOTA performance and robustness of our approach. 
We will\nrelease our code and dataset soon.", "keywords": ["Scene analysis and understanding", "Biometrics and human analysis"], "authors_list": ["yiming ren", "xiao han", "Chengfeng Zhao", "Jingya Wang", "Lan Xu", "Jingyi Yu", "Yuexin Ma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2af"}, "filepath": "data/2312.05006.png", "tags": [], "_media_type": "image", "_rand": 0.9993759672375437, "arXiv_link": "https://arxiv.org/abs/2312.05006", "other_link": "", "title": "CoDe: An Explicit Content Decoupling Framework for Image Restoration", "abstract": "Adverse weather image restoration strives to recover clear images from those\naffected by various weather types, such as rain, haze, and snow. Each weather\ntype calls for a tailored degradation removal approach due to its unique impact\non images. Conversely, content reconstruction can employ a uniform approach, as\nthe underlying image content remains consistent. Although previous techniques\ncan handle multiple weather types within a single network, they neglect the\ncrucial distinction between these two processes, limiting the quality of\nrestored images. This work introduces a novel adverse weather image restoration\nmethod, called DDCNet, which decouples the degradation removal and content\nreconstruction process at the feature level based on their channel statistics.\nSpecifically, we exploit the unique advantages of the Fourier transform in both\nthese two processes: (1) the degradation information is mainly located in the\namplitude component of the Fourier domain, and (2) the Fourier domain contains\nglobal information. The former facilitates channel-dependent degradation\nremoval operation, allowing the network to tailor responses to various adverse\nweather types; the latter, by integrating Fourier's global properties into\nchannel-independent content features, enhances network capacity for consistent\nglobal content reconstruction. We further augment the degradation removal\nprocess with a degradation mapping loss function. Extensive experiments\ndemonstrate our method achieves state-of-the-art performance in multiple\nadverse weather removal benchmarks.", "keywords": ["Low-level vision", "Efficient and scalable vision"], "authors_list": ["Enxuan Gu", "Hongwei Ge", "Yong Guo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2b0"}, "filepath": "data/2311.10529.png", "tags": [], "_media_type": "image", "_rand": 0.9994252639773701, "arXiv_link": "https://arxiv.org/abs/2311.10529", "other_link": "", "title": "MemSAM: Taming Segment Anything Model for Echocardiography Video Segmentation", "abstract": "The Segment Anything Model (SAM) has recently emerged as a groundbreaking\nfoundation model for prompt-driven image segmentation tasks. However, both the\noriginal SAM and its medical variants require slice-by-slice manual prompting\nof target structures, which directly increase the burden for applications.\nDespite attempts of auto-prompting to turn SAM into a fully automatic manner,\nit still exhibits subpar performance and lacks of reliability especially in the\nfield of medical imaging. 
In this paper, we propose UR-SAM, an uncertainty\nrectified SAM framework to enhance the reliability for auto-prompting medical\nimage segmentation. Building upon a localization framework for automatic prompt\ngeneration, our method incorporates a prompt augmentation module to obtain a\nseries of input prompts for SAM for uncertainty estimation and an\nuncertainty-based rectification module to further utilize the distribution of\nestimated uncertainty to improve the segmentation performance. Extensive\nexperiments on two public 3D medical datasets covering the segmentation of 35\norgans demonstrate that without supplementary training or fine-tuning, our\nmethod further improves the segmentation performance with up to 10.7 % and 13.8\n% in dice similarity coefficient, demonstrating efficiency and broad\ncapabilities for medical image segmentation without manual prompting.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xiaolong Deng", "Huisi Wu", "Runhao Zeng", "Jing Qin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2b1"}, "filepath": "data/2312.02149.png", "tags": [], "_media_type": "image", "_rand": 0.9991134253352351, "arXiv_link": "https://arxiv.org/abs/2312.02149", "other_link": "", "title": "Generative Powers of Ten", "abstract": "We present a method that uses a text-to-image model to generate consistent\ncontent across multiple image scales, enabling extreme semantic zooms into a\nscene, e.g., ranging from a wide-angle landscape view of a forest to a macro\nshot of an insect sitting on one of the tree branches. We achieve this through\na joint multi-scale diffusion sampling approach that encourages consistency\nacross different scales while preserving the integrity of each individual\nsampling process. Since each generated scale is guided by a different text\nprompt, our method enables deeper levels of zoom than traditional\nsuper-resolution methods that may struggle to create new contextual structure\nat vastly different scales. We compare our method qualitatively with\nalternative techniques in image super-resolution and outpainting, and show that\nour method is most effective at generating consistent multi-scale content.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Xiaojuan Wang", "Janne Kontkanen", "Brian Curless", "Steve Seitz", "Ira Kemelmacher-Shlizerman", "Ben Mildenhall", "Pratul P. Srinivasan", "Dor Verbin", "Aleksander Holynski"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2b2"}, "filepath": "data/2307.11558.png", "tags": [], "_media_type": "image", "_rand": 0.9995548708008497, "arXiv_link": "https://arxiv.org/abs/2307.11558", "other_link": "https://github.com/zhjohnchan/SK-VG}.", "title": "When Visual Grounding Meets Gigapixel-level Large-scale Scenes: Benchmark and Approach", "abstract": "Visual grounding (VG) aims to establish fine-grained alignment between vision\nand language. Ideally, it can be a testbed for vision-and-language models to\nevaluate their understanding of the images and texts and their reasoning\nabilities over their joint space. 
However, most existing VG datasets are\nconstructed using simple description texts, which do not require sufficient\nreasoning over the images and texts. This has been demonstrated in a recent\nstudy~\\cite{luo2022goes}, where a simple LSTM-based text encoder without\npretraining can achieve state-of-the-art performance on mainstream VG datasets.\nTherefore, in this paper, we propose a novel benchmark of \\underline{S}cene\n\\underline{K}nowledge-guided \\underline{V}isual \\underline{G}rounding (SK-VG),\nwhere the image content and referring expressions are not sufficient to ground\nthe target objects, forcing the models to have a reasoning ability on the\nlong-form scene knowledge. To perform this task, we propose two approaches to\naccept the triple-type input, where the former embeds knowledge into the image\nfeatures before the image-query interaction; the latter leverages linguistic\nstructure to assist in computing the image-text matching. We conduct extensive\nexperiments to analyze the above methods and show that the proposed approaches\nachieve promising results but still leave room for improvement, including\nperformance and interpretability. The dataset and code are available at\n\\url{https://github.com/zhjohnchan/SK-VG}.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["TAO MA", "Bing Bai", "Haozhe Lin", "Heyuan Wang", "Yu Wang", "Lin Luo", "Lu Fang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2b3"}, "filepath": "data/2403.15330.png", "tags": [], "_media_type": "image", "_rand": 0.999932431707992, "arXiv_link": "https://arxiv.org/abs/2403.15330", "other_link": "", "title": "Selectively Informative Description can Reduce Undesired Embedding Entanglements in Text-to-Image Personalization", "abstract": "In text-to-image personalization, a timely and crucial challenge is the\ntendency of generated images overfitting to the biases present in the reference\nimages. We initiate our study with a comprehensive categorization of the biases\ninto background, nearby-object, tied-object, substance (in style\nre-contextualization), and pose biases. These biases manifest in the generated\nimages due to their entanglement into the subject embedding. This undesired\nembedding entanglement not only results in the reflection of biases from the\nreference images into the generated images but also notably diminishes the\nalignment of the generated images with the given generation prompt. To address\nthis challenge, we propose SID~(Selectively Informative Description), a text\ndescription strategy that deviates from the prevalent approach of only\ncharacterizing the subject's class identification. SID is generated utilizing\nmultimodal GPT-4 and can be seamlessly integrated into optimization-based\nmodels. 
We present comprehensive experimental results along with analyses of\ncross-attention maps, subject-alignment, non-subject-disentanglement, and\ntext-alignment.", "keywords": [], "authors_list": ["Jimyeong Kim", "Jungwon Park", "Wonjong Rhee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2b4"}, "filepath": "data/2403.15192.png", "tags": [], "_media_type": "image", "_rand": 0.9994904119364384, "arXiv_link": "https://arxiv.org/abs/2403.15192", "other_link": "https://github.com/yimeng-fan/SFOD.", "title": "SFOD: Spiking Fusion Object Detector", "abstract": "Event cameras, characterized by high temporal resolution, high dynamic range,\nlow power consumption, and high pixel bandwidth, offer unique capabilities for\nobject detection in specialized contexts. Despite these advantages, the\ninherent sparsity and asynchrony of event data pose challenges to existing\nobject detection algorithms. Spiking Neural Networks (SNNs), inspired by the\nway the human brain codes and processes information, offer a potential solution\nto these difficulties. However, their performance in object detection using\nevent cameras is limited in current implementations. In this paper, we propose\nthe Spiking Fusion Object Detector (SFOD), a simple and efficient approach to\nSNN-based object detection. Specifically, we design a Spiking Fusion Module,\nachieving the first-time fusion of feature maps from different scales in SNNs\napplied to event cameras. Additionally, through integrating our analysis and\nexperiments conducted during the pretraining of the backbone network on the\nNCAR dataset, we delve deeply into the impact of spiking decoding strategies\nand loss functions on model performance. Thereby, we establish state-of-the-art\nclassification results based on SNNs, achieving 93.7\\% accuracy on the NCAR\ndataset. Experimental results on the GEN1 detection dataset demonstrate that\nthe SFOD achieves a state-of-the-art mAP of 32.1\\%, outperforming existing\nSNN-based approaches. Our research not only underscores the potential of SNNs\nin object detection with event cameras but also propels the advancement of\nSNNs. Code is available at https://github.com/yimeng-fan/SFOD.", "keywords": ["Efficient and scalable vision", "Deep learning architectures and techniques"], "authors_list": ["Yimeng Fan", "Wei Zhang", "Changsong Liu", "Mingyang Li", "Wenrui Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2b5"}, "filepath": "data/2403.03037.png", "tags": [], "_media_type": "image", "_rand": 0.9990701160892236, "arXiv_link": "https://arxiv.org/abs/2403.03037", "other_link": "", "title": "A Backpack Full of Skills: Egocentric Video Understanding with Diverse Task Perspectives", "abstract": "Human comprehension of a video stream is naturally broad: in a few instants,\nwe are able to understand what is happening, the relevance and relationship of\nobjects, and forecast what will follow in the near future, everything all at\nonce. 
We believe that - to effectively transfer such an holistic perception to\nintelligent machines - an important role is played by learning to correlate\nconcepts and to abstract knowledge coming from different tasks, to\nsynergistically exploit them when learning novel skills. To accomplish this, we\nseek for a unified approach to video understanding which combines shared\ntemporal modelling of human actions with minimal overhead, to support multiple\ndownstream tasks and enable cooperation when learning novel skills. We then\npropose EgoPack, a solution that creates a collection of task perspectives that\ncan be carried across downstream tasks and used as a potential source of\nadditional insights, as a backpack of skills that a robot can carry around and\nuse when needed. We demonstrate the effectiveness and efficiency of our\napproach on four Ego4D benchmarks, outperforming current state-of-the-art\nmethods.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Scene analysis and understanding", "Biometrics and human analysis"], "authors_list": ["Simone Alberto Peirone", "Francesca Pistilli", "Antonio Alliegro", "Giuseppe Averta"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2b6"}, "filepath": "data/2310.14695.png", "tags": [], "_media_type": "image", "_rand": 0.9996160710977444, "arXiv_link": "https://arxiv.org/abs/2310.14695", "other_link": "", "title": "How Far Can We Compress Instant NGP-Based NeRF?", "abstract": "Modeling 3D scenes by volumetric feature grids is one of the promising\ndirections of neural approximations to improve Neural Radiance Fields (NeRF).\nInstant-NGP (INGP) introduced multi-resolution hash encoding from a lookup\ntable of trainable feature grids which enabled learning high-quality neural\ngraphics primitives in a matter of seconds. However, this improvement came at\nthe cost of higher storage size. In this paper, we address this challenge by\nintroducing instant learning of compression-aware NeRF features (CAwa-NeRF),\nthat allows exporting the zip compressed feature grids at the end of the model\ntraining with a negligible extra time overhead without changing neither the\nstorage architecture nor the parameters used in the original INGP paper.\nNonetheless, the proposed method is not limited to INGP but could also be\nadapted to any model. By means of extensive simulations, our proposed instant\nlearning pipeline can achieve impressive results on different kinds of static\nscenes such as single object masked background scenes and real-life scenes\ncaptured in our studio. 
In particular, for single object masked background\nscenes CAwa-NeRF compresses the feature grids down to 6% (1.2 MB) of the\noriginal size without any loss in the PSNR (33 dB) or down to 2.4% (0.53 MB)\nwith a slight virtual loss (32.31 dB).", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Yihang Chen", "Qianyi Wu", "Mehrtash Harandi", "Jianfei Cai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2b7"}, "filepath": "data/2402.17723.png", "tags": [], "_media_type": "image", "_rand": 0.999892264099184, "arXiv_link": "https://arxiv.org/abs/2402.17723", "other_link": "https://yzxing87.github.io/Seeing-and-Hearing/", "title": "Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion Latent Aligners", "abstract": "Video and audio content creation serves as the core technique for the movie\nindustry and professional users. Recently, existing diffusion-based methods\ntackle video and audio generation separately, which hinders the technique\ntransfer from academia to industry. In this work, we aim at filling the gap,\nwith a carefully designed optimization-based framework for cross-visual-audio\nand joint-visual-audio generation. We observe the powerful generation ability\nof off-the-shelf video or audio generation models. Thus, instead of training\nthe giant models from scratch, we propose to bridge the existing strong models\nwith a shared latent representation space. Specifically, we propose a\nmultimodality latent aligner with the pre-trained ImageBind model. Our latent\naligner shares a similar core as the classifier guidance that guides the\ndiffusion denoising process during inference time. Through carefully designed\noptimization strategy and loss functions, we show the superior performance of\nour method on joint video-audio generation, visual-steered audio generation,\nand audio-steered visual generation tasks. The project website can be found at\nhttps://yzxing87.github.io/Seeing-and-Hearing/", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Yazhou Xing", "Yingqing He", "Zeyue Tian", "Xintao Wang", "Qifeng Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Multimedia", "Sound", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2b8"}, "filepath": "data/2312.10656.png", "tags": [], "_media_type": "image", "_rand": 0.999467476942282, "arXiv_link": "https://arxiv.org/abs/2312.10656", "other_link": "", "title": "VidToMe: Video Token Merging for Zero-Shot Video Editing", "abstract": "Diffusion models have made significant advances in generating high-quality\nimages, but their application to video generation has remained challenging due\nto the complexity of temporal motion. Zero-shot video editing offers a solution\nby utilizing pre-trained image diffusion models to translate source videos into\nnew ones. Nevertheless, existing methods struggle to maintain strict temporal\nconsistency and efficient memory consumption. In this work, we propose a novel\napproach to enhance temporal consistency in generated videos by merging\nself-attention tokens across frames. 
By aligning and compressing temporally\nredundant tokens across frames, our method improves temporal coherence and\nreduces memory consumption in self-attention computations. The merging strategy\nmatches and aligns tokens according to the temporal correspondence between\nframes, facilitating natural temporal consistency in generated video frames. To\nmanage the complexity of video processing, we divide videos into chunks and\ndevelop intra-chunk local token merging and inter-chunk global token merging,\nensuring both short-term video continuity and long-term content consistency.\nOur video editing approach seamlessly extends the advancements in image editing\nto video editing, rendering favorable results in temporal consistency over\nstate-of-the-art methods.", "keywords": ["Image and video generation and manipulation", "Efficient and scalable vision"], "authors_list": ["Xirui Li", "Chao Ma", "Xiaokang Yang", "Ming-Hsuan Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2b9"}, "filepath": "data/2311.17089.png", "tags": [], "_media_type": "image", "_rand": 0.9999436536458658, "arXiv_link": "https://arxiv.org/abs/2311.17089", "other_link": "https://jokeryan.github.io/projects/ms-gs/", "title": "Multi-Scale 3D Gaussian Splatting for Anti-Aliased Rendering", "abstract": "3D Gaussians have recently emerged as a highly efficient representation for\n3D reconstruction and rendering. Despite its high rendering quality and speed\nat high resolutions, they both deteriorate drastically when rendered at lower\nresolutions or from far away camera position. During low resolution or far away\nrendering, the pixel size of the image can fall below the Nyquist frequency\ncompared to the screen size of each splatted 3D Gaussian and leads to aliasing\neffect. The rendering is also drastically slowed down by the sequential alpha\nblending of more splatted Gaussians per pixel. To address these issues, we\npropose a multi-scale 3D Gaussian splatting algorithm, which maintains\nGaussians at different scales to represent the same scene. Higher-resolution\nimages are rendered with more small Gaussians, and lower-resolution images are\nrendered with fewer larger Gaussians. With similar training time, our algorithm\ncan achieve 13\\%-66\\% PSNR and 160\\%-2400\\% rendering speed improvement at\n4$\\times$-128$\\times$ scale rendering on Mip-NeRF360 dataset compared to the\nsingle scale 3D Gaussian splitting. 
Our code and more results are available on\nour project website https://jokeryan.github.io/projects/ms-gs/", "keywords": ["Efficient and scalable vision", "Computational imaging and physics-based vision"], "authors_list": ["Zhiwen Yan", "Weng Fei Low", "Yu Chen", "Gim Hee Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2ba"}, "filepath": "data/2404.02405.png", "tags": [], "_media_type": "image", "_rand": 0.9991615796532175, "arXiv_link": "https://arxiv.org/abs/2404.02405", "other_link": "https://github.com/Dotori-HJ/TE-TAD", "title": "TE-TAD: Towards Fully End-to-End Temporal Action Detection via Time-Aligned Coordinate Expression", "abstract": "In this paper, we identify the normalized coordinate expression as a\nkey factor behind the reliance on hand-crafted components in query-based detectors for\ntemporal action detection (TAD). Despite significant advancements towards an\nend-to-end framework in object detection, query-based detectors have been\nlimited in achieving full end-to-end modeling in TAD. To address this issue, we\npropose TE-TAD, a fully end-to-end temporal action detection transformer\nthat integrates time-aligned coordinate expression. We reformulate coordinate\nexpression utilizing actual timeline values, ensuring length-invariant\nrepresentations across videos of extremely diverse durations.\nFurthermore, our proposed adaptive query selection dynamically adjusts the\nnumber of queries based on video length, providing a suitable solution for\nvarying video durations compared to a fixed query set. Our approach not only\nsimplifies the TAD process by eliminating the need for hand-crafted components\nbut also significantly improves the performance of query-based detectors. Our\nTE-TAD outperforms the previous query-based detectors and achieves competitive\nperformance compared to state-of-the-art methods on popular benchmark datasets.\nCode is available at: https://github.com/Dotori-HJ/TE-TAD", "keywords": [], "authors_list": ["Ho-Joong Kim", "Jung-Ho Hong", "Heejo Kong", "Seong-Whan Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2bb"}, "filepath": "data/2309.14786.png", "tags": [], "_media_type": "image", "_rand": 0.9995482859397474, "arXiv_link": "https://arxiv.org/abs/2309.14786", "other_link": "", "title": "Dual Prototype Attention for Unsupervised Video Object Segmentation", "abstract": "Unsupervised video object segmentation (VOS) is a task that aims to detect\nthe most salient object in a video without external guidance about the object.\nTo leverage the property that salient objects usually have distinctive\nmovements compared to the background, recent methods collaboratively use motion\ncues extracted from optical flow maps with appearance cues extracted from RGB\nimages. However, as optical flow maps are usually very relevant to segmentation\nmasks, the network easily becomes overly dependent on the motion cues\nduring network training. As a result, such two-stream approaches are vulnerable\nto confusing motion cues, making their prediction unstable. To relieve this\nissue, we design a novel motion-as-option network by treating motion cues as\noptional. 
During network training, RGB images are randomly provided to the\nmotion encoder instead of optical flow maps, to implicitly reduce motion\ndependency of the network. As the learned motion encoder can deal with both RGB\nimages and optical flow maps, two different predictions can be generated\ndepending on which source information is used as motion input. In order to\nfully exploit this property, we also propose an adaptive output selection\nalgorithm to adopt optimal prediction result at test time. Our proposed\napproach affords state-of-the-art performance on all public benchmark datasets,\neven maintaining real-time inference speed.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Suhwan Cho", "Minhyeok Lee", "Seunghoon Lee", "Dogyoon Lee", "Heeseung Choi", "Ig-Jae Kim", "Sangyoun Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2bc"}, "filepath": "data/2307.09437.png", "tags": [], "_media_type": "image", "_rand": 0.9992201107488533, "arXiv_link": "https://arxiv.org/abs/2307.09437", "other_link": "", "title": "Adaptive Slot Attention: Object Discovery with Dynamic Slot Number", "abstract": "The extraction of modular object-centric representations for downstream tasks\nis an emerging area of research. Learning grounded representations of objects\nthat are guaranteed to be stable and invariant promises robust performance\nacross different tasks and environments. Slot Attention (SA) learns\nobject-centric representations by assigning objects to \\textit{slots}, but\npresupposes a \\textit{single} distribution from which all slots are randomly\ninitialised. This results in an inability to learn \\textit{specialized} slots\nwhich bind to specific object types and remain invariant to identity-preserving\nchanges in object appearance. To address this, we present\n\\emph{\\textsc{Co}nditional \\textsc{S}lot \\textsc{A}ttention} (\\textsc{CoSA})\nusing a novel concept of \\emph{Grounded Slot Dictionary} (GSD) inspired by\nvector quantization. Our proposed GSD comprises (i) canonical object-level\nproperty vectors and (ii) parametric Gaussian distributions, which define a\nprior over the slots. We demonstrate the benefits of our method in multiple\ndownstream tasks such as scene generation, composition, and task adaptation,\nwhilst remaining competitive with SA in popular object discovery benchmarks.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Ke Fan", "Zechen Bai", "Tianjun Xiao", "Tong He", "Max Horn", "Yanwei Fu", "Francesco Locatello", "Zheng Zhang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2bd"}, "filepath": "data/2312.07423.png", "tags": [], "_media_type": "image", "_rand": 0.9999246721465186, "arXiv_link": "https://arxiv.org/abs/2312.07423", "other_link": "", "title": "Holoported Characters: Real-time Free-viewpoint Rendering of Humans from Sparse RGB Cameras", "abstract": "We present the first approach to render highly realistic free-viewpoint\nvideos of a human actor in general apparel, from sparse multi-view recording to\ndisplay, in real-time at an unprecedented 4K resolution. 
At inference, our\nmethod only requires four camera views of the moving actor and the respective\n3D skeletal pose. It handles actors in wide clothing, and reproduces even\nfine-scale dynamic detail, e.g. clothing wrinkles, face expressions, and hand\ngestures. At training time, our learning-based approach expects dense\nmulti-view video and a rigged static surface scan of the actor. Our method\ncomprises three main stages. Stage 1 is a skeleton-driven neural approach for\nhigh-quality capture of the detailed dynamic mesh geometry. Stage 2 is a novel\nsolution to create a view-dependent texture using four test-time camera views\nas input. Finally, stage 3 comprises a new image-based refinement network\nrendering the final 4K image given the output from the previous stages. Our\napproach establishes a new benchmark for real-time rendering resolution and\nquality using sparse input camera views, unlocking possibilities for immersive\ntelepresence.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis", "Vision systems and graphics integration"], "authors_list": ["Ashwath Shetty", "Marc Habermann", "Guoxing Sun", "Diogo Luvizon", "Vladislav Golyanik", "Christian Theobalt"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2be"}, "filepath": "data/2306.09348.png", "tags": [], "_media_type": "image", "_rand": 0.9995764977888749, "arXiv_link": "https://arxiv.org/abs/2306.09348", "other_link": "", "title": "Seeing the World through Your Eyes", "abstract": "The reflective nature of the human eye is an underappreciated source of\ninformation about what the world around us looks like. By imaging the eyes of a\nmoving person, we can collect multiple views of a scene outside the camera's\ndirect line of sight through the reflections in the eyes. In this paper, we\nreconstruct a 3D scene beyond the camera's line of sight using portrait images\ncontaining eye reflections. This task is challenging due to 1) the difficulty\nof accurately estimating eye poses and 2) the entangled appearance of the eye\niris and the scene reflections. Our method jointly refines the cornea poses,\nthe radiance field depicting the scene, and the observer's eye iris texture. We\nfurther propose a simple regularization prior on the iris texture pattern to\nimprove reconstruction quality. Through various experiments on synthetic and\nreal-world captures featuring people with varied eye colors, we demonstrate the\nfeasibility of our approach to recover 3D scenes using eye reflections.", "keywords": ["Scene analysis and understanding", "Biometrics and human analysis"], "authors_list": ["Hadi Alzayer", "Kevin Zhang", "Brandon Y. Feng", "Christopher Metzler", "Jia-Bin Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2bf"}, "filepath": "data/2402.08622.png", "tags": [], "_media_type": "image", "_rand": 0.9992188166175968, "arXiv_link": "https://arxiv.org/abs/2402.08622", "other_link": "", "title": "NeRF Analogies - Example-Based Visual Attribute Transfer for NeRFs", "abstract": "A Neural Radiance Field (NeRF) encodes the specific relation of 3D geometry\nand appearance of a scene. 
Here we ask whether we can transfer the\nappearance from a source NeRF onto a target 3D geometry in a semantically\nmeaningful way, such that the resulting new NeRF retains the target geometry\nbut has an appearance that is an analogy to the source NeRF. To this end, we\ngeneralize classic image analogies from 2D images to NeRFs. We leverage\ncorrespondence transfer along semantic affinity that is driven by semantic\nfeatures from large, pre-trained 2D image models to achieve multi-view\nconsistent appearance transfer. Our method allows exploring the mix-and-match\nproduct space of 3D geometry and appearance. We show that our method\noutperforms traditional stylization-based methods and that a large majority of\nusers prefer our method over several typical baselines.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Michael Fischer", "Zhengqin Li", "Thu Nguyen-Phuoc", "Alja\u017e Bo\u017ei\u010d", "Zhao Dong", "Carl Marshall", "Tobias Ritschel"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2c0"}, "filepath": "data/2312.12468.png", "tags": [], "_media_type": "image", "_rand": 0.9991086416049471, "arXiv_link": "https://arxiv.org/abs/2312.12468", "other_link": "", "title": "MaskINT: Video Editing via Interpolative Non-autoregressive Masked Transformers", "abstract": "Recent advances in generative AI have significantly enhanced image and video\nediting, particularly in the context of text prompt control. State-of-the-art\napproaches predominantly rely on diffusion models to accomplish these tasks.\nHowever, the computational demands of diffusion-based methods are substantial,\noften necessitating large-scale paired datasets for training, and therefore\nhindering deployment in real applications. To address these issues, this\npaper breaks down the text-based video editing task into two stages. First, we\nleverage a pre-trained text-to-image diffusion model to simultaneously edit\na few keyframes in a zero-shot way. Second, we introduce an efficient model\ncalled MaskINT, which is built on non-autoregressive masked generative\ntransformers and specializes in frame interpolation between the edited\nkeyframes, using the structural guidance from intermediate frames. 
Experimental\nresults suggest that our MaskINT achieves comparable performance with\ndiffusion-based methodologies, while significantly improving the inference time.\nThis research offers a practical solution for text-based video editing and\nshowcases the potential of non-autoregressive masked generative transformers in\nthis domain.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Haoyu Ma", "Shahin Mahdizadehaghdam", "Bichen Wu", "Zhipeng Fan", "Yuchao Gu", "Wenliang Zhao", "Lior Shapira", "Xiaohui Xie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2c1"}, "filepath": "data/2312.00852.png", "tags": [], "_media_type": "image", "_rand": 0.9998211240008574, "arXiv_link": "https://arxiv.org/abs/2312.00852", "other_link": "", "title": "Beyond First-Order Tweedie: Solving Inverse Problems using Latent Diffusion", "abstract": "Sampling from the posterior distribution poses a major computational\nchallenge in solving inverse problems using latent diffusion models. Common\nmethods rely on Tweedie's first-order moments, which are known to induce a\nquality-limiting bias. Existing second-order approximations are impractical due\nto prohibitive computational costs, making standard reverse diffusion processes\nintractable for posterior sampling. This paper introduces Second-order Tweedie\nsampler from Surrogate Loss (STSL), a novel sampler that offers efficiency\ncomparable to first-order Tweedie with a tractable reverse process using\nsecond-order approximation. Our theoretical results reveal that the\nsecond-order approximation is lower bounded by our surrogate loss that only\nrequires $O(1)$ compute using the trace of the Hessian, and by the lower bound\nwe derive a new drift term to make the reverse process tractable. Our method\nsurpasses SoTA solvers PSLD and P2L, achieving 4X and 8X reduction in neural\nfunction evaluations, respectively, while notably enhancing sampling quality on\nFFHQ, ImageNet, and COCO benchmarks. In addition, we show STSL extends to\ntext-guided image editing and addresses residual distortions arising from\ncorrupted images in leading text-guided image editing methods. To the best of our\nknowledge, this is the first work to offer an efficient second-order\napproximation in solving inverse problems using latent diffusion and editing\nreal-world images with corruptions.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Litu Rout", "Yujia Chen", "Abhishek Kumar", "Constantine Caramanis", "Sanjay Shakkottai", "Wen-Sheng Chu"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2c2"}, "filepath": "data/2403.15891.png", "tags": [], "_media_type": "image", "_rand": 0.9993816112323928, "arXiv_link": "https://arxiv.org/abs/2403.15891", "other_link": "", "title": "Human Motion Prediction under Unexpected Perturbation", "abstract": "We investigate a new task in human motion prediction, which is predicting\nmotions under unexpected physical perturbation potentially involving multiple\npeople. 
Compared with existing research, this task involves predicting less\ncontrolled, unpremeditated and purely reactive motions in response to external\nimpact and how such motions can propagate through people. It brings new\nchallenges such as data scarcity and predicting complex interactions. To this\nend, we propose a new method capitalizing on differential physics and deep neural\nnetworks, leading to an explicit Latent Differential Physics (LDP) model.\nThrough experiments, we demonstrate that LDP has high data efficiency,\noutstanding prediction accuracy, strong generalizability and good\nexplainability. Since there is no similar research, a comprehensive comparison\nwith 11 adapted baselines from several relevant domains is conducted, showing\nLDP outperforming existing research both quantitatively and qualitatively,\nimproving prediction accuracy by as much as 70%, and demonstrating\nsignificantly stronger generalization.", "keywords": ["Biometrics and human analysis", "Computational imaging and physics-based vision"], "authors_list": ["Jiangbei Yue", "Baiyi Li", "Julien Pettr\u00e9", "Armin Seyfried", "He Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2c3"}, "filepath": "data/2309.07849v3.png", "tags": [], "_media_type": "image", "_rand": 0.9994779338266411, "arXiv_link": "https://arxiv.org/html/2309.07849v3", "other_link": "", "title": "TASeg: Temporal Aggregation Network for LiDAR Semantic Segmentation", "abstract": "LiDAR semantic segmentation plays a crucial role in enabling autonomous\ndriving and robots to understand their surroundings accurately and robustly. A\nmultitude of methods exist within this domain, including point-based,\nrange-image-based, polar-coordinate-based, and hybrid strategies. Among these,\nrange-image-based techniques have gained widespread adoption in practical\napplications due to their efficiency. However, they face a significant\nchallenge known as the ``many-to-one'' problem caused by the range image's\nlimited horizontal and vertical angular resolution. As a result, around 20% of\nthe 3D points can be occluded. In this paper, we present TFNet, a\nrange-image-based LiDAR semantic segmentation method that utilizes temporal\ninformation to address this issue. Specifically, we incorporate a temporal\nfusion layer to extract useful information from previous scans and integrate it\nwith the current scan. We then design a max-voting-based post-processing\ntechnique to correct false predictions, particularly those caused by the\n``many-to-one'' issue. 
We evaluated the approach on two benchmarks and\ndemonstrated that the plug-in post-processing technique is generic and can be\napplied to various networks.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Xiaopei Wu", "Yuenan Hou", "Xiaoshui Huang", "Binbin Lin", "Tong He", "Xinge Zhu", "Yuexin Ma", "Boxi Wu", "Haifeng Liu", "Deng Cai", "Wanli Ouyang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2c4"}, "filepath": "data/2403.20126.png", "tags": [], "_media_type": "image", "_rand": 0.9996025667163055, "arXiv_link": "https://arxiv.org/abs/2403.20126", "other_link": "https://github.com/clovaai/ECLIPSE.", "title": "ECLIPSE: Efficient Continual Learning in Panoptic Segmentation with Visual Prompt Tuning", "abstract": "Panoptic segmentation, combining semantic and instance segmentation, stands\nas a cutting-edge computer vision task. Despite recent progress with deep\nlearning models, the dynamic nature of real-world applications necessitates\ncontinual learning, where models adapt to new classes (plasticity) over time\nwithout forgetting old ones (catastrophic forgetting). Current continual\nsegmentation methods often rely on distillation strategies like knowledge\ndistillation and pseudo-labeling, which are effective but result in increased\ntraining complexity and computational overhead. In this paper, we introduce a\nnovel and efficient method for continual panoptic segmentation based on Visual\nPrompt Tuning, dubbed ECLIPSE. Our approach involves freezing the base model\nparameters and fine-tuning only a small set of prompt embeddings, addressing\nboth catastrophic forgetting and plasticity and significantly reducing the\ntrainable parameters. To mitigate inherent challenges such as error propagation\nand semantic drift in continual segmentation, we propose logit manipulation to\neffectively leverage common knowledge across the classes. Experiments on ADE20K\ncontinual panoptic segmentation benchmark demonstrate the superiority of\nECLIPSE, notably its robustness against catastrophic forgetting and its\nreasonable plasticity, achieving a new state-of-the-art. The code is available\nat https://github.com/clovaai/ECLIPSE.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Beomyoung Kim", "Joonsang Yu", "Sung Ju Hwang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2c5"}, "filepath": "data/2311.17286.png", "tags": [], "_media_type": "image", "_rand": 0.9995951031722639, "arXiv_link": "https://arxiv.org/abs/2311.17286", "other_link": "https://github.com/Wuziyi616/LEOD", "title": "LEOD: Label-Efficient Object Detection for Event Cameras", "abstract": "Object detection with event cameras benefits from the sensor's low latency\nand high dynamic range. However, it is costly to fully label event streams for\nsupervised training due to their high temporal resolution. To reduce this cost,\nwe present LEOD, the first method for label-efficient event-based detection.\nOur approach unifies weakly- and semi-supervised object detection with a\nself-training mechanism. We first utilize a detector pre-trained on limited\nlabels to produce pseudo ground truth on unlabeled events. 
Then, the detector\nis re-trained with both real and generated labels. Leveraging the temporal\nconsistency of events, we run bi-directional inference and apply tracking-based\npost-processing to enhance the quality of pseudo labels. To stabilize training\nagainst label noise, we further design a soft anchor assignment strategy. We\nintroduce new experimental protocols to evaluate the task of label-efficient\nevent-based detection on Gen1 and 1Mpx datasets. LEOD consistently outperforms\nsupervised baselines across various labeling ratios. For example, on Gen1, it\nimproves mAP by 8.6% and 7.8% for RVT-S trained with 1% and 2% labels. On 1Mpx,\nRVT-S with 10% labels even surpasses its fully-supervised counterpart using\n100% labels. LEOD maintains its effectiveness even when all labeled data are\navailable, reaching new state-of-the-art results. Finally, we show that our\nmethod readily scales to improve larger detectors as well. Code is released at\nhttps://github.com/Wuziyi616/LEOD", "keywords": ["Efficient and scalable vision"], "authors_list": ["Ziyi Wu", "Mathias Gehrig", "Qing Lyu", "Xudong Liu", "Igor Gilitschenski"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2c6"}, "filepath": "data/2404.11511.png", "tags": [], "_media_type": "image", "_rand": 0.9996858577952838, "arXiv_link": "https://arxiv.org/abs/2404.11511", "other_link": "", "title": "CodedEvents: Optimal Point-Spread-Function Engineering for 3D-Tracking with Event Cameras", "abstract": "Traditional cameras face a trade-off between low-light performance and\nhigh-speed imaging: longer exposure times to capture sufficient light result\nin motion blur, whereas shorter exposures result in Poisson-corrupted noisy\nimages. While burst photography techniques help mitigate this tradeoff,\nconventional cameras are fundamentally limited in their sensor noise\ncharacteristics. Event cameras and single-photon avalanche diode (SPAD) sensors\nhave emerged as promising alternatives to conventional cameras due to their\ndesirable properties. SPADs are capable of single-photon sensitivity with\nmicrosecond temporal resolution, and event cameras can measure brightness\nchanges up to 1 MHz with low bandwidth requirements. We show that these\nproperties are complementary, and can help achieve low-light, high-speed image\nreconstruction with low bandwidth requirements. We introduce a sensor fusion\nframework to combine SPADs with event cameras to improve the reconstruction of\nhigh-speed, low-light scenes while reducing the high bandwidth cost associated\nwith using every SPAD frame. Our evaluation, on both synthetic and real sensor\ndata, demonstrates significant enhancements ( > 5 dB PSNR) in reconstructing\nlow-light scenes at high temporal resolution (100 kHz) compared to conventional\ncameras. 
Event-SPAD fusion shows great promise for real-world applications,\nsuch as robotics or medical imaging.", "keywords": ["Low-level vision", "Medical imaging and biological vision"], "authors_list": ["Sachin Shah", "Matthew Chan", "Haoming Cai", "Jingxi Chen", "Sakshum Kulshrestha", "Chahat Deep Singh", "Yiannis Aloimonos", "Christopher Metzler"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2c7"}, "filepath": "data/2312.03029.png", "tags": [], "_media_type": "image", "_rand": 0.9992004664016778, "arXiv_link": "https://arxiv.org/abs/2312.03029", "other_link": "", "title": "Gaussian Head Avatar: Ultra High-fidelity Head Avatar via Dynamic Gaussians", "abstract": "Creating high-fidelity 3D head avatars has always been a research hotspot,\nbut there remains a great challenge under lightweight sparse view setups. In\nthis paper, we propose Gaussian Head Avatar represented by controllable 3D\nGaussians for high-fidelity head avatar modeling. We optimize the neutral 3D\nGaussians and a fully learned MLP-based deformation field to capture complex\nexpressions. The two parts benefit each other, thereby our method can model\nfine-grained dynamic details while ensuring expression accuracy. Furthermore,\nwe devise a well-designed geometry-guided initialization strategy based on\nimplicit SDF and Deep Marching Tetrahedra for the stability and convergence of\nthe training procedure. Experiments show our approach outperforms other\nstate-of-the-art sparse-view methods, achieving ultra high-fidelity rendering\nquality at 2K resolution even under exaggerated expressions.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Yuelang Xu", "Benwang Chen", "Zhe Li", "Hongwen Zhang", "Lizhen Wang", "Zerong Zheng", "Yebin Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2c8"}, "filepath": "data/2310.01218.png", "tags": [], "_media_type": "image", "_rand": 0.9992356957281756, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2310.01218", "other_link": "", "title": "Rethinking the Objectives of Vector-Quantized Tokenizers for Image Synthesis", "abstract": "The great success of Large Language Models (LLMs) has expanded the potential\nof multimodality, contributing to the gradual evolution of General Artificial\nIntelligence (AGI). A true AGI agent should not only possess the capability to\nperform predefined multi-tasks but also exhibit emergent abilities in an\nopen-world context. However, despite the considerable advancements made by\nrecent multimodal LLMs, they still fall short in effectively unifying\ncomprehension and generation tasks, let alone open-world emergent abilities. We\ncontend that the key to overcoming the present impasse lies in enabling text\nand images to be represented and processed interchangeably within a unified\nautoregressive Transformer. 
To this end, we introduce SEED, an elaborate image\ntokenizer that empowers LLMs with the ability to SEE and Draw at the same time.\nWe identify two crucial design principles: (1) Image tokens should be\nindependent of 2D physical patch positions and instead be produced with a 1D\ncausal dependency, exhibiting intrinsic interdependence that aligns with the\nleft-to-right autoregressive prediction mechanism in LLMs. (2) Image tokens\nshould capture high-level semantics consistent with the degree of semantic\nabstraction in words, and be optimized for both discriminativeness and\nreconstruction during the tokenizer training phase. With SEED tokens, LLM is\nable to perform scalable multimodal autoregression under its original training\nrecipe, i.e., next-word prediction. SEED-LLaMA is therefore produced by\nlarge-scale pretraining and instruction tuning on the interleaved textual and\nvisual data, demonstrating impressive performance on a broad range of\nmultimodal comprehension and generation tasks. More importantly, SEED-LLaMA has\nexhibited compositional emergent abilities such as multi-turn in-context\nmultimodal generation, acting like your AI assistant.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Yuchao Gu", "Xintao Wang", "Yixiao Ge", "Ying Shan", "Mike Zheng Shou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2c9"}, "filepath": "data/2403.03447.png", "tags": [], "_media_type": "image", "_rand": 0.9998948396709457, "arXiv_link": "https://arxiv.org/abs/2403.03447", "other_link": "", "title": "HDRFlow: Real-Time HDR Video Reconstruction with Large Motions", "abstract": "Reconstructing High Dynamic Range (HDR) video from image sequences captured\nwith alternating exposures is challenging, especially in the presence of large\ncamera or object motion. Existing methods typically align low dynamic range\nsequences using optical flow or attention mechanism for deghosting. However,\nthey often struggle to handle large complex motions and are computationally\nexpensive. To address these challenges, we propose a robust and efficient flow\nestimator tailored for real-time HDR video reconstruction, named HDRFlow.\nHDRFlow has three novel designs: an HDR-domain alignment loss (HALoss), an\nefficient flow network with a multi-size large kernel (MLK), and a new HDR flow\ntraining scheme. The HALoss supervises our flow network to learn an\nHDR-oriented flow for accurate alignment in saturated and dark regions. The MLK\ncan effectively model large motions at a negligible cost. In addition, we\nincorporate synthetic data, Sintel, into our training dataset, utilizing both\nits provided forward flow and backward flow generated by us to supervise our\nflow network, enhancing our performance in large motion regions. Extensive\nexperiments demonstrate that our HDRFlow outperforms previous methods on\nstandard benchmarks. 
To the best of our knowledge, HDRFlow is the first\nreal-time HDR video reconstruction method for video sequences captured with\nalternating exposures, capable of processing 720p resolution inputs at 25ms.", "keywords": ["Image and video generation and manipulation", "Low-level vision"], "authors_list": ["Gangwei Xu", "Yujin Wang", "Jinwei Gu", "Tianfan Xue", "Xin Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2ca"}, "filepath": "data/2402.18934.png", "tags": [], "_media_type": "image", "_rand": 0.9996777199738175, "arXiv_link": "https://arxiv.org/abs/2402.18934", "other_link": "", "title": "LiSA: LiDAR Localization with Semantic Awareness", "abstract": "LiDAR-based localization is valuable for applications like mining surveys and\nunderground facility maintenance. However, existing methods can struggle when\ndealing with uninformative geometric structures in challenging scenarios. This\npaper presents RELEAD, a LiDAR-centric solution designed to address\nscan-matching degradation. Our method enables degeneracy-free point cloud\nregistration by solving constrained ESIKF updates in the front end and\nincorporates multisensor constraints, even when dealing with outlier\nmeasurements, through graph optimization based on Graduated Non-Convexity\n(GNC). Additionally, we propose a robust Incremental Fixed Lag Smoother (rIFL)\nfor efficient GNC-based optimization. RELEAD has undergone extensive evaluation\nin degenerate scenarios and has outperformed existing state-of-the-art\nLiDAR-Inertial odometry and LiDAR-Visual-Inertial odometry methods.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Bochun Yang", "Zijun Li", "Wen Li", "zhipeng cai", "Chenglu Wen", "Yu Zang", "Matthias Mueller", "Cheng Wang"], "category_name": "Robotics", "all_categories": ["Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2cb"}, "filepath": "data/2309.05950.png", "tags": [], "_media_type": "image", "_rand": 0.9994256660694854, "arXiv_link": "https://arxiv.org/abs/2309.05950", "other_link": "", "title": "Language Models as Black-Box Optimizers for Vision-Language Models", "abstract": "Vision-language models (VLMs) pre-trained on web-scale datasets have\ndemonstrated remarkable capabilities on downstream tasks when fine-tuned with\nminimal data. However, many VLMs rely on proprietary data and are not\nopen-source, which restricts the use of white-box approaches for fine-tuning.\nAs such, we aim to develop a black-box approach to optimize VLMs through\nnatural language prompts, thereby avoiding the need to access model parameters,\nfeature embeddings, or even output logits. We propose employing chat-based LLMs\nto search for the best text prompt for VLMs. Specifically, we adopt an\nautomatic hill-climbing procedure that converges to an effective prompt by\nevaluating the performance of current prompts and asking LLMs to refine them\nbased on textual feedback, all within a conversational process without\nhuman-in-the-loop. In a challenging 1-shot image classification setup, our\nsimple approach surpasses the white-box continuous prompting method (CoOp) by\nan average of 1.5% across 11 datasets including ImageNet. Our approach also\noutperforms both human-engineered and LLM-generated prompts. 
We highlight the\nadvantage of conversational feedback that incorporates both positive and\nnegative prompts, suggesting that LLMs can utilize the implicit gradient\ndirection in textual feedback for a more efficient search. In addition, we find\nthat the text prompts generated through our strategy are not only more\ninterpretable but also transfer well across different VLM architectures in a\nblack-box manner. Lastly, we apply our framework to optimize the\nstate-of-the-art black-box VLM (DALL-E 3) for text-to-image generation, prompt\ninversion, and personalization.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Shihong Liu", "Samuel Yu", "Zhiqiu Lin", "Deepak Pathak", "Deva Ramanan"], "category_name": "Computation and Language", "all_categories": ["Computation and Language", "Computer Vision and Pattern Recognition", "Machine Learning", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2cc"}, "filepath": "data/2401.12425.png", "tags": [], "_media_type": "image", "_rand": 0.999991961908858, "arXiv_link": "https://arxiv.org/abs/2401.12425", "other_link": "", "title": "The Neglected Tails of Vision-Language Models", "abstract": "Vision-language models (VLMs) excel in zero-shot recognition but their\nperformance varies greatly across different visual concepts. For example,\nalthough CLIP achieves impressive accuracy on ImageNet (60-80%), its\nperformance drops below 10% for more than ten concepts like night snake,\npresumably due to their limited presence in the pretraining data. However,\nmeasuring the frequency of concepts in VLMs' large-scale datasets is\nchallenging. We address this by using large language models (LLMs) to count the\nnumber of pretraining texts that contain synonyms of these concepts. Our\nanalysis confirms that popular datasets, such as LAION, exhibit a long-tailed\nconcept distribution, yielding biased performance in VLMs. We also find that\ndownstream applications of VLMs, including visual chatbots (e.g., GPT-4V) and\ntext-to-image models (e.g., Stable Diffusion), often fail to recognize or\ngenerate images of rare concepts identified by our method. To mitigate the\nimbalanced performance of zero-shot VLMs, we propose REtrieval-Augmented\nLearning (REAL). First, instead of prompting VLMs using the original class\nnames, REAL uses their most frequent synonyms found in pretraining texts. This\nsimple change already outperforms costly human-engineered and LLM-enriched\nprompts over nine benchmark datasets. Second, REAL trains a linear classifier\non a small yet balanced set of pretraining data retrieved using concept\nsynonyms. 
REAL surpasses the previous zero-shot SOTA, using 400x less storage\nand 10,000x less training time!", "keywords": ["Large multimodal models and prompting techniques", "Efficient and scalable vision"], "authors_list": ["Shubham Parashar", "Tian Liu", "Zhiqiu Lin", "Xiangjue Dong", "Yanan Li", "James Caverlee", "Deva Ramanan", "Shu Kong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2cd"}, "filepath": "data/2405.10185.png", "tags": [], "_media_type": "image", "_rand": 0.9994235159797277, "arXiv_link": "https://arxiv.org/abs/2405.10185", "other_link": "", "title": "DiverGen: Improving Instance Segmentation by Learning Wider Data Distribution with More Diverse Generative Data", "abstract": "Instance segmentation is data-hungry, and as model capacity increases, data\nscale becomes crucial for improving the accuracy. Most instance segmentation\ndatasets today require costly manual annotation, limiting their data scale.\nModels trained on such data are prone to overfitting on the training set,\nespecially for those rare categories. While recent works have delved into\nexploiting generative models to create synthetic datasets for data\naugmentation, these approaches do not efficiently harness the full potential of\ngenerative models.\n To address these issues, we introduce a more efficient strategy to construct\ngenerative datasets for data augmentation, termed DiverGen. Firstly, we provide\nan explanation of the role of generative data from the perspective of\ndistribution discrepancy. We investigate the impact of different data on the\ndistribution learned by the model. We argue that generative data can expand the\ndata distribution that the model can learn, thus mitigating overfitting.\nAdditionally, we find that the diversity of generative data is crucial for\nimproving model performance and enhance it through various strategies,\nincluding category diversity, prompt diversity, and generative model diversity.\nWith these strategies, we can scale the data to millions while maintaining the\ntrend of model performance improvement. On the LVIS dataset, DiverGen\nsignificantly outperforms the strong model X-Paste, achieving +1.1 box AP and\n+1.1 mask AP across all categories, and +1.9 box AP and +2.5 mask AP for rare\ncategories.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Chengxiang Fan", "Muzhi Zhu", "Hao Chen", "Yang Liu", "Weijia Wu", "Huaqi Zhang", "Chunhua Shen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2ce"}, "filepath": "data/2403.03532.png", "tags": [], "_media_type": "image", "_rand": 0.9994494876130077, "arXiv_link": "https://arxiv.org/abs/2403.03532", "other_link": "", "title": "Extend Your Own Correspondences: Unsupervised Distant Point Cloud Registration by Progressive Distance Extension", "abstract": "Registration of point clouds collected from a pair of distant vehicles\nprovides a comprehensive and accurate 3D view of the driving scenario, which is\nvital for driving safety related applications, yet existing literature suffers\nfrom the expensive pose label acquisition and the deficiency to generalize to\nnew data distributions. 
In this paper, we propose EYOC, an unsupervised distant\npoint cloud registration method that adapts to new point cloud distributions on\nthe fly, requiring no global pose labels. The core idea of EYOC is to train a\nfeature extractor in a progressive fashion, where in each round, the feature\nextractor, trained with near point cloud pairs, can label slightly farther\npoint cloud pairs, enabling self-supervision on such far point cloud pairs.\nThis process continues until the derived extractor can be used to register\ndistant point clouds. Particularly, to enable high-fidelity correspondence\nlabel generation, we devise an effective spatial filtering scheme to select the\nmost representative correspondences to register a point cloud pair, and then\nutilize the aligned point clouds to discover more correct correspondences.\nExperiments show that EYOC can achieve comparable performance with\nstate-of-the-art supervised methods at a lower training cost. Moreover, it\noutwits supervised methods regarding generalization performance on new data\ndistributions.", "keywords": [], "authors_list": ["Quan Liu", "Hongzi Zhu", "Zhenxi Wang", "Yunsong Zhou", "Shan Chang", "Minyi Guo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2cf"}, "filepath": "data/2312.04803.png", "tags": [], "_media_type": "image", "_rand": 0.9995661605024826, "arXiv_link": "https://arxiv.org/abs/2312.04803", "other_link": "", "title": "SuperNormal: Neural Surface Reconstruction via Multi-View Normal Integration", "abstract": "We present SuperNormal, a fast, high-fidelity approach to multi-view 3D\nreconstruction using surface normal maps. With a few minutes, SuperNormal\nproduces detailed surfaces on par with 3D scanners. We harness volume rendering\nto optimize a neural signed distance function (SDF) powered by multi-resolution\nhash encoding. To accelerate training, we propose directional finite difference\nand patch-based ray marching to approximate the SDF gradients numerically.\nWhile not compromising reconstruction quality, this strategy is nearly twice as\nefficient as analytical gradients and about three times faster than\naxis-aligned finite difference. Experiments on the benchmark dataset\ndemonstrate the superiority of SuperNormal in efficiency and accuracy compared\nto existing multi-view photometric stereo methods. On our captured objects,\nSuperNormal produces more fine-grained geometry than recent neural 3D\nreconstruction methods.", "keywords": ["Efficient and scalable vision", "Deep learning architectures and techniques"], "authors_list": ["Xu Cao", "Takafumi Taketomi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2d0"}, "filepath": "data/2404.03831.png", "tags": [], "_media_type": "image", "_rand": 0.9996152778264306, "arXiv_link": "https://arxiv.org/abs/2404.03831", "other_link": "", "title": "SleepVST: Sleep Staging from Near-Infrared Video Signals using Pre-Trained Transformers", "abstract": "Advances in camera-based physiological monitoring have enabled the robust,\nnon-contact measurement of respiration and the cardiac pulse, which are known\nto be indicative of the sleep stage. 
This has led to research into camera-based\nsleep monitoring as a promising alternative to \"gold-standard\" polysomnography,\nwhich is cumbersome, expensive to administer, and hence unsuitable for\nlonger-term clinical studies. In this paper, we introduce SleepVST, a\ntransformer model which enables state-of-the-art performance in camera-based\nsleep stage classification (sleep staging). After pre-training on contact\nsensor data, SleepVST outperforms existing methods for cardio-respiratory sleep\nstaging on the SHHS and MESA datasets, achieving total Cohen's kappa scores of\n0.75 and 0.77 respectively. We then show that SleepVST can be successfully\ntransferred to cardio-respiratory waveforms extracted from video, enabling\nfully contact-free sleep staging. Using a video dataset of 50 nights, we\nachieve a total accuracy of 78.8\\% and a Cohen's $\\kappa$ of 0.71 in four-class\nvideo-based sleep staging, setting a new state-of-the-art in the domain.", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Jonathan F. Carter", "Joao Jorge", "Oliver Gibson", "Lionel Tarassenko"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Human-Computer Interaction", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2d1"}, "filepath": "data/2404.16552.png", "tags": [], "_media_type": "image", "_rand": 0.9998926779491834, "arXiv_link": "https://arxiv.org/abs/2404.16552", "other_link": "https://github.com/petrhruby97/efficient\\_absolute}", "title": "Efficient Solution of Point-Line Absolute Pose", "abstract": "We revisit certain problems of pose estimation based on 3D--2D\ncorrespondences between features which may be points or lines. Specifically, we\naddress the two previously-studied minimal problems of estimating camera\nextrinsics from $p \\in \\{ 1, 2 \\}$ point--point correspondences and $l=3-p$\nline--line correspondences. To the best of our knowledge, all of the\npreviously-known practical solutions to these problems required computing the\nroots of degree $\\ge 4$ (univariate) polynomials when $p=2$, or degree $\\ge 8$\npolynomials when $p=1.$ We describe and implement two elementary solutions\nwhich reduce the degrees of the needed polynomials from $4$ to $2$ and from $8$\nto $4$, respectively. We show experimentally that the resulting solvers are\nnumerically stable and fast: when compared to the previous state-of-the art, we\nmay obtain nearly an order of magnitude speedup. The code is available at\n\\url{https://github.com/petrhruby97/efficient\\_absolute}", "keywords": ["Efficient and scalable vision"], "authors_list": ["Petr Hruby", "Timothy Duff", "Marc Pollefeys"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Unknown", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2d2"}, "filepath": "data/2306.02240.png", "tags": [], "_media_type": "image", "_rand": 0.9993087110352428, "arXiv_link": "https://arxiv.org/abs/2306.02240", "other_link": "", "title": "ProTeCt: Prompt Tuning for Taxonomic Open Set Classification", "abstract": "Visual-language foundation models, like CLIP, learn generalized\nrepresentations that enable zero-shot open-set classification. Few-shot\nadaptation methods, based on prompt tuning, have been shown to further improve\nperformance on downstream datasets. 
However, these methods do not fare well in\nthe taxonomic open set (TOS) setting, where the classifier is asked to make\npredictions from label sets across different levels of semantic granularity.\nFrequently, they infer incorrect labels at coarser taxonomic class levels, even\nwhen the inference at the leaf level (original class labels) is correct. To\naddress this problem, we propose a prompt tuning technique that calibrates the\nhierarchical consistency of model predictions. A set of metrics of hierarchical\nconsistency, the Hierarchical Consistent Accuracy (HCA) and the Mean Treecut\nAccuracy (MTA), are first proposed to evaluate TOS model performance. A new\nPrompt Tuning for Hierarchical Consistency (ProTeCt) technique is then proposed\nto calibrate classification across label set granularities. Results show that\nProTeCt can be combined with existing prompt tuning methods to significantly\nimprove TOS classification without degrading the leaf level classification\nperformance.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Tz-Ying Wu", "Chih-Hui Ho", "Nuno Vasconcelos"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2d3"}, "filepath": "data/2404.01415.png", "tags": [], "_media_type": "image", "_rand": 0.9990811191091793, "arXiv_link": "https://arxiv.org/abs/2404.01415", "other_link": "", "title": "On the Faithfulness of Vision Transformer Explanations", "abstract": "To interpret Vision Transformers, post-hoc explanations assign salience\nscores to input pixels, providing human-understandable heatmaps. However,\nwhether these interpretations reflect true rationales behind the model's output\nis still underexplored. To address this gap, we study the faithfulness\ncriterion of explanations: the assigned salience scores should represent the\ninfluence of the corresponding input pixels on the model's predictions. To\nevaluate faithfulness, we introduce Salience-guided Faithfulness Coefficient\n(SaCo), a novel evaluation metric leveraging essential information of salience\ndistribution. Specifically, we conduct pair-wise comparisons among distinct\npixel groups and then aggregate the differences in their salience scores,\nresulting in a coefficient that indicates the explanation's degree of\nfaithfulness. Our explorations reveal that current metrics struggle to\ndifferentiate between advanced explanation methods and Random Attribution,\nthereby failing to capture the faithfulness property. In contrast, our proposed\nSaCo offers a reliable faithfulness measurement, establishing a robust metric\nfor interpretations. 
Furthermore, our SaCo demonstrates that the use of\ngradient and multi-layer aggregation can markedly enhance the faithfulness of\nattention-based explanation, shedding light on potential paths for advancing\nVision Transformer explainability.", "keywords": [], "authors_list": ["Junyi Wu", "Weitai Kang", "Hao Tang", "Yuan Hong", "Yan Yan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2d4"}, "filepath": "data/2404.03913.png", "tags": [], "_media_type": "image", "_rand": 0.9995565247057753, "arXiv_link": "https://arxiv.org/abs/2404.03913", "other_link": "", "title": "Concept Weaver: Enabling Multi-Concept Fusion in Text-to-Image Models", "abstract": "While there has been significant progress in customizing text-to-image\ngeneration models, generating images that combine multiple personalized\nconcepts remains challenging. In this work, we introduce Concept Weaver, a\nmethod for composing customized text-to-image diffusion models at inference\ntime. Specifically, the method breaks the process into two steps: creating a\ntemplate image aligned with the semantics of input prompts, and then\npersonalizing the template using a concept fusion strategy. The fusion strategy\nincorporates the appearance of the target concepts into the template image\nwhile retaining its structural details. The results indicate that our method\ncan generate multiple custom concepts with higher identity fidelity compared to\nalternative approaches. Furthermore, the method is shown to seamlessly handle\nmore than two concepts and closely follow the semantic meaning of the input\nprompt without blending appearances across different subjects.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Gihyun Kwon", "Simon Jenni", "Ding Li", "Joon-Young Lee", "Jong Chul Ye", "Fabian Caba Heilbron"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2d5"}, "filepath": "data/2402.05235.png", "tags": [], "_media_type": "image", "_rand": 0.9991436507749827, "arXiv_link": "https://arxiv.org/abs/2402.05235", "other_link": "https://yashkant.github.io/spad", "title": "SPAD: Spatially Aware Multiview Diffusers", "abstract": "We present SPAD, a novel approach for creating consistent multi-view images\nfrom text prompts or single images. To enable multi-view generation, we\nrepurpose a pretrained 2D diffusion model by extending its self-attention\nlayers with cross-view interactions, and fine-tune it on a high quality subset\nof Objaverse. We find that a naive extension of the self-attention proposed in\nprior work (e.g. MVDream) leads to content copying between views. Therefore, we\nexplicitly constrain the cross-view attention based on epipolar geometry. To\nfurther enhance 3D consistency, we utilize Plucker coordinates derived from\ncamera rays and inject them as positional encoding. This enables SPAD to reason\nover spatial proximity in 3D well. In contrast to recent works that can only\ngenerate views at fixed azimuth and elevation, SPAD offers full camera control\nand achieves state-of-the-art results in novel view synthesis on unseen objects\nfrom the Objaverse and Google Scanned Objects datasets. 
Finally, we demonstrate\nthat text-to-3D generation using SPAD prevents the multi-face Janus issue. See\nmore details at our webpage: https://yashkant.github.io/spad", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yash Kant", "Aliaksandr Siarohin", "Ziyi Wu", "Michael Vasilkovsky", "Guocheng Qian", "Jian Ren", "Riza Alp Guler", "Bernard Ghanem", "Sergey Tulyakov", "Igor Gilitschenski"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2d6"}, "filepath": "data/2405.17240.png", "tags": [], "_media_type": "image", "_rand": 0.9995658955095219, "arXiv_link": "https://arxiv.org/abs/2405.17240", "other_link": "https://github.com/Snowfallingplum/CSD-MT.", "title": "Content-Style Decoupling for Unsupervised Makeup Transfer without Generating Pseudo Ground Truth", "abstract": "The absence of real targets to guide the model training is one of the main\nproblems with the makeup transfer task. Most existing methods tackle this\nproblem by synthesizing pseudo ground truths (PGTs). However, the generated\nPGTs are often sub-optimal and their imprecision will eventually lead to\nperformance degradation. To alleviate this issue, in this paper, we propose a\nnovel Content-Style Decoupled Makeup Transfer (CSD-MT) method, which works in a\npurely unsupervised manner and thus eliminates the negative effects of\ngenerating PGTs. Specifically, based on the frequency characteristics analysis,\nwe assume that the low-frequency (LF) component of a face image is more\nassociated with its makeup style information, while the high-frequency (HF)\ncomponent is more related to its content details. This assumption allows CSD-MT\nto decouple the content and makeup style information in each face image through\nthe frequency decomposition. After that, CSD-MT realizes makeup transfer by\nmaximizing the consistency of these two types of information between the\ntransferred result and input images, respectively. Two newly designed loss\nfunctions are also introduced to further improve the transfer performance.\nExtensive quantitative and qualitative analyses show the effectiveness of our\nCSD-MT method. Our code is available at\nhttps://github.com/Snowfallingplum/CSD-MT.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Zhaoyang Sun", "Shengwu Xiong", "Yaxiong Chen", "Yi Rong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2d7"}, "filepath": "data/2311.11125.png", "tags": [], "_media_type": "image", "_rand": 0.9998048056424522, "arXiv_link": "https://arxiv.org/abs/2311.11125", "other_link": "", "title": "SecondPose: SE(3)-Consistent Dual-Stream Feature Fusion for Category-Level Pose Estimation", "abstract": "Category-level object pose estimation, aiming to predict the 6D pose and 3D\nsize of objects from known categories, typically struggles with large\nintra-class shape variation. Existing works utilizing mean shapes often fall\nshort of capturing this variation. To address this issue, we present\nSecondPose, a novel approach integrating object-specific geometric features\nwith semantic category priors from DINOv2. 
Leveraging the advantage of DINOv2\nin providing SE(3)-consistent semantic features, we hierarchically extract two\ntypes of SE(3)-invariant geometric features to further encapsulate\nlocal-to-global object-specific information. These geometric features are then\npoint-aligned with DINOv2 features to establish a consistent object\nrepresentation under SE(3) transformations, facilitating the mapping from\ncamera space to the pre-defined canonical space, thus further enhancing pose\nestimation. Extensive experiments on NOCS-REAL275 demonstrate that SecondPose\nachieves a 12.4% leap forward over the state-of-the-art. Moreover, on a more\ncomplex dataset HouseCat6D which provides photometrically challenging objects,\nSecondPose still surpasses other competitors by a large margin.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yamei Chen", "Yan Di", "Guangyao Zhai", "Fabian Manhardt", "Chenyangguang Zhang", "Ruida Zhang", "Federico Tombari", "Nassir Navab", "Benjamin Busam"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2d8"}, "filepath": "data/2401.09603.png", "tags": [], "_media_type": "image", "_rand": 0.9996585613045199, "arXiv_link": "https://arxiv.org/abs/2401.09603", "other_link": "", "title": "Rethinking FID: Towards a Better Evaluation Metric for Image Generation", "abstract": "As with many machine learning problems, the progress of image generation\nmethods hinges on good evaluation metrics. One of the most popular is the\nFrechet Inception Distance (FID). FID estimates the distance between a\ndistribution of Inception-v3 features of real images, and those of images\ngenerated by the algorithm. We highlight important drawbacks of FID:\nInception's poor representation of the rich and varied content generated by\nmodern text-to-image models, incorrect normality assumptions, and poor sample\ncomplexity. We call for a reevaluation of FID's use as the primary quality\nmetric for generated images. We empirically demonstrate that FID contradicts\nhuman raters, it does not reflect gradual improvement of iterative\ntext-to-image models, it does not capture distortion levels, and that it\nproduces inconsistent results when varying the sample size. We also propose an\nalternative new metric, CMMD, based on richer CLIP embeddings and the maximum\nmean discrepancy distance with the Gaussian RBF kernel. It is an unbiased\nestimator that does not make any assumptions on the probability distribution of\nthe embeddings and is sample efficient. 
Through extensive experiments and\nanalysis, we demonstrate that FID-based evaluations of text-to-image models may\nbe unreliable, and that CMMD offers a more robust and reliable assessment of\nimage quality.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Sadeep Jayasumana", "Srikumar Ramalingam", "Andreas Veit", "Daniel Glasner", "Ayan Chakrabarti", "Sanjiv Kumar"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2d9"}, "filepath": "data/2403.00274.png", "tags": [], "_media_type": "image", "_rand": 0.9990182970140846, "arXiv_link": "https://arxiv.org/abs/2403.00274", "other_link": "", "title": "CustomListener: Text-guided Responsive Interaction for User-friendly Listening Head Generation", "abstract": "Listening head generation aims to synthesize a non-verbal responsive listener\nhead by modeling the correlation between the speaker and the listener in\ndynamic conversion.The applications of listener agent generation in virtual\ninteraction have promoted many works achieving the diverse and fine-grained\nmotion generation. However, they can only manipulate motions through simple\nemotional labels, but cannot freely control the listener's motions. Since\nlistener agents should have human-like attributes (e.g. identity, personality)\nwhich can be freely customized by users, this limits their realism. In this\npaper, we propose a user-friendly framework called CustomListener to realize\nthe free-form text prior guided listener generation. To achieve\nspeaker-listener coordination, we design a Static to Dynamic Portrait module\n(SDP), which interacts with speaker information to transform static text into\ndynamic portrait token with completion rhythm and amplitude information. To\nachieve coherence between segments, we design a Past Guided Generation Module\n(PGG) to maintain the consistency of customized listener attributes through the\nmotion prior, and utilize a diffusion-based structure conditioned on the\nportrait token and the motion prior to realize the controllable generation. To\ntrain and evaluate our model, we have constructed two text-annotated listening\nhead datasets based on ViCo and RealTalk, which provide text-video paired\nlabels. Extensive experiments have verified the effectiveness of our model.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Xi Liu", "Ying Guo", "Cheng Zhen", "Tong Li", "Yingying Ao", "Pengfei Yan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Sound", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2da"}, "filepath": "data/2404.17528.png", "tags": [], "_media_type": "image", "_rand": 0.9991047365266593, "arXiv_link": "https://arxiv.org/abs/2404.17528", "other_link": "https://github.com/TQTQliu/GeFu", "title": "Geometry-aware Reconstruction and Fusion-refined Rendering for Generalizable Neural Radiance Fields", "abstract": "Generalizable NeRF aims to synthesize novel views for unseen scenes. Common\npractices involve constructing variance-based cost volumes for geometry\nreconstruction and encoding 3D descriptors for decoding novel views. 
However,\nexisting methods show limited generalization ability in challenging conditions\ndue to inaccurate geometry, sub-optimal descriptors, and decoding strategies.\nWe address these issues point by point. First, we find the variance-based cost\nvolume exhibits failure patterns as the features of pixels corresponding to the\nsame point can be inconsistent across different views due to occlusions or\nreflections. We introduce an Adaptive Cost Aggregation (ACA) approach to\namplify the contribution of consistent pixel pairs and suppress inconsistent\nones. Unlike previous methods that solely fuse 2D features into descriptors,\nour approach introduces a Spatial-View Aggregator (SVA) to incorporate 3D\ncontext into descriptors through spatial and inter-view interaction. When\ndecoding the descriptors, we observe the two existing decoding strategies excel\nin different areas, which are complementary. A Consistency-Aware Fusion (CAF)\nstrategy is proposed to leverage the advantages of both. We incorporate the\nabove ACA, SVA, and CAF into a coarse-to-fine framework, termed Geometry-aware\nReconstruction and Fusion-refined Rendering (GeFu). GeFu attains\nstate-of-the-art performance across multiple datasets. Code is available at\nhttps://github.com/TQTQliu/GeFu .", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["TIANQI LIU", "Xinyi Ye", "Min Shi", "Zihao Huang", "Zhiyu Pan", "Zhan Peng", "Zhiguo Cao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2db"}, "filepath": "data/2403.00592.png", "tags": [], "_media_type": "image", "_rand": 0.9992738219199632, "arXiv_link": "https://arxiv.org/abs/2403.00592", "other_link": "https://github.com/ZhaochongAn/COSeg", "title": "Rethinking Few-shot 3D Point Cloud Semantic Segmentation", "abstract": "This paper revisits few-shot 3D point cloud semantic segmentation (FS-PCS),\nwith a focus on two significant issues in the state-of-the-art: foreground\nleakage and sparse point distribution. The former arises from non-uniform point\nsampling, allowing models to distinguish the density disparities between\nforeground and background for easier segmentation. The latter results from\nsampling only 2,048 points, limiting semantic information and deviating from\nthe real-world practice. To address these issues, we introduce a standardized\nFS-PCS setting, upon which a new benchmark is built. Moreover, we propose a\nnovel FS-PCS model. While previous methods are based on feature optimization by\nmainly refining support features to enhance prototypes, our method is based on\ncorrelation optimization, referred to as Correlation Optimization Segmentation\n(COSeg). Specifically, we compute Class-specific Multi-prototypical Correlation\n(CMC) for each query point, representing its correlations to category\nprototypes. Then, we propose the Hyper Correlation Augmentation (HCA) module to\nenhance CMC. Furthermore, tackling the inherent property of few-shot training\nto incur base susceptibility for models, we propose to learn non-parametric\nprototypes for the base classes during training. The learned base prototypes\nare used to calibrate correlations for the background class through a Base\nPrototypes Calibration (BPC) module. Experiments on popular datasets\ndemonstrate the superiority of COSeg over existing methods. 
The code is\navailable at: https://github.com/ZhaochongAn/COSeg", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Zhaochong An", "Guolei Sun", "Yun Liu", "Fayao Liu", "Zongwei Wu", "Dan Wang", "Luc Van Gool", "Serge Belongie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2dc"}, "filepath": "data/2401.16741v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996885706073451, "arXiv_link": "https://arxiv.org/abs/2401.16741v1", "other_link": "", "title": "MESA: Matching Everything by Segmenting Anything", "abstract": "Feature matching is a crucial task in the field of computer vision, which\ninvolves finding correspondences between images. Previous studies achieve\nremarkable performance using learning-based feature comparison. However, the\npervasive presence of matching redundancy between images gives rise to\nunnecessary and error-prone computations in these methods, imposing limitations\non their accuracy. To address this issue, we propose MESA, a novel approach to\nestablish precise area (or region) matches for efficient matching redundancy\nreduction. MESA first leverages the advanced image understanding capability of\nSAM, a state-of-the-art foundation model for image segmentation, to obtain\nimage areas with implicit semantic. Then, a multi-relational graph is proposed\nto model the spatial structure of these areas and construct their scale\nhierarchy. Based on graphical models derived from the graph, the area matching\nis reformulated as an energy minimization task and effectively resolved.\nExtensive experiments demonstrate that MESA yields substantial precision\nimprovement for multiple point matchers in indoor and outdoor downstream tasks,\ne.g. +13.61% for DKM in indoor pose estimation.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yesheng Zhang", "Xu Zhao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2dd"}, "filepath": "data/2404.02900.png", "tags": [], "_media_type": "image", "_rand": 0.9998351352410143, "arXiv_link": "https://arxiv.org/abs/2404.02900", "other_link": "", "title": "DeiT-LT: Distillation Strikes Back for Vision Transformer Training on Long-Tailed Datasets", "abstract": "Vision Transformer (ViT) has emerged as a prominent architecture for various\ncomputer vision tasks. In ViT, we divide the input image into patch tokens and\nprocess them through a stack of self attention blocks. However, unlike\nConvolutional Neural Networks (CNN), ViTs simple architecture has no\ninformative inductive bias (e.g., locality,etc. ). Due to this, ViT requires a\nlarge amount of data for pre-training. Various data efficient approaches (DeiT)\nhave been proposed to train ViT on balanced datasets effectively. However,\nlimited literature discusses the use of ViT for datasets with long-tailed\nimbalances. In this work, we introduce DeiT-LT to tackle the problem of\ntraining ViTs from scratch on long-tailed datasets. In DeiT-LT, we introduce an\nefficient and effective way of distillation from CNN via distillation DIST\ntoken by using out-of-distribution images and re-weighting the distillation\nloss to enhance focus on tail classes. 
This leads to the learning of local\nCNN-like features in early ViT blocks, improving generalization for tail\nclasses. Further, to mitigate overfitting, we propose distilling from a flat\nCNN teacher, which leads to learning low-rank generalizable features for DIST\ntokens across all ViT blocks. With the proposed DeiT-LT scheme, the\ndistillation DIST token becomes an expert on the tail classes, and the\nclassifier CLS token becomes an expert on the head classes. The experts help to\neffectively learn features corresponding to both the majority and minority\nclasses using a distinct set of tokens within the same ViT architecture. We\nshow the effectiveness of DeiT-LT for training ViT from scratch on datasets\nranging from small-scale CIFAR-10 LT to large-scale iNaturalist-2018.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Harsh Rangwani", "Pradipto Mondal", "Mayank Mishra", "Ashish Asokan", "R. Venkatesh Babu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2de"}, "filepath": "data/2312.03626.png", "tags": [], "_media_type": "image", "_rand": 0.9999087518206268, "arXiv_link": "https://arxiv.org/abs/2312.03626", "other_link": "", "title": "TokenCompose: Text-to-Image Diffusion with Token-level Supervision", "abstract": "We present TokenCompose, a Latent Diffusion Model for text-to-image\ngeneration that achieves enhanced consistency between user-specified text\nprompts and model-generated images. Despite its tremendous success, the\nstandard denoising process in the Latent Diffusion Model takes text prompts as\nconditions only, absent explicit constraint for the consistency between the\ntext prompts and the image contents, leading to unsatisfactory results for\ncomposing multiple object categories. TokenCompose aims to improve\nmulti-category instance composition by introducing the token-wise consistency\nterms between the image content and object segmentation maps in the finetuning\nstage. TokenCompose can be applied directly to the existing training pipeline\nof text-conditioned diffusion models without extra human labeling information.\nBy finetuning Stable Diffusion, the model exhibits significant improvements in\nmulti-category instance composition and enhanced photorealism for its generated\nimages.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Zirui Wang", "Zhizhou Sha", "Zheng Ding", "Yilin Wang", "Zhuowen Tu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2df"}, "filepath": "data/2403.04583.png", "tags": [], "_media_type": "image", "_rand": 0.9995758303300107, "arXiv_link": "https://arxiv.org/abs/2403.04583", "other_link": "https://github.com/ChaehyeonSong/discocal.", "title": "Unbiased Estimator for Distorted Conic in Camera Calibration", "abstract": "In the literature, points and conics have been major features for camera\ngeometric calibration. Although conics are more informative features than\npoints, the loss of the conic property under distortion has critically limited\nthe utility of conic features in camera calibration. 
Many existing approaches\naddressed conic-based calibration by ignoring distortion or introducing 3D\nspherical targets to circumvent this limitation. In this paper, we present a\nnovel formulation for conic-based calibration using moments. Our derivation is\nbased on the mathematical finding that the first moment can be estimated\nwithout bias even under distortion. This allows us to track moment changes\nduring projection and distortion, ensuring the preservation of the first moment\nof the distorted conic. With an unbiased estimator, the circular patterns can\nbe accurately detected at the sub-pixel level and can now be fully exploited\nfor an entire calibration pipeline, resulting in significantly improved\ncalibration. The entire code is readily available from\nhttps://github.com/ChaehyeonSong/discocal.", "keywords": ["Computational imaging and physics-based vision"], "authors_list": ["Chaehyeon Song", "Jaeho Shin", "Myung-Hwan Jeon", "Jongwoo Lim", "Ayoung Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2e0"}, "filepath": "data/2312.16943.png", "tags": [], "_media_type": "image", "_rand": 0.9997895548636262, "arXiv_link": "https://arxiv.org/abs/2312.16943", "other_link": "", "title": "Unleashing Channel Potential: Space-Frequency Selection Convolution for SAR Object Detection", "abstract": "Deep learning has driven significant progress in object detection using\nSynthetic Aperture Radar (SAR) imagery. Existing methods, while achieving\npromising results, often struggle to effectively integrate local and global\ninformation, particularly direction-aware features. This paper proposes\nSAR-Net, a novel framework specifically designed for global fusion of\ndirection-aware information in SAR object detection. SAR-Net leverages two key\ninnovations: the Unity Compensation Mechanism (UCM) and the Direction-aware\nAttention Module (DAM). UCM facilitates the establishment of complementary\nrelationships among features across different scales, enabling efficient global\ninformation fusion and transmission. Additionally, DAM, through bidirectional\nattention polymerization, captures direction-aware information, effectively\neliminating background interference. Extensive experiments demonstrate the\neffectiveness of SAR-Net, achieving state-of-the-art results on aircraft\n(SAR-AIRcraft-1.0) and ship datasets (SSDD, HRSID), confirming its\ngeneralization capability and robustness.", "keywords": ["Remote sensing and photogrammetry"], "authors_list": ["Ke Li", "Di Wang", "Zhangyuan Hu", "Wenxuan Zhu", "Shaofeng Li", "Quan Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2e1"}, "filepath": "data/2403.16379.png", "tags": [], "_media_type": "image", "_rand": 0.9998363681017843, "arXiv_link": "https://arxiv.org/abs/2403.16379", "other_link": "https://github.com/thu-nics/FlashEval.", "title": "FlashEval: Towards Fast and Accurate Evaluation of Text-to-image Diffusion Generative Models", "abstract": "In recent years, there has been significant progress in the development of\ntext-to-image generative models. Evaluating the quality of the generative\nmodels is one essential step in the development process. 
Unfortunately, the\nevaluation process could consume a significant amount of computational\nresources, making the required periodic evaluation of model performance (e.g.,\nmonitoring training progress) impractical. Therefore, we seek to improve the\nevaluation efficiency by selecting the representative subset of the text-image\ndataset. We systematically investigate the design choices, including the\nselection criteria (textural features or image-based metrics) and the selection\ngranularity (prompt-level or set-level). We find that the insights from prior\nwork on subset selection for training data do not generalize to this problem,\nand we propose FlashEval, an iterative search algorithm tailored to evaluation\ndata selection. We demonstrate the effectiveness of FlashEval on ranking\ndiffusion models with various configurations, including architectures,\nquantization levels, and sampler schedules on COCO and DiffusionDB datasets.\nOur searched 50-item subset could achieve comparable evaluation quality to the\nrandomly sampled 500-item subset for COCO annotations on unseen models,\nachieving a 10x evaluation speedup. We release the condensed subset of these\ncommonly used datasets to help facilitate diffusion algorithm design and\nevaluation, and open-source FlashEval as a tool for condensing future datasets,\naccessible at https://github.com/thu-nics/FlashEval.", "keywords": ["Efficient and scalable vision"], "authors_list": ["LIn Zhao", "Tianchen Zhao", "Zinan Lin", "Xuefei Ning", "Guohao Dai", "Huazhong Yang", "Yu Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2e2"}, "filepath": "data/2404.05207.png", "tags": [], "_media_type": "image", "_rand": 0.9994600314516509, "arXiv_link": "https://arxiv.org/abs/2404.05207", "other_link": "", "title": "Fair-VPT: Fair Visual Prompt Tuning for Image Classification", "abstract": "Recent progress has shown great potential of visual prompt tuning (VPT) when\nadapting pre-trained vision transformers to various downstream tasks. However,\nmost existing solutions independently optimize prompts at each layer, thereby\nneglecting the usage of task-relevant information encoded in prompt tokens\nacross layers. Additionally, existing prompt structures are prone to\ninterference from task-irrelevant noise in input images, which can do harm to\nthe sharing of task-relevant information. In this paper, we propose a novel VPT\napproach, \\textbf{iVPT}. It innovatively incorporates a cross-layer dynamic\nconnection (CDC) for input prompt tokens from adjacent layers, enabling\neffective sharing of task-relevant information. Furthermore, we design a\ndynamic aggregation (DA) module that facilitates selective sharing of\ninformation between layers. The combination of CDC and DA enhances the\nflexibility of the attention process within the VPT framework. Building upon\nthese foundations, iVPT introduces an attentive reinforcement (AR) mechanism,\nby automatically identifying salient image tokens, which are further enhanced\nby prompt tokens in an additive manner. 
Extensive experiments on 24 image\nclassification and semantic segmentation benchmarks clearly demonstrate the\nadvantage of the proposed iVPT, compared to the state-of-the-art counterparts.", "keywords": [], "authors_list": ["Sungho Park", "Hyeran Byun"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2e3"}, "filepath": "data/2308.12469.png", "tags": [], "_media_type": "image", "_rand": 0.9990147441695237, "arXiv_link": "https://arxiv.org/abs/2308.12469", "other_link": "https://sites.google.com/view/diffseg/home}.", "title": "Diffuse, Attend, and Segment: Unsupervised Zero-Shot Segmentation using Stable Diffusion", "abstract": "Producing quality segmentation masks for images is a fundamental problem in\ncomputer vision. Recent research has explored large-scale supervised training\nto enable zero-shot segmentation on virtually any image style and unsupervised\ntraining to enable segmentation without dense annotations. However,\nconstructing a model capable of segmenting anything in a zero-shot manner\nwithout any annotations is still challenging. In this paper, we propose to\nutilize the self-attention layers in stable diffusion models to achieve this\ngoal because the pre-trained stable diffusion model has learned inherent\nconcepts of objects within its attention layers. Specifically, we introduce a\nsimple yet effective iterative merging process based on measuring KL divergence\namong attention maps to merge them into valid segmentation masks. The proposed\nmethod does not require any training or language dependency to extract quality\nsegmentation for any images. On COCO-Stuff-27, our method surpasses the prior\nunsupervised zero-shot SOTA method by an absolute 26% in pixel accuracy and 17%\nin mean IoU. The project page is at\n\\url{https://sites.google.com/view/diffseg/home}.", "keywords": [], "authors_list": ["Junjiao Tian", "Lavisha Aggarwal", "Andrea Colaco", "Zsolt Kira", "Mar Gonzalez-Franco"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2e4"}, "filepath": "data/2312.01068.png", "tags": [], "_media_type": "image", "_rand": 0.9996056639230108, "arXiv_link": "https://arxiv.org/abs/2312.01068", "other_link": "", "title": "DPHMs: Diffusion Parametric Head Models for Depth-based Tracking", "abstract": "We introduce Diffusion Parametric Head Models (DPHMs), a generative model\nthat enables robust volumetric head reconstruction and tracking from monocular\ndepth sequences. While recent volumetric head models, such as NPHMs, can now\nexcel in representing high-fidelity head geometries, tracking and\nreconstructing heads from real-world single-view depth sequences remains very\nchallenging, as the fitting to partial and noisy observations is\nunderconstrained. To tackle these challenges, we propose a latent\ndiffusion-based prior to regularize volumetric head reconstruction and\ntracking. This prior-based regularizer effectively constrains the identity and\nexpression codes to lie on the underlying latent manifold which represents\nplausible head shapes. To evaluate the effectiveness of the diffusion-based\nprior, we collect a dataset of monocular Kinect sequences consisting of various\ncomplex facial expression motions and rapid transitions. 
We compare our method\nto state-of-the-art tracking methods and demonstrate improved head identity\nreconstruction as well as robust expression tracking.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Jiapeng Tang", "Angela Dai", "Yinyu Nie", "Lev Markhasin", "Justus Thies", "Matthias Nie\u00dfner"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2e5"}, "filepath": "data/2311.16854.png", "tags": [], "_media_type": "image", "_rand": 0.9992514892418916, "arXiv_link": "https://arxiv.org/abs/2311.16854", "other_link": "", "title": "A Unified Approach for Text- and Image-guided 4D Scene Generation", "abstract": "Large-scale diffusion generative models are greatly simplifying image, video\nand 3D asset creation from user-provided text prompts and images. However, the\nchallenging problem of text-to-4D dynamic 3D scene generation with diffusion\nguidance remains largely unexplored. We propose Dream-in-4D, which features a\nnovel two-stage approach for text-to-4D synthesis, leveraging (1) 3D and 2D\ndiffusion guidance to effectively learn a high-quality static 3D asset in the\nfirst stage; (2) a deformable neural radiance field that explicitly\ndisentangles the learned static asset from its deformation, preserving quality\nduring motion learning; and (3) a multi-resolution feature grid for the\ndeformation field with a displacement total variation loss to effectively learn\nmotion with video diffusion guidance in the second stage. Through a user\npreference study, we demonstrate that our approach significantly advances image\nand motion quality, 3D consistency and text fidelity for text-to-4D generation\ncompared to baseline approaches. Thanks to its motion-disentangled\nrepresentation, Dream-in-4D can also be easily adapted for controllable\ngeneration where appearance is defined by one or multiple images, without the\nneed to modify the motion learning stage. Thus, our method offers, for the\nfirst time, a unified approach for text-to-4D, image-to-4D and personalized 4D\ngeneration tasks.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Yufeng Zheng", "Xueting Li", "Koki Nagano", "Sifei Liu", "Otmar Hilliges", "Shalini De Mello"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2e6"}, "filepath": "data/2311.17119.png", "tags": [], "_media_type": "image", "_rand": 0.9998306420931097, "arXiv_link": "https://arxiv.org/abs/2311.17119", "other_link": "", "title": "Continuous Pose for Monocular Cameras in Neural Implicit Representation", "abstract": "In this paper, we showcase the effectiveness of optimizing monocular camera\nposes as a continuous function of time. The camera poses are represented using\nan implicit neural function which maps the given time to the corresponding\ncamera pose. The mapped camera poses are then used for the downstream tasks\nwhere joint camera pose optimization is also required. 
While doing so, the\nnetwork parameters -- that implicitly represent camera poses -- are optimized.\nWe exploit the proposed method in four diverse experimental settings, namely,\n(1) NeRF from noisy poses; (2) NeRF from asynchronous Events; (3) Visual\nSimultaneous Localization and Mapping (vSLAM); and (4) vSLAM with IMUs. In all\nfour settings, the proposed method performs significantly better than the\ncompared baselines and the state-of-the-art methods. Additionally, under the\nassumption of continuous motion, we observe that changes in pose may actually\nlive in a manifold with fewer than 6 degrees of freedom (DOF). We\ncall this low-DOF motion representation the \emph{intrinsic motion} and use\nthe approach in vSLAM settings, showing impressive camera tracking performance.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Qi Ma", "Danda Paudel", "Ajad Chhatkuli", "Luc Van Gool"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2e7"}, "filepath": "data/2311.13612.png", "tags": [], "_media_type": "image", "_rand": 0.99995999755079, "arXiv_link": "https://arxiv.org/abs/2311.13612", "other_link": "", "title": "Descriptor and Word Soups: Overcoming the Parameter Efficiency Accuracy Tradeoff for Out-of-Distribution Few-shot Learning", "abstract": "Over the past year, a large body of multimodal research has emerged around\nzero-shot evaluation using GPT descriptors. These studies boost the zero-shot\naccuracy of pretrained VL models with an ensemble of label-specific text\ngenerated by GPT. A recent study, WaffleCLIP, demonstrated that similar\nzero-shot accuracy can be achieved with an ensemble of random descriptors.\nHowever, both zero-shot methods are un-trainable and consequently sub-optimal\nwhen some few-shot out-of-distribution (OOD) training data is available.\nInspired by these prior works, we present two more flexible methods called\ndescriptor and word soups, which do not require an LLM at test time and can\nleverage training data to increase OOD target accuracy. Descriptor soup\ngreedily selects a small set of textual descriptors using generic few-shot\ntraining data, then calculates robust class embeddings using the selected\ndescriptors. Word soup greedily assembles a chain of words in a similar manner.\nCompared to existing few-shot soft prompt tuning methods, word soup requires\nfewer parameters by construction and less GPU memory, since it does not require\nbackpropagation. Both soups outperform current published few-shot methods, even\nwhen combined with SoTA zero-shot methods, on cross-dataset and domain\ngeneralization benchmarks. Compared with SoTA prompt and descriptor ensembling\nmethods, such as ProDA and WaffleCLIP, word soup achieves higher OOD accuracy\nwith fewer ensemble members. 
Please checkout our code:\ngithub.com/Chris210634/word_soups", "keywords": ["Efficient and scalable vision"], "authors_list": ["Christopher Liao", "Theodoros Tsiligkaridis", "Brian Kulis"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2e8"}, "filepath": "data/2403.17936.png", "tags": [], "_media_type": "image", "_rand": 0.99923557631741, "arXiv_link": "https://arxiv.org/abs/2403.17936", "other_link": "", "title": "ConvoFusion: Multi-Modal Conversational Diffusion for Co-Speech Gesture Synthesis", "abstract": "Gestures play a key role in human communication. Recent methods for co-speech\ngesture generation, while managing to generate beat-aligned motions, struggle\ngenerating gestures that are semantically aligned with the utterance. Compared\nto beat gestures that align naturally to the audio signal, semantically\ncoherent gestures require modeling the complex interactions between the\nlanguage and human motion, and can be controlled by focusing on certain words.\nTherefore, we present ConvoFusion, a diffusion-based approach for multi-modal\ngesture synthesis, which can not only generate gestures based on multi-modal\nspeech inputs, but can also facilitate controllability in gesture synthesis.\nOur method proposes two guidance objectives that allow the users to modulate\nthe impact of different conditioning modalities (e.g. audio vs text) as well as\nto choose certain words to be emphasized during gesturing. Our method is\nversatile in that it can be trained either for generating monologue gestures or\neven the conversational gestures. To further advance the research on\nmulti-party interactive gestures, the DnD Group Gesture dataset is released,\nwhich contains 6 hours of gesture data showing 5 people interacting with one\nanother. We compare our method with several recent works and demonstrate\neffectiveness of our method on a variety of tasks. We urge the reader to watch\nour supplementary video at our website.", "keywords": ["Multimodal models and vision-language models", "Biometrics and human analysis"], "authors_list": ["Muhammad Hamza Mughal", "Rishabh Dabral", "Ikhsanul Habibie", "Lucia Donatelli", "Marc Habermann", "Christian Theobalt"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2e9"}, "filepath": "data/2312.05634.png", "tags": [], "_media_type": "image", "_rand": 0.999419286999963, "arXiv_link": "https://arxiv.org/abs/2312.05634", "other_link": "https://github.com/huyquoctrinh/PGDS.", "title": "SEAS: ShapE-Aligned Supervision for Person Re-Identification", "abstract": "Person Re-Identification (Re-ID) task seeks to enhance the tracking of\nmultiple individuals by surveillance cameras. It supports multimodal tasks,\nincluding text-based person retrieval and human matching. One of the most\nsignificant challenges faced in Re-ID is clothes-changing, where the same\nperson may appear in different outfits. While previous methods have made\nnotable progress in maintaining clothing data consistency and handling clothing\nchange data, they still rely excessively on clothing information, which can\nlimit performance due to the dynamic nature of human appearances. 
To mitigate\nthis challenge, we propose the Pose-Guidance Deep Supervision (PGDS), an\neffective framework for learning pose guidance within the Re-ID task. It\nconsists of three modules: a human encoder, a pose encoder, and a Pose-to-Human\nProjection module (PHP). Our framework guides the human encoder, i.e., the main\nre-identification model, with pose information from the pose encoder through\nmultiple layers via the knowledge transfer mechanism from the PHP module,\nhelping the human encoder learn body parts information without increasing\ncomputation resources in the inference stage. Through extensive experiments,\nour method surpasses the performance of current state-of-the-art methods,\ndemonstrating its robustness and effectiveness for real-world applications. Our\ncode is available at https://github.com/huyquoctrinh/PGDS.", "keywords": ["Biometrics and human analysis", "Multimodal models and vision-language models"], "authors_list": ["Haidong Zhu", "Pranav Budhwant", "Zhaoheng Zheng", "Ram Nevatia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2ea"}, "filepath": "data/2404.05675.png", "tags": [], "_media_type": "image", "_rand": 0.9996705373144262, "arXiv_link": "https://arxiv.org/abs/2404.05675", "other_link": "", "title": "Normalizing Flows on the Product Space of SO(3) Manifolds for Probabilistic Human Pose Modeling", "abstract": "Normalizing flows have proven their efficacy for density estimation in\nEuclidean space, but their application to rotational representations, crucial\nin various domains such as robotics or human pose modeling, remains\nunderexplored. Probabilistic models of the human pose can benefit from\napproaches that rigorously consider the rotational nature of human joints. For\nthis purpose, we introduce HuProSO3, a normalizing flow model that operates on\na high-dimensional product space of SO(3) manifolds, modeling the joint\ndistribution for human joints with three degrees of freedom. HuProSO3's\nadvantage over state-of-the-art approaches is demonstrated through its superior\nmodeling accuracy in three different applications and its capability to\nevaluate the exact likelihood. This work not only addresses the technical\nchallenge of learning densities on SO(3) manifolds, but it also has broader\nimplications for domains where the probabilistic regression of correlated 3D\nrotations is of importance.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Olaf D\u00fcnkel", "Tim Salzmann", "Florian Pfaff"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2eb"}, "filepath": "data/2312.10305.png", "tags": [], "_media_type": "image", "_rand": 0.9993676680404157, "arXiv_link": "https://arxiv.org/abs/2312.10305", "other_link": "", "title": "ES$^3$: Evolving Self-Supervised Learning of Robust Audio-Visual Speech Representations", "abstract": "Speech signals are inherently complex as they encompass both global acoustic\ncharacteristics and local semantic information. However, in the task of target\nspeech extraction, certain elements of global and local semantic information in\nthe reference speech, which are irrelevant to speaker identity, can lead to\nspeaker confusion within the speech extraction network. 
To overcome this\nchallenge, we propose a self-supervised disentangled representation learning\nmethod. Our approach tackles this issue through a two-phase process, utilizing\na reference speech encoding network and a global information disentanglement\nnetwork to gradually disentangle the speaker identity information from other\nirrelevant factors. We exclusively employ the disentangled speaker identity\ninformation to guide the speech extraction network. Moreover, we introduce the\nadaptive modulation Transformer to ensure that the acoustic representation of\nthe mixed signal remains undisturbed by the speaker embeddings. This component\nincorporates speaker embeddings as conditional information, facilitating\nnatural and efficient guidance for the speech extraction network. Experimental\nresults substantiate the effectiveness of our meticulously crafted approach,\nshowcasing a substantial reduction in the likelihood of speaker confusion.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yuanhang Zhang", "Shuang Yang", "Shiguang Shan", "Xilin Chen"], "category_name": "Sound", "all_categories": ["Sound", "Artificial Intelligence", "Machine Learning", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2ec"}, "filepath": "data/2404.01203.png", "tags": [], "_media_type": "image", "_rand": 0.9990649446277091, "arXiv_link": "https://arxiv.org/abs/2404.01203", "other_link": "", "title": "Video Interpolation with Diffusion Models", "abstract": "We present VIDIM, a generative model for video interpolation, which creates\nshort videos given a start and end frame. In order to achieve high fidelity and\ngenerate motions unseen in the input data, VIDIM uses cascaded diffusion models\nto first generate the target video at low resolution, and then generate the\nhigh-resolution video conditioned on the low-resolution generated video. We\ncompare VIDIM to previous state-of-the-art methods on video interpolation, and\ndemonstrate how such works fail in most settings where the underlying motion is\ncomplex, nonlinear, or ambiguous while VIDIM can easily handle such cases. We\nadditionally demonstrate how classifier-free guidance on the start and end\nframe and conditioning the super-resolution model on the original\nhigh-resolution frames without additional parameters unlocks high-fidelity\nresults. VIDIM is fast to sample from as it jointly denoises all the frames to\nbe generated, requires less than a billion parameters per diffusion model to\nproduce compelling results, and still enjoys scalability and improved quality\nat larger parameter counts.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Siddhant Jain", "Daniel Watson", "Aleksander Holynski", "Eric Tabellion", "Ben Poole", "Janne Kontkanen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2ed"}, "filepath": "data/2405.19876.png", "tags": [], "_media_type": "image", "_rand": 0.999433873134124, "arXiv_link": "https://arxiv.org/abs/2405.19876", "other_link": "", "title": "IReNe: Instant Recoloring of Neural Radiance Fields", "abstract": "Advances in NERFs have allowed for 3D scene reconstructions and novel view\nsynthesis. Yet, efficiently editing these representations while retaining\nphotorealism is an emerging challenge. 
Recent methods face three primary\nlimitations: they're slow for interactive use, lack precision at object\nboundaries, and struggle to ensure multi-view consistency. We introduce IReNe\nto address these limitations, enabling swift, near real-time color editing in\nNeRF. Leveraging a pre-trained NeRF model and a single training image with\nuser-applied color edits, IReNe swiftly adjusts network parameters in seconds.\nThis adjustment allows the model to generate new scene views, accurately\nrepresenting the color changes from the training image while also controlling\nobject boundaries and view-specific effects. Object boundary control is\nachieved by integrating a trainable segmentation module into the model. The\nprocess gains efficiency by retraining only the weights of the last network\nlayer. We observed that neurons in this layer can be classified into those\nresponsible for view-dependent appearance and those contributing to diffuse\nappearance. We introduce an automated classification approach to identify these\nneuron types and exclusively fine-tune the weights of the diffuse neurons. This\nfurther accelerates training and ensures consistent color edits across\ndifferent views. A thorough validation on a new dataset, with edited object\ncolors, shows significant quantitative and qualitative advancements over\ncompetitors, accelerating speeds by 5x to 500x.", "keywords": ["Efficient and scalable vision", "Efficient and scalable vision"], "authors_list": ["Alessio Mazzucchelli", "Adrian Garcia-Garcia", "Elena Garces", "Fernando Rivas-Manzaneque", "Francesc Moreno-Noguer", "Adrian Penate-Sanchez"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2ee"}, "filepath": "data/2403.06375.png", "tags": [], "_media_type": "image", "_rand": 0.9997564926282863, "arXiv_link": "https://arxiv.org/abs/2403.06375", "other_link": "", "title": "FlowVQTalker: High-Quality Emotional Talking Face Generation through Normalizing Flow and Quantization", "abstract": "Generating emotional talking faces is a practical yet challenging endeavor.\nTo create a lifelike avatar, we draw upon two critical insights from a human\nperspective: 1) The connection between audio and the non-deterministic facial\ndynamics, encompassing expressions, blinks, poses, should exhibit synchronous\nand one-to-many mapping. 2) Vibrant expressions are often accompanied by\nemotion-aware high-definition (HD) textures and finely detailed teeth. However,\nboth aspects are frequently overlooked by existing methods. To this end, this\npaper proposes using normalizing Flow and Vector-Quantization modeling to\nproduce emotional talking faces that satisfy both insights concurrently\n(FlowVQTalker). Specifically, we develop a flow-based coefficient generator\nthat encodes the dynamics of facial emotion into a multi-emotion-class latent\nspace represented as a mixture distribution. The generation process commences\nwith random sampling from the modeled distribution, guided by the accompanying\naudio, enabling both lip-synchronization and the uncertain nonverbal facial\ncues generation. Furthermore, our designed vector-quantization image generator\ntreats the creation of expressive facial images as a code query task, utilizing\na learned codebook to provide rich, high-quality textures that enhance the\nemotional perception of the results. 
Extensive experiments are conducted to\nshowcase the effectiveness of our approach.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Shuai Tan", "Bin Ji", "Ye Pan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2ef"}, "filepath": "data/2403.13261.png", "tags": [], "_media_type": "image", "_rand": 0.9993274243632513, "arXiv_link": "https://arxiv.org/abs/2403.13261", "other_link": "", "title": "Self-Supervised Class-Agnostic Motion Prediction with Spatial and Temporal Consistency Regularizations", "abstract": "The perception of motion behavior in a dynamic environment holds significant\nimportance for autonomous driving systems, wherein class-agnostic motion\nprediction methods directly predict the motion of the entire point cloud. While\nmost existing methods rely on fully-supervised learning, the manual labeling of\npoint cloud data is laborious and time-consuming. Therefore, several\nannotation-efficient methods have been proposed to address this challenge.\nAlthough effective, these methods rely on weak annotations or additional\nmulti-modal data like images, and the potential benefits inherent in the point\ncloud sequence are still underexplored. To this end, we explore the feasibility\nof self-supervised motion prediction with only unlabeled LiDAR point clouds.\nInitially, we employ an optimal transport solver to establish coarse\ncorrespondences between current and future point clouds as the coarse pseudo\nmotion labels. Training models directly using such coarse labels leads to\nnoticeable spatial and temporal prediction inconsistencies. To mitigate these\nissues, we introduce three simple spatial and temporal regularization losses,\nwhich facilitate the self-supervised training process effectively. Experimental\nresults demonstrate the significant superiority of our approach over the\nstate-of-the-art self-supervised methods.", "keywords": [], "authors_list": ["Kewei Wang", "Yizheng Wu", "Jun Cen", "Zhiyu Pan", "Xingyi Li", "Zhe Wang", "Zhiguo Cao", "Guosheng Lin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2f0"}, "filepath": "data/2306.15507.png", "tags": [], "_media_type": "image", "_rand": 0.9995495057430215, "arXiv_link": "https://arxiv.org/abs/2306.15507", "other_link": "", "title": "Latency Correction for Event-guided Deblurring and Frame Interpolation", "abstract": "This paper makes the first attempt to tackle the challenging task of\nrecovering arbitrary frame rate latent global shutter (GS) frames from two\nconsecutive rolling shutter (RS) frames, guided by the novel event camera data.\nAlthough events possess high temporal resolution, beneficial for video frame\ninterpolation (VFI), a hurdle in tackling this task is the lack of paired GS\nframes. Another challenge is that RS frames are susceptible to distortion when\ncapturing moving objects. To this end, we propose a novel self-supervised\nframework that leverages events to guide RS frame correction and VFI in a\nunified framework. 
Our key idea is to estimate the displacement field (DF), i.e., the\nnon-linear dense 3D spatiotemporal information of all pixels during the\nexposure time, allowing for the reciprocal reconstruction between RS and GS\nframes as well as arbitrary frame rate VFI. Specifically, the displacement\nfield estimation (DFE) module is proposed to estimate the spatiotemporal motion\nfrom events to correct the RS distortion and interpolate the GS frames in one\nstep. We then combine the input RS frames and DF to learn a mapping for\nRS-to-GS frame interpolation. However, as the mapping is highly\nunder-constrained, we couple it with an inverse mapping (i.e., GS-to-RS) and RS\nframe warping (i.e., RS-to-RS) for self-supervision. As there is a lack of\nlabeled datasets for evaluation, we generate two synthetic datasets and collect\na real-world dataset to train and test our method. Experimental results show\nthat our method yields comparable or better performance than prior supervised\nmethods.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Yixin Yang", "Jinxiu Liang", "Bohan Yu", "Yan Chen", "Jimmy S. Ren", "Boxin Shi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2f1"}, "filepath": "data/2403.09124.png", "tags": [], "_media_type": "image", "_rand": 0.9999264997726508, "arXiv_link": "https://arxiv.org/abs/2403.09124", "other_link": "https://github.com/Shimmer93/MPCount.", "title": "Single Domain Generalization for Crowd Counting", "abstract": "Due to its promising results, density map regression has been widely employed\nfor image-based crowd counting. The approach, however, often suffers from\nsevere performance degradation when tested on data from unseen scenarios, the\nso-called \"domain shift\" problem. To address the problem, we investigate in\nthis work single domain generalization (SDG) for crowd counting. The existing\nSDG approaches are mainly for image classification and segmentation, and can\nhardly be extended to our case due to its regression nature and label ambiguity\n(i.e., ambiguous pixel-level ground truths). We propose MPCount, a novel\neffective SDG approach even for narrow source distribution. MPCount stores\ndiverse density values for density map regression and reconstructs\ndomain-invariant features by means of only one memory bank, a content error\nmask and attention consistency loss. By partitioning the image into grids, it\nemploys patch-wise classification as an auxiliary task to mitigate label\nambiguity. Through extensive experiments on different datasets, MPCount is\nshown to significantly improve counting accuracy compared to the state of the\nart under diverse scenarios unobserved in the training data characterized by\nnarrow source distribution. Code is available at\nhttps://github.com/Shimmer93/MPCount.", "keywords": [], "authors_list": ["Zhuoxuan Peng", "S.-H. 
Gary Chan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2f2"}, "filepath": "data/2401.13627.png", "tags": [], "_media_type": "image", "_rand": 0.9997950196326447, "arXiv_link": "https://arxiv.org/abs/2401.13627", "other_link": "", "title": "Scaling Up to Excellence: Practicing Model Scaling for Photo-Realistic Image Restoration In the Wild", "abstract": "We introduce SUPIR (Scaling-UP Image Restoration), a groundbreaking image\nrestoration method that harnesses generative prior and the power of model\nscaling up. Leveraging multi-modal techniques and advanced generative prior,\nSUPIR marks a significant advance in intelligent and realistic image\nrestoration. As a pivotal catalyst within SUPIR, model scaling dramatically\nenhances its capabilities and demonstrates new potential for image restoration.\nWe collect a dataset comprising 20 million high-resolution, high-quality images\nfor model training, each enriched with descriptive text annotations. SUPIR\nprovides the capability to restore images guided by textual prompts, broadening\nits application scope and potential. Moreover, we introduce negative-quality\nprompts to further improve perceptual quality. We also develop a\nrestoration-guided sampling method to suppress the fidelity issue encountered\nin generative-based restoration. Experiments demonstrate SUPIR's exceptional\nrestoration effects and its novel capacity to manipulate restoration through\ntextual prompts.", "keywords": ["Efficient and scalable vision", "Low-level vision", "Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Fanghua Yu", "Jinjin Gu", "Zheyuan Li", "Jinfan Hu", "Xiangtao Kong", "Xintao Wang", "Jingwen He", "Yu Qiao", "Chao Dong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2f3"}, "filepath": "data/2405.14062.png", "tags": [], "_media_type": "image", "_rand": 0.9994182867280588, "arXiv_link": "https://arxiv.org/abs/2405.14062", "other_link": "", "title": "ChatScene: Knowledge-Enabled Safety-Critical Scenario Generation for Autonomous Vehicles", "abstract": "We present ChatScene, a Large Language Model (LLM)-based agent that leverages\nthe capabilities of LLMs to generate safety-critical scenarios for autonomous\nvehicles. Given unstructured language instructions, the agent first generates\ntextually described traffic scenarios using LLMs. These scenario descriptions\nare subsequently broken down into several sub-descriptions for specified\ndetails such as behaviors and locations of vehicles. The agent then\ndistinctively transforms the textually described sub-scenarios into\ndomain-specific languages, which then generate actual code for prediction and\ncontrol in simulators, facilitating the creation of diverse and complex\nscenarios within the CARLA simulation environment. A key part of our agent is a\ncomprehensive knowledge retrieval component, which efficiently translates\nspecific textual descriptions into corresponding domain-specific code snippets\nby training a knowledge database containing the scenario description and code\npairs. Extensive experimental results underscore the efficacy of ChatScene in\nimproving the safety of autonomous vehicles. 
For instance, the scenarios\ngenerated by ChatScene show a 15% increase in collision rates compared to\nstate-of-the-art baselines when tested against different reinforcement\nlearning-based ego vehicles. Furthermore, we show that by using our generated\nsafety-critical scenarios to fine-tune different RL-based autonomous driving\nmodels, they can achieve a 9% reduction in collision rates, surpassing current\nSOTA methods. ChatScene effectively bridges the gap between textual\ndescriptions of traffic scenarios and practical CARLA simulations, providing a\nunified way to conveniently generate safety-critical scenarios for safety\ntesting and improvement for AVs.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Jiawei Zhang", "Chejian Xu", "Bo Li"], "category_name": "Artificial Intelligence", "all_categories": ["Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2f4"}, "filepath": "data/2404.00658.png", "tags": [], "_media_type": "image", "_rand": 0.9992006381989061, "arXiv_link": "https://arxiv.org/abs/2404.00658", "other_link": "https://github.com/JihuaPeng/KTPFormer.", "title": "KTPFormer: Kinematics and Trajectory Prior Knowledge-Enhanced Transformer for 3D Human Pose Estimation", "abstract": "This paper presents a novel Kinematics and Trajectory Prior\nKnowledge-Enhanced Transformer (KTPFormer), which overcomes the weakness in\nexisting transformer-based methods for 3D human pose estimation that the\nderivation of Q, K, V vectors in their self-attention mechanisms are all based\non simple linear mapping. We propose two prior attention modules, namely\nKinematics Prior Attention (KPA) and Trajectory Prior Attention (TPA) to take\nadvantage of the known anatomical structure of the human body and motion\ntrajectory information, to facilitate effective learning of global dependencies\nand features in the multi-head self-attention. KPA models kinematic\nrelationships in the human body by constructing a topology of kinematics, while\nTPA builds a trajectory topology to learn the information of joint motion\ntrajectory across frames. Yielding Q, K, V vectors with prior knowledge, the\ntwo modules enable KTPFormer to model both spatial and temporal correlations\nsimultaneously. Extensive experiments on three benchmarks (Human3.6M,\nMPI-INF-3DHP and HumanEva) show that KTPFormer achieves superior performance in\ncomparison to state-of-the-art methods. More importantly, our KPA and TPA\nmodules have lightweight plug-and-play designs and can be integrated into\nvarious transformer-based networks (i.e., diffusion-based) to improve the\nperformance with only a very small increase in the computational overhead. 
The\ncode is available at: https://github.com/JihuaPeng/KTPFormer.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Jihua Peng", "Yanghong Zhou", "Tracy P Y Mok"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2f5"}, "filepath": "data/2403.10254.png", "tags": [], "_media_type": "image", "_rand": 0.9991438986085887, "arXiv_link": "https://arxiv.org/abs/2403.10254", "other_link": "https://github.com/924973292/EDITOR.", "title": "Magic Tokens: Select Diverse Tokens for Multi-modal Object Re-Identification", "abstract": "Single-modal object re-identification (ReID) faces great challenges in\nmaintaining robustness within complex visual scenarios. In contrast,\nmulti-modal object ReID utilizes complementary information from diverse\nmodalities, showing great potentials for practical applications. However,\nprevious methods may be easily affected by irrelevant backgrounds and usually\nignore the modality gaps. To address above issues, we propose a novel learning\nframework named \\textbf{EDITOR} to select diverse tokens from vision\nTransformers for multi-modal object ReID. We begin with a shared vision\nTransformer to extract tokenized features from different input modalities.\nThen, we introduce a Spatial-Frequency Token Selection (SFTS) module to\nadaptively select object-centric tokens with both spatial and frequency\ninformation. Afterwards, we employ a Hierarchical Masked Aggregation (HMA)\nmodule to facilitate feature interactions within and across modalities.\nFinally, to further reduce the effect of backgrounds, we propose a Background\nConsistency Constraint (BCC) and an Object-Centric Feature Refinement (OCFR).\nThey are formulated as two new loss functions, which improve the feature\ndiscrimination with background suppression. As a result, our framework can\ngenerate more discriminative features for multi-modal object ReID. Extensive\nexperiments on three multi-modal ReID benchmarks verify the effectiveness of\nour methods. The code is available at https://github.com/924973292/EDITOR.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Pingping Zhang", "Yuhao Wang", "Yang Liu", "Zhengzheng Tu", "Huchuan Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Information Retrieval", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2f6"}, "filepath": "data/2405.20319v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995764028407476, "arXiv_link": "https://arxiv.org/html/2405.20319v1", "other_link": "", "title": "ShapeWalk: Compositional Shape Editing through Language-Guided Chains", "abstract": "The ability to edit 3D assets from natural language presents a compelling\nparadigm to aid in the democratization of 3D content creation. However, while\nnatural language is often effective at communicating general intent, it is\npoorly suited for specifying precise manipulation. To address this gap, we\nintroduce ParSEL, a system that enables controllable editing of high-quality 3D\nassets from natural language. Given a segmented 3D mesh and an editing request,\nParSEL produces a parameterized editing program. 
Adjusting the program\nparameters allows users to explore shape variations with a precise control over\nthe magnitudes of edits. To infer editing programs which align with an input\nedit request, we leverage the abilities of large-language models (LLMs).\nHowever, while we find that LLMs excel at identifying initial edit operations,\nthey often fail to infer complete editing programs, and produce outputs that\nviolate shape semantics. To overcome this issue, we introduce Analytical Edit\nPropagation (AEP), an algorithm which extends a seed edit with additional\noperations until a complete editing program has been formed. Unlike prior\nmethods, AEP searches for analytical editing operations compatible with a range\nof possible user edits through the integration of computer algebra systems for\ngeometric analysis. Experimentally we demonstrate ParSEL's effectiveness in\nenabling controllable editing of 3D objects through natural language requests\nover alternative system designs.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Habib Slim", "Mohamed Elhoseiny"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Graphics", "Human-Computer Interaction", "Symbolic Computation"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2f7"}, "filepath": "data/2401.12175v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993545300451635, "arXiv_link": "https://arxiv.org/html/2401.12175v2", "other_link": "", "title": "R-Cyclic Diffuser: Reductive and Cyclic Latent Diffusion for 3D Clothed Human Digitalization", "abstract": "Reconstructing 3D humans from a single image has been extensively\ninvestigated. However, existing approaches often fall short on capturing fine\ngeometry and appearance details, hallucinating occluded parts with plausible\ndetails, and achieving generalization across unseen and in-the-wild datasets.\nWe present Human-LRM, a diffusion-guided feed-forward model that predicts the\nimplicit field of a human from a single image. Leveraging the power of the\nstate-of-the-art reconstruction model (i.e., LRM) and generative model (i.e\nStable Diffusion), our method is able to capture human without any template\nprior, e.g., SMPL, and effectively enhance occluded parts with rich and\nrealistic details. Our approach first uses a single-view LRM model with an\nenhanced geometry decoder to get the triplane NeRF representation. The novel\nview renderings from the triplane NeRF provide strong geometry and color prior,\nfrom which we generate photo-realistic details for the occluded parts using a\ndiffusion model. 
The generated multiple views then enable reconstruction with\nhigh-quality geometry and appearance, leading to superior overall performance\ncomparing to all existing human reconstruction methods.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Kennard Chan", "Fayao Liu", "Guosheng Lin", "Chuan-Sheng Foo", "Weisi Lin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2f8"}, "filepath": "data/2306.09330.png", "tags": [], "_media_type": "image", "_rand": 0.9995413496385719, "arXiv_link": "https://arxiv.org/abs/2306.09330", "other_link": "", "title": "Arbitrary Motion Style Transfer with Multi-condition Motion Latent Diffusion Model", "abstract": "Arbitrary Style Transfer (AST) aims to transform images by adopting the style\nfrom any selected artwork. Nonetheless, the need to accommodate diverse and\nsubjective user preferences poses a significant challenge. While some users\nwish to preserve distinct content structures, others might favor a more\npronounced stylization. Despite advances in feed-forward AST methods, their\nlimited customizability hinders their practical application. We propose a new\napproach, ArtFusion, which provides a flexible balance between content and\nstyle. In contrast to traditional methods reliant on biased similarity losses,\nArtFusion utilizes our innovative Dual Conditional Latent Diffusion\nProbabilistic Models (Dual-cLDM). This approach mitigates repetitive patterns\nand enhances subtle artistic aspects like brush strokes and genre-specific\nfeatures. Despite the promising results of conditional diffusion probabilistic\nmodels (cDM) in various generative tasks, their introduction to style transfer\nis challenging due to the requirement for paired training data. ArtFusion\nsuccessfully navigates this issue, offering more practical and controllable\nstylization. A key element of our approach involves using a single image for\nboth content and style during model training, all the while maintaining\neffective stylization during inference. ArtFusion outperforms existing\napproaches on outstanding controllability and faithful presentation of artistic\ndetails, providing evidence of its superior style transfer capabilities.\nFurthermore, the Dual-cLDM utilized in ArtFusion carries the potential for a\nvariety of complex multi-condition generative tasks, thus greatly broadening\nthe impact of our research.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Wenfeng Song", "Xingliang Jin", "Shuai Li", "Chenglizhao Chen", "Aimin Hao", "Xia HOU", "Ning Li", "Hong Qin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2f9"}, "filepath": "data/2403.12722.png", "tags": [], "_media_type": "image", "_rand": 0.9994974714985414, "arXiv_link": "https://arxiv.org/abs/2403.12722", "other_link": "", "title": "HUGS: Holistic Urban 3D Scene Understanding via Gaussian Splatting", "abstract": "Holistic understanding of urban scenes based on RGB images is a challenging\nyet important problem. It encompasses understanding both the geometry and\nappearance to enable novel view synthesis, parsing semantic labels, and\ntracking moving objects. 
Despite considerable progress, existing approaches\noften focus on specific aspects of this task and require additional inputs such\nas LiDAR scans or manually annotated 3D bounding boxes. In this paper, we\nintroduce a novel pipeline that utilizes 3D Gaussian Splatting for holistic\nurban scene understanding. Our main idea involves the joint optimization of\ngeometry, appearance, semantics, and motion using a combination of static and\ndynamic 3D Gaussians, where moving object poses are regularized via physical\nconstraints. Our approach offers the ability to render new viewpoints in\nreal-time, yielding 2D and 3D semantic information with high accuracy, and\nreconstruct dynamic scenes, even in scenarios where 3D bounding box detection\nare highly noisy. Experimental results on KITTI, KITTI-360, and Virtual KITTI 2\ndemonstrate the effectiveness of our approach.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Hongyu Zhou", "Jiahao Shao", "Lu Xu", "Dongfeng Bai", "Weichao Qiu", "Bingbing Liu", "Yue Wang", "Andreas Geiger", "Yiyi Liao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2fa"}, "filepath": "data/2404.10880.png", "tags": [], "_media_type": "image", "_rand": 0.9999643211406825, "arXiv_link": "https://arxiv.org/abs/2404.10880", "other_link": "", "title": "HumMUSS: Human Motion Understanding using State Space Models", "abstract": "Understanding human motion from video is essential for a range of\napplications, including pose estimation, mesh recovery and action recognition.\nWhile state-of-the-art methods predominantly rely on transformer-based\narchitectures, these approaches have limitations in practical scenarios.\nTransformers are slower when sequentially predicting on a continuous stream of\nframes in real-time, and do not generalize to new frame rates. In light of\nthese constraints, we propose a novel attention-free spatiotemporal model for\nhuman motion understanding building upon recent advancements in state space\nmodels. Our model not only matches the performance of transformer-based models\nin various motion understanding tasks but also brings added benefits like\nadaptability to different video frame rates and enhanced training speed when\nworking with longer sequence of keypoints. Moreover, the proposed model\nsupports both offline and real-time applications. 
For real-time sequential\nprediction, our model is both memory efficient and several times faster than\ntransformer-based approaches while maintaining their high accuracy.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Arnab Mondal", "Stefano Alletto", "Denis Tome"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2fb"}, "filepath": "data/2404.10716.png", "tags": [], "_media_type": "image", "_rand": 0.9999602474544974, "arXiv_link": "https://arxiv.org/abs/2404.10716", "other_link": "", "title": "Towards Progressive Multi-Frequency Representation for Image Warping", "abstract": "While recent image warping approaches achieved remarkable success on existing\nbenchmarks, they still require training separate models for each specific task\nand cannot generalize well to different camera models or customized\nmanipulations. To address diverse types of warping in practice, we propose a\nMultiple-in-One image WArping model (named MOWA) in this work. Specifically, we\nmitigate the difficulty of multi-task learning by disentangling the motion\nestimation at both the region level and pixel level. To further enable dynamic\ntask-aware image warping, we introduce a lightweight point-based classifier\nthat predicts the task type, serving as prompts to modulate the feature maps\nfor better estimation. To our knowledge, this is the first work that solves\nmultiple practical warping tasks in one single model. Extensive experiments\ndemonstrate that our MOWA, which is trained on six tasks for multiple-in-one\nimage warping, outperforms state-of-the-art task-specific models across most\ntasks. Moreover, MOWA also exhibits promising potential to generalize into\nunseen scenes, as evidenced by cross-domain and zero-shot evaluations. The code\nwill be made publicly available.", "keywords": ["Image and video generation and manipulation", "Efficient and scalable vision"], "authors_list": ["Jun Xiao", "Zihang Lyu", "Cong Zhang", "Yakun Ju", "Changjian Shui", "Kin-man Lam"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2fc"}, "filepath": "data/2312.14135.png", "tags": [], "_media_type": "image", "_rand": 0.9998570958890178, "arXiv_link": "https://arxiv.org/abs/2312.14135", "other_link": "https://github.com/penghao-wu/vstar.", "title": "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs", "abstract": "When we look around and perform complex tasks, how we see and selectively\nprocess what we see is crucial. However, the lack of this visual search\nmechanism in current multimodal LLMs (MLLMs) hinders their ability to focus on\nimportant visual details, especially when handling high-resolution and visually\ncrowded images. To address this, we introduce V*, an LLM-guided visual search\nmechanism that employs the world knowledge in LLMs for efficient visual\nquerying. When combined with an MLLM, this mechanism enhances collaborative\nreasoning, contextual understanding, and precise targeting of specific visual\nelements. This integration results in a new MLLM meta-architecture, named Show,\nsEArch, and TelL (SEAL). 
We further create V*Bench, a benchmark specifically\ndesigned to evaluate MLLMs in their ability to process high-resolution images\nand focus on visual details. Our study highlights the necessity of\nincorporating visual search capabilities into multimodal systems. The code is\navailable https://github.com/penghao-wu/vstar.", "keywords": ["Large multimodal models and prompting techniques", "Efficient and scalable vision"], "authors_list": ["Penghao Wu", "Saining Xie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2fd"}, "filepath": "data/2401.00988v1.png", "tags": [], "_media_type": "image", "_rand": 0.999586873535677, "arXiv_link": "https://arxiv.org/abs/2401.00988v1", "other_link": "", "title": "Holistic Autonomous Driving Understanding by Bird's-Eye-View Injected Multi-Modal Large Models", "abstract": "The rise of multimodal large language models (MLLMs) has spurred interest in\nlanguage-based driving tasks. However, existing research typically focuses on\nlimited tasks and often omits key multi-view and temporal information which is\ncrucial for robust autonomous driving. To bridge these gaps, we introduce\nNuInstruct, a novel dataset with 91K multi-view video-QA pairs across 17\nsubtasks, where each task demands holistic information (e.g., temporal,\nmulti-view, and spatial), significantly elevating the challenge level. To\nobtain NuInstruct, we propose a novel SQL-based method to generate\ninstruction-response pairs automatically, which is inspired by the driving\nlogical progression of humans. We further present BEV-InMLLM, an end-to-end\nmethod for efficiently deriving instruction-aware Bird's-Eye-View (BEV)\nfeatures, language-aligned for large language models. BEV-InMLLM integrates\nmulti-view, spatial awareness, and temporal semantics to enhance MLLMs'\ncapabilities on NuInstruct tasks. Moreover, our proposed BEV injection module\nis a plug-and-play method for existing MLLMs. Our experiments on NuInstruct\ndemonstrate that BEV-InMLLM significantly outperforms existing MLLMs, e.g.\naround 9% improvement on various tasks. We plan to release our NuInstruct for\nfuture research development.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Xinpeng Ding", "Jianhua Han", "Hang Xu", "Xiaodan Liang", "Wei Zhang", "Xiaomeng Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2fe"}, "filepath": "data/2403.06946.png", "tags": [], "_media_type": "image", "_rand": 0.9991776890323076, "arXiv_link": "https://arxiv.org/abs/2403.06946", "other_link": "https://github.com/TL-UESTC/UniMoS", "title": "Split to Merge: Unifying Separated Modalities for Unsupervised Domain Adaptation", "abstract": "Large vision-language models (VLMs) like CLIP have demonstrated good\nzero-shot learning performance in the unsupervised domain adaptation task. Yet,\nmost transfer approaches for VLMs focus on either the language or visual\nbranches, overlooking the nuanced interplay between both modalities. In this\nwork, we introduce a Unified Modality Separation (UniMoS) framework for\nunsupervised domain adaptation. 
Leveraging insights from modality gap studies,\nwe craft a nimble modality separation network that distinctly disentangles\nCLIP's features into language-associated and vision-associated components. Our\nproposed Modality-Ensemble Training (MET) method fosters the exchange of\nmodality-agnostic information while maintaining modality-specific nuances. We\nalign features across domains using a modality discriminator. Comprehensive\nevaluations on three benchmarks reveal our approach sets a new state-of-the-art\nwith minimal computational costs. Code: https://github.com/TL-UESTC/UniMoS", "keywords": ["Efficient and scalable vision"], "authors_list": ["Xinyao Li", "Yuke Li", "Zhekai Du", "Fengling Li", "Ke Lu", "Jingjing Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f2ff"}, "filepath": "data/2403.20249.png", "tags": [], "_media_type": "image", "_rand": 0.9992871261556734, "arXiv_link": "https://arxiv.org/abs/2403.20249", "other_link": "https://wuyinwei-hah.github.io/rrnet.github.io/.", "title": "Relation Rectification in Diffusion Model", "abstract": "Despite their exceptional generative abilities, large text-to-image diffusion\nmodels, much like skilled but careless artists, often struggle with accurately\ndepicting visual relationships between objects. This issue, as we uncover\nthrough careful analysis, arises from a misaligned text encoder that struggles\nto interpret specific relationships and differentiate the logical order of\nassociated objects. To resolve this, we introduce a novel task termed Relation\nRectification, aiming to refine the model to accurately represent a given\nrelationship it initially fails to generate. To address this, we propose an\ninnovative solution utilizing a Heterogeneous Graph Convolutional Network\n(HGCN). It models the directional relationships between relation terms and\ncorresponding objects within the input prompts. Specifically, we optimize the\nHGCN on a pair of prompts with identical relational words but reversed object\norders, supplemented by a few reference images. The lightweight HGCN adjusts\nthe text embeddings generated by the text encoder, ensuring the accurate\nreflection of the textual relation in the embedding space. Crucially, our\nmethod retains the parameters of the text encoder and diffusion model,\npreserving the model's robust performance on unrelated descriptions. We\nvalidated our approach on a newly curated dataset of diverse relational data,\ndemonstrating both quantitative and qualitative enhancements in generating\nimages with precise visual relations. 
Project page:\nhttps://wuyinwei-hah.github.io/rrnet.github.io/.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Yinwei Wu", "Xingyi Yang", "Xinchao Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f300"}, "filepath": "data/2401.04071v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995657620421845, "arXiv_link": "https://arxiv.org/abs/2401.04071v1", "other_link": "", "title": "Fun with Flags: Robust Principal Directions via Flag Manifolds", "abstract": "Principal component analysis (PCA), along with its extensions to manifolds\nand outlier contaminated data, have been indispensable in computer vision and\nmachine learning. In this work, we present a unifying formalism for PCA and its\nvariants, and introduce a framework based on the flags of linear subspaces, \\ie\na hierarchy of nested linear subspaces of increasing dimension, which not only\nallows for a common implementation but also yields novel variants, not explored\npreviously. We begin by generalizing traditional PCA methods that either\nmaximize variance or minimize reconstruction error. We expand these\ninterpretations to develop a wide array of new dimensionality reduction\nalgorithms by accounting for outliers and the data manifold. To devise a common\ncomputational approach, we recast robust and dual forms of PCA as optimization\nproblems on flag manifolds. We then integrate tangent space approximations of\nprincipal geodesic analysis (tangent-PCA) into this flag-based framework,\ncreating novel robust and dual geodesic PCA variations. The remarkable\nflexibility offered by the 'flagification' introduced here enables even more\nalgorithmic variants identified by specific flag types. Last but not least, we\npropose an effective convergent solver for these flag-formulations employing\nthe Stiefel manifold. Our empirical results on both real-world and synthetic\nscenarios, demonstrate the superiority of our novel algorithms, especially in\nterms of robustness to outliers on manifolds.", "keywords": [], "authors_list": ["Tolga Birdal", "Nathan Mankovich"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning", "Unknown", "Optimization and Control", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f301"}, "filepath": "data/2403.07241.png", "tags": [], "_media_type": "image", "_rand": 0.9999328750794803, "arXiv_link": "https://arxiv.org/abs/2403.07241", "other_link": "", "title": "Calibrating Multi-modal Representations: A Pursuit of Group Robustness without Annotations", "abstract": "Fine-tuning pre-trained vision-language models, like CLIP, has yielded\nsuccess on diverse downstream tasks. However, several pain points persist for\nthis paradigm: (i) directly tuning entire pre-trained models becomes both\ntime-intensive and computationally costly. 
Additionally, these tuned models\ntend to become highly specialized, limiting their practicality for real-world\ndeployment; (ii) recent studies indicate that pre-trained vision-language\nclassifiers may overly depend on spurious features -- patterns that correlate\nwith the target in training data, but are not related to the true labeling\nfunction; and (iii) existing studies on mitigating the reliance on spurious\nfeatures, largely based on the assumption that we can identify such features,\ndoes not provide definitive assurance for real-world applications. As a\npiloting study, this work focuses on exploring mitigating the reliance on\nspurious features for CLIP without using any group annotation. To this end, we\nsystematically study the existence of spurious correlation on CLIP and\nCILP+ERM. We first, following recent work on Deep Feature Reweighting (DFR),\nverify that last-layer retraining can greatly improve group robustness on\npretrained CLIP. In view of them, we advocate a lightweight representation\ncalibration method for fine-tuning CLIP, by first generating a calibration set\nusing the pretrained CLIP, and then calibrating representations of samples\nwithin this set through contrastive learning, all without the need for group\nlabels. Extensive experiments and in-depth visualizations on several benchmarks\nvalidate the effectiveness of our proposals, largely reducing reliance and\nsignificantly boosting the model generalization.", "keywords": ["Efficient and scalable vision", "Deep learning architectures and techniques"], "authors_list": ["Chenyu You", "Yifei Min", "Weicheng Dai", "Jasjeet Sekhon", "Lawrence Staib", "James Duncan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f302"}, "filepath": "data/2310.09276.png", "tags": [], "_media_type": "image", "_rand": 0.999823748130294, "arXiv_link": "https://arxiv.org/abs/2310.09276", "other_link": "", "title": "A Novel Transformer based Network for Large Scale Multimodal and Multitask Learning", "abstract": "Change detection plays a fundamental role in Earth observation for analyzing\ntemporal iterations over time. However, recent studies have largely neglected\nthe utilization of multimodal data that presents significant practical and\ntechnical advantages compared to single-modal approaches. This research focuses\non leveraging {pre-event} digital surface model (DSM) data and {post-event}\ndigital aerial images captured at different times for detecting change beyond\n2D. We observe that the current change detection methods struggle with the\nmultitask conflicts between semantic and height change detection tasks. To\naddress this challenge, we propose an efficient Transformer-based network that\nlearns shared representation between cross-dimensional inputs through\ncross-attention. {It adopts a consistency constraint to establish the\nmultimodal relationship. Initially, pseudo-changes are derived by employing\nheight change thresholding. Subsequently, the $L2$ distance between semantic\nand pseudo-changes within their overlapping regions is minimized. This\nexplicitly endows the height change detection (regression task) and semantic\nchange detection (classification task) with representation consistency.} A\nDSM-to-image multimodal dataset encompassing three cities in the Netherlands\nwas constructed. 
It lays a new foundation for beyond-2D change detection from\ncross-dimensional inputs. Compared to five state-of-the-art change detection\nmethods, our model demonstrates consistent multitask superiority in terms of\nsemantic and height change detection. Furthermore, the consistency strategy can\nbe seamlessly adapted to the other methods, yielding promising improvements.", "keywords": ["Multimodal models and vision-language models", "Remote sensing and photogrammetry", "Efficient and scalable vision"], "authors_list": ["Siddharth Srivastava", "Gaurav Sharma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f303"}, "filepath": "data/2402.07635.png", "tags": [], "_media_type": "image", "_rand": 0.9992340321190035, "arXiv_link": "https://arxiv.org/abs/2402.07635", "other_link": "", "title": "Collaborative Semantic Occupancy Prediction with Hybrid Feature Fusion in Connected Automated Vehicles", "abstract": "Collaborative perception in automated vehicles leverages the exchange of\ninformation between agents, aiming to elevate perception results. Previous\ncamera-based collaborative 3D perception methods typically employ 3D bounding\nboxes or bird's eye views as representations of the environment. However, these\napproaches fall short in offering a comprehensive 3D environmental prediction.\nTo bridge this gap, we introduce the first method for collaborative 3D semantic\noccupancy prediction. Particularly, it improves local 3D semantic occupancy\npredictions by hybrid fusion of (i) semantic and occupancy task features, and\n(ii) compressed orthogonal attention features shared between vehicles.\nAdditionally, due to the lack of a collaborative perception dataset designed\nfor semantic occupancy prediction, we augment a current collaborative\nperception dataset to include 3D collaborative semantic occupancy labels for a\nmore robust evaluation. The experimental findings highlight that: (i) our\ncollaborative semantic occupancy predictions excel above the results from\nsingle vehicles by over 30%, and (ii) models anchored on semantic occupancy\noutpace state-of-the-art collaborative 3D detection techniques in subsequent\nperception applications, showcasing enhanced accuracy and enriched\nsemantic-awareness in road environments.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Rui Song", "Chenwei Liang", "Hu Cao", "Zhiran Yan", "Walter Zimmer", "Markus Gross", "Andreas Festag", "Alois Knoll"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f304"}, "filepath": "data/2312.06725.png", "tags": [], "_media_type": "image", "_rand": 0.9993586959769576, "arXiv_link": "https://arxiv.org/abs/2312.06725", "other_link": "https://huanngzh.github.io/EpiDiff/.", "title": "EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion", "abstract": "Generating multiview images from a single view facilitates the rapid\ngeneration of a 3D mesh conditioned on a single image. Recent methods that\nintroduce 3D global representation into diffusion models have shown the\npotential to generate consistent multiviews, but they have reduced generation\nspeed and face challenges in maintaining generalizability and quality. 
To\naddress this issue, we propose EpiDiff, a localized interactive multiview\ndiffusion model. At the core of the proposed approach is to insert a\nlightweight epipolar attention block into the frozen diffusion model,\nleveraging epipolar constraints to enable cross-view interaction among feature\nmaps of neighboring views. The newly initialized 3D modeling module preserves\nthe original feature distribution of the diffusion model, exhibiting\ncompatibility with a variety of base diffusion models. Experiments show that\nEpiDiff generates 16 multiview images in just 12 seconds, and it surpasses\nprevious methods in quality evaluation metrics, including PSNR, SSIM and LPIPS.\nAdditionally, EpiDiff can generate a more diverse distribution of views,\nimproving the reconstruction quality from generated multiviews. Please see our\nproject page at https://huanngzh.github.io/EpiDiff/.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Zehuan Huang", "Hao Wen", "Junting Dong", "Yaohui Wang", "Yangguang Li", "Xinyuan Chen", "Yan-Pei Cao", "Ding Liang", "Yu Qiao", "Bo Dai", "Lu Sheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f305"}, "filepath": "data/2404.07991.png", "tags": [], "_media_type": "image", "_rand": 0.9995751065571895, "arXiv_link": "https://arxiv.org/abs/2404.07991", "other_link": "", "title": "GaussianAvatar: Efficient Animatable Human Modeling from Monocular Video Using Gaussians-on-Mesh", "abstract": "We introduce GoMAvatar, a novel approach for real-time, memory-efficient,\nhigh-quality animatable human modeling. GoMAvatar takes as input a single\nmonocular video to create a digital avatar capable of re-articulation in new\nposes and real-time rendering from novel viewpoints, while seamlessly\nintegrating with rasterization-based graphics pipelines. Central to our method\nis the Gaussians-on-Mesh representation, a hybrid 3D model combining rendering\nquality and speed of Gaussian splatting with geometry modeling and\ncompatibility of deformable meshes. We assess GoMAvatar on ZJU-MoCap data and\nvarious YouTube videos. GoMAvatar matches or surpasses current monocular human\nmodeling algorithms in rendering quality and significantly outperforms them in\ncomputational efficiency (43 FPS) while being memory-efficient (3.63 MB per\nsubject).", "keywords": ["Efficient and scalable vision", "Biometrics and human analysis", "Vision systems and graphics integration"], "authors_list": ["Jing Wen", "Xiaoming Zhao", "Jason Ren", "Alexander G. Schwing", "Shenlong Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f306"}, "filepath": "data/2404.04557.png", "tags": [], "_media_type": "image", "_rand": 0.9994755216421594, "arXiv_link": "https://arxiv.org/abs/2404.04557", "other_link": "https://github.com/zhiyuanYU134/MIRETR.", "title": "Learning Instance-Aware Correspondences for Robust Multi-Instance Point Cloud Registration in Cluttered Scenes", "abstract": "Multi-instance point cloud registration estimates the poses of multiple\ninstances of a model point cloud in a scene point cloud. Extracting accurate\npoint correspondence is to the center of the problem. 
Existing approaches\nusually treat the scene point cloud as a whole, overlooking the separation of\ninstances. Therefore, point features could be easily polluted by other points\nfrom the background or different instances, leading to inaccurate\ncorrespondences oblivious to separate instances, especially in cluttered\nscenes. In this work, we propose MIRETR, Multi-Instance REgistration\nTRansformer, a coarse-to-fine approach to the extraction of instance-aware\ncorrespondences. At the coarse level, it jointly learns instance-aware\nsuperpoint features and predicts per-instance masks. With instance masks, the\ninfluence from outside of the instance being concerned is minimized, such that\nhighly reliable superpoint correspondences can be extracted. The superpoint\ncorrespondences are then extended to instance candidates at the fine level\naccording to the instance masks. At last, an efficient candidate selection and\nrefinement algorithm is devised to obtain the final registrations. Extensive\nexperiments on three public benchmarks demonstrate the efficacy of our\napproach. In particular, MIRETR outperforms the state of the arts by 16.6\npoints on F1 score on the challenging ROBI benchmark. Code and models are\navailable at https://github.com/zhiyuanYU134/MIRETR.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Zhiyuan Yu", "Zheng Qin", "lintao zheng", "Kai Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f307"}, "filepath": "data/2403.19242.png", "tags": [], "_media_type": "image", "_rand": 0.9992606844683781, "arXiv_link": "https://arxiv.org/abs/2403.19242", "other_link": "", "title": "RTracker: Recoverable Tracking via PN Tree Structured Memory", "abstract": "Existing tracking methods mainly focus on learning better target\nrepresentation or developing more robust prediction models to improve tracking\nperformance. While tracking performance has significantly improved, the target\nloss issue occurs frequently due to tracking failures, complete occlusion, or\nout-of-view situations. However, considerably less attention is paid to the\nself-recovery issue of tracking methods, which is crucial for practical\napplications. To this end, we propose a recoverable tracking framework,\nRTracker, that uses a tree-structured memory to dynamically associate a tracker\nand a detector to enable self-recovery ability. Specifically, we propose a\nPositive-Negative Tree-structured memory to chronologically store and maintain\npositive and negative target samples. Upon the PN tree memory, we develop\ncorresponding walking rules for determining the state of the target and define\na set of control flows to unite the tracker and the detector in different\ntracking scenarios. Our core idea is to use the support samples of positive and\nnegative target categories to establish a relative distance-based criterion for\na reliable assessment of target loss. 
The favorable performance in comparison\nagainst the state-of-the-art methods on numerous challenging benchmarks\ndemonstrates the effectiveness of the proposed algorithm.", "keywords": [], "authors_list": ["Yuqing Huang", "Xin Li", "Zikun Zhou", "Yaowei Wang", "Zhenyu He", "Ming-Hsuan Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f308"}, "filepath": "data/2405.04953.png", "tags": [], "_media_type": "image", "_rand": 0.9996298672682015, "arXiv_link": "https://arxiv.org/abs/2405.04953", "other_link": "", "title": "Supervised Anomaly Detection for Complex Industrial Images", "abstract": "Automating visual inspection in industrial production lines is essential for\nincreasing product quality across various industries. Anomaly detection (AD)\nmethods serve as robust tools for this purpose. However, existing public\ndatasets primarily consist of images without anomalies, limiting the practical\napplication of AD methods in production settings. To address this challenge, we\npresent (1) the Valeo Anomaly Dataset (VAD), a novel real-world industrial\ndataset comprising 5000 images, including 2000 instances of challenging real\ndefects across more than 20 subclasses. Acknowledging that traditional AD\nmethods struggle with this dataset, we introduce (2) Segmentation-based Anomaly\nDetector (SegAD). First, SegAD leverages anomaly maps as well as segmentation\nmaps to compute local statistics. Next, SegAD uses these statistics and an\noptional supervised classifier score as input features for a Boosted Random\nForest (BRF) classifier, yielding the final anomaly score. Our SegAD achieves\nstate-of-the-art performance on both VAD (+2.1% AUROC) and the VisA dataset\n(+0.4% AUROC). The code and the models are publicly available.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Aimira Baitieva", "David Hurych", "Victor Besnier", "Olivier BERNARD"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f309"}, "filepath": "data/2404.04650.png", "tags": [], "_media_type": "image", "_rand": 0.9997439384945811, "arXiv_link": "https://arxiv.org/abs/2404.04650", "other_link": "https://github.com/xiefan-guo/initno.", "title": "InitNO: Boosting Text-to-Image Diffusion Models via Initial Noise Optimization", "abstract": "Recent strides in the development of diffusion models, exemplified by\nadvancements such as Stable Diffusion, have underscored their remarkable\nprowess in generating visually compelling images. However, the imperative of\nachieving a seamless alignment between the generated image and the provided\nprompt persists as a formidable challenge. This paper traces the root of these\ndifficulties to invalid initial noise, and proposes a solution in the form of\nInitial Noise Optimization (InitNO), a paradigm that refines this noise.\nConsidering text prompts, not all random noises are effective in synthesizing\nsemantically-faithful images. We design the cross-attention response score and\nthe self-attention conflict score to evaluate the initial noise, bifurcating\nthe initial latent space into valid and invalid sectors. 
A strategically\ncrafted noise optimization pipeline is developed to guide the initial noise\ntowards valid regions. Our method, validated through rigorous experimentation,\nshows a commendable proficiency in generating images in strict accordance with\ntext prompts. Our code is available at https://github.com/xiefan-guo/initno.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Xiefan Guo", "Jinlin Liu", "Miaomiao Cui", "Jiankai Li", "Hongyu Yang", "Di Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f30a"}, "filepath": "data/2404.18448.png", "tags": [], "_media_type": "image", "_rand": 0.9994254059478462, "arXiv_link": "https://arxiv.org/abs/2404.18448", "other_link": "https://github.com/cwlee00/MFP.", "title": "MFP: Making Full use of Probability Maps for Interactive Image Segmentation", "abstract": "In recent interactive segmentation algorithms, previous probability maps are\nused as network input to help predictions in the current segmentation round.\nHowever, despite the utilization of previous masks, useful information\ncontained in the probability maps is not well propagated to the current\npredictions. In this paper, to overcome this limitation, we propose a novel and\neffective algorithm for click-based interactive image segmentation, called MFP,\nwhich attempts to make full use of probability maps. We first modulate previous\nprobability maps to enhance their representations of user-specified objects.\nThen, we feed the modulated probability maps as additional input to the\nsegmentation network. We implement the proposed MFP algorithm based on the\nResNet-34, HRNet-18, and ViT-B backbones and assess the performance extensively\non various datasets. It is demonstrated that MFP meaningfully outperforms the\nexisting algorithms using identical backbones. The source codes are available\nat https://github.com/cwlee00/MFP.", "keywords": [], "authors_list": ["Chaewon Lee", "Seon-Ho Lee", "Chang-Su Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f30b"}, "filepath": "data/2312.02719.png", "tags": [], "_media_type": "image", "_rand": 0.9994324277498786, "arXiv_link": "https://arxiv.org/abs/2312.02719", "other_link": "", "title": "A Conditional Denoising Diffusion Probabilistic Model for Point Cloud Upsampling", "abstract": "Point cloud upsampling (PCU) enriches the representation of raw point clouds,\nsignificantly improving the performance in downstream tasks such as\nclassification and reconstruction. Most of the existing point cloud upsampling\nmethods focus on sparse point cloud feature extraction and upsampling module\ndesign. In a different way, we dive deeper into directly modelling the gradient\nof data distribution from dense point clouds. In this paper, we proposed a\nconditional denoising diffusion probability model (DDPM) for point cloud\nupsampling, called PUDM. Specifically, PUDM treats the sparse point cloud as a\ncondition, and iteratively learns the transformation relationship between the\ndense point cloud and the noise. Simultaneously, PUDM aligns with a dual\nmapping paradigm to further improve the discernment of point features. 
In this\ncontext, PUDM enables learning complex geometry details in the ground truth\nthrough the dominant features, while avoiding an additional upsampling module\ndesign. Furthermore, to generate high-quality arbitrary-scale point clouds\nduring inference, PUDM exploits the prior knowledge of the scale between sparse\npoint clouds and dense point clouds during training by parameterizing a rate\nfactor. Moreover, PUDM exhibits strong noise robustness in experimental\nresults. In the quantitative and qualitative evaluations on PU1K and PUGAN,\nPUDM significantly outperformed existing methods in terms of Chamfer Distance\n(CD) and Hausdorff Distance (HD), achieving state of the art (SOTA)\nperformance.", "keywords": [], "authors_list": ["Wentao Qu", "Yuantian Shao", "Lingwu Meng", "Xiaoshui Huang", "Liang Xiao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f30c"}, "filepath": "data/2405.04167.png", "tags": [], "_media_type": "image", "_rand": 0.9999892252921527, "arXiv_link": "https://arxiv.org/abs/2405.04167", "other_link": "", "title": "Bridging the Synthetic-to-Authentic Gap: Distortion-Guided Unsupervised Domain Adaptation for Blind Image Quality Assessment", "abstract": "The annotation of blind image quality assessment (BIQA) is labor-intensive\nand time-consuming, especially for authentic images. Training on synthetic data\nis expected to be beneficial, but synthetically trained models often suffer\nfrom poor generalization in real domains due to domain gaps. In this work, we\nmake a key observation that introducing more distortion types in the synthetic\ndataset may not improve or even be harmful to generalizing authentic image\nquality assessment. To solve this challenge, we propose distortion-guided\nunsupervised domain adaptation for BIQA (DGQA), a novel framework that\nleverages adaptive multi-domain selection via prior knowledge from distortion\nto match the data distribution between the source domains and the target\ndomain, thereby reducing negative transfer from the outlier source domains.\nExtensive experiments on two cross-domain settings (synthetic distortion to\nauthentic distortion and synthetic distortion to algorithmic distortion) have\ndemonstrated the effectiveness of our proposed DGQA. 
Besides, DGQA is\northogonal to existing model-based BIQA methods, and can be used in combination\nwith such models to improve performance with less training data.", "keywords": ["Low-level vision"], "authors_list": ["Aobo Li", "Jinjian Wu", "Yongxu Liu", "Leida Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f30d"}, "filepath": "data/2403.11549.png", "tags": [], "_media_type": "image", "_rand": 0.9991202635960951, "arXiv_link": "https://arxiv.org/abs/2403.11549", "other_link": "https://github.com/JiazuoYu/MoE-Adapters4CL", "title": "Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters", "abstract": "Continual learning can empower vision-language models to continuously acquire\nnew knowledge, without the need for access to the entire historical dataset.\nHowever, mitigating the performance degradation in large-scale models is\nnon-trivial due to (i) parameter shifts throughout lifelong learning and (ii)\nsignificant computational burdens associated with full-model tuning. In this\nwork, we present a parameter-efficient continual learning framework to\nalleviate long-term forgetting in incremental learning with vision-language\nmodels. Our approach involves the dynamic expansion of a pre-trained CLIP\nmodel, through the integration of Mixture-of-Experts (MoE) adapters in response\nto new tasks. To preserve the zero-shot recognition capability of\nvision-language models, we further introduce a Distribution Discriminative\nAuto-Selector (DDAS) that automatically routes in-distribution and\nout-of-distribution inputs to the MoE Adapter and the original CLIP,\nrespectively. Through extensive experiments across various settings, our\nproposed method consistently outperforms previous state-of-the-art approaches\nwhile concurrently reducing parameter training burdens by 60%. Our code locates\nat https://github.com/JiazuoYu/MoE-Adapters4CL", "keywords": ["Multimodal models and vision-language models", "Efficient and scalable vision"], "authors_list": ["Jiazuo Yu", "Yunzhi Zhuge", "Lu Zhang", "Ping Hu", "Dong Wang", "Huchuan Lu", "You He"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f30e"}, "filepath": "data/2312.03441v4.png", "tags": [], "_media_type": "image", "_rand": 0.9993320731924783, "arXiv_link": "https://arxiv.org/abs/2312.03441v4", "other_link": "https://github.com/Zplusdragon/UFineBench}.", "title": "UFineBench: Towards Text-based Person Retrieval with Ultra-fine Granularity", "abstract": "Existing text-based person retrieval datasets often have relatively\ncoarse-grained text annotations. This hinders the model to comprehend the\nfine-grained semantics of query texts in real scenarios. To address this\nproblem, we contribute a new benchmark named \\textbf{UFineBench} for text-based\nperson retrieval with ultra-fine granularity.\n Firstly, we construct a new \\textbf{dataset} named UFine6926. We collect a\nlarge number of person images and manually annotate each image with two\ndetailed textual descriptions, averaging 80.8 words each. The average word\ncount is three to four times that of the previous datasets. 
In addition of\nstandard in-domain evaluation, we also propose a special \\textbf{evaluation\nparadigm} more representative of real scenarios. It contains a new evaluation\nset with cross domains, cross textual granularity and cross textual styles,\nnamed UFine3C, and a new evaluation metric for accurately measuring retrieval\nability, named mean Similarity Distribution (mSD). Moreover, we propose CFAM, a\nmore efficient \\textbf{algorithm} especially designed for text-based person\nretrieval with ultra fine-grained texts. It achieves fine granularity mining by\nadopting a shared cross-modal granularity decoder and hard negative match\nmechanism.\n With standard in-domain evaluation, CFAM establishes competitive performance\nacross various datasets, especially on our ultra fine-grained UFine6926.\nFurthermore, by evaluating on UFine3C, we demonstrate that training on our\nUFine6926 significantly improves generalization to real scenarios compared with\nother coarse-grained datasets. The dataset and code will be made publicly\navailable at \\url{https://github.com/Zplusdragon/UFineBench}.", "keywords": ["Multimodal models and vision-language models", "Biometrics and human analysis"], "authors_list": ["Jialong Zuo", "Hanyu Zhou", "Ying Nie", "Feng Zhang", "Tianyu Guo", "Nong Sang", "Yunhe Wang", "Changxin Gao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f30f"}, "filepath": "data/2403.17638v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992555455478741, "arXiv_link": "https://arxiv.org/abs/2403.17638v1", "other_link": "https://github.com/HKCLynn/ReVoRF.", "title": "Learning with Unreliability: Fast Few-shot Voxel Radiance Fields with Relative Geometric Consistency", "abstract": "We propose a voxel-based optimization framework, ReVoRF, for few-shot\nradiance fields that strategically address the unreliability in pseudo novel\nview synthesis. Our method pivots on the insight that relative depth\nrelationships within neighboring regions are more reliable than the absolute\ncolor values in disoccluded areas. Consequently, we devise a bilateral\ngeometric consistency loss that carefully navigates the trade-off between color\nfidelity and geometric accuracy in the context of depth consistency for\nuncertain regions. Moreover, we present a reliability-guided learning strategy\nto discern and utilize the variable quality across synthesized views,\ncomplemented by a reliability-aware voxel smoothing algorithm that smoothens\nthe transition between reliable and unreliable data patches. Our approach\nallows for a more nuanced use of all available data, promoting enhanced\nlearning from regions previously considered unsuitable for high-quality\nreconstruction. Extensive experiments across diverse datasets reveal that our\napproach attains significant gains in efficiency and accuracy, delivering\nrendering speeds of 3 FPS, 7 mins to train a $360^\\circ$ scene, and a 5\\%\nimprovement in PSNR over existing few-shot methods. 
Code is available at\nhttps://github.com/HKCLynn/ReVoRF.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Xu Yingjie", "Bangzhen Liu", "Hao Tang", "Bailin Deng", "Shengfeng He"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f310"}, "filepath": "data/2403.16141.png", "tags": [], "_media_type": "image", "_rand": 0.9995201655098945, "arXiv_link": "https://arxiv.org/abs/2403.16141", "other_link": "", "title": "Entity-NeRF: Detecting and Removing Moving Entities in Urban Scenes", "abstract": "Recent advancements in the study of Neural Radiance Fields (NeRF) for dynamic\nscenes often involve explicit modeling of scene dynamics. However, this\napproach faces challenges in modeling scene dynamics in urban environments,\nwhere moving objects of various categories and scales are present. In such\nsettings, it becomes crucial to effectively eliminate moving objects to\naccurately reconstruct static backgrounds. Our research introduces an\ninnovative method, termed here as Entity-NeRF, which combines the strengths of\nknowledge-based and statistical strategies. This approach utilizes entity-wise\nstatistics, leveraging entity segmentation and stationary entity classification\nthrough thing/stuff segmentation. To assess our methodology, we created an\nurban scene dataset masked with moving objects. Our comprehensive experiments\ndemonstrate that Entity-NeRF notably outperforms existing techniques in\nremoving moving objects and reconstructing static urban backgrounds, both\nquantitatively and qualitatively.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Takashi Otonari", "Satoshi Ikehata", "Kiyoharu Aizawa"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f311"}, "filepath": "data/2306.10013.png", "tags": [], "_media_type": "image", "_rand": 0.999370815887779, "arXiv_link": "https://arxiv.org/abs/2306.10013", "other_link": "https://github.com/Robertwyq/PanoOcc.", "title": "PanoOcc: Unified Occupancy Representation for Camera-based 3D Panoptic Segmentation", "abstract": "Comprehensive modeling of the surrounding 3D world is key to the success of\nautonomous driving. However, existing perception tasks like object detection,\nroad structure segmentation, depth & elevation estimation, and open-set object\nlocalization each only focus on a small facet of the holistic 3D scene\nunderstanding task. This divide-and-conquer strategy simplifies the algorithm\ndevelopment procedure at the cost of losing an end-to-end unified solution to\nthe problem. In this work, we address this limitation by studying camera-based\n3D panoptic segmentation, aiming to achieve a unified occupancy representation\nfor camera-only 3D scene understanding. To achieve this, we introduce a novel\nmethod called PanoOcc, which utilizes voxel queries to aggregate spatiotemporal\ninformation from multi-frame and multi-view images in a coarse-to-fine scheme,\nintegrating feature learning and scene representation into a unified occupancy\nrepresentation. We have conducted extensive ablation studies to verify the\neffectiveness and efficiency of the proposed method. 
Our approach achieves new\nstate-of-the-art results for camera-based semantic segmentation and panoptic\nsegmentation on the nuScenes dataset. Furthermore, our method can be easily\nextended to dense occupancy prediction and has shown promising performance on\nthe Occ3D benchmark. The code will be released at\nhttps://github.com/Robertwyq/PanoOcc.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Yuqi Wang", "Yuntao Chen", "Xingyu Liao", "Lue Fan", "Zhaoxiang Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f312"}, "filepath": "data/2401.01207.png", "tags": [], "_media_type": "image", "_rand": 0.9996665209990415, "arXiv_link": "https://arxiv.org/abs/2401.01207", "other_link": "", "title": "Towards a Simultaneous and Granular Identity-Expression Control in Personalized Face Generation", "abstract": "In human-centric content generation, the pre-trained text-to-image models\nstruggle to produce user-wanted portrait images, which retain the identity of\nindividuals while exhibiting diverse expressions. This paper introduces our\nefforts towards personalized face generation. To this end, we propose a novel\nmulti-modal face generation framework, capable of simultaneous\nidentity-expression control and more fine-grained expression synthesis. Our\nexpression control is so sophisticated that it can be specialized by the\nfine-grained emotional vocabulary. We devise a novel diffusion model that can\nundertake the task of simultaneously face swapping and reenactment. Due to the\nentanglement of identity and expression, it's nontrivial to separately and\nprecisely control them in one framework, thus has not been explored yet. To\novercome this, we propose several innovative designs in the conditional\ndiffusion model, including balancing identity and expression encoder, improved\nmidpoint sampling, and explicitly background conditioning. Extensive\nexperiments have demonstrated the controllability and scalability of the\nproposed framework, in comparison with state-of-the-art text-to-image, face\nswapping, and face reenactment methods.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Renshuai Liu", "Bowen Ma", "Wei Zhang", "Zhipeng Hu", "Changjie Fan", "Tangjie Lv", "Yu Ding", "Xuan Cheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f313"}, "filepath": "data/2403.09439.png", "tags": [], "_media_type": "image", "_rand": 0.9995370905882356, "arXiv_link": "https://arxiv.org/abs/2403.09439", "other_link": "", "title": "3D-SceneDreamer: Text-Driven 3D-Consistent Scene Generation", "abstract": "Text-driven 3D scene generation techniques have made rapid progress in recent\nyears. Their success is mainly attributed to using existing generative models\nto iteratively perform image warping and inpainting to generate 3D scenes.\nHowever, these methods heavily rely on the outputs of existing models, leading\nto error accumulation in geometry and appearance that prevent the models from\nbeing used in various scenarios (e.g., outdoor and unreal scenarios). 
To\naddress this limitation, we generatively refine the newly generated local views\nby querying and aggregating global 3D information, and then progressively\ngenerate the 3D scene. Specifically, we employ a tri-plane features-based NeRF\nas a unified representation of the 3D scene to constrain global 3D consistency,\nand propose a generative refinement network to synthesize new contents with\nhigher quality by exploiting the natural image prior from 2D diffusion model as\nwell as the global 3D information of the current scene. Our extensive\nexperiments demonstrate that, in comparison to previous methods, our approach\nsupports wide variety of scene generation and arbitrary camera trajectories\nwith improved visual quality and 3D consistency.", "keywords": ["Image and video generation and manipulation", "Scene analysis and understanding"], "authors_list": ["Songchun Zhang", "Yibo Zhang", "Quan Zheng", "Rui Ma", "Wei Hua", "Hujun Bao", "Weiwei Xu", "Changqing Zou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f314"}, "filepath": "data/2403.07592v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992741993935808, "arXiv_link": "https://arxiv.org/abs/2403.07592v1", "other_link": "", "title": "Accurate Spatial Gene Expression Prediction by Integrating Multi-Resolution Features", "abstract": "Recent advancements in Spatial Transcriptomics (ST) technology have\nfacilitated detailed gene expression analysis within tissue contexts. However,\nthe high costs and methodological limitations of ST necessitate a more robust\npredictive model. In response, this paper introduces TRIPLEX, a novel deep\nlearning framework designed to predict spatial gene expression from Whole Slide\nImages (WSIs). TRIPLEX uniquely harnesses multi-resolution features, capturing\ncellular morphology at individual spots, the local context around these spots,\nand the global tissue organization. By integrating these features through an\neffective fusion strategy, TRIPLEX achieves accurate gene expression\nprediction. Our comprehensive benchmark study, conducted on three public ST\ndatasets and supplemented with Visium data from 10X Genomics, demonstrates that\nTRIPLEX outperforms current state-of-the-art models in Mean Squared Error\n(MSE), Mean Absolute Error (MAE), and Pearson Correlation Coefficient (PCC).\nThe model's predictions align closely with ground truth gene expression\nprofiles and tumor annotations, underscoring TRIPLEX's potential in advancing\ncancer diagnosis and treatment.", "keywords": [], "authors_list": ["Youngmin Chung", "Ji Hun Ha", "Kyeong Chan Im", "Joo Sang Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f315"}, "filepath": "data/2404.04996.png", "tags": [], "_media_type": "image", "_rand": 0.9997779295859298, "arXiv_link": "https://arxiv.org/abs/2404.04996", "other_link": "https://github.com/Drchip61/Dual_SAM.", "title": "Fantastic Animals and Where to Find Them: Segment Any Marine Animal with Dual SAM", "abstract": "As an important pillar of underwater intelligence, Marine Animal Segmentation\n(MAS) involves segmenting animals within marine environments. 
Previous methods\ndon't excel in extracting long-range contextual features and overlook the\nconnectivity between discrete pixels. Recently, Segment Anything Model (SAM)\noffers a universal framework for general segmentation tasks. Unfortunately,\ntrained with natural images, SAM does not obtain the prior knowledge from\nmarine images. In addition, the single-position prompt of SAM is very\ninsufficient for prior guidance. To address these issues, we propose a novel\nfeature learning framework, named Dual-SAM for high-performance MAS. To this\nend, we first introduce a dual structure with SAM's paradigm to enhance feature\nlearning of marine images. Then, we propose a Multi-level Coupled Prompt (MCP)\nstrategy to instruct comprehensive underwater prior information, and enhance\nthe multi-level features of SAM's encoder with adapters. Subsequently, we\ndesign a Dilated Fusion Attention Module (DFAM) to progressively integrate\nmulti-level features from SAM's encoder. Finally, instead of directly\npredicting the masks of marine animals, we propose a Criss-Cross Connectivity\nPrediction (C$^3$P) paradigm to capture the inter-connectivity between discrete\npixels. With dual decoders, it generates pseudo-labels and achieves mutual\nsupervision for complementary feature representations, resulting in\nconsiderable improvements over previous techniques. Extensive experiments\nverify that our proposed method achieves state-of-the-art performances on five\nwidely-used MAS datasets. The code is available at\nhttps://github.com/Drchip61/Dual_SAM.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Pingping Zhang", "Tianyu Yan", "Yang Liu", "Huchuan Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f316"}, "filepath": "data/2401.06197.png", "tags": [], "_media_type": "image", "_rand": 0.9990608932163666, "arXiv_link": "https://arxiv.org/abs/2401.06197", "other_link": "", "title": "Efficient Deformable ConvNets: Rethinking Dynamic and Sparse Operator for Vision Applications", "abstract": "We introduce Deformable Convolution v4 (DCNv4), a highly efficient and\neffective operator designed for a broad spectrum of vision applications. DCNv4\naddresses the limitations of its predecessor, DCNv3, with two key enhancements:\n1. removing softmax normalization in spatial aggregation to enhance its dynamic\nproperty and expressive power and 2. optimizing memory access to minimize\nredundant operations for speedup. These improvements result in a significantly\nfaster convergence compared to DCNv3 and a substantial increase in processing\nspeed, with DCNv4 achieving more than three times the forward speed. DCNv4\ndemonstrates exceptional performance across various tasks, including image\nclassification, instance and semantic segmentation, and notably, image\ngeneration. When integrated into generative models like U-Net in the latent\ndiffusion model, DCNv4 outperforms its baseline, underscoring its possibility\nto enhance generative models. In practical applications, replacing DCNv3 with\nDCNv4 in the InternImage model to create FlashInternImage results in up to 80%\nspeed increase and further performance improvement without further\nmodifications. 
The advancements in speed and efficiency of DCNv4, combined with\nits robust performance across diverse vision tasks, show its potential as a\nfoundational building block for future vision models.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yuwen Xiong", "Zhiqi Li", "Yuntao Chen", "Feng Wang", "Xizhou Zhu", "Jiapeng Luo", "Wenhai Wang", "Tong Lu", "Hongsheng Li", "Yu Qiao", "Lewei Lu", "Jie Zhou", "Jifeng Dai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f317"}, "filepath": "data/2403.19412.png", "tags": [], "_media_type": "image", "_rand": 0.9998961856757586, "arXiv_link": "https://arxiv.org/abs/2403.19412", "other_link": "", "title": "A Simple and Effective Point-based Network for Event Camera 6-DOFs Pose Relocalization", "abstract": "Event cameras exhibit remarkable attributes such as high dynamic range,\nasynchronicity, and low latency, making them highly suitable for vision tasks\nthat involve high-speed motion in challenging lighting conditions. These\ncameras implicitly capture movement and depth information in events, making\nthem appealing sensors for Camera Pose Relocalization (CPR) tasks.\nNevertheless, existing CPR networks based on events neglect the pivotal\nfine-grained temporal information in events, resulting in unsatisfactory\nperformance. Moreover, the energy-efficient features are further compromised by\nthe use of excessively complex models, hindering efficient deployment on edge\ndevices. In this paper, we introduce PEPNet, a simple and effective point-based\nnetwork designed to regress six degrees of freedom (6-DOFs) event camera poses.\nWe rethink the relationship between the event camera and CPR tasks, leveraging\nthe raw Point Cloud directly as network input to harness the high-temporal\nresolution and inherent sparsity of events. PEPNet is adept at abstracting the\nspatial and implicit temporal features through hierarchical structure and\nexplicit temporal features by Attentive Bi-directional Long Short-Term Memory\n(A-Bi-LSTM). By employing a carefully crafted lightweight design, PEPNet\ndelivers state-of-the-art (SOTA) performance on both indoor and outdoor\ndatasets with meager computational resources. Specifically, PEPNet attains a\nsignificant 38% and 33% performance improvement on the random split IJRR and\nM3ED datasets, respectively. Moreover, the lightweight design version\nPEPNet$_{tiny}$ accomplishes results comparable to the SOTA while employing a\nmere 0.5% of the parameters.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Hongwei Ren", "Jiadong Zhu", "Yue Zhou", "Haotian FU", "Yulong Huang", "Bojun Cheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f318"}, "filepath": "data/2403.18551.png", "tags": [], "_media_type": "image", "_rand": 0.9997740779707551, "arXiv_link": "https://arxiv.org/abs/2403.18551", "other_link": "", "title": "Attention Calibration for Disentangled Text-to-Image Personalization", "abstract": "Recent thrilling progress in large-scale text-to-image (T2I) models has\nunlocked unprecedented synthesis quality of AI-generated content (AIGC)\nincluding image generation, 3D and video composition. 
Further, personalized\ntechniques enable appealing customized production of a novel concept given only\nseveral images as reference. However, an intriguing problem persists: Is it\npossible to capture multiple, novel concepts from one single reference image?\nIn this paper, we identify that existing approaches fail to preserve visual\nconsistency with the reference image and eliminate cross-influence from\nconcepts. To alleviate this, we propose an attention calibration mechanism to\nimprove the concept-level understanding of the T2I model. Specifically, we\nfirst introduce new learnable modifiers bound with classes to capture\nattributes of multiple concepts. Then, the classes are separated and\nstrengthened following the activation of the cross-attention operation,\nensuring comprehensive and self-contained concepts. Additionally, we suppress\nthe attention activation of different classes to mitigate mutual influence\namong concepts. Together, our proposed method, dubbed DisenDiff, can learn\ndisentangled multiple concepts from one single image and produce novel\ncustomized images with learned concepts. We demonstrate that our method\noutperforms the current state of the art in both qualitative and quantitative\nevaluations. More importantly, our proposed techniques are compatible with LoRA\nand inpainting pipelines, enabling more interactive experiences.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Yanbing Zhang", "Mengping Yang", "Qin Zhou", "Zhe Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f319"}, "filepath": "data/2311.11284.png", "tags": [], "_media_type": "image", "_rand": 0.9991644540818351, "arXiv_link": "https://arxiv.org/abs/2311.11284", "other_link": "", "title": "LucidDreamer: Towards High-Fidelity Text-to-3D Generation via Interval Score Matching", "abstract": "The recent advancements in text-to-3D generation mark a significant milestone\nin generative models, unlocking new possibilities for creating imaginative 3D\nassets across various real-world scenarios. While recent advancements in\ntext-to-3D generation have shown promise, they often fall short in rendering\ndetailed and high-quality 3D models. This problem is especially prevalent as\nmany methods base themselves on Score Distillation Sampling (SDS). This paper\nidentifies a notable deficiency in SDS, that it brings inconsistent and\nlow-quality updating direction for the 3D model, causing the over-smoothing\neffect. To address this, we propose a novel approach called Interval Score\nMatching (ISM). ISM employs deterministic diffusing trajectories and utilizes\ninterval-based score matching to counteract over-smoothing. 
Furthermore, we\nincorporate 3D Gaussian Splatting into our text-to-3D generation pipeline.\nExtensive experiments show that our model largely outperforms the\nstate-of-the-art in quality and training efficiency.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Yixun Liang", "Xin Yang", "Jiantao Lin", "Haodong LI", "Xiaogang Xu", "Ying-Cong Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f31a"}, "filepath": "data/2404.06044.png", "tags": [], "_media_type": "image", "_rand": 0.999303611267378, "arXiv_link": "https://arxiv.org/abs/2404.06044", "other_link": "", "title": "Object Dynamics Modeling with Hierarchical Point Cloud-based Representations", "abstract": "Modeling object dynamics with a neural network is an important problem with\nnumerous applications. Most recent work has been based on graph neural\nnetworks. However, physics happens in 3D space, where geometric information\npotentially plays an important role in modeling physical phenomena. In this\nwork, we propose a novel U-net architecture based on continuous point\nconvolution which naturally embeds information from 3D coordinates and allows\nfor multi-scale feature representations with established downsampling and\nupsampling procedures. Bottleneck layers in the downsampled point clouds lead\nto better long-range interaction modeling. Besides, the flexibility of point\nconvolutions allows our approach to generalize to sparsely sampled points from\nmesh vertices and dynamically generate features on important interaction points\non mesh faces. Experimental results demonstrate that our approach significantly\nimproves the state-of-the-art, especially in scenarios that require accurate\ngravity or collision reasoning.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Chanho Kim", "Li Fuxin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f31b"}, "filepath": "data/2312.06663.png", "tags": [], "_media_type": "image", "_rand": 0.9999524628472549, "arXiv_link": "https://arxiv.org/abs/2312.06663", "other_link": "", "title": "CAD: Photorealistic 3D Generation via Adversarial Distillation", "abstract": "The increased demand for 3D data in AR/VR, robotics and gaming applications,\ngave rise to powerful generative pipelines capable of synthesizing high-quality\n3D objects. Most of these models rely on the Score Distillation Sampling (SDS)\nalgorithm to optimize a 3D representation such that the rendered image\nmaintains a high likelihood as evaluated by a pre-trained diffusion model.\nHowever, finding a correct mode in the high-dimensional distribution produced\nby the diffusion model is challenging and often leads to issues such as\nover-saturation, over-smoothing, and Janus-like artifacts. In this paper, we\npropose a novel learning paradigm for 3D synthesis that utilizes pre-trained\ndiffusion models. 
Instead of focusing on mode-seeking, our method directly\nmodels the distribution discrepancy between multi-view renderings and diffusion\npriors in an adversarial manner, which unlocks the generation of high-fidelity\nand photorealistic 3D content, conditioned on a single image and prompt.\nMoreover, by harnessing the latent space of GANs and expressive diffusion model\npriors, our method facilitates a wide variety of 3D applications including\nsingle-view reconstruction, high diversity generation and continuous 3D\ninterpolation in the open domain. The experiments demonstrate the superiority\nof our pipeline compared to previous works in terms of generation quality and\ndiversity.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Ziyu Wan", "Despoina Paschalidou", "Ian Huang", "Hongyu Liu", "Bokui Shen", "Xiaoyu Xiang", "Jing Liao", "Leonidas Guibas"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f31c"}, "filepath": "data/2311.17857v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990517471136632, "arXiv_link": "https://arxiv.org/abs/2311.17857v1", "other_link": "", "title": "Gaussian Shell Maps for Efficient 3D Human Generation", "abstract": "Efficient generation of 3D digital humans is important in several industries,\nincluding virtual reality, social media, and cinematic production. 3D\ngenerative adversarial networks (GANs) have demonstrated state-of-the-art\n(SOTA) quality and diversity for generated assets. Current 3D GAN\narchitectures, however, typically rely on volume representations, which are\nslow to render, thereby hampering the GAN training and requiring\nmulti-view-inconsistent 2D upsamplers. Here, we introduce Gaussian Shell Maps\n(GSMs) as a framework that connects SOTA generator network architectures with\nemerging 3D Gaussian rendering primitives using an articulable multi\nshell--based scaffold. In this setting, a CNN generates a 3D texture stack with\nfeatures that are mapped to the shells. The latter represent inflated and\ndeflated versions of a template surface of a digital human in a canonical body\npose. Instead of rasterizing the shells directly, we sample 3D Gaussians on the\nshells whose attributes are encoded in the texture features. These Gaussians\nare efficiently and differentiably rendered. The ability to articulate the\nshells is important during GAN training and, at inference time, to deform a\nbody into arbitrary user-defined poses. Our efficient rendering scheme bypasses\nthe need for view-inconsistent upsamplers and achieves high-quality multi-view\nconsistent renderings at a native resolution of $512 \\times 512$ pixels. 
We\ndemonstrate that GSMs successfully generate 3D humans when trained on\nsingle-view datasets, including SHHQ and DeepFashion.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Rameen Abdal", "Wang Yifan", "Zifan Shi", "Yinghao Xu", "Ryan Po", "Zhengfei Kuang", "Qifeng Chen", "Dit-Yan Yeung", "Gordon Wetzstein"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f31d"}, "filepath": "data/2402.14000.png", "tags": [], "_media_type": "image", "_rand": 0.9994081076040489, "arXiv_link": "https://arxiv.org/abs/2402.14000", "other_link": "", "title": "3D-Aware Face Editing via Warping-Guided Latent Direction Learning", "abstract": "This work presents 3DPE, a practical method that can efficiently edit a face\nimage following given prompts, like reference images or text descriptions, in a\n3D-aware manner. To this end, a lightweight module is distilled from a 3D\nportrait generator and a text-to-image model, which provide prior knowledge of\nface geometry and superior editing capability, respectively. Such a design\nbrings two compelling advantages over existing approaches. First, our system\nachieves real-time editing with a feedforward network (i.e., ~0.04s per image),\nover 100x faster than the second competitor. Second, thanks to the powerful\npriors, our module could focus on the learning of editing-related variations,\nsuch that it manages to handle various types of editing simultaneously in the\ntraining phase and further supports fast adaptation to user-specified\ncustomized types of editing during inference (e.g., with ~5min fine-tuning per\nstyle). The code, the model, and the interface will be made publicly available\nto facilitate future research.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Yuhao Cheng", "Zhuo Chen", "Xingyu Ren", "Wenhan Zhu", "Zhengqin Xu", "Di Xu", "Yang Changpeng", "Yichao Yan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f31e"}, "filepath": "data/2312.04560.png", "tags": [], "_media_type": "image", "_rand": 0.9999740560046008, "arXiv_link": "https://arxiv.org/abs/2312.04560", "other_link": "https://ethanweber.me/nerfiller.", "title": "NeRFiller: Completing Scenes via Generative 3D Inpainting", "abstract": "We propose NeRFiller, an approach that completes missing portions of a 3D\ncapture via generative 3D inpainting using off-the-shelf 2D visual generative\nmodels. Often parts of a captured 3D scene or object are missing due to mesh\nreconstruction failures or a lack of observations (e.g., contact regions, such\nas the bottom of objects, or hard-to-reach areas). We approach this challenging\n3D inpainting problem by leveraging a 2D inpainting diffusion model. We\nidentify a surprising behavior of these models, where they generate more 3D\nconsistent inpaints when images form a 2$\\times$2 grid, and show how to\ngeneralize this behavior to more than four images. 
We then present an iterative\nframework to distill these inpainted regions into a single consistent 3D scene.\nIn contrast to related works, we focus on completing scenes rather than\ndeleting foreground objects, and our approach does not require tight 2D object\nmasks or text. We compare our approach to relevant baselines adapted to our\nsetting on a variety of scenes, where NeRFiller creates the most 3D consistent\nand plausible scene completions. Our project page is at\nhttps://ethanweber.me/nerfiller.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Ethan Weber", "Aleksander Holynski", "Varun Jampani", "Saurabh Saxena", "Noah Snavely", "Abhishek Kar", "Angjoo Kanazawa"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f31f"}, "filepath": "data/2404.11207v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997427084670526, "arXiv_link": "https://arxiv.org/abs/2404.11207v1", "other_link": "", "title": "Exploring the Transferability of Visual Prompting for Multimodal Large Language Models", "abstract": "Although Multimodal Large Language Models (MLLMs) have demonstrated promising\nversatile capabilities, their performance is still inferior to specialized\nmodels on downstream tasks, which makes adaptation necessary to enhance their\nutility. However, fine-tuning methods require independent training for every\nmodel, leading to huge computation and memory overheads. In this paper, we\npropose a novel setting where we aim to improve the performance of diverse\nMLLMs with a group of shared parameters optimized for a downstream task. To\nachieve this, we propose Transferable Visual Prompting (TVP), a simple and\neffective approach to generate visual prompts that can transfer to different\nmodels and improve their performance on downstream tasks after trained on only\none model. We introduce two strategies to address the issue of cross-model\nfeature corruption of existing visual prompting methods and enhance the\ntransferability of the learned prompts, including 1) Feature Consistency\nAlignment: which imposes constraints to the prompted feature changes to\nmaintain task-agnostic knowledge; 2) Task Semantics Enrichment: which\nencourages the prompted images to contain richer task-specific semantics with\nlanguage guidance. 
We validate the effectiveness of TVP through extensive\nexperiments with 6 modern MLLMs on a wide variety of tasks ranging from object\nrecognition and counting to multimodal reasoning and hallucination correction.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Yichi Zhang", "Yinpeng Dong", "Siyuan Zhang", "Tianzan Min", "Hang Su", "Jun Zhu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f320"}, "filepath": "data/2404.09454v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996391185051193, "arXiv_link": "https://arxiv.org/abs/2404.09454v1", "other_link": "", "title": "Utility-Fairness Trade-Offs and How to Find Them", "abstract": "When building classification systems with demographic fairness\nconsiderations, there are two objectives to satisfy: 1) maximizing utility for\nthe specific task and 2) ensuring fairness w.r.t. a known demographic\nattribute. These objectives often compete, so optimizing both can lead to a\ntrade-off between utility and fairness. While existing works acknowledge the\ntrade-offs and study their limits, two questions remain unanswered: 1) What are\nthe optimal trade-offs between utility and fairness? and 2) How can we\nnumerically quantify these trade-offs from data for a desired prediction task\nand demographic attribute of interest? This paper addresses these questions. We\nintroduce two utility-fairness trade-offs: the Data-Space and Label-Space\nTrade-off. The trade-offs reveal three regions within the utility-fairness\nplane, delineating what is fully and partially possible and impossible. We\npropose U-FaTE, a method to numerically quantify the trade-offs for a given\nprediction task and group fairness definition from data samples. Based on the\ntrade-offs, we introduce a new scheme for evaluating representations. An\nextensive evaluation of fair representation learning methods and\nrepresentations from over 1000 pre-trained models revealed that most current\napproaches are far from the estimated and achievable fairness-utility\ntrade-offs across multiple datasets and prediction tasks.", "keywords": [], "authors_list": ["Sepehr Dehdashtian", "Bashir Sadeghi", "Vishnu Naresh Boddeti"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computers and Society", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f321"}, "filepath": "data/2310.04041.png", "tags": [], "_media_type": "image", "_rand": 0.9990665331799844, "arXiv_link": "https://arxiv.org/abs/2310.04041", "other_link": "https://github.com/Junoh-Kang/OGDM_edm.", "title": "Observation-Guided Diffusion Probabilistic Models", "abstract": "We propose a novel diffusion-based image generation method called the\nobservation-guided diffusion probabilistic model (OGDM), which effectively\naddresses the tradeoff between quality control and fast sampling. Our approach\nreestablishes the training objective by integrating the guidance of the\nobservation process with the Markov chain in a principled way. 
This is achieved\nby introducing an additional loss term derived from the observation based on a\nconditional discriminator on noise level, which employs a Bernoulli\ndistribution indicating whether its input lies on the (noisy) real manifold or\nnot. This strategy allows us to optimize the more accurate negative\nlog-likelihood induced in the inference stage especially when the number of\nfunction evaluations is limited. The proposed training scheme is also\nadvantageous even when incorporated only into the fine-tuning process, and it\nis compatible with various fast inference strategies since our method yields\nbetter denoising networks using the exactly the same inference procedure\nwithout incurring extra computational cost. We demonstrate the effectiveness of\nour training algorithm using diverse inference techniques on strong diffusion\nmodel baselines. Our implementation is available at\nhttps://github.com/Junoh-Kang/OGDM_edm.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Junoh Kang", "Jinyoung Choi", "Sungik Choi", "Bohyung Han"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f322"}, "filepath": "data/2405.13870.png", "tags": [], "_media_type": "image", "_rand": 0.9998196208986427, "arXiv_link": "https://arxiv.org/abs/2405.13870", "other_link": "https://github.com/aim-uofa/FreeCustom.", "title": "FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition", "abstract": "Benefiting from large-scale pre-trained text-to-image (T2I) generative\nmodels, impressive progress has been achieved in customized image generation,\nwhich aims to generate user-specified concepts. Existing approaches have\nextensively focused on single-concept customization and still encounter\nchallenges when it comes to complex scenarios that involve combining multiple\nconcepts. These approaches often require retraining/fine-tuning using a few\nimages, leading to time-consuming training processes and impeding their swift\nimplementation. Furthermore, the reliance on multiple images to represent a\nsingular concept increases the difficulty of customization. To this end, we\npropose FreeCustom, a novel tuning-free method to generate customized images of\nmulti-concept composition based on reference concepts, using only one image per\nconcept as input. Specifically, we introduce a new multi-reference\nself-attention (MRSA) mechanism and a weighted mask strategy that enables the\ngenerated image to access and focus more on the reference concepts. In\naddition, MRSA leverages our key finding that input concepts are better\npreserved when providing images with context interactions. Experiments show\nthat our method's produced images are consistent with the given concepts and\nbetter aligned with the input text. Our method outperforms or performs on par\nwith other training-based methods in terms of multi-concept composition and\nsingle-concept customization, but is simpler. 
Codes can be found at\nhttps://github.com/aim-uofa/FreeCustom.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Ganggui Ding", "Canyu Zhao", "Wen Wang", "Zhen Yang", "Zide Liu", "Hao Chen", "Chunhua Shen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f323"}, "filepath": "data/2401.06395.png", "tags": [], "_media_type": "image", "_rand": 0.999274082176503, "arXiv_link": "https://arxiv.org/abs/2401.06395", "other_link": "", "title": "ModaVerse: Efficiently Transforming Modalities with LLMs", "abstract": "Humans possess the capability to comprehend diverse modalities and seamlessly\ntransfer information between them. In this work, we introduce ModaVerse, a\nMulti-modal Large Language Model (MLLM) capable of comprehending and\ntransforming content across various modalities including images, videos, and\naudio. Predominant MLLM frameworks have largely relied on the alignment of\nlatent spaces of textual and non-textual features. This alignment process,\nwhich synchronizes a language model trained on textual data with encoders and\ndecoders trained on multi-modal data, often necessitates extensive training of\nseveral projection layers in multiple stages. Inspired by LLM-as-agent\nmethodologies, we propose a novel Input/Output (I/O) alignment mechanism that\noperates directly at the level of natural language. It aligns the LLM's output\nwith the input of generative models, avoiding the complexities associated with\nlatent feature alignments, and simplifying the multiple training stages of\nexisting MLLMs into a single, efficient process. This conceptual advancement\nleads to significant reductions in both data and computational costs. By\nconducting experiments on several benchmarks, we demonstrate that our approach\nattains comparable performance with the state of the art while achieving\nconsiderable efficiencies in data usage and training duration.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Xinyu Wang", "Bohan Zhuang", "Qi Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f324"}, "filepath": "data/2311.03524.png", "tags": [], "_media_type": "image", "_rand": 0.9990108653781637, "arXiv_link": "https://arxiv.org/abs/2311.03524", "other_link": "", "title": "Targeted Representation Alignment for Open-World Semi-Supervised Learning", "abstract": "Open-world semi-supervised learning aims at inferring both known and novel\nclasses in unlabeled data, by harnessing prior knowledge from a labeled set\nwith known classes. Despite its importance, there is a lack of theoretical\nfoundations for this problem. This paper bridges the gap by formalizing a\ngraph-theoretic framework tailored for the open-world setting, where the\nclustering can be theoretically characterized by graph factorization. Our\ngraph-theoretic framework illuminates practical algorithms and provides\nguarantees. In particular, based on our graph formulation, we apply the\nalgorithm called Spectral Open-world Representation Learning (SORL), and show\nthat minimizing our loss is equivalent to performing spectral decomposition on\nthe graph. 
Such equivalence allows us to derive a provable error bound on the\nclustering performance for both known and novel classes, and analyze rigorously\nwhen labeled data helps. Empirically, SORL can match or outperform several\nstrong baselines on common benchmark datasets, which is appealing for practical\nusage while enjoying theoretical guarantees.", "keywords": [], "authors_list": ["Ruixuan Xiao", "Lei Feng", "Kai Tang", "Junbo Zhao", "Yixuan Li", "Gang Chen", "Haobo Wang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f325"}, "filepath": "data/2310.10700.png", "tags": [], "_media_type": "image", "_rand": 0.9995945379185313, "arXiv_link": "https://arxiv.org/abs/2310.10700", "other_link": "", "title": "PELA: Learning Parameter-Efficient Models with Low-Rank Approximation", "abstract": "Applying a pre-trained large model to downstream tasks is prohibitive under\nresource-constrained conditions. Recent dominant approaches for addressing\nefficiency issues involve adding a few learnable parameters to the fixed\nbackbone model. This strategy, however, leads to more challenges in loading\nlarge models for downstream fine-tuning with limited resources. In this paper,\nwe propose a novel method for increasing the parameter efficiency of\npre-trained models by introducing an intermediate pre-training stage. To this\nend, we first employ low-rank approximation to compress the original large\nmodel and then devise a feature distillation module and a weight perturbation\nregularization module. These modules are specifically designed to enhance the\nlow-rank model. In particular, we update only the low-rank model while freezing\nthe backbone parameters during pre-training. This allows for direct and\nefficient utilization of the low-rank model for downstream fine-tuning tasks.\nThe proposed method achieves both efficiencies in terms of required parameters\nand computation time while maintaining comparable results with minimal\nmodifications to the backbone architecture. Specifically, when applied to three\nvision-only and one vision-language Transformer models, our approach often\ndemonstrates a merely $\\sim$0.6 point decrease in performance while reducing\nthe original parameter size by 1/3 to 2/3.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yangyang Guo", "Guangzhi Wang", "Mohan Kankanhalli"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f326"}, "filepath": "data/2404.07713.png", "tags": [], "_media_type": "image", "_rand": 0.9991150208075985, "arXiv_link": "https://arxiv.org/abs/2404.07713", "other_link": "", "title": "Progressive Semantic-Guided Vision Transformer for Zero-Shot Learning", "abstract": "Zero-shot learning (ZSL) recognizes the unseen classes by conducting\nvisual-semantic interactions to transfer semantic knowledge from seen classes\nto unseen ones, supported by semantic information (e.g., attributes). However,\nexisting ZSL methods simply extract visual features using a pre-trained network\nbackbone (i.e., CNN or ViT), which fail to learn matched visual-semantic\ncorrespondences for representing semantic-related visual features as lacking of\nthe guidance of semantic information, resulting in undesirable visual-semantic\ninteractions. 
To tackle this issue, we propose a progressive semantic-guided\nvision transformer for zero-shot learning (dubbed ZSLViT). ZSLViT mainly\nconsiders two properties in the whole network: i) discover the semantic-related\nvisual representations explicitly, and ii) discard the semantic-unrelated\nvisual information. Specifically, we first introduce semantic-embedded token\nlearning to improve the visual-semantic correspondences via semantic\nenhancement and discover the semantic-related visual tokens explicitly with\nsemantic-guided token attention. Then, we fuse low semantic-visual\ncorrespondence visual tokens to discard the semantic-unrelated visual\ninformation for visual enhancement. These two operations are integrated into\nvarious encoders to progressively learn semantic-related visual representations\nfor accurate visual-semantic interactions in ZSL. The extensive experiments\nshow that our ZSLViT achieves significant performance gains on three popular\nbenchmark datasets, i.e., CUB, SUN, and AWA2.", "keywords": [], "authors_list": ["Shiming Chen", "Wenjin Hou", "Salman Khan", "Fahad Shahbaz Khan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f327"}, "filepath": "data/2403.01968v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998653843030659, "arXiv_link": "https://arxiv.org/html/2403.01968v1", "other_link": "", "title": "Endow SAM with Keen Eyes: Temporal-spatial Prompt Learning for Video Camouflaged Object Detection", "abstract": "Camouflage poses challenges in distinguishing a static target, whereas any\nmovement of the target can break this disguise. Existing video camouflaged\nobject detection (VCOD) approaches take noisy motion estimation as input or\nmodel motion implicitly, restricting detection performance in complex dynamic\nscenes. In this paper, we propose a novel Explicit Motion handling and\nInteractive Prompting framework for VCOD, dubbed EMIP, which handles motion\ncues explicitly using a frozen pre-trained optical flow fundamental model. EMIP\nis characterized by a two-stream architecture for simultaneously conducting\ncamouflaged segmentation and optical flow estimation. Interactions across the\ndual streams are realized in an interactive prompting way that is inspired by\nemerging visual prompt learning. Two learnable modules, i.e. the camouflaged\nfeeder and motion collector, are designed to incorporate segmentation-to-motion\nand motion-to-segmentation prompts, respectively, and enhance outputs of the\nboth streams. The prompt fed to the motion stream is learned by supervising\noptical flow in a self-supervised manner. Furthermore, we show that long-term\nhistorical information can also be incorporated as a prompt into EMIP and\nachieve more robust results with temporal consistency. Experimental results\ndemonstrate that our EMIP achieves new state-of-the-art records on popular VCOD\nbenchmarks. 
The code will be publicly available.", "keywords": ["Scene analysis and understanding", "Large multimodal models and prompting techniques"], "authors_list": ["Wenjun Hui", "Zhenfeng Zhu", "Shuai Zheng", "Yao Zhao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f328"}, "filepath": "data/2312.05278v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992088091242233, "arXiv_link": "https://arxiv.org/html/2312.05278v2", "other_link": "", "title": "Linguistic-Aware Patch Slimming Framework for Fine-grained Cross-Modal Alignment", "abstract": "Large Vision Language Models (LVLMs) have demonstrated impressive zero-shot\ncapabilities in various vision-language dialogue scenarios. However, the\nabsence of fine-grained visual object detection hinders the model from\nunderstanding the details of images, leading to irreparable visual\nhallucinations and factual errors. In this paper, we propose Lyrics, a novel\nmulti-modal pre-training and instruction fine-tuning paradigm that bootstraps\nvision-language alignment from fine-grained cross-modal collaboration. Building\non the foundation of BLIP-2, Lyrics infuses local visual features extracted\nfrom a visual refiner that includes image tagging, object detection and\nsemantic segmentation modules into the Querying Transformer, while on the text\nside, the language inputs equip the boundary boxes and tags derived from the\nvisual refiner. We further introduce a two-stage training scheme, in which the\npre-training stage bridges the modality gap through explicit and comprehensive\nvision-language alignment targets. During the instruction fine-tuning stage, we\nintroduce semantic-aware visual feature extraction, a crucial method that\nenables the model to extract informative features from concrete visual objects.\nOur approach achieves robust performance on 13 datasets across various\nvision-language tasks, and demonstrates promising multi-modal understanding,\nperception and conversation capabilities in 11 scenario-based benchmark\ntoolkits.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Zheren Fu", "Lei Zhang", "Hou Xia", "Zhendong Mao"], "category_name": "Computation and Language", "all_categories": ["Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f329"}, "filepath": "data/2401.00929.png", "tags": [], "_media_type": "image", "_rand": 0.9999985134717863, "arXiv_link": "https://arxiv.org/abs/2401.00929", "other_link": "https://GenH2R.github.io/.", "title": "GenH2R: Learning Generalizable Human-to-Robot Handover via Scalable Simulation, Demonstration, and Imitation", "abstract": "This paper presents GenH2R, a framework for learning generalizable\nvision-based human-to-robot (H2R) handover skills. The goal is to equip robots\nwith the ability to reliably receive objects with unseen geometry handed over\nby humans in various complex trajectories. We acquire such generalizability by\nlearning H2R handover at scale with a comprehensive solution including\nprocedural simulation assets creation, automated demonstration generation, and\neffective imitation learning. 
We leverage large-scale 3D model repositories,\ndexterous grasp generation methods, and curve-based 3D animation to create an\nH2R handover simulation environment named \\simabbns, surpassing the number of\nscenes in existing simulators by three orders of magnitude. We further\nintroduce a distillation-friendly demonstration generation method that\nautomatically generates a million high-quality demonstrations suitable for\nlearning. Finally, we present a 4D imitation learning method augmented by a\nfuture forecasting objective to distill demonstrations into a visuo-motor\nhandover policy. Experimental evaluations in both simulators and the real world\ndemonstrate significant improvements (at least +10\\% success rate) over\nbaselines in all cases. The project page is https://GenH2R.github.io/.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Zifan Wang", "Junyu Chen", "Ziqing Chen", "Pengwei Xie", "Rui Chen", "Li Yi"], "category_name": "Robotics", "all_categories": ["Robotics", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f32a"}, "filepath": "data/2403.12011.png", "tags": [], "_media_type": "image", "_rand": 0.9991852906600486, "arXiv_link": "https://arxiv.org/abs/2403.12011", "other_link": "https://mq-zhang1.github.io/HOIDiffusion", "title": "HOIDiffusion: Generating Realistic 3D Hand-Object Interaction Data", "abstract": "3D hand-object interaction data is scarce due to the hardware constraints in\nscaling up the data collection process. In this paper, we propose HOIDiffusion\nfor generating realistic and diverse 3D hand-object interaction data. Our model\nis a conditional diffusion model that takes both the 3D hand-object geometric\nstructure and text description as inputs for image synthesis. This offers a\nmore controllable and realistic synthesis as we can specify the structure and\nstyle inputs in a disentangled manner. HOIDiffusion is trained by leveraging a\ndiffusion model pre-trained on large-scale natural images and a few 3D human\ndemonstrations. Beyond controllable image synthesis, we adopt the generated 3D\ndata for learning 6D object pose estimation and show its effectiveness in\nimproving perception systems. Project page:\nhttps://mq-zhang1.github.io/HOIDiffusion", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Mengqi Zhang", "Yang Fu", "Zheng Ding", "Sifei Liu", "Zhuowen Tu", "Xiaolong Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f32b"}, "filepath": "data/2401.14349.png", "tags": [], "_media_type": "image", "_rand": 0.9992625704900994, "arXiv_link": "https://arxiv.org/abs/2401.14349", "other_link": "", "title": "Learning to navigate efficiently and precisely in real environments", "abstract": "In the context of autonomous navigation of terrestrial robots, the creation\nof realistic models for agent dynamics and sensing is a widespread habit in the\nrobotics literature and in commercial applications, where they are used for\nmodel based control and/or for localization and mapping. 
The more recent\nEmbodied AI literature, on the other hand, focuses on modular or end-to-end\nagents trained in simulators like Habitat or AI-Thor, where the emphasis is put\non photo-realistic rendering and scene diversity, but high-fidelity robot\nmotion is assigned a less privileged role. The resulting sim2real gap\nsignificantly impacts transfer of the trained models to real robotic platforms.\nIn this work we explore end-to-end training of agents in simulation in settings\nwhich minimize the sim2real gap both, in sensing and in actuation. Our agent\ndirectly predicts (discretized) velocity commands, which are maintained through\nclosed-loop control in the real robot. The behavior of the real robot\n(including the underlying low-level controller) is identified and simulated in\na modified Habitat simulator. Noise models for odometry and localization\nfurther contribute in lowering the sim2real gap. We evaluate on real navigation\nscenarios, explore different localization and point goal calculation methods\nand report significant gains in performance and robustness compared to prior\nwork.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Guillaume Bono", "Herv\u00e9 Poirier", "Leonid Antsfeld", "Gianluca Monaci", "Boris Chidlovskii", "Christian Wolf"], "category_name": "Robotics", "all_categories": ["Robotics", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f32c"}, "filepath": "data/2403.15009v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993147489162953, "arXiv_link": "https://arxiv.org/html/2403.15009v1", "other_link": "", "title": "TexOct: Generating Textures of 3D Models with Octree-based Diffusion", "abstract": "This paper presents TexRO, a novel method for generating delicate textures of\na known 3D mesh by optimizing its UV texture. The key contributions are\ntwo-fold. We propose an optimal viewpoint selection strategy, that finds the\nmost miniature set of viewpoints covering all the faces of a mesh. Our\nviewpoint selection strategy guarantees the completeness of a generated result.\nWe propose a recursive optimization pipeline that optimizes a UV texture at\nincreasing resolutions, with an adaptive denoising method that re-uses existing\ntextures for new texture generation. Through extensive experimentation, we\ndemonstrate the superior performance of TexRO in terms of texture quality,\ndetail preservation, visual consistency, and, notably runtime speed,\noutperforming other current methods. 
The broad applicability of TexRO is\nfurther confirmed through its successful use on diverse 3D models.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Jialun Liu", "Chenming Wu", "Xinqi Liu", "Xing Liu", "Jinbo Wu", "Haotian Peng", "Chen Zhao", "Haocheng Feng", "Jingtuo Liu", "Errui Ding"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f32d"}, "filepath": "data/2311.17315.png", "tags": [], "_media_type": "image", "_rand": 0.9998615619488159, "arXiv_link": "https://arxiv.org/abs/2311.17315", "other_link": "", "title": "Explaining CLIP's performance disparities on data from blind/low vision users", "abstract": "Large multi-modal models (LMMs) hold the potential to usher in a new era of\nautomated visual assistance for people who are blind or low vision (BLV). Yet,\nthese models have not been systematically evaluated on data captured by BLV\nusers. We address this by empirically assessing CLIP, a widely-used LMM likely\nto underpin many assistive technologies. Testing 25 CLIP variants in a\nzero-shot classification task, we find that their accuracy is 15 percentage\npoints lower on average for images captured by BLV users than web-crawled\nimages. This disparity stems from CLIP's sensitivities to 1) image content\n(e.g. not recognizing disability objects as well as other objects); 2) image\nquality (e.g. not being robust to lighting variation); and 3) text content\n(e.g. not recognizing objects described by tactile adjectives as well as visual\nones). We delve deeper with a textual analysis of three common pre-training\ndatasets: LAION-400M, LAION-2B and DataComp-1B, showing that disability content\nis rarely mentioned. We then provide three examples that illustrate how the\nperformance disparities extend to three downstream models underpinned by CLIP:\nOWL-ViT, CLIPSeg and DALL-E2. We find that few-shot learning with as few as 5\nimages can mitigate CLIP's quality-of-service disparities for BLV users in some\nscenarios, which we discuss alongside a set of other possible mitigations.", "keywords": ["Vision applications for social good and ethics"], "authors_list": ["Daniela Massiceti", "Camilla Longden", "Agnieszka S\u0142owik", "Samuel Wills", "Martin Grayson", "Cecily Morrison"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f32e"}, "filepath": "data/2404.01260.png", "tags": [], "_media_type": "image", "_rand": 0.9995674908667503, "arXiv_link": "https://arxiv.org/abs/2404.01260", "other_link": "", "title": "Bridging Remote Sensors with Multisensor Geospatial Foundation Models", "abstract": "In the realm of geospatial analysis, the diversity of remote sensors,\nencompassing both optical and microwave technologies, offers a wealth of\ndistinct observational capabilities. Recognizing this, we present msGFM, a\nmultisensor geospatial foundation model that effectively unifies data from four\nkey sensor modalities. This integration spans an expansive dataset of two\nmillion multisensor images. msGFM is uniquely adept at handling both paired and\nunpaired sensor data. 
For data originating from identical geolocations, our\nmodel employs an innovative cross-sensor pretraining approach in masked image\nmodeling, enabling the synthesis of joint representations from diverse sensors.\nmsGFM, incorporating four remote sensors, upholds strong performance, forming a\ncomprehensive model adaptable to various sensor types. msGFM has demonstrated\nenhanced proficiency in a range of both single-sensor and multisensor\ndownstream tasks. These include scene classification, segmentation, cloud\nremoval, and pan-sharpening. A key discovery of our research is that\nrepresentations derived from natural images are not always compatible with the\ndistinct characteristics of geospatial remote sensors, underscoring the\nlimitations of existing representations in this field. Our work can serve as a\nguide for developing multisensor geospatial pretraining models, paving the way\nfor more advanced geospatial capabilities.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Boran Han", "Shuai Zhang", "Xingjian Shi", "Markus Reichstein"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f32f"}, "filepath": "data/2403.04272v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996132287699127, "arXiv_link": "https://arxiv.org/abs/2403.04272v1", "other_link": "https://github.com/mashijie1028/ActiveGCD", "title": "Active Generalized Category Discovery", "abstract": "Generalized Category Discovery (GCD) is a pragmatic and challenging\nopen-world task, which endeavors to cluster unlabeled samples from both novel\nand old classes, leveraging some labeled data of old classes. Given that\nknowledge learned from old classes is not fully transferable to new classes,\nand that novel categories are fully unlabeled, GCD inherently faces intractable\nproblems, including imbalanced classification performance and inconsistent\nconfidence between old and new classes, especially in the low-labeling regime.\nHence, some annotations of new classes are deemed necessary. However, labeling\nnew classes is extremely costly. To address this issue, we take the spirit of\nactive learning and propose a new setting called Active Generalized Category\nDiscovery (AGCD). The goal is to improve the performance of GCD by actively\nselecting a limited amount of valuable samples for labeling from the oracle. To\nsolve this problem, we devise an adaptive sampling strategy, which jointly\nconsiders novelty, informativeness and diversity to adaptively select novel\nsamples with proper uncertainty. However, owing to the varied orderings of\nlabel indices caused by the clustering of novel classes, the queried labels are\nnot directly applicable to subsequent training. To overcome this issue, we\nfurther propose a stable label mapping algorithm that transforms ground truth\nlabels to the label space of the classifier, thereby ensuring consistent\ntraining across different active selection stages. Our method achieves\nstate-of-the-art performance on both generic and fine-grained datasets. 
Our\ncode is available at https://github.com/mashijie1028/ActiveGCD", "keywords": [], "authors_list": ["Shijie Ma", "Fei Zhu", "Zhun Zhong", "Xu-Yao Zhang", "Cheng-Lin Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f330"}, "filepath": "data/2312.02284.png", "tags": [], "_media_type": "image", "_rand": 0.9994166038416661, "arXiv_link": "https://arxiv.org/abs/2312.02284", "other_link": "", "title": "PatchFusion: An End-to-End Tile-Based Framework for High-Resolution Monocular Metric Depth Estimation", "abstract": "Single image depth estimation is a foundational task in computer vision and\ngenerative modeling. However, prevailing depth estimation models grapple with\naccommodating the increasing resolutions commonplace in today's consumer\ncameras and devices. Existing high-resolution strategies show promise, but they\noften face limitations, ranging from error propagation to the loss of\nhigh-frequency details. We present PatchFusion, a novel tile-based framework\nwith three key components to improve the current state of the art: (1) A\npatch-wise fusion network that fuses a globally-consistent coarse prediction\nwith finer, inconsistent tiled predictions via high-level feature guidance, (2)\nA Global-to-Local (G2L) module that adds vital context to the fusion network,\ndiscarding the need for patch selection heuristics, and (3) A Consistency-Aware\nTraining (CAT) and Inference (CAI) approach, emphasizing patch overlap\nconsistency and thereby eradicating the necessity for post-processing.\nExperiments on UnrealStereo4K, MVS-Synth, and Middleburry 2014 demonstrate that\nour framework can generate high-resolution depth maps with intricate details.\nPatchFusion is independent of the base model for depth estimation. Notably, our\nframework built on top of SOTA ZoeDepth brings improvements for a total of\n17.3% and 29.4% in terms of the root mean squared error (RMSE) on\nUnrealStereo4K and MVS-Synth, respectively.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Zhenyu Li", "Shariq Bhat", "Peter Wonka"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f331"}, "filepath": "data/2312.07063.png", "tags": [], "_media_type": "image", "_rand": 0.9994894052919595, "arXiv_link": "https://arxiv.org/abs/2312.07063", "other_link": "", "title": "Template Free Reconstruction of Human-object Interaction with Procedural Interaction Generation", "abstract": "Reconstructing human-object interaction in 3D from a single RGB image is a\nchallenging task and existing data driven methods do not generalize beyond the\nobjects present in the carefully curated 3D interaction datasets. Capturing\nlarge-scale real data to learn strong interaction and 3D shape priors is very\nexpensive due to the combinatorial nature of human-object interactions. In this\npaper, we propose ProciGen (Procedural interaction Generation), a method to\nprocedurally generate datasets with both, plausible interaction and diverse\nobject variation. 
We generate 1M+ human-object interaction pairs in 3D and\nleverage this large-scale data to train our HDM (Hierarchical Diffusion Model),\na novel method to reconstruct interacting human and unseen objects, without any\ntemplates. Our HDM is an image-conditioned diffusion model that learns both\nrealistic interaction and highly accurate human and object shapes. Experiments\nshow that our HDM trained with ProciGen significantly outperforms prior methods\nthat requires template meshes and that our dataset allows training methods with\nstrong generalization ability to unseen object instances. Our code and data are\nreleased.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xianghui Xie", "Bharat Lal Bhatnagar", "Jan Lenssen", "Gerard Pons-Moll"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f332"}, "filepath": "data/2404.07448.png", "tags": [], "_media_type": "image", "_rand": 0.999659110181327, "arXiv_link": "https://arxiv.org/abs/2404.07448", "other_link": "https://github.com/Xujxyang/OpenTrans", "title": "Transferable and Principled Efficiency for Open-Vocabulary Segmentation", "abstract": "Recent success of pre-trained foundation vision-language models makes\nOpen-Vocabulary Segmentation (OVS) possible. Despite the promising performance,\nthis approach introduces heavy computational overheads for two challenges: 1)\nlarge model sizes of the backbone; 2) expensive costs during the fine-tuning.\nThese challenges hinder this OVS strategy from being widely applicable and\naffordable in real-world scenarios. Although traditional methods such as model\ncompression and efficient fine-tuning can address these challenges, they often\nrely on heuristics. This means that their solutions cannot be easily\ntransferred and necessitate re-training on different models, which comes at a\ncost. In the context of efficient OVS, we target achieving performance that is\ncomparable to or even better than prior OVS works based on large\nvision-language foundation models, by utilizing smaller models that incur lower\ntraining costs. The core strategy is to make our efficiency principled and thus\nseamlessly transferable from one OVS framework to others without further\ncustomization. Comprehensive experiments on diverse OVS benchmarks demonstrate\nour superior trade-off between segmentation accuracy and computation costs over\nprevious works. 
Our code is available on https://github.com/Xujxyang/OpenTrans", "keywords": ["Efficient and scalable vision"], "authors_list": ["Jingxuan Xu", "Wuyang Chen", "Yao Zhao", "Yunchao Wei"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f333"}, "filepath": "data/2403.02601.png", "tags": [], "_media_type": "image", "_rand": 0.9990357675510078, "arXiv_link": "https://arxiv.org/abs/2403.02601", "other_link": "", "title": "Low-Res Leads the Way: Improving Generalization for Super-Resolution by Self-Supervised Learning", "abstract": "For image super-resolution (SR), bridging the gap between the performance on\nsynthetic datasets and real-world degradation scenarios remains a challenge.\nThis work introduces a novel \"Low-Res Leads the Way\" (LWay) training framework,\nmerging Supervised Pre-training with Self-supervised Learning to enhance the\nadaptability of SR models to real-world images. Our approach utilizes a\nlow-resolution (LR) reconstruction network to extract degradation embeddings\nfrom LR images, merging them with super-resolved outputs for LR reconstruction.\nLeveraging unseen LR images for self-supervised learning guides the model to\nadapt its modeling space to the target domain, facilitating fine-tuning of SR\nmodels without requiring paired high-resolution (HR) images. The integration of\nDiscrete Wavelet Transform (DWT) further refines the focus on high-frequency\ndetails. Extensive evaluations show that our method significantly improves the\ngeneralization and detail restoration capabilities of SR models on unseen\nreal-world datasets, outperforming existing methods. Our training regime is\nuniversally compatible, requiring no network architecture modifications, making\nit a practical solution for real-world SR applications.", "keywords": ["Low-level vision"], "authors_list": ["Haoyu Chen", "Wenbo Li", "Jinjin Gu", "Jingjing Ren", "Haoze Sun", "Xueyi Zou", "Youliang Yan", "Zhensong Zhang", "Lei Zhu"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f334"}, "filepath": "data/2404.03183.png", "tags": [], "_media_type": "image", "_rand": 0.9993738833227832, "arXiv_link": "https://arxiv.org/abs/2404.03183", "other_link": "", "title": "BodyMAP - Jointly Predicting Body Mesh and 3D Applied Pressure Map for People in Bed", "abstract": "Accurately predicting the 3D human posture and the pressure exerted on the\nbody for people resting in bed, visualized as a body mesh (3D pose & shape)\nwith a 3D pressure map, holds significant promise for healthcare applications,\nparticularly, in the prevention of pressure ulcers. Current methods focus on\nsingular facets of the problem -- predicting only 2D/3D poses, generating 2D\npressure images, predicting pressure only for certain body regions instead of\nthe full body, or forming indirect approximations to the 3D pressure map. In\ncontrast, we introduce BodyMAP, which jointly predicts the human body mesh and\n3D applied pressure map across the entire human body. 
Our network leverages\nmultiple visual modalities, incorporating both a depth image of a person in bed\nand its corresponding 2D pressure image acquired from a pressure-sensing\nmattress. The 3D pressure map is represented as a pressure value at each mesh\nvertex and thus allows for precise localization of high-pressure regions on the\nbody. Additionally, we present BodyMAP-WS, a new formulation of pressure\nprediction in which we implicitly learn pressure in 3D by aligning sensed 2D\npressure images with a differentiable 2D projection of the predicted 3D\npressure maps. In evaluations with real-world human data, our method\noutperforms the current state-of-the-art technique by 25% on both body mesh and\n3D applied pressure map prediction tasks for people in bed.", "keywords": ["Deep learning architectures and techniques", "Medical imaging and biological vision"], "authors_list": ["Abhishek Tandon", "Anujraaj Goyal", "Henry M. Clever", "Zackory Erickson"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f335"}, "filepath": "data/2404.02227.png", "tags": [], "_media_type": "image", "_rand": 0.9990670532973797, "arXiv_link": "https://arxiv.org/abs/2404.02227", "other_link": "https://github.com/Hai-chao-Zhang/OOSTraj}.", "title": "OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising", "abstract": "Trajectory prediction is fundamental in computer vision and autonomous\ndriving, particularly for understanding pedestrian behavior and enabling\nproactive decision-making. Existing approaches in this field often assume\nprecise and complete observational data, neglecting the challenges associated\nwith out-of-view objects and the noise inherent in sensor data due to limited\ncamera range, physical obstructions, and the absence of ground truth for\ndenoised sensor data. Such oversights are critical safety concerns, as they can\nresult in missing essential, non-visible objects. To bridge this gap, we\npresent a novel method for out-of-sight trajectory prediction that leverages a\nvision-positioning technique. Our approach denoises noisy sensor observations\nin an unsupervised manner and precisely maps sensor-based trajectories of\nout-of-sight objects into visual trajectories. This method has demonstrated\nstate-of-the-art performance in out-of-sight noisy sensor trajectory denoising\nand prediction on the Vi-Fi and JRDB datasets. By enhancing trajectory\nprediction accuracy and addressing the challenges of out-of-sight objects, our\nwork significantly contributes to improving the safety and reliability of\nautonomous driving in complex environments. Our work represents the first\ninitiative towards Out-Of-Sight Trajectory prediction (OOSTraj), setting a new\nbenchmark for future research. 
The code is available at\n\\url{https://github.com/Hai-chao-Zhang/OOSTraj}.", "keywords": [], "authors_list": ["Haichao Zhang", "Yi Xu", "Hongsheng Lu", "Takayuki Shimizu", "Yun Fu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f336"}, "filepath": "data/2404.16452.png", "tags": [], "_media_type": "image", "_rand": 0.9998361182459957, "arXiv_link": "https://arxiv.org/abs/2404.16452", "other_link": "https://github.com/Lihua-Jing/PAD.", "title": "PAD: Patch-Agnostic Defense against Adversarial Patch Attacks", "abstract": "Adversarial patch attacks present a significant threat to real-world object\ndetectors due to their practical feasibility. Existing defense methods, which\nrely on attack data or prior knowledge, struggle to effectively address a wide\nrange of adversarial patches. In this paper, we show two inherent\ncharacteristics of adversarial patches, semantic independence and spatial\nheterogeneity, independent of their appearance, shape, size, quantity, and\nlocation. Semantic independence indicates that adversarial patches operate\nautonomously within their semantic context, while spatial heterogeneity\nmanifests as distinct image quality of the patch area that differs from\noriginal clean image due to the independent generation process. Based on these\nobservations, we propose PAD, a novel adversarial patch localization and\nremoval method that does not require prior knowledge or additional training.\nPAD offers patch-agnostic defense against various adversarial patches,\ncompatible with any pre-trained object detectors. Our comprehensive digital and\nphysical experiments involving diverse patch types, such as localized noise,\nprintable, and naturalistic patches, exhibit notable improvements over\nstate-of-the-art works. Our code is available at\nhttps://github.com/Lihua-Jing/PAD.", "keywords": [], "authors_list": ["Lihua Jing", "Rui Wang", "Wenqi Ren", "Xin Dong", "Cong Zou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f337"}, "filepath": "data/2404.15707.png", "tags": [], "_media_type": "image", "_rand": 0.9996512068618806, "arXiv_link": "https://arxiv.org/abs/2404.15707", "other_link": "", "title": "ESR-NeRF: Emissive Source Reconstruction Using LDR Multi-view Images", "abstract": "Existing NeRF-based inverse rendering methods suppose that scenes are\nexclusively illuminated by distant light sources, neglecting the potential\ninfluence of emissive sources within a scene. In this work, we confront this\nlimitation using LDR multi-view images captured with emissive sources turned on\nand off. Two key issues must be addressed: 1) ambiguity arising from the\nlimited dynamic range along with unknown lighting details, and 2) the expensive\ncomputational cost in volume rendering to backtrace the paths leading to final\nobject colors. We present a novel approach, ESR-NeRF, leveraging neural\nnetworks as learnable functions to represent ray-traced fields. By training\nnetworks to satisfy light transport segments, we regulate outgoing radiances,\nprogressively identifying emissive sources while being aware of reflection\nareas. 
The results on scenes encompassing emissive sources with various\nproperties demonstrate the superiority of ESR-NeRF in qualitative and\nquantitative ways. Our approach also extends its applicability to the scenes\ndevoid of emissive sources, achieving lower CD metrics on the DTU dataset.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Jinseo Jeong", "Junseo Koo", "Qimeng Zhang", "Gunhee Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f338"}, "filepath": "data/2305.00163v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992891941332779, "arXiv_link": "https://arxiv.org/html/2305.00163v2", "other_link": "", "title": "Enhancing Video Super-Resolution via Implicit Resampling-based Alignment", "abstract": "In video super-resolution, it is common to use a frame-wise alignment to\nsupport the propagation of information over time. The role of alignment is\nwell-studied for low-level enhancement in video, but existing works overlook a\ncritical step -- resampling. We show through extensive experiments that for\nalignment to be effective, the resampling should preserve the reference\nfrequency spectrum while minimizing spatial distortions. However, most existing\nworks simply use a default choice of bilinear interpolation for resampling even\nthough bilinear interpolation has a smoothing effect and hinders\nsuper-resolution. From these observations, we propose an implicit\nresampling-based alignment. The sampling positions are encoded by a sinusoidal\npositional encoding, while the value is estimated with a coordinate network and\na window-based cross-attention. We show that bilinear interpolation inherently\nattenuates high-frequency information while an MLP-based coordinate network can\napproximate more frequencies. Experiments on synthetic and real-world datasets\nshow that alignment with our proposed implicit resampling enhances the\nperformance of state-of-the-art frameworks with minimal impact on both compute\nand parameters.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Kai Xu", "Ziwei Yu", "Xin Wang", "Michael Bi Mi", "Angela Yao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f339"}, "filepath": "data/2405.18810.png", "tags": [], "_media_type": "image", "_rand": 0.9998111587377042, "arXiv_link": "https://arxiv.org/abs/2405.18810", "other_link": "https://github.com/xjjxmu/UniPTS.", "title": "UniPTS: A Unified Framework for Proficient Post-Training Sparsity", "abstract": "Post-training Sparsity (PTS) is a recently emerged avenue that chases\nefficient network sparsity with limited data in need. Existing PTS methods,\nhowever, undergo significant performance degradation compared with traditional\nmethods that retrain the sparse networks via the whole dataset, especially at\nhigh sparsity ratios. In this paper, we attempt to reconcile this disparity by\ntransposing three cardinal factors that profoundly alter the performance of\nconventional sparsity into the context of PTS. 
Our endeavors particularly\ncomprise (1) A base-decayed sparsity objective that promotes efficient\nknowledge transferring from dense network to the sparse counterpart. (2) A\nreducing-regrowing search algorithm designed to ascertain the optimal sparsity\ndistribution while circumventing overfitting to the small calibration set in\nPTS. (3) The employment of dynamic sparse training predicated on the preceding\naspects, aimed at comprehensively optimizing the sparsity structure while\nensuring training stability. Our proposed framework, termed UniPTS, is\nvalidated to be much superior to existing PTS methods across extensive\nbenchmarks. As an illustration, it amplifies the performance of POT, a recently\nproposed recipe, from 3.9% to 68.6% when pruning ResNet-50 at 90% sparsity\nratio on ImageNet. We release the code of our paper at\nhttps://github.com/xjjxmu/UniPTS.", "keywords": ["Efficient and scalable vision"], "authors_list": ["JingJing Xie", "Yuxin Zhang", "Mingbao Lin", "ZhiHang Lin", "Liujuan Cao", "Rongrong Ji"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f33a"}, "filepath": "data/2311.17516.png", "tags": [], "_media_type": "image", "_rand": 0.9994250687323664, "arXiv_link": "https://arxiv.org/abs/2311.17516", "other_link": "", "title": "MMA-Diffusion: MultiModal Attack on Diffusion Models", "abstract": "In recent years, Text-to-Image (T2I) models have seen remarkable\nadvancements, gaining widespread adoption. However, this progress has\ninadvertently opened avenues for potential misuse, particularly in generating\ninappropriate or Not-Safe-For-Work (NSFW) content. Our work introduces\nMMA-Diffusion, a framework that presents a significant and realistic threat to\nthe security of T2I models by effectively circumventing current defensive\nmeasures in both open-source models and commercial online services. Unlike\nprevious approaches, MMA-Diffusion leverages both textual and visual modalities\nto bypass safeguards like prompt filters and post-hoc safety checkers, thus\nexposing and highlighting the vulnerabilities in existing defense mechanisms.", "keywords": ["Multimodal models and vision-language models", "Image and video generation and manipulation"], "authors_list": ["Yijun Yang", "Ruiyuan Gao", "Xiaosen Wang", "Tsung-Yi Ho", "Xu Nan", "Qiang Xu"], "category_name": "Cryptography and Security", "all_categories": ["Cryptography and Security", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f33b"}, "filepath": "data/2312.09181.png", "tags": [], "_media_type": "image", "_rand": 0.9995741691837184, "arXiv_link": "https://arxiv.org/abs/2312.09181", "other_link": "", "title": "Improving Training Efficiency of Diffusion Models via Multi-Stage Framework and Tailored Multi-Decoder Architectures", "abstract": "Diffusion models, emerging as powerful deep generative tools, excel in\nvarious applications. They operate through a two-steps process: introducing\nnoise into training samples and then employing a model to convert random noise\ninto new samples (e.g., images). However, their remarkable generative\nperformance is hindered by slow training and sampling. 
This is due to the\nnecessity of tracking extensive forward and reverse diffusion trajectories, and\nemploying a large model with numerous parameters across multiple timesteps\n(i.e., noise levels). To tackle these challenges, we present a multi-stage\nframework inspired by our empirical findings. These observations indicate the\nadvantages of employing distinct parameters tailored to each timestep while\nretaining universal parameters shared across all time steps. Our approach\ninvolves segmenting the time interval into multiple stages where we employ\ncustom multi-decoder U-net architecture that blends time-dependent models with\na universally shared encoder. Our framework enables the efficient distribution\nof computational resources and mitigates inter-stage interference, which\nsubstantially improves training efficiency. Extensive numerical experiments\naffirm the effectiveness of our framework, showcasing significant training and\nsampling efficiency enhancements on three state-of-the-art diffusion models,\nincluding large-scale latent diffusion models. Furthermore, our ablation\nstudies illustrate the impact of two important components in our framework: (i)\na novel timestep clustering algorithm for stage division, and (ii) an\ninnovative multi-decoder U-net architecture, seamlessly integrating universal\nand customized hyperparameters.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Huijie Zhang", "Yifu Lu", "Ismail Alkhouri", "Saiprasad Ravishankar", "Dogyoon Song", "Qing Qu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f33c"}, "filepath": "data/2402.02583.png", "tags": [], "_media_type": "image", "_rand": 0.9999680666655358, "arXiv_link": "https://arxiv.org/abs/2402.02583", "other_link": "https://github.com/MC-E/DragonDiffusion.", "title": "DiffEditor: Boosting Accuracy and Flexibility on Diffusion-based Image Editing", "abstract": "Large-scale Text-to-Image (T2I) diffusion models have revolutionized image\ngeneration over the last few years. Although owning diverse and high-quality\ngeneration capabilities, translating these abilities to fine-grained image\nediting remains challenging. In this paper, we propose DiffEditor to rectify\ntwo weaknesses in existing diffusion-based image editing: (1) in complex\nscenarios, editing results often lack editing accuracy and exhibit unexpected\nartifacts; (2) lack of flexibility to harmonize editing operations, e.g.,\nimagine new content. In our solution, we introduce image prompts in\nfine-grained image editing, cooperating with the text prompt to better describe\nthe editing content. To increase the flexibility while maintaining content\nconsistency, we locally combine stochastic differential equation (SDE) into the\nordinary differential equation (ODE) sampling. In addition, we incorporate\nregional score-based gradient guidance and a time travel strategy into the\ndiffusion sampling, further improving the editing quality. Extensive\nexperiments demonstrate that our method can efficiently achieve\nstate-of-the-art performance on various fine-grained image editing tasks,\nincluding editing within a single image (e.g., object moving, resizing, and\ncontent dragging) and across images (e.g., appearance replacing and object\npasting). 
Our source code is released at\nhttps://github.com/MC-E/DragonDiffusion.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Chong Mou", "Xintao Wang", "Jiechong Song", "Ying Shan", "Jian Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f33d"}, "filepath": "data/2312.16933v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990451462371165, "arXiv_link": "http://export.arxiv.org/abs/2312.16933v1", "other_link": "", "title": "EvDiG: Event-guided Direct and Global Components Separation", "abstract": "Event cameras and RGB cameras exhibit complementary characteristics in\nimaging: the former possesses high dynamic range (HDR) and high temporal\nresolution, while the latter provides rich texture and color information. This\nmakes the integration of event cameras into middle- and high-level RGB-based\nvision tasks highly promising. However, challenges arise in multi-modal fusion,\ndata annotation, and model architecture design. In this paper, we propose\nEvPlug, which learns a plug-and-play event and image fusion module from the\nsupervision of the existing RGB-based model. The learned fusion module\nintegrates event streams with image features in the form of a plug-in, endowing\nthe RGB-based model to be robust to HDR and fast motion scenes while enabling\nhigh temporal resolution inference. Our method only requires unlabeled\nevent-image pairs (no pixel-wise alignment required) and does not alter the\nstructure or weights of the RGB-based model. We demonstrate the superiority of\nEvPlug in several vision tasks such as object detection, semantic segmentation,\nand 3D hand pose estimation", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["xinyu zhou", "Peiqi Duan", "Boyu Li", "Chu Zhou", "Chao Xu", "Boxin Shi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f33e"}, "filepath": "data/2403.05094.png", "tags": [], "_media_type": "image", "_rand": 0.9995183227197754, "arXiv_link": "https://arxiv.org/abs/2403.05094", "other_link": "", "title": "Face2Diffusion for Fast and Editable Face Personalization", "abstract": "Face personalization aims to insert specific faces, taken from images, into\npretrained text-to-image diffusion models. However, it is still challenging for\nprevious methods to preserve both the identity similarity and editability due\nto overfitting to training samples. In this paper, we propose Face2Diffusion\n(F2D) for high-editability face personalization. The core idea behind F2D is\nthat removing identity-irrelevant information from the training pipeline\nprevents the overfitting problem and improves editability of encoded faces. F2D\nconsists of the following three novel components: 1) Multi-scale identity\nencoder provides well-disentangled identity features while keeping the benefits\nof multi-scale information, which improves the diversity of camera poses. 2)\nExpression guidance disentangles face expressions from identities and improves\nthe controllability of face expressions. 
3) Class-guided denoising\nregularization encourages models to learn how faces should be denoised, which\nboosts the text-alignment of backgrounds. Extensive experiments on the\nFaceForensics++ dataset and diverse prompts demonstrate our method greatly\nimproves the trade-off between the identity- and text-fidelity compared to\nprevious state-of-the-art methods.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Kaede Shiohara", "Toshihiko Yamasaki"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f33f"}, "filepath": "data/2205.11169.png", "tags": [], "_media_type": "image", "_rand": 0.9993111806956166, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2205.11169", "other_link": "https://github.com/thunlp/PEVL.", "title": "PeVL: Pose-Enhanced Vision-Language Model for Fine-Grained Human Action Recognition", "abstract": "Vision-language pre-training (VLP) has shown impressive performance on a wide\nrange of cross-modal tasks, where VLP models without reliance on object\ndetectors are becoming the mainstream due to their superior computation\nefficiency and competitive performance. However, the removal of object\ndetectors also deprives the capability of VLP models in explicit object\nmodeling, which is essential to various position-sensitive vision-language (VL)\ntasks, such as referring expression comprehension and visual commonsense\nreasoning. To address the challenge, we introduce PEVL that enhances the\npre-training and prompt tuning of VLP models with explicit object position\nmodeling. Specifically, PEVL reformulates discretized object positions and\nlanguage in a unified language modeling framework, which facilitates explicit\nVL alignment during pre-training, and also enables flexible prompt tuning for\nvarious downstream tasks. We show that PEVL enables state-of-the-art\nperformance of detector-free VLP models on position-sensitive tasks such as\nreferring expression comprehension and phrase grounding, and also improves the\nperformance on position-insensitive tasks with grounded inputs. We make the\ndata and code for this paper publicly available at\nhttps://github.com/thunlp/PEVL.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Haosong Zhang", "Mei Leong", "Liyuan Li", "Weisi Lin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f340"}, "filepath": "data/2404.00299.png", "tags": [], "_media_type": "image", "_rand": 0.99944039773635, "arXiv_link": "https://arxiv.org/abs/2404.00299", "other_link": "", "title": "HOI-M$^3$: Capture Multiple Humans and Objects Interaction within Contextual Environment", "abstract": "Humans naturally interact with both others and the surrounding multiple\nobjects, engaging in various social activities. However, recent advances in\nmodeling human-object interactions mostly focus on perceiving isolated\nindividuals and objects, due to fundamental data scarcity. In this paper, we\nintroduce HOI-M3, a novel large-scale dataset for modeling the interactions of\nMultiple huMans and Multiple objects. 
Notably, it provides accurate 3D tracking\nfor both humans and objects from dense RGB and object-mounted IMU inputs,\ncovering 199 sequences and 181M frames of diverse humans and objects under rich\nactivities. With the unique HOI-M3 dataset, we introduce two novel data-driven\ntasks with companion strong baselines: monocular capture and unstructured\ngeneration of multiple human-object interactions. Extensive experiments\ndemonstrate that our dataset is challenging and worthy of further research\nabout multiple human-object interactions and behavior analysis. Our HOI-M3\ndataset, corresponding codes, and pre-trained models will be disseminated to\nthe community for future research.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Juze Zhang", "Jingyan Zhang", "Zining Song", "Zhanhe Shi", "Chengfeng Zhao", "Ye Shi", "Jingyi Yu", "Lan Xu", "Jingya Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f341"}, "filepath": "data/2405.00587.png", "tags": [], "_media_type": "image", "_rand": 0.9993603053146364, "arXiv_link": "https://arxiv.org/abs/2405.00587", "other_link": "https://zhao-yian.github.io/GraCo.", "title": "GraCo: Granularity-Controllable Interactive Segmentation", "abstract": "Interactive Segmentation (IS) segments specific objects or parts in the image\naccording to user input. Current IS pipelines fall into two categories:\nsingle-granularity output and multi-granularity output. The latter aims to\nalleviate the spatial ambiguity present in the former. However, the\nmulti-granularity output pipeline suffers from limited interaction flexibility\nand produces redundant results. In this work, we introduce\nGranularity-Controllable Interactive Segmentation (GraCo), a novel approach\nthat allows precise control of prediction granularity by introducing additional\nparameters to input. This enhances the customization of the interactive system\nand eliminates redundancy while resolving ambiguity. Nevertheless, the\nexorbitant cost of annotating multi-granularity masks and the lack of available\ndatasets with granularity annotations make it difficult for models to acquire\nthe necessary guidance to control output granularity. To address this problem,\nwe design an any-granularity mask generator that exploits the semantic property\nof the pre-trained IS model to automatically generate abundant mask-granularity\npairs without requiring additional manual annotation. Based on these pairs, we\npropose a granularity-controllable learning strategy that efficiently imparts\nthe granularity controllability to the IS model. Extensive experiments on\nintricate scenarios at object and part levels demonstrate that our GraCo has\nsignificant advantages over previous methods. This highlights the potential of\nGraCo to be a flexible annotation tool, capable of adapting to diverse\nsegmentation scenarios. 
The project page: https://zhao-yian.github.io/GraCo.", "keywords": [], "authors_list": ["Yian Zhao", "Kehan Li", "Zesen Cheng", "Pengchong Qiao", "Xiawu Zheng", "Rongrong Ji", "Chang Liu", "Li Yuan", "Jie Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f342"}, "filepath": "data/2311.11600.png", "tags": [], "_media_type": "image", "_rand": 0.9994789419882939, "arXiv_link": "https://arxiv.org/abs/2311.11600", "other_link": "", "title": "Deep Equilibrium Diffusion Restoration with Parallel Sampling", "abstract": "Diffusion model-based image restoration (IR) aims to use diffusion models to\nrecover high-quality (HQ) images from degraded images, achieving promising\nperformance. Due to the inherent property of diffusion models, most existing\nmethods need long serial sampling chains to restore HQ images step-by-step,\nresulting in expensive sampling time and high computation costs. Moreover, such\nlong sampling chains hinder understanding the relationship between inputs and\nrestoration results since it is hard to compute the gradients in the whole\nchains. In this work, we aim to rethink the diffusion model-based IR models\nthrough a different perspective, i.e., a deep equilibrium (DEQ) fixed point\nsystem, called DeqIR. Specifically, we derive an analytical solution by\nmodeling the entire sampling chain in these IR models as a joint multivariate\nfixed point system. Based on the analytical solution, we can conduct parallel\nsampling and restore HQ images without training. Furthermore, we compute fast\ngradients via DEQ inversion and found that initialization optimization can\nboost image quality and control the generation direction. Extensive experiments\non benchmarks demonstrate the effectiveness of our method on typical IR tasks\nand real-world settings.", "keywords": ["Efficient and scalable vision", "Low-level vision"], "authors_list": ["Jiezhang Cao", "Yue Shi", "Kai Zhang", "Yulun Zhang", "Radu Timofte", "Luc Van Gool"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f343"}, "filepath": "data/2405.19775.png", "tags": [], "_media_type": "image", "_rand": 0.9993520766669958, "arXiv_link": "https://arxiv.org/abs/2405.19775", "other_link": "", "title": "Puff-Net: Efficient Style Transfer with Pure Content and Style Feature Fusion Network", "abstract": "Style transfer aims to render an image with the artistic features of a style\nimage, while maintaining the original structure. Various methods have been put\nforward for this task, but some challenges still exist. For instance, it is\ndifficult for CNN-based methods to handle global information and long-range\ndependencies between input images, for which transformer-based methods have\nbeen proposed. Although transformers can better model the relationship between\ncontent and style images, they require high-cost hardware and time-consuming\ninference. To address these issues, we design a novel transformer model that\nincludes only the encoder, thus significantly reducing the computational cost.\nIn addition, we also find that existing style transfer methods may lead to\nimages under-stylied or missing content. 
In order to achieve better\nstylization, we design a content feature extractor and a style feature\nextractor, based on which pure content and style images can be fed to the\ntransformer. Finally, we propose a novel network termed Puff-Net, i.e., pure\ncontent and style feature fusion network. Through qualitative and quantitative\nexperiments, we demonstrate the advantages of our model compared to\nstate-of-the-art ones in the literature.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Sizhe Zheng", "Pan Gao", "Peng Zhou", "Jie Qin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f344"}, "filepath": "data/2405.04534.png", "tags": [], "_media_type": "image", "_rand": 0.9995676690955915, "arXiv_link": "https://arxiv.org/abs/2405.04534", "other_link": "https://dou-yiming.github.io/TaRF", "title": "Tactile-Augmented Radiance Fields", "abstract": "We present a scene representation, which we call a tactile-augmented radiance\nfield (TaRF), that brings vision and touch into a shared 3D space. This\nrepresentation can be used to estimate the visual and tactile signals for a\ngiven 3D position within a scene. We capture a scene's TaRF from a collection\nof photos and sparsely sampled touch probes. Our approach makes use of two\ninsights: (i) common vision-based touch sensors are built on ordinary cameras\nand thus can be registered to images using methods from multi-view geometry,\nand (ii) visually and structurally similar regions of a scene share the same\ntactile features. We use these insights to register touch signals to a captured\nvisual scene, and to train a conditional diffusion model that, provided with an\nRGB-D image rendered from a neural radiance field, generates its corresponding\ntactile signal. To evaluate our approach, we collect a dataset of TaRFs. This\ndataset contains more touch samples than previous real-world datasets, and it\nprovides spatially aligned visual signals for each captured touch signal. We\ndemonstrate the accuracy of our cross-modal generative model and the utility of\nthe captured visual-tactile data on several downstream tasks. Project page:\nhttps://dou-yiming.github.io/TaRF", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Yiming Dou", "Fengyu Yang", "Yi Liu", "Antonio Loquercio", "Andrew Owens"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f345"}, "filepath": "data/2402.08922.png", "tags": [], "_media_type": "image", "_rand": 0.999417930827232, "arXiv_link": "https://arxiv.org/abs/2402.08922", "other_link": "https://github.com/ruoxi-jia-group/Forward-INF.", "title": "The Mirrored Influence Hypothesis: Efficient Data Influence Estimation by Harnessing Forward Passes", "abstract": "Large-scale black-box models have become ubiquitous across numerous\napplications. Understanding the influence of individual training data sources\non predictions made by these models is crucial for improving their\ntrustworthiness. 
Current influence estimation techniques involve computing\ngradients for every training point or repeated training on different subsets.\nThese approaches face obvious computational challenges when scaled up to large\ndatasets and models.\n In this paper, we introduce and explore the Mirrored Influence Hypothesis,\nhighlighting a reciprocal nature of influence between training and test data.\nSpecifically, it suggests that evaluating the influence of training data on\ntest predictions can be reformulated as an equivalent, yet inverse problem:\nassessing how the predictions for training samples would be altered if the\nmodel were trained on specific test samples. Through both empirical and\ntheoretical validations, we demonstrate the wide applicability of our\nhypothesis. Inspired by this, we introduce a new method for estimating the\ninfluence of training data, which requires calculating gradients for specific\ntest samples, paired with a forward pass for each training point. This approach\ncan capitalize on the common asymmetry in scenarios where the number of test\nsamples under concurrent examination is much smaller than the scale of the\ntraining dataset, thus gaining a significant improvement in efficiency compared\nto existing approaches.\n We demonstrate the applicability of our method across a range of scenarios,\nincluding data attribution in diffusion models, data leakage detection,\nanalysis of memorization, mislabeled data detection, and tracing behavior in\nlanguage models. Our code will be made available at\nhttps://github.com/ruoxi-jia-group/Forward-INF.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Myeongseob Ko", "Feiyang Kang", "Weiyan Shi", "Ming Jin", "Zhou Yu", "Ruoxi Jia"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f346"}, "filepath": "data/2403.01427.png", "tags": [], "_media_type": "image", "_rand": 0.9999091053616052, "arXiv_link": "https://arxiv.org/abs/2403.01427", "other_link": "", "title": "Logit Standardization in Knowledge Distillation", "abstract": "Knowledge distillation involves transferring soft labels from a teacher to a\nstudent using a shared temperature-based softmax function. However, the\nassumption of a shared temperature between teacher and student implies a\nmandatory exact match between their logits in terms of logit range and\nvariance. This side-effect limits the performance of student, considering the\ncapacity discrepancy between them and the finding that the innate logit\nrelations of teacher are sufficient for student to learn. To address this\nissue, we propose setting the temperature as the weighted standard deviation of\nlogit and performing a plug-and-play Z-score pre-process of logit\nstandardization before applying softmax and Kullback-Leibler divergence. Our\npre-process enables student to focus on essential logit relations from teacher\nrather than requiring a magnitude match, and can improve the performance of\nexisting logit-based distillation methods. We also show a typical case where\nthe conventional setting of sharing temperature between teacher and student\ncannot reliably yield the authentic distillation evaluation; nonetheless, this\nchallenge is successfully alleviated by our Z-score. We extensively evaluate\nour method for various student and teacher models on CIFAR-100 and ImageNet,\nshowing its significant superiority. 
The vanilla knowledge distillation powered\nby our pre-process can achieve favorable performance against state-of-the-art\nmethods, and other distillation variants can obtain considerable gain with the\nassistance of our pre-process.", "keywords": [], "authors_list": ["Shangquan Sun", "Wenqi Ren", "Jingzhi Li", "Rui Wang", "Xiaochun Cao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f347"}, "filepath": "data/2404.05384.png", "tags": [], "_media_type": "image", "_rand": 0.9995635869900363, "arXiv_link": "https://arxiv.org/abs/2404.05384", "other_link": "https://github.com/SmilesDZgk/S-CFG.", "title": "Rethinking the Spatial Inconsistency in Classifier-Free Diffusion Guidance", "abstract": "Classifier-Free Guidance (CFG) has been widely used in text-to-image\ndiffusion models, where the CFG scale is introduced to control the strength of\ntext guidance on the whole image space. However, we argue that a global CFG\nscale results in spatial inconsistency on varying semantic strengths and\nsuboptimal image quality. To address this problem, we present a novel approach,\nSemantic-aware Classifier-Free Guidance (S-CFG), to customize the guidance\ndegrees for different semantic units in text-to-image diffusion models.\nSpecifically, we first design a training-free semantic segmentation method to\npartition the latent image into relatively independent semantic regions at each\ndenoising step. In particular, the cross-attention map in the denoising U-net\nbackbone is renormalized for assigning each patch to the corresponding token,\nwhile the self-attention map is used to complete the semantic regions. Then, to\nbalance the amplification of diverse semantic units, we adaptively adjust the\nCFG scales across different semantic regions to rescale the text guidance\ndegrees into a uniform level. Finally, extensive experiments demonstrate the\nsuperiority of S-CFG over the original CFG strategy on various text-to-image\ndiffusion models, without requiring any extra training cost. our codes are\navailable at https://github.com/SmilesDZgk/S-CFG.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Dazhong Shen", "Guanglu Song", "Zeyue Xue", "Fu-Yun Wang", "Yu Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f348"}, "filepath": "data/2404.00915.png", "tags": [], "_media_type": "image", "_rand": 0.9991035183134307, "arXiv_link": "https://arxiv.org/abs/2404.00915", "other_link": "", "title": "Scalable 3D Registration via Truncated Entry-wise Absolute Residuals", "abstract": "Given an input set of $3$D point pairs, the goal of outlier-robust $3$D\nregistration is to compute some rotation and translation that align as many\npoint pairs as possible. This is an important problem in computer vision, for\nwhich many highly accurate approaches have been recently proposed. Despite\ntheir impressive performance, these approaches lack scalability, often\noverflowing the $16$GB of memory of a standard laptop to handle roughly\n$30,000$ point pairs. 
In this paper, we propose a $3$D registration approach\nthat can process more than ten million ($10^7$) point pairs with over $99\\%$\nrandom outliers. Moreover, our method is efficient, entails low memory costs,\nand maintains high accuracy at the same time. We call our method TEAR, as it\ninvolves minimizing an outlier-robust loss that computes Truncated Entry-wise\nAbsolute Residuals. To minimize this loss, we decompose the original\n$6$-dimensional problem into two subproblems of dimensions $3$ and $2$,\nrespectively, solved in succession to global optimality via a customized\nbranch-and-bound method. While branch-and-bound is often slow and unscalable,\nthis does not apply to TEAR as we propose novel bounding functions that are\ntight and computationally efficient. Experiments on various datasets are\nconducted to validate the scalability and efficiency of our method.", "keywords": [], "authors_list": ["Tianyu Huang", "Liangzu Peng", "Rene Vidal", "Yun-Hui Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f349"}, "filepath": "data/2308.11408.png", "tags": [], "_media_type": "image", "_rand": 0.999888987129262, "arXiv_link": "https://arxiv.org/abs/2308.11408", "other_link": "https://gvecchio.com/matfuse.", "title": "MatFuse: Controllable Material Generation with Diffusion Models", "abstract": "Creating high-quality materials in computer graphics is a challenging and\ntime-consuming task, which requires great expertise. To simplify this process,\nwe introduce MatFuse, a unified approach that harnesses the generative power of\ndiffusion models for creation and editing of 3D materials. Our method\nintegrates multiple sources of conditioning, including color palettes,\nsketches, text, and pictures, enhancing creative possibilities and granting\nfine-grained control over material synthesis. Additionally, MatFuse enables\nmap-level material editing capabilities through latent manipulation by means of\na multi-encoder compression model which learns a disentangled latent\nrepresentation for each map. We demonstrate the effectiveness of MatFuse under\nmultiple conditioning settings and explore the potential of material editing.\nFinally, we assess the quality of the generated materials both quantitatively\nin terms of CLIP-IQA and FID scores and qualitatively by conducting a user\nstudy. Source code for training MatFuse and supplemental materials are publicly\navailable at https://gvecchio.com/matfuse.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Giuseppe Vecchio", "Renato Sortino", "Simone Palazzo", "Concetto Spampinato"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f34a"}, "filepath": "data/2304.08069v3.png", "tags": [], "_media_type": "image", "_rand": 0.9990546204353665, "arXiv_link": "https://arxiv.org/html/2304.08069v3", "other_link": "https://zhao-yian.github.io/RTDETR.", "title": "DETRs Beat YOLOs on Real-time Object Detection", "abstract": "The YOLO series has become the most popular framework for real-time object\ndetection due to its reasonable trade-off between speed and accuracy. However,\nwe observe that the speed and accuracy of YOLOs are negatively affected by the\nNMS. 
Recently, end-to-end Transformer-based detectors (DETRs) have provided an\nalternative to eliminating NMS. Nevertheless, the high computational cost\nlimits their practicality and hinders them from fully exploiting the advantage\nof excluding NMS. In this paper, we propose the Real-Time DEtection TRansformer\n(RT-DETR), the first real-time end-to-end object detector to our best knowledge\nthat addresses the above dilemma. We build RT-DETR in two steps, drawing on the\nadvanced DETR: first we focus on maintaining accuracy while improving speed,\nfollowed by maintaining speed while improving accuracy. Specifically, we design\nan efficient hybrid encoder to expeditiously process multi-scale features by\ndecoupling intra-scale interaction and cross-scale fusion to improve speed.\nThen, we propose the uncertainty-minimal query selection to provide\nhigh-quality initial queries to the decoder, thereby improving accuracy. In\naddition, RT-DETR supports flexible speed tuning by adjusting the number of\ndecoder layers to adapt to various scenarios without retraining. Our\nRT-DETR-R50 / R101 achieves 53.1% / 54.3% AP on COCO and 108 / 74 FPS on T4\nGPU, outperforming previously advanced YOLOs in both speed and accuracy. We\nalso develop scaled RT-DETRs that outperform the lighter YOLO detectors (S and\nM models). Furthermore, RT-DETR-R50 outperforms DINO-R50 by 2.2% AP in accuracy\nand about 21 times in FPS. After pre-training with Objects365, RT-DETR-R50 /\nR101 achieves 55.3% / 56.2% AP. The project page:\nhttps://zhao-yian.github.io/RTDETR.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yian Zhao", "Wenyu Lv", "Shangliang Xu", "Jinman Wei", "Guanzhong Wang", "Qingqing Dang", "Yi Liu", "Jie Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f34b"}, "filepath": "data/2405.10690.png", "tags": [], "_media_type": "image", "_rand": 0.9998853087520572, "arXiv_link": "https://arxiv.org/abs/2405.10690", "other_link": "", "title": "Weakly-Supervised Audio-Visual Video Parsing with Prototype-based Pseudo-Labeling", "abstract": "Weakly supervised audio-visual video parsing (AVVP) methods aim to detect\naudible-only, visible-only, and audible-visible events using only video-level\nlabels. Existing approaches tackle this by leveraging unimodal and cross-modal\ncontexts. However, we argue that while cross-modal learning is beneficial for\ndetecting audible-visible events, in the weakly supervised scenario, it\nnegatively impacts unaligned audible or visible events by introducing\nirrelevant modality information. In this paper, we propose CoLeaF, a novel\nlearning framework that optimizes the integration of cross-modal context in the\nembedding space such that the network explicitly learns to combine cross-modal\ninformation for audible-visible events while filtering them out for unaligned\nevents. Additionally, as videos often involve complex class relationships,\nmodelling them improves performance. However, this introduces extra\ncomputational costs into the network. Our framework is designed to leverage\ncross-class relationships during training without incurring additional\ncomputations at inference. Furthermore, we propose new metrics to better\nevaluate a method's capabilities in performing AVVP. 
Our extensive experiments\ndemonstrate that CoLeaF significantly improves the state-of-the-art results by\nan average of 1.9% and 2.4% F-score on the LLP and UnAV-100 datasets,\nrespectively.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Kranthi Kumar Rachavarapu", "Kalyan Ramakrishnan", "A. N. Rajagopalan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f34c"}, "filepath": "data/2405.05714.png", "tags": [], "_media_type": "image", "_rand": 0.9995117384532212, "arXiv_link": "https://arxiv.org/abs/2405.05714", "other_link": "", "title": "Estimating Noisy Class Posterior with Part-level Labels for Noisy Label Learning", "abstract": "In noisy label learning, estimating noisy class posteriors plays a\nfundamental role for developing consistent classifiers, as it forms the basis\nfor estimating clean class posteriors and the transition matrix. Existing\nmethods typically learn noisy class posteriors by training a classification\nmodel with noisy labels. However, when labels are incorrect, these models may\nbe misled to overemphasize the feature parts that do not reflect the instance\ncharacteristics, resulting in significant errors in estimating noisy class\nposteriors. To address this issue, this paper proposes to augment the\nsupervised information with part-level labels, encouraging the model to focus\non and integrate richer information from various parts. Specifically, our\nmethod first partitions features into distinct parts by cropping instances,\nyielding part-level labels associated with these various parts. Subsequently,\nwe introduce a novel single-to-multiple transition matrix to model the\nrelationship between the noisy and part-level labels, which incorporates\npart-level labels into a classifier-consistent framework. Utilizing this\nframework with part-level labels, we can learn the noisy class posteriors more\nprecisely by guiding the model to integrate information from various parts,\nultimately improving the classification performance. Our method is\ntheoretically sound, while experiments show that it is empirically effective in\nsynthetic and real-world noisy benchmarks.", "keywords": [], "authors_list": ["Rui Zhao", "Bin Shi", "Jianfei Ruan", "Tianze Pan", "Bo Dong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f34d"}, "filepath": "data/2311.10605.png", "tags": [], "_media_type": "image", "_rand": 0.9993316963636216, "arXiv_link": "https://arxiv.org/abs/2311.10605", "other_link": "", "title": "CA-Jaccard: Camera-aware Jaccard Distance for Person Re-identification", "abstract": "Person re-identification (re-ID) is a challenging task that aims to learn\ndiscriminative features for person retrieval. In person re-ID, Jaccard distance\nis a widely used distance metric, especially in re-ranking and clustering\nscenarios. However, we discover that camera variation has a significant\nnegative impact on the reliability of Jaccard distance. 
In particular, Jaccard\ndistance calculates the distance based on the overlap of relevant neighbors.\nDue to camera variation, intra-camera samples dominate the relevant neighbors,\nwhich reduces the reliability of the neighbors by introducing intra-camera\nnegative samples and excluding inter-camera positive samples. To overcome this\nproblem, we propose a novel camera-aware Jaccard (CA-Jaccard) distance that\nleverages camera information to enhance the reliability of Jaccard distance.\nSpecifically, we design camera-aware k-reciprocal nearest neighbors (CKRNNs) to\nfind k-reciprocal nearest neighbors on the intra-camera and inter-camera\nranking lists, which improves the reliability of relevant neighbors and\nguarantees the contribution of inter-camera samples in the overlap. Moreover,\nwe propose a camera-aware local query expansion (CLQE) to mine reliable samples\nin relevant neighbors by exploiting camera variation as a strong constraint and\nassign these samples higher weights in overlap, further improving the\nreliability. Our CA-Jaccard distance is simple yet effective and can serve as a\ngeneral distance metric for person re-ID methods with high reliability and low\ncomputational cost. Extensive experiments demonstrate the effectiveness of our\nmethod.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yiyu Chen", "Zheyi Fan", "Zhaoru Chen", "Yixuan Zhu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f34e"}, "filepath": "data/2403.16605.png", "tags": [], "_media_type": "image", "_rand": 0.9990702076433644, "arXiv_link": "https://arxiv.org/abs/2403.16605", "other_link": "", "title": "SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation", "abstract": "In recent years, semantic segmentation has become a pivotal tool in\nprocessing and interpreting satellite imagery. Yet, a prevalent limitation of\nsupervised learning techniques remains the need for extensive manual\nannotations by experts. In this work, we explore the potential of generative\nimage diffusion to address the scarcity of annotated data in earth observation\ntasks. The main idea is to learn the joint data manifold of images and labels,\nleveraging recent advancements in denoising diffusion probabilistic models. To\nthe best of our knowledge, we are the first to generate both images and\ncorresponding masks for satellite segmentation. We find that the obtained pairs\nnot only display high quality in fine-scale features but also ensure a wide\nsampling diversity. Both aspects are crucial for earth observation data, where\nsemantic classes can vary severely in scale and occurrence frequency. We employ\nthe novel data instances for downstream segmentation, as a form of data\naugmentation. In our experiments, we provide comparisons to prior works based\non discriminative diffusion models or GANs. 
We demonstrate that integrating\ngenerated samples yields significant quantitative improvements for satellite\nsemantic segmentation -- both compared to baselines and when training only on\nthe original data.", "keywords": [], "authors_list": ["Aysim Toker", "Marvin Eisenberger", "Daniel Cremers", "Laura Leal-Taixe"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f34f"}, "filepath": "data/2310.08854.png", "tags": [], "_media_type": "image", "_rand": 0.9990842724190906, "arXiv_link": "https://arxiv.org/abs/2310.08854", "other_link": "https://github.com/LeapLabTHU/Rank-DETR}.", "title": "EASE-DETR: Easing the Competition among Object Queries", "abstract": "Modern detection transformers (DETRs) use a set of object queries to predict\na list of bounding boxes, sort them by their classification confidence scores,\nand select the top-ranked predictions as the final detection results for the\ngiven input image. A highly performant object detector requires accurate\nranking for the bounding box predictions. For DETR-based detectors, the\ntop-ranked bounding boxes suffer from less accurate localization quality due to\nthe misalignment between classification scores and localization accuracy, thus\nimpeding the construction of high-quality detectors. In this work, we introduce\na simple and highly performant DETR-based object detector by proposing a series\nof rank-oriented designs, combinedly called Rank-DETR. Our key contributions\ninclude: (i) a rank-oriented architecture design that can prompt positive\npredictions and suppress the negative ones to ensure lower false positive\nrates, as well as (ii) a rank-oriented loss function and matching cost design\nthat prioritizes predictions of more accurate localization accuracy during\nranking to boost the AP under high IoU thresholds. We apply our method to\nimprove the recent SOTA methods (e.g., H-DETR and DINO-DETR) and report strong\nCOCO object detection results when using different backbones such as\nResNet-$50$, Swin-T, and Swin-L, demonstrating the effectiveness of our\napproach. Code is available at \\url{https://github.com/LeapLabTHU/Rank-DETR}.", "keywords": [], "authors_list": ["Yulu Gao", "Yifan Sun", "Xudong Ding", "Chuyang Zhao", "Si Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f350"}, "filepath": "data/2312.04372v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997980483787957, "arXiv_link": "https://arxiv.org/abs/2312.04372v2", "other_link": "https://github.com/PurdueDigitalTwin/LaMPilot.", "title": "LaMPilot: An Open Benchmark Dataset for Autonomous Driving with Language Model Programs", "abstract": "Autonomous driving (AD) has made significant strides in recent years.\nHowever, existing frameworks struggle to interpret and execute spontaneous user\ninstructions, such as \"overtake the car ahead.\" Large Language Models (LLMs)\nhave demonstrated impressive reasoning capabilities showing potential to bridge\nthis gap. In this paper, we present LaMPilot, a novel framework that integrates\nLLMs into AD systems, enabling them to follow user instructions by generating\ncode that leverages established functional primitives. 
We also introduce\nLaMPilot-Bench, the first benchmark dataset specifically designed to\nquantitatively evaluate the efficacy of language model programs in AD. Adopting\nthe LaMPilot framework, we conduct extensive experiments to assess the\nperformance of off-the-shelf LLMs on LaMPilot-Bench. Our results demonstrate\nthe potential of LLMs in handling diverse driving scenarios and following user\ninstructions in driving. To facilitate further research in this area, we\nrelease our code and data at https://github.com/PurdueDigitalTwin/LaMPilot.", "keywords": [], "authors_list": ["Yunsheng Ma", "Can Cui", "Xu Cao", "Wenqian Ye", "Peiran Liu", "Juanwu Lu", "Amr Abdelraouf", "Rohit Gupta", "Kyungtae Han", "Aniket Bera", "James Rehg", "Ziran Wang"], "category_name": "Computation and Language", "all_categories": ["Computation and Language", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f351"}, "filepath": "data/2312.02753.png", "tags": [], "_media_type": "image", "_rand": 0.9993928895269214, "arXiv_link": "https://arxiv.org/abs/2312.02753", "other_link": "", "title": "C3: High-performance and low-complexity neural compression from a single image or video", "abstract": "Most neural compression models are trained on large datasets of images or\nvideos in order to generalize to unseen data. Such generalization typically\nrequires large and expressive architectures with a high decoding complexity.\nHere we introduce C3, a neural compression method with strong rate-distortion\n(RD) performance that instead overfits a small model to each image or video\nseparately. The resulting decoding complexity of C3 can be an order of\nmagnitude lower than neural baselines with similar RD performance. C3 builds on\nCOOL-CHIC (Ladune et al.) and makes several simple and effective improvements\nfor images. We further develop new methodology to apply C3 to videos. On the\nCLIC2020 image benchmark, we match the RD performance of VTM, the reference\nimplementation of the H.266 codec, with less than 3k MACs/pixel for decoding.\nOn the UVG video benchmark, we match the RD performance of the Video\nCompression Transformer (Mentzer et al.), a well-established neural video\ncodec, with less than 5k MACs/pixel for decoding.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Hyunjik Kim", "Matthias Bauer", "Lucas Theis", "Jonathan Richard Schwarz", "Emilien Dupont"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition", "Machine Learning", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f352"}, "filepath": "data/2404.03789.png", "tags": [], "_media_type": "image", "_rand": 0.9997777927769711, "arXiv_link": "https://arxiv.org/abs/2404.03789", "other_link": "https://github.com/PurdueDigitalTwin/seneva.", "title": "Quantifying Uncertainty in Motion Prediction with Variational Bayesian Mixture", "abstract": "Safety and robustness are crucial factors in developing trustworthy\nautonomous vehicles. One essential aspect of addressing these factors is to\nequip vehicles with the capability to predict future trajectories for all\nmoving objects in the surroundings and quantify prediction uncertainties. 
In\nthis paper, we propose the Sequential Neural Variational Agent (SeNeVA), a\ngenerative model that describes the distribution of future trajectories for a\nsingle moving object. Our approach can distinguish Out-of-Distribution data\nwhile quantifying uncertainty and achieving competitive performance compared to\nstate-of-the-art methods on the Argoverse 2 and INTERACTION datasets.\nSpecifically, a 0.446 meters minimum Final Displacement Error, a 0.203 meters\nminimum Average Displacement Error, and a 5.35% Miss Rate are achieved on the\nINTERACTION test set. Extensive qualitative and quantitative analysis is also\nprovided to evaluate the proposed model. Our open-source code is available at\nhttps://github.com/PurdueDigitalTwin/seneva.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Juanwu Lu", "Can Cui", "Yunsheng Ma", "Aniket Bera", "Ziran Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f353"}, "filepath": "data/2403.10030.png", "tags": [], "_media_type": "image", "_rand": 0.9996371030506914, "arXiv_link": "https://arxiv.org/abs/2403.10030", "other_link": "https://github.com/mlvlab/MCTF.", "title": "Multi-criteria Token Fusion with One-step-ahead Attention for Efficient Vision Transformers", "abstract": "Vision Transformer (ViT) has emerged as a prominent backbone for computer\nvision. For more efficient ViTs, recent works lessen the quadratic cost of the\nself-attention layer by pruning or fusing the redundant tokens. However, these\nworks faced the speed-accuracy trade-off caused by the loss of information.\nHere, we argue that token fusion needs to consider diverse relations between\ntokens to minimize information loss. In this paper, we propose a Multi-criteria\nToken Fusion (MCTF), that gradually fuses the tokens based on multi-criteria\n(e.g., similarity, informativeness, and size of fused tokens). Further, we\nutilize the one-step-ahead attention, which is the improved approach to capture\nthe informativeness of the tokens. By training the model equipped with MCTF\nusing a token reduction consistency, we achieve the best speed-accuracy\ntrade-off in the image classification (ImageNet1K). Experimental results prove\nthat MCTF consistently surpasses the previous reduction methods with and\nwithout training. Specifically, DeiT-T and DeiT-S with MCTF reduce FLOPs by\nabout 44% while improving the performance (+0.5%, and +0.3%) over the base\nmodel, respectively. We also demonstrate the applicability of MCTF in various\nVision Transformers (e.g., T2T-ViT, LV-ViT), achieving at least 31% speedup\nwithout performance degradation. Code is available at\nhttps://github.com/mlvlab/MCTF.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Sanghyeok Lee", "Joonmyung Choi", "Hyunwoo J. Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f354"}, "filepath": "data/2404.05626.png", "tags": [], "_media_type": "image", "_rand": 0.999276294470559, "arXiv_link": "https://arxiv.org/abs/2404.05626", "other_link": "", "title": "Unsupervised Learning of Category-Level 3D Pose from Object-Centric Videos", "abstract": "3D object pose estimation is a challenging task. 
Previous works always\nrequire thousands of object images with annotated poses for learning the 3D\npose correspondence, which is laborious and time-consuming for labeling. In\nthis paper, we propose to learn a category-level 3D object pose estimator\nwithout pose annotations. Instead of using manually annotated images, we\nleverage diffusion models (e.g., Zero-1-to-3) to generate a set of images under\ncontrolled pose differences and propose to learn our object pose estimator with\nthose images. Directly using the original diffusion model leads to images with\nnoisy poses and artifacts. To tackle this issue, firstly, we exploit an image\nencoder, which is learned from a specially designed contrastive pose learning,\nto filter the unreasonable details and extract image feature maps.\nAdditionally, we propose a novel learning strategy that allows the model to\nlearn object poses from those generated image sets without knowing the\nalignment of their canonical poses. Experimental results show that our method\nhas the capability of category-level object pose estimation from a single shot\nsetting (as pose definition), while significantly outperforming other\nstate-of-the-art methods on the few-shot category-level object pose estimation\nbenchmarks.", "keywords": [], "authors_list": ["Leonhard Sommer", "Artur Jesslen", "Eddy Ilg", "Adam Kortylewski"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f355"}, "filepath": "data/2401.04921.png", "tags": [], "_media_type": "image", "_rand": 0.9991563241368525, "arXiv_link": "https://arxiv.org/abs/2401.04921", "other_link": "https://github.com/KHB1698/DRPose.", "title": "DiffusionRegPose: Enhancing Multi-Person Pose Estimation using a Diffusion-Based End-to-End Regression Approach", "abstract": "Previous probabilistic models for 3D Human Pose Estimation (3DHPE) aimed to\nenhance pose accuracy by generating multiple hypotheses. However, most of the\nhypotheses generated deviate substantially from the true pose. Compared to\ndeterministic models, the excessive uncertainty in probabilistic models leads\nto weaker performance in single-hypothesis prediction. To address these two\nchallenges, we propose a diffusion-based refinement framework called DRPose,\nwhich refines the output of deterministic models by reverse diffusion and\nachieves more suitable multi-hypothesis prediction for the current pose\nbenchmark by multi-step refinement with multiple noises. To this end, we\npropose a Scalable Graph Convolution Transformer (SGCT) and a Pose Refinement\nModule (PRM) for denoising and refining. Extensive experiments on Human3.6M and\nMPI-INF-3DHP datasets demonstrate that our method achieves state-of-the-art\nperformance on both single and multi-hypothesis 3DHPE. 
Code is available at\nhttps://github.com/KHB1698/DRPose.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Dayi Tan", "Hansheng Chen", "Wei Tian", "Lu Xiong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f356"}, "filepath": "data/2403.06122.png", "tags": [], "_media_type": "image", "_rand": 0.999079356901281, "arXiv_link": "https://arxiv.org/abs/2403.06122", "other_link": "", "title": "Style Blind Domain Generalized Semantic Segmentation via Covariance Alignment and Semantic Consistence Contrastive Learning", "abstract": "Deep learning models for semantic segmentation often experience performance\ndegradation when deployed to unseen target domains unidentified during the\ntraining phase. This is mainly due to variations in image texture (\\ie style)\nfrom different data sources. To tackle this challenge, existing domain\ngeneralized semantic segmentation (DGSS) methods attempt to remove style\nvariations from the feature. However, these approaches struggle with the\nentanglement of style and content, which may lead to the unintentional removal\nof crucial content information, causing performance degradation. This study\naddresses this limitation by proposing BlindNet, a novel DGSS approach that\nblinds the style without external modules or datasets. The main idea behind our\nproposed approach is to alleviate the effect of style in the encoder whilst\nfacilitating robust segmentation in the decoder. To achieve this, BlindNet\ncomprises two key components: covariance alignment and semantic consistency\ncontrastive learning. Specifically, the covariance alignment trains the encoder\nto uniformly recognize various styles and preserve the content information of\nthe feature, rather than removing the style-sensitive factor. Meanwhile,\nsemantic consistency contrastive learning enables the decoder to construct\ndiscriminative class embedding space and disentangles features that are\nvulnerable to misclassification. Through extensive experiments, our approach\noutperforms existing DGSS methods, exhibiting robustness and superior\nperformance for semantic segmentation on unseen target domains.", "keywords": [], "authors_list": ["Woo-Jin Ahn", "Geun-Yeong Yang", "Hyunduck Choi", "Myo-Taeg Lim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f357"}, "filepath": "data/2306.16050.png", "tags": [], "_media_type": "image", "_rand": 0.9990271208530471, "arXiv_link": "https://arxiv.org/abs/2306.16050", "other_link": "", "title": "Robust Image Denoising through Adversarial Frequency Mixup", "abstract": "Deep neural networks (DNNs) have shown superior performance comparing to\ntraditional image denoising algorithms. However, DNNs are inevitably vulnerable\nwhile facing adversarial attacks. In this paper, we propose an adversarial\nattack method named denoising-PGD which can successfully attack all the current\ndeep denoising models while keep the noise distribution almost unchanged. 
We\nsurprisingly find that the current mainstream non-blind denoising models\n(DnCNN, FFDNet, ECNDNet, BRDNet), blind denoising models (DnCNN-B, Noise2Noise,\nRDDCNN-B, FAN), plug-and-play (DPIR, CurvPnP) and unfolding denoising models\n(DeamNet) almost share the same adversarial sample set on both grayscale and\ncolor images, respectively. Shared adversarial sample set indicates that all\nthese models are similar in term of local behaviors at the neighborhood of all\nthe test samples. Thus, we further propose an indicator to measure the local\nsimilarity of models, called robustness similitude. Non-blind denoising models\nare found to have high robustness similitude across each other, while\nhybrid-driven models are also found to have high robustness similitude with\npure data-driven non-blind denoising models. According to our robustness\nassessment, data-driven non-blind denoising models are the most robust. We use\nadversarial training to complement the vulnerability to adversarial attacks.\nMoreover, the model-driven image denoising BM3D shows resistance on adversarial\nattacks.", "keywords": ["Low-level vision"], "authors_list": ["Donghun Ryou", "Inju Ha", "Hyewon Yoo", "Dongwan Kim", "Bohyung Han"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f358"}, "filepath": "data/2403.07346v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990510086726798, "arXiv_link": "https://arxiv.org/abs/2403.07346v1", "other_link": "", "title": "Complementing Event Streams and RGB Frames for Hand Mesh Reconstruction", "abstract": "Reliable hand mesh reconstruction (HMR) from commonly-used color and depth\nsensors is challenging especially under scenarios with varied illuminations and\nfast motions. Event camera is a highly promising alternative for its high\ndynamic range and dense temporal resolution properties, but it lacks key\ntexture appearance for hand mesh reconstruction. In this paper, we propose\nEvRGBHand -- the first approach for 3D hand mesh reconstruction with an event\ncamera and an RGB camera compensating for each other. By fusing two modalities\nof data across time, space, and information dimensions,EvRGBHand can tackle\noverexposure and motion blur issues in RGB-based HMR and foreground scarcity\nand background overflow issues in event-based HMR. We further propose\nEvRGBDegrader, which allows our model to generalize effectively in challenging\nscenes, even when trained solely on standard scenes, thus reducing data\nacquisition costs. 
Experiments on real-world data demonstrate that EvRGBHand\ncan effectively solve the challenging issues when using either type of camera\nalone via retaining the merits of both, and shows the potential of\ngeneralization to outdoor scenes and another type of event camera.", "keywords": ["Biometrics and human analysis", "Multimodal models and vision-language models"], "authors_list": ["Jianping Jiang", "xinyu zhou", "Bingxuan Wang", "Xiaoming Deng", "Chao Xu", "Boxin Shi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f359"}, "filepath": "data/2403.12350.png", "tags": [], "_media_type": "image", "_rand": 0.9996035123615826, "arXiv_link": "https://arxiv.org/abs/2403.12350", "other_link": "https://github.com/nblt/F-SAM.", "title": "Friendly Sharpness-Aware Minimization", "abstract": "Sharpness-Aware Minimization (SAM) has been instrumental in improving deep\nneural network training by minimizing both training loss and loss sharpness.\nDespite the practical success, the mechanisms behind SAM's generalization\nenhancements remain elusive, limiting its progress in deep learning\noptimization. In this work, we investigate SAM's core components for\ngeneralization improvement and introduce \"Friendly-SAM\" (F-SAM) to further\nenhance SAM's generalization. Our investigation reveals the key role of\nbatch-specific stochastic gradient noise within the adversarial perturbation,\ni.e., the current minibatch gradient, which significantly influences SAM's\ngeneralization performance. By decomposing the adversarial perturbation in SAM\ninto full gradient and stochastic gradient noise components, we discover that\nrelying solely on the full gradient component degrades generalization while\nexcluding it leads to improved performance. The possible reason lies in the\nfull gradient component's increase in sharpness loss for the entire dataset,\ncreating inconsistencies with the subsequent sharpness minimization step solely\non the current minibatch data. Inspired by these insights, F-SAM aims to\nmitigate the negative effects of the full gradient component. It removes the\nfull gradient estimated by an exponentially moving average (EMA) of historical\nstochastic gradients, and then leverages stochastic gradient noise for improved\ngeneralization. Moreover, we provide theoretical validation for the EMA\napproximation and prove the convergence of F-SAM on non-convex problems.\nExtensive experiments demonstrate the superior generalization performance and\nrobustness of F-SAM over vanilla SAM. Code is available at\nhttps://github.com/nblt/F-SAM.", "keywords": [], "authors_list": ["Tao Li", "Pan Zhou", "Zhengbao He", "Xinwen Cheng", "Xiaolin Huang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f35a"}, "filepath": "data/2405.15605v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992341100096009, "arXiv_link": "https://arxiv.org/html/2405.15605v2", "other_link": "https://github.com/jjiantong/FastPGM.", "title": "Efficient Hyperparameter Optimization with Adaptive Fidelity Identification", "abstract": "Probabilistic graphical models (PGMs) serve as a powerful framework for\nmodeling complex systems with uncertainty and extracting valuable insights from\ndata. 
However, users face challenges when applying PGMs to their problems in\nterms of efficiency and usability. This paper presents Fast-PGM, an efficient\nand open-source library for PGM learning and inference. Fast-PGM supports\ncomprehensive tasks on PGMs, including structure and parameter learning, as\nwell as exact and approximate inference, and enhances efficiency of the tasks\nthrough computational and memory optimizations and parallelization techniques.\nConcurrently, Fast-PGM furnishes developers with flexible building blocks,\nfurnishes learners with detailed documentation, and affords non-experts\nuser-friendly interfaces, thereby ameliorating the usability of PGMs to users\nacross a spectrum of expertise levels. The source code of Fast-PGM is available\nat https://github.com/jjiantong/FastPGM.", "keywords": [], "authors_list": ["Jiantong Jiang", "Zeyi Wen", "Atif Mansoor", "Ajmal Mian"], "category_name": "Machine Learning", "all_categories": ["Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f35b"}, "filepath": "data/2403.07246.png", "tags": [], "_media_type": "image", "_rand": 0.9999819798722508, "arXiv_link": "https://arxiv.org/abs/2403.07246", "other_link": "", "title": "Exploring Pose-Aware Human-Object Interaction via Hybrid Learning", "abstract": "Human-object interaction (HOI) detection aims to locate human-object pairs\nand identify their interaction categories in images. Most existing methods\nprimarily focus on supervised learning, which relies on extensive manual HOI\nannotations. In this paper, we propose a novel framework, termed Knowledge\nIntegration to HOI (KI2HOI), that effectively integrates the knowledge of\nvisual-language model to improve zero-shot HOI detection. Specifically, the\nverb feature learning module is designed based on visual semantics, by\nemploying the verb extraction decoder to convert corresponding verb queries\ninto interaction-specific category representations. We develop an effective\nadditive self-attention mechanism to generate more comprehensive visual\nrepresentations. Moreover, the innovative interaction representation decoder\neffectively extracts informative regions by integrating spatial and visual\nfeature information through a cross-attention mechanism. 
To deal with zero-shot\nlearning in low-data, we leverage a priori knowledge from the CLIP text encoder\nto initialize the linear classifier for enhanced interaction understanding.\nExtensive experiments conducted on the mainstream HICO-DET and V-COCO datasets\ndemonstrate that our model outperforms the previous methods in various\nzero-shot and full-supervised settings.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["EASTMAN Z Y WU", "Yali Li", "Yuan Wang", "Shengjin Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f35c"}, "filepath": "data/2403.16788.png", "tags": [], "_media_type": "image", "_rand": 0.9993880788851007, "arXiv_link": "https://arxiv.org/abs/2403.16788", "other_link": "", "title": "HPL-ESS: Hybrid Pseudo-Labeling for Unsupervised Event-based Semantic Segmentation", "abstract": "Event-based semantic segmentation has gained popularity due to its capability\nto deal with scenarios under high-speed motion and extreme lighting conditions,\nwhich cannot be addressed by conventional RGB cameras. Since it is hard to\nannotate event data, previous approaches rely on event-to-image reconstruction\nto obtain pseudo labels for training. However, this will inevitably introduce\nnoise, and learning from noisy pseudo labels, especially when generated from a\nsingle source, may reinforce the errors. This drawback is also called\nconfirmation bias in pseudo-labeling. In this paper, we propose a novel hybrid\npseudo-labeling framework for unsupervised event-based semantic segmentation,\nHPL-ESS, to alleviate the influence of noisy pseudo labels. In particular, we\nfirst employ a plain unsupervised domain adaptation framework as our baseline,\nwhich can generate a set of pseudo labels through self-training. Then, we\nincorporate offline event-to-image reconstruction into the framework, and\nobtain another set of pseudo labels by predicting segmentation maps on the\nreconstructed images. A noisy label learning strategy is designed to mix the\ntwo sets of pseudo labels and enhance the quality. Moreover, we propose a soft\nprototypical alignment module to further improve the consistency of target\ndomain features. 
Extensive experiments show that our proposed method\noutperforms existing state-of-the-art methods by a large margin on the\nDSEC-Semantic dataset (+5.88% accuracy, +10.32% mIoU), which even surpasses\nseveral supervised methods.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Linglin Jing", "Yiming Ding", "Yunpeng Gao", "Zhigang Wang", "Xu Yan", "Dong Wang", "Gerald Schaefer", "Hui Fang", "Bin Zhao", "Xuelong Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f35d"}, "filepath": "data/2312.04963.png", "tags": [], "_media_type": "image", "_rand": 0.999920786285605, "arXiv_link": "https://arxiv.org/abs/2312.04963", "other_link": "https://bidiff.github.io/.", "title": "Text-to-3D Generation with Bidirectional Diffusion using both 3D and 2D priors", "abstract": "Most 3D generation research focuses on up-projecting 2D foundation models\ninto the 3D space, either by minimizing 2D Score Distillation Sampling (SDS)\nloss or fine-tuning on multi-view datasets. Without explicit 3D priors, these\nmethods often lead to geometric anomalies and multi-view inconsistency.\nRecently, researchers have attempted to improve the genuineness of 3D objects\nby directly training on 3D datasets, albeit at the cost of low-quality texture\ngeneration due to the limited texture diversity in 3D datasets. To harness the\nadvantages of both approaches, we propose Bidirectional Diffusion(BiDiff), a\nunified framework that incorporates both a 3D and a 2D diffusion process, to\npreserve both 3D fidelity and 2D texture richness, respectively. Moreover, as a\nsimple combination may yield inconsistent generation results, we further bridge\nthem with novel bidirectional guidance. In addition, our method can be used as\nan initialization of optimization-based models to further improve the quality\nof 3D model and efficiency of optimization, reducing the generation process\nfrom 3.4 hours to 20 minutes. Experimental results have shown that our model\nachieves high-quality, diverse, and scalable 3D generation. Project website:\nhttps://bidiff.github.io/.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Lihe Ding", "Shaocong Dong", "Zhanpeng Huang", "Zibin Wang", "Yiyuan Zhang", "Kaixiong Gong", "Dan Xu", "Tianfan Xue"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f35e"}, "filepath": "data/2311.18259.png", "tags": [], "_media_type": "image", "_rand": 0.9994391447197968, "arXiv_link": "https://arxiv.org/abs/2311.18259", "other_link": "http://ego-exo4d-data.org/", "title": "Ego-Exo4D: Understanding Skilled Human Activity from First- and Third-Person Perspectives", "abstract": "We present Ego-Exo4D, a diverse, large-scale multimodal multiview video\ndataset and benchmark challenge. Ego-Exo4D centers around\nsimultaneously-captured egocentric and exocentric video of skilled human\nactivities (e.g., sports, music, dance, bike repair). 740 participants from 13\ncities worldwide performed these activities in 123 different natural scene\ncontexts, yielding long-form captures from 1 to 42 minutes each and 1,286 hours\nof video combined. 
The multimodal nature of the dataset is unprecedented: the\nvideo is accompanied by multichannel audio, eye gaze, 3D point clouds, camera\nposes, IMU, and multiple paired language descriptions -- including a novel\n\"expert commentary\" done by coaches and teachers and tailored to the\nskilled-activity domain. To push the frontier of first-person video\nunderstanding of skilled human activity, we also present a suite of benchmark\ntasks and their annotations, including fine-grained activity understanding,\nproficiency estimation, cross-view translation, and 3D hand/body pose. All\nresources are open sourced to fuel new research in the community. Project page:\nhttp://ego-exo4d-data.org/", "keywords": ["Scene analysis and understanding", "Biometrics and human analysis"], "authors_list": ["Kristen Grauman", "Andrew Westbury", "Lorenzo Torresani", "Kris Kitani", "Jitendra Malik", "Triantafyllos Afouras", "Kumar Ashutosh", "Vijay Baiyya", "Siddhant Bansal", "Bikram Boote", "Eugene Byrne", "Zachary Chavis", "Joya Chen", "Feng Cheng", "Fu-Jen Chu", "Sean Crane", "Avijit Dasgupta", "Jing Dong", "Maria Escobar", "Cristhian David Forigua Diaz", "Abrham Gebreselasie", "Sanjay Haresh", "Jing Huang", "Md Mohaiminul Islam", "Suyog Jain", "Rawal Khirodkar", "Devansh Kukreja", "Kevin Liang", "Jia-Wei Liu", "Sagnik Majumder", "Yongsen Mao", "Miguel Martin", "Effrosyni Mavroudi", "Tushar Nagarajan", "Francesco Ragusa", "Santhosh Kumar Ramakrishnan", "Luigi Seminara", "Arjun Somayazulu", "Yale Song", "Shan Su", "Zihui Xue", "Edward Zhang", "Jinxu Zhang", "Angela Castillo", "Changan Chen", "Fu Xinzhu", "Ryosuke Furuta", "Cristina Gonz\u00e1lez", "Gupta", "Jiabo Hu", "Yifei Huang", "Yiming Huang", "Weslie Khoo", "Anush Kumar", "Robert Kuo", "Sach Lakhavani", "Miao Liu", "Mi Luo", "Zhengyi Luo", "Brighid Meredith", "Austin Miller", "Oluwatumininu Oguntola", "Xiaqing Pan", "Penny Peng", "Shraman Pramanick", "Merey Ramazanova", "Fiona Ryan", "Wei Shan", "Kiran Somasundaram", "Chenan Song", "Audrey Southerland", "Masatoshi Tateno", "Huiyu Wang", "Yuchen Wang", "Takuma Yagi", "Mingfei Yan", "Xitong Yang", "Zecheng Yu", "Shengxin Zha", "Chen Zhao", "Ziwei Zhao", "Zhifan Zhu", "Jeff Zhuo", "Pablo ARBELAEZ", "Gedas Bertasius", "Dima Damen", "Jakob Engel", "Giovanni Maria Farinella", "Antonino Furnari", "Bernard Ghanem", "Judy Hoffman", "C.V. Jawahar", "Richard Newcombe", "Hyun Soo Park", "James Rehg", "Yoichi Sato", "Manolis Savva", "Jianbo Shi", "Mike Zheng Shou", "Michael Wray"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f35f"}, "filepath": "data/2405.17405.png", "tags": [], "_media_type": "image", "_rand": 0.9992641168208172, "arXiv_link": "https://arxiv.org/abs/2405.17405", "other_link": "https://human4dit.github.io.", "title": "Control4D: Efficient 4D Portrait Editing with Text", "abstract": "We present a novel approach for generating high-quality, spatio-temporally\ncoherent human videos from a single image under arbitrary viewpoints. Our\nframework combines the strengths of U-Nets for accurate condition injection and\ndiffusion transformers for capturing global correlations across viewpoints and\ntime. The core is a cascaded 4D transformer architecture that factorizes\nattention across views, time, and spatial dimensions, enabling efficient\nmodeling of the 4D space. 
Precise conditioning is achieved by injecting human\nidentity, camera parameters, and temporal signals into the respective\ntransformers. To train this model, we curate a multi-dimensional dataset\nspanning images, videos, multi-view data and 3D/4D scans, along with a\nmulti-dimensional training strategy. Our approach overcomes the limitations of\nprevious methods based on GAN or UNet-based diffusion models, which struggle\nwith complex motions and viewpoint changes. Through extensive experiments, we\ndemonstrate our method's ability to synthesize realistic, coherent and\nfree-view human videos, paving the way for advanced multimedia applications in\nareas such as virtual reality and animation. Our project website is\nhttps://human4dit.github.io.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Ruizhi Shao", "Jingxiang Sun", "Cheng Peng", "Zerong Zheng", "Boyao ZHOU", "Hongwen Zhang", "Yebin Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f360"}, "filepath": "data/2403.17460.png", "tags": [], "_media_type": "image", "_rand": 0.9997949966203743, "arXiv_link": "https://arxiv.org/abs/2403.17460", "other_link": "https://github.com/dongrunmin/RefDiff.", "title": "Building Bridges across Spatial and Temporal Resolutions: Reference-Based Super-Resolution via Change Priors and Conditional Diffusion Model", "abstract": "Reference-based super-resolution (RefSR) has the potential to build bridges\nacross spatial and temporal resolutions of remote sensing images. However,\nexisting RefSR methods are limited by the faithfulness of content\nreconstruction and the effectiveness of texture transfer in large scaling\nfactors. Conditional diffusion models have opened up new opportunities for\ngenerating realistic high-resolution images, but effectively utilizing\nreference images within these models remains an area for further exploration.\nFurthermore, content fidelity is difficult to guarantee in areas without\nrelevant reference information. To solve these issues, we propose a\nchange-aware diffusion model named Ref-Diff for RefSR, using the land cover\nchange priors to guide the denoising process explicitly. Specifically, we\ninject the priors into the denoising model to improve the utilization of\nreference information in unchanged areas and regulate the reconstruction of\nsemantically relevant content in changed areas. With this powerful guidance, we\ndecouple the semantics-guided denoising and reference texture-guided denoising\nprocesses to improve the model performance. Extensive experiments demonstrate\nthe superior effectiveness and robustness of the proposed method compared with\nstate-of-the-art RefSR methods in both quantitative and qualitative\nevaluations. 
The code and data are available at\nhttps://github.com/dongrunmin/RefDiff.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Runmin Dong", "Shuai Yuan", "Bin Luo", "Mengxuan Chen", "Jinxiao Zhang", "Lixian Zhang", "Weijia Li", "Juepeng Zheng", "Haohuan Fu"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f361"}, "filepath": "data/2312.04819.png", "tags": [], "_media_type": "image", "_rand": 0.9996270750940646, "arXiv_link": "https://arxiv.org/abs/2312.04819", "other_link": "https://github.com/NJU-RL/ACORM](https://github.com/NJU-RL/ACORM).", "title": "MaskCLR: Attention-Guided Contrastive Learning for Robust Action Representation Learning", "abstract": "Real-world multi-agent tasks usually involve dynamic team composition with\nthe emergence of roles, which should also be a key to efficient cooperation in\nmulti-agent reinforcement learning (MARL). Drawing inspiration from the\ncorrelation between roles and agent's behavior patterns, we propose a novel\nframework of **A**ttention-guided **CO**ntrastive **R**ole representation\nlearning for **M**ARL (**ACORM**) to promote behavior heterogeneity, knowledge\ntransfer, and skillful coordination across agents. First, we introduce mutual\ninformation maximization to formalize role representation learning, derive a\ncontrastive learning objective, and concisely approximate the distribution of\nnegative pairs. Second, we leverage an attention mechanism to prompt the global\nstate to attend to learned role representations in value decomposition,\nimplicitly guiding agent coordination in a skillful role space to yield more\nexpressive credit assignment. Experiments on challenging StarCraft II\nmicromanagement and Google research football tasks demonstrate the\nstate-of-the-art performance of our method and its advantages over existing\napproaches. Our code is available at\n[https://github.com/NJU-RL/ACORM](https://github.com/NJU-RL/ACORM).", "keywords": [], "authors_list": ["Mohamed Abdelfattah", "Mariam Hassan", "Alex Alahi"], "category_name": "Multiagent Systems", "all_categories": ["Multiagent Systems"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f362"}, "filepath": "data/2312.07246.png", "tags": [], "_media_type": "image", "_rand": 0.9997003886700486, "arXiv_link": "https://arxiv.org/abs/2312.07246", "other_link": "", "title": "Unifying Correspondence, Pose and NeRF for Pose-Free Novel View Synthesis from Stereo Pairs", "abstract": "This work delves into the task of pose-free novel view synthesis from stereo\npairs, a challenging and pioneering task in 3D vision. Our innovative\nframework, unlike any before, seamlessly integrates 2D correspondence matching,\ncamera pose estimation, and NeRF rendering, fostering a synergistic enhancement\nof these tasks. We achieve this through designing an architecture that utilizes\na shared representation, which serves as a foundation for enhanced 3D geometry\nunderstanding. Capitalizing on the inherent interplay between the tasks, our\nunified framework is trained end-to-end with the proposed training strategy to\nimprove overall model accuracy. 
Through extensive evaluations across diverse\nindoor and outdoor scenes from two real-world datasets, we demonstrate that our\napproach achieves substantial improvement over previous methodologies,\nespecially in scenarios characterized by extreme viewpoint changes and the\nabsence of accurate camera poses.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Sunghwan Hong", "Jaewoo Jung", "Heeseong Shin", "Jiaolong Yang", "Chong Luo", "Seungryong Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f363"}, "filepath": "data/2403.11193.png", "tags": [], "_media_type": "image", "_rand": 0.9991403177788256, "arXiv_link": "https://arxiv.org/abs/2403.11193", "other_link": "https://github.com/aeolusguan/NMRF", "title": "Neural Markov Random Field for Stereo Matching", "abstract": "Stereo matching is a core task for many computer vision and robotics\napplications. Despite their dominance in traditional stereo methods, the\nhand-crafted Markov Random Field (MRF) models lack sufficient modeling accuracy\ncompared to end-to-end deep models. While deep learning representations have\ngreatly improved the unary terms of the MRF models, the overall accuracy is\nstill severely limited by the hand-crafted pairwise terms and message passing.\nTo address these issues, we propose a neural MRF model, where both potential\nfunctions and message passing are designed using data-driven neural networks.\nOur fully data-driven model is built on the foundation of variational inference\ntheory, to prevent convergence issues and retain stereo MRF's graph inductive\nbias. To make the inference tractable and scale well to high-resolution images,\nwe also propose a Disparity Proposal Network (DPN) to adaptively prune the\nsearch space of disparity. The proposed approach ranks $1^{st}$ on both KITTI\n2012 and 2015 leaderboards among all published methods while running faster\nthan 100 ms. This approach significantly outperforms prior global methods,\ne.g., lowering D1 metric by more than 50% on KITTI 2015. In addition, our\nmethod exhibits strong cross-domain generalization and can recover sharp edges.\nThe codes at https://github.com/aeolusguan/NMRF", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Tongfan Guan", "Chen Wang", "Yun-Hui Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f364"}, "filepath": "data/2309.11281.png", "tags": [], "_media_type": "image", "_rand": 0.9999260665451422, "arXiv_link": "https://arxiv.org/abs/2309.11281", "other_link": "", "title": "Language-driven Object Fusion into Neural Radiance Fields with Pose-Conditioned Dataset Updates", "abstract": "Neural radiance field is an emerging rendering method that generates\nhigh-quality multi-view consistent images from a neural scene representation\nand volume rendering. Although neural radiance field-based techniques are\nrobust for scene reconstruction, their ability to add or remove objects remains\nlimited. This paper proposes a new language-driven approach for object\nmanipulation with neural radiance fields through dataset updates. 
Specifically,\nto insert a new foreground object represented by a set of multi-view images\ninto a background radiance field, we use a text-to-image diffusion model to\nlearn and generate combined images that fuse the object of interest into the\ngiven background across views. These combined images are then used for refining\nthe background radiance field so that we can render view-consistent images\ncontaining both the object and the background. To ensure view consistency, we\npropose a dataset updates strategy that prioritizes radiance field training\nwith camera views close to the already-trained views prior to propagating the\ntraining to remaining views. We show that under the same dataset updates\nstrategy, we can easily adapt our method for object insertion using data from\ntext-to-3D models as well as object removal. Experimental results show that our\nmethod generates photorealistic images of the edited scenes, and outperforms\nstate-of-the-art methods in 3D reconstruction and neural radiance field\nblending.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Ka Chun SHUM", "Jaeyeon Kim", "Binh-Son Hua", "Thanh Nguyen", "Sai-Kit Yeung"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f365"}, "filepath": "data/2403.10519.png", "tags": [], "_media_type": "image", "_rand": 0.9997267236694427, "arXiv_link": "https://arxiv.org/abs/2403.10519", "other_link": "", "title": "Frozen Feature Augmentation for Few-Shot Image Classification", "abstract": "Training a linear classifier or lightweight model on top of pretrained vision\nmodel outputs, so-called 'frozen features', leads to impressive performance on\na number of downstream few-shot tasks. Currently, frozen features are not\nmodified during training. On the other hand, when networks are trained directly\non images, data augmentation is a standard recipe that improves performance\nwith no substantial overhead. In this paper, we conduct an extensive pilot\nstudy on few-shot image classification that explores applying data\naugmentations in the frozen feature space, dubbed 'frozen feature augmentation\n(FroFA)', covering twenty augmentations in total. Our study demonstrates that\nadopting a deceptively simple pointwise FroFA, such as brightness, can improve\nfew-shot performance consistently across three network architectures, three\nlarge pretraining datasets, and eight transfer datasets.", "keywords": [], "authors_list": ["Andreas B\u00e4r", "Neil Houlsby", "Mostafa Dehghani", "Manoj Kumar"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f366"}, "filepath": "data/2311.15011.png", "tags": [], "_media_type": "image", "_rand": 0.9992862229994383, "arXiv_link": "https://arxiv.org/abs/2311.15011", "other_link": "https://github.com/Sssssuperior/VSCode.", "title": "VSCode: General Visual Salient and Camouflaged Object Detection with 2D Prompt Learning", "abstract": "Salient object detection (SOD) and camouflaged object detection (COD) are\nrelated yet distinct binary mapping tasks. These tasks involve multiple\nmodalities, sharing commonalities and unique cues. 
Existing research often\nemploys intricate task-specific specialist models, potentially leading to\nredundancy and suboptimal results. We introduce VSCode, a generalist model with\nnovel 2D prompt learning, to jointly address four SOD tasks and three COD\ntasks. We utilize VST as the foundation model and introduce 2D prompts within\nthe encoder-decoder architecture to learn domain and task-specific knowledge on\ntwo separate dimensions. A prompt discrimination loss helps disentangle\npeculiarities to benefit model optimization. VSCode outperforms\nstate-of-the-art methods across six tasks on 26 datasets and exhibits zero-shot\ngeneralization to unseen tasks by combining 2D prompts, such as RGB-D COD.\nSource code has been available at https://github.com/Sssssuperior/VSCode.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Ziyang Luo", "Nian Liu", "Wangbo Zhao", "Xuguang Yang", "Dingwen Zhang", "Deng-Ping Fan", "Fahad Shahbaz Khan", "Junwei Han"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f367"}, "filepath": "data/2312.00068.png", "tags": [], "_media_type": "image", "_rand": 0.9991593805244732, "arXiv_link": "https://arxiv.org/abs/2312.00068", "other_link": "https://kshitijbhat.github.io/glidr", "title": "GLiDR: Topologically Regularized Graph Generative Network for Sparse LiDAR Point Clouds", "abstract": "Sparse LiDAR point clouds cause severe loss of detail of static structures\nand reduce the density of static points available for navigation. Reduced\ndensity can be detrimental to navigation under several scenarios. We observe\nthat despite high sparsity, in most cases, the global topology of LiDAR\noutlining the static structures can be inferred. We utilize this property to\nobtain a backbone skeleton of a LiDAR scan in the form of a single connected\ncomponent that is a proxy to its global topology. We utilize the backbone to\naugment new points along static structures to overcome sparsity. Newly\nintroduced points could correspond to existing static structures or to static\npoints that were earlier obstructed by dynamic objects. To the best of our\nknowledge, we are the first to use such a strategy for sparse LiDAR point\nclouds. Existing solutions close to our approach fail to identify and preserve\nthe global static LiDAR topology and generate sub-optimal points. We propose\nGLiDR, a Graph Generative network that is topologically regularized using\n0-dimensional Persistent Homology ($\\mathcal{PH}$) constraints. This enables\nGLiDR to introduce newer static points along a topologically consistent global\nstatic LiDAR backbone. GLiDR generates precise static points using $32\\times$\nsparser dynamic scans and performs better than the baselines across three\ndatasets. GLiDR generates a valuable byproduct - an accurate binary\nsegmentation mask of static and dynamic objects that are helpful for navigation\nplanning and safety in constrained environments. The newly introduced static\npoints allow GLiDR to outperform LiDAR-based navigation using SLAM in several\nsettings. 
Source code is available at https://kshitijbhat.github.io/glidr", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Prashant Kumar", "Kshitij Madhav Bhat", "Vedang Bhupesh Shenvi Nadkarni", "Prem Kalra"], "category_name": "Robotics", "all_categories": ["Robotics", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f368"}, "filepath": "data/2403.10335.png", "tags": [], "_media_type": "image", "_rand": 0.9998026147424879, "arXiv_link": "https://arxiv.org/abs/2403.10335", "other_link": "https://github.com/iSEE-Laboratory/NECA.", "title": "NECA: Neural Customizable Human Avatar", "abstract": "Human avatar has become a novel type of 3D asset with various applications.\nIdeally, a human avatar should be fully customizable to accommodate different\nsettings and environments. In this work, we introduce NECA, an approach capable\nof learning versatile human representation from monocular or sparse-view\nvideos, enabling granular customization across aspects such as pose, shadow,\nshape, lighting and texture. The core of our approach is to represent humans in\ncomplementary dual spaces and predict disentangled neural fields of geometry,\nalbedo, shadow, as well as an external lighting, from which we are able to\nderive realistic rendering with high-frequency details via volumetric\nrendering. Extensive experiments demonstrate the advantage of our method over\nthe state-of-the-art methods in photorealistic rendering, as well as various\nediting tasks such as novel pose synthesis and relighting. The code is\navailable at https://github.com/iSEE-Laboratory/NECA.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Junjin Xiao", "Qing Zhang", "Zhan Xu", "Wei-Shi Zheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f369"}, "filepath": "data/2403.03477.png", "tags": [], "_media_type": "image", "_rand": 0.9994911594647107, "arXiv_link": "https://arxiv.org/abs/2403.03477", "other_link": "https://github.com/jordangong/CoMasTRe.", "title": "Continual Segmentation with Disentangled Objectness Learning and Class Recognition", "abstract": "Most continual segmentation methods tackle the problem as a per-pixel\nclassification task. However, such a paradigm is very challenging, and we find\nquery-based segmenters with built-in objectness have inherent advantages\ncompared with per-pixel ones, as objectness has strong transfer ability and\nforgetting resistance. Based on these findings, we propose CoMasTRe by\ndisentangling continual segmentation into two stages: forgetting-resistant\ncontinual objectness learning and well-researched continual classification.\nCoMasTRe uses a two-stage segmenter learning class-agnostic mask proposals at\nthe first stage and leaving recognition to the second stage. During continual\nlearning, a simple but effective distillation is adopted to strengthen\nobjectness. To further mitigate the forgetting of old classes, we design a\nmulti-label class distillation strategy suited for segmentation. We assess the\neffectiveness of CoMasTRe on PASCAL VOC and ADE20K. 
Extensive experiments show\nthat our method outperforms per-pixel and query-based methods on both datasets.\nCode will be available at https://github.com/jordangong/CoMasTRe.", "keywords": [], "authors_list": ["Yizheng Gong", "Siyue Yu", "Xiaoyang Wang", "Jimin Xiao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f36a"}, "filepath": "data/2311.15977.png", "tags": [], "_media_type": "image", "_rand": 0.9992457708558146, "arXiv_link": "https://arxiv.org/abs/2311.15977", "other_link": "https://yan-xia.github.io/projects/text2loc/}.", "title": "Text2Loc: 3D Point Cloud Localization from Natural Language", "abstract": "We tackle the problem of 3D point cloud localization based on a few natural\nlinguistic descriptions and introduce a novel neural network, Text2Loc, that\nfully interprets the semantic relationship between points and text. Text2Loc\nfollows a coarse-to-fine localization pipeline: text-submap global place\nrecognition, followed by fine localization. In global place recognition,\nrelational dynamics among each textual hint are captured in a hierarchical\ntransformer with max-pooling (HTM), whereas a balance between positive and\nnegative pairs is maintained using text-submap contrastive learning. Moreover,\nwe propose a novel matching-free fine localization method to further refine the\nlocation predictions, which completely removes the need for complicated\ntext-instance matching and is lighter, faster, and more accurate than previous\nmethods. Extensive experiments show that Text2Loc improves the localization\naccuracy by up to $2\\times$ over the state-of-the-art on the KITTI360Pose\ndataset. Our project page is publicly available at\n\\url{https://yan-xia.github.io/projects/text2loc/}.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models", "Scene analysis and understanding", "Efficient and scalable vision"], "authors_list": ["Yan Xia", "Letian Shi", "Zifeng Ding", "Jo\u00e3o F. Henriques", "Daniel Cremers"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f36b"}, "filepath": "data/2403.17334.png", "tags": [], "_media_type": "image", "_rand": 0.9990140106484059, "arXiv_link": "https://arxiv.org/abs/2403.17334", "other_link": "", "title": "OVER-NAV: Elevating Iterative Vision-and-Language Navigation with Open-Vocabulary Detection and StructurEd Representation", "abstract": "Recent advances in Iterative Vision-and-Language Navigation (IVLN) introduce\na more meaningful and practical paradigm of VLN by maintaining the agent's\nmemory across tours of scenes. Although the long-term memory aligns better with\nthe persistent nature of the VLN task, it poses more challenges on how to\nutilize the highly unstructured navigation memory with extremely sparse\nsupervision. Towards this end, we propose OVER-NAV, which aims to go over and\nbeyond the current arts of IVLN techniques. In particular, we propose to\nincorporate LLMs and open-vocabulary detectors to distill key information and\nestablish correspondence between multi-modal signals. 
Such a mechanism\nintroduces reliable cross-modal supervision and enables on-the-fly\ngeneralization to unseen scenes without the need of extra annotation and\nre-training. To fully exploit the interpreted navigation data, we further\nintroduce a structured representation, coded Omnigraph, to effectively\nintegrate multi-modal information along the tour. Accompanied with a novel\nomnigraph fusion mechanism, OVER-NAV is able to extract the most relevant\nknowledge from omnigraph for a more accurate navigating action. In addition,\nOVER-NAV seamlessly supports both discrete and continuous environments under a\nunified framework. We demonstrate the superiority of OVER-NAV in extensive\nexperiments.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Ganlong Zhao", "Guanbin Li", "Weikai Chen", "Yizhou Yu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f36c"}, "filepath": "data/2403.16510.png", "tags": [], "_media_type": "image", "_rand": 0.9992882139538545, "arXiv_link": "https://arxiv.org/abs/2403.16510", "other_link": "https://github.com/ICTMCG/Make-Your-Anchor}.", "title": "Make-Your-Anchor: A Diffusion-based 2D Avatar Generation Framework", "abstract": "Despite the remarkable process of talking-head-based avatar-creating\nsolutions, directly generating anchor-style videos with full-body motions\nremains challenging. In this study, we propose Make-Your-Anchor, a novel system\nnecessitating only a one-minute video clip of an individual for training,\nsubsequently enabling the automatic generation of anchor-style videos with\nprecise torso and hand movements. Specifically, we finetune a proposed\nstructure-guided diffusion model on input video to render 3D mesh conditions\ninto human appearances. We adopt a two-stage training strategy for the\ndiffusion model, effectively binding movements with specific appearances. To\nproduce arbitrary long temporal video, we extend the 2D U-Net in the frame-wise\ndiffusion model to a 3D style without additional training cost, and a simple\nyet effective batch-overlapped temporal denoising module is proposed to bypass\nthe constraints on video length during inference. Finally, a novel\nidentity-specific face enhancement module is introduced to improve the visual\nquality of facial regions in the output videos. Comparative experiments\ndemonstrate the effectiveness and superiority of the system in terms of visual\nquality, temporal coherence, and identity preservation, outperforming SOTA\ndiffusion/non-diffusion methods. 
Project page:\n\\url{https://github.com/ICTMCG/Make-Your-Anchor}.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Ziyao Huang", "Fan Tang", "Yong Zhang", "Xiaodong Cun", "Juan Cao", "Jintao Li", "Tong-yee Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f36d"}, "filepath": "data/2403.16002.png", "tags": [], "_media_type": "image", "_rand": 0.9990451344957121, "arXiv_link": "https://arxiv.org/abs/2403.16002", "other_link": "https://github.com/hoqolo/SDSTrack.", "title": "SDSTrack: Self-Distillation Symmetric Adapter Learning for Multi-Modal Visual Object Tracking", "abstract": "Multimodal Visual Object Tracking (VOT) has recently gained significant\nattention due to its robustness. Early research focused on fully fine-tuning\nRGB-based trackers, which was inefficient and lacked generalized representation\ndue to the scarcity of multimodal data. Therefore, recent studies have utilized\nprompt tuning to transfer pre-trained RGB-based trackers to multimodal data.\nHowever, the modality gap limits pre-trained knowledge recall, and the\ndominance of the RGB modality persists, preventing the full utilization of\ninformation from other modalities. To address these issues, we propose a novel\nsymmetric multimodal tracking framework called SDSTrack. We introduce\nlightweight adaptation for efficient fine-tuning, which directly transfers the\nfeature extraction ability from RGB to other domains with a small number of\ntrainable parameters and integrates multimodal features in a balanced,\nsymmetric manner. Furthermore, we design a complementary masked patch\ndistillation strategy to enhance the robustness of trackers in complex\nenvironments, such as extreme weather, poor imaging, and sensor failure.\nExtensive experiments demonstrate that SDSTrack outperforms state-of-the-art\nmethods in various multimodal tracking scenarios, including RGB+Depth,\nRGB+Thermal, and RGB+Event tracking, and exhibits impressive results in extreme\nconditions. Our source code is available at https://github.com/hoqolo/SDSTrack.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Xiaojun Hou", "Jiazheng Xing", "Yijie Qian", "Yaowei Guo", "Shuo Xin", "Junhao Chen", "Kai Tang", "Mengmeng Wang", "Zhengkai Jiang", "Liang Liu", "Yong Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f36e"}, "filepath": "data/2311.16728.png", "tags": [], "_media_type": "image", "_rand": 0.9997004884670816, "arXiv_link": "https://arxiv.org/abs/2311.16728", "other_link": "", "title": "Photo-SLAM: Real-time Simultaneous Localization and Photorealistic Mapping for Monocular, Stereo, and RGB-D Cameras", "abstract": "The integration of neural rendering and the SLAM system recently showed\npromising results in joint localization and photorealistic view reconstruction.\nHowever, existing methods, fully relying on implicit representations, are so\nresource-hungry that they cannot run on portable devices, which deviates from\nthe original intention of SLAM. In this paper, we present Photo-SLAM, a novel\nSLAM framework with a hyper primitives map. 
Specifically, we simultaneously\nexploit explicit geometric features for localization and learn implicit\nphotometric features to represent the texture information of the observed\nenvironment. In addition to actively densifying hyper primitives based on\ngeometric features, we further introduce a Gaussian-Pyramid-based training\nmethod to progressively learn multi-level features, enhancing photorealistic\nmapping performance. The extensive experiments with monocular, stereo, and\nRGB-D datasets prove that our proposed system Photo-SLAM significantly\noutperforms current state-of-the-art SLAM systems for online photorealistic\nmapping, e.g., PSNR is 30% higher and rendering speed is hundreds of times\nfaster in the Replica dataset. Moreover, the Photo-SLAM can run at real-time\nspeed using an embedded platform such as Jetson AGX Orin, showing the potential\nof robotics applications.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Huajian Huang", "Longwei Li", "Hui Cheng", "Sai-Kit Yeung"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f36f"}, "filepath": "data/2404.09216.png", "tags": [], "_media_type": "image", "_rand": 0.9996853835537819, "arXiv_link": "https://arxiv.org/abs/2404.09216", "other_link": "", "title": "Learning Background Prompts to Discover Implicit Knowledge for Open Vocabulary Object Detection", "abstract": "Existing open-vocabulary object detectors typically require a predefined set\nof categories from users, significantly confining their application scenarios.\nIn this paper, we introduce DetCLIPv3, a high-performing detector that excels\nnot only at both open-vocabulary object detection, but also generating\nhierarchical labels for detected objects. DetCLIPv3 is characterized by three\ncore designs: 1. Versatile model architecture: we derive a robust open-set\ndetection framework which is further empowered with generation ability via the\nintegration of a caption head. 2. High information density data: we develop an\nauto-annotation pipeline leveraging visual large language model to refine\ncaptions for large-scale image-text pairs, providing rich, multi-granular\nobject labels to enhance the training. 3. Efficient training strategy: we\nemploy a pre-training stage with low-resolution inputs that enables the object\ncaptioner to efficiently learn a broad spectrum of visual concepts from\nextensive image-text paired data. This is followed by a fine-tuning stage that\nleverages a small number of high-resolution samples to further enhance\ndetection performance. With these effective designs, DetCLIPv3 demonstrates\nsuperior open-vocabulary detection performance, \\eg, our Swin-T backbone model\nachieves a notable 47.0 zero-shot fixed AP on the LVIS minival benchmark,\noutperforming GLIPv2, GroundingDINO, and DetCLIPv2 by 18.0/19.6/6.6 AP,\nrespectively. 
DetCLIPv3 also achieves a state-of-the-art 19.7 AP in dense\ncaptioning task on VG dataset, showcasing its strong generative capability.", "keywords": ["Multimodal models and vision-language models", "Efficient and scalable vision"], "authors_list": ["Jiaming Li", "Jiacheng Zhang", "Jichang Li", "Ge Li", "Si Liu", "Liang Lin", "Guanbin Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f370"}, "filepath": "data/2202.08449.png", "tags": [], "_media_type": "image", "_rand": 0.9992100348709052, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2202.08449", "other_link": "https://ai4ce.github.io/V2X-Sim/}.", "title": "Multiagent Multitraversal Multimodal Self-Driving: Open MARS Dataset", "abstract": "Vehicle-to-everything (V2X) communication techniques enable the collaboration\nbetween vehicles and many other entities in the neighboring environment, which\ncould fundamentally improve the perception system for autonomous driving.\nHowever, the lack of a public dataset significantly restricts the research\nprogress of collaborative perception. To fill this gap, we present V2X-Sim, a\ncomprehensive simulated multi-agent perception dataset for V2X-aided autonomous\ndriving. V2X-Sim provides: (1) \\hl{multi-agent} sensor recordings from the\nroad-side unit (RSU) and multiple vehicles that enable collaborative\nperception, (2) multi-modality sensor streams that facilitate multi-modality\nperception, and (3) diverse ground truths that support various perception\ntasks. Meanwhile, we build an open-source testbed and provide a benchmark for\nthe state-of-the-art collaborative perception algorithms on three tasks,\nincluding detection, tracking and segmentation. V2X-Sim seeks to stimulate\ncollaborative perception research for autonomous driving before realistic\ndatasets become widely available. Our dataset and code are available at\n\\url{https://ai4ce.github.io/V2X-Sim/}.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Yiming Li", "Zhiheng Li", "Nuo Chen", "Moonjun Gong", "Zonglin Lyu", "Zehong Wang", "Peili Jiang", "Chen Feng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f371"}, "filepath": "data/2404.07949.png", "tags": [], "_media_type": "image", "_rand": 0.9991009053611914, "arXiv_link": "https://arxiv.org/abs/2404.07949", "other_link": "https://chengzhag.github.io/publication/panfusion.", "title": "Taming Stable Diffusion for Text to 360$^{\\circ}$ Panorama Image Generation", "abstract": "Generative models, e.g., Stable Diffusion, have enabled the creation of\nphotorealistic images from text prompts. Yet, the generation of 360-degree\npanorama images from text remains a challenge, particularly due to the dearth\nof paired text-panorama data and the domain gap between panorama and\nperspective images. In this paper, we introduce a novel dual-branch diffusion\nmodel named PanFusion to generate a 360-degree image from a text prompt. We\nleverage the stable diffusion model as one branch to provide prior knowledge in\nnatural image generation and register it to another panorama branch for\nholistic image generation. 
We propose a unique cross-attention mechanism with\nprojection awareness to minimize distortion during the collaborative denoising\nprocess. Our experiments validate that PanFusion surpasses existing methods\nand, thanks to its dual-branch structure, can integrate additional constraints\nlike room layout for customized panorama outputs. Code is available at\nhttps://chengzhag.github.io/publication/panfusion.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Cheng Zhang", "Qianyi Wu", "Camilo Cruz Gambardella", "Xiaoshui Huang", "Dinh Phung", "Wanli Ouyang", "Jianfei Cai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f372"}, "filepath": "data/2311.10950.png", "tags": [], "_media_type": "image", "_rand": 0.999422639194835, "arXiv_link": "https://arxiv.org/abs/2311.10950", "other_link": "", "title": "Snapshot Lidar: Fourier embedding of amplitude and phase for single-image depth reconstruction", "abstract": "The realm of classical phase retrieval concerns itself with the arduous task\nof recovering a signal from its Fourier magnitude measurements, which are\nfraught with inherent ambiguities. A single-exposure intensity measurement is\ncommonly deemed insufficient for the reconstruction of the primal signal, given\nthat the absent phase component is imperative for the inverse transformation.\nIn this work, we present a novel single-shot phase retrieval paradigm from a\nfractional Fourier transform (FrFT) perspective, which involves integrating the\nFrFT-based physical measurement model within a self-supervised reconstruction\nscheme. Specifically, the proposed FrFT-based measurement model addresses the\naliasing artifacts problem in the numerical calculation of Fresnel diffraction,\nfeaturing adaptability to both short-distance and long-distance propagation\nscenarios. Moreover, the intensity measurement in the FrFT domain proves highly\neffective in alleviating the ambiguities of phase retrieval and relaxing the\nprevious conditions on oversampled or multiple measurements in the Fourier\ndomain. Furthermore, the proposed self-supervised reconstruction approach\nharnesses the fast discrete algorithm of FrFT alongside untrained neural\nnetwork priors, thereby attaining preeminent results. 
Through numerical\nsimulations, we demonstrate that both amplitude and phase objects can be\neffectively retrieved from a single-shot intensity measurement using the\nproposed approach and provide a promising technique for support-free coherent\ndiffraction imaging.", "keywords": [], "authors_list": ["Sarah Friday", "Yunzi Shi", "Yaswanth Kumar Cherivirala", "Vishwanath Saragadam", "Adithya Pediredla"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f373"}, "filepath": "data/2404.17825.png", "tags": [], "_media_type": "image", "_rand": 0.9991039531053807, "arXiv_link": "https://arxiv.org/abs/2404.17825", "other_link": "", "title": "ODCR: Orthogonal Decoupling Contrastive Regularization for Unpaired Image Dehazing", "abstract": "Unpaired image dehazing (UID) holds significant research importance due to\nthe challenges in acquiring haze/clear image pairs with identical backgrounds.\nThis paper proposes a novel method for UID named Orthogonal Decoupling\nContrastive Regularization (ODCR). Our method is grounded in the assumption\nthat an image consists of both haze-related features, which influence the\ndegree of haze, and haze-unrelated features, such as texture and semantic\ninformation. ODCR aims to ensure that the haze-related features of the dehazing\nresult closely resemble those of the clear image, while the haze-unrelated\nfeatures align with the input hazy image. To accomplish the motivation,\nOrthogonal MLPs optimized geometrically on the Stiefel manifold are proposed,\nwhich can project image features into an orthogonal space, thereby reducing the\nrelevance between different features. Furthermore, a task-driven Depth-wise\nFeature Classifier (DWFC) is proposed, which assigns weights to the orthogonal\nfeatures based on the contribution of each channel's feature in predicting\nwhether the feature source is hazy or clear in a self-supervised fashion.\nFinally, a Weighted PatchNCE (WPNCE) loss is introduced to achieve the pulling\nof haze-related features in the output image toward those of clear images,\nwhile bringing haze-unrelated features close to those of the hazy input.\nExtensive experiments demonstrate the superior performance of our ODCR method\non UID.", "keywords": [], "authors_list": ["Zhongze Wang", "Haitao Zhao", "Jingchao Peng", "Lujian Yao", "Kaijie Zhao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f374"}, "filepath": "data/2312.07061.png", "tags": [], "_media_type": "image", "_rand": 0.9996262712810181, "arXiv_link": "https://arxiv.org/abs/2312.07061", "other_link": "https://github.com/JingyangXiang/MaxQ}.", "title": "MaxQ: Multi-Axis Query for N:M Sparsity Network", "abstract": "N:M sparsity has received increasing attention due to its remarkable\nperformance and latency trade-off compared with structured and unstructured\nsparsity. However, existing N:M sparsity methods do not differentiate the\nrelative importance of weights among blocks and leave important weights\nunderappreciated. Besides, they directly apply N:M sparsity to the whole\nnetwork, which will cause severe information loss. Thus, they are still\nsub-optimal. 
In this paper, we propose an efficient and effective Multi-Axis\nQuery methodology, dubbed as MaxQ, to rectify these problems. During the\ntraining, MaxQ employs a dynamic approach to generate soft N:M masks,\nconsidering the weight importance across multiple axes. This method enhances\nthe weights with more importance and ensures more effective updates. Meanwhile,\na sparsity strategy that gradually increases the percentage of N:M weight\nblocks is applied, which allows the network to heal from the pruning-induced\ndamage progressively. During the runtime, the N:M soft masks can be precomputed\nas constants and folded into weights without causing any distortion to the\nsparse pattern and incurring additional computational overhead. Comprehensive\nexperiments demonstrate that MaxQ achieves consistent improvements across\ndiverse CNN architectures in various computer vision tasks, including image\nclassification, object detection and instance segmentation. For ResNet50 with\n1:16 sparse pattern, MaxQ can achieve 74.6\\% top-1 accuracy on ImageNet and\nimprove by over 2.8\\% over the state-of-the-art. Codes and checkpoints are\navailable at \\url{https://github.com/JingyangXiang/MaxQ}.", "keywords": ["Efficient and scalable vision", "Efficient and scalable vision"], "authors_list": ["Jingyang Xiang", "Siqi Li", "Junhao Chen", "Zhuangzhi Chen", "Tianxin Huang", "Linpeng Peng", "Yong Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f375"}, "filepath": "data/2403.07636v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991272518079986, "arXiv_link": "https://arxiv.org/abs/2403.07636v2", "other_link": "https://github.com/HieuPhan33/MAVL.", "title": "Decomposing Disease Descriptions for Enhanced Pathology Detection: A Multi-Aspect Vision-Language Pre-training Framework", "abstract": "Medical vision language pre-training (VLP) has emerged as a frontier of\nresearch, enabling zero-shot pathological recognition by comparing the query\nimage with the textual descriptions for each disease. Due to the complex\nsemantics of biomedical texts, current methods struggle to align medical images\nwith key pathological findings in unstructured reports. This leads to the\nmisalignment with the target disease's textual representation. In this paper,\nwe introduce a novel VLP framework designed to dissect disease descriptions\ninto their fundamental aspects, leveraging prior knowledge about the visual\nmanifestations of pathologies. This is achieved by consulting a large language\nmodel and medical experts. Integrating a Transformer module, our approach\naligns an input image with the diverse elements of a disease, generating\naspect-centric image representations. By consolidating the matches from each\naspect, we improve the compatibility between an image and its associated\ndisease. Additionally, capitalizing on the aspect-oriented representations, we\npresent a dual-head Transformer tailored to process known and unknown diseases,\noptimizing the comprehensive detection efficacy. Conducting experiments on\nseven downstream datasets, ours improves the accuracy of recent methods by up\nto 8.56% and 17.0% for seen and unseen categories, respectively. 
Our code is\nreleased at https://github.com/HieuPhan33/MAVL.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Vu Minh Hieu Phan", "Yutong Xie", "Yuankai Qi", "Lingqiao Liu", "Liyang Liu", "Bowen Zhang", "Zhibin Liao", "Qi Wu", "Minh-Son To", "Johan Verjans"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f376"}, "filepath": "data/2403.01482.png", "tags": [], "_media_type": "image", "_rand": 0.9996275586439034, "arXiv_link": "https://arxiv.org/abs/2403.01482", "other_link": "", "title": "EAGLE: Eigen Aggregation Learning for Object-Centric Unsupervised Semantic Segmentation", "abstract": "Semantic segmentation has innately relied on extensive pixel-level annotated\ndata, leading to the emergence of unsupervised methodologies. Among them,\nleveraging self-supervised Vision Transformers for unsupervised semantic\nsegmentation (USS) has been making steady progress with expressive deep\nfeatures. Yet, for semantically segmenting images with complex objects, a\npredominant challenge remains: the lack of explicit object-level semantic\nencoding in patch-level features. This technical limitation often leads to\ninadequate segmentation of complex objects with diverse structures. To address\nthis gap, we present a novel approach, EAGLE, which emphasizes object-centric\nrepresentation learning for unsupervised semantic segmentation. Specifically,\nwe introduce EiCue, a spectral technique providing semantic and structural cues\nthrough an eigenbasis derived from the semantic similarity matrix of deep image\nfeatures and color affinity from an image. Further, by incorporating our\nobject-centric contrastive loss with EiCue, we guide our model to learn\nobject-level representations with intra- and inter-image object-feature\nconsistency, thereby enhancing semantic accuracy. Extensive experiments on\nCOCO-Stuff, Cityscapes, and Potsdam-3 datasets demonstrate the state-of-the-art\nUSS results of EAGLE with accurate and consistent semantic segmentation across\ncomplex scenes.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Chanyoung Kim", "Woojung Han", "Dayun Ju", "Seong Jae Hwang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f377"}, "filepath": "data/2312.01725.png", "tags": [], "_media_type": "image", "_rand": 0.9994048294795368, "arXiv_link": "https://arxiv.org/abs/2312.01725", "other_link": "https://github.com/rlawjdghek/StableVITON.", "title": "StableVITON: Learning Semantic Correspondence with Latent Diffusion Model for Virtual Try-On", "abstract": "Given a clothing image and a person image, an image-based virtual try-on aims\nto generate a customized image that appears natural and accurately reflects the\ncharacteristics of the clothing image. In this work, we aim to expand the\napplicability of the pre-trained diffusion model so that it can be utilized\nindependently for the virtual try-on task.The main challenge is to preserve the\nclothing details while effectively utilizing the robust generative capability\nof the pre-trained model. 
In order to tackle these issues, we propose\nStableVITON, learning the semantic correspondence between the clothing and the\nhuman body within the latent space of the pre-trained diffusion model in an\nend-to-end manner. Our proposed zero cross-attention blocks not only preserve\nthe clothing details by learning the semantic correspondence but also generate\nhigh-fidelity images by utilizing the inherent knowledge of the pre-trained\nmodel in the warping process. Through our proposed novel attention total\nvariation loss and applying augmentation, we achieve the sharp attention map,\nresulting in a more precise representation of clothing details. StableVITON\noutperforms the baselines in qualitative and quantitative evaluation, showing\npromising quality in arbitrary person images. Our code is available at\nhttps://github.com/rlawjdghek/StableVITON.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Jeongho Kim", "Gyojung Gu", "Minho Park", "Sunghyun Park", "Jaegul Choo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f378"}, "filepath": "data/2310.00944.png", "tags": [], "_media_type": "image", "_rand": 0.9993898265310004, "arXiv_link": "https://arxiv.org/abs/2310.00944", "other_link": "", "title": "Towards Robust 3D Object Detection with LiDAR and 4D Radar Fusion in Various Weather Conditions", "abstract": "LiDAR sensors are used in autonomous driving applications to accurately\nperceive the environment. However, they are affected by adverse weather\nconditions such as snow, fog, and rain. These everyday phenomena introduce\nunwanted noise into the measurements, severely degrading the performance of\nLiDAR-based perception systems. In this work, we propose a framework for\nimproving the robustness of LiDAR-based 3D object detectors against road spray.\nOur approach uses a state-of-the-art adverse weather detection network to\nfilter out spray from the LiDAR point cloud, which is then used as input for\nthe object detector. In this way, the detected objects are less affected by the\nadverse weather in the scene, resulting in a more accurate perception of the\nenvironment. In addition to adverse weather filtering, we explore the use of\nradar targets to further filter false positive detections. Tests on real-world\ndata show that our approach improves the robustness to road spray of several\npopular 3D object detectors.", "keywords": [], "authors_list": ["Yujeong Chae", "Hyeonseong Kim", "Kuk-Jin Yoon"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f379"}, "filepath": "data/2312.06742.png", "tags": [], "_media_type": "image", "_rand": 0.999447703072408, "arXiv_link": "https://arxiv.org/abs/2312.06742", "other_link": "https://github.com/kakaobrain/honeybee.", "title": "Honeybee: Locality-enhanced Projector for Multimodal LLM", "abstract": "In Multimodal Large Language Models (MLLMs), a visual projector plays a\ncrucial role in bridging pre-trained vision encoders with LLMs, enabling\nprofound visual understanding while harnessing the LLMs' robust capabilities.\nDespite the importance of the visual projector, it has been relatively less\nexplored. 
In this study, we first identify two essential projector properties:\n(i) flexibility in managing the number of visual tokens, crucial for MLLMs'\noverall efficiency, and (ii) preservation of local context from visual\nfeatures, vital for spatial understanding. Based on these findings, we propose\na novel projector design that is both flexible and locality-enhanced,\neffectively satisfying the two desirable properties. Additionally, we present\ncomprehensive strategies to effectively utilize multiple and multifaceted\ninstruction datasets. Through extensive experiments, we examine the impact of\nindividual design choices. Finally, our proposed MLLM, Honeybee, remarkably\noutperforms previous state-of-the-art methods across various benchmarks,\nincluding MME, MMBench, SEED-Bench, and LLaVA-Bench, achieving significantly\nhigher efficiency. Code and models are available at\nhttps://github.com/kakaobrain/honeybee.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Junbum Cha", "Woo-Young Kang", "Jonghwan Mun", "Byungseok Roh"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f37a"}, "filepath": "data/2403.19232.png", "tags": [], "_media_type": "image", "_rand": 0.9994683919549432, "arXiv_link": "https://arxiv.org/abs/2403.19232", "other_link": "", "title": "AZ-NAS: Assembling Zero-Cost Proxies for Network Architecture Search", "abstract": "Training-free network architecture search (NAS) aims to discover\nhigh-performing networks with zero-cost proxies, capturing network\ncharacteristics related to the final performance. However, network rankings\nestimated by previous training-free NAS methods have shown weak correlations\nwith the performance. To address this issue, we propose AZ-NAS, a novel\napproach that leverages the ensemble of various zero-cost proxies to enhance\nthe correlation between a predicted ranking of networks and the ground truth\nsubstantially in terms of the performance. To achieve this, we introduce four\nnovel zero-cost proxies that are complementary to each other, analyzing\ndistinct traits of architectures in the views of expressivity, progressivity,\ntrainability, and complexity. The proxy scores can be obtained simultaneously\nwithin a single forward and backward pass, making an overall NAS process highly\nefficient. 
In order to integrate the rankings predicted by our proxies\neffectively, we introduce a non-linear ranking aggregation method that\nhighlights the networks highly-ranked consistently across all the proxies.\nExperimental results conclusively demonstrate the efficacy and efficiency of\nAZ-NAS, outperforming state-of-the-art methods on standard benchmarks, all\nwhile maintaining a reasonable runtime cost.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Junghyup Lee", "Bumsub Ham"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f37b"}, "filepath": "data/2403.00486.png", "tags": [], "_media_type": "image", "_rand": 0.9994340306057165, "arXiv_link": "https://arxiv.org/abs/2403.00486", "other_link": "https://github.com/Windsrain/Selective-Stereo.", "title": "Selective-Stereo: Adaptive Frequency Information Selection for Stereo Matching", "abstract": "Stereo matching methods based on iterative optimization, like RAFT-Stereo and\nIGEV-Stereo, have evolved into a cornerstone in the field of stereo matching.\nHowever, these methods struggle to simultaneously capture high-frequency\ninformation in edges and low-frequency information in smooth regions due to the\nfixed receptive field. As a result, they tend to lose details, blur edges, and\nproduce false matches in textureless areas. In this paper, we propose Selective\nRecurrent Unit (SRU), a novel iterative update operator for stereo matching.\nThe SRU module can adaptively fuse hidden disparity information at multiple\nfrequencies for edge and smooth regions. To perform adaptive fusion, we\nintroduce a new Contextual Spatial Attention (CSA) module to generate attention\nmaps as fusion weights. The SRU empowers the network to aggregate hidden\ndisparity information across multiple frequencies, mitigating the risk of vital\nhidden disparity information loss during iterative processes. To verify SRU's\nuniversality, we apply it to representative iterative stereo matching methods,\ncollectively referred to as Selective-Stereo. Our Selective-Stereo ranks\n$1^{st}$ on KITTI 2012, KITTI 2015, ETH3D, and Middlebury leaderboards among\nall published methods. Code is available at\nhttps://github.com/Windsrain/Selective-Stereo.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Xianqi Wang", "Gangwei Xu", "Hao Jia", "Xin Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f37c"}, "filepath": "data/2401.02400.png", "tags": [], "_media_type": "image", "_rand": 0.999712885992394, "arXiv_link": "https://arxiv.org/abs/2401.02400", "other_link": "", "title": "Learning the 3D Fauna of the Web", "abstract": "Learning 3D models of all animals on the Earth requires massively scaling up\nexisting solutions. With this ultimate goal in mind, we develop 3D-Fauna, an\napproach that learns a pan-category deformable 3D animal model for more than\n100 animal species jointly. One crucial bottleneck of modeling animals is the\nlimited availability of training data, which we overcome by simply learning\nfrom 2D Internet images. We show that prior category-specific attempts fail to\ngeneralize to rare species with limited training images. 
We address this\nchallenge by introducing the Semantic Bank of Skinned Models (SBSM), which\nautomatically discovers a small set of base animal shapes by combining\ngeometric inductive priors with semantic knowledge implicitly captured by an\noff-the-shelf self-supervised feature extractor. To train such a model, we also\ncontribute a new large-scale dataset of diverse animal species. At inference\ntime, given a single image of any quadruped animal, our model reconstructs an\narticulated 3D mesh in a feed-forward fashion within seconds.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Zizhang Li", "Dor Litvak", "Ruining Li", "Yunzhi Zhang", "Tomas Jakab", "Christian Rupprecht", "Shangzhe Wu", "Andrea Vedaldi", "Jiajun Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f37d"}, "filepath": "data/2403.04303.png", "tags": [], "_media_type": "image", "_rand": 0.9993876529215938, "arXiv_link": "https://arxiv.org/abs/2403.04303", "other_link": "", "title": "LORS: Low-rank Residual Structure for Parameter-Efficient Network Stacking", "abstract": "Deep learning models, particularly those based on transformers, often employ\nnumerous stacked structures, which possess identical architectures and perform\nsimilar functions. While effective, this stacking paradigm leads to a\nsubstantial increase in the number of parameters, posing challenges for\npractical applications. In today's landscape of increasingly large models,\nstacking depth can even reach dozens, further exacerbating this issue. To\nmitigate this problem, we introduce LORS (LOw-rank Residual Structure). LORS\nallows stacked modules to share the majority of parameters, requiring a much\nsmaller number of unique ones per module to match or even surpass the\nperformance of using entirely distinct ones, thereby significantly reducing\nparameter usage. We validate our method by applying it to the stacked decoders\nof a query-based object detector, and conduct extensive experiments on the\nwidely used MS COCO dataset. Experimental results demonstrate the effectiveness\nof our method, as even with a 70\\% reduction in the parameters of the decoder,\nour method still enables the model to achieve comparable or", "keywords": ["Efficient and scalable vision"], "authors_list": ["Jialin Li", "Qiang Nie", "Weifu Fu", "Yuhuan Lin", "Guangpin Tao", "Yong Liu", "Chengjie Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f37e"}, "filepath": "data/2403.17001.png", "tags": [], "_media_type": "image", "_rand": 0.9998217976800348, "arXiv_link": "https://arxiv.org/abs/2403.17001", "other_link": "https://vp3d-cvpr24.github.io.", "title": "VP3D: Unleashing 2D Visual Prompt for Text-to-3D Generation", "abstract": "Recent innovations on text-to-3D generation have featured Score Distillation\nSampling (SDS), which enables the zero-shot learning of implicit 3D models\n(NeRF) by directly distilling prior knowledge from 2D diffusion models.\nHowever, current SDS-based models still struggle with intricate text prompts\nand commonly result in distorted 3D models with unrealistic textures or\ncross-view inconsistency issues. 
In this work, we introduce a novel Visual\nPrompt-guided text-to-3D diffusion model (VP3D) that explicitly unleashes the\nvisual appearance knowledge in 2D visual prompt to boost text-to-3D generation.\nInstead of solely supervising SDS with text prompt, VP3D first capitalizes on\n2D diffusion model to generate a high-quality image from input text, which\nsubsequently acts as visual prompt to strengthen SDS optimization with explicit\nvisual appearance. Meanwhile, we couple the SDS optimization with additional\ndifferentiable reward function that encourages rendering images of 3D models to\nbetter visually align with 2D visual prompt and semantically match with text\nprompt. Through extensive experiments, we show that the 2D Visual Prompt in our\nVP3D significantly eases the learning of visual appearance of 3D models and\nthus leads to higher visual fidelity with more detailed textures. It is also\nappealing in view that when replacing the self-generating visual prompt with a\ngiven reference image, VP3D is able to trigger a new task of stylized\ntext-to-3D generation. Our project page is available at\nhttps://vp3d-cvpr24.github.io.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Yang Chen", "Yingwei Pan", "haibo yang", "Ting Yao", "Tao Mei"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f37f"}, "filepath": "data/2401.09414.png", "tags": [], "_media_type": "image", "_rand": 0.9993502656910318, "arXiv_link": "https://arxiv.org/abs/2401.09414", "other_link": "https://github.com/zhuangshaobin/Vlogger.", "title": "Vlogger: Make Your Dream A Vlog", "abstract": "In this work, we present Vlogger, a generic AI system for generating a\nminute-level video blog (i.e., vlog) of user descriptions. Different from short\nvideos with a few seconds, vlog often contains a complex storyline with\ndiversified scenes, which is challenging for most existing video generation\napproaches. To break through this bottleneck, our Vlogger smartly leverages\nLarge Language Model (LLM) as Director and decomposes a long video generation\ntask of vlog into four key stages, where we invoke various foundation models to\nplay the critical roles of vlog professionals, including (1) Script, (2) Actor,\n(3) ShowMaker, and (4) Voicer. With such a design of mimicking human beings,\nour Vlogger can generate vlogs through explainable cooperation of top-down\nplanning and bottom-up shooting. Moreover, we introduce a novel video diffusion\nmodel, ShowMaker, which serves as a videographer in our Vlogger for generating\nthe video snippet of each shooting scene. By incorporating Script and Actor\nattentively as textual and visual prompts, it can effectively enhance\nspatial-temporal coherence in the snippet. Besides, we design a concise mixed\ntraining paradigm for ShowMaker, boosting its capacity for both T2V generation\nand prediction. Finally, the extensive experiments show that our method\nachieves state-of-the-art performance on zero-shot T2V generation and\nprediction tasks. More importantly, Vlogger can generate over 5-minute vlogs\nfrom open-world descriptions, without loss of video coherence on script and\nactor. 
The code and model is all available at\nhttps://github.com/zhuangshaobin/Vlogger.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Shaobin Zhuang", "Kunchang Li", "Xinyuan Chen", "Yaohui Wang", "Ziwei Liu", "Yu Qiao", "Yali Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f380"}, "filepath": "data/2403.10099.png", "tags": [], "_media_type": "image", "_rand": 0.9993716906756893, "arXiv_link": "https://arxiv.org/abs/2403.10099", "other_link": "https://github.com/lolrudy/KP-RED.", "title": "KP-RED: Exploiting Semantic Keypoints for Joint 3D Shape Retrieval and Deformation", "abstract": "In this paper, we present KP-RED, a unified KeyPoint-driven REtrieval and\nDeformation framework that takes object scans as input and jointly retrieves\nand deforms the most geometrically similar CAD models from a pre-processed\ndatabase to tightly match the target. Unlike existing dense matching based\nmethods that typically struggle with noisy partial scans, we propose to\nleverage category-consistent sparse keypoints to naturally handle both full and\npartial object scans. Specifically, we first employ a lightweight retrieval\nmodule to establish a keypoint-based embedding space, measuring the similarity\namong objects by dynamically aggregating deformation-aware local-global\nfeatures around extracted keypoints. Objects that are close in the embedding\nspace are considered similar in geometry. Then we introduce the neural\ncage-based deformation module that estimates the influence vector of each\nkeypoint upon cage vertices inside its local support region to control the\ndeformation of the retrieved shape. Extensive experiments on the synthetic\ndataset PartNet and the real-world dataset Scan2CAD demonstrate that KP-RED\nsurpasses existing state-of-the-art approaches by a large margin. Codes and\ntrained models will be released in https://github.com/lolrudy/KP-RED.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Ruida Zhang", "Chenyangguang Zhang", "Yan Di", "Fabian Manhardt", "Xingyu Liu", "Federico Tombari", "Xiangyang Ji"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f381"}, "filepath": "data/2312.13108.png", "tags": [], "_media_type": "image", "_rand": 0.9993630313616184, "arXiv_link": "https://arxiv.org/abs/2312.13108", "other_link": "", "title": "AssistGUI: Task-Oriented PC Graphical User Interface Automation", "abstract": "Graphical User Interface (GUI) automation holds significant promise for\nassisting users with complex tasks, thereby boosting human productivity.\nExisting works leveraging Large Language Model (LLM) or LLM-based AI agents\nhave shown capabilities in automating tasks on Android and Web platforms.\nHowever, these tasks are primarily aimed at simple device usage and\nentertainment operations. This paper presents a novel benchmark, AssistGUI, to\nevaluate whether models are capable of manipulating the mouse and keyboard on\nthe Windows platform in response to user-requested tasks. 
We carefully\ncollected a set of 100 tasks from nine widely-used software applications, such\nas, After Effects and MS Word, each accompanied by the necessary project files\nfor better evaluation. Moreover, we propose an advanced Actor-Critic Embodied\nAgent framework, which incorporates a sophisticated GUI parser driven by an\nLLM-agent and an enhanced reasoning mechanism adept at handling lengthy\nprocedural tasks. Our experimental results reveal that our GUI Parser and\nReasoning mechanism outshine existing methods in performance. Nevertheless, the\npotential remains substantial, with the best model attaining only a 46% success\nrate on our benchmark. We conclude with a thorough analysis of the current\nmethods' limitations, setting the stage for future breakthroughs in this\ndomain.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Difei Gao", "Lei Ji", "Zechen Bai", "Mingyu Ouyang", "Peiran Li", "Dongxing Mao", "Qin WU", "Weichen Zhang", "Peiyi Wang", "Xiangwu Guo", "Hengxu Wang", "Luowei Zhou", "Mike Zheng Shou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f382"}, "filepath": "data/2310.11696.png", "tags": [], "_media_type": "image", "_rand": 0.999885476842218, "arXiv_link": "https://arxiv.org/abs/2310.11696", "other_link": "", "title": "MOHO: Learning Single-view Hand-held Object Reconstruction with Multi-view Occlusion-Aware Supervision", "abstract": "Previous works concerning single-view hand-held object reconstruction\ntypically rely on supervision from 3D ground-truth models, which are hard to\ncollect in real world. In contrast, readily accessible hand-object videos offer\na promising training data source, but they only give heavily occluded object\nobservations. In this paper, we present a novel synthetic-to-real framework to\nexploit Multi-view Occlusion-aware supervision from hand-object videos for\nHand-held Object reconstruction (MOHO) from a single image, tackling two\npredominant challenges in such setting: hand-induced occlusion and object's\nself-occlusion. First, in the synthetic pre-training stage, we render a\nlarge-scaled synthetic dataset SOMVideo with hand-object images and multi-view\nocclusion-free supervisions, adopted to address hand-induced occlusion in both\n2D and 3D spaces. Second, in the real-world finetuning stage, MOHO leverages\nthe amodal-mask-weighted geometric supervision to mitigate the unfaithful\nguidance caused by the hand-occluded supervising views in real world. Moreover,\ndomain-consistent occlusion-aware features are amalgamated in MOHO to resist\nobject's self-occlusion for inferring the complete object shape. 
Extensive\nexperiments on HO3D and DexYCB datasets demonstrate 2D-supervised MOHO gains\nsuperior results against 3D-supervised methods by a large margin.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Chenyangguang Zhang", "Guanlong Jiao", "Yan Di", "Gu Wang", "Ziqin Huang", "Ruida Zhang", "Fabian Manhardt", "Bowen Fu", "Federico Tombari", "Xiangyang Ji"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f383"}, "filepath": "data/2312.00375.png", "tags": [], "_media_type": "image", "_rand": 0.999077676160507, "arXiv_link": "https://arxiv.org/abs/2312.00375", "other_link": "https://faceg2e.github.io/.", "title": "Text-Guided 3D Face Synthesis - From Generation to Editing", "abstract": "Text-guided 3D face synthesis has achieved remarkable results by leveraging\ntext-to-image (T2I) diffusion models. However, most existing works focus solely\non the direct generation, ignoring the editing, restricting them from\nsynthesizing customized 3D faces through iterative adjustments. In this paper,\nwe propose a unified text-guided framework from face generation to editing. In\nthe generation stage, we propose a geometry-texture decoupled generation to\nmitigate the loss of geometric details caused by coupling. Besides, decoupling\nenables us to utilize the generated geometry as a condition for texture\ngeneration, yielding highly geometry-texture aligned results. We further employ\na fine-tuned texture diffusion model to enhance texture quality in both RGB and\nYUV space. In the editing stage, we first employ a pre-trained diffusion model\nto update facial geometry or texture based on the texts. To enable sequential\nediting, we introduce a UV domain consistency preservation regularization,\npreventing unintentional changes to irrelevant facial attributes. Besides, we\npropose a self-guided consistency weight strategy to improve editing efficacy\nwhile preserving consistency. Through comprehensive experiments, we showcase\nour method's superiority in face synthesis. Project page:\nhttps://faceg2e.github.io/.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Yunjie Wu", "Yapeng Meng", "Zhipeng Hu", "Lincheng Li", "Haoqian Wu", "Kun Zhou", "Weiwei Xu", "Xin Yu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f384"}, "filepath": "data/2311.06067.png", "tags": [], "_media_type": "image", "_rand": 0.9992747777693171, "arXiv_link": "https://arxiv.org/abs/2311.06067", "other_link": "", "title": "Characteristics Matching Based Hash Codes Generation for Efficient Fine-grained Image Retrieval", "abstract": "In recent years, hashing methods have been popular in the large-scale media\nsearch for low storage and strong representation capabilities. To describe\nobjects with similar overall appearance but subtle differences, more and more\nstudies focus on hashing-based fine-grained image retrieval. Existing hashing\nnetworks usually generate both local and global features through attention\nguidance on the same deep activation tensor, which limits the diversity of\nfeature representations. 
To handle this limitation, we substitute convolutional\ndescriptors for attention-guided features and propose an Attributes Grouping\nand Mining Hashing (AGMH), which groups and embeds the category-specific visual\nattributes in multiple descriptors to generate a comprehensive feature\nrepresentation for efficient fine-grained image retrieval. Specifically, an\nAttention Dispersion Loss (ADL) is designed to force the descriptors to attend\nto various local regions and capture diverse subtle details. Moreover, we\npropose a Stepwise Interactive External Attention (SIEA) to mine critical\nattributes in each descriptor and construct correlations between fine-grained\nattributes and objects. The attention mechanism is dedicated to learning\ndiscrete attributes, which will not cost additional computations in hash codes\ngeneration. Finally, the compact binary codes are learned by preserving\npairwise similarities. Experimental results demonstrate that AGMH consistently\nyields the best performance against state-of-the-art methods on fine-grained\nbenchmark datasets.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Zhen-Duo Chen", "Li-Jun Zhao", "Zi-Chao Zhang", "Xin Luo", "Xin-Shun Xu"], "category_name": "Information Retrieval", "all_categories": ["Information Retrieval", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f385"}, "filepath": "data/2312.04651.png", "tags": [], "_media_type": "image", "_rand": 0.9995544683357467, "arXiv_link": "https://arxiv.org/abs/2312.04651", "other_link": "", "title": "VOODOO 3D: VOlumetric pOrtrait Disentanglement fOr Online 3D head reenactment", "abstract": "We present a 3D-aware one-shot head reenactment method based on a fully\nvolumetric neural disentanglement framework for source appearance and driver\nexpressions. Our method is real-time and produces high-fidelity and\nview-consistent output, suitable for 3D teleconferencing systems based on\nholographic displays. Existing cutting-edge 3D-aware reenactment methods often\nuse neural radiance fields or 3D meshes to produce view-consistent appearance\nencoding, but, at the same time, they rely on linear face models, such as 3DMM,\nto achieve its disentanglement with facial expressions. As a result, their\nreenactment results often exhibit identity leakage from the driver or have\nunnatural expressions. To address these problems, we propose a neural\nself-supervised disentanglement approach that lifts both the source image and\ndriver video frame into a shared 3D volumetric representation based on\ntri-planes. This representation can then be freely manipulated with expression\ntri-planes extracted from the driving images and rendered from an arbitrary\nview using neural radiance fields. We achieve this disentanglement via\nself-supervised learning on a large in-the-wild video dataset. We further\nintroduce a highly effective fine-tuning approach to improve the\ngeneralizability of the 3D lifting using the same real-world data. 
We\ndemonstrate state-of-the-art performance on a wide range of datasets, and also\nshowcase high-quality 3D-aware head reenactment on highly challenging and\ndiverse subjects, including non-frontal head poses and complex expressions for\nboth source and driver.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Phong Tran", "Egor Zakharov", "Long Nhat Ho", "Anh Tran", "Liwen Hu", "Hao Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f386"}, "filepath": "data/2311.17034.png", "tags": [], "_media_type": "image", "_rand": 0.9994316998304251, "arXiv_link": "https://arxiv.org/abs/2311.17034", "other_link": "https://telling-left-from-right.github.io/.", "title": "Telling Left from Right: Identifying Geometry-Aware Semantic Correspondence", "abstract": "While pre-trained large-scale vision models have shown significant promise\nfor semantic correspondence, their features often struggle to grasp the\ngeometry and orientation of instances. This paper identifies the importance of\nbeing geometry-aware for semantic correspondence and reveals a limitation of\nthe features of current foundation models under simple post-processing. We show\nthat incorporating this information can markedly enhance semantic\ncorrespondence performance with simple but effective solutions in both\nzero-shot and supervised settings. We also construct a new challenging\nbenchmark for semantic correspondence built from an existing animal pose\nestimation dataset, for both pre-training validating models. Our method\nachieves a PCK@0.10 score of 65.4 (zero-shot) and 85.6 (supervised) on the\nchallenging SPair-71k dataset, outperforming the state of the art by 5.5p and\n11.0p absolute gains, respectively. Our code and datasets are publicly\navailable at: https://telling-left-from-right.github.io/.", "keywords": [], "authors_list": ["Junyi Zhang", "Charles Herrmann", "Junhwa Hur", "Eric Chen", "Varun Jampani", "Deqing Sun", "Ming-Hsuan Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f387"}, "filepath": "data/2403.07369.png", "tags": [], "_media_type": "image", "_rand": 0.9994367157909851, "arXiv_link": "https://arxiv.org/abs/2403.07369", "other_link": "", "title": "Federated Generalized Category Discovery", "abstract": "In this paper, we study the problem of Generalized Category Discovery (GCD),\nwhich aims to cluster unlabeled data from both known and unknown categories\nusing the knowledge of labeled data from known categories. Current GCD methods\nrely on only visual cues, which however neglect the multi-modality perceptive\nnature of human cognitive processes in discovering novel visual categories. To\naddress this, we propose a two-phase TextGCD framework to accomplish\nmulti-modality GCD by exploiting powerful Visual-Language Models. TextGCD\nmainly includes a retrieval-based text generation (RTG) phase and a\ncross-modality co-teaching (CCT) phase. First, RTG constructs a visual lexicon\nusing category tags from diverse datasets and attributes from Large Language\nModels, generating descriptive texts for images in a retrieval manner. 
Second,\nCCT leverages disparities between textual and visual modalities to foster\nmutual learning, thereby enhancing visual GCD. In addition, we design an\nadaptive class aligning strategy to ensure the alignment of category\nperceptions between modalities as well as a soft-voting mechanism to integrate\nmulti-modality cues. Experiments on eight datasets show the large superiority\nof our approach over state-of-the-art methods. Notably, our approach\noutperforms the best competitor, by 7.7% and 10.8% in All accuracy on\nImageNet-1k and CUB, respectively.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Nan Pu", "Wenjing Li", "Xinyuan Ji", "Yalan Qin", "Nicu Sebe", "Zhun Zhong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f388"}, "filepath": "data/2405.00900.png", "tags": [], "_media_type": "image", "_rand": 0.9993887900026673, "arXiv_link": "https://arxiv.org/abs/2405.00900", "other_link": "", "title": "LidaRF: Delving into Lidar for Neural Radiance Field on Street Scenes", "abstract": "Photorealistic simulation plays a crucial role in applications such as\nautonomous driving, where advances in neural radiance fields (NeRFs) may allow\nbetter scalability through the automatic creation of digital 3D assets.\nHowever, reconstruction quality suffers on street scenes due to largely\ncollinear camera motions and sparser samplings at higher speeds. On the other\nhand, the application often demands rendering from camera views that deviate\nfrom the inputs to accurately simulate behaviors like lane changes. In this\npaper, we propose several insights that allow a better utilization of Lidar\ndata to improve NeRF quality on street scenes. First, our framework learns a\ngeometric scene representation from Lidar, which is fused with the implicit\ngrid-based representation for radiance decoding, thereby supplying stronger\ngeometric information offered by explicit point cloud. Second, we put forth a\nrobust occlusion-aware depth supervision scheme, which allows utilizing\ndensified Lidar points by accumulation. Third, we generate augmented training\nviews from Lidar points for further improvement. Our insights translate to\nlargely improved novel view synthesis under real driving scenes.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Shanlin Sun", "Bingbing Zhuang", "Ziyu Jiang", "Buyu Liu", "Xiaohui Xie", "Manmohan Chandraker"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f389"}, "filepath": "data/2308.09421.png", "tags": [], "_media_type": "image", "_rand": 0.9990798471400345, "arXiv_link": "https://arxiv.org/abs/2308.09421", "other_link": "https://github.com/cskkxjk/MonoNeRD.", "title": "Learning Occupancy for Monocular 3D Object Detection", "abstract": "In the field of monocular 3D detection, it is common practice to utilize\nscene geometric clues to enhance the detector's performance. However, many\nexisting works adopt these clues explicitly such as estimating a depth map and\nback-projecting it into 3D space. 
This explicit methodology induces sparsity in\n3D representations due to the increased dimensionality from 2D to 3D, and leads\nto substantial information loss, especially for distant and occluded objects.\nTo alleviate this issue, we propose MonoNeRD, a novel detection framework that\ncan infer dense 3D geometry and occupancy. Specifically, we model scenes with\nSigned Distance Functions (SDF), facilitating the production of dense 3D\nrepresentations. We treat these representations as Neural Radiance Fields\n(NeRF) and then employ volume rendering to recover RGB images and depth maps.\nTo the best of our knowledge, this work is the first to introduce volume\nrendering for M3D, and demonstrates the potential of implicit reconstruction\nfor image-based 3D perception. Extensive experiments conducted on the KITTI-3D\nbenchmark and Waymo Open Dataset demonstrate the effectiveness of MonoNeRD.\nCodes are available at https://github.com/cskkxjk/MonoNeRD.", "keywords": ["Scene analysis and understanding", "Deep learning architectures and techniques"], "authors_list": ["Liang Peng", "Junkai Xu", "Haoran Cheng", "Zheng Yang", "Xiaopei Wu", "Wei Qian", "Wenxiao Wang", "Boxi Wu", "Deng Cai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f38a"}, "filepath": "data/2404.12538.png", "tags": [], "_media_type": "image", "_rand": 0.9997158762381769, "arXiv_link": "https://arxiv.org/abs/2404.12538", "other_link": "", "title": "CaDeT: a Causal Disentanglement Approach for Robust Trajectory Prediction in Autonomous Driving", "abstract": "As a safety critical task, autonomous driving requires accurate predictions\nof road users' future trajectories for safe motion planning, particularly under\nchallenging conditions. Yet, many recent deep learning methods suffer from a\ndegraded performance on the challenging scenarios, mainly because these\nscenarios appear less frequently in the training data. To address such a\nlong-tail issue, existing methods force challenging scenarios closer together\nin the feature space during training to trigger information sharing among them\nfor more robust learning. These methods, however, primarily rely on the motion\npatterns to characterize scenarios, omitting more informative contextual\ninformation, such as interactions and scene layout. We argue that exploiting\nsuch information not only improves prediction accuracy but also scene\ncompliance of the generated trajectories. In this paper, we propose to\nincorporate richer training dynamics information into a prototypical\ncontrastive learning framework. More specifically, we propose a two-stage\nprocess. First, we generate rich contextual features using a baseline\nencoder-decoder framework. These features are split into clusters based on the\nmodel's output errors, using the training dynamics information, and a prototype\nis computed within each cluster. Second, we retrain the model using the\nprototypes in a contrastive learning framework. We conduct empirical\nevaluations of our approach using two large-scale naturalistic datasets and\nshow that our method achieves state-of-the-art performance by improving\naccuracy and scene compliance on the long-tail samples. 
Furthermore, we perform\nexperiments on a subset of the clusters to highlight the additional benefit of\nour approach in reducing training bias.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Mozhgan Pourkeshavarz", "Junrui Zhang", "Amir Rasouli"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f38b"}, "filepath": "data/2403.15835.png", "tags": [], "_media_type": "image", "_rand": 0.9997158112681499, "arXiv_link": "https://arxiv.org/abs/2403.15835", "other_link": "", "title": "Once for Both: Single Stage of Importance and Sparsity Search for Vision Transformer Compression", "abstract": "Recent Vision Transformer Compression (VTC) works mainly follow a two-stage\nscheme, where the importance score of each model unit is first evaluated or\npreset in each submodule, followed by the sparsity score evaluation according\nto the target sparsity constraint. Such a separate evaluation process induces\nthe gap between importance and sparsity score distributions, thus causing high\nsearch costs for VTC. In this work, for the first time, we investigate how to\nintegrate the evaluations of importance and sparsity scores into a single\nstage, searching the optimal subnets in an efficient manner. Specifically, we\npresent OFB, a cost-efficient approach that simultaneously evaluates both\nimportance and sparsity scores, termed Once for Both (OFB), for VTC. First, a\nbi-mask scheme is developed by entangling the importance score and the\ndifferentiable sparsity score to jointly determine the pruning potential\n(prunability) of each unit. Such a bi-mask search strategy is further used\ntogether with a proposed adaptive one-hot loss to realize the\nprogressive-and-efficient search for the most important subnet. Finally,\nProgressive Masked Image Modeling (PMIM) is proposed to regularize the feature\nspace to be more representative during the search process, which may be\ndegraded by the dimension reduction. Extensive experiments demonstrate that OFB\ncan achieve superior compression performance over state-of-the-art\nsearching-based and pruning-based methods under various Vision Transformer\narchitectures, meanwhile promoting search efficiency significantly, e.g.,\ncosting one GPU search day for the compression of DeiT-S on ImageNet-1K.", "keywords": [], "authors_list": ["Hancheng Ye", "Chong Yu", "Peng Ye", "Renqiu Xia", "Bo Zhang", "Yansong Tang", "Jiwen Lu", "Tao Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f38c"}, "filepath": "data/2312.02134.png", "tags": [], "_media_type": "image", "_rand": 0.9990777036440673, "arXiv_link": "https://arxiv.org/abs/2312.02134", "other_link": "", "title": "GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians", "abstract": "We present GaussianAvatar, an efficient approach to creating realistic human\navatars with dynamic 3D appearances from a single video. We start by\nintroducing animatable 3D Gaussians to explicitly represent humans in various\nposes and clothing styles. Such an explicit and animatable representation can\nfuse 3D appearances more efficiently and consistently from 2D observations. 
Our\nrepresentation is further augmented with dynamic properties to support\npose-dependent appearance modeling, where a dynamic appearance network along\nwith an optimizable feature tensor is designed to learn the\nmotion-to-appearance mapping. Moreover, by leveraging the differentiable motion\ncondition, our method enables a joint optimization of motions and appearances\nduring avatar modeling, which helps to tackle the long-standing issue of\ninaccurate motion estimation in monocular settings. The efficacy of\nGaussianAvatar is validated on both the public dataset and our collected\ndataset, demonstrating its superior performances in terms of appearance quality\nand rendering efficiency.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Liangxiao Hu", "Hongwen Zhang", "Yuxiang Zhang", "Boyao ZHOU", "Boning Liu", "Shengping Zhang", "Liqiang Nie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f38d"}, "filepath": "data/2403.09634.png", "tags": [], "_media_type": "image", "_rand": 0.9998166616017241, "arXiv_link": "https://arxiv.org/abs/2403.09634", "other_link": "", "title": "OneTracker: Unifying Visual Object Tracking with Foundation Models and Efficient Tuning", "abstract": "Visual object tracking aims to localize the target object of each frame based\non its initial appearance in the first frame. Depending on the input modility,\ntracking tasks can be divided into RGB tracking and RGB+X (e.g. RGB+N, and\nRGB+D) tracking. Despite the different input modalities, the core aspect of\ntracking is the temporal matching. Based on this common ground, we present a\ngeneral framework to unify various tracking tasks, termed as OneTracker.\nOneTracker first performs a large-scale pre-training on a RGB tracker called\nFoundation Tracker. This pretraining phase equips the Foundation Tracker with a\nstable ability to estimate the location of the target object. Then we regard\nother modality information as prompt and build Prompt Tracker upon Foundation\nTracker. Through freezing the Foundation Tracker and only adjusting some\nadditional trainable parameters, Prompt Tracker inhibits the strong\nlocalization ability from Foundation Tracker and achieves parameter-efficient\nfinetuning on downstream RGB+X tracking tasks. 
To evaluate the effectiveness of\nour general framework OneTracker, which is consisted of Foundation Tracker and\nPrompt Tracker, we conduct extensive experiments on 6 popular tracking tasks\nacross 11 benchmarks and our OneTracker outperforms other models and achieves\nstate-of-the-art performance.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Lingyi Hong", "Shilin Yan", "Renrui Zhang", "Wanyun Li", "Xinyu Zhou", "Pinxue Guo", "Kaixun Jiang", "Yiting Cheng", "Jinglun Li", "Zhaoyu Chen", "Wenqiang Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f38e"}, "filepath": "data/2402.19002.png", "tags": [], "_media_type": "image", "_rand": 0.999077556250302, "arXiv_link": "https://arxiv.org/abs/2402.19002", "other_link": "", "title": "GigaTraj: Predicting Long-term Trajectories of Hundreds of Pedestrians in Gigapixel Complex Scenes", "abstract": "Predicting the future trajectories of pedestrians on the road is an important\ntask for autonomous driving. The pedestrian trajectory prediction is affected\nby scene paths, pedestrian's intentions and decision-making, which is a\nmulti-modal problem. Most recent studies use past trajectories to predict a\nvariety of potential future trajectory distributions, which do not account for\nthe scene context and pedestrian targets. Instead of predicting the future\ntrajectory directly, we propose to use scene context and observed trajectory to\npredict the goal points first, and then reuse the goal points to predict the\nfuture trajectories. By leveraging the information from scene context and\nobserved trajectory, the uncertainty can be limited to a few target areas,\nwhich represent the \"goals\" of the pedestrians. In this paper, we propose\nGoalNet, a new trajectory prediction neural network based on the goal areas of\na pedestrian. Our network can predict both pedestrian's trajectories and\nbounding boxes. The overall model is efficient and modular, and its outputs can\nbe changed according to the usage scenario. Experimental results show that\nGoalNet significantly improves the previous state-of-the-art performance by\n48.7% on the JAAD and 40.8% on the PIE dataset.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding", "Efficient and scalable vision"], "authors_list": ["Haozhe Lin", "Chunyu Wei", "Li He", "Yuchen Guo", "Yuchy Zhao", "Shanglong Li", "Lu Fang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f38f"}, "filepath": "data/2405.04771.png", "tags": [], "_media_type": "image", "_rand": 0.9995263176052821, "arXiv_link": "https://arxiv.org/abs/2405.04771", "other_link": "", "title": "Exploring Vision Transformers for 3D Human Motion-Language Models with Motion Patches", "abstract": "To build a cross-modal latent space between 3D human motion and language,\nacquiring large-scale and high-quality human motion data is crucial. However,\nunlike the abundance of image data, the scarcity of motion data has limited the\nperformance of existing motion-language models. 
To counter this, we introduce\n\"motion patches\", a new representation of motion sequences, and propose using\nVision Transformers (ViT) as motion encoders via transfer learning, aiming to\nextract useful knowledge from the image domain and apply it to the motion\ndomain. These motion patches, created by dividing and sorting skeleton joints\nbased on body parts in motion sequences, are robust to varying skeleton\nstructures, and can be regarded as color image patches in ViT. We find that\ntransfer learning with pre-trained weights of ViT obtained through training\nwith 2D image data can boost the performance of motion analysis, presenting a\npromising direction for addressing the issue of limited motion data. Our\nextensive experiments show that the proposed motion patches, used jointly with\nViT, achieve state-of-the-art performance in the benchmarks of text-to-motion\nretrieval, and other novel challenging tasks, such as cross-skeleton\nrecognition, zero-shot motion classification, and human interaction\nrecognition, which are currently impeded by the lack of data.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models", "Biometrics and human analysis"], "authors_list": ["Qing Yu", "Mikihiro Tanaka", "Kent Fujiwara"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f390"}, "filepath": "data/2312.01099.png", "tags": [], "_media_type": "image", "_rand": 0.9994055602723328, "arXiv_link": "https://arxiv.org/abs/2312.01099", "other_link": "https://github.com/Dootmaan/ICMIL/tree/confidence_based", "title": "ViLa-MIL: Dual-scale Vision-Language Multiple Instance Learning for Whole Slide Image Classification", "abstract": "Multiple Instance Learning (MIL) has demonstrated promise in Whole Slide\nImage (WSI) classification. However, a major challenge persists due to the high\ncomputational cost associated with processing these gigapixel images. Existing\nmethods generally adopt a two-stage approach, comprising a non-learnable\nfeature embedding stage and a classifier training stage. Though it can greatly\nreduce the memory consumption by using a fixed feature embedder pre-trained on\nother domains, such scheme also results in a disparity between the two stages,\nleading to suboptimal classification accuracy. To address this issue, we\npropose that a bag-level classifier can be a good instance-level teacher. Based\non this idea, we design Iteratively Coupled Multiple Instance Learning (ICMIL)\nto couple the embedder and the bag classifier at a low cost. ICMIL initially\nfix the patch embedder to train the bag classifier, followed by fixing the bag\nclassifier to fine-tune the patch embedder. The refined embedder can then\ngenerate better representations in return, leading to a more accurate\nclassifier for the next iteration. To realize more flexible and more effective\nembedder fine-tuning, we also introduce a teacher-student framework to\nefficiently distill the category knowledge in the bag classifier to help the\ninstance-level embedder fine-tuning. Thorough experiments were conducted on\nfour distinct datasets to validate the effectiveness of ICMIL. The experimental\nresults consistently demonstrate that our method significantly improves the\nperformance of existing MIL backbones, achieving state-of-the-art results. 
The\ncode is available at: https://github.com/Dootmaan/ICMIL/tree/confidence_based", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Jiangbo Shi", "Chen Li", "Tieliang Gong", "Yefeng Zheng", "Huazhu Fu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f391"}, "filepath": "data/2308.16246.png", "tags": [], "_media_type": "image", "_rand": 0.999220169134492, "arXiv_link": "http://export.arxiv.org/abs/2308.16246", "other_link": "", "title": "Neural Visibility Field for Uncertainty-Driven Active Mapping", "abstract": "We address the problem of active mapping with a continually-learned neural\nscene representation, namely Active Neural Mapping. The key lies in actively\nfinding the target space to be explored with efficient agent movement, thus\nminimizing the map uncertainty on-the-fly within a previously unseen\nenvironment. In this paper, we examine the weight space of the\ncontinually-learned neural field, and show empirically that the neural\nvariability, the prediction robustness against random weight perturbation, can\nbe directly utilized to measure the instant uncertainty of the neural map.\nTogether with the continuous geometric information inherited in the neural map,\nthe agent can be guided to find a traversable path to gradually gain knowledge\nof the environment. We present for the first time an active mapping system with\na coordinate-based implicit neural representation for online scene\nreconstruction. Experiments in the visually-realistic Gibson and Matterport3D\nenvironment demonstrate the efficacy of the proposed method.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Shangjie Xue", "Jesse Dill", "Pranay Mathur", "Frank Dellaert", "Panagiotis Tsiotras", "Danfei Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f392"}, "filepath": "data/2403.06093.png", "tags": [], "_media_type": "image", "_rand": 0.9999423305304886, "arXiv_link": "https://arxiv.org/abs/2403.06093", "other_link": "https://github.com/nullmax-vision/QAF2D.", "title": "Enhancing 3D Object Detection with 2D Detection-Guided Query Anchors", "abstract": "Multi-camera-based 3D object detection has made notable progress in the past\nseveral years. However, we observe that there are cases (e.g. faraway regions)\nin which popular 2D object detectors are more reliable than state-of-the-art 3D\ndetectors. In this paper, to improve the performance of query-based 3D object\ndetectors, we present a novel query generating approach termed QAF2D, which\ninfers 3D query anchors from 2D detection results. A 2D bounding box of an\nobject in an image is lifted to a set of 3D anchors by associating each sampled\npoint within the box with depth, yaw angle, and size candidates. Then, the\nvalidity of each 3D anchor is verified by comparing its projection in the image\nwith its corresponding 2D box, and only valid anchors are kept and used to\nconstruct queries. The class information of the 2D bounding box associated with\neach query is also utilized to match the predicted boxes with ground truth for\nthe set-based loss. 
The image feature extraction backbone is shared between the\n3D detector and 2D detector by adding a small number of prompt parameters. We\nintegrate QAF2D into three popular query-based 3D object detectors and carry\nout comprehensive evaluations on the nuScenes dataset. The largest improvement\nthat QAF2D can bring about on the nuScenes validation subset is $2.3\\%$ NDS and\n$2.7\\%$ mAP. Code is available at https://github.com/nullmax-vision/QAF2D.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Haoxuanye Ji", "Pengpeng Liang", "Erkang Cheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f393"}, "filepath": "data/2403.17719.png", "tags": [], "_media_type": "image", "_rand": 0.9999979656064496, "arXiv_link": "https://arxiv.org/abs/2403.17719", "other_link": "", "title": "Resolution Limit of Single-Photon LIDAR", "abstract": "Single-photon Light Detection and Ranging (LiDAR) systems are often equipped\nwith an array of detectors for improved spatial resolution and sensing speed.\nHowever, given a fixed amount of flux produced by the laser transmitter across\nthe scene, the per-pixel Signal-to-Noise Ratio (SNR) will decrease when more\npixels are packed in a unit space. This presents a fundamental trade-off\nbetween the spatial resolution of the sensor array and the SNR received at each\npixel. Theoretical characterization of this fundamental limit is explored. By\nderiving the photon arrival statistics and introducing a series of new\napproximation techniques, the Mean Squared Error (MSE) of the\nmaximum-likelihood estimator of the time delay is derived. The theoretical\npredictions align well with simulations and real data.", "keywords": ["Computational imaging and physics-based vision"], "authors_list": ["Stanley H. Chan", "Hashan K Weerasooriya", "Weijian Zhang", "Pamela Abshire", "Istvan Gyongy", "Robert Henderson"], "category_name": "Signal Processing", "all_categories": ["Signal Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f394"}, "filepath": "data/2312.00057.png", "tags": [], "_media_type": "image", "_rand": 0.9992228264650983, "arXiv_link": "https://arxiv.org/abs/2312.00057", "other_link": "https://github.com/South7X/VA3.", "title": "VA3: Virtually Assured Amplification Attack on Probabilistic Copyright Protection for Text-to-Image Generative Models", "abstract": "The booming use of text-to-image generative models has raised concerns about\ntheir high risk of producing copyright-infringing content. While probabilistic\ncopyright protection methods provide a probabilistic guarantee against such\ninfringement, in this paper, we introduce Virtually Assured Amplification\nAttack (VA3), a novel online attack framework that exposes the vulnerabilities\nof these protection mechanisms. The proposed framework significantly amplifies\nthe probability of generating infringing content on the sustained interactions\nwith generative models and a non-trivial lower-bound on the success probability\nof each engagement. Our theoretical and experimental results demonstrate the\neffectiveness of our approach under various scenarios. These findings highlight\nthe potential risk of implementing probabilistic copyright protection in\npractical applications of text-to-image generative models. 
Code is available at\nhttps://github.com/South7X/VA3.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Xiang Li", "Qianli Shen", "Kenji Kawaguchi"], "category_name": "Cryptography and Security", "all_categories": ["Cryptography and Security", "Artificial Intelligence", "Computer Vision and Pattern Recognition", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f395"}, "filepath": "data/2312.12471.png", "tags": [], "_media_type": "image", "_rand": 0.9993748026073076, "arXiv_link": "https://arxiv.org/abs/2312.12471", "other_link": "https://github.com/zkawfanx/Atlantis.", "title": "Atlantis: Enabling Underwater Depth Estimation with Stable Diffusion", "abstract": "Monocular depth estimation has experienced significant progress on\nterrestrial images in recent years, largely due to deep learning advancements.\nHowever, it remains inadequate for underwater scenes, primarily because of data\nscarcity. Given the inherent challenges of light attenuation and backscattering\nin water, acquiring clear underwater images or precise depth information is\nnotably difficult and costly. Consequently, learning-based approaches often\nrely on synthetic data or turn to unsupervised or self-supervised methods to\nmitigate this lack of data. Nonetheless, the performance of these methods is\noften constrained by the domain gap and looser constraints. In this paper, we\npropose a novel pipeline for generating photorealistic underwater images using\naccurate terrestrial depth data. This approach facilitates the training of\nsupervised models for underwater depth estimation, effectively reducing the\nperformance disparity between terrestrial and underwater environments. Contrary\nto prior synthetic datasets that merely apply style transfer to terrestrial\nimages without altering the scene content, our approach uniquely creates\nvibrant, non-existent underwater scenes by leveraging terrestrial depth data\nthrough the innovative Stable Diffusion model. Specifically, we introduce a\nunique Depth2Underwater ControlNet, trained on specially prepared \\{Underwater,\nDepth, Text\\} data triplets, for this generation task. Our newly developed\ndataset enables terrestrial depth estimation models to achieve considerable\nimprovements, both quantitatively and qualitatively, on unseen underwater\nimages, surpassing their terrestrial pre-trained counterparts. Moreover, the\nenhanced depth accuracy for underwater scenes also aids underwater image\nrestoration techniques that rely on depth maps, further demonstrating our\ndataset's utility. The dataset will be available at\nhttps://github.com/zkawfanx/Atlantis.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Fan Zhang", "Shaodi You", "Yu Li", "Ying Fu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f396"}, "filepath": "data/2312.04655.png", "tags": [], "_media_type": "image", "_rand": 0.9990664177452768, "arXiv_link": "https://arxiv.org/abs/2312.04655", "other_link": "", "title": "ECLIPSE: A Resource-Efficient Text-to-Image Prior for Image Generations", "abstract": "Text-to-image (T2I) diffusion models, notably the unCLIP models (e.g.,\nDALL-E-2), achieve state-of-the-art (SOTA) performance on various compositional\nT2I benchmarks, at the cost of significant computational resources. 
The unCLIP\nstack comprises T2I prior and diffusion image decoder. The T2I prior model\nalone adds a billion parameters compared to the Latent Diffusion Models, which\nincreases the computational and high-quality data requirements. We introduce\nECLIPSE, a novel contrastive learning method that is both parameter and\ndata-efficient. ECLIPSE leverages pre-trained vision-language models (e.g.,\nCLIP) to distill the knowledge into the prior model. We demonstrate that the\nECLIPSE trained prior, with only 3.3% of the parameters and trained on a mere\n2.8% of the data, surpasses the baseline T2I priors with an average of 71.6%\npreference score under resource-limited setting. It also attains performance on\npar with SOTA big models, achieving an average of 63.36% preference score in\nterms of the ability to follow the text compositions. Extensive experiments on\ntwo unCLIP diffusion image decoders, Karlo and Kandinsky, affirm that ECLIPSE\npriors consistently deliver high performance while significantly reducing\nresource dependency.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Maitreya Patel", "Changhoon Kim", "Sheng Cheng", "Chitta Baral", "'YZ' Yezhou Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f397"}, "filepath": "data/2312.06968.png", "tags": [], "_media_type": "image", "_rand": 0.9991896094484001, "arXiv_link": "https://arxiv.org/abs/2312.06968", "other_link": "https://github.com/X-PLUG/mPLUG-HalOwl/tree/main/hacl.", "title": "Hallucination Augmented Contrastive Learning for Multimodal Large Language Model", "abstract": "Multi-modal large language models (MLLMs) have been shown to efficiently\nintegrate natural language with visual information to handle multi-modal tasks.\nHowever, MLLMs still face a fundamental limitation of hallucinations, where\nthey tend to generate erroneous or fabricated information. In this paper, we\naddress hallucinations in MLLMs from a novel perspective of representation\nlearning. We first analyzed the representation distribution of textual and\nvisual tokens in MLLM, revealing two important findings: 1) there is a\nsignificant gap between textual and visual representations, indicating\nunsatisfactory cross-modal representation alignment; 2) representations of\ntexts that contain and do not contain hallucinations are entangled, making it\nchallenging to distinguish them. These two observations inspire us with a\nsimple yet effective method to mitigate hallucinations. Specifically, we\nintroduce contrastive learning into MLLMs and use text with hallucination as\nhard negative examples, naturally bringing representations of non-hallucinative\ntext and visual samples closer while pushing way representations of\nnon-hallucinating and hallucinative text. We evaluate our method quantitatively\nand qualitatively, showing its effectiveness in reducing hallucination\noccurrences and improving performance across multiple benchmarks. On the\nMMhal-Bench benchmark, our method obtains a 34.66% /29.5% improvement over the\nbaseline MiniGPT-4/LLaVA. 
Our code is available on\nhttps://github.com/X-PLUG/mPLUG-HalOwl/tree/main/hacl.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Chaoya Jiang", "Haiyang Xu", "Mengfan Dong", "Jiaxing Chen", "Wei Ye", "Ming Yan", "Qinghao Ye", "Ji Zhang", "Fei Huang", "Fei Huang", "Shikun Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f398"}, "filepath": "data/2307.07944.png", "tags": [], "_media_type": "image", "_rand": 0.9993233287660334, "arXiv_link": "https://arxiv.org/abs/2307.07944", "other_link": "https://github.com/zhuoxiao-chen/ReDB-DA-3Ddet.", "title": "Active Domain Adaptation with False Negative Prediction for Object Detection", "abstract": "Unsupervised domain adaptation (DA) with the aid of pseudo labeling\ntechniques has emerged as a crucial approach for domain-adaptive 3D object\ndetection. While effective, existing DA methods suffer from a substantial drop\nin performance when applied to a multi-class training setting, due to the\nco-existence of low-quality pseudo labels and class imbalance issues. In this\npaper, we address this challenge by proposing a novel ReDB framework tailored\nfor learning to detect all classes at once. Our approach produces Reliable,\nDiverse, and class-Balanced pseudo 3D boxes to iteratively guide the\nself-training on a distributionally different target domain. To alleviate\ndisruptions caused by the environmental discrepancy (e.g., beam numbers), the\nproposed cross-domain examination (CDE) assesses the correctness of pseudo\nlabels by copy-pasting target instances into a source environment and measuring\nthe prediction consistency. To reduce computational overhead and mitigate the\nobject shift (e.g., scales and point densities), we design an overlapped boxes\ncounting (OBC) metric that allows to uniformly downsample pseudo-labeled\nobjects across different geometric characteristics. To confront the issue of\ninter-class imbalance, we progressively augment the target point clouds with a\nclass-balanced set of pseudo-labeled target instances and source objects, which\nboosts recognition accuracies on both frequently appearing and rare classes.\nExperimental results on three benchmark datasets using both voxel-based (i.e.,\nSECOND) and point-based 3D detectors (i.e., PointRCNN) demonstrate that our\nproposed ReDB approach outperforms existing 3D domain adaptation methods by a\nlarge margin, improving 23.15% mAP on the nuScenes $\\rightarrow$ KITTI task.\nThe code is available at https://github.com/zhuoxiao-chen/ReDB-DA-3Ddet.", "keywords": [], "authors_list": ["Yuzuru Nakamura", "Yasunori Ishii", "Takayoshi Yamashita"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f399"}, "filepath": "data/2311.17060v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999189225182429, "arXiv_link": "https://arxiv.org/abs/2311.17060v1", "other_link": "https://astra-vision.github.io/MaterialPalette/", "title": "Material Palette: Extraction of Materials from a Single Image", "abstract": "In this paper, we propose a method to extract physically-based rendering\n(PBR) materials from a single real-world image. 
We do so in two steps: first,\nwe map regions of the image to material concepts using a diffusion model, which\nallows the sampling of texture images resembling each material in the scene.\nSecond, we benefit from a separate network to decompose the generated textures\ninto Spatially Varying BRDFs (SVBRDFs), providing us with materials ready to be\nused in rendering applications. Our approach builds on existing synthetic\nmaterial libraries with SVBRDF ground truth, but also exploits a\ndiffusion-generated RGB texture dataset to allow generalization to new samples\nusing unsupervised domain adaptation (UDA). Our contributions are thoroughly\nevaluated on synthetic and real-world datasets. We further demonstrate the\napplicability of our method for editing 3D scenes with materials estimated from\nreal photographs. The code and models will be made open-source. Project page:\nhttps://astra-vision.github.io/MaterialPalette/", "keywords": ["Deep learning architectures and techniques", "Vision systems and graphics integration"], "authors_list": ["Ivan Lopes", "Fabio Pizzati", "Raoul de Charette"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f39a"}, "filepath": "data/2402.19302.png", "tags": [], "_media_type": "image", "_rand": 0.9994688739865193, "arXiv_link": "https://arxiv.org/abs/2402.19302", "other_link": "https://github.com/IIT-PAVIS/DiffAssemble", "title": "DiffAssemble: A Unified Graph-Diffusion Model for 2D and 3D Reassembly", "abstract": "Reassembly tasks play a fundamental role in many fields and multiple\napproaches exist to solve specific reassembly problems. In this context, we\nposit that a general unified model can effectively address them all,\nirrespective of the input data type (images, 3D, etc.). We introduce\nDiffAssemble, a Graph Neural Network (GNN)-based architecture that learns to\nsolve reassembly tasks using a diffusion model formulation. Our method treats\nthe elements of a set, whether pieces of 2D patch or 3D object fragments, as\nnodes of a spatial graph. Training is performed by introducing noise into the\nposition and rotation of the elements and iteratively denoising them to\nreconstruct the coherent initial pose. DiffAssemble achieves state-of-the-art\n(SOTA) results in most 2D and 3D reassembly tasks and is the first\nlearning-based approach that solves 2D puzzles for both rotation and\ntranslation. Furthermore, we highlight its remarkable reduction in run-time,\nperforming 11 times faster than the quickest optimization-based method for\npuzzle solving. 
Code available at https://github.com/IIT-PAVIS/DiffAssemble", "keywords": [], "authors_list": ["Gianluca Scarpellini", "Stefano Fiorini", "Francesco Giuliari", "Pietro Morerio", "Alessio Del Bue"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f39b"}, "filepath": "data/2401.09340.png", "tags": [], "_media_type": "image", "_rand": 0.9993489251172804, "arXiv_link": "https://arxiv.org/abs/2401.09340", "other_link": "https://scene-verse.github.io.", "title": "Situational Awareness Matters in 3D Vision Language Reasoning", "abstract": "3D vision-language grounding, which focuses on aligning language with the 3D\nphysical environment, stands as a cornerstone in the development of embodied\nagents. In comparison to recent advancements in the 2D domain, grounding\nlanguage in 3D scenes faces several significant challenges: (i) the inherent\ncomplexity of 3D scenes due to the diverse object configurations, their rich\nattributes, and intricate relationships; (ii) the scarcity of paired 3D\nvision-language data to support grounded learning; and (iii) the absence of a\nunified learning framework to distill knowledge from grounded 3D data. In this\nwork, we aim to address these three major challenges in 3D vision-language by\nexamining the potential of systematically upscaling 3D vision-language learning\nin indoor environments. We introduce the first million-scale 3D vision-language\ndataset, SceneVerse, encompassing about 68K 3D indoor scenes and comprising\n2.5M vision-language pairs derived from both human annotations and our scalable\nscene-graph-based generation approach. We demonstrate that this scaling allows\nfor a unified pre-training framework, Grounded Pre-training for Scenes (GPS),\nfor 3D vision-language learning. Through extensive experiments, we showcase the\neffectiveness of GPS by achieving state-of-the-art performance on all existing\n3D visual grounding benchmarks. The vast potential of SceneVerse and GPS is\nunveiled through zero-shot transfer experiments in the challenging 3D\nvision-language tasks. Project website: https://scene-verse.github.io.", "keywords": ["Multimodal models and vision-language models", "Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Yunze Man", "Liang-Yan Gui", "Yu-Xiong Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language", "Machine Learning", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f39c"}, "filepath": "data/2312.02158.png", "tags": [], "_media_type": "image", "_rand": 0.9992461053405436, "arXiv_link": "https://arxiv.org/abs/2312.02158", "other_link": "https://astra-vision.github.io/PaSCo", "title": "PaSCo: Urban 3D Panoptic Scene Completion with Uncertainty Awareness", "abstract": "We propose the task of Panoptic Scene Completion (PSC) which extends the\nrecently popular Semantic Scene Completion (SSC) task with instance-level\ninformation to produce a richer understanding of the 3D scene. Our PSC proposal\nutilizes a hybrid mask-based technique on the non-empty voxels from sparse\nmulti-scale completions. 
Whereas the SSC literature overlooks uncertainty which\nis critical for robotics applications, we instead propose an efficient\nensembling to estimate both voxel-wise and instance-wise uncertainties along\nPSC. This is achieved by building on a multi-input multi-output (MIMO)\nstrategy, while improving performance and yielding better uncertainty for\nlittle additional compute. Additionally, we introduce a technique to aggregate\npermutation-invariant mask predictions. Our experiments demonstrate that our\nmethod surpasses all baselines in both Panoptic Scene Completion and\nuncertainty estimation on three large-scale autonomous driving datasets. Our\ncode and data are available at https://astra-vision.github.io/PaSCo .", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Anh-Quan Cao", "Angela Dai", "Raoul de Charette"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f39d"}, "filepath": "data/2402.17292v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991288795584183, "arXiv_link": "https://arxiv.org/html/2402.17292v1", "other_link": "", "title": "DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models", "abstract": "Text-to-Avatar generation has recently made significant strides due to\nadvancements in diffusion models. However, most existing work remains\nconstrained by limited diversity, producing avatars with subtle differences in\nappearance for a given text prompt. We design DivAvatar, a novel framework that\ngenerates diverse avatars, empowering 3D creatives with a multitude of distinct\nand richly varied 3D avatars from a single text prompt. Different from most\nexisting work that exploits scene-specific 3D representations such as NeRF,\nDivAvatar finetunes a 3D generative model (i.e., EVA3D), allowing diverse\navatar generation from simply noise sampling in inference time. DivAvatar has\ntwo key designs that help achieve generation diversity and visual quality. The\nfirst is a noise sampling technique during training phase which is critical in\ngenerating diverse appearances. The second is a semantic-aware zoom mechanism\nand a novel depth loss, the former producing appearances of high textual\nfidelity by separate fine-tuning of specific body parts and the latter\nimproving geometry quality greatly by smoothing the generated mesh in the\nfeatures space. Extensive experiments show that DivAvatar is highly versatile\nin generating avatars of diverse appearances.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models", "Biometrics and human analysis"], "authors_list": ["Yukang Cao", "Yan-Pei Cao", "Kai Han", "Ying Shan", "Kwan-Yee K. 
Wong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f39e"}, "filepath": "data/2311.05698.png", "tags": [], "_media_type": "image", "_rand": 0.9998514052057201, "arXiv_link": "https://arxiv.org/abs/2311.05698", "other_link": "", "title": "Mirasol3B: A Multimodal Autoregressive Model for Time-Aligned and Contextual Modalities", "abstract": "One of the main challenges of multimodal learning is the need to combine\nheterogeneous modalities (e.g., video, audio, text). For example, video and\naudio are obtained at much higher rates than text and are roughly aligned in\ntime. They are often not synchronized with text, which comes as a global\ncontext, e.g., a title, or a description. Furthermore, video and audio inputs\nare of much larger volumes, and grow as the video length increases, which\nnaturally requires more compute dedicated to these modalities and makes\nmodeling of long-range dependencies harder.\n We here decouple the multimodal modeling, dividing it into separate, focused\nautoregressive models, processing the inputs according to the characteristics\nof the modalities. We propose a multimodal model, called Mirasol3B, consisting\nof an autoregressive component for the time-synchronized modalities (audio and\nvideo), and an autoregressive component for the context modalities which are\nnot necessarily aligned in time but are still sequential. To address the\nlong-sequences of the video-audio inputs, we propose to further partition the\nvideo and audio sequences in consecutive snippets and autoregressively process\ntheir representations. To that end, we propose a Combiner mechanism, which\nmodels the audio-video information jointly within a timeframe. The Combiner\nlearns to extract audio and video features from raw spatio-temporal signals,\nand then learns to fuse these features producing compact but expressive\nrepresentations per snippet.\n Our approach achieves the state-of-the-art on well established multimodal\nbenchmarks, outperforming much larger models. It effectively addresses the high\ncomputational demand of media inputs by both learning compact representations,\ncontrolling the sequence length of the audio-video feature representations, and\nmodeling their dependencies in time.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["AJ Piergiovanni", "Isaac Noble", "Dahun Kim", "Michael Ryoo", "Victor Gomes", "Anelia Angelova"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f39f"}, "filepath": "data/2404.03138.png", "tags": [], "_media_type": "image", "_rand": 0.9994411839235151, "arXiv_link": "https://arxiv.org/abs/2404.03138", "other_link": "", "title": "Discontinuity-preserving Normal Integration with Auxiliary Edges", "abstract": "Many surface reconstruction methods incorporate normal integration, which is\na process to obtain a depth map from surface gradients. In this process, the\ninput may represent a surface with discontinuities, e.g., due to\nself-occlusion. To reconstruct an accurate depth map from the input normal map,\nhidden surface gradients occurring from the jumps must be handled. 
To model\nthese jumps correctly, we design a novel discretization scheme for the domain\nof normal integration. Our key idea is to introduce auxiliary edges, which\nbridge between piecewise-smooth patches in the domain so that the magnitude of\nhidden jumps can be explicitly expressed. Using the auxiliary edges, we design\na novel algorithm to optimize the discontinuity and the depth map from the\ninput normal map. Our method optimizes discontinuities by using a combination\nof iterative re-weighted least squares and iterative filtering of the jump\nmagnitudes on auxiliary edges to provide strong sparsity regularization.\nCompared to previous discontinuity-preserving normal integration methods, which\nmodel the magnitudes of jumps only implicitly, our method reconstructs subtle\ndiscontinuities accurately thanks to our explicit representation of jumps\nallowing for strong sparsity regularization.", "keywords": ["Low-level vision"], "authors_list": ["Hyomin Kim", "Yucheol Jung", "Seungyong Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3a0"}, "filepath": "data/2403.01105.png", "tags": [], "_media_type": "image", "_rand": 0.9995678971897013, "arXiv_link": "https://arxiv.org/abs/2403.01105", "other_link": "", "title": "Depth Information Assisted Collaborative Mutual Promotion Network for Single Image Dehazing", "abstract": "Recovering a clear image from a single hazy image is an open inverse problem.\nAlthough significant research progress has been made, most existing methods\nignore the effect that downstream tasks play in promoting upstream dehazing.\nFrom the perspective of the haze generation mechanism, there is a potential\nrelationship between the depth information of the scene and the hazy image.\nBased on this, we propose a dual-task collaborative mutual promotion framework\nto achieve the dehazing of a single image. This framework integrates depth\nestimation and dehazing by a dual-task interaction mechanism and achieves\nmutual enhancement of their performance. To realize the joint optimization of\nthe two tasks, an alternative implementation mechanism with the difference\nperception is developed. On the one hand, the difference perception between the\ndepth maps of the dehazing result and the ideal image is proposed to promote\nthe dehazing network to pay attention to the non-ideal areas of the dehazing.\nOn the other hand, by improving the depth estimation performance in the\ndifficult-to-recover areas of the hazy image, the dehazing network can\nexplicitly use the depth information of the hazy image to assist the clear\nimage recovery. To promote the depth estimation, we propose to use the\ndifference between the dehazed image and the ground truth to guide the depth\nestimation network to focus on the dehazed unideal areas. It allows dehazing\nand depth estimation to leverage their strengths in a mutually reinforcing\nmanner. 
Experimental results show that the proposed method can achieve better\nperformance than that of the state-of-the-art approaches.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yafei Zhang", "Shen Zhou", "Huafeng Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3a1"}, "filepath": "data/2402.17300v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995090382066821, "arXiv_link": "https://arxiv.org/abs/2402.17300v1", "other_link": "https://github.com/Luffy03/VoCo.", "title": "VoCo: A Simple-yet-Effective Volume Contrastive Learning Framework for 3D Medical Image Analysis", "abstract": "Self-Supervised Learning (SSL) has demonstrated promising results in 3D\nmedical image analysis. However, the lack of high-level semantics in\npre-training still heavily hinders the performance of downstream tasks. We\nobserve that 3D medical images contain relatively consistent contextual\nposition information, i.e., consistent geometric relations between different\norgans, which leads to a potential way for us to learn consistent semantic\nrepresentations in pre-training. In this paper, we propose a\nsimple-yet-effective Volume Contrast (VoCo) framework to leverage the\ncontextual position priors for pre-training. Specifically, we first generate a\ngroup of base crops from different regions while enforcing feature discrepancy\namong them, where we employ them as class assignments of different regions.\nThen, we randomly crop sub-volumes and predict them belonging to which class\n(located at which region) by contrasting their similarity to different base\ncrops, which can be seen as predicting contextual positions of different\nsub-volumes. Through this pretext task, VoCo implicitly encodes the contextual\nposition priors into model representations without the guidance of annotations,\nenabling us to effectively improve the performance of downstream tasks that\nrequire high-level semantics. Extensive experimental results on six downstream\ntasks demonstrate the superior effectiveness of VoCo. Code will be available at\nhttps://github.com/Luffy03/VoCo.", "keywords": ["Deep learning architectures and techniques", "Medical imaging and biological vision"], "authors_list": ["Linshan Wu", "Linshan Wu", "Jia-Xin Zhuang", "Hao Chen"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3a2"}, "filepath": "data/2312.01564.png", "tags": [], "_media_type": "image", "_rand": 0.9990667918155475, "arXiv_link": "https://arxiv.org/abs/2312.01564", "other_link": "", "title": "JoAPR: Cleaning the Lens of Prompt Learning for Vision-Language Models", "abstract": "The choice of input text prompt plays a critical role in the performance of\nVision-Language Pretrained (VLP) models such as CLIP. We present APoLLo, a\nunified multi-modal approach that combines Adapter and Prompt learning for\nVision-Language models. Our method is designed to substantially improve the\ngeneralization capabilities of VLP models when they are fine-tuned in a\nfew-shot setting. We introduce trainable cross-attention-based adapter layers\nin conjunction with vision and language encoders to strengthen the alignment\nbetween the two modalities. 
We enforce consistency between the respective\nencoder branches (receiving augmented inputs) to prevent overfitting in\ndownstream tasks. Our method is evaluated on three representative tasks:\ngeneralization to novel classes, cross-dataset evaluation, and unseen domain\nshifts. In practice, APoLLo achieves a relative gain up to 6.03% over MaPLe\n(SOTA) on novel classes for 10 diverse image recognition datasets.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["YUNCHENG GUO", "Xiaodong Gu"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Computation and Language", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3a3"}, "filepath": "data/2403.03370.png", "tags": [], "_media_type": "image", "_rand": 0.9996429738517423, "arXiv_link": "https://arxiv.org/abs/2403.03370", "other_link": "", "title": "F$^3$Loc: Fusion and Filtering for Floorplan Localization", "abstract": "In this paper we propose an efficient data-driven solution to\nself-localization within a floorplan. Floorplan data is readily available,\nlong-term persistent and inherently robust to changes in the visual appearance.\nOur method does not require retraining per map and location or demand a large\ndatabase of images of the area of interest. We propose a novel probabilistic\nmodel consisting of an observation and a novel temporal filtering module.\nOperating internally with an efficient ray-based representation, the\nobservation module consists of a single and a multiview module to predict\nhorizontal depth from images and fuses their results to benefit from advantages\noffered by either methodology. Our method operates on conventional consumer\nhardware and overcomes a common limitation of competing methods that often\ndemand upright images. Our full system meets real-time requirements, while\noutperforming the state-of-the-art by a significant margin.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Changan Chen", "Rui Wang", "Christoph Vogel", "Marc Pollefeys"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3a4"}, "filepath": "data/2312.10671.png", "tags": [], "_media_type": "image", "_rand": 0.9996306239427797, "arXiv_link": "https://arxiv.org/abs/2312.10671", "other_link": "", "title": "Open3DIS: Open-Vocabulary 3D Instance Segmentation with 2D Mask Guidance", "abstract": "We introduce Open3DIS, a novel solution designed to tackle the problem of\nOpen-Vocabulary Instance Segmentation within 3D scenes. Objects within 3D\nenvironments exhibit diverse shapes, scales, and colors, making precise\ninstance-level identification a challenging task. Recent advancements in\nOpen-Vocabulary scene understanding have made significant strides in this area\nby employing class-agnostic 3D instance proposal networks for object\nlocalization and learning queryable features for each 3D mask. While these\nmethods produce high-quality instance proposals, they struggle with identifying\nsmall-scale and geometrically ambiguous objects. 
The key idea of our method is\na new module that aggregates 2D instance masks across frames and maps them to\ngeometrically coherent point cloud regions as high-quality object proposals\naddressing the above limitations. These are then combined with 3D\nclass-agnostic instance proposals to include a wide range of objects in the\nreal world. To validate our approach, we conducted experiments on three\nprominent datasets, including ScanNet200, S3DIS, and Replica, demonstrating\nsignificant performance gains in segmenting objects with diverse categories\nover the state-of-the-art approaches.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Phuc Nguyen", "Tuan Duc Ngo", "Evangelos Kalogerakis", "Chuang Gan", "Anh Tran", "Cuong Pham", "Khoi Nguyen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3a5"}, "filepath": "data/2403.19944.png", "tags": [], "_media_type": "image", "_rand": 0.9997278411711649, "arXiv_link": "https://arxiv.org/abs/2403.19944", "other_link": "", "title": "Binarized Low-light Raw Video Enhancement", "abstract": "Recently, deep neural networks have achieved excellent performance on\nlow-light raw video enhancement. However, they often come with high\ncomputational complexity and large memory costs, which hinder their\napplications on resource-limited devices. In this paper, we explore the\nfeasibility of applying the extremely compact binary neural network (BNN) to\nlow-light raw video enhancement. Nevertheless, there are two main issues with\nbinarizing video enhancement models. One is how to fuse the temporal\ninformation to improve low-light denoising without complex modules. The other\nis how to narrow the performance gap between binary convolutions with the full\nprecision ones. To address the first issue, we introduce a spatial-temporal\nshift operation, which is easy-to-binarize and effective. The temporal shift\nefficiently aggregates the features of neighbor frames and the spatial shift\nhandles the misalignment caused by the large motion in videos. For the second\nissue, we present a distribution-aware binary convolution, which captures the\ndistribution characteristics of real-valued input and incorporates them into\nplain binary convolutions to alleviate the degradation in performance.\nExtensive quantitative and qualitative experiments have shown our\nhigh-efficiency binarized low-light raw video enhancement method can attain a\npromising performance.", "keywords": ["Efficient and scalable vision", "Low-level vision"], "authors_list": ["Gengchen Zhang", "Yulun Zhang", "Xin Yuan", "Ying Fu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3a6"}, "filepath": "data/2401.02847.png", "tags": [], "_media_type": "image", "_rand": 0.9992948962188664, "arXiv_link": "https://arxiv.org/abs/2401.02847", "other_link": "https://github.com/xiaorongjun000/Self-Rectification", "title": "Generating Non-Stationary Textures using Self-Rectification", "abstract": "This paper addresses the challenge of example-based non-stationary texture\nsynthesis. 
We introduce a novel twostep approach wherein users first modify a\nreference texture using standard image editing tools, yielding an initial rough\ntarget for the synthesis. Subsequently, our proposed method, termed\n\"self-rectification\", automatically refines this target into a coherent,\nseamless texture, while faithfully preserving the distinct visual\ncharacteristics of the reference exemplar. Our method leverages a pre-trained\ndiffusion network, and uses self-attention mechanisms, to gradually align the\nsynthesized texture with the reference, ensuring the retention of the\nstructures in the provided target. Through experimental validation, our\napproach exhibits exceptional proficiency in handling non-stationary textures,\ndemonstrating significant advancements in texture synthesis when compared to\nexisting state-of-the-art techniques. Code is available at\nhttps://github.com/xiaorongjun000/Self-Rectification", "keywords": [], "authors_list": ["Yang Zhou", "Rongjun Xiao", "Dani Lischinski", "Daniel Cohen-Or", "Hui Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3a7"}, "filepath": "data/2404.01225.png", "tags": [], "_media_type": "image", "_rand": 0.9992514769808735, "arXiv_link": "https://arxiv.org/abs/2404.01225", "other_link": "https://taohuumd.github.io/projects/SurMo/", "title": "SurMo: Surface-based 4D Motion Modeling for Dynamic Human Rendering", "abstract": "Dynamic human rendering from video sequences has achieved remarkable progress\nby formulating the rendering as a mapping from static poses to human images.\nHowever, existing methods focus on the human appearance reconstruction of every\nsingle frame while the temporal motion relations are not fully explored. In\nthis paper, we propose a new 4D motion modeling paradigm, SurMo, that jointly\nmodels the temporal dynamics and human appearances in a unified framework with\nthree key designs: 1) Surface-based motion encoding that models 4D human\nmotions with an efficient compact surface-based triplane. It encodes both\nspatial and temporal motion relations on the dense surface manifold of a\nstatistical body template, which inherits body topology priors for\ngeneralizable novel view synthesis with sparse training observations. 2)\nPhysical motion decoding that is designed to encourage physical motion learning\nby decoding the motion triplane features at timestep t to predict both spatial\nderivatives and temporal derivatives at the next timestep t+1 in the training\nstage. 3) 4D appearance decoding that renders the motion triplanes into images\nby an efficient volumetric surface-conditioned renderer that focuses on the\nrendering of body surfaces with motion learning conditioning. Extensive\nexperiments validate the state-of-the-art performance of our new paradigm and\nillustrate the expressiveness of surface-based motion triplanes for rendering\nhigh-fidelity view-consistent humans with fast motions and even\nmotion-dependent shadows. 
Our project page is at:\nhttps://taohuumd.github.io/projects/SurMo/", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Tao Hu", "Fangzhou Hong", "Ziwei Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3a8"}, "filepath": "data/2312.01831v2.png", "tags": [], "_media_type": "image", "_rand": 0.999100107629856, "arXiv_link": "https://arxiv.org/html/2312.01831v2", "other_link": "", "title": "Equivariant plug-and-play image reconstruction", "abstract": "Plug-and-play algorithms constitute a popular framework for solving inverse\nimaging problems that rely on the implicit definition of an image prior via a\ndenoiser. These algorithms can leverage powerful pre-trained denoisers to solve\na wide range of imaging tasks, circumventing the necessity to train models on a\nper-task basis. Unfortunately, plug-and-play methods often show unstable\nbehaviors, hampering their promise of versatility and leading to suboptimal\nquality of reconstructed images. In this work, we show that enforcing\nequivariance to certain groups of transformations (rotations, reflections,\nand/or translations) on the denoiser strongly improves the stability of the\nalgorithm as well as its reconstruction quality. We provide a theoretical\nanalysis that illustrates the role of equivariance on better performance and\nstability. We present a simple algorithm that enforces equivariance on any\nexisting denoiser by simply applying a random transformation to the input of\nthe denoiser and the inverse transformation to the output at each iteration of\nthe algorithm. Experiments on multiple imaging modalities and denoising\nnetworks show that the equivariant plug-and-play algorithm improves both the\nreconstruction performance and the stability compared to their non-equivariant\ncounterparts.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Matthieu Terris", "Thomas Moreau", "Nelly Pustelnik", "Juli\u00e1n Tachella"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3a9"}, "filepath": "data/2312.02126.png", "tags": [], "_media_type": "image", "_rand": 0.9990376368011707, "arXiv_link": "https://arxiv.org/abs/2312.02126", "other_link": "", "title": "SplaTAM: Splat, Track & Map 3D Gaussians for Dense RGB-D SLAM", "abstract": "Dense simultaneous localization and mapping (SLAM) is crucial for robotics\nand augmented reality applications. However, current methods are often hampered\nby the non-volumetric or implicit way they represent a scene. This work\nintroduces SplaTAM, an approach that, for the first time, leverages explicit\nvolumetric representations, i.e., 3D Gaussians, to enable high-fidelity\nreconstruction from a single unposed RGB-D camera, surpassing the capabilities\nof existing methods. SplaTAM employs a simple online tracking and mapping\nsystem tailored to the underlying Gaussian representation. It utilizes a\nsilhouette mask to elegantly capture the presence of scene density. 
This\ncombination enables several benefits over prior representations, including fast\nrendering and dense optimization, quickly determining if areas have been\npreviously mapped, and structured map expansion by adding more Gaussians.\nExtensive experiments show that SplaTAM achieves up to 2x superior performance\nin camera pose estimation, map construction, and novel-view synthesis over\nexisting methods, paving the way for more immersive high-fidelity SLAM\napplications.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Nikhil Keetha", "Jay Karhade", "Krishna Murthy Jatavallabhula", "Gengshan Yang", "Sebastian Scherer", "Deva Ramanan", "Jonathon Luiten"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3aa"}, "filepath": "data/2404.13153.png", "tags": [], "_media_type": "image", "_rand": 0.9990637817594723, "arXiv_link": "https://arxiv.org/abs/2404.13153", "other_link": "https://github.com/ChengxuLiu/MISCFilter", "title": "Motion-adaptive Separable Collaborative Filters for Blind Motion Deblurring", "abstract": "Eliminating image blur produced by various kinds of motion has been a\nchallenging problem. Dominant approaches rely heavily on model capacity to\nremove blurring by reconstructing residual from blurry observation in feature\nspace. These practices not only prevent the capture of spatially variable\nmotion in the real world but also ignore the tailored handling of various\nmotions in image space. In this paper, we propose a novel real-world deblurring\nfiltering model called the Motion-adaptive Separable Collaborative (MISC)\nFilter. In particular, we use a motion estimation network to capture motion\ninformation from neighborhoods, thereby adaptively estimating spatially-variant\nmotion flow, mask, kernels, weights, and offsets to obtain the MISC Filter. The\nMISC Filter first aligns the motion-induced blurring patterns to the motion\nmiddle along the predicted flow direction, and then collaboratively filters the\naligned image through the predicted kernels, weights, and offsets to generate\nthe output. This design can handle more generalized and complex motion in a\nspatially differentiated manner. Furthermore, we analyze the relationships\nbetween the motion estimation network and the residual reconstruction network.\nExtensive experiments on four widely used benchmarks demonstrate that our\nmethod provides an effective solution for real-world motion blur removal and\nachieves state-of-the-art performance. 
Code is available at\nhttps://github.com/ChengxuLiu/MISCFilter", "keywords": ["Low-level vision"], "authors_list": ["Chengxu Liu", "Xuan Wang", "Xiangyu Xu", "Ruhao Tian", "Shuai Li", "Xueming Qian", "Ming-Hsuan Yang"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3ab"}, "filepath": "data/2405.07364.png", "tags": [], "_media_type": "image", "_rand": 0.9999352478195886, "arXiv_link": "https://arxiv.org/abs/2405.07364", "other_link": "https://github.com/amaralibey/Bag-of-Queries.", "title": "BoQ: A Place is Worth a Bag of Learnable Queries", "abstract": "In visual place recognition, accurately identifying and matching images of\nlocations under varying environmental conditions and viewpoints remains a\nsignificant challenge. In this paper, we introduce a new technique, called\nBag-of-Queries (BoQ), which learns a set of global queries designed to capture\nuniversal place-specific attributes. Unlike existing methods that employ\nself-attention and generate the queries directly from the input features, BoQ\nemploys distinct learnable global queries, which probe the input features via\ncross-attention, ensuring consistent information aggregation. In addition, our\ntechnique provides an interpretable attention mechanism and integrates with\nboth CNN and Vision Transformer backbones. The performance of BoQ is\ndemonstrated through extensive experiments on 14 large-scale benchmarks. It\nconsistently outperforms current state-of-the-art techniques including NetVLAD,\nMixVPR and EigenPlaces. Moreover, as a global retrieval technique (one-stage),\nBoQ surpasses two-stage retrieval methods, such as Patch-NetVLAD, TransVPR and\nR2Former, all while being orders of magnitude faster and more efficient. The\ncode and model weights are publicly available at\nhttps://github.com/amaralibey/Bag-of-Queries.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Amar Ali-bey", "Brahim Chaib-draa", "Philippe Gigu\u00e8re"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3ac"}, "filepath": "data/2403.00459.png", "tags": [], "_media_type": "image", "_rand": 0.9998857667769849, "arXiv_link": "https://arxiv.org/abs/2403.00459", "other_link": "https://github.com/zichongc/DoesFS", "title": "Deformable One-shot Face Stylization via DINO Semantic Guidance", "abstract": "This paper addresses the complex issue of one-shot face stylization, focusing\non the simultaneous consideration of appearance and structure, where previous\nmethods have fallen short. We explore deformation-aware face stylization that\ndiverges from traditional single-image style reference, opting for a real-style\nimage pair instead. The cornerstone of our method is the utilization of a\nself-supervised vision transformer, specifically DINO-ViT, to establish a\nrobust and consistent facial structure representation across both real and\nstyle domains. Our stylization process begins by adapting the StyleGAN\ngenerator to be deformation-aware through the integration of spatial\ntransformers (STN). 
We then introduce two innovative constraints for generator\nfine-tuning under the guidance of DINO semantics: i) a directional deformation\nloss that regulates directional vectors in DINO space, and ii) a relative\nstructural consistency constraint based on DINO token self-similarities,\nensuring diverse generation. Additionally, style-mixing is employed to align\nthe color generation with the reference, minimizing inconsistent\ncorrespondences. This framework delivers enhanced deformability for general\none-shot face stylization, achieving notable efficiency with a fine-tuning\nduration of approximately 10 minutes. Extensive qualitative and quantitative\ncomparisons demonstrate our superiority over state-of-the-art one-shot face\nstylization methods. Code is available at https://github.com/zichongc/DoesFS", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Yang Zhou", "Zichong Chen", "Hui Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3ad"}, "filepath": "data/2311.15980.png", "tags": [], "_media_type": "image", "_rand": 0.9993056217176628, "arXiv_link": "https://arxiv.org/abs/2311.15980", "other_link": "https://nju-3dv.github.io/projects/direct25.", "title": "Direct2.5: Diverse Text-to-3D Generation via Multi-view 2.5D Diffusion", "abstract": "Recent advances in generative AI have unveiled significant potential for the\ncreation of 3D content. However, current methods either apply a pre-trained 2D\ndiffusion model with the time-consuming score distillation sampling (SDS), or a\ndirect 3D diffusion model trained on limited 3D data losing generation\ndiversity. In this work, we approach the problem by employing a multi-view 2.5D\ndiffusion fine-tuned from a pre-trained 2D diffusion model. The multi-view 2.5D\ndiffusion directly models the structural distribution of 3D data, while still\nmaintaining the strong generalization ability of the original 2D diffusion\nmodel, filling the gap between 2D diffusion-based and direct 3D diffusion-based\nmethods for 3D content generation. During inference, multi-view normal maps are\ngenerated using the 2.5D diffusion, and a novel differentiable rasterization\nscheme is introduced to fuse the almost consistent multi-view normal maps into\na consistent 3D model. We further design a normal-conditioned multi-view image\ngeneration module for fast appearance generation given the 3D geometry. Our\nmethod is a one-pass diffusion process and does not require any SDS\noptimization as post-processing. We demonstrate through extensive experiments\nthat, our direct 2.5D generation with the specially-designed fusion scheme can\nachieve diverse, mode-seeking-free, and high-fidelity 3D content generation in\nonly 10 seconds. 
Project page: https://nju-3dv.github.io/projects/direct25.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Yuanxun Lu", "Jingyang Zhang", "Shiwei Li", "Tian Fang", "David McKinnon", "Yanghai Tsin", "Long Quan", "Xun Cao", "Yao Yao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3ae"}, "filepath": "data/2403.18360.png", "tags": [], "_media_type": "image", "_rand": 0.9991225447446546, "arXiv_link": "https://arxiv.org/abs/2403.18360", "other_link": "https://dotrannhattuong.github.io/ECB/website.", "title": "Learning CNN on ViT: A Hybrid Model to Explicitly Class-specific Boundaries for Domain Adaptation", "abstract": "Most domain adaptation (DA) methods are based on either a convolutional\nneural networks (CNNs) or a vision transformers (ViTs). They align the\ndistribution differences between domains as encoders without considering their\nunique characteristics. For instance, ViT excels in accuracy due to its\nsuperior ability to capture global representations, while CNN has an advantage\nin capturing local representations. This fact has led us to design a hybrid\nmethod to fully take advantage of both ViT and CNN, called Explicitly\nClass-specific Boundaries (ECB). ECB learns CNN on ViT to combine their\ndistinct strengths. In particular, we leverage ViT's properties to explicitly\nfind class-specific decision boundaries by maximizing the discrepancy between\nthe outputs of the two classifiers to detect target samples far from the source\nsupport. In contrast, the CNN encoder clusters target features based on the\npreviously defined class-specific boundaries by minimizing the discrepancy\nbetween the probabilities of the two classifiers. Finally, ViT and CNN mutually\nexchange knowledge to improve the quality of pseudo labels and reduce the\nknowledge discrepancies of these models. Compared to conventional DA methods,\nour ECB achieves superior performance, which verifies its effectiveness in this\nhybrid model. The project website can be found\nhttps://dotrannhattuong.github.io/ECB/website.", "keywords": [], "authors_list": ["Ba Hung Ngo", "Nhat-Tuong Do-Tran", "Tuan-Ngoc Nguyen", "Hae-Gon Jeon", "Tae Jong Choi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3af"}, "filepath": "data/2404.02176.png", "tags": [], "_media_type": "image", "_rand": 0.9998940812868996, "arXiv_link": "https://arxiv.org/abs/2404.02176", "other_link": "", "title": "Versatile Navigation under Partial Observability via Value-Guided Diffusion Policy", "abstract": "Route planning for navigation under partial observability plays a crucial\nrole in modern robotics and autonomous driving. Existing route planning\napproaches can be categorized into two main classes: traditional autoregressive\nand diffusion-based methods. The former often fails due to its myopic nature,\nwhile the latter either assumes full observability or struggles to adapt to\nunfamiliar scenarios, due to strong couplings with behavior cloning from\nexperts. 
To address these deficiencies, we propose a versatile diffusion-based\napproach for both 2D and 3D route planning under partial observability.\nSpecifically, our value-guided diffusion policy first generates plans to\npredict actions across various timesteps, providing ample foresight to the\nplanning. It then employs a differentiable planner with state estimations to\nderive a value function, directing the agent's exploration and goal-seeking\nbehaviors without seeking experts while explicitly addressing partial\nobservability. During inference, our policy is further enhanced by a\nbest-plan-selection strategy, substantially boosting the planning success rate.\nMoreover, we propose projecting point clouds, derived from RGB-D inputs, onto\n2D grid-based bird-eye-view maps via semantic segmentation, generalizing to 3D\nenvironments. This simple yet effective adaption enables zero-shot transfer\nfrom 2D-trained policy to 3D, cutting across the laborious training for 3D\npolicy, and thus certifying our versatility. Experimental results demonstrate\nour superior performance, particularly in navigating situations beyond expert\ndemonstrations, surpassing state-of-the-art autoregressive and diffusion-based\nbaselines for both 2D and 3D scenarios.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Gengyu Zhang", "Hao Tang", "Yan Yan"], "category_name": "Robotics", "all_categories": ["Robotics", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3b0"}, "filepath": "data/2405.09771.png", "tags": [], "_media_type": "image", "_rand": 0.9994152521035545, "arXiv_link": "https://arxiv.org/abs/2405.09771", "other_link": "", "title": "PerAda: Parameter-Efficient Federated Learning Personalization with Generalization Guarantees", "abstract": "Federated Prompt Learning (FPL) incorporates large pre-trained\nVision-Language models (VLM) into federated learning through prompt tuning. The\ntransferable representations and remarkable generalization capacity of VLM make\nthem highly compatible with the integration of federated learning. Addressing\ndata heterogeneity in federated learning requires personalization, but\nexcessive focus on it across clients could compromise the model's ability to\ngeneralize effectively. To preserve the impressive generalization capability of\nVLM, it is crucial to strike a balance between personalization and\ngeneralization in FPL. To tackle this challenge, we proposed Federated Prompt\nLearning with CLIP Generalization and low-rank Personalization (FedPGP), which\nemploys pre-trained CLIP to provide knowledge-guidance on the global prompt for\nimproved generalization and incorporates a low-rank adaptation term to\npersonalize the global prompt. Further, FedPGP integrates a prompt-wise\ncontrastive loss to achieve knowledge guidance and personalized adaptation\nsimultaneously, enabling a harmonious balance between personalization and\ngeneralization in FPL. 
We conduct extensive experiments on various datasets to\nexplore base-to-novel generalization in both category-level and domain-level\nscenarios with heterogeneous data, showing the superiority of FedPGP in\nbalancing generalization and personalization.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Chulin Xie", "De-An Huang", "Wenda Chu", "Daguang Xu", "Chaowei Xiao", "Bo Li", "Anima Anandkumar"], "category_name": "Machine Learning", "all_categories": ["Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3b1"}, "filepath": "data/2312.00404v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993296945414747, "arXiv_link": "https://arxiv.org/html/2312.00404v1", "other_link": "", "title": "Bi-Causal: Group Activity Recognition via Bidirectional Causality", "abstract": "Human activity recognition (HAR) is a key challenge in pervasive computing\nand its solutions have been presented based on various disciplines.\nSpecifically, for HAR in a smart space without privacy and accessibility\nissues, data streams generated by deployed pervasive sensors are leveraged. In\nthis paper, we focus on a group activity by which a group of users perform a\ncollaborative task without user identification and propose an efficient group\nactivity recognition scheme which extracts causality patterns from pervasive\nsensor event sequences generated by a group of users to support as good\nrecognition accuracy as the state-of-the-art graphical model. To filter out\nirrelevant noise events from a given data stream, a set of rules is leveraged\nto highlight causally related events. Then, a pattern-tree algorithm extracts\nfrequent causal patterns by means of a growing tree structure. Based on the\nextracted patterns, a weighted sum-based pattern matching algorithm computes\nthe likelihoods of stored group activities to the given test event sequence by\nmeans of matched event pattern counts for group activity recognition. We\nevaluate the proposed scheme using the data collected from our testbed and\nCASAS datasets where users perform their tasks on a daily basis and validate\nits effectiveness in a real environment. Experiment results show that the\nproposed scheme performs higher recognition accuracy and with a small amount of\nruntime overhead than the existing schemes.", "keywords": [], "authors_list": ["Youliang Zhang", "Wenxuan Liu", "danni xu", "Zhuo Zhou", "Zheng Wang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Databases"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3b2"}, "filepath": "data/2311.13127v3.png", "tags": [], "_media_type": "image", "_rand": 0.9999336897789806, "arXiv_link": "https://arxiv.org/abs/2311.13127v3", "other_link": "https://github.com/liuyixin-louis/MetaCloak.", "title": "MetaCloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning", "abstract": "Text-to-image diffusion models allow seamless generation of personalized\nimages from scant reference photos. Yet, these tools, in the wrong hands, can\nfabricate misleading or harmful content, endangering individuals. To address\nthis problem, existing poisoning-based approaches perturb user images in an\nimperceptible way to render them \"unlearnable\" from malicious uses. 
We identify\ntwo limitations of these defending approaches: i) sub-optimal due to the\nhand-crafted heuristics for solving the intractable bilevel optimization and\nii) lack of robustness against simple data transformations like Gaussian\nfiltering. To solve these challenges, we propose MetaCloak, which solves the\nbi-level poisoning problem with a meta-learning framework with an additional\ntransformation sampling process to craft transferable and robust perturbation.\nSpecifically, we employ a pool of surrogate diffusion models to craft\ntransferable and model-agnostic perturbation. Furthermore, by incorporating an\nadditional transformation process, we design a simple denoising-error\nmaximization loss that is sufficient for causing transformation-robust semantic\ndistortion and degradation in a personalized generation. Extensive experiments\non the VGGFace2 and CelebA-HQ datasets show that MetaCloak outperforms existing\napproaches. Notably, MetaCloak can successfully fool online training services\nlike Replicate, in a black-box manner, demonstrating the effectiveness of\nMetaCloak in real-world scenarios. Our code is available at\nhttps://github.com/liuyixin-louis/MetaCloak.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Yixin Liu", "Chenrui Fan", "Yutong Dai", "Xun Chen", "Pan Zhou", "Lichao Sun"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Cryptography and Security"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3b3"}, "filepath": "data/2312.02813.png", "tags": [], "_media_type": "image", "_rand": 0.9994854818281181, "arXiv_link": "https://arxiv.org/abs/2312.02813", "other_link": "", "title": "BIVDiff: A Training-free Framework for General-Purpose Video Synthesis via Bridging Image and Video Diffusion Models", "abstract": "Diffusion models have made tremendous progress in text-driven image and video\ngeneration. Now text-to-image foundation models are widely applied to various\ndownstream image synthesis tasks, such as controllable image generation and\nimage editing, while downstream video synthesis tasks are less explored for\nseveral reasons. First, it requires huge memory and computation overhead to\ntrain a video generation foundation model. Even with video foundation models,\nadditional costly training is still required for downstream video synthesis\ntasks. Second, although some works extend image diffusion models into videos in\na training-free manner, temporal consistency cannot be well preserved. Finally,\nthese adaption methods are specifically designed for one task and fail to\ngeneralize to different tasks. To mitigate these issues, we propose a\ntraining-free general-purpose video synthesis framework, coined as {\\bf\nBIVDiff}, via bridging specific image diffusion models and general\ntext-to-video foundation diffusion models. Specifically, we first use a\nspecific image diffusion model (e.g., ControlNet and Instruct Pix2Pix) for\nframe-wise video generation, then perform Mixed Inversion on the generated\nvideo, and finally input the inverted latents into the video diffusion models\n(e.g., VidRD and ZeroScope) for temporal smoothing. This decoupled framework\nenables flexible image model selection for different purposes with strong task\ngeneralization and high efficiency. 
To validate the effectiveness and general\nuse of BIVDiff, we perform a wide range of video synthesis tasks, including\ncontrollable video generation, video editing, video inpainting, and\noutpainting.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Fengyuan Shi", "Jiaxi Gu", "Hang Xu", "Songcen Xu", "Wei Zhang", "Limin Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3b4"}, "filepath": "data/2312.07395.png", "tags": [], "_media_type": "image", "_rand": 0.9996713318140505, "arXiv_link": "https://arxiv.org/abs/2312.07395", "other_link": "", "title": "A Simple Recipe for Contrastively Pre-training Video-First Encoders Beyond 16 Frames", "abstract": "Understanding long, real-world videos requires modeling of long-range visual\ndependencies. To this end, we explore video-first architectures, building on\nthe common paradigm of transferring large-scale, image--text models to video\nvia shallow temporal fusion. However, we expose two limitations to the\napproach: (1) decreased spatial capabilities, likely due to poor\nvideo--language alignment in standard video datasets, and (2) higher memory\nconsumption, bottlenecking the number of frames that can be processed. To\nmitigate the memory bottleneck, we systematically analyze the memory/accuracy\ntrade-off of various efficient methods: factorized attention,\nparameter-efficient image-to-video adaptation, input masking, and\nmulti-resolution patchification. Surprisingly, simply masking large portions of\nthe video (up to 75%) during contrastive pre-training proves to be one of the\nmost robust ways to scale encoders to videos up to 4.3 minutes at 1 FPS. Our\nsimple approach for training long video-to-text models, which scales to 1B\nparameters, does not add new architectural complexity and is able to outperform\nthe popular paradigm of using much larger LLMs as an information aggregator\nover segment-based information on benchmarks with long-range temporal\ndependencies (YouCook2, EgoSchema).", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Pinelopi Papalampidi", "Skanda Koppula", "Shreya Pathak", "Justin Chiu", "Joseph Heyward", "Viorica Patraucean", "Jiajun Shen", "Antoine Miech", "Andrew Zisserman", "Aida Nematzadeh"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3b5"}, "filepath": "data/2403.02265v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994681106427492, "arXiv_link": "https://arxiv.org/abs/2403.02265v1", "other_link": "", "title": "DaReNeRF: Direction-aware Representation for Dynamic Scenes", "abstract": "Addressing the intricate challenge of modeling and re-rendering dynamic\nscenes, most recent approaches have sought to simplify these complexities using\nplane-based explicit representations, overcoming the slow training time issues\nassociated with methods like Neural Radiance Fields (NeRF) and implicit\nrepresentations. 
However, the straightforward decomposition of 4D dynamic\nscenes into multiple 2D plane-based representations proves insufficient for\nre-rendering high-fidelity scenes with complex motions. In response, we present\na novel direction-aware representation (DaRe) approach that captures scene\ndynamics from six different directions. This learned representation undergoes\nan inverse dual-tree complex wavelet transformation (DTCWT) to recover\nplane-based information. DaReNeRF computes features for each space-time point\nby fusing vectors from these recovered planes. Combining DaReNeRF with a tiny\nMLP for color regression and leveraging volume rendering in training yield\nstate-of-the-art performance in novel view synthesis for complex dynamic\nscenes. Notably, to address redundancy introduced by the six real and six\nimaginary direction-aware wavelet coefficients, we introduce a trainable\nmasking approach, mitigating storage issues without significant performance\ndecline. Moreover, DaReNeRF maintains a 2x reduction in training time compared\nto prior art while delivering superior performance.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Image and video generation and manipulation", "Scene analysis and understanding"], "authors_list": ["Ange Lou", "Benjamin Planche", "Zhongpai Gao", "Yamin Li", "Tianyu Luan", "Hao Ding", "Terrence Chen", "Jack Noble", "Ziyan Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3b6"}, "filepath": "data/2403.02746.png", "tags": [], "_media_type": "image", "_rand": 0.999969904083319, "arXiv_link": "https://arxiv.org/abs/2403.02746", "other_link": "", "title": "Learning without Exact Guidance: Updating Large-scale High-resolution Land Cover Maps from Low-resolution Historical Labels", "abstract": "Large-scale high-resolution (HR) land-cover mapping is a vital task to survey\nthe Earth's surface and resolve many challenges facing humanity. However, it is\nstill a non-trivial task hindered by complex ground details, various landforms,\nand the scarcity of accurate training labels over a wide-span geographic area.\nIn this paper, we propose an efficient, weakly supervised framework\n(Paraformer) to guide large-scale HR land-cover mapping with easy-access\nhistorical land-cover data of low resolution (LR). Specifically, existing\nland-cover mapping approaches reveal the dominance of CNNs in preserving local\nground details but still suffer from insufficient global modeling in various\nlandforms. Therefore, we design a parallel CNN-Transformer feature extractor in\nParaformer, consisting of a downsampling-free CNN branch and a Transformer\nbranch, to jointly capture local and global contextual information. Besides,\nfacing the spatial mismatch of training data, a pseudo-label-assisted training\n(PLAT) module is adopted to reasonably refine LR labels for weakly supervised\nsemantic segmentation of HR images. 
Experiments on two large-scale datasets\ndemonstrate the superiority of Paraformer over other state-of-the-art methods\nfor automatically updating HR land-cover maps from LR historical labels.", "keywords": ["Efficient and scalable vision", "Remote sensing and photogrammetry"], "authors_list": ["Zhuohong Li", "Wei He", "Jiepan Li", "Fangxiao Lu", "Hongyan Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3b7"}, "filepath": "data/2403.19474.png", "tags": [], "_media_type": "image", "_rand": 0.9995915460638658, "arXiv_link": "https://arxiv.org/abs/2403.19474", "other_link": "", "title": "SG-PGM: Partial Graph Matching Network with Semantic Geometric Fusion for 3D Scene Graph Alignment and Its Downstream Tasks", "abstract": "Scene graphs have been recently introduced into 3D spatial understanding as a\ncomprehensive representation of the scene. The alignment between 3D scene\ngraphs is the first step of many downstream tasks such as scene graph aided\npoint cloud registration, mosaicking, overlap checking, and robot navigation.\nIn this work, we treat 3D scene graph alignment as a partial graph-matching\nproblem and propose to solve it with a graph neural network. We reuse the\ngeometric features learned by a point cloud registration method and associate\nthe clustered point-level geometric features with the node-level semantic\nfeature via our designed feature fusion module. Partial matching is enabled by\nusing a learnable method to select the top-k similar node pairs. Subsequent\ndownstream tasks such as point cloud registration are achieved by running a\npre-trained registration network within the matched regions. We further propose\na point-matching rescoring method, that uses the node-wise alignment of the 3D\nscene graph to reweight the matching candidates from a pre-trained point cloud\nregistration method. It reduces the false point correspondences estimated\nespecially in low-overlapping cases. Experiments show that our method improves\nthe alignment accuracy by 10~20% in low-overlap and random transformation\nscenarios and outperforms the existing work in multiple downstream tasks.", "keywords": ["Scene analysis and understanding", "Deep learning architectures and techniques"], "authors_list": ["Yaxu Xie", "Alain Pagani", "Didier Stricker"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3b8"}, "filepath": "data/2403.05369.png", "tags": [], "_media_type": "image", "_rand": 0.9996130980323071, "arXiv_link": "https://arxiv.org/abs/2403.05369", "other_link": "https://github.com/Linwei-Chen/FADC.", "title": "Frequency-Adaptive Dilated Convolution for Semantic Segmentation", "abstract": "Dilated convolution, which expands the receptive field by inserting gaps\nbetween its consecutive elements, is widely employed in computer vision. In\nthis study, we propose three strategies to improve individual phases of dilated\nconvolution from the view of spectrum analysis. Departing from the conventional\npractice of fixing a global dilation rate as a hyperparameter, we introduce\nFrequency-Adaptive Dilated Convolution (FADC), which dynamically adjusts\ndilation rates spatially based on local frequency components. 
Subsequently, we\ndesign two plug-in modules to directly enhance effective bandwidth and\nreceptive field size. The Adaptive Kernel (AdaKern) module decomposes\nconvolution weights into low-frequency and high-frequency components,\ndynamically adjusting the ratio between these components on a per-channel\nbasis. By increasing the high-frequency part of convolution weights, AdaKern\ncaptures more high-frequency components, thereby improving effective bandwidth.\nThe Frequency Selection (FreqSelect) module optimally balances high- and\nlow-frequency components in feature representations through spatially variant\nreweighting. It suppresses high frequencies in the background to encourage FADC\nto learn a larger dilation, thereby increasing the receptive field for an\nexpanded scope. Extensive experiments on segmentation and object detection\nconsistently validate the efficacy of our approach. The code is publicly\navailable at https://github.com/Linwei-Chen/FADC.", "keywords": [], "authors_list": ["Linwei Chen", "Lin Gu", "Dezhi Zheng", "Ying Fu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3b9"}, "filepath": "data/2404.14006.png", "tags": [], "_media_type": "image", "_rand": 0.9994455175325677, "arXiv_link": "https://arxiv.org/abs/2404.14006", "other_link": "", "title": "Distilled Datamodel with Reverse Gradient Matching", "abstract": "The proliferation of large-scale AI models trained on extensive datasets has\nrevolutionized machine learning. With these models taking on increasingly\ncentral roles in various applications, the need to understand their behavior\nand enhance interpretability has become paramount. To investigate the impact of\nchanges in training data on a pre-trained model, a common approach is\nleave-one-out retraining. This entails systematically altering the training\ndataset by removing specific samples to observe resulting changes within the\nmodel. However, retraining the model for each altered dataset presents a\nsignificant computational challenge, given the need to perform this operation\nfor every dataset variation. In this paper, we introduce an efficient framework\nfor assessing data impact, comprising offline training and online evaluation\nstages. During the offline training phase, we approximate the influence of\ntraining data on the target model through a distilled synset, formulated as a\nreversed gradient matching problem. For online evaluation, we expedite the\nleave-one-out process using the synset, which is then utilized to compute the\nattribution matrix based on the evaluation objective. 
Experimental evaluations,\nincluding training data attribution and assessments of data quality,\ndemonstrate that our proposed method achieves comparable model behavior\nevaluation while significantly speeding up the process compared to the direct\nretraining method.", "keywords": ["Efficient and scalable vision", "Efficient and scalable vision"], "authors_list": ["Jingwen Ye", "Ruonan Yu", "Songhua Liu", "Xinchao Wang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3ba"}, "filepath": "data/2403.06974.png", "tags": [], "_media_type": "image", "_rand": 0.999499973496028, "arXiv_link": "https://arxiv.org/abs/2403.06974", "other_link": "https://xuxw98.github.io/Online3D/}{Project", "title": "Memory-based Adapters for Online 3D Scene Perception", "abstract": "In this paper, we propose a new framework for online 3D scene perception.\nConventional 3D scene perception methods are offline, i.e., take an already\nreconstructed 3D scene geometry as input, which is not applicable in robotic\napplications where the input data is streaming RGB-D videos rather than a\ncomplete 3D scene reconstructed from pre-collected RGB-D videos. To deal with\nonline 3D scene perception tasks where data collection and perception should be\nperformed simultaneously, the model should be able to process 3D scenes frame\nby frame and make use of the temporal information. To this end, we propose an\nadapter-based plug-and-play module for the backbone of 3D scene perception\nmodel, which constructs memory to cache and aggregate the extracted RGB-D\nfeatures to empower offline models with temporal learning ability.\nSpecifically, we propose a queued memory mechanism to cache the supporting\npoint cloud and image features. Then we devise aggregation modules which\ndirectly perform on the memory and pass temporal information to current frame.\nWe further propose 3D-to-2D adapter to enhance image features with strong\nglobal context. Our adapters can be easily inserted into mainstream offline\narchitectures of different tasks and significantly boost their performance on\nonline tasks. Extensive experiments on ScanNet and SceneNN datasets demonstrate\nour approach achieves leading performance on three 3D scene perception tasks\ncompared with state-of-the-art online methods by simply finetuning existing\noffline models, without any model and task-specific designs.\n\\href{https://xuxw98.github.io/Online3D/}{Project page}.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xiuwei Xu", "Chong Xia", "Ziwei Wang", "Linqing Zhao", "Linqing Zhao", "Yueqi Duan", "Jie Zhou", "Jiwen Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3bb"}, "filepath": "data/2404.14016.png", "tags": [], "_media_type": "image", "_rand": 0.9997879108503386, "arXiv_link": "https://arxiv.org/abs/2404.14016", "other_link": "", "title": "Ungeneralizable Examples", "abstract": "The training of contemporary deep learning models heavily relies on publicly\navailable data, posing a risk of unauthorized access to online data and raising\nconcerns about data privacy. 
Current approaches to creating unlearnable data\ninvolve incorporating small, specially designed noises, but these methods\nstrictly limit data usability, overlooking its potential usage in authorized\nscenarios. In this paper, we extend the concept of unlearnable data to\nconditional data learnability and introduce \\textbf{U}n\\textbf{G}eneralizable\n\\textbf{E}xamples (UGEs). UGEs exhibit learnability for authorized users while\nmaintaining unlearnability for potential hackers. The protector defines the\nauthorized network and optimizes UGEs to match the gradients of the original\ndata and its ungeneralizable version, ensuring learnability. To prevent\nunauthorized learning, UGEs are trained by maximizing a designated distance\nloss in a common feature space. Additionally, to further safeguard the\nauthorized side from potential attacks, we introduce additional undistillation\noptimization. Experimental results on multiple datasets and various networks\ndemonstrate that the proposed UGEs framework preserves data usability while\nreducing training performance on hacker networks, even under different types of\nattacks.", "keywords": [], "authors_list": ["Jingwen Ye", "Xinchao Wang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3bc"}, "filepath": "data/2312.13396v1.png", "tags": [], "_media_type": "image", "_rand": 0.999948963299582, "arXiv_link": "https://arxiv.org/html/2312.13396v1", "other_link": "", "title": "IIRP-Net: Iterative Inference Residual Pyramid Network for Enhanced Image Registration", "abstract": "Single-image super-resolution (SISR) has seen significant advancements\nthrough the integration of deep learning. However, the substantial\ncomputational and memory requirements of existing methods often limit their\npractical application. This paper introduces a new Efficient Pyramid Network\n(EPNet) that harmoniously merges an Edge Split Pyramid Module (ESPM) with a\nPanoramic Feature Extraction Module (PFEM) to overcome the limitations of\nexisting methods, particularly in terms of computational efficiency. The ESPM\napplies a pyramid-based channel separation strategy, boosting feature\nextraction while maintaining computational efficiency. The PFEM, a novel fusion\nof CNN and Transformer structures, enables the concurrent extraction of local\nand global features, thereby providing a panoramic view of the image landscape.\nOur architecture integrates the PFEM in a manner that facilitates the\nstreamlined exchange of feature information and allows for the further\nrefinement of image texture details. Experimental results indicate that our\nmodel outperforms existing state-of-the-art methods in image resolution\nquality, while considerably decreasing computational and memory costs. 
This\nresearch contributes to the ongoing evolution of efficient and practical SISR\nmethodologies, bearing broader implications for the field of computer vision.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Tai Ma", "zhangsuwei", "Jiafeng Li", "Ying Wen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3bd"}, "filepath": "data/2403.05890.png", "tags": [], "_media_type": "image", "_rand": 0.9992349712492901, "arXiv_link": "https://arxiv.org/abs/2403.05890", "other_link": "", "title": "Towards Efficient Replay in Federated Incremental Learning", "abstract": "In Federated Learning (FL), the data in each client is typically assumed\nfixed or static. However, data often comes in an incremental manner in\nreal-world applications, where the data domain may increase dynamically. In\nthis work, we study catastrophic forgetting with data heterogeneity in\nFederated Incremental Learning (FIL) scenarios where edge clients may lack\nenough storage space to retain full data. We propose to employ a simple,\ngeneric framework for FIL named Re-Fed, which can coordinate each client to\ncache important samples for replay. More specifically, when a new task arrives,\neach client first caches selected previous samples based on their global and\nlocal importance. Then, the client trains the local model with both the cached\nsamples and the samples from the new task. Theoretically, we analyze the\nability of Re-Fed to discover important samples for replay thus alleviating the\ncatastrophic forgetting problem. Moreover, we empirically show that Re-Fed\nachieves competitive performance compared to state-of-the-art methods.", "keywords": [], "authors_list": ["Yichen Li", "Qunwei Li", "Haozhao Wang", "Ruixuan Li", "Wenliang Zhong", "Guannan Zhang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Distributed, Parallel, and Cluster Computing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3be"}, "filepath": "data/2404.01725.png", "tags": [], "_media_type": "image", "_rand": 0.9991120440115148, "arXiv_link": "https://arxiv.org/abs/2404.01725", "other_link": "https://github.com/xingaoli/DP-HOI.", "title": "Disentangled Pre-training for Human-Object Interaction Detection", "abstract": "Detecting human-object interaction (HOI) has long been limited by the amount\nof supervised data available. Recent approaches address this issue by\npre-training according to pseudo-labels, which align object regions with HOI\ntriplets parsed from image captions. However, pseudo-labeling is tricky and\nnoisy, making HOI pre-training a complex process. Therefore, we propose an\nefficient disentangled pre-training method for HOI detection (DP-HOI) to\naddress this problem. First, DP-HOI utilizes object detection and action\nrecognition datasets to pre-train the detection and interaction decoder layers,\nrespectively. Then, we arrange these decoder layers so that the pre-training\narchitecture is consistent with the downstream HOI detection task. This\nfacilitates efficient knowledge transfer. Specifically, the detection decoder\nidentifies reliable human instances in each action recognition dataset image,\ngenerates one corresponding query, and feeds it into the interaction decoder\nfor verb classification. 
Next, we combine the human instance verb predictions\nin the same image and impose image-level supervision. The DP-HOI structure can\nbe easily adapted to the HOI detection task, enabling effective model parameter\ninitialization. Therefore, it significantly enhances the performance of\nexisting HOI detection models on a broad range of rare categories. The code and\npre-trained weight are available at https://github.com/xingaoli/DP-HOI.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Zhuolong Li", "Xingao Li", "Changxing Ding", "Xiangmin Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3bf"}, "filepath": "data/2403.02330v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995907557167458, "arXiv_link": "https://arxiv.org/abs/2403.02330v1", "other_link": "", "title": "RegionGPT: Towards Region Understanding Vision Language Model", "abstract": "Vision language models (VLMs) have experienced rapid advancements through the\nintegration of large language models (LLMs) with image-text pairs, yet they\nstruggle with detailed regional visual understanding due to limited spatial\nawareness of the vision encoder, and the use of coarse-grained training data\nthat lacks detailed, region-specific captions. To address this, we introduce\nRegionGPT (short as RGPT), a novel framework designed for complex region-level\ncaptioning and understanding. RGPT enhances the spatial awareness of regional\nrepresentation with simple yet effective modifications to existing visual\nencoders in VLMs. We further improve performance on tasks requiring a specific\noutput scope by integrating task-guided instruction prompts during both\ntraining and inference phases, while maintaining the model's versatility for\ngeneral-purpose tasks. Additionally, we develop an automated region caption\ndata generation pipeline, enriching the training set with detailed region-level\ncaptions. We demonstrate that a universal RGPT model can be effectively applied\nand significantly enhancing performance across a range of region-level tasks,\nincluding but not limited to complex region descriptions, reasoning, object\nclassification, and referring expressions comprehension.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Scene analysis and understanding"], "authors_list": ["Qiushan Guo", "Shalini De Mello", "Danny Yin", "Wonmin Byeon", "Ka Chun Cheung", "Yizhou Yu", "Ping Luo", "Sifei Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3c0"}, "filepath": "data/2404.13605.png", "tags": [], "_media_type": "image", "_rand": 0.9991337575774236, "arXiv_link": "https://arxiv.org/abs/2404.13605", "other_link": "", "title": "Turb-Seg-Res: A Segment-then-Restore Pipeline for Dynamic Videos with Atmospheric Turbulence", "abstract": "Tackling image degradation due to atmospheric turbulence, particularly in\ndynamic environment, remains a challenge for long-range imaging systems.\nExisting techniques have been primarily designed for static scenes or scenes\nwith small motion. This paper presents the first segment-then-restore pipeline\nfor restoring the videos of dynamic scenes in turbulent environment. 
We\nleverage mean optical flow with an unsupervised motion segmentation method to\nseparate dynamic and static scene components prior to restoration. After camera\nshake compensation and segmentation, we introduce foreground/background\nenhancement leveraging the statistics of turbulence strength and a transformer\nmodel trained on a novel noise-based procedural turbulence generator for fast\ndataset augmentation. Benchmarked against existing restoration methods, our\napproach restores most of the geometric distortion and enhances sharpness for\nvideos. We make our code, simulator, and data publicly available to advance the\nfield of video restoration from turbulence: riponcs.github.io/TurbSegRes", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Ripon Saha", "Dehao Qin", "Nianyi Li", "Jinwei Ye", "Suren Jayasuriya"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3c1"}, "filepath": "data/2404.01612.png", "tags": [], "_media_type": "image", "_rand": 0.9997761108179143, "arXiv_link": "https://arxiv.org/abs/2404.01612", "other_link": "https://github.com/LMozart/CVPR2024-SpinUP.", "title": "Spin-UP: Spin Light for Natural Light Uncalibrated Photometric Stereo", "abstract": "Natural Light Uncalibrated Photometric Stereo (NaUPS) relieves the strict\nenvironment and light assumptions in classical Uncalibrated Photometric Stereo\n(UPS) methods. However, due to the intrinsic ill-posedness and high-dimensional\nambiguities, addressing NaUPS is still an open question. Existing works impose\nstrong assumptions on the environment lights and objects' material, restricting\nthe effectiveness in more general scenarios. Alternatively, some methods\nleverage supervised learning with intricate models while lacking\ninterpretability, resulting in a biased estimation. In this work, we proposed\nSpin Light Uncalibrated Photometric Stereo (Spin-UP), an unsupervised method to\ntackle NaUPS in various environment lights and objects. The proposed method\nuses a novel setup that captures the object's images on a rotatable platform,\nwhich mitigates NaUPS's ill-posedness by reducing unknowns and provides\nreliable priors to alleviate NaUPS's ambiguities. Leveraging neural inverse\nrendering and the proposed training strategies, Spin-UP recovers surface\nnormals, environment light, and isotropic reflectance under complex natural\nlight with low computational cost. Experiments have shown that Spin-UP\noutperforms other supervised / unsupervised NaUPS methods and achieves\nstate-of-the-art performance on synthetic and real-world datasets. 
Codes and\ndata are available at https://github.com/LMozart/CVPR2024-SpinUP.", "keywords": ["Efficient and scalable vision", "Low-level vision"], "authors_list": ["Zongrui Li", "Zhan Lu", "Haojie Yan", "Boxin Shi", "Gang Pan", "Qian Zheng", "Xudong Jiang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3c2"}, "filepath": "data/2312.02914.png", "tags": [], "_media_type": "image", "_rand": 0.999782462924568, "arXiv_link": "https://arxiv.org/abs/2312.02914", "other_link": "", "title": "Unsupervised Video Domain Adaptation with Masked Pre-Training and Collaborative Self-Training", "abstract": "In this work, we tackle the problem of unsupervised domain adaptation (UDA)\nfor video action recognition. Our approach, which we call UNITE, uses an image\nteacher model to adapt a video student model to the target domain. UNITE first\nemploys self-supervised pre-training to promote discriminative feature learning\non target domain videos using a teacher-guided masked distillation objective.\nWe then perform self-training on masked target data, using the video student\nmodel and image teacher model together to generate improved pseudolabels for\nunlabeled target videos. Our self-training process successfully leverages the\nstrengths of both models to achieve strong transfer performance across domains.\nWe evaluate our approach on multiple video domain adaptation benchmarks and\nobserve significant improvements upon previously reported results.", "keywords": [], "authors_list": ["Arun Reddy", "William Paul", "Corban Rivera", "Ketul Shah", "Celso M. de Melo", "Rama Chellappa"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3c3"}, "filepath": "data/2404.03242.png", "tags": [], "_media_type": "image", "_rand": 0.999876107102844, "arXiv_link": "https://arxiv.org/abs/2404.03242", "other_link": "", "title": "Would Deep Generative Models Amplify Bias in Future Models?", "abstract": "We investigate the impact of deep generative models on potential social\nbiases in upcoming computer vision models. As the internet witnesses an\nincreasing influx of AI-generated images, concerns arise regarding inherent\nbiases that may accompany them, potentially leading to the dissemination of\nharmful content. This paper explores whether a detrimental feedback loop,\nresulting in bias amplification, would occur if generated images were used as\nthe training data for future models. We conduct simulations by progressively\nsubstituting original images in COCO and CC3M datasets with images generated\nthrough Stable Diffusion. The modified datasets are used to train OpenCLIP and\nimage captioning models, which we evaluate in terms of quality and bias.\nContrary to expectations, our findings indicate that introducing generated\nimages during training does not uniformly amplify bias. Instead, instances of\nbias mitigation across specific tasks are observed. 
We further explore the\nfactors that may influence these phenomena, such as artifacts in image\ngeneration (e.g., blurry faces) or pre-existing biases in the original\ndatasets.", "keywords": ["Vision applications for social good and ethics", "Deep learning architectures and techniques"], "authors_list": ["Tianwei Chen", "Yusuke Hirota", "Mayu Otani", "Noa Garcia", "Yuta Nakashima"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3c4"}, "filepath": "data/2312.04410.png", "tags": [], "_media_type": "image", "_rand": 0.9999861876961402, "arXiv_link": "https://arxiv.org/abs/2312.04410", "other_link": "https://github.com/SHI-Labs/Smooth-Diffusion.", "title": "Smooth Diffusion: Crafting Smooth Latent Spaces in Diffusion Models", "abstract": "Recently, diffusion models have made remarkable progress in text-to-image\n(T2I) generation, synthesizing images with high fidelity and diverse contents.\nDespite this advancement, latent space smoothness within diffusion models\nremains largely unexplored. Smooth latent spaces ensure that a perturbation on\nan input latent corresponds to a steady change in the output image. This\nproperty proves beneficial in downstream tasks, including image interpolation,\ninversion, and editing. In this work, we expose the non-smoothness of diffusion\nlatent spaces by observing noticeable visual fluctuations resulting from minor\nlatent variations. To tackle this issue, we propose Smooth Diffusion, a new\ncategory of diffusion models that can be simultaneously high-performing and\nsmooth. Specifically, we introduce Step-wise Variation Regularization to\nenforce the proportion between the variations of an arbitrary input latent and\nthat of the output image is a constant at any diffusion training step. In\naddition, we devise an interpolation standard deviation (ISTD) metric to\neffectively assess the latent space smoothness of a diffusion model. Extensive\nquantitative and qualitative experiments demonstrate that Smooth Diffusion\nstands out as a more desirable solution not only in T2I generation but also\nacross various downstream tasks. Smooth Diffusion is implemented as a\nplug-and-play Smooth-LoRA to work with various community models. Code is\navailable at https://github.com/SHI-Labs/Smooth-Diffusion.", "keywords": [], "authors_list": ["Jiayi Guo", "Xingqian Xu", "Yifan Pu", "Zanlin Ni", "Chaofei Wang", "Manushree Vasu", "Shiji Song", "Gao Huang", "Humphrey Shi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3c5"}, "filepath": "data/2312.04076.png", "tags": [], "_media_type": "image", "_rand": 0.999106636302485, "arXiv_link": "https://arxiv.org/abs/2312.04076", "other_link": "https://github.com/zhaohengz/LLaMP.", "title": "Large Language Models are Good Prompt Learners for Low-Shot Image Classification", "abstract": "Low-shot image classification, where training images are limited or\ninaccessible, has benefited from recent progress on pre-trained vision-language\n(VL) models with strong generalizability, e.g. CLIP. Prompt learning methods\nbuilt with VL models generate text features from the class names that only have\nconfined class-specific information. 
Large Language Models (LLMs), with their\nvast encyclopedic knowledge, emerge as the complement. Thus, in this paper, we\ndiscuss the integration of LLMs to enhance pre-trained VL models, specifically\non low-shot classification. However, the domain gap between language and vision\nblocks the direct application of LLMs. Thus, we propose LLaMP, Large Language\nModels as Prompt learners, that produces adaptive prompts for the CLIP text\nencoder, establishing it as the connecting bridge. Experiments show that,\ncompared with other state-of-the-art prompt learning methods, LLaMP yields\nbetter performance on both zero-shot generalization and few-shot image\nclassification, over a spectrum of 11 datasets. Code will be made available at:\nhttps://github.com/zhaohengz/LLaMP.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Zhaoheng Zheng", "Jingmin Wei", "Xuefeng Hu", "Haidong Zhu", "Ram Nevatia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3c6"}, "filepath": "data/2403.11222.png", "tags": [], "_media_type": "image", "_rand": 0.9994194979035382, "arXiv_link": "https://arxiv.org/abs/2403.11222", "other_link": "https://github.com/BIT-Vision/SpikeNeRF.", "title": "SpikeNeRF: Learning Neural Radiance Fields from Continuous Spike Stream", "abstract": "Spike cameras, leveraging spike-based integration sampling and high temporal\nresolution, offer distinct advantages over standard cameras. However, existing\napproaches reliant on spike cameras often assume optimal illumination, a\ncondition frequently unmet in real-world scenarios. To address this, we\nintroduce SpikeNeRF, the first work that derives a NeRF-based volumetric scene\nrepresentation from spike camera data. Our approach leverages NeRF's multi-view\nconsistency to establish robust self-supervision, effectively eliminating\nerroneous measurements and uncovering coherent structures within exceedingly\nnoisy input amidst diverse real-world illumination scenarios. The framework\ncomprises two core elements: a spike generation model incorporating an\nintegrate-and-fire neuron layer and parameters accounting for non-idealities,\nsuch as threshold variation, and a spike rendering loss capable of generalizing\nacross varying illumination conditions. We describe how to effectively optimize\nneural radiance fields to render photorealistic novel views from the novel\ncontinuous spike stream, demonstrating advantages over other vision sensors in\ncertain scenes. Empirical evaluations conducted on both real and novel\nrealistically simulated sequences affirm the efficacy of our methodology. 
The\ndataset and source code are released at\nhttps://github.com/BIT-Vision/SpikeNeRF.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Lin Zhu", "Kangmin Jia", "Yifan Zhao", "Yunshan Qi", "Lizhi Wang", "Hua Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3c7"}, "filepath": "data/2404.14542.png", "tags": [], "_media_type": "image", "_rand": 0.9990481533675731, "arXiv_link": "https://arxiv.org/abs/2404.14542", "other_link": "", "title": "UVEB: A Large-scale Benchmark and Baseline Towards Real-World Underwater Video Enhancement", "abstract": "Learning-based underwater image enhancement (UIE) methods have made great\nprogress. However, the lack of large-scale and high-quality paired training\nsamples has become the main bottleneck hindering the development of UIE. The\ninter-frame information in underwater videos can accelerate or optimize the UIE\nprocess. Thus, we constructed the first large-scale high-resolution underwater\nvideo enhancement benchmark (UVEB) to promote the development of underwater\nvision.It contains 1,308 pairs of video sequences and more than 453,000\nhigh-resolution with 38\\% Ultra-High-Definition (UHD) 4K frame pairs. UVEB\ncomes from multiple countries, containing various scenes and video degradation\ntypes to adapt to diverse and complex underwater environments. We also propose\nthe first supervised underwater video enhancement method, UVE-Net. UVE-Net\nconverts the current frame information into convolutional kernels and passes\nthem to adjacent frames for efficient inter-frame information exchange. By\nfully utilizing the redundant degraded information of underwater videos,\nUVE-Net completes video enhancement better. Experiments show the effective\nnetwork design and good performance of UVE-Net.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["yaofeng xie", "Lingwei Kong", "Kai Chen", "Zheng Ziqiang", "Xiao Yu", "Zhibin Yu", "Bing Zheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3c8"}, "filepath": "data/2404.15815.png", "tags": [], "_media_type": "image", "_rand": 0.9990321803940315, "arXiv_link": "https://arxiv.org/abs/2404.15815", "other_link": "https://github.com/iSEE-Laboratory/S2HGrasp.", "title": "Single-View Scene Point Cloud Human Grasp Generation", "abstract": "In this work, we explore a novel task of generating human grasps based on\nsingle-view scene point clouds, which more accurately mirrors the typical\nreal-world situation of observing objects from a single viewpoint. Due to the\nincompleteness of object point clouds and the presence of numerous scene\npoints, the generated hand is prone to penetrating into the invisible parts of\nthe object and the model is easily affected by scene points. Thus, we introduce\nS2HGrasp, a framework composed of two key modules: the Global Perception module\nthat globally perceives partial object point clouds, and the DiffuGrasp module\ndesigned to generate high-quality human grasps based on complex inputs that\ninclude scene points. 
Additionally, we introduce S2HGD dataset, which comprises\napproximately 99,000 single-object single-view scene point clouds of 1,668\nunique objects, each annotated with one human grasp. Our extensive experiments\ndemonstrate that S2HGrasp can not only generate natural human grasps regardless\nof scene points, but also effectively prevent penetration between the hand and\ninvisible parts of the object. Moreover, our model showcases strong\ngeneralization capability when applied to unseen objects. Our code and dataset\nare available at https://github.com/iSEE-Laboratory/S2HGrasp.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Yan-Kang Wang", "Chengyi Xing", "Yi-Lin Wei", "Xiao-Ming Wu", "Wei-Shi Zheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3c9"}, "filepath": "data/2312.07472.png", "tags": [], "_media_type": "image", "_rand": 0.9998055266168813, "arXiv_link": "https://arxiv.org/abs/2312.07472", "other_link": "", "title": "MP5: A Multi-modal Open-ended Embodied System in Minecraft via Active Perception", "abstract": "It is a long-lasting goal to design an embodied system that can solve\nlong-horizon open-world tasks in human-like ways. However, existing approaches\nusually struggle with compound difficulties caused by the logic-aware\ndecomposition and context-aware execution of these tasks. To this end, we\nintroduce MP5, an open-ended multimodal embodied system built upon the\nchallenging Minecraft simulator, which can decompose feasible sub-objectives,\ndesign sophisticated situation-aware plans, and perform embodied action\ncontrol, with frequent communication with a goal-conditioned active perception\nscheme. Specifically, MP5 is developed on top of recent advances in Multimodal\nLarge Language Models (MLLMs), and the system is modulated into functional\nmodules that can be scheduled and collaborated to ultimately solve pre-defined\ncontext- and process-dependent tasks. Extensive experiments prove that MP5 can\nachieve a 22% success rate on difficult process-dependent tasks and a 91%\nsuccess rate on tasks that heavily depend on the context. Moreover, MP5\nexhibits a remarkable ability to address many open-ended tasks that are\nentirely novel.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Yiran Qin", "Enshen Zhou", "Qichang Liu", "Zhenfei Yin", "Lu Sheng", "Ruimao Zhang", "Yu Qiao", "Jing Shao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3ca"}, "filepath": "data/2404.03398.png", "tags": [], "_media_type": "image", "_rand": 0.9998591755917476, "arXiv_link": "https://arxiv.org/abs/2404.03398", "other_link": "", "title": "Scaling Up Video Summarization Pretraining with Large Language Models", "abstract": "Long-form video content constitutes a significant portion of internet\ntraffic, making automated video summarization an essential research problem.\nHowever, existing video summarization datasets are notably limited in their\nsize, constraining the effectiveness of state-of-the-art methods for\ngeneralization. 
Our work aims to overcome this limitation by capitalizing on\nthe abundance of long-form videos with dense speech-to-video alignment and the\nremarkable capabilities of recent large language models (LLMs) in summarizing\nlong text. We introduce an automated and scalable pipeline for generating a\nlarge-scale video summarization dataset using LLMs as Oracle summarizers. By\nleveraging the generated dataset, we analyze the limitations of existing\napproaches and propose a new video summarization model that effectively\naddresses them. To facilitate further research in the field, our work also\npresents a new benchmark dataset that contains 1200 long videos each with\nhigh-quality summaries annotated by professionals. Extensive experiments\nclearly indicate that our proposed approach sets a new state-of-the-art in\nvideo summarization across several benchmarks.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Dawit Argaw Argaw", "Seunghyun Yoon", "Fabian Caba Heilbron", "Hanieh Deilamsalehy", "Trung Bui", "Zhaowen Wang", "Franck Dernoncourt", "Joon Chung"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3cb"}, "filepath": "data/2402.17417.png", "tags": [], "_media_type": "image", "_rand": 0.9991417981278896, "arXiv_link": "https://arxiv.org/abs/2402.17417", "other_link": "https://github.com/laihaoran/CARZero.", "title": "CARZero: Cross-Attention Alignment for Radiology Zero-Shot Classification", "abstract": "The advancement of Zero-Shot Learning in the medical domain has been driven\nforward by using pre-trained models on large-scale image-text pairs, focusing\non image-text alignment. However, existing methods primarily rely on cosine\nsimilarity for alignment, which may not fully capture the complex relationship\nbetween medical images and reports. To address this gap, we introduce a novel\napproach called Cross-Attention Alignment for Radiology Zero-Shot\nClassification (CARZero). Our approach innovatively leverages cross-attention\nmechanisms to process image and report features, creating a Similarity\nRepresentation that more accurately reflects the intricate relationships in\nmedical semantics. This representation is then linearly projected to form an\nimage-text similarity matrix for cross-modality alignment. Additionally,\nrecognizing the pivotal role of prompt selection in zero-shot learning, CARZero\nincorporates a Large Language Model-based prompt alignment strategy. This\nstrategy standardizes diverse diagnostic expressions into a unified format for\nboth training and inference phases, overcoming the challenges of manual prompt\ndesign. Our approach is simple yet effective, demonstrating state-of-the-art\nperformance in zero-shot classification on five official chest radiograph\ndiagnostic test sets, including remarkable results on datasets with long-tail\ndistributions of rare diseases. This achievement is attributed to our new\nimage-text alignment strategy, which effectively addresses the complex\nrelationship between medical images and reports. 
Code and models are available\nat https://github.com/laihaoran/CARZero.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Haoran Lai", "Qingsong Yao", "Zihang Jiang", "Rongsheng Wang", "Zhiyang He", "Xiaodong Tao", "S Kevin Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3cc"}, "filepath": "data/2309.13596v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994095166732943, "arXiv_link": "https://arxiv.org/html/2309.13596v2", "other_link": "", "title": "LiDAR-Net: A Real-scanned 3D Point Cloud Dataset for Indoor Scenes", "abstract": "Advanced Driver-Assistance Systems (ADAS) have successfully integrated\nlearning-based techniques into vehicle perception and decision-making. However,\ntheir application in 3D lane detection for effective driving environment\nperception is hindered by the lack of comprehensive LiDAR datasets. The sparse\nnature of LiDAR point cloud data prevents an efficient manual annotation\nprocess. To solve this problem, we present LiSV-3DLane, a large-scale 3D lane\ndataset that comprises 20k frames of surround-view LiDAR point clouds with\nenriched semantic annotation. Unlike existing datasets confined to a frontal\nperspective, LiSV-3DLane provides a full 360-degree spatial panorama around the\nego vehicle, capturing complex lane patterns in both urban and highway\nenvironments. We leverage the geometric traits of lane lines and the intrinsic\nspatial attributes of LiDAR data to design a simple yet effective automatic\nannotation pipeline for generating finer lane labels. To propel future\nresearch, we propose a novel LiDAR-based 3D lane detection model, LiLaDet,\nincorporating the spatial geometry learning of the LiDAR point cloud into\nBird's Eye View (BEV) based lane identification. Experimental results indicate\nthat LiLaDet outperforms existing camera- and LiDAR-based approaches in the 3D\nlane detection task on the K-Lane dataset and our LiSV-3DLane.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Yanwen Guo", "Yuanqi Li", "Dayong Ren", "Xiaohong Zhang", "Jiawei Li", "Liang Pu", "Changfeng Ma", "xiaoyu zhan", "Jie Guo", "Mingqiang Wei", "Yan Zhang", "Piaopiao Yu", "Shuangyu Yang", "Donghao Ji", "Huisheng Ye", "Hao Sun", "Yansong Liu", "Yinuo Chen", "Jiaqi Zhu", "Hongyu Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3cd"}, "filepath": "data/2312.10461.png", "tags": [], "_media_type": "image", "_rand": 0.9996043086003723, "arXiv_link": "https://arxiv.org/abs/2312.10461", "other_link": "https://github.com/chuangchuangtan/NPR-DeepfakeDetection.", "title": "Rethinking the Up-Sampling Operations in CNN-based Generative Network for Generalizable Deepfake Detection", "abstract": "Recently, the proliferation of highly realistic synthetic images, facilitated\nthrough a variety of GANs and Diffusions, has significantly heightened the\nsusceptibility to misuse. While the primary focus of deepfake detection has\ntraditionally centered on the design of detection algorithms, an investigative\ninquiry into the generator architectures has remained conspicuously absent in\nrecent years. 
This paper contributes to this lacuna by rethinking the\narchitectures of CNN-based generators, thereby establishing a generalized\nrepresentation of synthetic artifacts. Our findings illuminate that the\nup-sampling operator can, beyond frequency-based artifacts, produce generalized\nforgery artifacts. In particular, the local interdependence among image pixels\ncaused by upsampling operators is significantly demonstrated in synthetic\nimages generated by GAN or diffusion. Building upon this observation, we\nintroduce the concept of Neighboring Pixel Relationships (NPR) as a means to\ncapture and characterize the generalized structural artifacts stemming from\nup-sampling operations. A comprehensive analysis is conducted on an open-world\ndataset, comprising samples generated by 28 distinct generative models.\nThis analysis culminates in the establishment of a novel state-of-the-art\nperformance, showcasing a remarkable 11.6\\% improvement over existing\nmethods. The code is available at\nhttps://github.com/chuangchuangtan/NPR-DeepfakeDetection.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Chuangchuang Tan", "Huan Liu", "Yao Zhao", "Shikui Wei", "Guanghua Gu", "Ping Liu", "Yunchao Wei"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3ce"}, "filepath": "data/2402.17323.png", "tags": [], "_media_type": "image", "_rand": 0.9993747372897537, "arXiv_link": "https://arxiv.org/abs/2402.17323", "other_link": "", "title": "SDDGR: Stable Diffusion-based Deep Generative Replay for Class Incremental Object Detection", "abstract": "In the field of class incremental learning (CIL), generative replay has\nbecome increasingly prominent as a method to mitigate the catastrophic\nforgetting, alongside the continuous improvements in generative models.\nHowever, its application in class incremental object detection (CIOD) has been\nsignificantly limited, primarily due to the complexities of scenes involving\nmultiple labels. In this paper, we propose a novel approach called stable\ndiffusion deep generative replay (SDDGR) for CIOD. Our method utilizes a\ndiffusion-based generative model with pre-trained text-to-diffusion networks to\ngenerate realistic and diverse synthetic images. SDDGR incorporates an\niterative refinement strategy to produce high-quality images encompassing old\nclasses. Additionally, we adopt an L2 knowledge distillation technique to\nimprove the retention of prior knowledge in synthetic images. Furthermore, our\napproach includes pseudo-labeling for old objects within new task images,\npreventing misclassification as background elements. Extensive experiments on\nthe COCO 2017 dataset demonstrate that SDDGR significantly outperforms existing\nalgorithms, achieving a new state-of-the-art in various CIOD scenarios. 
The\nsource code will be made available to the public.", "keywords": [], "authors_list": ["JUNSU KIM", "Hoseong Cho", "Jihyeon Kim", "Yihalem Tiruneh", "Seungryul Baek"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3cf"}, "filepath": "data/2404.11062.png", "tags": [], "_media_type": "image", "_rand": 0.9990450482876121, "arXiv_link": "https://arxiv.org/abs/2404.11062", "other_link": "", "title": "Mean-Shift Feature Transformer", "abstract": "We report on a reduced time variation of a time scale with respect to\nCoordinated Universal Time (UTC) by steering a hydrogen-maser-based time scale\nwith a near-continuously operating optical lattice clock. The time scale is\ngenerated in a post-processing analysis for 230 days with a hydrogen maser with\nits fractional frequency stability limited by a flicker floor of\n$2\\times10^{-15}$ and an Yb optical lattice clock operated with an uptime of\n81.6 $\\%$. During the 230-day period, the root mean square time variation of\nour time scale with respect to UTC is 0.52 ns, which is a better performance\ncompared with those of time scales steered by microwave fountain clocks that\nexhibit root mean square variations from 0.99 ns to 1.6 ns. With the high\nuptime achieved by the Yb optical lattice clock, our simulation implies the\npotential of generating a state-of-the-art time scale with a time variation of\n$<0.1$ ns over a month using a better hydrogen maser reaching the mid\n$10^{-16}$ level. This work demonstrates that a use of an optical clock with a\nhigh uptime enhances the stability of a time scale.", "keywords": [], "authors_list": ["Takumi Kobayashi"], "category_name": "", "all_categories": ["Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3d0"}, "filepath": "data/2402.18490.png", "tags": [], "_media_type": "image", "_rand": 0.9999724306664555, "arXiv_link": "https://arxiv.org/abs/2402.18490", "other_link": "https://alanzhangcs.github.io/tamm-page.", "title": "TAMM: TriAdapter Multi-Modal Learning for 3D Shape Understanding", "abstract": "The limited scale of current 3D shape datasets hinders the advancements in 3D\nshape understanding, and motivates multi-modal learning approaches which\ntransfer learned knowledge from data-abundant 2D image and language modalities\nto 3D shapes. However, even though the image and language representations have\nbeen aligned by cross-modal models like CLIP, we find that the image modality\nfails to contribute as much as the language in existing multi-modal 3D\nrepresentation learning methods. This is attributed to the domain shift in the\n2D images and the distinct focus of each modality. To more effectively leverage\nboth modalities in the pre-training, we introduce TriAdapter Multi-Modal\nLearning (TAMM) -- a novel two-stage learning approach based on three\nsynergistic adapters. First, our CLIP Image Adapter mitigates the domain gap\nbetween 3D-rendered images and natural images, by adapting the visual\nrepresentations of CLIP for synthetic image-text pairs. Subsequently, our Dual\nAdapters decouple the 3D shape representation space into two complementary\nsub-spaces: one focusing on visual attributes and the other for semantic\nunderstanding, which ensure a more comprehensive and effective multi-modal\npre-training. 
Extensive experiments demonstrate that TAMM consistently enhances\n3D representations for a wide range of 3D encoder architectures, pre-training\ndatasets, and downstream tasks. Notably, we boost the zero-shot classification\naccuracy on Objaverse-LVIS from 46.8\\% to 50.7\\%, and improve the 5-way 10-shot\nlinear probing classification accuracy on ModelNet40 from 96.1\\% to 99.0\\%.\nProject page: https://alanzhangcs.github.io/tamm-page.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models"], "authors_list": ["Zhihao Zhang", "Shengcao Cao", "Yu-Xiong Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3d1"}, "filepath": "data/2306.13631.png", "tags": [], "_media_type": "image", "_rand": 0.9996114849008936, "arXiv_link": "https://arxiv.org/abs/2306.13631", "other_link": "", "title": "Open-Vocabulary 3D Semantic Segmentation with Foundation Models", "abstract": "We introduce the task of open-vocabulary 3D instance segmentation. Current\napproaches for 3D instance segmentation can typically only recognize object\ncategories from a pre-defined closed set of classes that are annotated in the\ntraining datasets. This results in important limitations for real-world\napplications where one might need to perform tasks guided by novel,\nopen-vocabulary queries related to a wide variety of objects. Recently,\nopen-vocabulary 3D scene understanding methods have emerged to address this\nproblem by learning queryable features for each point in the scene. While such\na representation can be directly employed to perform semantic segmentation,\nexisting methods cannot separate multiple object instances. In this work, we\naddress this limitation, and propose OpenMask3D, which is a zero-shot approach\nfor open-vocabulary 3D instance segmentation. Guided by predicted\nclass-agnostic 3D instance masks, our model aggregates per-mask features via\nmulti-view fusion of CLIP-based image embeddings. Experiments and ablation\nstudies on ScanNet200 and Replica show that OpenMask3D outperforms other\nopen-vocabulary methods, especially on the long-tail distribution. Qualitative\nexperiments further showcase OpenMask3D's ability to segment object properties\nbased on free-form queries describing geometry, affordances, and materials.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Li Jiang", "Shaoshuai Shi", "Bernt Schiele"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3d2"}, "filepath": "data/2402.16407.png", "tags": [], "_media_type": "image", "_rand": 0.9991468080659405, "arXiv_link": "http://export.arxiv.org/abs/2402.16407", "other_link": "", "title": "Multiplane Prior Guided Few-Shot Aerial Scene Rendering", "abstract": "Neural Radiance Field (NeRF) has shown impressive results in novel view\nsynthesis, particularly in Virtual Reality (VR) and Augmented Reality (AR),\nthanks to its ability to represent scenes continuously. However, when just a\nfew input view images are available, NeRF tends to overfit the given views and\nthus make the estimated depths of pixels share almost the same value. 
Unlike\nprevious methods that conduct regularization by introducing complex priors or\nadditional supervisions, we propose a simple yet effective method that\nexplicitly builds depth-aware consistency across input views to tackle this\nchallenge. Our key insight is that by forcing the same spatial points to be\nsampled repeatedly in different input views, we are able to strengthen the\ninteractions between views and therefore alleviate the overfitting problem. To\nachieve this, we build the neural networks on layered representations\n(\\textit{i.e.}, multiplane images), and the sampling point can thus be\nresampled on multiple discrete planes. Furthermore, to regularize the unseen\ntarget views, we constrain the rendered colors and depths from different input\nviews to be the same. Although simple, extensive experiments demonstrate that\nour proposed method can achieve better synthesis quality over state-of-the-art\nmethods.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Zihan Gao", "Licheng Jiao", "Lingling Li", "Xu Liu", "Fang Liu", "Puhua Chen", "Yuwei Guo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3d3"}, "filepath": "data/2311.18828.png", "tags": [], "_media_type": "image", "_rand": 0.9991925086695996, "arXiv_link": "https://arxiv.org/abs/2311.18828", "other_link": "", "title": "One-step Diffusion with Distribution Matching Distillation", "abstract": "Diffusion models generate high-quality images but require dozens of forward\npasses. We introduce Distribution Matching Distillation (DMD), a procedure to\ntransform a diffusion model into a one-step image generator with minimal impact\non image quality. We enforce the one-step image generator match the diffusion\nmodel at distribution level, by minimizing an approximate KL divergence whose\ngradient can be expressed as the difference between 2 score functions, one of\nthe target distribution and the other of the synthetic distribution being\nproduced by our one-step generator. The score functions are parameterized as\ntwo diffusion models trained separately on each distribution. Combined with a\nsimple regression loss matching the large-scale structure of the multi-step\ndiffusion outputs, our method outperforms all published few-step diffusion\napproaches, reaching 2.62 FID on ImageNet 64x64 and 11.49 FID on zero-shot\nCOCO-30k, comparable to Stable Diffusion but orders of magnitude faster.\nUtilizing FP16 inference, our model generates images at 20 FPS on modern\nhardware.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Tianwei Yin", "Micha\u00ebl Gharbi", "Richard Zhang", "Eli Shechtman", "Fredo Durand", "William Freeman", "Taesung Park"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3d4"}, "filepath": "data/2403.17801.png", "tags": [], "_media_type": "image", "_rand": 0.9995016217525403, "arXiv_link": "https://arxiv.org/abs/2403.17801", "other_link": "", "title": "Towards 3D Vision with Low-Cost Single-Photon Cameras", "abstract": "We present a method for reconstructing 3D shape of arbitrary Lambertian\nobjects based on measurements by miniature, energy-efficient, low-cost\nsingle-photon cameras. 
These cameras, operating as time resolved image sensors,\nilluminate the scene with a very fast pulse of diffuse light and record the\nshape of that pulse as it returns back from the scene at a high temporal\nresolution. We propose to model this image formation process, account for its\nnon-idealities, and adapt neural rendering to reconstruct 3D geometry from a\nset of spatially distributed sensors with known poses. We show that our\napproach can successfully recover complex 3D shapes from simulated data. We\nfurther demonstrate 3D object reconstruction from real-world captures,\nutilizing measurements from a commodity proximity sensor. Our work draws a\nconnection between image-based modeling and active range scanning and is a step\ntowards 3D vision with single-photon cameras.", "keywords": ["Computational imaging and physics-based vision"], "authors_list": ["Fangzhou Mu", "Carter Sifferman", "Sacha Jungerman", "Yiquan Li", "Zhiyue Han", "Michael Gleicher", "Mohit Gupta", "Yin Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3d5"}, "filepath": "data/2405.06903.png", "tags": [], "_media_type": "image", "_rand": 0.9993390366735815, "arXiv_link": "https://arxiv.org/abs/2405.06903", "other_link": "https://warshallrho.github.io/unigarmentmanip.", "title": "UniGarmentManip: A Unified Framework for Category-Level Garment Manipulation via Dense Visual Correspondence", "abstract": "Garment manipulation (e.g., unfolding, folding and hanging clothes) is\nessential for future robots to accomplish home-assistant tasks, while highly\nchallenging due to the diversity of garment configurations, geometries and\ndeformations. Although able to manipulate similar shaped garments in a certain\ntask, previous works mostly have to design different policies for different\ntasks, could not generalize to garments with diverse geometries, and often rely\nheavily on human-annotated data. In this paper, we leverage the property that,\ngarments in a certain category have similar structures, and then learn the\ntopological dense (point-level) visual correspondence among garments in the\ncategory level with different deformations in the self-supervised manner. The\ntopological correspondence can be easily adapted to the functional\ncorrespondence to guide the manipulation policies for various downstream tasks,\nwithin only one or few-shot demonstrations. Experiments over garments in 3\ndifferent categories on 3 representative tasks in diverse scenarios, using one\nor two arms, taking one or more steps, inputting flat or messy garments,\ndemonstrate the effectiveness of our proposed method. 
Project page:\nhttps://warshallrho.github.io/unigarmentmanip.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Ruihai Wu", "Haoran Lu", "Yiyan Wang", "Yubo Wang", "Hao Dong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3d6"}, "filepath": "data/2312.08606.png", "tags": [], "_media_type": "image", "_rand": 0.9992099318769305, "arXiv_link": "https://arxiv.org/abs/2312.08606", "other_link": "https://github.com/AlexZou14/VQCNIR.", "title": "Learning Diffusion Texture Priors for Image Restoration", "abstract": "Night photography often struggles with challenges like low light and\nblurring, stemming from dark environments and prolonged exposures. Current\nmethods either disregard priors and directly fitting end-to-end networks,\nleading to inconsistent illumination, or rely on unreliable handcrafted priors\nto constrain the network, thereby bringing the greater error to the final\nresult. We believe in the strength of data-driven high-quality priors and\nstrive to offer a reliable and consistent prior, circumventing the restrictions\nof manual priors. In this paper, we propose Clearer Night Image Restoration\nwith Vector-Quantized Codebook (VQCNIR) to achieve remarkable and consistent\nrestoration outcomes on real-world and synthetic benchmarks. To ensure the\nfaithful restoration of details and illumination, we propose the incorporation\nof two essential modules: the Adaptive Illumination Enhancement Module (AIEM)\nand the Deformable Bi-directional Cross-Attention (DBCA) module. The AIEM\nleverages the inter-channel correlation of features to dynamically maintain\nillumination consistency between degraded features and high-quality codebook\nfeatures. Meanwhile, the DBCA module effectively integrates texture and\nstructural information through bi-directional cross-attention and deformable\nconvolution, resulting in enhanced fine-grained detail and structural fidelity\nacross parallel decoders. Extensive experiments validate the remarkable\nbenefits of VQCNIR in enhancing image quality under low-light conditions,\nshowcasing its state-of-the-art performance on both synthetic and real-world\ndatasets. The code is available at https://github.com/AlexZou14/VQCNIR.", "keywords": ["Low-level vision", "Image and video generation and manipulation"], "authors_list": ["Tian Ye", "Sixiang Chen", "Wenhao Chai", "Zhaohu Xing", "Jing Qin", "Ge lin", "Lei Zhu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3d7"}, "filepath": "data/2404.07178.png", "tags": [], "_media_type": "image", "_rand": 0.999231043890268, "arXiv_link": "https://arxiv.org/abs/2404.07178", "other_link": "", "title": "Move Anything with Layered Scene Diffusion", "abstract": "Diffusion models generate images with an unprecedented level of quality, but\nhow can we freely rearrange image layouts? Recent works generate controllable\nscenes via learning spatially disentangled latent codes, but these methods do\nnot apply to diffusion models due to their fixed forward process. In this work,\nwe propose SceneDiffusion to optimize a layered scene representation during the\ndiffusion sampling process. 
Our key insight is that spatial disentanglement can\nbe obtained by jointly denoising scene renderings at different spatial layouts.\nOur generated scenes support a wide range of spatial editing operations,\nincluding moving, resizing, cloning, and layer-wise appearance editing\noperations, including object restyling and replacing. Moreover, a scene can be\ngenerated conditioned on a reference image, thus enabling object moving for\nin-the-wild images. Notably, this approach is training-free, compatible with\ngeneral text-to-image diffusion models, and responsive in less than a second.", "keywords": ["Image and video generation and manipulation", "Scene analysis and understanding"], "authors_list": ["Jiawei Ren", "Mengmeng Xu", "Jui-Chieh Wu", "Ziwei Liu", "Tao Xiang", "Antoine Toisoul"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3d8"}, "filepath": "data/2312.01017.png", "tags": [], "_media_type": "image", "_rand": 0.9996402155860818, "arXiv_link": "https://arxiv.org/abs/2312.01017", "other_link": "", "title": "Unveiling the Power of Audio-Visual Early Fusion Transformers with Dense Interactions through Masked Modeling", "abstract": "Humans possess a remarkable ability to integrate auditory and visual\ninformation, enabling a deeper understanding of the surrounding environment.\nThis early fusion of audio and visual cues, demonstrated through cognitive\npsychology and neuroscience research, offers promising potential for developing\nmultimodal perception models. However, training early fusion architectures\nposes significant challenges, as the increased model expressivity requires\nrobust learning frameworks to harness their enhanced capabilities. In this\npaper, we address this challenge by leveraging the masked reconstruction\nframework, previously successful in unimodal settings, to train audio-visual\nencoders with early fusion. Additionally, we propose an attention-based fusion\nmodule that captures interactions between local audio and visual\nrepresentations, enhancing the model's ability to capture fine-grained\ninteractions. While effective, this procedure can become computationally\nintractable, as the number of local representations increases. Thus, to address\nthe computational complexity, we propose an alternative procedure that\nfactorizes the local representations before representing audio-visual\ninteractions. Extensive evaluations on a variety of datasets demonstrate the\nsuperiority of our approach in audio-event classification, visual sound\nlocalization, sound separation, and audio-visual segmentation. 
These\ncontributions enable the efficient training of deeply integrated audio-visual\nmodels and significantly advance the usefulness of early fusion architectures.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Shentong Mo", "Pedro Morgado"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning", "Multimedia", "Sound"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3d9"}, "filepath": "data/2312.02232.png", "tags": [], "_media_type": "image", "_rand": 0.9995112422348298, "arXiv_link": "https://arxiv.org/abs/2312.02232", "other_link": "", "title": "HumanNeRF-SE: A Simple yet Effective Approach to Animate HumanNeRF with Diverse Poses", "abstract": "We present HumanNeRF-SE, a simple yet effective method that synthesizes\ndiverse novel pose images with simple input. Previous HumanNeRF works require a\nlarge number of optimizable parameters to fit the human images. Instead, we\nreload these approaches by combining explicit and implicit human\nrepresentations to design both generalized rigid deformation and specific\nnon-rigid deformation. Our key insight is that explicit shape can reduce the\nsampling points used to fit implicit representation, and frozen blending\nweights from SMPL constructing a generalized rigid deformation can effectively\navoid overfitting and improve pose generalization performance. Our architecture\ninvolving both explicit and implicit representation is simple yet effective.\nExperiments demonstrate our model can synthesize images under arbitrary poses\nwith few-shot input and increase the speed of synthesizing images by 15 times\nthrough a reduction in computational complexity without using any existing\nacceleration modules. Compared to the state-of-the-art HumanNeRF studies,\nHumanNeRF-SE achieves better performance with fewer learnable parameters and\nless training time.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Caoyuan Ma", "Yu-Lun Liu", "Zhixiang Wang", "Wu Liu", "Xinchen Liu", "Zheng Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3da"}, "filepath": "data/2401.12979.png", "tags": [], "_media_type": "image", "_rand": 0.9992310382566948, "arXiv_link": "https://arxiv.org/abs/2401.12979", "other_link": "", "title": "GALA: Generating Animatable Layered Assets from a Single Scan", "abstract": "We present GALA, a framework that takes as input a single-layer clothed 3D\nhuman mesh and decomposes it into complete multi-layered 3D assets. The outputs\ncan then be combined with other assets to create novel clothed human avatars\nwith any pose. Existing reconstruction approaches often treat clothed humans as\na single-layer of geometry and overlook the inherent compositionality of humans\nwith hairstyles, clothing, and accessories, thereby limiting the utility of the\nmeshes for downstream applications. Decomposing a single-layer mesh into\nseparate layers is a challenging task because it requires the synthesis of\nplausible geometry and texture for the severely occluded regions. 
Moreover,\neven with successful decomposition, meshes are not normalized in terms of poses\nand body shapes, failing coherent composition with novel identities and poses.\nTo address these challenges, we propose to leverage the general knowledge of a\npretrained 2D diffusion model as geometry and appearance prior for humans and\nother assets. We first separate the input mesh using the 3D surface\nsegmentation extracted from multi-view 2D segmentations. Then we synthesize the\nmissing geometry of different layers in both posed and canonical spaces using a\nnovel pose-guided Score Distillation Sampling (SDS) loss. Once we complete\ninpainting high-fidelity 3D geometry, we also apply the same SDS loss to its\ntexture to obtain the complete appearance including the initially occluded\nregions. Through a series of decomposition steps, we obtain multiple layers of\n3D assets in a shared canonical space normalized in terms of poses and human\nshapes, hence supporting effortless composition to novel identities and\nreanimation with novel poses. Our experiments demonstrate the effectiveness of\nour approach for decomposition, canonicalization, and composition tasks\ncompared to existing solutions.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Taeksoo Kim", "Byungjun Kim", "Shunsuke Saito", "Hanbyul Joo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3db"}, "filepath": "data/2401.01862.png", "tags": [], "_media_type": "image", "_rand": 0.9999676338438268, "arXiv_link": "https://arxiv.org/abs/2401.01862", "other_link": "", "title": "A Vision Check-up for Language Models", "abstract": "What does learning to model relationships between strings teach large\nlanguage models (LLMs) about the visual world? We systematically evaluate LLMs'\nabilities to generate and recognize an assortment of visual concepts of\nincreasing complexity and then demonstrate how a preliminary visual\nrepresentation learning system can be trained using models of text. As language\nmodels lack the ability to consume or output visual information as pixels, we\nuse code to represent images in our study. Although LLM-generated images do not\nlook like natural images, results on image generation and the ability of models\nto correct these generated images indicate that precise modeling of strings can\nteach language models about numerous aspects of the visual world. 
Furthermore,\nexperiments on self-supervised visual representation learning, utilizing images\ngenerated with text models, highlight the potential to train vision models\ncapable of making semantic assessments of natural images using just LLMs.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Pratyusha Sharma", "Tamar Rott Shaham", "Manel Baradad", "Stephanie Fu", "Adrian Rodriguez-Munoz", "Shivam Duggal", "Phillip Isola", "Antonio Torralba"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3dc"}, "filepath": "data/2308.10299.png", "tags": [], "_media_type": "image", "_rand": 0.9993718404441186, "arXiv_link": "https://arxiv.org/abs/2308.10299", "other_link": "https://github.com/Trustworthy-AI-Group/BSR", "title": "Boosting Adversarial Transferability by Block Shuffle and Rotation", "abstract": "Adversarial examples mislead deep neural networks with imperceptible\nperturbations and have brought significant threats to deep learning. An\nimportant aspect is their transferability, which refers to their ability to\ndeceive other models, thus enabling attacks in the black-box setting. Though\nvarious methods have been proposed to boost transferability, the performance\nstill falls short compared with white-box attacks. In this work, we observe\nthat existing input transformation based attacks, one of the mainstream\ntransfer-based attacks, result in different attention heatmaps on various\nmodels, which might limit the transferability. We also find that breaking the\nintrinsic relation of the image can disrupt the attention heatmap of the\noriginal image. Based on this finding, we propose a novel input transformation\nbased attack called block shuffle and rotation (BSR). Specifically, BSR splits\nthe input image into several blocks, then randomly shuffles and rotates these\nblocks to construct a set of new images for gradient calculation. Empirical\nevaluations on the ImageNet dataset demonstrate that BSR could achieve\nsignificantly better transferability than the existing input transformation\nbased methods under single-model and ensemble-model settings. Combining BSR\nwith the current input transformation method can further improve the\ntransferability, which significantly outperforms the state-of-the-art methods.\nCode is available at https://github.com/Trustworthy-AI-Group/BSR", "keywords": [], "authors_list": ["Kunyu Wang", "he xuanran", "Wenxuan Wang", "Xiaosen Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3dd"}, "filepath": "data/2308.13712.png", "tags": [], "_media_type": "image", "_rand": 0.9992614841701685, "arXiv_link": "https://arxiv.org/abs/2308.13712", "other_link": "https://github.com/nachifur/RDDM).", "title": "Residual Learning in Diffusion Models", "abstract": "We propose residual denoising diffusion models (RDDM), a novel dual diffusion\nprocess that decouples the traditional single denoising diffusion process into\nresidual diffusion and noise diffusion. 
This dual diffusion framework expands\nthe denoising-based diffusion models, initially uninterpretable for image\nrestoration, into a unified and interpretable model for both image generation\nand restoration by introducing residuals. Specifically, our residual diffusion\nrepresents directional diffusion from the target image to the degraded input\nimage and explicitly guides the reverse generation process for image\nrestoration, while noise diffusion represents random perturbations in the\ndiffusion process. The residual prioritizes certainty, while the noise\nemphasizes diversity, enabling RDDM to effectively unify tasks with varying\ncertainty or diversity requirements, such as image generation and restoration.\nWe demonstrate that our sampling process is consistent with that of DDPM and\nDDIM through coefficient transformation, and propose a partially\npath-independent generation process to better understand the reverse process.\nNotably, our RDDM enables a generic UNet, trained with only an L1 loss and a\nbatch size of 1, to compete with state-of-the-art image restoration methods. We\nprovide code and pre-trained models to encourage further exploration,\napplication, and development of our innovative framework\n(https://github.com/nachifur/RDDM).", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Junyu Zhang", "Daochang Liu", "Eunbyung Park", "Shichao Zhang", "Chang Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3de"}, "filepath": "data/2312.08875.png", "tags": [], "_media_type": "image", "_rand": 0.9990775667508621, "arXiv_link": "https://arxiv.org/abs/2312.08875", "other_link": "", "title": "What, How, and When Should Object Detectors Update in Continually Changing Test Domains?", "abstract": "It is a well-known fact that the performance of deep learning models\ndeteriorates when they encounter a distribution shift at test time. Test-time\nadaptation (TTA) algorithms have been proposed to adapt the model online while\ninferring test data. However, existing research predominantly focuses on\nclassification tasks through the optimization of batch normalization layers or\nclassification heads, but this approach limits its applicability to various\nmodel architectures like Transformers and makes it challenging to apply to\nother tasks, such as object detection. In this paper, we propose a novel online\nadaption approach for object detection in continually changing test domains,\nconsidering which part of the model to update, how to update it, and when to\nperform the update. By introducing architecture-agnostic and lightweight\nadaptor modules and only updating these while leaving the pre-trained backbone\nunchanged, we can rapidly adapt to new test domains in an efficient way and\nprevent catastrophic forgetting. Furthermore, we present a practical and\nstraightforward class-wise feature aligning method for object detection to\nresolve domain shifts. Additionally, we enhance efficiency by determining when\nthe model is sufficiently adapted or when additional adaptation is needed due\nto changes in the test distribution. 
Our approach surpasses baselines on widely\nused benchmarks, achieving improvements of up to 4.9\\%p and 7.9\\%p in mAP for\nCOCO $\\rightarrow$ COCO-corrupted and SHIFT, respectively, while maintaining\nabout 20 FPS or higher.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Jayeon Yoo", "Dongkwan Lee", "Inseop Chung", "Donghyun Kim", "Nojun Kwak"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3df"}, "filepath": "data/2403.09101.png", "tags": [], "_media_type": "image", "_rand": 0.9997374718673862, "arXiv_link": "https://arxiv.org/abs/2403.09101", "other_link": "", "title": "Soften to Defend: Towards Adversarial Robustness via Self-Guided Label Refinement", "abstract": "Adversarial training (AT) is currently one of the most effective ways to\nobtain the robustness of deep neural networks against adversarial attacks.\nHowever, most AT methods suffer from robust overfitting, i.e., a significant\ngeneralization gap in adversarial robustness between the training and testing\ncurves. In this paper, we first identify a connection between robust\noverfitting and the excessive memorization of noisy labels in AT from a view of\ngradient norm. As such label noise is mainly caused by a distribution mismatch\nand improper label assignments, we are motivated to propose a label refinement\napproach for AT. Specifically, our Self-Guided Label Refinement first\nself-refines a more accurate and informative label distribution from\nover-confident hard labels, and then it calibrates the training by dynamically\nincorporating knowledge from self-distilled models into the current model and\nthus requiring no external teachers. Empirical results demonstrate that our\nmethod can simultaneously boost the standard accuracy and robust performance\nacross multiple benchmark datasets, attack types, and architectures. In\naddition, we also provide a set of analyses from the perspectives of\ninformation theory to dive into our method and suggest the importance of soft\nlabels for robust generalization.", "keywords": [], "authors_list": ["Daiwei Yu", "Zhuorong Li", "Lina Wei", "Canghong Jin", "Yun Zhang", "Sixian Chan"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Cryptography and Security", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3e0"}, "filepath": "data/2405.09924.png", "tags": [], "_media_type": "image", "_rand": 0.9992156052079088, "arXiv_link": "https://arxiv.org/abs/2405.09924", "other_link": "", "title": "Infrared Adversarial Car Stickers", "abstract": "Infrared physical adversarial examples are of great significance for studying\nthe security of infrared AI systems that are widely used in our lives such as\nautonomous driving. Previous infrared physical attacks mainly focused on 2D\ninfrared pedestrian detection which may not fully manifest its destructiveness\nto AI systems. In this work, we propose a physical attack method against\ninfrared detectors based on 3D modeling, which is applied to a real car. The\ngoal is to design a set of infrared adversarial stickers to make cars invisible\nto infrared detectors at various viewing angles, distances, and scenes. 
We\nbuild a 3D infrared car model with real infrared characteristics and propose an\ninfrared adversarial pattern generation method based on 3D mesh shadow. We\npropose a 3D control points-based mesh smoothing algorithm and use a set of\nsmoothness loss functions to enhance the smoothness of adversarial meshes and\nfacilitate the sticker implementation. Besides, We designed the aluminum\nstickers and conducted physical experiments on two real Mercedes-Benz A200L\ncars. Our adversarial stickers hid the cars from Faster RCNN, an object\ndetector, at various viewing angles, distances, and scenes. The attack success\nrate (ASR) was 91.49% for real cars. In comparison, the ASRs of random stickers\nand no sticker were only 6.21% and 0.66%, respectively. In addition, the ASRs\nof the designed stickers against six unseen object detectors such as YOLOv3 and\nDeformable DETR were between 73.35%-95.80%, showing good transferability of the\nattack performance across detectors.", "keywords": [], "authors_list": ["Xiaopei Zhu", "Yuqiu Liu", "Zhanhao Hu", "Jianmin Li", "Xiaolin Hu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3e1"}, "filepath": "data/2404.04819.png", "tags": [], "_media_type": "image", "_rand": 0.9995003165983706, "arXiv_link": "https://arxiv.org/abs/2404.04819", "other_link": "https://github.com/dqj5182/CONTHO_RELEASE.", "title": "Joint Reconstruction of 3D Human and Object via Contact-Based Refinement Transformer", "abstract": "Human-object contact serves as a strong cue to understand how humans\nphysically interact with objects. Nevertheless, it is not widely explored to\nutilize human-object contact information for the joint reconstruction of 3D\nhuman and object from a single image. In this work, we present a novel joint 3D\nhuman-object reconstruction method (CONTHO) that effectively exploits contact\ninformation between humans and objects. There are two core designs in our\nsystem: 1) 3D-guided contact estimation and 2) contact-based 3D human and\nobject refinement. First, for accurate human-object contact estimation, CONTHO\ninitially reconstructs 3D humans and objects and utilizes them as explicit 3D\nguidance for contact estimation. Second, to refine the initial reconstructions\nof 3D human and object, we propose a novel contact-based refinement Transformer\nthat effectively aggregates human features and object features based on the\nestimated human-object contact. The proposed contact-based refinement prevents\nthe learning of erroneous correlation between human and object, which enables\naccurate 3D reconstruction. As a result, our CONTHO achieves state-of-the-art\nperformance in both human-object contact estimation and joint reconstruction of\n3D human and object. 
The code is publicly available at\nhttps://github.com/dqj5182/CONTHO_RELEASE.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Hyeongjin Nam", "Daniel Jung", "Gyeongsik Moon", "Kyoung Mu Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3e2"}, "filepath": "data/2403.18452v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991678674004663, "arXiv_link": "https://arxiv.org/abs/2403.18452v1", "other_link": "https://github.com/inhwanbae/SingularTrajectory", "title": "SingularTrajectory: Universal Trajectory Predictor Using Diffusion Model", "abstract": "There are five types of trajectory prediction tasks: deterministic,\nstochastic, domain adaptation, momentary observation, and few-shot. These\nassociated tasks are defined by various factors, such as the length of input\npaths, data split and pre-processing methods. Interestingly, even though they\ncommonly take sequential coordinates of observations as input and infer future\npaths in the same coordinates as output, designing specialized architectures\nfor each task is still necessary. For the other task, generality issues can\nlead to sub-optimal performances. In this paper, we propose SingularTrajectory,\na diffusion-based universal trajectory prediction framework to reduce the\nperformance gap across the five tasks. The core of SingularTrajectory is to\nunify a variety of human dynamics representations on the associated tasks. To\ndo this, we first build a Singular space to project all types of motion\npatterns from each task into one embedding space. We next propose an adaptive\nanchor working in the Singular space. Unlike traditional fixed anchor methods\nthat sometimes yield unacceptable paths, our adaptive anchor enables correct\nanchors, which are put into a wrong location, based on a traversability map.\nFinally, we adopt a diffusion-based predictor to further enhance the prototype\npaths using a cascaded denoising process. Our unified framework ensures the\ngenerality across various benchmark settings such as input modality, and\ntrajectory lengths. Extensive experiments on five public benchmarks demonstrate\nthat SingularTrajectory substantially outperforms existing models, highlighting\nits effectiveness in estimating general dynamics of human movements. Code is\npublicly available at https://github.com/inhwanbae/SingularTrajectory .", "keywords": [], "authors_list": ["Inhwan Bae", "Young-Jae Park", "Hae-Gon Jeon"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3e3"}, "filepath": "data/2402.18528.png", "tags": [], "_media_type": "image", "_rand": 0.999274799809101, "arXiv_link": "https://arxiv.org/abs/2402.18528", "other_link": "", "title": "Gradient Reweighting: Towards Imbalanced Class-Incremental Learning", "abstract": "Class-Incremental Learning (CIL) trains a model to continually recognize new\nclasses from non-stationary data while retaining learned knowledge. 
A major\nchallenge of CIL arises when applying to real-world data characterized by\nnon-uniform distribution, which introduces a dual imbalance problem involving\n(i) disparities between stored exemplars of old tasks and new class data\n(inter-phase imbalance), and (ii) severe class imbalances within each\nindividual task (intra-phase imbalance). We show that this dual imbalance issue\ncauses skewed gradient updates with biased weights in FC layers, thus inducing\nover/under-fitting and catastrophic forgetting in CIL. Our method addresses it\nby reweighting the gradients towards balanced optimization and unbiased\nclassifier learning. Additionally, we observe imbalanced forgetting where\nparadoxically the instance-rich classes suffer higher performance degradation\nduring CIL due to a larger amount of training data becoming unavailable in\nsubsequent learning phases. To tackle this, we further introduce a\ndistribution-aware knowledge distillation loss to mitigate forgetting by\naligning output logits proportionally with the distribution of lost training\ndata. We validate our method on CIFAR-100, ImageNetSubset, and Food101 across\nvarious evaluation protocols and demonstrate consistent improvements compared\nto existing works, showing great potential to apply CIL in real-world scenarios\nwith enhanced robustness and effectiveness.", "keywords": [], "authors_list": ["Jiangpeng He"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3e4"}, "filepath": "data/2306.16999.png", "tags": [], "_media_type": "image", "_rand": 0.9991196876139676, "arXiv_link": "https://arxiv.org/abs/2306.16999", "other_link": "", "title": "Batch Normalization Alleviates the Spectral Bias in Coordinate Networks", "abstract": "Regularization is a set of techniques that are used to improve the\ngeneralization ability of deep neural networks. In this paper, we introduce\nspectral batch normalization (SBN), a novel effective method to improve\ngeneralization by normalizing feature maps in the frequency (spectral) domain.\nThe activations of residual networks without batch normalization (BN) tend to\nexplode exponentially in the depth of the network at initialization. This leads\nto extremely large feature map norms even though the parameters are relatively\nsmall. These explosive dynamics can be very detrimental to learning. BN makes\nweight decay regularization on the scaling factors $\\gamma, \\beta$\napproximately equivalent to an additive penalty on the norm of the feature\nmaps, which prevents extremely large feature map norms to a certain degree.\nHowever, we show experimentally that, despite the approximate additive penalty\nof BN, feature maps in deep neural networks (DNNs) tend to explode at the\nbeginning of the network and that feature maps of DNNs contain large values\nduring the whole training. This phenomenon also occurs in a weakened form in\nnon-residual networks. SBN addresses large feature maps by normalizing them in\nthe frequency domain. In our experiments, we empirically show that SBN prevents\nexploding feature maps at initialization and large feature map values during\nthe training. Moreover, the normalization of feature maps in the frequency\ndomain leads to more uniform distributed frequency components. This discourages\nthe DNNs to rely on single frequency components of feature maps. 
These,\ntogether with other effects of SBN, have a regularizing effect on the training\nof residual and non-residual networks. We show experimentally that using SBN in\naddition to standard regularization methods improves the performance of DNNs by\na relevant margin, e.g. ResNet50 on ImageNet by 0.71%.", "keywords": [], "authors_list": ["Zhicheng Cai", "Hao Zhu", "Qiu Shen", "Xinran Wang", "Xun Cao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3e5"}, "filepath": "data/2401.06056.png", "tags": [], "_media_type": "image", "_rand": 0.9995534144871966, "arXiv_link": "https://arxiv.org/abs/2401.06056", "other_link": "https://www.gvecchio.com/matsynth.", "title": "MatSynth: A Modern PBR Materials Dataset", "abstract": "We introduce MatSynth, a dataset of 4,000+ CC0 ultra-high resolution PBR\nmaterials. Materials are crucial components of virtual relightable assets,\ndefining the interaction of light at the surface of geometries. Given their\nimportance, significant research effort was dedicated to their representation,\ncreation and acquisition. However, in the past 6 years, most research in\nmaterial acquisiton or generation relied either on the same unique dataset, or\non company-owned huge library of procedural materials. With this dataset we\npropose a significantly larger, more diverse, and higher resolution set of\nmaterials than previously publicly available. We carefully discuss the data\ncollection process and demonstrate the benefits of this dataset on material\nacquisition and generation applications. The complete data further contains\nmetadata with each material's origin, license, category, tags, creation method\nand, when available, descriptions and physical size, as well as 3M+ renderings\nof the augmented materials, in 1K, under various environment lightings. The\nMatSynth dataset is released through the project page at:\nhttps://www.gvecchio.com/matsynth.", "keywords": ["Computational imaging and physics-based vision"], "authors_list": ["Giuseppe Vecchio", "Valentin Deschaintre"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3e6"}, "filepath": "data/2311.10696.png", "tags": [], "_media_type": "image", "_rand": 0.9997730745684502, "arXiv_link": "https://arxiv.org/abs/2311.10696", "other_link": "", "title": "Versatile Medical Image Segmentation Learned from Multi-Source Datasets via Model Self-Disambiguation", "abstract": "A versatile medical image segmentation model applicable to images acquired\nwith diverse equipment and protocols can facilitate model deployment and\nmaintenance. However, building such a model typically demands a large, diverse,\nand fully annotated dataset, which is challenging to obtain due to the\nlabor-intensive nature of data curation. To address this challenge, we propose\na cost-effective alternative that harnesses multi-source data with only partial\nor sparse segmentation labels for training, substantially reducing the cost of\ndeveloping a versatile model. 
We devise strategies for model\nself-disambiguation, prior knowledge incorporation, and imbalance mitigation to\ntackle challenges associated with inconsistently labeled multi-source data,\nincluding label ambiguity and modality, dataset, and class imbalances.\nExperimental results on a multi-modal dataset compiled from eight different\nsources for abdominal structure segmentation have demonstrated the\neffectiveness and superior performance of our method compared to\nstate-of-the-art alternative approaches. We anticipate that its cost-saving\nfeatures, which optimize the utilization of existing annotated data and reduce\nannotation efforts for new data, will have a significant impact in the field.", "keywords": [], "authors_list": ["Xiaoyang Chen", "Hao Zheng", "Yuemeng LI", "Yuncong Ma", "Liang Ma", "Hongming Li", "Yong Fan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3e7"}, "filepath": "data/2405.00256.png", "tags": [], "_media_type": "image", "_rand": 0.9995151693060518, "arXiv_link": "https://arxiv.org/abs/2405.00256", "other_link": "https://asam2024.github.io/.", "title": "ASAM: Boosting Segment Anything Model with Adversarial Tuning", "abstract": "In the evolving landscape of computer vision, foundation models have emerged\nas pivotal tools, exhibiting exceptional adaptability to a myriad of tasks.\nAmong these, the Segment Anything Model (SAM) by Meta AI has distinguished\nitself in image segmentation. However, SAM, like its counterparts, encounters\nlimitations in specific niche applications, prompting a quest for enhancement\nstrategies that do not compromise its inherent capabilities. This paper\nintroduces ASAM, a novel methodology that amplifies SAM's performance through\nadversarial tuning. We harness the potential of natural adversarial examples,\ninspired by their successful implementation in natural language processing. By\nutilizing a stable diffusion model, we augment a subset (1%) of the SA-1B\ndataset, generating adversarial instances that are more representative of\nnatural variations rather than conventional imperceptible perturbations. Our\napproach maintains the photorealism of adversarial examples and ensures\nalignment with original mask annotations, thereby preserving the integrity of\nthe segmentation task. The fine-tuned ASAM demonstrates significant\nimprovements across a diverse range of segmentation tasks without necessitating\nadditional data or architectural modifications. The results of our extensive\nevaluations confirm that ASAM establishes new benchmarks in segmentation tasks,\nthereby contributing to the advancement of foundational models in computer\nvision. Our project page is in https://asam2024.github.io/.", "keywords": [], "authors_list": ["Bo Li", "Haoke Xiao", "Lv Tang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3e8"}, "filepath": "data/2307.04684.png", "tags": [], "_media_type": "image", "_rand": 0.9993701737760016, "arXiv_link": "https://arxiv.org/abs/2307.04684", "other_link": "", "title": "FreeDrag: Feature Dragging for Reliable Point-based Image Editing", "abstract": "To serve the intricate and varied demands of image editing, precise and\nflexible manipulation in image content is indispensable. 
Recently, Drag-based\nediting methods have gained impressive performance. However, these methods\npredominantly center on point dragging, resulting in two noteworthy drawbacks,\nnamely \"miss tracking\", where difficulties arise in accurately tracking the\npredetermined handle points, and \"ambiguous tracking\", where tracked points are\npotentially positioned in wrong regions that closely resemble the handle\npoints. To address the above issues, we propose FreeDrag, a feature dragging\nmethodology designed to free the burden on point tracking. The FreeDrag\nincorporates two key designs, i.e., template feature via adaptive updating and\nline search with backtracking, the former improves the stability against\ndrastic content change by elaborately controls feature updating scale after\neach dragging, while the latter alleviates the misguidance from similar points\nby actively restricting the search area in a line. These two technologies\ntogether contribute to a more stable semantic dragging with higher efficiency.\nComprehensive experimental results substantiate that our approach significantly\noutperforms pre-existing methodologies, offering reliable point-based editing\neven in various complex scenarios.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Pengyang Ling", "Lin Chen", "Pan Zhang", "Huaian Chen", "Yi Jin", "Jinjin Zheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Human-Computer Interaction", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3e9"}, "filepath": "data/2401.08937.png", "tags": [], "_media_type": "image", "_rand": 0.9997882510219332, "arXiv_link": "https://arxiv.org/abs/2401.08937", "other_link": "", "title": "ICON: Incremental CONfidence for Joint Pose and Radiance Field Optimization", "abstract": "Neural Radiance Fields (NeRF) exhibit remarkable performance for Novel View\nSynthesis (NVS) given a set of 2D images. However, NeRF training requires\naccurate camera pose for each input view, typically obtained by\nStructure-from-Motion (SfM) pipelines. Recent works have attempted to relax\nthis constraint, but they still often rely on decent initial poses which they\ncan refine. Here we aim at removing the requirement for pose initialization. We\npresent Incremental CONfidence (ICON), an optimization procedure for training\nNeRFs from 2D video frames. ICON only assumes smooth camera motion to estimate\ninitial guess for poses. Further, ICON introduces ``confidence\": an adaptive\nmeasure of model quality used to dynamically reweight gradients. ICON relies on\nhigh-confidence poses to learn NeRF, and high-confidence 3D structure (as\nencoded by NeRF) to learn poses. 
We show that ICON, without prior pose\ninitialization, achieves superior performance in both CO3D and HO3D versus\nmethods which use SfM pose.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Weiyao Wang", "Pierre Gleize", "Hao Tang", "Xingyu Chen", "Kevin Liang", "Matt Feiszli"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3ea"}, "filepath": "data/2306.00987.png", "tags": [], "_media_type": "image", "_rand": 0.9994163947824496, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2306.00987", "other_link": "", "title": "StyLitGAN: Image-based Relighting via Latent Control", "abstract": "Intrinsic images, in the original sense, are image-like maps of scene\nproperties like depth, normal, albedo or shading. This paper demonstrates that\nStyleGAN can easily be induced to produce intrinsic images. The procedure is\nstraightforward. We show that, if StyleGAN produces $G({w})$ from latents\n${w}$, then for each type of intrinsic image, there is a fixed offset ${d}_c$\nso that $G({w}+{d}_c)$ is that type of intrinsic image for $G({w})$. Here\n${d}_c$ is {\\em independent of ${w}$}. The StyleGAN we used was pretrained by\nothers, so this property is not some accident of our training regime. We show\nthat there are image transformations StyleGAN will {\\em not} produce in this\nfashion, so StyleGAN is not a generic image regression engine.\n It is conceptually exciting that an image generator should ``know'' and\nrepresent intrinsic images. There may also be practical advantages to using a\ngenerative model to produce intrinsic images. The intrinsic images obtained\nfrom StyleGAN compare well both qualitatively and quantitatively with those\nobtained by using SOTA image regression techniques; but StyleGAN's intrinsic\nimages are robust to relighting effects, unlike SOTA methods.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques", "Low-level vision"], "authors_list": ["Anand Bhattad", "James Soole", "David Forsyth"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3eb"}, "filepath": "data/2312.09250.png", "tags": [], "_media_type": "image", "_rand": 0.999015068781251, "arXiv_link": "https://arxiv.org/abs/2312.09250", "other_link": "", "title": "Single Mesh Diffusion Models with Field Latents for Texture Generation", "abstract": "We introduce a framework for intrinsic latent diffusion models operating\ndirectly on the surfaces of 3D shapes, with the goal of synthesizing\nhigh-quality textures. Our approach is underpinned by two contributions: field\nlatents, a latent representation encoding textures as discrete vector fields on\nthe mesh vertices, and field latent diffusion models, which learn to denoise a\ndiffusion process in the learned latent space on the surface. We consider a\nsingle-textured-mesh paradigm, where our models are trained to generate\nvariations of a given texture on a mesh. We show the synthesized textures are\nof superior fidelity compared those from existing single-textured-mesh\ngenerative models. Our models can also be adapted for user-controlled editing\ntasks such as inpainting and label-guided generation. 
The efficacy of our\napproach is due in part to the equivariance of our proposed framework under\nisometries, allowing our models to seamlessly reproduce details across locally\nsimilar regions and opening the door to a notion of generative texture\ntransfer.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Thomas W. Mitchel", "Carlos Esteves", "Ameesh Makadia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3ec"}, "filepath": "data/2403.06392.png", "tags": [], "_media_type": "image", "_rand": 0.9996496310911767, "arXiv_link": "https://arxiv.org/abs/2403.06392", "other_link": "", "title": "Label-Efficient Group Robustness via Out-of-Distribution Concept Curation", "abstract": "Generalizing to out-of-distribution (OOD) data or unseen domain, termed OOD\ngeneralization, still lacks appropriate theoretical guarantees. Canonical OOD\nbounds focus on different distance measurements between source and target\ndomains but fail to consider the optimization property of the learned model. As\nempirically shown in recent work, the sharpness of learned minima influences\nOOD generalization. To bridge this gap between optimization and OOD\ngeneralization, we study the effect of sharpness on how a model tolerates data\nchange in domain shift which is usually captured by \"robustness\" in\ngeneralization. In this paper, we give a rigorous connection between sharpness\nand robustness, which gives better OOD guarantees for robust algorithms. It\nalso provides a theoretical backing for \"flat minima leads to better OOD\ngeneralization\". Overall, we propose a sharpness-based OOD generalization bound\nby taking robustness into consideration, resulting in a tighter bound than\nnon-robust guarantees. Our findings are supported by the experiments on a ridge\nregression model, as well as the experiments on deep learning classification\ntasks.", "keywords": [], "authors_list": ["Yiwei Yang", "Anthony Liu", "Robert Wolfe", "Aylin Caliskan", "Bill Howe"], "category_name": "Machine Learning", "all_categories": ["Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3ed"}, "filepath": "data/2312.11911.png", "tags": [], "_media_type": "image", "_rand": 0.9993702186669998, "arXiv_link": "https://arxiv.org/abs/2312.11911", "other_link": "https://youtu.be/Nn40U4e5Si8.", "title": "EventPS: Real-Time Photometric Stereo Using an Event Camera", "abstract": "Event cameras are bio-inspired, motion-activated sensors that demonstrate\nsubstantial potential in handling challenging situations, such as motion blur\nand high-dynamic range. In this paper, we proposed EVI-SAM to tackle the\nproblem of 6 DoF pose tracking and 3D reconstruction using monocular event\ncamera. A novel event-based hybrid tracking framework is designed to estimate\nthe pose, leveraging the robustness of feature matching and the precision of\ndirect alignment. Specifically, we develop an event-based 2D-2D alignment to\nconstruct the photometric constraint, and tightly integrate it with the\nevent-based reprojection constraint. The mapping module recovers the dense and\ncolorful depth of the scene through the image-guided event-based mapping\nmethod. 
Subsequently, the appearance, texture, and surface mesh of the 3D scene\ncan be reconstructed by fusing the dense depth map from multiple viewpoints\nusing truncated signed distance function (TSDF) fusion. To the best of our\nknowledge, this is the first non-learning work to realize event-based dense\nmapping. Numerical evaluations are performed on both publicly available and\nself-collected datasets, which qualitatively and quantitatively demonstrate the\nsuperior performance of our method. Our EVI-SAM effectively balances accuracy\nand robustness while maintaining computational efficiency, showcasing superior\npose tracking and dense mapping performance in challenging scenarios. Video\nDemo: https://youtu.be/Nn40U4e5Si8.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Bohan Yu", "Jieji Ren", "Jin Han", "Feishi Wang", "Jinxiu Liang", "Boxin Shi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3ee"}, "filepath": "data/2404.01524.png", "tags": [], "_media_type": "image", "_rand": 0.9990142148108772, "arXiv_link": "https://arxiv.org/abs/2404.01524", "other_link": "https://github.com/dealicious-inc/RGLDv2-clean.", "title": "On Train-Test Class Overlap and Detection for Image Retrieval", "abstract": "How important is it for training and evaluation sets to not have class\noverlap in image retrieval? We revisit Google Landmarks v2 clean, the most\npopular training set, by identifying and removing class overlap with Revisited\nOxford and Paris [34], the most popular evaluation set. By comparing the\noriginal and the new RGLDv2-clean on a benchmark of reproduced state-of-the-art\nmethods, our findings are striking. Not only is there a dramatic drop in\nperformance, but it is inconsistent across methods, changing the ranking.What\ndoes it take to focus on objects or interest and ignore background clutter when\nindexing? Do we need to train an object detector and the representation\nseparately? Do we need location supervision? We introduce Single-stage\nDetect-to-Retrieve (CiDeR), an end-to-end, single-stage pipeline to detect\nobjects of interest and extract a global image representation. We outperform\nprevious state-of-the-art on both existing training sets and the new\nRGLDv2-clean. Our dataset is available at\nhttps://github.com/dealicious-inc/RGLDv2-clean.", "keywords": [], "authors_list": ["Chull Hwan Song", "Jooyoung Yoon", "Taebaek Hwang", "Shunghyun Choi", "Yeong Hyeon Gu", "Yannis Avrithis"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3ef"}, "filepath": "data/2404.18399.png", "tags": [], "_media_type": "image", "_rand": 0.9991182086575966, "arXiv_link": "https://arxiv.org/abs/2404.18399", "other_link": "https://github.com/Jinwon-Ko/SLCD.", "title": "Semantic Line Combination Detector", "abstract": "A novel algorithm, called semantic line combination detector (SLCD), to find\nan optimal combination of semantic lines is proposed in this paper. It\nprocesses all lines in each line combination at once to assess the overall\nharmony of the lines. First, we generate various line combinations from\nreliable lines. Second, we estimate the score of each line combination and\ndetermine the best one. 
Experimental results demonstrate that the proposed SLCD\noutperforms existing semantic line detectors on various datasets. Moreover, it\nis shown that SLCD can be applied effectively to three vision tasks of\nvanishing point detection, symmetry axis detection, and composition-based image\nretrieval. Our codes are available at https://github.com/Jinwon-Ko/SLCD.", "keywords": [], "authors_list": ["JINWON KO", "Dongkwon Jin", "Chang-Su Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3f0"}, "filepath": "data/2405.06216.png", "tags": [], "_media_type": "image", "_rand": 0.9996934695780694, "arXiv_link": "https://arxiv.org/abs/2405.06216", "other_link": "", "title": "Event-based Structure-from-Orbit", "abstract": "Event sensors offer high temporal resolution visual sensing, which makes them\nideal for perceiving fast visual phenomena without suffering from motion blur.\nCertain applications in robotics and vision-based navigation require 3D\nperception of an object undergoing circular or spinning motion in front of a\nstatic camera, such as recovering the angular velocity and shape of the object.\nThe setting is equivalent to observing a static object with an orbiting camera.\nIn this paper, we propose event-based structure-from-orbit (eSfO), where the\naim is to simultaneously reconstruct the 3D structure of a fast spinning object\nobserved from a static event camera, and recover the equivalent orbital motion\nof the camera. Our contributions are threefold: since state-of-the-art event\nfeature trackers cannot handle periodic self-occlusion due to the spinning\nmotion, we develop a novel event feature tracker based on spatio-temporal\nclustering and data association that can better track the helical trajectories\nof valid features in the event data. The feature tracks are then fed to our\nnovel factor graph-based structure-from-orbit back-end that calculates the\norbital motion parameters (e.g., spin rate, relative rotational axis) that\nminimize the reprojection error. For evaluation, we produce a new event dataset\nof objects under spinning motion. Comparisons against ground truth indicate the\nefficacy of eSfO.", "keywords": [], "authors_list": ["Ethan Elms", "Yasir Latif", "Tae Ha Park", "Tat-Jun Chin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3f1"}, "filepath": "data/2403.18575.png", "tags": [], "_media_type": "image", "_rand": 0.9995786921526361, "arXiv_link": "https://arxiv.org/abs/2403.18575", "other_link": "https://github.com/hxwork/HandBooster_Pytorch.", "title": "HandBooster: Boosting 3D Hand-Mesh Reconstruction by Conditional Synthesis and Sampling of Hand-Object Interactions", "abstract": "Reconstructing 3D hand mesh robustly from a single image is very challenging,\ndue to the lack of diversity in existing real-world datasets. While data\nsynthesis helps relieve the issue, the syn-to-real gap still hinders its usage.\nIn this work, we present HandBooster, a new approach to uplift the data\ndiversity and boost the 3D hand-mesh reconstruction performance by training a\nconditional generative space on hand-object interactions and purposely sampling\nthe space to synthesize effective data samples. 
First, we construct versatile\ncontent-aware conditions to guide a diffusion model to produce realistic images\nwith diverse hand appearances, poses, views, and backgrounds; favorably,\naccurate 3D annotations are obtained for free. Then, we design a novel\ncondition creator based on our similarity-aware distribution sampling\nstrategies to deliberately find novel and realistic interaction poses that are\ndistinctive from the training set. Equipped with our method, several baselines\ncan be significantly improved beyond the SOTA on the HO3D and DexYCB\nbenchmarks. Our code will be released on\nhttps://github.com/hxwork/HandBooster_Pytorch.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Hao Xu", "Li Haipeng", "Yinqiao Wang", "Shuaicheng Liu", "Chi-Wing Fu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3f2"}, "filepath": "data/2312.03045.png", "tags": [], "_media_type": "image", "_rand": 0.9998303897295387, "arXiv_link": "https://arxiv.org/abs/2312.03045", "other_link": "", "title": "Customization Assistant for Text-to-image Generation", "abstract": "Customizing pre-trained text-to-image generation model has attracted massive\nresearch interest recently, due to its huge potential in real-world\napplications. Although existing methods are able to generate creative content\nfor a novel concept contained in single user-input image, their capability are\nstill far from perfection. Specifically, most existing methods require\nfine-tuning the generative model on testing images. Some existing methods do\nnot require fine-tuning, while their performance are unsatisfactory.\nFurthermore, the interaction between users and models are still limited to\ndirective and descriptive prompts such as instructions and captions. In this\nwork, we build a customization assistant based on pre-trained large language\nmodel and diffusion model, which can not only perform customized generation in\na tuning-free manner, but also enable more user-friendly interactions: users\ncan chat with the assistant and input either ambiguous text or clear\ninstruction. Specifically, we propose a new framework consists of a new model\ndesign and a novel training strategy. 
The resulting assistant can perform\ncustomized generation in 2-5 seconds without any test time fine-tuning.\nExtensive experiments are conducted, competitive results have been obtained\nacross different domains, illustrating the effectiveness of the proposed\nmethod.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Yufan Zhou", "Ruiyi Zhang", "Jiuxiang Gu", "Tong Sun"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3f3"}, "filepath": "data/2404.15655.png", "tags": [], "_media_type": "image", "_rand": 0.9995330956341365, "arXiv_link": "https://arxiv.org/abs/2404.15655", "other_link": "https://github.com/Alexander-Yao/Multi-MaP.", "title": "Multi-Modal Proxy Learning Towards Personalized Visual Multiple Clustering", "abstract": "Multiple clustering has gained significant attention in recent years due to\nits potential to reveal multiple hidden structures of data from different\nperspectives. The advent of deep multiple clustering techniques has notably\nadvanced the performance by uncovering complex patterns and relationships\nwithin large datasets. However, a major challenge arises as users often do not\nneed all the clusterings that algorithms generate, and figuring out the one\nneeded requires a substantial understanding of each clustering result.\nTraditionally, aligning a user's brief keyword of interest with the\ncorresponding vision components was challenging, but the emergence of\nmulti-modal and large language models (LLMs) has begun to bridge this gap. In\nresponse, given unlabeled target visual data, we propose Multi-MaP, a novel\nmethod employing a multi-modal proxy learning process. It leverages CLIP\nencoders to extract coherent text and image embeddings, with GPT-4 integrating\nusers' interests to formulate effective textual contexts. Moreover, reference\nword constraint and concept-level constraint are designed to learn the optimal\ntext proxy according to the user's interest. Multi-MaP not only adeptly\ncaptures a user's interest via a keyword but also facilitates identifying\nrelevant clusterings. Our extensive experiments show that Multi-MaP\nconsistently outperforms state-of-the-art methods in all benchmark\nmulti-clustering vision tasks. Our code is available at\nhttps://github.com/Alexander-Yao/Multi-MaP.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Jiawei Yao", "Qi Qian", "Juhua Hu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3f4"}, "filepath": "data/2404.06244.png", "tags": [], "_media_type": "image", "_rand": 0.9999170349531973, "arXiv_link": "https://arxiv.org/abs/2404.06244", "other_link": "", "title": "Anchor-based Robust Finetuning of Vision-Language Models", "abstract": "We aim at finetuning a vision-language model without hurting its\nout-of-distribution (OOD) generalization. We address two types of OOD\ngeneralization, i.e., i) domain shift such as natural to sketch images, and ii)\nzero-shot capability to recognize the category that was not contained in the\nfinetune data. 
Arguably, the diminished OOD generalization after finetuning\nstems from the excessively simplified finetuning target, which only provides\nthe class information, such as ``a photo of a [CLASS]''. This is distinct from\nthe process in that CLIP was pretrained, where there is abundant text\nsupervision with rich semantic information. Therefore, we propose to compensate\nfor the finetune process using auxiliary supervision with rich semantic\ninformation, which acts as anchors to preserve the OOD generalization.\nSpecifically, two types of anchors are elaborated in our method, including i)\ntext-compensated anchor which uses the images from the finetune set but\nenriches the text supervision from a pretrained captioner, ii) image-text-pair\nanchor which is retrieved from the dataset similar to pretraining data of CLIP\naccording to the downstream task, associating with the original CLIP text with\nrich semantics. Those anchors are utilized as auxiliary semantic information to\nmaintain the original feature space of CLIP, thereby preserving the OOD\ngeneralization capabilities. Comprehensive experiments demonstrate that our\nmethod achieves in-distribution performance akin to conventional finetuning\nwhile attaining new state-of-the-art results on domain shift and zero-shot\nlearning benchmarks.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Jinwei Han", "Zhiwen Lin", "Zhongyisun Sun", "Yingguo Gao", "Ke Yan", "Shouhong Ding", "Yuan Gao", "Gui-Song Xia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3f5"}, "filepath": "data/2308.15074.png", "tags": [], "_media_type": "image", "_rand": 0.9990192528879273, "arXiv_link": "https://arxiv.org/abs/2308.15074", "other_link": "https://github.com/lixiaotong97/PED.", "title": "LEAD: Exploring Logit Space Evolution for Model Selection", "abstract": "Transfer learning has become crucial in computer vision tasks due to the vast\navailability of pre-trained deep learning models. However, selecting the\noptimal pre-trained model from a diverse pool for a specific downstream task\nremains a challenge. Existing methods for measuring the transferability of\npre-trained models rely on statistical correlations between encoded static\nfeatures and task labels, but they overlook the impact of underlying\nrepresentation dynamics during fine-tuning, leading to unreliable results,\nespecially for self-supervised models. In this paper, we present an insightful\nphysics-inspired approach named PED to address these challenges. We reframe the\nchallenge of model selection through the lens of potential energy and directly\nmodel the interaction forces that influence fine-tuning dynamics. By capturing\nthe motion of dynamic representations to decline the potential energy within a\nforce-driven physical model, we can acquire an enhanced and more stable\nobservation for estimating transferability. The experimental results on 10\ndownstream tasks and 12 self-supervised models demonstrate that our approach\ncan seamlessly integrate into existing ranking techniques and enhance their\nperformances, revealing its effectiveness for the model selection task and its\npotential for understanding the mechanism in transfer learning. 
Code will be\navailable at https://github.com/lixiaotong97/PED.", "keywords": [], "authors_list": ["Zixuan Hu", "Xiaotong Li", "SHIXIANG TANG", "Jun Liu", "Yichun Hu", "Ling-Yu Duan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3f6"}, "filepath": "data/2306.07520.png", "tags": [], "_media_type": "image", "_rand": 0.9991297773512329, "arXiv_link": "https://arxiv.org/abs/2306.07520", "other_link": "https://github.com/hwz-zju/Instruct-ReID.", "title": "Instruct-ReID: A Multi-purpose Person Re-identification Task with Instructions", "abstract": "Human intelligence can retrieve any person according to both visual and\nlanguage descriptions. However, the current computer vision community studies\nspecific person re-identification (ReID) tasks in different scenarios\nseparately, which limits the applications in the real world. This paper strives\nto resolve this problem by proposing a new instruct-ReID task that requires the\nmodel to retrieve images according to the given image or language instructions.\nOur instruct-ReID is a more general ReID setting, where existing 6 ReID tasks\ncan be viewed as special cases by designing different instructions. We propose\na large-scale OmniReID benchmark and an adaptive triplet loss as a baseline\nmethod to facilitate research in this new setting. Experimental results show\nthat the proposed multi-purpose ReID model, trained on our OmniReID benchmark\nwithout fine-tuning, can improve +0.5%, +0.6%, +7.7% mAP on Market1501, MSMT17,\nCUHK03 for traditional ReID, +6.4%, +7.1%, +11.2% mAP on PRCC, VC-Clothes, LTCC\nfor clothes-changing ReID, +11.7% mAP on COCAS+ real2 for clothes template\nbased clothes-changing ReID when using only RGB images, +24.9% mAP on COCAS+\nreal2 for our newly defined language-instructed ReID, +4.3% on LLCM for\nvisible-infrared ReID, +2.6% on CUHK-PEDES for text-to-image ReID. The\ndatasets, the model, and code will be available at\nhttps://github.com/hwz-zju/Instruct-ReID.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Weizhen He", "Yiheng Deng", "SHIXIANG TANG", "Qihao CHEN", "Qingsong Xie", "Yizhou Wang", "Lei Bai", "Feng Zhu", "Rui Zhao", "Wanli Ouyang", "Donglian Qi", "Yunfeng Yan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3f7"}, "filepath": "data/2311.15841.png", "tags": [], "_media_type": "image", "_rand": 0.9996454110012575, "arXiv_link": "https://arxiv.org/abs/2311.15841", "other_link": "https://adi-t2i.github.io/ADI.", "title": "Learning Disentangled Identifiers for Action-Customized Text-to-Image Generation", "abstract": "This study focuses on a novel task in text-to-image (T2I) generation, namely\naction customization. The objective of this task is to learn the co-existing\naction from limited data and generalize it to unseen humans or even animals.\nExperimental results show that existing subject-driven customization methods\nfail to learn the representative characteristics of actions and struggle in\ndecoupling actions from context features, including appearance. 
To overcome the\npreference for low-level features and the entanglement of high-level features,\nwe propose an inversion-based method Action-Disentangled Identifier (ADI) to\nlearn action-specific identifiers from the exemplar images. ADI first expands\nthe semantic conditioning space by introducing layer-wise identifier tokens,\nthereby increasing the representational richness while distributing the\ninversion across different features. Then, to block the inversion of\naction-agnostic features, ADI extracts the gradient invariance from the\nconstructed sample triples and masks the updates of irrelevant channels. To\ncomprehensively evaluate the task, we present an ActionBench that includes a\nvariety of actions, each accompanied by meticulously selected samples. Both\nquantitative and qualitative results show that our ADI outperforms existing\nbaselines in action-customized T2I generation. Our project page is at\nhttps://adi-t2i.github.io/ADI.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Siteng Huang", "Biao Gong", "Yutong Feng", "Xi Chen", "Yuqian Fu", "Yu Liu", "Donglin Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3f8"}, "filepath": "data/2311.16494.png", "tags": [], "_media_type": "image", "_rand": 0.9997347798550684, "arXiv_link": "https://arxiv.org/abs/2311.16494", "other_link": "", "title": "ArGue: Attribute-Guided Prompt Tuning for Vision-Language Models", "abstract": "Although soft prompt tuning is effective in efficiently adapting\nVision-Language (V&L) models for downstream tasks, it shows limitations in\ndealing with distribution shifts. We address this issue with Attribute-Guided\nPrompt Tuning (ArGue), making three key contributions. 1) In contrast to the\nconventional approach of directly appending soft prompts preceding class names,\nwe align the model with primitive visual attributes generated by Large Language\nModels (LLMs). We posit that a model's ability to express high confidence in\nthese attributes signifies its capacity to discern the correct class\nrationales. 2) We introduce attribute sampling to eliminate disadvantageous\nattributes, thus only semantically meaningful attributes are preserved. 3) We\npropose negative prompting, explicitly enumerating class-agnostic attributes to\nactivate spurious correlations and encourage the model to generate highly\northogonal probability distributions in relation to these negative features. 
In\nexperiments, our method significantly outperforms current state-of-the-art\nprompt tuning methods on both novel class prediction and out-of-distribution\ngeneralization tasks.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Xinyu Tian", "Shu Zou", "Zhaoyuan Yang", "Jing Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3f9"}, "filepath": "data/2404.14471.png", "tags": [], "_media_type": "image", "_rand": 0.9993679274279129, "arXiv_link": "https://arxiv.org/abs/2404.14471", "other_link": "https://github.com/shiyi-zh0408/NAE_CVPR2024.", "title": "Narrative Action Evaluation with Prompt-Guided Multimodal Interaction", "abstract": "In this paper, we investigate a new problem called narrative action\nevaluation (NAE). NAE aims to generate professional commentary that evaluates\nthe execution of an action. Unlike traditional tasks such as score-based action\nquality assessment and video captioning involving superficial sentences, NAE\nfocuses on creating detailed narratives in natural language. These narratives\nprovide intricate descriptions of actions along with objective evaluations. NAE\nis a more challenging task because it requires both narrative flexibility and\nevaluation rigor. One existing possible solution is to use multi-task learning,\nwhere narrative language and evaluative information are predicted separately.\nHowever, this approach results in reduced performance for individual tasks\nbecause of variations between tasks and differences in modality between\nlanguage information and evaluation information. To address this, we propose a\nprompt-guided multimodal interaction framework. This framework utilizes a pair\nof transformers to facilitate the interaction between different modalities of\ninformation. It also uses prompts to transform the score regression task into a\nvideo-text matching task, thus enabling task interactivity. To support further\nresearch in this field, we re-annotate the MTL-AQA and FineGym datasets with\nhigh-quality and comprehensive action narration. Additionally, we establish\nbenchmarks for NAE. Extensive experiment results prove that our method\noutperforms separate learning methods and naive multi-task learning methods.\nData and code are released at https://github.com/shiyi-zh0408/NAE_CVPR2024.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Shiyi Zhang", "Sule Bai", "Guangyi Chen", "Lei Chen", "Jiwen Lu", "Junle Wang", "Yansong Tang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3fa"}, "filepath": "data/2401.07402.png", "tags": [], "_media_type": "image", "_rand": 0.9992757449962355, "arXiv_link": "https://arxiv.org/abs/2401.07402", "other_link": "", "title": "Improved Implicit Neural Representation with Fourier Reparameterized Training", "abstract": "Implicit Neural Representation (INR) as a mighty representation paradigm has\nachieved success in various computer vision tasks recently. 
Due to the\nlow-frequency bias issue of vanilla multi-layer perceptron (MLP), existing\nmethods have investigated advanced techniques, such as positional encoding and\nperiodic activation function, to improve the accuracy of INR. In this paper, we\nconnect the network training bias with the reparameterization technique and\ntheoretically prove that weight reparameterization could provide us a chance to\nalleviate the spectral bias of MLP. Based on our theoretical analysis, we\npropose a Fourier reparameterization method which learns coefficient matrix of\nfixed Fourier bases to compose the weights of MLP. We evaluate the proposed\nFourier reparameterization method on different INR tasks with various MLP\narchitectures, including vanilla MLP, MLP with positional encoding and MLP with\nadvanced activation function, etc. The superiority approximation results on\ndifferent MLP architectures clearly validate the advantage of our proposed\nmethod. Armed with our Fourier reparameterization method, better INR with more\ntextures and less artifacts can be learned from the training data.", "keywords": ["Low-level vision", "Low-level vision"], "authors_list": ["Kexuan Shi", "Xingyu Zhou", "Shuhang Gu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3fb"}, "filepath": "data/2311.18168.png", "tags": [], "_media_type": "image", "_rand": 0.9995730656403461, "arXiv_link": "https://arxiv.org/abs/2311.18168", "other_link": "", "title": "Probabilistic Speech-Driven 3D Facial Motion Synthesis: New Benchmarks, Methods, and Applications", "abstract": "We consider the task of animating 3D facial geometry from speech signal.\nExisting works are primarily deterministic, focusing on learning a one-to-one\nmapping from speech signal to 3D face meshes on small datasets with limited\nspeakers. While these models can achieve high-quality lip articulation for\nspeakers in the training set, they are unable to capture the full and diverse\ndistribution of 3D facial motions that accompany speech in the real world.\nImportantly, the relationship between speech and facial motion is one-to-many,\ncontaining both inter-speaker and intra-speaker variations and necessitating a\nprobabilistic approach. In this paper, we identify and address key challenges\nthat have so far limited the development of probabilistic models: lack of\ndatasets and metrics that are suitable for training and evaluating them, as\nwell as the difficulty of designing a model that generates diverse results\nwhile remaining faithful to a strong conditioning signal as speech. We first\npropose large-scale benchmark datasets and metrics suitable for probabilistic\nmodeling. Then, we demonstrate a probabilistic model that achieves both\ndiversity and fidelity to speech, outperforming other methods across the\nproposed benchmarks. 
Finally, we showcase useful applications of probabilistic\nmodels trained on these large-scale datasets: we can generate diverse\nspeech-driven 3D facial motion that matches unseen speaker styles extracted\nfrom reference clips; and our synthetic meshes can be used to improve the\nperformance of downstream audio-visual models.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Karren Yang", "Anurag Ranjan", "Jen-Hao Rick Chang", "Raviteja Vemulapalli", "Oncel Tuzel"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3fc"}, "filepath": "data/2308.12831.png", "tags": [], "_media_type": "image", "_rand": 0.9995835768976905, "arXiv_link": "https://arxiv.org/abs/2308.12831", "other_link": "", "title": "EFormer: Enhanced Transformer towards Semantic-Contour Features of Foreground for Portraits Matting", "abstract": "The portrait matting task aims to extract an alpha matte with complete\nsemantics and finely-detailed contours. In comparison to CNN-based approaches,\ntransformers with self-attention module have a better capacity to capture\nlong-range dependencies and low-frequency semantic information of a portrait.\nHowever, the recent research shows that self-attention mechanism struggles with\nmodeling high-frequency contour information and capturing fine contour details,\nwhich can lead to bias while predicting the portrait's contours. To deal with\nthis issue, we propose EFormer to enhance the model's attention towards both of\nthe low-frequency semantic and high-frequency contour features. For the\nhigh-frequency contours, our research demonstrates that cross-attention module\nbetween different resolutions can guide our model to allocate attention\nappropriately to these contour regions. Supported on this, we can successfully\nextract the high-frequency detail information around the portrait's contours,\nwhich are previously ignored by self-attention. Based on cross-attention\nmodule, we further build a semantic and contour detector (SCD) to accurately\ncapture both of the low-frequency semantic and high-frequency contour features.\nAnd we design contour-edge extraction branch and semantic extraction branch to\nextract refined high-frequency contour features and complete low-frequency\nsemantic information, respectively. 
Finally, we fuse the two kinds of features\nand leverage segmentation head to generate a predicted portrait matte.\nExperiments on VideoMatte240K (JPEG SD Format) and Adobe Image Matting (AIM)\ndatasets demonstrate that EFormer outperforms previous portrait matte methods.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Zitao Wang", "Qiguang Miao", "Yue Xi", "Peipei Zhao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3fd"}, "filepath": "data/2405.14294v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997551152862754, "arXiv_link": "https://arxiv.org/html/2405.14294v1", "other_link": "", "title": "Frozen CLIP: A Strong Backbone for Weakly Supervised Semantic Segmentation", "abstract": "This work presents a tuning-free semantic segmentation framework based on\nclassifying SAM masks by CLIP, which is universally applicable to various types\nof supervision. Initially, we utilize CLIP's zero-shot classification ability\nto generate pseudo-labels or perform open-vocabulary segmentation. However, the\nmisalignment between mask and CLIP text embeddings leads to suboptimal results.\nTo address this issue, we propose discrimination-bias aligned CLIP to closely\nalign mask and text embedding, offering an overhead-free performance gain. We\nthen construct a global-local consistent classifier to classify SAM masks,\nwhich reveals the intrinsic structure of high-quality embeddings produced by\nDBA-CLIP and demonstrates robustness against noisy pseudo-labels. Extensive\nexperiments validate the efficiency and effectiveness of our method, and we\nachieve state-of-the-art (SOTA) or competitive performance across various\ndatasets and supervision types.", "keywords": [], "authors_list": ["Bingfeng Zhang", "Siyue Yu", "Yunchao Wei", "Yao Zhao", "Jimin Xiao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3fe"}, "filepath": "data/2403.01813.png", "tags": [], "_media_type": "image", "_rand": 0.9991232723263849, "arXiv_link": "https://arxiv.org/abs/2403.01813", "other_link": "", "title": "A Simple Baseline for Efficient Hand Mesh Reconstruction", "abstract": "3D hand pose estimation has found broad application in areas such as gesture\nrecognition and human-machine interaction tasks. As performance improves, the\ncomplexity of the systems also increases, which can limit the comparative\nanalysis and practical implementation of these methods. In this paper, we\npropose a simple yet effective baseline that not only surpasses\nstate-of-the-art (SOTA) methods but also demonstrates computational efficiency.\nTo establish this baseline, we abstract existing work into two components: a\ntoken generator and a mesh regressor, and then examine their core structures. A\ncore structure, in this context, is one that fulfills intrinsic functions,\nbrings about significant improvements, and achieves excellent performance\nwithout unnecessary complexities. Our proposed approach is decoupled from any\nmodifications to the backbone, making it adaptable to any modern models. Our\nmethod outperforms existing solutions, achieving state-of-the-art (SOTA)\nresults across multiple datasets. 
On the FreiHAND dataset, our approach\nproduced a PA-MPJPE of 5.7mm and a PA-MPVPE of 6.0mm. Similarly, on the Dexycb\ndataset, we observed a PA-MPJPE of 5.5mm and a PA-MPVPE of 5.0mm. As for\nperformance speed, our method reached up to 33 frames per second (fps) when\nusing HRNet and up to 70 fps when employing FastViT-MA36", "keywords": ["Efficient and scalable vision", "Biometrics and human analysis"], "authors_list": ["zhishan zhou", "shihao zhou", "Zhi Lv", "minqiang zou", "Yao Tang", "Jiajun Liang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f3ff"}, "filepath": "data/2403.09623.png", "tags": [], "_media_type": "image", "_rand": 0.9993326470479362, "arXiv_link": "http://export.arxiv.org/abs/2403.09623", "other_link": "https://statho.github.io/ScoreHMR.", "title": "Score-Guided Diffusion for 3D Human Recovery", "abstract": "We present Score-Guided Human Mesh Recovery (ScoreHMR), an approach for\nsolving inverse problems for 3D human pose and shape reconstruction. These\ninverse problems involve fitting a human body model to image observations,\ntraditionally solved through optimization techniques. ScoreHMR mimics model\nfitting approaches, but alignment with the image observation is achieved\nthrough score guidance in the latent space of a diffusion model. The diffusion\nmodel is trained to capture the conditional distribution of the human model\nparameters given an input image. By guiding its denoising process with a\ntask-specific score, ScoreHMR effectively solves inverse problems for various\napplications without the need for retraining the task-agnostic diffusion model.\nWe evaluate our approach on three settings/applications. These are: (i)\nsingle-frame model fitting; (ii) reconstruction from multiple uncalibrated\nviews; (iii) reconstructing humans in video sequences. ScoreHMR consistently\noutperforms all optimization baselines on popular benchmarks across all\nsettings. We make our code and models available at the\nhttps://statho.github.io/ScoreHMR.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Anastasis Stathopoulos", "Ligong Han", "Dimitris N. Metaxas"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f400"}, "filepath": "data/2403.13417.png", "tags": [], "_media_type": "image", "_rand": 0.9990036734164593, "arXiv_link": "https://arxiv.org/abs/2403.13417", "other_link": "https://github.com/ycwu1997/D-Persona.", "title": "Diversified and Personalized Multi-rater Medical Image Segmentation", "abstract": "Annotation ambiguity due to inherent data uncertainties such as blurred\nboundaries in medical scans and different observer expertise and preferences\nhas become a major obstacle for training deep-learning based medical image\nsegmentation models. To address it, the common practice is to gather multiple\nannotations from different experts, leading to the setting of multi-rater\nmedical image segmentation. Existing works aim to either merge different\nannotations into the \"groundtruth\" that is often unattainable in numerous\nmedical contexts, or generate diverse results, or produce personalized results\ncorresponding to individual expert raters. 
Here, we bring up a more ambitious\ngoal for multi-rater medical image segmentation, i.e., obtaining both\ndiversified and personalized results. Specifically, we propose a two-stage\nframework named D-Persona (first Diversification and then Personalization). In\nStage I, we exploit multiple given annotations to train a Probabilistic U-Net\nmodel, with a bound-constrained loss to improve the prediction diversity. In\nthis way, a common latent space is constructed in Stage I, where different\nlatent codes denote diversified expert opinions. Then, in Stage II, we design\nmultiple attention-based projection heads to adaptively query the corresponding\nexpert prompts from the shared latent space, and then perform the personalized\nmedical image segmentation. We evaluated the proposed model on our in-house\nNasopharyngeal Carcinoma dataset and the public lung nodule dataset (i.e.,\nLIDC-IDRI). Extensive experiments demonstrated our D-Persona can provide\ndiversified and personalized results at the same time, achieving new SOTA\nperformance for multi-rater medical image segmentation. Our code will be\nreleased at https://github.com/ycwu1997/D-Persona.", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Yicheng Wu", "Xiangde Luo", "Zhe Xu", "Xiaoqing Guo", "Lie Ju", "Zongyuan Ge", "Wenjun Liao", "Jianfei Cai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f401"}, "filepath": "data/2307.09481.png", "tags": [], "_media_type": "image", "_rand": 0.9996112396886255, "arXiv_link": "https://arxiv.org/abs/2307.09481", "other_link": "https://damo-vilab.github.io/AnyDoor-Page/.", "title": "AnyDoor: Zero-shot Object-level Image Customization", "abstract": "This work presents AnyDoor, a diffusion-based image generator with the power\nto teleport target objects to new scenes at user-specified locations in a\nharmonious way. Instead of tuning parameters for each object, our model is\ntrained only once and effortlessly generalizes to diverse object-scene\ncombinations at the inference stage. Such a challenging zero-shot setting\nrequires an adequate characterization of a certain object. To this end, we\ncomplement the commonly used identity feature with detail features, which are\ncarefully designed to maintain texture details yet allow versatile local\nvariations (e.g., lighting, orientation, posture, etc.), supporting the object\nin favorably blending with different surroundings. We further propose to borrow\nknowledge from video datasets, where we can observe various forms (i.e., along\nthe time axis) of a single object, leading to stronger model generalizability\nand robustness. Extensive experiments demonstrate the superiority of our\napproach over existing alternatives as well as its great potential in\nreal-world applications, such as virtual try-on and object moving. 
Project page\nis https://damo-vilab.github.io/AnyDoor-Page/.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Xi Chen", "Lianghua Huang", "Yu Liu", "Yujun Shen", "Deli Zhao", "Hengshuang Zhao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f402"}, "filepath": "data/2403.06092.png", "tags": [], "_media_type": "image", "_rand": 0.9992946316274695, "arXiv_link": "https://arxiv.org/abs/2403.06092", "other_link": "", "title": "Is Vanilla MLP in Neural Radiance Field Enough for Few-shot View Synthesis?", "abstract": "Neural Radiance Field (NeRF) has achieved superior performance for novel view\nsynthesis by modeling the scene with a Multi-Layer Perception (MLP) and a\nvolume rendering procedure, however, when fewer known views are given (i.e.,\nfew-shot view synthesis), the model is prone to overfit the given views. To\nhandle this issue, previous efforts have been made towards leveraging learned\npriors or introducing additional regularizations. In contrast, in this paper,\nwe for the first time provide an orthogonal method from the perspective of\nnetwork structure. Given the observation that trivially reducing the number of\nmodel parameters alleviates the overfitting issue, but at the cost of missing\ndetails, we propose the multi-input MLP (mi-MLP) that incorporates the inputs\n(i.e., location and viewing direction) of the vanilla MLP into each layer to\nprevent the overfitting issue without harming detailed synthesis. To further\nreduce the artifacts, we propose to model colors and volume density separately\nand present two regularization terms. Extensive experiments on multiple\ndatasets demonstrate that: 1) although the proposed mi-MLP is easy to\nimplement, it is surprisingly effective as it boosts the PSNR of the baseline\nfrom $14.73$ to $24.23$. 2) the overall framework achieves state-of-the-art\nresults on a wide range of benchmarks. We will release the code upon\npublication.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Hanxin Zhu", "Tianyu He", "Xin Li", "Bingchen Li", "Zhibo Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f403"}, "filepath": "data/2405.12725.png", "tags": [], "_media_type": "image", "_rand": 0.9991149485247272, "arXiv_link": "https://arxiv.org/abs/2405.12725", "other_link": "https://github.com/AntigoneRandy/QuantBackdoor_EFRAP.", "title": "Nearest Is Not Dearest: Towards Practical Defense against Quantization-conditioned Backdoor Attacks", "abstract": "Model quantization is widely used to compress and accelerate deep neural\nnetworks. However, recent studies have revealed the feasibility of weaponizing\nmodel quantization via implanting quantization-conditioned backdoors (QCBs).\nThese special backdoors stay dormant on released full-precision models but will\ncome into effect after standard quantization. Due to the peculiarity of QCBs,\nexisting defenses have minor effects on reducing their threats or are even\ninfeasible. In this paper, we conduct the first in-depth analysis of QCBs. 
We\nreveal that the activation of existing QCBs primarily stems from the nearest\nrounding operation and is closely related to the norms of neuron-wise\ntruncation errors (i.e., the difference between the continuous full-precision\nweights and its quantized version). Motivated by these insights, we propose\nError-guided Flipped Rounding with Activation Preservation (EFRAP), an\neffective and practical defense against QCBs. Specifically, EFRAP learns a\nnon-nearest rounding strategy with neuron-wise error norm and layer-wise\nactivation preservation guidance, flipping the rounding strategies of neurons\ncrucial for backdoor effects but with minimal impact on clean accuracy.\nExtensive evaluations on benchmark datasets demonstrate that our EFRAP can\ndefeat state-of-the-art QCB attacks under various settings. Code is available\nat https://github.com/AntigoneRandy/QuantBackdoor_EFRAP.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Boheng Li", "Yishuo Cai", "Haowei Li", "Feng Xue", "Zhifeng Li", "Yiming Li"], "category_name": "Cryptography and Security", "all_categories": ["Cryptography and Security", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f404"}, "filepath": "data/2404.16510.png", "tags": [], "_media_type": "image", "_rand": 0.9999466688490164, "arXiv_link": "https://arxiv.org/abs/2404.16510", "other_link": "https://interactive-3d.github.io/}.", "title": "Interactive3D: Create What You Want by Interactive 3D Generation", "abstract": "3D object generation has undergone significant advancements, yielding\nhigh-quality results. However, fall short of achieving precise user control,\noften yielding results that do not align with user expectations, thus limiting\ntheir applicability. User-envisioning 3D object generation faces significant\nchallenges in realizing its concepts using current generative models due to\nlimited interaction capabilities. Existing methods mainly offer two approaches:\n(i) interpreting textual instructions with constrained controllability, or (ii)\nreconstructing 3D objects from 2D images. Both of them limit customization to\nthe confines of the 2D reference and potentially introduce undesirable\nartifacts during the 3D lifting process, restricting the scope for direct and\nversatile 3D modifications. In this work, we introduce Interactive3D, an\ninnovative framework for interactive 3D generation that grants users precise\ncontrol over the generative process through extensive 3D interaction\ncapabilities. Interactive3D is constructed in two cascading stages, utilizing\ndistinct 3D representations. The first stage employs Gaussian Splatting for\ndirect user interaction, allowing modifications and guidance of the generative\ndirection at any intermediate step through (i) Adding and Removing components,\n(ii) Deformable and Rigid Dragging, (iii) Geometric Transformations, and (iv)\nSemantic Editing. Subsequently, the Gaussian splats are transformed into\nInstantNGP. We introduce a novel (v) Interactive Hash Refinement module to\nfurther add details and extract the geometry in the second stage. Our\nexperiments demonstrate that Interactive3D markedly improves the\ncontrollability and quality of 3D generation. 
Our project webpage is available\nat \\url{https://interactive-3d.github.io/}.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Shaocong Dong", "Lihe Ding", "Zhanpeng Huang", "Zibin Wang", "Tianfan Xue", "Dan Xu"], "category_name": "Graphics", "all_categories": ["Graphics", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f405"}, "filepath": "data/2404.02233.png", "tags": [], "_media_type": "image", "_rand": 0.9996114437889839, "arXiv_link": "https://arxiv.org/abs/2404.02233", "other_link": "", "title": "Visual Concept Connectome (VCC): Open World Concept Discovery and their Interlayer Connections in Deep Models", "abstract": "Understanding what deep network models capture in their learned\nrepresentations is a fundamental challenge in computer vision. We present a new\nmethodology to understanding such vision models, the Visual Concept Connectome\n(VCC), which discovers human interpretable concepts and their interlayer\nconnections in a fully unsupervised manner. Our approach simultaneously reveals\nfine-grained concepts at a layer, connection weightings across all layers and\nis amendable to global analysis of network structure (e.g., branching pattern\nof hierarchical concept assemblies). Previous work yielded ways to extract\ninterpretable concepts from single layers and examine their impact on\nclassification, but did not afford multilayer concept analysis across an entire\nnetwork architecture. Quantitative and qualitative empirical results show the\neffectiveness of VCCs in the domain of image classification. Also, we leverage\nVCCs for the application of failure mode debugging to reveal where mistakes\narise in deep networks.", "keywords": [], "authors_list": ["Matthew Kowal", "Richard P. Wildes", "Kosta Derpanis"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f406"}, "filepath": "data/2311.11863.png", "tags": [], "_media_type": "image", "_rand": 0.9990477467573525, "arXiv_link": "https://arxiv.org/abs/2311.11863", "other_link": "", "title": "GP-NeRF: Generalized Perception NeRF for Context-Aware 3D Scene Understanding", "abstract": "Applying NeRF to downstream perception tasks for scene understanding and\nrepresentation is becoming increasingly popular. Most existing methods treat\nsemantic prediction as an additional rendering task, \\textit{i.e.}, the \"label\nrendering\" task, to build semantic NeRFs. However, by rendering\nsemantic/instance labels per pixel without considering the contextual\ninformation of the rendered image, these methods usually suffer from unclear\nboundary segmentation and abnormal segmentation of pixels within an object. To\nsolve this problem, we propose Generalized Perception NeRF (GP-NeRF), a novel\npipeline that makes the widely used segmentation model and NeRF work compatibly\nunder a unified framework, for facilitating context-aware 3D scene perception.\nTo accomplish this goal, we introduce transformers to aggregate radiance as\nwell as semantic embedding fields jointly for novel views and facilitate the\njoint volumetric rendering of both fields. 
In addition, we propose two\nself-distillation mechanisms, i.e., the Semantic Distill Loss and the\nDepth-Guided Semantic Distill Loss, to enhance the discrimination and quality\nof the semantic field and the maintenance of geometric consistency. In\nevaluation, we conduct experimental comparisons under two perception tasks\n(\\textit{i.e.} semantic and instance segmentation) using both synthetic and\nreal-world datasets. Notably, our method outperforms SOTA approaches by 6.94\\%,\n11.76\\%, and 8.47\\% on generalized semantic segmentation, finetuning semantic\nsegmentation, and instance segmentation, respectively.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Hao Li", "Dingwen Zhang", "Yalun Dai", "Nian Liu", "Lechao Cheng", "Li Jingfeng", "Jingdong Wang", "Junwei Han"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f407"}, "filepath": "data/2311.13793.png", "tags": [], "_media_type": "image", "_rand": 0.9992937043640924, "arXiv_link": "https://arxiv.org/abs/2311.13793", "other_link": "", "title": "Evidential Active Recognition: Intelligent and Prudent Open-World Embodied Perception", "abstract": "Active recognition enables robots to intelligently explore novel\nobservations, thereby acquiring more information while circumventing undesired\nviewing conditions. Recent approaches favor learning policies from simulated or\ncollected data, wherein appropriate actions are more frequently selected when\nthe recognition is accurate. However, most recognition modules are developed\nunder the closed-world assumption, which makes them ill-equipped to handle\nunexpected inputs, such as the absence of the target object in the current\nobservation. To address this issue, we propose treating active recognition as a\nsequential evidence-gathering process, providing by-step uncertainty\nquantification and reliable prediction under the evidence combination theory.\nAdditionally, the reward function developed in this paper effectively\ncharacterizes the merit of actions when operating in open-world environments.\nTo evaluate the performance, we collect a dataset from an indoor simulator,\nencompassing various recognition challenges such as distance, occlusion levels,\nand visibility. Through a series of experiments on recognition and robustness\nanalysis, we demonstrate the necessity of introducing uncertainties to active\nrecognition and the superior performance of the proposed method.", "keywords": [], "authors_list": ["Lei Fan", "Mingfu Liang", "Yunxuan Li", "Gang Hua", "Ying Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f408"}, "filepath": "data/2402.16594.png", "tags": [], "_media_type": "image", "_rand": 0.9999982603580785, "arXiv_link": "https://arxiv.org/abs/2402.16594", "other_link": "", "title": "CURSOR: Scalable Mixed-Order Hypergraph Matching with CUR Decomposition", "abstract": "To achieve greater accuracy, hypergraph matching algorithms require\nexponential increases in computational resources. Recent kd-tree-based\napproximate nearest neighbor (ANN) methods, despite the sparsity of their\ncompatibility tensor, still require exhaustive calculations for large-scale\ngraph matching. 
This work utilizes CUR tensor decomposition and introduces a\nnovel cascaded second and third-order hypergraph matching framework (CURSOR)\nfor efficient hypergraph matching. A CUR-based second-order graph matching\nalgorithm is used to provide a rough match, and then the core of CURSOR, a\nfiber-CUR-based tensor generation method, directly calculates entries of the\ncompatibility tensor by leveraging the initial second-order match result. This\nsignificantly decreases the time complexity and tensor density. A probability\nrelaxation labeling (PRL)-based matching algorithm, especially suitable for\nsparse tensors, is developed. Experiment results on large-scale synthetic\ndatasets and widely-adopted benchmark sets demonstrate the superiority of\nCURSOR over existing methods. The tensor generation method in CURSOR can be\nintegrated seamlessly into existing hypergraph matching methods to improve\ntheir performance and lower their computational costs.", "keywords": [], "authors_list": ["Qixuan Zheng", "Ming Zhang", "Hong Yan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f409"}, "filepath": "data/2308.14740.png", "tags": [], "_media_type": "image", "_rand": 0.9996811180239654, "arXiv_link": "https://arxiv.org/abs/2308.14740", "other_link": "", "title": "Total Selfie: Generating Full-Body Selfies", "abstract": "We present a method to generate full-body selfies from photographs originally\ntaken at arms length. Because self-captured photos are typically taken close\nup, they have limited field of view and exaggerated perspective that distorts\nfacial shapes. We instead seek to generate the photo some one else would take\nof you from a few feet away. Our approach takes as input four selfies of your\nface and body, a background image, and generates a full-body selfie in a\ndesired target pose. We introduce a novel diffusion-based approach to combine\nall of this information into high-quality, well-composed photos of you with the\ndesired pose and background.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Bowei Chen", "Brian Curless", "Ira Kemelmacher-Shlizerman", "Steve Seitz"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f40a"}, "filepath": "data/2312.08071v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994307453320884, "arXiv_link": "https://arxiv.org/abs/2312.08071v1", "other_link": "", "title": "Novel View Synthesis with View-Dependent Effects from a Single Image", "abstract": "In this paper, we firstly consider view-dependent effects into single\nimage-based novel view synthesis (NVS) problems. For this, we propose to\nexploit the camera motion priors in NVS to model view-dependent appearance or\neffects (VDE) as the negative disparity in the scene. By recognizing\nspecularities \"follow\" the camera motion, we infuse VDEs into the input images\nby aggregating input pixel colors along the negative depth region of the\nepipolar lines. Also, we propose a `relaxed volumetric rendering' approximation\nthat allows computing the densities in a single pass, improving efficiency for\nNVS from single images. 
Our method can learn single-image NVS from image\nsequences only, which is a completely self-supervised learning method, for the\nfirst time requiring neither depth nor camera pose annotations. We present\nextensive experiment results and show that our proposed method can learn NVS\nwith VDEs, outperforming the SOTA single-view NVS methods on the RealEstate10k\nand MannequinChallenge datasets.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Juan Luis Gonzalez Bello", "Munchurl Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f40b"}, "filepath": "data/2401.10005v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992359575738605, "arXiv_link": "https://arxiv.org/html/2401.10005v1", "other_link": "", "title": "TRINS: Towards Multimodal Language Models That Can Read", "abstract": "The increasing demand for intelligent systems capable of interpreting and\nreasoning about visual content requires the development of Large Multi-Modal\nModels (LMMs) that are not only accurate but also have explicit reasoning\ncapabilities. This paper presents a novel approach to imbue an LMM with the\nability to conduct explicit reasoning based on visual content and textual\ninstructions. We introduce a system that can ask a question to acquire\nnecessary knowledge, thereby enhancing the robustness and explicability of the\nreasoning process. Our method comprises the development of a novel dataset\ngenerated by a Large Language Model (LLM), designed to promote chain-of-thought\nreasoning combined with a question-asking mechanism. We designed an LMM, which\nhas high capabilities on region awareness to address the intricate requirements\nof image-text alignment. The model undergoes a three-stage training phase,\nstarting with large-scale image-text alignment using a large-scale datasets,\nfollowed by instruction tuning, and fine-tuning with a focus on\nchain-of-thought reasoning. The results demonstrate a stride toward a more\nrobust, accurate, and interpretable LMM, capable of reasoning explicitly and\nseeking information proactively when confronted with ambiguous visual input.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Ruiyi Zhang", "Yanzhe Zhang", "Jian Chen", "Yufan Zhou", "Jiuxiang Gu", "Changyou Chen", "Tong Sun"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f40c"}, "filepath": "data/2312.16837.png", "tags": [], "_media_type": "image", "_rand": 0.9992783193281031, "arXiv_link": "https://arxiv.org/abs/2312.16837", "other_link": "https://younglbw.github.io/DiffusionGAN3D-homepage/.", "title": "DiffusionGAN3D: Boosting Text-guided 3D Generation and Domain Adaptation by Combining 3D GANs and Diffusion Priors", "abstract": "Text-guided domain adaptation and generation of 3D-aware portraits find many\napplications in various fields. However, due to the lack of training data and\nthe challenges in handling the high variety of geometry and appearance, the\nexisting methods for these tasks suffer from issues like inflexibility,\ninstability, and low fidelity. 
In this paper, we propose a novel framework\nDiffusionGAN3D, which boosts text-guided 3D domain adaptation and generation by\ncombining 3D GANs and diffusion priors. Specifically, we integrate the\npre-trained 3D generative models (e.g., EG3D) and text-to-image diffusion\nmodels. The former provides a strong foundation for stable and high-quality\navatar generation from text. And the diffusion models in turn offer powerful\npriors and guide the 3D generator finetuning with informative direction to\nachieve flexible and efficient text-guided domain adaptation. To enhance the\ndiversity in domain adaptation and the generation capability in text-to-avatar,\nwe introduce the relative distance loss and case-specific learnable triplane\nrespectively. Besides, we design a progressive texture refinement module to\nimprove the texture quality for both tasks above. Extensive experiments\ndemonstrate that the proposed framework achieves excellent results in both\ndomain adaptation and text-to-avatar tasks, outperforming existing methods in\nterms of generation quality and efficiency. The project homepage is at\nhttps://younglbw.github.io/DiffusionGAN3D-homepage/.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Biwen Lei", "Kai Yu", "Mengyang Feng", "Miaomiao Cui", "Xuansong Xie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f40d"}, "filepath": "data/2402.17065.png", "tags": [], "_media_type": "image", "_rand": 0.9996969092191039, "arXiv_link": "https://arxiv.org/abs/2402.17065", "other_link": "https://github.com/khorrams/utlo.", "title": "Taming the Tail in Class-Conditional GANs: Knowledge Sharing via Unconditional Training at Lower Resolutions", "abstract": "Despite the extensive research on training generative adversarial networks\n(GANs) with limited training data, learning to generate images from long-tailed\ntraining distributions remains fairly unexplored. In the presence of imbalanced\nmulti-class training data, GANs tend to favor classes with more samples,\nleading to the generation of low-quality and less diverse samples in tail\nclasses. In this study, we aim to improve the training of class-conditional\nGANs with long-tailed data. We propose a straightforward yet effective method\nfor knowledge sharing, allowing tail classes to borrow from the rich\ninformation from classes with more abundant training data. More concretely, we\npropose modifications to existing class-conditional GAN architectures to ensure\nthat the lower-resolution layers of the generator are trained entirely\nunconditionally while reserving class-conditional generation for the\nhigher-resolution layers. Experiments on several long-tail benchmarks and GAN\narchitectures demonstrate a significant improvement over existing methods in\nboth the diversity and fidelity of the generated images. 
The code is available\nat https://github.com/khorrams/utlo.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Saeed Khorram", "Mingqi Jiang", "Mohamad Shahbazi", "Mohamad Hosein Danesh", "Li Fuxin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f40e"}, "filepath": "data/2310.03827.png", "tags": [], "_media_type": "image", "_rand": 0.9995431929644096, "arXiv_link": "https://arxiv.org/abs/2310.03827", "other_link": "", "title": "AVFF: Audio-Visual Feature Fusion for Video Deepfake Detection", "abstract": "Deepfakes are AI-generated media in which an image or video has been\ndigitally modified. The advancements made in deepfake technology have led to\nprivacy and security issues. Most deepfake detection techniques rely on the\ndetection of a single modality. Existing methods for audio-visual detection do\nnot always surpass that of the analysis based on single modalities. Therefore,\nthis paper proposes an audio-visual-based method for deepfake detection, which\nintegrates fine-grained deepfake identification with binary classification. We\ncategorize the samples into four types by combining labels specific to each\nsingle modality. This method enhances the detection under intra-domain and\ncross-domain testing.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Trevine Oorloff", "Surya Koppisetti", "Nicolo Bonettini", "Divyaraj Solanki", "Ben Colman", "Yaser Yacoob", "Ali Shahriyari", "Gaurav Bharaj"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f40f"}, "filepath": "data/2403.07692.png", "tags": [], "_media_type": "image", "_rand": 0.9996474875972776, "arXiv_link": "https://arxiv.org/abs/2403.07692", "other_link": "", "title": "Masked AutoDecoder is Effective Multi-Task Vision Generalist", "abstract": "Inspired by the success of general-purpose models in NLP, recent studies\nattempt to unify different vision tasks in the same sequence format and employ\nautoregressive Transformers for sequence prediction. They apply uni-directional\nattention to capture sequential dependencies and generate task sequences\nrecursively. However, such autoregressive Transformers may not fit vision tasks\nwell, as vision task sequences usually lack the sequential dependencies\ntypically observed in natural languages. In this work, we design Masked\nAutoDecoder~(MAD), an effective multi-task vision generalist. MAD consists of\ntwo core designs. First, we develop a parallel decoding framework that\nintroduces bi-directional attention to capture contextual dependencies\ncomprehensively and decode vision task sequences in parallel. Second, we design\na masked sequence modeling approach that learns rich task contexts by masking\nand reconstructing task sequences. In this way, MAD handles all the tasks by a\nsingle network branch and a simple cross-entropy loss with minimal\ntask-specific designs. Extensive experiments demonstrate the great potential of\nMAD as a new paradigm for unifying various vision tasks. 
MAD achieves superior\nperformance and inference efficiency compared to autoregressive counterparts\nwhile obtaining competitive accuracy with task-specific models. Code will be\nreleased.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Han Qiu", "Jiaxing Huang", "Peng Gao", "Lewei Lu", "Xiaoqin Zhang", "Shijian Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f410"}, "filepath": "data/2311.12588.png", "tags": [], "_media_type": "image", "_rand": 0.9994797934177122, "arXiv_link": "https://arxiv.org/abs/2311.12588", "other_link": "", "title": "HiPose: Hierarchical Binary Surface Encoding and Correspondence Pruning for RGB-D 6DoF Object Pose Estimation", "abstract": "In this work, we present a novel dense-correspondence method for 6DoF object\npose estimation from a single RGB-D image. While many existing data-driven\nmethods achieve impressive performance, they tend to be time-consuming due to\ntheir reliance on rendering-based refinement approaches. To circumvent this\nlimitation, we present HiPose, which establishes 3D-3D correspondences in a\ncoarse-to-fine manner with a hierarchical binary surface encoding. Unlike\nprevious dense-correspondence methods, we estimate the correspondence surface\nby employing point-to-surface matching and iteratively constricting the surface\nuntil it becomes a correspondence point while gradually removing outliers.\nExtensive experiments on public benchmarks LM-O, YCB-V, and T-Less demonstrate\nthat our method surpasses all refinement-free methods and is even on par with\nexpensive refinement-based approaches. Crucially, our approach is\ncomputationally efficient and enables real-time critical applications with high\naccuracy requirements.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yongliang Lin", "Yongzhi Su", "Praveen Nathan", "Sandeep Inuganti", "Yan Di", "Martin Sundermeyer", "Fabian Manhardt", "Didier Stricker", "Jason Rambach", "Yu Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f411"}, "filepath": "data/2312.02190.png", "tags": [], "_media_type": "image", "_rand": 0.9992754370251525, "arXiv_link": "https://arxiv.org/abs/2312.02190", "other_link": "https://diffusionhandles.github.io/", "title": "Diffusion Handles: Enabling 3D Edits for Diffusion Models by Lifting Activations to 3D", "abstract": "Diffusion Handles is a novel approach to enabling 3D object edits on\ndiffusion images. We accomplish these edits using existing pre-trained\ndiffusion models, and 2D image depth estimation, without any fine-tuning or 3D\nobject retrieval. The edited results remain plausible, photo-real, and preserve\nobject identity. Diffusion Handles address a critically missing facet of\ngenerative image based creative design, and significantly advance the\nstate-of-the-art in generative image editing. Our key insight is to lift\ndiffusion activations for an object to 3D using a proxy depth, 3D-transform the\ndepth and associated activations, and project them back to image space. The\ndiffusion process applied to the manipulated activations with identity control,\nproduces plausible edited images showing complex 3D occlusion and lighting\neffects. 
We evaluate Diffusion Handles: quantitatively, on a large synthetic\ndata benchmark; and qualitatively by a user study, showing our output to be\nmore plausible, and better than prior art at both, 3D editing and identity\ncontrol. Project Webpage: https://diffusionhandles.github.io/", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Karran Pandey", "Paul Guerrero", "Matheus Gadelha", "Yannick Hold-Geoffroy", "Karan Singh", "Niloy J. Mitra"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f412"}, "filepath": "data/2312.12730.png", "tags": [], "_media_type": "image", "_rand": 0.9996106563719683, "arXiv_link": "https://arxiv.org/abs/2312.12730", "other_link": "", "title": "A Closer Look at the Few-Shot Adaptation of Large Vision-Language Models", "abstract": "Efficient transfer learning (ETL) is receiving increasing attention to adapt\nlarge pre-trained language-vision models on downstream tasks with a few labeled\nsamples. While significant progress has been made, we reveal that\nstate-of-the-art ETL approaches exhibit strong performance only in\nnarrowly-defined experimental setups, and with a careful adjustment of\nhyperparameters based on a large corpus of labeled samples. In particular, we\nmake two interesting, and surprising empirical observations. First, to\noutperform a simple Linear Probing baseline, these methods require to optimize\ntheir hyper-parameters on each target task. And second, they typically\nunderperform -- sometimes dramatically -- standard zero-shot predictions in the\npresence of distributional drifts. Motivated by the unrealistic assumptions\nmade in the existing literature, i.e., access to a large validation set and\ncase-specific grid-search for optimal hyperparameters, we propose a novel\napproach that meets the requirements of real-world scenarios. More concretely,\nwe introduce a CLass-Adaptive linear Probe (CLAP) objective, whose balancing\nterm is optimized via an adaptation of the general Augmented Lagrangian method\ntailored to this context. We comprehensively evaluate CLAP on a broad span of\ndatasets and scenarios, demonstrating that it consistently outperforms SoTA\napproaches, while yet being a much more efficient alternative.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Julio Silva-Rodr\u00edguez", "Sina Hajimiri", "Ismail Ben Ayed", "Jose Dolz"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f413"}, "filepath": "data/2403.06846.png", "tags": [], "_media_type": "image", "_rand": 0.9990753068597129, "arXiv_link": "https://arxiv.org/abs/2403.06846", "other_link": "", "title": "DiaLoc: An Iterative Approach to Embodied Dialog Localization", "abstract": "Multimodal learning has advanced the performance for many vision-language\ntasks. However, most existing works in embodied dialog research focus on\nnavigation and leave the localization task understudied. 
The few existing\ndialog-based localization approaches assume the availability of entire dialog\nprior to localization, which is impractical for deployed dialog-based\nlocalization. In this paper, we propose DiaLoc, a new dialog-based localization\nframework which aligns with real human operator behavior. Specifically, we\nproduce an iterative refinement of location predictions which can visualize\ncurrent pose beliefs after each dialog turn. DiaLoc effectively utilizes the\nmultimodal data for multi-shot localization, where a fusion encoder fuses\nvision and dialog information iteratively. We achieve state-of-the-art results\non the embodied dialog-based localization task, in single-shot (+7.08% in\nAcc5@valUnseen) and multi-shot settings (+10.85% in Acc5@valUnseen). DiaLoc\nnarrows the gap between simulation and real-world applications, opening doors\nfor future research on collaborative localization and navigation.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models"], "authors_list": ["Chao Zhang", "Mohan Li", "Ignas Budvytis", "Stephan Liwicki"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f414"}, "filepath": "data/2311.00618.png", "tags": [], "_media_type": "image", "_rand": 0.9992931897150155, "arXiv_link": "https://arxiv.org/abs/2311.00618", "other_link": "", "title": "De-Diffusion Makes Text a Strong Cross-Modal Interface", "abstract": "We demonstrate text as a strong cross-modal interface. Rather than relying on\ndeep embeddings to connect image and language as the interface representation,\nour approach represents an image as text, from which we enjoy the\ninterpretability and flexibility inherent to natural language. We employ an\nautoencoder that uses a pre-trained text-to-image diffusion model for decoding.\nThe encoder is trained to transform an input image into text, which is then fed\ninto the fixed text-to-image diffusion decoder to reconstruct the original\ninput -- a process we term De-Diffusion. Experiments validate both the\nprecision and comprehensiveness of De-Diffusion text representing images, such\nthat it can be readily ingested by off-the-shelf text-to-image tools and LLMs\nfor diverse multi-modal tasks. For example, a single De-Diffusion model can\ngeneralize to provide transferable prompts for different text-to-image tools,\nand also achieves a new state of the art on open-ended vision-language tasks by\nsimply prompting large language models with few-shot examples.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Deep learning architectures and techniques"], "authors_list": ["Chen Wei", "Chenxi Liu", "Siyuan Qiao", "Zhishuai Zhang", "Alan L. Yuille", "Jiahui Yu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f415"}, "filepath": "data/2312.03596.png", "tags": [], "_media_type": "image", "_rand": 0.9991726413565477, "arXiv_link": "https://arxiv.org/abs/2312.03596", "other_link": "https://exitudio.github.io/MMM-page.", "title": "MMM: Generative Masked Motion Model", "abstract": "Recent advances in text-to-motion generation using diffusion and\nautoregressive models have shown promising results.
However, these models often\nsuffer from a trade-off between real-time performance, high fidelity, and\nmotion editability. To address this gap, we introduce MMM, a novel yet simple\nmotion generation paradigm based on Masked Motion Model. MMM consists of two\nkey components: (1) a motion tokenizer that transforms 3D human motion into a\nsequence of discrete tokens in latent space, and (2) a conditional masked\nmotion transformer that learns to predict randomly masked motion tokens,\nconditioned on the pre-computed text tokens. By attending to motion and text\ntokens in all directions, MMM explicitly captures inherent dependency among\nmotion tokens and semantic mapping between motion and text tokens. During\ninference, this allows parallel and iterative decoding of multiple motion\ntokens that are highly consistent with fine-grained text descriptions,\ntherefore simultaneously achieving high-fidelity and high-speed motion\ngeneration. In addition, MMM has innate motion editability. By simply placing\nmask tokens in the place that needs editing, MMM automatically fills the gaps\nwhile guaranteeing smooth transitions between editing and non-editing parts.\nExtensive experiments on the HumanML3D and KIT-ML datasets demonstrate that MMM\nsurpasses current leading methods in generating high-quality motion (evidenced\nby superior FID scores of 0.08 and 0.429), while offering advanced editing\nfeatures such as body-part modification, motion in-betweening, and the\nsynthesis of long motion sequences. In addition, MMM is two orders of magnitude\nfaster on a single mid-range GPU than editable motion diffusion models. Our\nproject page is available at \\url{https://exitudio.github.io/MMM-page}.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models", "Efficient and scalable vision"], "authors_list": ["Ekkasit Pinyoanuntapong", "Pu Wang", "Minwoo Lee", "Chen Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f416"}, "filepath": "data/2403.11529.png", "tags": [], "_media_type": "image", "_rand": 0.9998597204421699, "arXiv_link": "https://arxiv.org/abs/2403.11529", "other_link": "https://github.com/zht8506/QMVOS.", "title": "RMem: Restricted Memory Banks Improve Video Object Segmentation", "abstract": "Storing intermediate frame segmentations as memory for long-range context\nmodeling, spatial-temporal memory-based methods have recently showcased\nimpressive results in semi-supervised video object segmentation (SVOS).\nHowever, these methods face two key limitations: 1) relying on non-local\npixel-level matching to read memory, resulting in noisy retrieved features for\nsegmentation; 2) segmenting each object independently without interaction.\nThese shortcomings make the memory-based methods struggle in similar object and\nmulti-object segmentation. To address these issues, we propose a query\nmodulation method, termed QMVOS. This method summarizes object features into\ndynamic queries and then treats them as dynamic filters for mask prediction,\nthereby providing high-level descriptions and object-level perception for the\nmodel. Efficient and effective multi-object interactions are realized through\ninter-query attention. 
Extensive experiments demonstrate that our method can\nbring significant improvements to the memory-based SVOS method and achieve\ncompetitive performance on standard SVOS benchmarks. The code is available at\nhttps://github.com/zht8506/QMVOS.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Junbao Zhou", "Ziqi Pang", "Yu-Xiong Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f417"}, "filepath": "data/2308.13888.png", "tags": [], "_media_type": "image", "_rand": 0.9992681236647404, "arXiv_link": "https://arxiv.org/abs/2308.13888", "other_link": "", "title": "Neural Implicit Morphing of Face Images", "abstract": "Face morphing is a problem in computer graphics with numerous artistic and\nforensic applications. It is challenging due to variations in pose, lighting,\ngender, and ethnicity. This task consists of a warping for feature alignment\nand a blending for a seamless transition between the warped images. We propose\nto leverage coord-based neural networks to represent such warpings and\nblendings of face images. During training, we exploit the smoothness and\nflexibility of such networks by combining energy functionals employed in\nclassical approaches without discretizations. Additionally, our method is\ntime-dependent, allowing a continuous warping/blending of the images. During\nmorphing inference, we need both direct and inverse transformations of the\ntime-dependent warping. The first (second) is responsible for warping the\ntarget (source) image into the source (target) image. Our neural warping stores\nthose maps in a single network dismissing the need for inverting them. The\nresults of our experiments indicate that our method is competitive with both\nclassical and generative models under the lens of image quality and\nface-morphing detectors. Aesthetically, the resulting images present a seamless\nblending of diverse faces not yet usual in the literature.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Guilherme Schardong", "Tiago Novello", "Hallison Paz", "Iurii Medvedev", "Vin\u00edcius Silva", "Luiz Velho", "Nuno Gon\u00e7alves"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f418"}, "filepath": "data/2312.17172.png", "tags": [], "_media_type": "image", "_rand": 0.9998149984309662, "arXiv_link": "https://arxiv.org/abs/2312.17172", "other_link": "", "title": "Unified-IO 2: Scaling Autoregressive Multimodal Models with Vision, Language, Audio, and Action", "abstract": "We present Unified-IO 2, the first autoregressive multimodal model that is\ncapable of understanding and generating image, text, audio, and action. To\nunify different modalities, we tokenize inputs and outputs -- images, text,\naudio, action, bounding boxes, etc., into a shared semantic space and then\nprocess them with a single encoder-decoder transformer model. Since training\nwith such diverse modalities is challenging, we propose various architectural\nimprovements to stabilize model training. We train our model from scratch on a\nlarge multimodal pre-training corpus from diverse sources with a multimodal\nmixture of denoisers objective. 
To learn an expansive set of skills, such as\nfollowing multimodal instructions, we construct and finetune on an ensemble of\n120 datasets with prompts and augmentations. With a single unified model,\nUnified-IO 2 achieves state-of-the-art performance on the GRIT benchmark and\nstrong results in more than 35 benchmarks, including image generation and\nunderstanding, natural language understanding, video and audio understanding,\nand robotic manipulation. We release all our models to the research community.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Image and video generation and manipulation"], "authors_list": ["Jiasen Lu", "Christopher Clark", "Sangho Lee", "Zichen Zhang", "Savya Khosla", "Ryan Marten", "Derek Hoiem", "Aniruddha Kembhavi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f419"}, "filepath": "data/2404.01491.png", "tags": [], "_media_type": "image", "_rand": 0.999364723820393, "arXiv_link": "https://arxiv.org/abs/2404.01491", "other_link": "", "title": "SUGAR: Pre-training 3D Visual Representation for Robotics", "abstract": "Learning generalizable visual representations from Internet data has yielded\npromising results for robotics. Yet, prevailing approaches focus on\npre-training 2D representations, being sub-optimal to deal with occlusions and\naccurately localize objects in complex 3D scenes. Meanwhile, 3D representation\nlearning has been limited to single-object understanding. To address these\nlimitations, we introduce a novel 3D pre-training framework for robotics named\nSUGAR that captures semantic, geometric and affordance properties of objects\nthrough 3D point clouds. We underscore the importance of cluttered scenes in 3D\nrepresentation learning, and automatically construct a multi-object dataset\nbenefiting from cost-free supervision in simulation. SUGAR employs a versatile\ntransformer-based model to jointly address five pre-training tasks, namely\ncross-modal knowledge distillation for semantic learning, masked point modeling\nto understand geometry structures, grasping pose synthesis for object\naffordance, 3D instance segmentation and referring expression grounding to\nanalyze cluttered scenes. We evaluate our learned representation on three\nrobotic-related tasks, namely, zero-shot 3D object recognition, referring\nexpression grounding, and language-driven robotic manipulation. 
Experimental\nresults show that SUGAR's 3D representation outperforms state-of-the-art 2D and\n3D representations.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Shizhe Chen", "Ricardo Garcia Pinel", "Ivan Laptev", "Cordelia Schmid"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f41a"}, "filepath": "data/2404.02788.png", "tags": [], "_media_type": "image", "_rand": 0.9998794685900354, "arXiv_link": "https://arxiv.org/abs/2404.02788", "other_link": "https://xiangyueliu.github.io/GenN2N/", "title": "GenN2N: Generative NeRF2NeRF Translation", "abstract": "We present GenN2N, a unified NeRF-to-NeRF translation framework for various\nNeRF translation tasks such as text-driven NeRF editing, colorization,\nsuper-resolution, inpainting, etc. Unlike previous methods designed for\nindividual translation tasks with task-specific schemes, GenN2N achieves all\nthese NeRF editing tasks by employing a plug-and-play image-to-image translator\nto perform editing in the 2D domain and lifting 2D edits into the 3D NeRF\nspace. Since the 3D consistency of 2D edits may not be assured, we propose to\nmodel the distribution of the underlying 3D edits through a generative model\nthat can cover all possible edited NeRFs. To model the distribution of 3D\nedited NeRFs from 2D edited images, we carefully design a VAE-GAN that encodes\nimages while decoding NeRFs. The latent space is trained to align with a\nGaussian distribution and the NeRFs are supervised through an adversarial loss\non its renderings. To ensure the latent code does not depend on 2D viewpoints\nbut truly reflects the 3D edits, we also regularize the latent code through a\ncontrastive learning scheme. Extensive experiments on various editing tasks\nshow GenN2N, as a universal framework, performs as well or better than\ntask-specific specialists while possessing flexible generative power. More\nresults on our project page: https://xiangyueliu.github.io/GenN2N/", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xiangyue Liu", "Han Xue", "Kunming Luo", "Ping Tan", "Li Yi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f41b"}, "filepath": "data/2312.14985.png", "tags": [], "_media_type": "image", "_rand": 0.9992124664293262, "arXiv_link": "https://arxiv.org/abs/2312.14985", "other_link": "https://github.com/NannanLi999/UniHuman.", "title": "UniHuman: A Unified Model For Editing Human Images in the Wild", "abstract": "Human image editing includes tasks like changing a person's pose, their\nclothing, or editing the image according to a text prompt. However, prior work\noften tackles these tasks separately, overlooking the benefit of mutual\nreinforcement from learning them jointly. In this paper, we propose UniHuman, a\nunified model that addresses multiple facets of human image editing in\nreal-world settings. To enhance the model's generation quality and\ngeneralization capacity, we leverage guidance from human visual encoders and\nintroduce a lightweight pose-warping module that can exploit different pose\nrepresentations, accommodating unseen textures and patterns. 
Furthermore, to\nbridge the disparity between existing human editing benchmarks and real-world\ndata, we curated 400K high-quality human image-text pairs for training and\ncollected 2K human images for out-of-domain testing, both encompassing diverse\nclothing styles, backgrounds, and age groups. Experiments on both in-domain and\nout-of-domain test sets demonstrate that UniHuman outperforms task-specific\nmodels by a significant margin. In user studies, UniHuman is preferred by\nusers in an average of 77% of cases. Our project is available at\nhttps://github.com/NannanLi999/UniHuman.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Nannan Li", "Qing Liu", "Krishna Kumar Singh", "Yilin Wang", "Jianming Zhang", "Bryan A. Plummer", "Zhe Lin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f41c"}, "filepath": "data/2402.19479.png", "tags": [], "_media_type": "image", "_rand": 0.9994698511860733, "arXiv_link": "https://arxiv.org/abs/2402.19479", "other_link": "", "title": "Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers", "abstract": "The quality of the data and annotation upper-bounds the quality of a\ndownstream model. While there exist large text corpora and image-text pairs,\nhigh-quality video-text data is much harder to collect. First of all, manual\nlabeling is more time-consuming, as it requires an annotator to watch an entire\nvideo. Second, videos have a temporal dimension, consisting of several scenes\nstacked together, and showing multiple actions. Accordingly, to establish a\nvideo dataset with high-quality captions, we propose an automatic approach\nleveraging multimodal inputs, such as textual video description, subtitles, and\nindividual video frames. Specifically, we curate 3.8M high-resolution videos\nfrom the publicly available HD-VILA-100M dataset. We then split them into\nsemantically consistent video clips, and apply multiple cross-modality teacher\nmodels to obtain captions for each video. Next, we finetune a retrieval model\non a small subset where the best caption of each video is manually selected and\nthen employ the model on the whole dataset to select the best caption as the\nannotation. In this way, we get 70M videos paired with high-quality text\ncaptions. We dub the dataset Panda-70M. We show the value of the proposed\ndataset on three downstream tasks: video captioning, video and text retrieval,\nand text-driven video generation.
The models trained on the proposed data score\nsubstantially better on the majority of metrics across all the tasks.", "keywords": ["Multimodal models and vision-language models", "Image and video generation and manipulation"], "authors_list": ["Tsai-Shien Chen", "Aliaksandr Siarohin", "Willi Menapace", "Ekaterina Deyneka", "Hsiang-wei Chao", "Byung Jeon", "Yuwei Fang", "Hsin-Ying Lee", "Jian Ren", "Ming-Hsuan Yang", "Sergey Tulyakov"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f41d"}, "filepath": "data/2405.12978.png", "tags": [], "_media_type": "image", "_rand": 0.9990406909622329, "arXiv_link": "https://arxiv.org/abs/2405.12978", "other_link": "", "title": "Personalized Residuals for Concept-Driven Text-to-Image Generation", "abstract": "We present personalized residuals and localized attention-guided sampling for\nefficient concept-driven generation using text-to-image diffusion models. Our\nmethod first represents concepts by freezing the weights of a pretrained\ntext-conditioned diffusion model and learning low-rank residuals for a small\nsubset of the model's layers. The residual-based approach then directly enables\napplication of our proposed sampling technique, which applies the learned\nresiduals only in areas where the concept is localized via cross-attention and\napplies the original diffusion weights in all other regions. Localized sampling\ntherefore combines the learned identity of the concept with the existing\ngenerative prior of the underlying diffusion model. We show that personalized\nresiduals effectively capture the identity of a concept in ~3 minutes on a\nsingle GPU without the use of regularization images and with fewer parameters\nthan previous models, and localized sampling allows using the original model as\nstrong prior for large parts of the image.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Cusuh Ham", "Matthew Fisher", "James Hays", "Nicholas Kolkin", "Yuchen Liu", "Richard Zhang", "Tobias Hinz"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f41e"}, "filepath": "data/2405.10053.png", "tags": [], "_media_type": "image", "_rand": 0.9994271613689727, "arXiv_link": "https://arxiv.org/abs/2405.10053", "other_link": "", "title": "SHiNe: Semantic Hierarchy Nexus for Open-vocabulary Object Detection", "abstract": "Open-vocabulary object detection (OvOD) has transformed detection into a\nlanguage-guided task, empowering users to freely define their class\nvocabularies of interest during inference. However, our initial investigation\nindicates that existing OvOD detectors exhibit significant variability when\ndealing with vocabularies across various semantic granularities, posing a\nconcern for real-world deployment. To this end, we introduce Semantic Hierarchy\nNexus (SHiNe), a novel classifier that uses semantic knowledge from class\nhierarchies. It runs offline in three steps: i) it retrieves relevant\nsuper-/sub-categories from a hierarchy for each target class; ii) it integrates\nthese categories into hierarchy-aware sentences; iii) it fuses these sentence\nembeddings to generate the nexus classifier vector. 
Our evaluation on various\ndetection benchmarks demonstrates that SHiNe enhances robustness across diverse\nvocabulary granularities, achieving up to +31.9% mAP50 with ground truth\nhierarchies, while retaining improvements using hierarchies generated by large\nlanguage models. Moreover, when applied to open-vocabulary classification on\nImageNet-1k, SHiNe improves the CLIP zero-shot baseline by +2.8% accuracy.\nSHiNe is training-free and can be seamlessly integrated with any off-the-shelf\nOvOD detector, without incurring additional computational overhead during\ninference. The code is open source.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Mingxuan Liu", "Tyler Hayes", "Elisa Ricci", "Gabriela Csurka", "Riccardo Volpi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f41f"}, "filepath": "data/2405.11913.png", "tags": [], "_media_type": "image", "_rand": 0.9998454077606108, "arXiv_link": "http://export.arxiv.org/abs/2405.11913", "other_link": "https://github.com/sizhelee/Diff-BGM.", "title": "Diff-BGM: A Diffusion Model for Video Background Music Generation", "abstract": "When editing a video, a piece of attractive background music is\nindispensable. However, video background music generation tasks face several\nchallenges, for example, the lack of suitable training datasets, and the\ndifficulties in flexibly controlling the music generation process and\nsequentially aligning the video and music. In this work, we first propose a\nhigh-quality music-video dataset BGM909 with detailed annotation and shot\ndetection to provide multi-modal information about the video and music. We then\npresent evaluation metrics to assess music quality, including music diversity\nand alignment between music and video with retrieval precision metrics.\nFinally, we propose the Diff-BGM framework to automatically generate the\nbackground music for a given video, which uses different signals to control\ndifferent aspects of the music during the generation process, i.e., uses\ndynamic video features to control music rhythm and semantic features to control\nthe melody and atmosphere. We propose to align the video and music sequentially\nby introducing a segment-aware cross-attention layer. Experiments verify the\neffectiveness of our proposed method. The code and models are available at\nhttps://github.com/sizhelee/Diff-BGM.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Sizhe Li", "Yiming Qin", "Minghang Zheng", "Xin Jin", "Yang Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f420"}, "filepath": "data/2405.19902.png", "tags": [], "_media_type": "image", "_rand": 0.9997354322147041, "arXiv_link": "https://arxiv.org/abs/2405.19902", "other_link": "", "title": "Learning Discriminative Dynamics with Label Corruption for Noisy Label Detection", "abstract": "Label noise, commonly found in real-world datasets, has a detrimental impact\non a model's generalization. 
To effectively detect incorrectly labeled\ninstances, previous works have mostly relied on distinguishable training\nsignals, such as training loss, as indicators to differentiate between clean\nand noisy labels. However, they have limitations in that the training signals\nincompletely reveal the model's behavior and are not effectively generalized to\nvarious noise types, resulting in limited detection accuracy. In this paper, we\npropose the DynaCor framework that distinguishes incorrectly labeled instances from\ncorrectly labeled ones based on the dynamics of the training signals. To cope\nwith the absence of supervision for clean and noisy labels, DynaCor first\nintroduces a label corruption strategy that augments the original dataset with\nintentionally corrupted labels, enabling indirect simulation of the model's\nbehavior on noisy labels. Then, DynaCor learns to identify clean and noisy\ninstances by inducing two clearly distinguishable clusters from the latent\nrepresentations of training dynamics. Our comprehensive experiments show that\nDynaCor outperforms the state-of-the-art competitors and shows strong\nrobustness to various noise types and noise rates.", "keywords": [], "authors_list": ["Suyeon Kim", "Dongha Lee", "SeongKu Kang", "Sukang Chae", "Sanghwan Jang", "Hwanjo Yu"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f421"}, "filepath": "data/2312.06462.png", "tags": [], "_media_type": "image", "_rand": 0.9997255367977447, "arXiv_link": "https://arxiv.org/abs/2312.06462", "other_link": "https://yannqi.github.io/AVS-COMBO/.", "title": "Cooperation Does Matter: Exploring Multi-Order Bilateral Relations for Audio-Visual Segmentation", "abstract": "Recently, an audio-visual segmentation (AVS) task has been introduced, aiming\nto group pixels with sounding objects within a given video. This task\nnecessitates a first-ever audio-driven pixel-level understanding of the scene,\nposing significant challenges. In this paper, we propose an innovative\naudio-visual transformer framework, termed COMBO, an acronym for COoperation of\nMulti-order Bilateral relatiOns. For the first time, our framework explores\nthree types of bilateral entanglements within AVS: pixel entanglement, modality\nentanglement, and temporal entanglement. Regarding pixel entanglement, we\nemploy a Siam-Encoder Module (SEM) that leverages prior knowledge to generate\nmore precise visual features from the foundational model. For modality\nentanglement, we design a Bilateral-Fusion Module (BFM), enabling COMBO to\nalign corresponding visual and auditory signals bi-directionally. As for\ntemporal entanglement, we introduce an innovative adaptive inter-frame\nconsistency loss according to the inherent temporal rules. Comprehensive\nexperiments and ablation studies on AVSBench-object (84.7 mIoU on S4, 59.2 mIoU\non MS3) and AVSBench-semantic (42.1 mIoU on AVSS) datasets demonstrate that\nCOMBO surpasses previous state-of-the-art methods.
Code and more results will\nbe publicly available at https://yannqi.github.io/AVS-COMBO/.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Qi Yang", "Xing Nie", "Tong Li", "Gaopengfei", "Ying Guo", "Cheng Zhen", "Pengfei Yan", "Shiming Xiang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Sound", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f422"}, "filepath": "data/2404.19110.png", "tags": [], "_media_type": "image", "_rand": 0.999854095073595, "arXiv_link": "https://arxiv.org/abs/2404.19110", "other_link": "", "title": "EMOPortraits: Emotion-enhanced Multimodal One-shot Head Avatars", "abstract": "Head avatars animated by visual signals have gained popularity, particularly\nin cross-driving synthesis where the driver differs from the animated\ncharacter, a challenging but highly practical approach. The recently presented\nMegaPortraits model has demonstrated state-of-the-art results in this domain.\nWe conduct a deep examination and evaluation of this model, with a particular\nfocus on its latent space for facial expression descriptors, and uncover\nseveral limitations in its ability to express intense face motions. To\naddress these limitations, we propose substantial changes in both the training\npipeline and model architecture to introduce our EMOPortraits model, where we:\n Enhance the model's capability to faithfully support intense, asymmetric face\nexpressions, setting a new state-of-the-art result in the emotion transfer\ntask, surpassing previous methods in both metrics and quality.\n Incorporate a speech-driven mode into our model, achieving top-tier performance\nin audio-driven facial animation, making it possible to drive source identity\nthrough diverse modalities, including visual signal, audio, or a blend of both.\n We propose a novel multi-view video dataset featuring a wide range of intense\nand asymmetric facial expressions, filling the gap left by the absence of such data in\nexisting datasets.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Nikita Drobyshev", "Antoni Bigata Casademunt", "Konstantinos Vougioukas", "Zoe Landgraf", "Stavros Petridis", "Maja Pantic"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f423"}, "filepath": "data/2404.08392.png", "tags": [], "_media_type": "image", "_rand": 0.9991805035468493, "arXiv_link": "https://arxiv.org/abs/2404.08392", "other_link": "https://github.com/GustavoVargasHakim/NCTTT.git", "title": "NC-TTT: A Noise Contrastive Approach for Test-Time Training", "abstract": "Despite their exceptional performance in vision tasks, deep learning models\noften struggle when faced with domain shifts during testing. Test-Time Training\n(TTT) methods have recently gained popularity for their ability to enhance the\nrobustness of models through the addition of an auxiliary objective that is\njointly optimized with the main task. Being strictly unsupervised, this\nauxiliary objective is used at test time to adapt the model without any access\nto labels.
In this work, we propose Noise-Contrastive Test-Time Training\n(NC-TTT), a novel unsupervised TTT technique based on the discrimination of\nnoisy feature maps. By learning to classify noisy views of projected feature\nmaps, and then adapting the model accordingly on new domains, classification\nperformance can be recovered by an important margin. Experiments on several\npopular test-time adaptation baselines demonstrate the advantages of our method\ncompared to recent approaches for this task. The code can be found\nat:https://github.com/GustavoVargasHakim/NCTTT.git", "keywords": [], "authors_list": ["David OSOWIECHI", "Gustavo Vargas Hakim", "Mehrdad Noori", "Milad Cheraghalikhani", "Ali Bahri", "Moslem Yazdanpanah", "Ismail Ben Ayed", "Christian Desrosiers"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f424"}, "filepath": "data/2312.11972.png", "tags": [], "_media_type": "image", "_rand": 0.9997567347391342, "arXiv_link": "https://arxiv.org/abs/2312.11972", "other_link": "https://github.com/Dingpx/EAI.", "title": "Forecasting of 3D Whole-body Human Poses with Grasping Objects", "abstract": "Human motion forecasting, with the goal of estimating future human behavior\nover a period of time, is a fundamental task in many real-world applications.\nHowever, existing works typically concentrate on predicting the major joints of\nthe human body without considering the delicate movements of the human hands.\nIn practical applications, hand gesture plays an important role in human\ncommunication with the real world, and expresses the primary intention of human\nbeings. In this work, we are the first to formulate a whole-body human pose\nforecasting task, which jointly predicts the future body and hand activities.\nCorrespondingly, we propose a novel Encoding-Alignment-Interaction (EAI)\nframework that aims to predict both coarse (body joints) and fine-grained\n(gestures) activities collaboratively, enabling expressive and\ncross-facilitated forecasting of 3D whole-body human motions. Specifically, our\nmodel involves two key constituents: cross-context alignment (XCA) and\ncross-context interaction (XCI). Considering the heterogeneous information\nwithin the whole-body, XCA aims to align the latent features of various human\ncomponents, while XCI focuses on effectively capturing the context interaction\namong the human components. We conduct extensive experiments on a\nnewly-introduced large-scale benchmark and achieve state-of-the-art\nperformance. The code is public for research purposes at\nhttps://github.com/Dingpx/EAI.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["yan haitao", "Qiongjie Cui", "Jiexin Xie", "Shijie Guo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f425"}, "filepath": "data/2402.17951.png", "tags": [], "_media_type": "image", "_rand": 0.9998040428592414, "arXiv_link": "https://arxiv.org/abs/2402.17951", "other_link": "", "title": "QN-Mixer: A Quasi-Newton MLP-Mixer Model for Sparse-View CT Reconstruction", "abstract": "Inverse problems span across diverse fields. 
In medical contexts, computed\ntomography (CT) plays a crucial role in reconstructing a patient's internal\nstructure, presenting challenges due to artifacts caused by inherently\nill-posed inverse problems. Previous research advanced image quality via\npost-processing and deep unrolling algorithms but faces challenges, such as\nextended convergence times with ultra-sparse data. Despite enhancements,\nresulting images often show significant artifacts, limiting their effectiveness\nfor real-world diagnostic applications. We aim to explore deep second-order\nunrolling algorithms for solving imaging inverse problems, emphasizing their\nfaster convergence and lower time complexity compared to common first-order\nmethods like gradient descent. In this paper, we introduce QN-Mixer, an\nalgorithm based on the quasi-Newton approach. We use learned parameters through\nthe BFGS algorithm and introduce Incept-Mixer, an efficient neural architecture\nthat serves as a non-local regularization term, capturing long-range\ndependencies within images. To address the computational demands typically\nassociated with quasi-Newton algorithms that require full Hessian matrix\ncomputations, we present a memory-efficient alternative. Our approach\nintelligently downsamples gradient information, significantly reducing\ncomputational requirements while maintaining performance. The approach is\nvalidated through experiments on the sparse-view CT problem, involving various\ndatasets and scanning protocols, and is compared with post-processing and deep\nunrolling state-of-the-art approaches. Our method outperforms existing\napproaches and achieves state-of-the-art performance in terms of SSIM and PSNR,\nall while reducing the number of unrolling iterations required.", "keywords": ["Efficient and scalable vision", "Medical imaging and biological vision"], "authors_list": ["Ishak Ayad", "Nicolas Larue", "Mai K. Nguyen"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f426"}, "filepath": "data/2308.03005.png", "tags": [], "_media_type": "image", "_rand": 0.9991764856656771, "arXiv_link": "http://export.arxiv.org/abs/2308.03005", "other_link": "", "title": "Class Tokens Infusion for Weakly Supervised Semantic Segmentation", "abstract": "This paper proposes a novel transformer-based framework that aims to enhance\nweakly supervised semantic segmentation (WSSS) by generating accurate\nclass-specific object localization maps as pseudo labels. Building upon the\nobservation that the attended regions of the one-class token in the standard\nvision transformer can contribute to a class-agnostic localization map, we\nexplore the potential of the transformer model to capture class-specific\nattention for class-discriminative object localization by learning multiple\nclass tokens. We introduce a Multi-Class Token transformer, which incorporates\nmultiple class tokens to enable class-aware interactions with the patch tokens.\nTo achieve this, we devise a class-aware training strategy that establishes a\none-to-one correspondence between the output class tokens and the ground-truth\nclass labels. Moreover, a Contrastive-Class-Token (CCT) module is proposed to\nenhance the learning of discriminative class tokens, enabling the model to\nbetter capture the unique characteristics and properties of each class. 
As a\nresult, class-discriminative object localization maps can be effectively\ngenerated by leveraging the class-to-patch attentions associated with different\nclass tokens. To further refine these localization maps, we propose the\nutilization of patch-level pairwise affinity derived from the patch-to-patch\ntransformer attention. Furthermore, the proposed framework seamlessly\ncomplements the Class Activation Mapping (CAM) method, resulting in\nsignificantly improved WSSS performance on the PASCAL VOC 2012 and MS COCO 2014\ndatasets. These results underline the importance of the class token for WSSS.", "keywords": [], "authors_list": ["Sung-Hoon Yoon", "Hoyong Kwon", "Hyeonseong Kim", "Kuk-Jin Yoon"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f427"}, "filepath": "data/2303.10891.png", "tags": [], "_media_type": "image", "_rand": 0.9995488524801102, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2303.10891", "other_link": "", "title": "Dual-consistency Model Inversion for Non-exemplar Class Incremental Learning", "abstract": "This paper investigates a new, practical, but challenging problem named\nNon-exemplar Online Class-incremental continual Learning (NO-CL), which aims to\npreserve the discernibility of base classes without buffering data examples and\nefficiently learn novel classes continuously in a single-pass (i.e., online)\ndata stream. The challenges of this task are mainly two-fold: (1) Both base and\nnovel classes suffer from severe catastrophic forgetting as no previous samples\nare available for replay. (2) As the online data can only be observed once,\nthere is no way to fully re-train the whole model, e.g., re-calibrate the\ndecision boundaries via prototype alignment or feature distillation. In this\npaper, we propose a novel Dual-prototype Self-augment and Refinement method\n(DSR) for NO-CL problem, which consists of two strategies: 1) Dual class\nprototypes: vanilla and high-dimensional prototypes are exploited to utilize\nthe pre-trained information and obtain robust quasi-orthogonal representations\nrather than example buffers for both privacy preservation and memory reduction.\n2) Self-augment and refinement: Instead of updating the whole network, we\noptimize high-dimensional prototypes alternatively with the extra projection\nmodule based on self-augment vanilla prototypes, through a bi-level\noptimization problem. Extensive experiments demonstrate the effectiveness and\nsuperiority of the proposed DSR in NO-CL.", "keywords": [], "authors_list": ["Zihuan Qiu", "Yi Xu", "Fanman Meng", "Hongliang Li", "Linfeng Xu", "Qingbo Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f428"}, "filepath": "data/2404.16030.png", "tags": [], "_media_type": "image", "_rand": 0.9993837805953845, "arXiv_link": "https://arxiv.org/abs/2404.16030", "other_link": "https://github.com/facebookresearch/MetaCLIP/tree/main/mode.", "title": "MoDE: CLIP Data Experts via Clustering", "abstract": "The success of contrastive language-image pretraining (CLIP) relies on the\nsupervision from the pairing between images and captions, which tends to be\nnoisy in web-crawled data. 
We present Mixture of Data Experts (MoDE) and learn\na system of CLIP data experts via clustering. Each data expert is trained on\none data cluster, being less sensitive to false-negative noise in other\nclusters. At inference time, we ensemble their outputs by applying weights\ndetermined through the correlation between task metadata and cluster\nconditions. To estimate the correlation precisely, the samples in one cluster\nshould be semantically similar, but the number of data experts should still be\nreasonable for training and inference. As such, we consider the ontology in\nhuman language and propose to use fine-grained cluster centers to represent\neach data expert at a coarse-grained level. Experimental studies show that four\nCLIP data experts on ViT-B/16 outperform the ViT-L/14 by OpenAI CLIP and\nOpenCLIP on zero-shot image classification but with less ($<$35\\%) training\ncost. Meanwhile, MoDE can train all data experts asynchronously and can flexibly\ninclude new data experts. The code is available at\nhttps://github.com/facebookresearch/MetaCLIP/tree/main/mode.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Jiawei Ma", "Po-Yao Huang", "Saining Xie", "Shang-Wen Li", "Luke Zettlemoyer", "Shih-Fu Chang", "Wen-tau Yih", "Hu Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f429"}, "filepath": "data/2403.07359v4.png", "tags": [], "_media_type": "image", "_rand": 0.9990830766368667, "arXiv_link": "https://arxiv.org/abs/2403.07359v4", "other_link": "", "title": "FSC: Few-point Shape Completion", "abstract": "While previous studies have demonstrated successful 3D object shape\ncompletion with a sufficient number of points, they often fail in scenarios\nwhen a few points, e.g. tens of points, are observed. Surprisingly, via entropy\nanalysis, we find that even a few points, e.g. 64 points, could retain\nsubstantial information to help recover the 3D shape of the object. To address\nthe challenge of shape completion with very sparse point clouds, we then\npropose the Few-point Shape Completion (FSC) model, which contains a novel\ndual-branch feature extractor for handling extremely sparse inputs, coupled\nwith an extensive branch for maximal point utilization with a saliency branch\nfor dynamic importance assignment. This model is further bolstered by a\ntwo-stage revision network that refines both the extracted features and the\ndecoder output, enhancing the detail and authenticity of the completed point\ncloud. Our experiments demonstrate the feasibility of recovering 3D shapes from\na few points.
The proposed Few-point Shape Completion (FSC) model outperforms\nprevious methods on both few-point inputs and many-point inputs, and shows good\ngeneralizability to different object categories.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xianzu Wu", "Xianfeng Wu", "Tianyu Luan", "Yajing Bai", "Zhongyuan Lai", "Junsong Yuan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f42a"}, "filepath": "data/2402.02235.png", "tags": [], "_media_type": "image", "_rand": 0.9996600294567409, "arXiv_link": "https://arxiv.org/abs/2402.02235", "other_link": "", "title": "Equivariant Multi-Modality Image Fusion", "abstract": "Image fusion integrates essential information from multiple source images\ninto a single composite, emphasizing the highlighting structure and textures,\nand refining imperfect areas. Existing methods predominantly focus on\npixel-level and semantic visual features for recognition. However, they\ninsufficiently explore the deeper semantic information at a text-level beyond\nvision. Therefore, we introduce a novel fusion paradigm named image Fusion via\nvIsion-Language Model (FILM), for the first time, utilizing explicit textual\ninformation in different source images to guide image fusion. In FILM, input\nimages are firstly processed to generate semantic prompts, which are then fed\ninto ChatGPT to obtain rich textual descriptions. These descriptions are fused\nin the textual domain and guide the extraction of crucial visual features from\nthe source images through cross-attention, resulting in a deeper level of\ncontextual understanding directed by textual semantic information. The final\nfused image is created by vision feature decoder. This paradigm achieves\nsatisfactory results in four image fusion tasks: infrared-visible, medical,\nmulti-exposure, and multi-focus image fusion. We also propose a vision-language\ndataset containing ChatGPT-based paragraph descriptions for the ten image\nfusion datasets in four fusion tasks, facilitating future research in\nvision-language model-based image fusion. Code and dataset will be released.", "keywords": ["Image and video generation and manipulation", "Medical imaging and biological vision"], "authors_list": ["Zixiang Zhao", "Haowen Bai", "Jiangshe Zhang", "Yulun Zhang", "Kai Zhang", "Shuang Xu", "Dongdong Chen", "Radu Timofte", "Luc Van Gool"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f42b"}, "filepath": "data/2312.03442.png", "tags": [], "_media_type": "image", "_rand": 0.9997704901381792, "arXiv_link": "https://arxiv.org/abs/2312.03442", "other_link": "", "title": "High-Quality Facial Geometry and Appearance Capture at Home", "abstract": "Facial geometry and appearance capture have demonstrated tremendous success\nin 3D scanning real humans in studios. Recent works propose to democratize this\ntechnique while keeping the results high quality. However, they are still\ninconvenient for daily usage. In addition, they focus on an easier problem of\nonly capturing facial skin. This paper proposes a novel method for high-quality\nface capture, featuring an easy-to-use system and the capability to model the\ncomplete face with skin, mouth interior, hair, and eyes. 
We reconstruct facial\ngeometry and appearance from a single co-located smartphone flashlight sequence\ncaptured in a dim room where the flashlight is the dominant light source (e.g.\nrooms with curtains or at night). To model the complete face, we propose a\nnovel hybrid representation to effectively model both eyes and other facial\nregions, along with novel techniques to learn it from images. We apply a\ncombined lighting model to compactly represent real illuminations and exploit a\nmorphable face albedo model as a reflectance prior to disentangle diffuse and\nspecular. Experiments show that our method can capture high-quality 3D\nrelightable scans.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Yuxuan Han", "Junfeng Lyu", "Feng Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f42c"}, "filepath": "data/2405.06600.png", "tags": [], "_media_type": "image", "_rand": 0.9994323732248026, "arXiv_link": "https://arxiv.org/abs/2405.06600", "other_link": "", "title": "Multi-Object Tracking in the Dark", "abstract": "Low-light scenes are prevalent in real-world applications (e.g. autonomous\ndriving and surveillance at night). Recently, multi-object tracking in various\npractical use cases has received much attention, but multi-object tracking in\ndark scenes is rarely considered. In this paper, we focus on multi-object\ntracking in dark scenes. To address the lack of datasets, we first build a\nLow-light Multi-Object Tracking (LMOT) dataset. LMOT provides well-aligned\nlow-light video pairs captured by our dual-camera system, and high-quality\nmulti-object tracking annotations for all videos. Then, we propose a low-light\nmulti-object tracking method, termed LTrack. We introduce an adaptive\nlow-pass downsample module to enhance low-frequency components of images\noutside the sensor noises. The degradation suppression learning strategy\nenables the model to learn invariant information under noise disturbance and\nimage quality degradation. These components improve the robustness of\nmulti-object tracking in dark scenes. We conducted a comprehensive analysis of\nour LMOT dataset and proposed LTrack. Experimental results demonstrate the\nsuperiority of the proposed method and its competitiveness in real night\nlow-light scenes. Dataset and Code: https://github.com/ying-fu/LMOT", "keywords": ["Low-level vision"], "authors_list": ["Xinzhe Wang", "Kang Ma", "Qiankun Liu", "Yunhao Zou", "Ying Fu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f42d"}, "filepath": "data/2311.10111.png", "tags": [], "_media_type": "image", "_rand": 0.9990279283040339, "arXiv_link": "https://arxiv.org/abs/2311.10111", "other_link": "https://github.com/Hritikbansal/videocon.", "title": "VideoCon: Robust Video-Language Alignment via Contrast Captions", "abstract": "Despite being (pre)trained on a massive amount of data, state-of-the-art\nvideo-language alignment models are not robust to semantically-plausible\ncontrastive changes in the video captions. Our work addresses this by\nidentifying a broad spectrum of contrast misalignments, such as replacing\nentities, actions, and flipping event order, which alignment models should be\nrobust against.
To this end, we introduce the VideoCon, a video-language\nalignment dataset constructed by a large language model that generates\nplausible contrast video captions and explanations for differences between\noriginal and contrast video captions. Then, a generative video-language model\nis finetuned with VideoCon to assess video-language entailment and generate\nexplanations. Our VideoCon-based alignment model significantly outperforms\ncurrent models. It exhibits a 12-point increase in AUC for the video-language\nalignment task on human-generated contrast captions. Finally, our model sets\nnew state of the art zero-shot performance in temporally-extensive\nvideo-language tasks such as text-to-video retrieval (SSv2-Temporal) and video\nquestion answering (ATP-Hard). Moreover, our model shows superior performance\non novel videos and human-crafted captions and explanations. Our code and data\nare available at https://github.com/Hritikbansal/videocon.", "keywords": [], "authors_list": ["Hritik Bansal", "Yonatan Bitton", "Idan Szpektor", "Kai-Wei Chang", "Aditya Grover"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f42e"}, "filepath": "data/2311.16922.png", "tags": [], "_media_type": "image", "_rand": 0.9993899404451544, "arXiv_link": "https://arxiv.org/abs/2311.16922", "other_link": "", "title": "Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding", "abstract": "Large Vision-Language Models (LVLMs) have advanced considerably, intertwining\nvisual recognition and language understanding to generate content that is not\nonly coherent but also contextually attuned. Despite their success, LVLMs still\nsuffer from the issue of object hallucinations, where models generate plausible\nyet incorrect outputs that include objects that do not exist in the images. To\nmitigate this issue, we introduce Visual Contrastive Decoding (VCD), a simple\nand training-free method that contrasts output distributions derived from\noriginal and distorted visual inputs. The proposed VCD effectively reduces the\nover-reliance on statistical bias and unimodal priors, two essential causes of\nobject hallucinations. This adjustment ensures the generated content is closely\ngrounded to visual inputs, resulting in contextually accurate outputs. Our\nexperiments show that VCD, without either additional training or the usage of\nexternal tools, significantly mitigates the object hallucination issue across\ndifferent LVLM families. 
Beyond mitigating object hallucinations, VCD also\nexcels in general LVLM benchmarks, highlighting its wide-ranging applicability.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Sicong Leng", "Hang Zhang", "Guanzheng Chen", "Xin Li", "Shijian Lu", "Chunyan Miao", "Lidong Bing"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f42f"}, "filepath": "data/2401.10171.png", "tags": [], "_media_type": "image", "_rand": 0.9994773090408869, "arXiv_link": "https://arxiv.org/abs/2401.10171", "other_link": "https://shinobi.aengelhardt.com", "title": "SHINOBI: SHape and Illumination using Neural Object decomposition via BRDF optimization and Inverse rendering from unconstrained Image collections", "abstract": "We present SHINOBI, an end-to-end framework for the reconstruction of shape,\nmaterial, and illumination from object images captured with varying lighting,\npose, and background. Inverse rendering of an object based on unconstrained\nimage collections is a long-standing challenge in computer vision and graphics\nand requires a joint optimization over shape, radiance, and pose. We show that\nan implicit shape representation based on a multi-resolution hash encoding\nenables faster and robust shape reconstruction with joint camera alignment\noptimization that outperforms prior work. Further, to enable the editing of\nillumination and object reflectance (i.e. material) we jointly optimize BRDF\nand illumination together with the object's shape. Our method is class-agnostic\nand works on in-the-wild image collections of objects to produce relightable 3D\nassets for several use cases such as AR/VR, movies, games, etc. Project page:\nhttps://shinobi.aengelhardt.com Video:\nhttps://www.youtube.com/watch?v=iFENQ6AcYd8&feature=youtu.be", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Computational imaging and physics-based vision"], "authors_list": ["Andreas Engelhardt", "Amit Raj", "Mark Boss", "Yunzhi Zhang", "Abhishek Kar", "Yuanzhen Li", "Ricardo Martin-Brualla", "Jonathan T. Barron", "Deqing Sun", "Hendrik Lensch", "Varun Jampani"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f430"}, "filepath": "data/2307.07313.png", "tags": [], "_media_type": "image", "_rand": 0.9991789732237419, "arXiv_link": "https://arxiv.org/abs/2307.07313", "other_link": "https://github.com/JanEGerken/HEAL-SWIN.", "title": "HEAL-SWIN: A Vision Transformer On The Sphere", "abstract": "High-resolution wide-angle fisheye images are becoming more and more\nimportant for robotics applications such as autonomous driving. However, using\nordinary convolutional neural networks or vision transformers on this data is\nproblematic due to projection and distortion losses introduced when projecting\nto a rectangular grid on the plane. 
We introduce the HEAL-SWIN transformer,\nwhich combines the highly uniform Hierarchical Equal Area iso-Latitude\nPixelation (HEALPix) grid used in astrophysics and cosmology with the\nHierarchical Shifted-Window (SWIN) transformer to yield an efficient and\nflexible model capable of training on high-resolution, distortion-free\nspherical data. In HEAL-SWIN, the nested structure of the HEALPix grid is used\nto perform the patching and windowing operations of the SWIN transformer,\nenabling the network to process spherical representations with minimal\ncomputational overhead. We demonstrate the superior performance of our model on\nboth synthetic and real automotive datasets, as well as a selection of other\nimage datasets, for semantic segmentation, depth regression and classification\ntasks. Our code is publicly available at\nhttps://github.com/JanEGerken/HEAL-SWIN.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Oscar Carlsson", "Jan E. Gerken", "Hampus Linander", "Heiner Spiess", "Fredrik Ohlsson", "Christoffer Petersson", "Daniel Persson"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f431"}, "filepath": "data/2311.12075.png", "tags": [], "_media_type": "image", "_rand": 0.9990590056025634, "arXiv_link": "https://arxiv.org/abs/2311.12075", "other_link": "", "title": "BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning", "abstract": "Studying backdoor attacks is valuable for model copyright protection and\nenhancing defenses. While existing backdoor attacks have successfully infected\nmultimodal contrastive learning models such as CLIP, they can be easily\ncountered by specialized backdoor defenses for MCL models. This paper reveals\nthe threats in this practical scenario that backdoor attacks can remain\neffective even after defenses and introduces the BadCLIP attack, which\nis resistant to backdoor detection and model fine-tuning defenses. To achieve\nthis, we draw motivations from the perspective of the Bayesian rule and propose\na dual-embedding guided framework for backdoor attacks. Specifically, we ensure\nthat visual trigger patterns approximate the textual target semantics in the\nembedding space, making it challenging to detect the subtle parameter\nvariations induced by backdoor learning on such natural trigger patterns.\nAdditionally, we optimize the visual trigger patterns to align the poisoned\nsamples with target vision features in order to hinder the backdoor unlearning\nthrough clean fine-tuning. Extensive experiments demonstrate that our attack\nsignificantly outperforms state-of-the-art baselines (+45.3% ASR) in the\npresence of SoTA backdoor defenses, rendering these mitigation and detection\nstrategies virtually ineffective. Furthermore, our approach effectively attacks\nsome more rigorous scenarios like downstream tasks. 
We believe that this paper\nraises awareness regarding the potential threats associated with the practical\napplication of multimodal contrastive learning and encourages the development\nof more robust defense mechanisms.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Siyuan Liang", "Mingli Zhu", "Aishan Liu", "Baoyuan Wu", "Xiaochun Cao", "Ee-Chien Chang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f432"}, "filepath": "data/2405.09342.png", "tags": [], "_media_type": "image", "_rand": 0.9995132749235807, "arXiv_link": "https://arxiv.org/abs/2405.09342", "other_link": "https://github.com/Cisse-away/PDDM).", "title": "Flexible Depth Completion for Sparse and Varying Point Densities", "abstract": "Image-guided depth completion aims at generating a dense depth map from\nsparse LiDAR data and RGB image. Recent methods have shown promising\nperformance by reformulating it as a classification problem with two sub-tasks:\ndepth discretization and probability prediction. They divide the depth range\ninto several discrete depth values as depth categories, serving as priors for\nscene depth distributions. However, previous depth discretization methods are\neasy to be impacted by depth distribution variations across different scenes,\nresulting in suboptimal scene depth distribution priors. To address the above\nproblem, we propose a progressive depth decoupling and modulating network,\nwhich incrementally decouples the depth range into bins and adaptively\ngenerates multi-scale dense depth maps in multiple stages. Specifically, we\nfirst design a Bins Initializing Module (BIM) to construct the seed bins by\nexploring the depth distribution information within a sparse depth map,\nadapting variations of depth distribution. Then, we devise an incremental depth\ndecoupling branch to progressively refine the depth distribution information\nfrom global to local. Meanwhile, an adaptive depth modulating branch is\ndeveloped to progressively improve the probability representation from\ncoarse-grained to fine-grained. And the bi-directional information interactions\nare proposed to strengthen the information interaction between those two\nbranches (sub-tasks) for promoting information complementation in each branch.\nFurther, we introduce a multi-scale supervision mechanism to learn the depth\ndistribution information in latent features and enhance the adaptation\ncapability across different scenes. Experimental results on public datasets\ndemonstrate that our method outperforms the state-of-the-art methods. 
The code\nwill be open-sourced at https://github.com/Cisse-away/PDDM.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Jinhyung Park", "Yu-Jhe Li", "Kris Kitani"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f433"}, "filepath": "data/2312.00075.png", "tags": [], "_media_type": "image", "_rand": 0.9994149996194894, "arXiv_link": "https://arxiv.org/abs/2312.00075", "other_link": "https://ubc-vision.github.io/nf-soft-mining/.", "title": "Accelerating Neural Field Training via Soft Mining", "abstract": "We present an approach to accelerate Neural Field training by efficiently\nselecting sampling locations. While Neural Fields have recently become popular,\nthey are often trained by uniformly sampling the training domain, or through\nhandcrafted heuristics. We show that improved convergence and final training\nquality can be achieved by a soft mining technique based on importance\nsampling: rather than either considering or ignoring a pixel completely, we\nweigh the corresponding loss by a scalar. To implement our idea, we use Langevin\nMonte-Carlo sampling. We show that by doing so, regions with higher error are\nbeing selected more frequently, leading to more than 2x improvement in\nconvergence speed. The code and related resources for this study are publicly\navailable at https://ubc-vision.github.io/nf-soft-mining/.", "keywords": [], "authors_list": ["Shakiba Kheradmand", "Daniel Rebain", "Gopal Sharma", "Hossam Isack", "Abhishek Kar", "Andrea Tagliasacchi", "Kwang Moo Yi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f434"}, "filepath": "data/2312.09337.png", "tags": [], "_media_type": "image", "_rand": 0.9993613665111576, "arXiv_link": "https://arxiv.org/abs/2312.09337", "other_link": "https://promptable-behaviors.github.io", "title": "Promptable Behaviors: Personalizing Multi-Objective Rewards from Human Preferences", "abstract": "Customizing robotic behaviors to be aligned with diverse human preferences is\nan underexplored challenge in the field of embodied AI. In this paper, we\npresent Promptable Behaviors, a novel framework that facilitates efficient\npersonalization of robotic agents to diverse human preferences in complex\nenvironments. We use multi-objective reinforcement learning to train a single\npolicy adaptable to a broad spectrum of preferences. We introduce three\ndistinct methods to infer human preferences by leveraging different types of\ninteractions: (1) human demonstrations, (2) preference feedback on trajectory\ncomparisons, and (3) language instructions. We evaluate the proposed method in\npersonalized object-goal navigation and flee navigation tasks in ProcTHOR and\nRoboTHOR, demonstrating the ability to prompt agent behaviors to satisfy human\npreferences in various scenarios. 
Project page:\nhttps://promptable-behaviors.github.io", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis", "Multimodal models and vision-language models"], "authors_list": ["Minyoung Hwang", "Luca Weihs", "Chanwoo Park", "Kimin Lee", "Aniruddha Kembhavi", "Kiana Ehsani"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f435"}, "filepath": "data/2404.16222.png", "tags": [], "_media_type": "image", "_rand": 0.9991741177437301, "arXiv_link": "https://arxiv.org/abs/2404.16222", "other_link": "", "title": "Step differences in instructional video", "abstract": "Comparing a user video to a reference how-to video is a key requirement for\nAR/VR technology delivering personalized assistance tailored to the user's\nprogress. However, current approaches for language-based assistance can only\nanswer questions about a single video. We propose an approach that first\nautomatically generates large amounts of visual instruction tuning data\ninvolving pairs of videos from HowTo100M by leveraging existing step\nannotations and accompanying narrations, and then trains a video-conditioned\nlanguage model to jointly reason across multiple raw videos. Our model achieves\nstate-of-the-art performance at identifying differences between video pairs and\nranking videos based on the severity of these differences, and shows promising\nability to perform general reasoning over multiple videos.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Tushar Nagarajan", "Lorenzo Torresani"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f436"}, "filepath": "data/2312.08963.png", "tags": [], "_media_type": "image", "_rand": 0.999856295518033, "arXiv_link": "https://arxiv.org/abs/2312.08963", "other_link": "", "title": "LEMON: Learning 3D Human-Object Interaction Relation from 2D Images", "abstract": "Learning 3D human-object interaction relation is pivotal to embodied AI and\ninteraction modeling. Most existing methods approach the goal by learning to\npredict isolated interaction elements, e.g., human contact, object affordance,\nand human-object spatial relation, primarily from the perspective of either the\nhuman or the object. These approaches underexploit certain correlations between the\ninteraction counterparts (human and object), and struggle to address the\nuncertainty in interactions. Actually, objects' functionalities potentially\naffect humans' interaction intentions, which reveals what the interaction is.\nMeanwhile, the interacting humans and objects exhibit matching geometric\nstructures, which presents how to interact. In light of this, we propose\nharnessing these inherent correlations between interaction counterparts to\nmitigate the uncertainty and jointly anticipate the above interaction elements\nin 3D space. To achieve this, we present LEMON (LEarning 3D huMan-Object\niNteraction relation), a unified model that mines interaction intentions of the\ncounterparts and employs curvatures to guide the extraction of geometric\ncorrelations, combining them to anticipate the interaction elements. 
Besides,\nthe 3D Interaction Relation dataset (3DIR) is collected to serve as the test\nbed for training and evaluation. Extensive experiments demonstrate the\nsuperiority of LEMON over methods estimating each element in isolation.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Yuhang Yang", "Wei Zhai", "Hongchen Luo", "Yang Cao", "Zheng-Jun Zha"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f437"}, "filepath": "data/2404.04242.png", "tags": [], "_media_type": "image", "_rand": 0.9998135605349826, "arXiv_link": "https://arxiv.org/abs/2404.04242", "other_link": "", "title": "Physical Property Understanding from Language-Embedded Feature Fields", "abstract": "Can computers perceive the physical properties of objects solely through\nvision? Research in cognitive science and vision science has shown that humans\nexcel at identifying materials and estimating their physical properties based\npurely on visual appearance. In this paper, we present a novel approach for\ndense prediction of the physical properties of objects using a collection of\nimages. Inspired by how humans reason about physics through vision, we leverage\nlarge language models to propose candidate materials for each object. We then\nconstruct a language-embedded point cloud and estimate the physical properties\nof each 3D point using a zero-shot kernel regression approach. Our method is\naccurate, annotation-free, and applicable to any object in the open world.\nExperiments demonstrate the effectiveness of the proposed approach in various\nphysical property reasoning tasks, such as estimating the mass of common\nobjects, as well as other properties like friction and hardness.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Albert J. Zhai", "Yuan Shen", "Emily Y. Chen", "Gloria Wang", "Xinlei Wang", "Sheng Wang", "Kaiyu Guan", "Shenlong Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f438"}, "filepath": "data/2312.12478.png", "tags": [], "_media_type": "image", "_rand": 0.9999349894179967, "arXiv_link": "https://arxiv.org/abs/2312.12478", "other_link": "https://github.com/fangkaipeng/ProS.", "title": "ProS: Prompting-to-simulate Generalized knowledge for Universal Cross-Domain Retrieval", "abstract": "The goal of Universal Cross-Domain Retrieval (UCDR) is to achieve robust\nperformance in generalized test scenarios, wherein data may belong to strictly\nunknown domains and categories during training. Recently, pre-trained models\nwith prompt tuning have shown strong generalization capabilities and attained\nnoteworthy achievements in various downstream tasks, such as few-shot learning\nand video-text retrieval. However, applying them directly to UCDR may not\nbe sufficient to handle both domain shift (i.e., adapting to unfamiliar domains)\nand semantic shift (i.e., transferring to unknown categories). To this end, we\npropose Prompting-to-Simulate (ProS), the first method to\napply prompt tuning for UCDR. 
ProS employs a two-step process to simulate\nContent-aware Dynamic Prompts (CaDP) which can impact models to produce\ngeneralized features for UCDR. Concretely, in the Prompt Units Learning stage, we\nintroduce two Prompt Units to individually capture domain and semantic\nknowledge in a mask-and-align way. Then, in the Context-aware Simulator Learning\nstage, we train a Content-aware Prompt Simulator under simulated test\nscenarios to produce the corresponding CaDP. Extensive experiments conducted on\nthree benchmark datasets show that our method achieves new state-of-the-art\nperformance without introducing excessive parameters. Our method is publicly\navailable at https://github.com/fangkaipeng/ProS.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Fang Kaipeng", "Jingkuan Song", "Lianli Gao", "Pengpeng Zeng", "Zhi-Qi Cheng", "Xiyao LI", "Heng Tao Shen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f439"}, "filepath": "data/2311.18775.png", "tags": [], "_media_type": "image", "_rand": 0.9992292780140607, "arXiv_link": "https://arxiv.org/abs/2311.18775", "other_link": "", "title": "CoDi-2: Interleaved and In-Context Any-to-Any Generation", "abstract": "We present CoDi-2, a versatile and interactive Multimodal Large Language\nModel (MLLM) that can follow complex multimodal interleaved instructions,\nconduct in-context learning (ICL), reason, chat, edit, etc., in an any-to-any\ninput-output modality paradigm. By aligning modalities with language for both\nencoding and generation, CoDi-2 empowers Large Language Models (LLMs) to not\nonly understand complex modality-interleaved instructions and in-context\nexamples, but also autoregressively generate grounded and coherent multimodal\noutputs in the continuous feature space. To train CoDi-2, we build a\nlarge-scale generation dataset encompassing in-context multimodal instructions\nacross text, vision, and audio. CoDi-2 demonstrates a wide range of zero-shot\ncapabilities for multimodal generation, such as in-context learning, reasoning,\nand compositionality of any-to-any modality generation through multi-round\ninteractive conversation. CoDi-2 surpasses previous domain-specific models on\ntasks such as subject-driven image generation, vision transformation, and audio\nediting. 
CoDi-2 signifies a substantial breakthrough in developing a\ncomprehensive multimodal foundation model adept at interpreting in-context\nlanguage-vision-audio interleaved instructions and producing multimodal\noutputs.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Zineng Tang", "Ziyi Yang", "MAHMOUD KHADEMI", "Yang Liu", "Chenguang Zhu", "Mohit Bansal"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language", "Machine Learning", "Sound", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f43a"}, "filepath": "data/2401.04608.png", "tags": [], "_media_type": "image", "_rand": 0.999122570796367, "arXiv_link": "https://arxiv.org/abs/2401.04608", "other_link": "", "title": "EmoGen: Emotional Image Content Generation with Text-to-Image Diffusion Models", "abstract": "Recent years have witnessed remarkable progress in image generation task,\nwhere users can create visually astonishing images with high-quality. However,\nexisting text-to-image diffusion models are proficient in generating concrete\nconcepts (dogs) but encounter challenges with more abstract ones (emotions).\nSeveral efforts have been made to modify image emotions with color and style\nadjustments, facing limitations in effectively conveying emotions with fixed\nimage contents. In this work, we introduce Emotional Image Content Generation\n(EICG), a new task to generate semantic-clear and emotion-faithful images given\nemotion categories. Specifically, we propose an emotion space and construct a\nmapping network to align it with the powerful Contrastive Language-Image\nPre-training (CLIP) space, providing a concrete interpretation of abstract\nemotions. Attribute loss and emotion confidence are further proposed to ensure\nthe semantic diversity and emotion fidelity of the generated images. Our method\noutperforms the state-of-the-art text-to-image approaches both quantitatively\nand qualitatively, where we derive three custom metrics, i.e., emotion\naccuracy, semantic clarity and semantic diversity. In addition to generation,\nour method can help emotion understanding and inspire emotional art design.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Jingyuan Yang", "Jiawei Feng", "Hui Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f43b"}, "filepath": "data/2309.13006.png", "tags": [], "_media_type": "image", "_rand": 0.9999911587919794, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2309.13006", "other_link": "", "title": "Rapid 3D Model Generation with Intuitive 3D Input", "abstract": "The rapid development of AR/VR brings tremendous demands for 3D content.\nWhile the widely-used Computer-Aided Design (CAD) method requires a\ntime-consuming and labor-intensive modeling process, sketch-based 3D modeling\noffers a potential solution as a natural form of computer-human interaction.\nHowever, the sparsity and ambiguity of sketches make it challenging to generate\nhigh-fidelity content reflecting creators' ideas. 
Precise drawing from multiple\nviews or strategic step-by-step drawings is often required to tackle the\nchallenge but is not friendly to novice users. In this work, we introduce a\nnovel end-to-end approach, Deep3DSketch+, which performs 3D modeling using only\na single free-hand sketch without inputting multiple sketches or view\ninformation. Specifically, we introduce a lightweight generation network for\nefficient inference in real-time and a structural-aware adversarial training\napproach with a Stroke Enhancement Module (SEM) to capture the structural\ninformation to facilitate learning of the realistic and fine-detailed shape\nstructures for high-fidelity performance. Extensive experiments demonstrated\nthe effectiveness of our approach with the state-of-the-art (SOTA) performance\non both synthetic and real datasets.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Tianrun Chen", "Chaotao Ding", "Shangzhan Zhang", "Chunan Yu", "Ying Zang", "Zejian Li", "Sida Peng", "Lingyun Sun"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f43c"}, "filepath": "data/2311.16500.png", "tags": [], "_media_type": "image", "_rand": 0.999100985458755, "arXiv_link": "https://arxiv.org/abs/2311.16500", "other_link": "", "title": "L-MAGIC: Language Model Assisted Generation of Images with Consistency", "abstract": "In this paper, we introduce a Multimodal Large Language Model-based\nGeneration Assistant (LLMGA), leveraging the vast reservoir of knowledge and\nproficiency in reasoning, comprehension, and response inherent in Large\nLanguage Models (LLMs) to assist users in image generation and editing.\nDiverging from existing approaches where Multimodal Large Language Models\n(MLLMs) generate fixed-size embeddings to control Stable Diffusion (SD), our\nLLMGA provides a detailed language generation prompt for precise control over\nSD. This not only augments LLM context understanding but also reduces noise in\ngeneration prompts, yields images with more intricate and precise content, and\nelevates the interpretability of the network. To this end, we curate a\ncomprehensive dataset comprising prompt refinement, similar image generation,\ninpainting \\& outpainting, and instruction-based editing. Moreover, we propose\na two-stage training scheme. In the first stage, we train the MLLM to grasp the\nproperties of image generation and editing, enabling it to generate detailed\nprompts. In the second stage, we optimize SD to align with the MLLM's\ngeneration prompts. Additionally, we propose a reference-based restoration\nnetwork to alleviate texture, brightness, and contrast disparities between\ngenerated and preserved regions during inpainting and outpainting. 
Extensive\nresults show that LLMGA has promising generation and editing capabilities and\ncan enable more flexible and expansive applications in an interactive manner.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["zhipeng cai", "Matthias Mueller", "Reiner Birkl", "Diana Wofk", "Shao-Yen Tseng", "JunDa Cheng", "Gabriela Ben Melech Stan", "Vasudev Lal", "Michael Paulitsch"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f43d"}, "filepath": "data/2312.11360v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998024029446828, "arXiv_link": "https://arxiv.org/abs/2312.11360v1", "other_link": "https://kim-youwang.github.io/paint-it", "title": "Paint-it: Text-to-Texture Synthesis via Deep Convolutional Texture Map Optimization and Physically-Based Rendering", "abstract": "We present Paint-it, a text-driven high-fidelity texture map synthesis method\nfor 3D meshes via neural re-parameterized texture optimization. Paint-it\nsynthesizes texture maps from a text description by\nsynthesis-through-optimization, exploiting the Score-Distillation Sampling\n(SDS). We observe that directly applying SDS yields undesirable texture quality\ndue to its noisy gradients. We reveal the importance of texture\nparameterization when using SDS. Specifically, we propose Deep Convolutional\nPhysically-Based Rendering (DC-PBR) parameterization, which re-parameterizes\nthe physically-based rendering (PBR) texture maps with randomly initialized\nconvolution-based neural kernels, instead of a standard pixel-based\nparameterization. We show that DC-PBR inherently schedules the optimization\ncurriculum according to texture frequency and naturally filters out the noisy\nsignals from SDS. In experiments, Paint-it obtains remarkable quality PBR\ntexture maps within 15 min., given only a text description. We demonstrate the\ngeneralizability and practicality of Paint-it by synthesizing high-quality\ntexture maps for large-scale mesh datasets and showing test-time applications\nsuch as relighting and material control using a popular graphics engine.\nProject page: https://kim-youwang.github.io/paint-it", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation", "Computational imaging and physics-based vision"], "authors_list": ["Kim Youwang", "Tae-Hyun Oh", "Gerard Pons-Moll"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f43e"}, "filepath": "data/2312.07526.png", "tags": [], "_media_type": "image", "_rand": 0.9997628183909198, "arXiv_link": "https://arxiv.org/abs/2312.07526", "other_link": "https://github.com/open-mmlab/mmpose/tree/main/projects/rtmo.", "title": "RTMO: Towards High-Performance One-Stage Real-Time Multi-Person Pose Estimation", "abstract": "Real-time multi-person pose estimation presents significant challenges in\nbalancing speed and precision. While two-stage top-down methods slow down as\nthe number of people in the image increases, existing one-stage methods often\nfail to simultaneously deliver high accuracy and real-time performance. 
This\npaper introduces RTMO, a one-stage pose estimation framework that seamlessly\nintegrates coordinate classification by representing keypoints using dual 1-D\nheatmaps within the YOLO architecture, achieving accuracy comparable to\ntop-down methods while maintaining high speed. We propose a dynamic coordinate\nclassifier and a tailored loss function for heatmap learning, specifically\ndesigned to address the incompatibilities between coordinate classification and\ndense prediction models. RTMO outperforms state-of-the-art one-stage pose\nestimators, achieving 1.1% higher AP on COCO while operating about 9 times\nfaster with the same backbone. Our largest model, RTMO-l, attains 74.8% AP on\nCOCO val2017 and 141 FPS on a single V100 GPU, demonstrating its efficiency and\naccuracy. The code and models are available at\nhttps://github.com/open-mmlab/mmpose/tree/main/projects/rtmo.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Peng Lu", "Tao Jiang", "Yining Li", "Xiangtai Li", "Kai Chen", "Wenming Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f43f"}, "filepath": "data/2404.15263.png", "tags": [], "_media_type": "image", "_rand": 0.9994604569740292, "arXiv_link": "https://arxiv.org/abs/2404.15263", "other_link": "", "title": "Multi-Session SLAM using Wide-Baseline Optical Flow", "abstract": "We introduce a new system for Multi-Session SLAM, which tracks camera motion\nacross multiple disjoint videos under a single global reference. Our approach\ncouples the prediction of optical flow with solver layers to estimate camera\npose. The backbone is trained end-to-end using a novel differentiable solver\nfor wide-baseline two-view pose. The full system can connect disjoint\nsequences, perform visual odometry, and perform global optimization. Compared to\nexisting approaches, our design is accurate and robust to catastrophic\nfailures. Code is available at github.com/princeton-vl/MultiSlam_DiffPose", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Lahav Lipson", "Jia Deng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f440"}, "filepath": "data/2312.03391.png", "tags": [], "_media_type": "image", "_rand": 0.9995208261969165, "arXiv_link": "https://arxiv.org/abs/2312.03391", "other_link": "", "title": "Action Scene Graphs for Long-Form Understanding of Egocentric Videos", "abstract": "We present Egocentric Action Scene Graphs (EASGs), a new representation for\nlong-form understanding of egocentric videos. EASGs extend standard\nmanually-annotated representations of egocentric videos, such as verb-noun\naction labels, by providing a temporally evolving graph-based description of\nthe actions performed by the camera wearer, including interacted objects, their\nrelationships, and how actions unfold in time. Through a novel annotation\nprocedure, we extend the Ego4D dataset by adding manually labeled Egocentric\nAction Scene Graphs offering a rich set of annotations designed for long-form\negocentric video understanding. We hence define the EASG generation task and\nprovide a baseline approach, establishing preliminary benchmarks. 
Experiments\non two downstream tasks, egocentric action anticipation and egocentric activity\nsummarization, highlight the effectiveness of EASGs for long-form egocentric\nvideo understanding. We will release the dataset and the code to replicate\nexperiments and annotations.", "keywords": [], "authors_list": ["Ivan Rodin", "Antonino Furnari", "Kyle Min", "Subarna Tripathi", "Giovanni Maria Farinella"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f441"}, "filepath": "data/2403.02090.png", "tags": [], "_media_type": "image", "_rand": 0.999007089472178, "arXiv_link": "https://arxiv.org/abs/2403.02090", "other_link": "https://sangmin-git.github.io/projects/MMSI.", "title": "Modeling Multimodal Social Interactions: New Challenges and Baselines with Densely Aligned Representations", "abstract": "Understanding social interactions involving both verbal and non-verbal cues\nis essential for effectively interpreting social situations. However, most\nprior works on multimodal social cues focus predominantly on single-person\nbehaviors or rely on holistic visual representations that are not aligned to\nutterances in multi-party environments. Consequently, they are limited in\nmodeling the intricate dynamics of multi-party interactions. In this paper, we\nintroduce three new challenging tasks to model the fine-grained dynamics\nbetween multiple people: speaking target identification, pronoun coreference\nresolution, and mentioned player prediction. We contribute extensive data\nannotations to curate these new challenges in social deduction game settings.\nFurthermore, we propose a novel multimodal baseline that leverages densely\naligned language-visual representations by synchronizing visual features with\ntheir corresponding utterances. This facilitates concurrently capturing verbal\nand non-verbal cues pertinent to social reasoning. Experiments demonstrate the\neffectiveness of the proposed approach with densely aligned multimodal\nrepresentations in modeling fine-grained social interactions. Project website:\nhttps://sangmin-git.github.io/projects/MMSI.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Sangmin Lee", "Bolin Lai", "Fiona Ryan", "Bikram Boote", "James Rehg"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f442"}, "filepath": "data/2312.13150.png", "tags": [], "_media_type": "image", "_rand": 0.9999287772976442, "arXiv_link": "https://arxiv.org/abs/2312.13150", "other_link": "https://szymanowiczs.github.io/splatter-image.", "title": "Splatter Image: Ultra-Fast Single-View 3D Reconstruction", "abstract": "We introduce the Splatter Image, an ultra-efficient approach for monocular 3D object\nreconstruction. Splatter Image is based on Gaussian Splatting, which allows\nfast and high-quality reconstruction of 3D scenes from multiple images. We\napply Gaussian Splatting to monocular reconstruction by learning a neural\nnetwork that, at test time, performs reconstruction in a feed-forward manner,\nat 38 FPS. Our main innovation is the surprisingly straightforward design of\nthis network, which, using 2D operators, maps the input image to one 3D\nGaussian per pixel. 
The resulting set of Gaussians thus has the form of an image,\nthe Splatter Image. We further extend the method to take several images as input\nvia cross-view attention. Owing to the speed of the renderer (588 FPS), we use\na single GPU for training while generating entire images at each iteration to\noptimize perceptual metrics like LPIPS. On several synthetic, real,\nmulti-category and large-scale benchmark datasets, we achieve better results in\nterms of PSNR, LPIPS, and other metrics while training and evaluating much\nfaster than prior works. Code, models, demo and more results are available at\nhttps://szymanowiczs.github.io/splatter-image.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Stanislaw Szymanowicz", "Christian Rupprecht", "Andrea Vedaldi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f443"}, "filepath": "data/2403.11496.png", "tags": [], "_media_type": "image", "_rand": 0.9991064489832748, "arXiv_link": "https://arxiv.org/abs/2403.11496", "other_link": "", "title": "MCD: Diverse Large-Scale Multi-Campus Dataset for Robot Perception", "abstract": "Perception plays a crucial role in various robot applications. However,\nexisting well-annotated datasets are biased towards autonomous driving\nscenarios, while unlabelled SLAM datasets are quickly over-fitted, and often\nlack environment and domain variations. To expand the frontier of these fields,\nwe introduce a comprehensive dataset named MCD (Multi-Campus Dataset),\nfeaturing a wide range of sensing modalities, high-accuracy ground truth, and\ndiverse challenging environments across three Eurasian university campuses. MCD\ncomprises both CCS (Classical Cylindrical Spinning) and NRE (Non-Repetitive\nEpicyclic) lidars, high-quality IMUs (Inertial Measurement Units), cameras, and\nUWB (Ultra-WideBand) sensors. Furthermore, in a pioneering effort, we introduce\nsemantic annotations of 29 classes over 59k sparse NRE lidar scans across three\ndomains, thus providing a novel challenge to existing semantic segmentation\nresearch upon this largely unexplored lidar modality. Finally, we propose, for\nthe first time to the best of our knowledge, continuous-time ground truth based\non optimization-based registration of lidar-inertial data on large survey-grade\nprior maps, which are also publicly released, each several times the size of\nexisting ones. 
We conduct a rigorous evaluation of numerous state-of-the-art\nalgorithms on MCD, report their performance, and highlight the challenges\nawaiting solutions from the research community.", "keywords": [], "authors_list": ["Thien-Minh Nguyen", "Shenghai Yuan", "Thien Nguyen", "Pengyu Yin", "Haozhi Cao", "Lihua Xie", "Maciej Wozniak", "Patric Jensfelt", "Marko Thiel", "Justin Ziegenbein", "Noel Blunder"], "category_name": "Robotics", "all_categories": ["Robotics", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f444"}, "filepath": "data/2403.01901.png", "tags": [], "_media_type": "image", "_rand": 0.9990266453654865, "arXiv_link": "https://arxiv.org/abs/2403.01901", "other_link": "https://github.com/modelscope/facechain.", "title": "FaceChain-ImagineID: Freely Crafting High-Fidelity Diverse Talking Faces from Disentangled Audio", "abstract": "In this paper, we abstract the process of people hearing speech, extracting\nmeaningful cues, and creating various dynamically audio-consistent talking\nfaces, termed Listening and Imagining, into the task of high-fidelity diverse\ntalking faces generation from a single audio. Specifically, it involves two\ncritical challenges: one is to effectively decouple identity, content, and\nemotion from entangled audio, and the other is to maintain intra-video\ndiversity and inter-video consistency. To tackle the issues, we first dig out\nthe intricate relationships among facial factors and simplify the decoupling\nprocess, tailoring a Progressive Audio Disentanglement for accurate facial\ngeometry and semantics learning, where each stage incorporates a customized\ntraining module responsible for a specific factor. Secondly, to achieve\nvisually diverse and audio-synchronized animation solely from input audio\nwithin a single model, we introduce the Controllable Coherent Frame generation,\nwhich involves the flexible integration of three trainable adapters with frozen\nLatent Diffusion Models (LDMs) to focus on maintaining facial geometry and\nsemantics, as well as texture and temporal coherence between frames. In this\nway, we inherit high-quality diverse generation from LDMs while significantly\nimproving their controllability at a low training cost. Extensive experiments\ndemonstrate the flexibility and effectiveness of our method in handling this\nparadigm. The codes will be released at\nhttps://github.com/modelscope/facechain.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Chao Xu", "Yang Liu", "Jiazheng Xing", "Weida Wang", "Mingze Sun", "Jun Dan", "Tianxin Huang", "Siyuan Li", "Zhi-Qi Cheng", "Ying Tai", "Baigui Sun"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f445"}, "filepath": "data/2311.13187v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998464455168113, "arXiv_link": "https://arxiv.org/abs/2311.13187v1", "other_link": "", "title": "NeISF: Neural Incident Stokes Field for Geometry and Material Estimation", "abstract": "Multi-view inverse rendering is the problem of estimating the scene\nparameters such as shapes, materials, or illuminations from a sequence of\nimages captured under different viewpoints. Many approaches, however, assume\nsingle light bounce and thus fail to recover challenging scenarios like\ninter-reflections. 
On the other hand, simply extending those methods to\nconsider multi-bounced light requires more assumptions to alleviate the\nambiguity. To address this problem, we propose Neural Incident Stokes Fields\n(NeISF), a multi-view inverse rendering framework that reduces ambiguities\nusing polarization cues. The primary motivation for using polarization cues is\nthat it is the accumulation of multi-bounced light, providing rich information\nabout geometry and material. Based on this knowledge, the proposed incident\nStokes field efficiently models the accumulated polarization effect with the\naid of an original physically-based differentiable polarimetric renderer.\nLastly, experimental results show that our method outperforms the existing\nworks in synthetic and real scenarios.", "keywords": ["Computational imaging and physics-based vision"], "authors_list": ["Chenhao Li", "Taishi Ono", "Takeshi Uemori", "Hajime Mihara", "Alexander Gatto", "Hajime Nagahara", "Yusuke Moriuchi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f446"}, "filepath": "data/2312.08371.png", "tags": [], "_media_type": "image", "_rand": 0.999522901726279, "arXiv_link": "https://arxiv.org/abs/2312.08371", "other_link": "https://github.com/kuanchihhuang/PTT.", "title": "PTT: Point-Trajectory Transformer for Efficient Temporal 3D Object Detection", "abstract": "Recent temporal LiDAR-based 3D object detectors achieve promising performance\nbased on the two-stage proposal-based approach. They generate 3D box candidates\nfrom the first-stage dense detector, followed by different temporal aggregation\nmethods. However, these approaches require per-frame objects or whole point\nclouds, posing challenges related to memory bank utilization. Moreover, point\nclouds and trajectory features are combined solely based on concatenation,\nwhich may neglect effective interactions between them. In this paper, we\npropose a point-trajectory transformer with long short-term memory for\nefficient temporal 3D object detection. To this end, we only utilize point\nclouds of current-frame objects and their historical trajectories as input to\nminimize the memory bank storage requirement. Furthermore, we introduce modules\nto encode trajectory features, focusing on long short-term and future-aware\nperspectives, and then effectively aggregate them with point cloud features. We\nconduct extensive experiments on the large-scale Waymo dataset to demonstrate\nthat our approach performs well against state-of-the-art methods. 
Code and\nmodels will be made publicly available at https://github.com/kuanchihhuang/PTT.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Kuan-Chih Huang", "Weijie Lyu", "Ming-Hsuan Yang", "Yi-Hsuan Tsai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f447"}, "filepath": "data/2404.03518.png", "tags": [], "_media_type": "image", "_rand": 0.9993253879401629, "arXiv_link": "https://arxiv.org/abs/2404.03518", "other_link": "https://github.com/MartyrPenink/SDPose.", "title": "SDPose: Tokenized Pose Estimation via Circulation-Guide Self-Distillation", "abstract": "Recently, transformer-based methods have achieved state-of-the-art prediction\nquality on human pose estimation(HPE). Nonetheless, most of these\ntop-performing transformer-based models are too computation-consuming and\nstorage-demanding to deploy on edge computing platforms. Those\ntransformer-based models that require fewer resources are prone to\nunder-fitting due to their smaller scale and thus perform notably worse than\ntheir larger counterparts. Given this conundrum, we introduce SDPose, a new\nself-distillation method for improving the performance of small\ntransformer-based models. To mitigate the problem of under-fitting, we design a\ntransformer module named Multi-Cycled Transformer(MCT) based on multiple-cycled\nforwards to more fully exploit the potential of small model parameters.\nFurther, in order to prevent the additional inference compute-consuming brought\nby MCT, we introduce a self-distillation scheme, extracting the knowledge from\nthe MCT module to a naive forward model. Specifically, on the MSCOCO validation\ndataset, SDPose-T obtains 69.7% mAP with 4.4M parameters and 1.8 GFLOPs.\nFurthermore, SDPose-S-V2 obtains 73.5% mAP on the MSCOCO validation dataset\nwith 6.2M parameters and 4.7 GFLOPs, achieving a new state-of-the-art among\npredominant tiny neural network methods. Our code is available at\nhttps://github.com/MartyrPenink/SDPose.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Chen Sichen", "Yingyi Zhang", "Siming Huang", "Ran Yi", "Ke Fan", "Ruixin Zhang", "Peixian Chen", "Jun Wang", "Shouhong Ding", "Lizhuang Ma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f448"}, "filepath": "data/2309.16397.png", "tags": [], "_media_type": "image", "_rand": 0.9995191626851059, "arXiv_link": "https://arxiv.org/abs/2309.16397", "other_link": "", "title": "Uncertainty-aware Action Decoupling Transformer for Action Anticipation", "abstract": "Offline Reinforcement Learning (RL) has emerged as a promising framework for\nlearning policies without active interactions, making it especially appealing\nfor autonomous driving tasks. Recent successes of Transformers inspire casting\noffline RL as sequence modeling, which performs well in long-horizon tasks.\nHowever, they are overly optimistic in stochastic environments with incorrect\nassumptions that the same goal can be consistently achieved by identical\nactions. 
In this paper, we introduce an UNcertainty-awaRE deciSion Transformer\n(UNREST) for planning in stochastic driving environments without introducing\nadditional transition or complex generative models. Specifically, UNREST\nestimates state uncertainties by the conditional mutual information between\ntransitions and returns, and segments sequences accordingly. Discovering the\n`uncertainty accumulation' and `temporal locality' properties of driving\nenvironments, UNREST replaces the global returns in decision transformers with\nless uncertain truncated returns, to learn from true outcomes of agent actions\nrather than environment transitions. We also dynamically evaluate environmental\nuncertainty during inference for cautious planning. Extensive experimental\nresults demonstrate UNREST's superior performance in various driving scenarios\nand the power of our uncertainty estimation strategy.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Hongji Guo", "Nakul Agarwal", "Shao-Yuan Lo", "Kwonjoon Lee", "Qiang Ji"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f449"}, "filepath": "data/2403.11448.png", "tags": [], "_media_type": "image", "_rand": 0.9992435413571902, "arXiv_link": "https://arxiv.org/abs/2403.11448", "other_link": "", "title": "Robust Overfitting Does Matter: Test-Time Adversarial Purification With FGSM", "abstract": "Numerous studies have demonstrated the susceptibility of deep neural networks\n(DNNs) to subtle adversarial perturbations, prompting the development of many\nadvanced adversarial defense methods aimed at mitigating adversarial attacks.\nCurrent defense strategies usually train DNNs for a specific adversarial attack\nmethod and can achieve good robustness in defense against this type of\nadversarial attack. Nevertheless, when subjected to evaluations involving\nunfamiliar attack modalities, empirical evidence reveals a pronounced\ndeterioration in the robustness of DNNs. Meanwhile, there is a trade-off\nbetween the classification accuracy of clean examples and adversarial examples.\nMost defense methods often sacrifice the accuracy of clean examples in order to\nimprove the adversarial robustness of DNNs. To alleviate these problems and\nenhance the overall robust generalization of DNNs, we propose the Test-Time\nPixel-Level Adversarial Purification (TPAP) method. This approach is based on\nthe robust overfitting characteristic of DNNs to the fast gradient sign method\n(FGSM) on training and test datasets. It utilizes FGSM for adversarial\npurification, to process images for purifying unknown adversarial perturbations\nfrom pixels at testing time in a \"counter changes with changelessness\" manner,\nthereby enhancing the defense capability of DNNs against various unknown\nadversarial attacks. 
Extensive experimental results show that our method can\neffectively improve the overall robust generalization of DNNs, notably over\nprevious methods.", "keywords": [], "authors_list": ["Linyu Tang", "Lei Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f44a"}, "filepath": "data/2401.10831.png", "tags": [], "_media_type": "image", "_rand": 0.9998762550177577, "arXiv_link": "https://arxiv.org/abs/2401.10831", "other_link": "", "title": "Compositional Video Understanding with Spatiotemporal Structure-based Transformers", "abstract": "This paper studies the problem of concept-based interpretability of\ntransformer representations for videos. Concretely, we seek to explain the\ndecision-making process of video transformers based on high-level,\nspatiotemporal concepts that are automatically discovered. Prior research on\nconcept-based interpretability has concentrated solely on image-level tasks.\nComparatively, video models deal with the added temporal dimension, increasing\ncomplexity and posing challenges in identifying dynamic concepts over time. In\nthis work, we systematically address these challenges by introducing the first\nVideo Transformer Concept Discovery (VTCD) algorithm. To this end, we propose\nan efficient approach for unsupervised identification of units of video\ntransformer representations - concepts, and ranking their importance to the\noutput of a model. The resulting concepts are highly interpretable, revealing\nspatio-temporal reasoning mechanisms and object-centric representations in\nunstructured video models. Performing this analysis jointly over a diverse set\nof supervised and self-supervised representations, we discover that some of\nthese mechanisms are universal in video transformers. Finally, we show that VTCD\ncan be used for fine-grained action recognition and video object segmentation.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Hoyeoung Yun", "Jinwoo Ahn", "Minseo Kim", "Eun-Sol Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f44b"}, "filepath": "data/2311.16707.png", "tags": [], "_media_type": "image", "_rand": 0.9990331516201736, "arXiv_link": "https://arxiv.org/abs/2311.16707", "other_link": "", "title": "Correlation-aware Coarse-to-fine MLPs for Deformable Medical Image Registration", "abstract": "Dense prediction is a fundamental requirement for many medical vision tasks\nsuch as medical image restoration, registration, and segmentation. The most\npopular vision models, Convolutional Neural Networks (CNNs), have reached\nbottlenecks due to the intrinsic locality of convolution operations. Recently,\ntransformers have been widely adopted for dense prediction for their capability\nto capture long-range visual dependence. However, due to the high computational\ncomplexity and large memory consumption of self-attention operations,\ntransformers are usually used at downsampled feature resolutions. Such usage\ncannot effectively leverage the tissue-level textural information available\nonly at the full image resolution. 
This textural information is crucial for\nmedical dense prediction as it can differentiate the subtle human anatomy in\nmedical images. In this study, we hypothesize that Multi-layer Perceptrons\n(MLPs) are superior alternatives to transformers in medical dense prediction\nwhere tissue-level details dominate the performance, as MLPs enable long-range\ndependence at the full image resolution. To validate our hypothesis, we develop\na full-resolution hierarchical MLP framework that uses MLPs beginning from the\nfull image resolution. We evaluate this framework with various MLP blocks on a\nwide range of medical dense prediction tasks including restoration,\nregistration, and segmentation. Extensive experiments on six public\nwell-benchmarked datasets show that, by simply using MLPs at full resolution,\nour framework outperforms its CNN and transformer counterparts and achieves\nstate-of-the-art performance on various medical dense prediction tasks.", "keywords": ["Medical imaging and biological vision", "Efficient and scalable vision"], "authors_list": ["Mingyuan Meng", "Dagan Feng", "Lei Bi", "Jinman Kim"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f44c"}, "filepath": "data/2311.10707.png", "tags": [], "_media_type": "image", "_rand": 0.999608341939397, "arXiv_link": "https://arxiv.org/abs/2311.10707", "other_link": "https://github.com/Cecile-hi/Multimodal-Learning-with-Alternating-Unimodal-Adaptation.", "title": "Multimodal Representation Learning by Alternating Unimodal Adaptation", "abstract": "Multimodal learning, which integrates data from diverse sensory modes, plays\na pivotal role in artificial intelligence. However, existing multimodal\nlearning methods often struggle with challenges where some modalities appear\nmore dominant than others during multimodal learning, resulting in suboptimal\nperformance. To address this challenge, we propose MLA (Multimodal Learning\nwith Alternating Unimodal Adaptation). MLA reframes the conventional joint\nmultimodal learning process by transforming it into an alternating unimodal\nlearning process, thereby minimizing interference between modalities.\nSimultaneously, it captures cross-modal interactions through a shared head,\nwhich undergoes continuous optimization across different modalities. This\noptimization process is controlled by a gradient modification mechanism to\nprevent the shared head from losing previously acquired information. During the\ninference phase, MLA utilizes a test-time uncertainty-based model fusion\nmechanism to integrate multimodal information. Extensive experiments are\nconducted on five diverse datasets, encompassing scenarios with complete\nmodalities and scenarios with missing modalities. These experiments demonstrate\nthe superiority of MLA over competing prior approaches. 
Our code is available\nat\nhttps://github.com/Cecile-hi/Multimodal-Learning-with-Alternating-Unimodal-Adaptation.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Xiaohui Zhang", "Xiaohui Zhang", "Jaehong Yoon", "Mohit Bansal", "Huaxiu Yao"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f44d"}, "filepath": "data/2310.00132.png", "tags": [], "_media_type": "image", "_rand": 0.9994702470687545, "arXiv_link": "https://arxiv.org/abs/2310.00132", "other_link": "https://github.com/lxa9867/QSD.", "title": "Towards Robust Audiovisual Segmentation in Complex Environments with Quantization-based Semantic Decomposition", "abstract": "Audiovisual segmentation (AVS) is a challenging task that aims to segment\nvisual objects in videos according to their associated acoustic cues. With\nmultiple sound sources and background disturbances involved, establishing\nrobust correspondences between audio and visual contents poses unique\nchallenges due to (1) complex entanglement across sound sources and (2)\nfrequent changes in the occurrence of distinct sound events. Assuming sound\nevents occur independently, the multi-source semantic space can be represented\nas the Cartesian product of single-source sub-spaces. We are motivated to\ndecompose the multi-source audio semantics into single-source semantics for\nmore effective interactions with visual content. We propose a semantic\ndecomposition method based on product quantization, where the multi-source\nsemantics can be decomposed and represented by several disentangled and\nnoise-suppressed single-source semantics. Furthermore, we introduce a\nglobal-to-local quantization mechanism, which distills knowledge from stable\nglobal (clip-level) features into local (frame-level) ones, to handle frequent\nchanges in audio semantics. Extensive experiments demonstrate that our\nsemantically decomposed audio representation significantly improves AVS\nperformance, e.g., +21.2% mIoU on the challenging AVS-Semantic benchmark with\nResNet50 backbone. https://github.com/lxa9867/QSD.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Xiang Li", "Jinglu Wang", "Xiaohao Xu", "Xiulian Peng", "Rita Singh", "Yan Lu", "Bhiksha Raj"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f44e"}, "filepath": "data/2312.04565v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996558600376731, "arXiv_link": "https://arxiv.org/abs/2312.04565v1", "other_link": "", "title": "MuRF: Multi-Baseline Radiance Fields", "abstract": "We present Multi-Baseline Radiance Fields (MuRF), a general feed-forward\napproach to solving sparse view synthesis under multiple different baseline\nsettings (small and large baselines, and different number of input views). To\nrender a target novel view, we discretize the 3D space into planes parallel to\nthe target image plane, and accordingly construct a target view frustum volume.\nSuch a target volume representation is spatially aligned with the target view,\nwhich effectively aggregates relevant information from the input views for\nhigh-quality rendering. 
It also facilitates subsequent radiance field\nregression with a convolutional network thanks to its axis-aligned nature. The\n3D context modeled by the convolutional network enables our method to synthesis\nsharper scene structures than prior works. Our MuRF achieves state-of-the-art\nperformance across multiple different baseline settings and diverse scenarios\nranging from simple objects (DTU) to complex indoor and outdoor scenes\n(RealEstate10K and LLFF). We also show promising zero-shot generalization\nabilities on the Mip-NeRF 360 dataset, demonstrating the general applicability\nof MuRF.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Haofei Xu", "Anpei Chen", "Yuedong Chen", "Christos Sakaridis", "Yulun Zhang", "Marc Pollefeys", "Andreas Geiger", "Fisher Yu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f44f"}, "filepath": "data/2403.01300.png", "tags": [], "_media_type": "image", "_rand": 0.9994899093724461, "arXiv_link": "https://arxiv.org/abs/2403.01300", "other_link": "", "title": "Causal Mode Multiplexer: A Novel Framework for Unbiased Multispectral Pedestrian Detection", "abstract": "RGBT multispectral pedestrian detection has emerged as a promising solution\nfor safety-critical applications that require day/night operations. However,\nthe modality bias problem remains unsolved as multispectral pedestrian\ndetectors learn the statistical bias in datasets. Specifically, datasets in\nmultispectral pedestrian detection mainly distribute between ROTO (day) and\nRXTO (night) data; the majority of the pedestrian labels statistically co-occur\nwith their thermal features. As a result, multispectral pedestrian detectors\nshow poor generalization ability on examples beyond this statistical\ncorrelation, such as ROTX data. To address this problem, we propose a novel\nCausal Mode Multiplexer (CMM) framework that effectively learns the causalities\nbetween multispectral inputs and predictions. Moreover, we construct a new\ndataset (ROTX-MP) to evaluate modality bias in multispectral pedestrian\ndetection. ROTX-MP mainly includes ROTX examples not presented in previous\ndatasets. Extensive experiments demonstrate that our proposed CMM framework\ngeneralizes well on existing datasets (KAIST, CVC-14, FLIR) and the new\nROTX-MP. We will release our new dataset to the public for future research.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Taeheon Kim", "Sebin Shin", "Youngjoon Yu", "Hak Gu Kim", "Yong Man Ro"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f450"}, "filepath": "data/2404.07992v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997332188185347, "arXiv_link": "https://arxiv.org/abs/2404.07992v1", "other_link": "", "title": "GoMVS: Geometrically Consistent Cost Aggregation for Multi-View Stereo", "abstract": "Matching cost aggregation plays a fundamental role in learning-based\nmulti-view stereo networks. However, directly aggregating adjacent costs can\nlead to suboptimal results due to local geometric inconsistency. 
Related\nmethods either seek selective aggregation or improve aggregated depth in the 2D\nspace, both are unable to handle geometric inconsistency in the cost volume\neffectively. In this paper, we propose GoMVS to aggregate geometrically\nconsistent costs, yielding better utilization of adjacent geometries. More\nspecifically, we correspond and propagate adjacent costs to the reference pixel\nby leveraging the local geometric smoothness in conjunction with surface\nnormals. We achieve this by the geometric consistent propagation (GCP) module.\nIt computes the correspondence from the adjacent depth hypothesis space to the\nreference depth space using surface normals, then uses the correspondence to\npropagate adjacent costs to the reference geometry, followed by a convolution\nfor aggregation. Our method achieves new state-of-the-art performance on DTU,\nTanks & Temple, and ETH3D datasets. Notably, our method ranks 1st on the Tanks\n& Temple Advanced benchmark.", "keywords": ["Low-level vision"], "authors_list": ["Jiang Wu", "Rui Li", "Haofei Xu", "Wenxun Zhao", "Yu Zhu", "Jinqiu Sun", "Yanning Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f451"}, "filepath": "data/2402.03312.png", "tags": [], "_media_type": "image", "_rand": 0.9995125832211675, "arXiv_link": "https://arxiv.org/abs/2402.03312", "other_link": "", "title": "Test-Time Adaptation for Depth Completion", "abstract": "It is common to observe performance degradation when transferring models\ntrained on some (source) datasets to target testing data due to a domain gap\nbetween them. Existing methods for bridging this gap, such as domain adaptation\n(DA), may require the source data on which the model was trained (often not\navailable), while others, i.e., source-free DA, require many passes through the\ntesting data. We propose an online test-time adaptation method for depth\ncompletion, the task of inferring a dense depth map from a single image and\nassociated sparse depth map, that closes the performance gap in a single pass.\nWe first present a study on how the domain shift in each data modality affects\nmodel performance. Based on our observations that the sparse depth modality\nexhibits a much smaller covariate shift than the image, we design an embedding\nmodule trained in the source domain that preserves a mapping from features\nencoding only sparse depth to those encoding image and sparse depth. During\ntest time, sparse depth features are projected using this map as a proxy for\nsource domain features and are used as guidance to train a set of auxiliary\nparameters (i.e., adaptation layer) to align image and sparse depth features\nfrom the target test domain to that of the source domain. 
We evaluate our\nmethod on indoor and outdoor scenarios and show that it improves over baselines\nby an average of 21.1%.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Hyoungseob Park", "Anjali W Gupta", "Alex Wong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f452"}, "filepath": "data/2404.02478.png", "tags": [], "_media_type": "image", "_rand": 0.999523212907341, "arXiv_link": "https://arxiv.org/abs/2404.02478", "other_link": "https://github.com/lapisrocks/fedselect.", "title": "FedSelect: Personalized Federated Learning with Customized Selection of Parameters for Fine-Tuning", "abstract": "Standard federated learning approaches suffer when client data distributions\nhave sufficient heterogeneity. Recent methods addressed the client data\nheterogeneity issue via personalized federated learning (PFL) - a class of FL\nalgorithms aiming to personalize learned global knowledge to better suit the\nclients' local data distributions. Existing PFL methods usually decouple global\nupdates in deep neural networks by performing personalization on particular\nlayers (i.e. classifier heads) and global aggregation for the rest of the\nnetwork. However, preselecting network layers for personalization may result in\nsuboptimal storage of global knowledge. In this work, we propose FedSelect, a\nnovel PFL algorithm inspired by the iterative subnetwork discovery procedure\nused for the Lottery Ticket Hypothesis. FedSelect incrementally expands\nsubnetworks to personalize client parameters, concurrently conducting global\naggregations on the remaining parameters. This approach enables the\npersonalization of both client parameters and subnetwork structure during the\ntraining process. Finally, we show that FedSelect outperforms recent\nstate-of-the-art PFL algorithms under challenging client data heterogeneity\nsettings and demonstrates robustness to various real-world distributional\nshifts. Our code is available at https://github.com/lapisrocks/fedselect.", "keywords": [], "authors_list": ["Rishub Tamirisa", "Chulin Xie", "Wenxuan Bao", "Andy Zhou", "Ron Arel", "Aviv Shamsian"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f453"}, "filepath": "data/2403.19780.png", "tags": [], "_media_type": "image", "_rand": 0.99982747487759, "arXiv_link": "https://arxiv.org/abs/2403.19780", "other_link": "", "title": "Mitigating Motion Blur in Neural Radiance Fields with Events and Frames", "abstract": "Neural Radiance Fields (NeRFs) have shown great potential in novel view\nsynthesis. However, they struggle to render sharp images when the data used for\ntraining is affected by motion blur. On the other hand, event cameras excel in\ndynamic scenes as they measure brightness changes with microsecond resolution\nand are thus only marginally affected by blur. Recent methods attempt to\nenhance NeRF reconstructions under camera motion by fusing frames and events.\nHowever, they face challenges in recovering accurate color content or constrain\nthe NeRF to a set of predefined camera poses, harming reconstruction quality in\nchallenging conditions. 
This paper proposes a novel formulation addressing\nthese issues by leveraging both model- and learning-based modules. We\nexplicitly model the blur formation process, exploiting the event double\nintegral as an additional model-based prior. Additionally, we model the\nevent-pixel response using an end-to-end learnable response function, allowing\nour method to adapt to non-idealities in the real event-camera sensor. We show,\non synthetic and real data, that the proposed approach outperforms existing\ndeblur NeRFs that use only frames as well as those that combine frames and\nevents by +6.13dB and +2.48dB, respectively.", "keywords": ["Deep learning architectures and techniques", "Low-level vision", "Computational imaging and physics-based vision"], "authors_list": ["Marco Cannici", "Davide Scaramuzza"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f454"}, "filepath": "data/2403.17709.png", "tags": [], "_media_type": "image", "_rand": 0.9992225644549707, "arXiv_link": "https://arxiv.org/abs/2403.17709", "other_link": "https://github.com/mlvlab/SpeaQ.", "title": "Groupwise Query Specialization and Quality-Aware Multi-Assignment for Transformer-based Visual Relationship Detection", "abstract": "Visual Relationship Detection (VRD) has seen significant advancements with\nTransformer-based architectures recently. However, we identify two key\nlimitations in a conventional label assignment for training Transformer-based\nVRD models, which is a process of mapping a ground-truth (GT) to a prediction.\nUnder the conventional assignment, an unspecialized query is trained since a\nquery is expected to detect every relation, which makes it difficult for a\nquery to specialize in specific relations. Furthermore, a query is also\ninsufficiently trained since a GT is assigned only to a single prediction,\ntherefore near-correct or even correct predictions are suppressed by being\nassigned no relation as a GT. To address these issues, we propose Groupwise\nQuery Specialization and Quality-Aware Multi-Assignment (SpeaQ). Groupwise\nQuery Specialization trains a specialized query by dividing queries and\nrelations into disjoint groups and directing a query in a specific query group\nsolely toward relations in the corresponding relation group. Quality-Aware\nMulti-Assignment further facilitates the training by assigning a GT to multiple\npredictions that are significantly close to a GT in terms of a subject, an\nobject, and the relation in between. Experimental results and analyses show\nthat SpeaQ effectively trains specialized queries, which better utilize the\ncapacity of a model, resulting in consistent performance gains with zero\nadditional inference cost across multiple VRD models and benchmarks. Code is\navailable at https://github.com/mlvlab/SpeaQ.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Jongha Kim", "Jihwan Park", "Jinyoung Park", "Jinyoung Kim", "Sehyung Kim", "Hyunwoo J. 
Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f455"}, "filepath": "data/2403.13171.png", "tags": [], "_media_type": "image", "_rand": 0.999657783199265, "arXiv_link": "https://arxiv.org/abs/2403.13171", "other_link": "", "title": "LUWA Dataset: Learning Lithic Use-Wear Analysis on Microscopic Images", "abstract": "Lithic Use-Wear Analysis (LUWA) using microscopic images is an underexplored\nvision-for-science research area. It seeks to distinguish the worked material,\nwhich is critical for understanding archaeological artifacts, material\ninteractions, tool functionalities, and dental records. However, this\nchallenging task goes beyond the well-studied image classification problem for\ncommon objects. It is affected by many confounders owing to the complex wear\nmechanism and microscopic imaging, which makes it difficult even for human\nexperts to identify the worked material successfully. In this paper, we\ninvestigate the following three questions on this unique vision task for the\nfirst time:(i) How well can state-of-the-art pre-trained models (like DINOv2)\ngeneralize to the rarely seen domain? (ii) How can few-shot learning be\nexploited for scarce microscopic images? (iii) How do the ambiguous\nmagnification and sensing modality influence the classification accuracy? To\nstudy these, we collaborated with archaeologists and built the first\nopen-source and the largest LUWA dataset containing 23,130 microscopic images\nwith different magnifications and sensing modalities. Extensive experiments\nshow that existing pre-trained models notably outperform human experts but\nstill leave a large gap for improvements. Most importantly, the LUWA dataset\nprovides an underexplored opportunity for vision and learning communities and\ncomplements existing image classification problems on common objects.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Jing Zhang", "Irving Fang", "Hao Wu", "Akshat Kaushik", "Alice Rodriguez", "Hanwen Zhao", "Juexiao Zhang", "Zhuo Zheng", "Radu Iovita", "Chen Feng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f456"}, "filepath": "data/2309.10314.png", "tags": [], "_media_type": "image", "_rand": 0.9996279301702617, "arXiv_link": "https://arxiv.org/abs/2309.10314", "other_link": "", "title": "Flow-Guided Online Stereo Rectification for Wide Baseline Stereo", "abstract": "Accurate estimation of stereo camera extrinsic parameters is the key to\nguarantee the performance of stereo matching algorithms. In prior arts, the\nonline self-calibration of stereo cameras has commonly been formulated as a\nspecialized visual odometry problem, without taking into account the principles\nof stereo rectification. In this paper, we first delve deeply into the concept\nof rectifying homography, which serves as the cornerstone for the development\nof our novel stereo camera online self-calibration algorithm, for cases where\nonly a single pair of images is available. Furthermore, we introduce a simple\nyet effective solution for global optimum extrinsic parameter estimation in the\npresence of stereo video sequences. 
Additionally, we emphasize the\nimpracticality of using three Euler angles and three components in the\ntranslation vectors for performance quantification. Instead, we introduce four\nnew evaluation metrics to quantify the robustness and accuracy of extrinsic\nparameter estimation, applicable to both single-pair and multi-pair cases.\nExtensive experiments conducted across indoor and outdoor environments using\nvarious experimental setups validate the effectiveness of our proposed\nalgorithm. The comprehensive evaluation results demonstrate its superior\nperformance in comparison to the baseline algorithm. Our source code, demo\nvideo, and supplement are publicly available at mias.group/StereoCalibrator.", "keywords": [], "authors_list": ["Anush Kumar", "Fahim Mannan", "Omid Hosseini Jafari", "Shile Li", "Felix Heide"], "category_name": "Robotics", "all_categories": ["Robotics", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f457"}, "filepath": "data/2402.06559.png", "tags": [], "_media_type": "image", "_rand": 0.9990586121734807, "arXiv_link": "https://arxiv.org/abs/2402.06559", "other_link": "", "title": "Diffusion-ES: Gradient-free Planning with Diffusion for Autonomous and Instruction-guided Driving", "abstract": "Diffusion models excel at modeling complex and multimodal trajectory\ndistributions for decision-making and control. Reward-gradient guided denoising\nhas been recently proposed to generate trajectories that maximize both a\ndifferentiable reward function and the likelihood under the data distribution\ncaptured by a diffusion model. Reward-gradient guided denoising requires a\ndifferentiable reward function fitted to both clean and noised samples,\nlimiting its applicability as a general trajectory optimizer. In this paper, we\npropose DiffusionES, a method that combines gradient-free optimization with\ntrajectory denoising to optimize black-box non-differentiable objectives while\nstaying in the data manifold. Diffusion-ES samples trajectories during\nevolutionary search from a diffusion model and scores them using a black-box\nreward function. It mutates high-scoring trajectories using a truncated\ndiffusion process that applies a small number of noising and denoising steps,\nallowing for much more efficient exploration of the solution space. We show\nthat DiffusionES achieves state-of-the-art performance on nuPlan, an\nestablished closed-loop planning benchmark for autonomous driving. Diffusion-ES\noutperforms existing sampling-based planners, reactive deterministic or\ndiffusion-based policies, and reward-gradient guidance. Additionally, we show\nthat unlike prior guidance methods, our method can optimize non-differentiable\nlanguage-shaped reward functions generated by few-shot LLM prompting. When\nguided by a human teacher that issues instructions to follow, our method can\ngenerate novel, highly complex behaviors, such as aggressive lane weaving,\nwhich are not present in the training data. 
This allows us to solve the hardest\nnuPlan scenarios which are beyond the capabilities of existing trajectory\noptimization methods and driving policies.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Brian Yang", "Huangyuan Su", "Nikolaos Gkanatsios", "Tsung-Wei Ke", "Ayush Jain", "Jeff Schneider", "Katerina Fragkiadaki"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Computation and Language", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f458"}, "filepath": "data/2311.01734.png", "tags": [], "_media_type": "image", "_rand": 0.9994018281294786, "arXiv_link": "https://arxiv.org/abs/2311.01734", "other_link": "https://github.com/UCSC-VLAA/MixCon3D.", "title": "Sculpting Holistic 3D Representation in Contrastive Language-Image-3D Pre-training", "abstract": "Contrastive learning has emerged as a promising paradigm for 3D open-world\nunderstanding, i.e., aligning point cloud representation to image and text\nembedding space individually. In this paper, we introduce MixCon3D, a simple\nyet effective method aiming to sculpt holistic 3D representation in contrastive\nlanguage-image-3D pre-training. In contrast to point cloud only, we develop the\n3D object-level representation from complementary perspectives, e.g.,\nmulti-view rendered images with the point cloud. Then, MixCon3D performs\nlanguage-3D contrastive learning, comprehensively depicting real-world 3D\nobjects and bolstering text alignment. Additionally, we pioneer the first\nthorough investigation of various training recipes for the 3D contrastive\nlearning paradigm, building a solid baseline with improved performance.\nExtensive experiments conducted on three representative benchmarks reveal that\nour method significantly improves over the baseline, surpassing the previous\nstate-of-the-art performance on the challenging 1,156-category Objaverse-LVIS\ndataset by 5.7%. The versatility of MixCon3D is showcased in applications such\nas text-to-3D retrieval and point cloud captioning, further evidencing its\nefficacy in diverse scenarios. The code is available at\nhttps://github.com/UCSC-VLAA/MixCon3D.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yipeng Gao", "Zeyu Wang", "Wei-Shi Zheng", "Cihang Xie", "Yuyin Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f459"}, "filepath": "data/2405.12759.png", "tags": [], "_media_type": "image", "_rand": 0.99949514721471, "arXiv_link": "https://arxiv.org/abs/2405.12759", "other_link": "https://light.princeton.edu/gatedrccbstereo/", "title": "Cross-spectral Gated-RGB Stereo Depth Estimation", "abstract": "Gated cameras flood-illuminate a scene and capture the time-gated impulse\nresponse of a scene. By employing nanosecond-scale gates, existing sensors are\ncapable of capturing mega-pixel gated images, delivering dense depth improving\non today's LiDAR sensors in spatial resolution and depth precision. Although\ngated depth estimation methods deliver a million of depth estimates per frame,\ntheir resolution is still an order below existing RGB imaging methods. 
In this\nwork, we combine high-resolution stereo HDR RCCB cameras with gated imaging,\nallowing us to exploit depth cues from active gating, multi-view RGB and\nmulti-view NIR sensing -- multi-view and gated cues across the entire spectrum.\nThe resulting capture system consists only of low-cost CMOS sensors and\nflood-illumination. We propose a novel stereo-depth estimation method that is\ncapable of exploiting these multi-modal multi-view depth cues, including the\nactive illumination that is measured by the RCCB camera when removing the\nIR-cut filter. The proposed method achieves accurate depth at long ranges,\noutperforming the next best existing method by 39% for ranges of 100 to 220m in\nMAE on accumulated LiDAR ground-truth. Our code, models and datasets are\navailable at https://light.princeton.edu/gatedrccbstereo/ .", "keywords": ["Computational imaging and physics-based vision", "Remote sensing and photogrammetry"], "authors_list": ["Samuel Brucker", "Stefanie Walz", "Mario Bijelic", "Felix Heide"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f45a"}, "filepath": "data/2311.16491.png", "tags": [], "_media_type": "image", "_rand": 0.9997066635083447, "arXiv_link": "https://arxiv.org/abs/2311.16491", "other_link": "", "title": "$\\mathcal{Z}^*$: Zero-shot $\\underline{S}$tyle $\\underline{T}$ransfer via $\\underline{A}$ttention $\\underline{R}$eweighting", "abstract": "Despite the remarkable progress in image style transfer, formulating style in\nthe context of art is inherently subjective and challenging. In contrast to\nexisting learning/tuning methods, this study shows that vanilla diffusion\nmodels can directly extract style information and seamlessly integrate the\ngenerative prior into the content image without retraining. Specifically, we\nadopt dual denoising paths to represent content/style references in latent\nspace and then guide the content image denoising process with style latent\ncodes. We further reveal that the cross-attention mechanism in latent diffusion\nmodels tends to blend the content and style images, resulting in stylized\noutputs that deviate from the original content image. To overcome this\nlimitation, we introduce a cross-attention rearrangement strategy. Through\ntheoretical analysis and experiments, we demonstrate the effectiveness and\nsuperiority of the diffusion-based $\\underline{Z}$ero-shot $\\underline{S}$tyle\n$\\underline{T}$ransfer via $\\underline{A}$ttention $\\underline{R}$earrangement,\nZ-STAR.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Yingying Deng", "Xiangyu He", "Fan Tang", "Weiming Dong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f45b"}, "filepath": "data/2310.16825.png", "tags": [], "_media_type": "image", "_rand": 0.9998399116218747, "arXiv_link": "https://arxiv.org/abs/2310.16825", "other_link": "https://github.com/mosaicml/diffusion/blob/main/assets/common-canvas.md", "title": "CommonCanvas: Open Diffusion Models Trained on Creative-Commons Images", "abstract": "We assemble a dataset of Creative-Commons-licensed (CC) images, which we use\nto train a set of open diffusion models that are qualitatively competitive with\nStable Diffusion 2 (SD2). 
This task presents two challenges: (1)\nhigh-resolution CC images lack the captions necessary to train text-to-image\ngenerative models; (2) CC images are relatively scarce. In turn, to address\nthese challenges, we use an intuitive transfer learning technique to produce a\nset of high-quality synthetic captions paired with curated CC images. We then\ndevelop a data- and compute-efficient training recipe that requires as little\nas 3% of the LAION-2B data needed to train existing SD2 models, but obtains\ncomparable quality. These results indicate that we have a sufficient number of\nCC images (~70 million) for training high-quality models. Our training recipe\nalso implements a variety of optimizations that achieve ~3X training speed-ups,\nenabling rapid model iteration. We leverage this recipe to train several\nhigh-quality text-to-image models, which we dub the CommonCanvas family. Our\nlargest model achieves comparable performance to SD2 on a human evaluation,\ndespite being trained on our CC dataset that is significantly smaller than\nLAION and using synthetic captions for training. We release our models, data,\nand code at\nhttps://github.com/mosaicml/diffusion/blob/main/assets/common-canvas.md", "keywords": ["Efficient and scalable vision"], "authors_list": ["Aaron Gokaslan", "A. Feder Cooper", "Jasmine Collins", "Landan Seguin", "Austin Jacobson", "Mihir Patel", "Jonathan Frankle", "Cory Stephenson", "Volodymyr Kuleshov"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computers and Society"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f45c"}, "filepath": "data/2404.13819.png", "tags": [], "_media_type": "image", "_rand": 0.9996075605216762, "arXiv_link": "https://arxiv.org/abs/2404.13819", "other_link": "", "title": "HOIST-Former: Hand-held Objects Identification, Segmentation, and Tracking in the Wild", "abstract": "We address the challenging task of identifying, segmenting, and tracking\nhand-held objects, which is crucial for applications such as human action\nsegmentation and performance evaluation. This task is particularly challenging\ndue to heavy occlusion, rapid motion, and the transitory nature of objects\nbeing hand-held, where an object may be held, released, and subsequently picked\nup again. To tackle these challenges, we have developed a novel\ntransformer-based architecture called HOIST-Former. HOIST-Former is adept at\nspatially and temporally segmenting hands and objects by iteratively pooling\nfeatures from each other, ensuring that the processes of identification,\nsegmentation, and tracking of hand-held objects depend on the hands' positions\nand their contextual appearance. We further refine HOIST-Former with a contact\nloss that focuses on areas where hands are in contact with objects. Moreover,\nwe also contribute an in-the-wild video dataset called HOIST, which comprises\n4,125 videos complete with bounding boxes, segmentation masks, and tracking IDs\nfor hand-held objects. 
Through experiments on the HOIST dataset and two\nadditional public datasets, we demonstrate the efficacy of HOIST-Former in\nsegmenting and tracking hand-held objects.", "keywords": [], "authors_list": ["Supreeth Narasimhaswamy", "Huy Anh Nguyen", "Lihan Huang", "Minh Hoai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f45d"}, "filepath": "data/2312.00825.png", "tags": [], "_media_type": "image", "_rand": 0.9992487548017919, "arXiv_link": "https://arxiv.org/abs/2312.00825", "other_link": "", "title": "SocialCounterfactuals: Probing and Mitigating Intersectional Social Biases in Vision-Language Models with Counterfactual Examples", "abstract": "While vision-language models (VLMs) have achieved remarkable performance\nimprovements recently, there is growing evidence that these models also posses\nharmful biases with respect to social attributes such as gender and race. Prior\nstudies have primarily focused on probing such bias attributes individually\nwhile ignoring biases associated with intersections between social attributes.\nThis could be due to the difficulty of collecting an exhaustive set of\nimage-text pairs for various combinations of social attributes. To address this\nchallenge, we employ text-to-image diffusion models to produce counterfactual\nexamples for probing intersectional social biases at scale. Our approach\nutilizes Stable Diffusion with cross attention control to produce sets of\ncounterfactual image-text pairs that are highly similar in their depiction of a\nsubject (e.g., a given occupation) while differing only in their depiction of\nintersectional social attributes (e.g., race & gender). Through our\nover-generate-then-filter methodology, we produce SocialCounterfactuals, a\nhigh-quality dataset containing 171k image-text pairs for probing\nintersectional biases related to gender, race, and physical characteristics. We\nconduct extensive experiments to demonstrate the usefulness of our generated\ndataset for probing and mitigating intersectional social biases in\nstate-of-the-art VLMs.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Phillip Howard", "Avinash Madasu", "Tiep Le", "Gustavo Lujan-Moreno", "Anahita Bhiwandiwalla", "Vasudev Lal"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f45e"}, "filepath": "data/2405.10575.png", "tags": [], "_media_type": "image", "_rand": 0.9998573891484493, "arXiv_link": "https://arxiv.org/abs/2405.10575", "other_link": "", "title": "Accurate Training Data for Occupancy Map Prediction in Automated Driving using Evidence Theory", "abstract": "Automated driving fundamentally requires knowledge about the surrounding\ngeometry of the scene. Modern approaches use only captured images to predict\noccupancy maps that represent the geometry. Training these approaches requires\naccurate data that may be acquired with the help of LiDAR scanners. We show\nthat the techniques used for current benchmarks and training datasets to\nconvert LiDAR scans into occupancy grid maps yield very low quality, and\nsubsequently present a novel approach using evidence theory that yields more\naccurate reconstructions. 
We demonstrate that these are superior by a large\nmargin, both qualitatively and quantitatively, and that we additionally obtain\nmeaningful uncertainty estimates. When converting the occupancy maps back to\ndepth estimates and comparing them with the raw LiDAR measurements, our method\nyields a MAE improvement of 30% to 52% on nuScenes and 53% on Waymo over other\noccupancy ground-truth data. Finally, we use the improved occupancy maps to\ntrain a state-of-the-art occupancy prediction method and demonstrate that it\nimproves the MAE by 25% on nuScenes.", "keywords": [], "authors_list": ["Jonas K\u00e4lble", "Sascha Wirges", "Maxim Tatarchenko", "Eddy Ilg"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f45f"}, "filepath": "data/2311.14097.png", "tags": [], "_media_type": "image", "_rand": 0.9995629204396841, "arXiv_link": "https://arxiv.org/abs/2311.14097", "other_link": "https://github.com/kong13661/ACT", "title": "ACT-Diffusion: Efficient Adversarial Consistency Training for One-step Diffusion Models", "abstract": "Though diffusion models excel in image generation, their step-by-step\ndenoising leads to slow generation speeds. Consistency training addresses this\nissue with single-step sampling but often produces lower-quality generations\nand requires high training costs. In this paper, we show that optimizing\nconsistency training loss minimizes the Wasserstein distance between target and\ngenerated distributions. As timestep increases, the upper bound accumulates\nprevious consistency training losses. Therefore, larger batch sizes are needed\nto reduce both current and accumulated losses. We propose Adversarial\nConsistency Training (ACT), which directly minimizes the Jensen-Shannon (JS)\ndivergence between distributions at each timestep using a discriminator.\nTheoretically, ACT enhances generation quality, and convergence. By\nincorporating a discriminator into the consistency training framework, our\nmethod achieves improved FID scores on CIFAR10 and ImageNet 64$\\times$64 and\nLSUN Cat 256$\\times$256 datasets, retains zero-shot image inpainting\ncapabilities, and uses less than $1/6$ of the original batch size and fewer\nthan $1/2$ of the model parameters and training steps compared to the baseline\nmethod, this leads to a substantial reduction in resource consumption. Our code\nis available:https://github.com/kong13661/ACT", "keywords": ["Efficient and scalable vision"], "authors_list": ["Fei Kong", "Jinhao Duan", "Lichao Sun", "Hao Cheng", "Renjing Xu", "Heng Tao Shen", "Xiaofeng Zhu", "Xiaoshuang Shi", "Kaidi Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f460"}, "filepath": "data/2403.20142.png", "tags": [], "_media_type": "image", "_rand": 0.99914546208468, "arXiv_link": "https://arxiv.org/abs/2403.20142", "other_link": "https://github.com/sian-wusidi/StegoGAN.", "title": "StegoGAN: Leveraging Steganography for Non-Bijective Image-to-Image Translation", "abstract": "Most image-to-image translation models postulate that a unique correspondence\nexists between the semantic classes of the source and target domains. 
However,\nthis assumption does not always hold in real-world scenarios due to divergent\ndistributions, different class sets, and asymmetrical information\nrepresentation. As conventional GANs attempt to generate images that match the\ndistribution of the target domain, they may hallucinate spurious instances of\nclasses absent from the source domain, thereby diminishing the usefulness and\nreliability of translated images. CycleGAN-based methods are also known to hide\nthe mismatched information in the generated images to bypass cycle consistency\nobjectives, a process known as steganography. In response to the challenge of\nnon-bijective image translation, we introduce StegoGAN, a novel model that\nleverages steganography to prevent spurious features in generated images. Our\napproach enhances the semantic consistency of the translated images without\nrequiring additional postprocessing or supervision. Our experimental\nevaluations demonstrate that StegoGAN outperforms existing GAN-based models\nacross various non-bijective image-to-image translation tasks, both\nqualitatively and quantitatively. Our code and pretrained models are accessible\nat https://github.com/sian-wusidi/StegoGAN.", "keywords": [], "authors_list": ["Sidi Wu", "Yizi Chen", "Loic Landrieu", "Nicolas Gonthier", "Samuel Mermet", "Lorenz Hurni", "Konrad Schindler"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f461"}, "filepath": "data/2312.12418v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991643316060794, "arXiv_link": "https://arxiv.org/html/2312.12418v1", "other_link": "", "title": "LASA: Instance Reconstruction from Real Scans using A Large-scale Aligned Shape Annotation Dataset", "abstract": "Instance shape reconstruction from a 3D scene involves recovering the full\ngeometries of multiple objects at the semantic instance level. Many methods\nleverage data-driven learning due to the intricacies of scene complexity and\nsignificant indoor occlusions. Training these methods often requires a\nlarge-scale, high-quality dataset with aligned and paired shape annotations\nwith real-world scans. Existing datasets are either synthetic or misaligned,\nrestricting the performance of data-driven methods on real data. To this end,\nwe introduce LASA, a Large-scale Aligned Shape Annotation Dataset comprising\n10,412 high-quality CAD annotations aligned with 920 real-world scene scans\nfrom ArkitScenes, created manually by professional artists. On this top, we\npropose a novel Diffusion-based Cross-Modal Shape Reconstruction (DisCo)\nmethod. It is empowered by a hybrid feature aggregation design to fuse\nmulti-modal inputs and recover high-fidelity object geometries. Besides, we\npresent an Occupancy-Guided 3D Object Detection (OccGOD) method and demonstrate\nthat our shape annotations provide scene occupancy clues that can further\nimprove 3D object detection. 
Supported by LASA, extensive experiments show that\nour methods achieve state-of-the-art performance in both instance-level scene\nreconstruction and 3D object detection tasks.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Haolin Liu", "Chongjie Ye", "Yinyu Nie", "Yingfan He", "Xiaoguang Han"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f462"}, "filepath": "data/2312.00065.png", "tags": [], "_media_type": "image", "_rand": 0.9997192480518043, "arXiv_link": "https://arxiv.org/abs/2312.00065", "other_link": "https://ubc-vision.github.io/StableKeypoints/", "title": "Unsupervised Keypoints from Pretrained Diffusion Models", "abstract": "Unsupervised learning of keypoints and landmarks has seen significant\nprogress with the help of modern neural network architectures, but performance\nis yet to match the supervised counterpart, making their practicability\nquestionable. We leverage the emergent knowledge within text-to-image diffusion\nmodels, towards more robust unsupervised keypoints. Our core idea is to find\ntext embeddings that would cause the generative model to consistently attend to\ncompact regions in images (i.e. keypoints). To do so, we simply optimize the\ntext embedding such that the cross-attention maps within the denoising network\nare localized as Gaussians with small standard deviations. We validate our\nperformance on multiple datasets: the CelebA, CUB-200-2011, Tai-Chi-HD,\nDeepFashion, and Human3.6m datasets. We achieve significantly improved\naccuracy, sometimes even outperforming supervised ones, particularly for data\nthat is non-aligned and less curated. Our code is publicly available and can be\nfound through our project page: https://ubc-vision.github.io/StableKeypoints/", "keywords": [], "authors_list": ["Eric Hedlin", "Gopal Sharma", "Shweta Mahajan", "Xingzhe He", "Hossam Isack", "Abhishek Kar", "Helge Rhodin", "Andrea Tagliasacchi", "Kwang Moo Yi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f463"}, "filepath": "data/2308.01557.png", "tags": [], "_media_type": "image", "_rand": 0.9993898654983752, "arXiv_link": "http://export.arxiv.org/abs/2308.01557", "other_link": "", "title": "READ: Retrieval-Enhanced Asymmetric Diffusion for Motion Planning", "abstract": "Learning priors on trajectory distributions can help accelerate robot motion\nplanning optimization. Given previously successful plans, learning trajectory\ngenerative models as priors for a new planning problem is highly desirable.\nPrior works propose several ways on utilizing this prior to bootstrapping the\nmotion planning problem. Either sampling the prior for initializations or using\nthe prior distribution in a maximum-a-posterior formulation for trajectory\noptimization. In this work, we propose learning diffusion models as priors. We\nthen can sample directly from the posterior trajectory distribution conditioned\non task goals, by leveraging the inverse denoising process of diffusion models.\nFurthermore, diffusion has been recently shown to effectively encode data\nmultimodality in high-dimensional settings, which is particularly well-suited\nfor large trajectory dataset. 
To demonstrate our method efficacy, we compare\nour proposed method - Motion Planning Diffusion - against several baselines in\nsimulated planar robot and 7-dof robot arm manipulator environments. To assess\nthe generalization capabilities of our method, we test it in environments with\npreviously unseen obstacles. Our experiments show that diffusion models are\nstrong priors to encode high-dimensional trajectory distributions of robot\nmotions.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Takeru Oba", "Matthew Walter", "Norimichi Ukita"], "category_name": "Robotics", "all_categories": ["Robotics", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f464"}, "filepath": "data/2404.00546.png", "tags": [], "_media_type": "image", "_rand": 0.999424042629985, "arXiv_link": "https://arxiv.org/abs/2404.00546", "other_link": "", "title": "On the Estimation of Image-matching Uncertainty in Visual Place Recognition", "abstract": "In Visual Place Recognition (VPR) the pose of a query image is estimated by\ncomparing the image to a map of reference images with known reference poses. As\nis typical for image retrieval problems, a feature extractor maps the query and\nreference images to a feature space, where a nearest neighbor search is then\nperformed. However, till recently little attention has been given to\nquantifying the confidence that a retrieved reference image is a correct match.\nHighly certain but incorrect retrieval can lead to catastrophic failure of\nVPR-based localization pipelines. This work compares for the first time the\nmain approaches for estimating the image-matching uncertainty, including the\ntraditional retrieval-based uncertainty estimation, more recent data-driven\naleatoric uncertainty estimation, and the compute-intensive geometric\nverification. We further formulate a simple baseline method, ``SUE'', which\nunlike the other methods considers the freely-available poses of the reference\nimages in the map. Our experiments reveal that a simple L2-distance between the\nquery and reference descriptors is already a better estimate of image-matching\nuncertainty than current data-driven approaches. SUE outperforms the other\nefficient uncertainty estimation methods, and its uncertainty estimates\ncomplement the computationally expensive geometric verification approach.\nFuture works for uncertainty estimation in VPR should consider the baselines\ndiscussed in this work.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Mubariz Zaffar", "Liangliang Nan", "Julian F. P. Kooij"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f465"}, "filepath": "data/2402.16846.png", "tags": [], "_media_type": "image", "_rand": 0.9993190570003351, "arXiv_link": "https://arxiv.org/abs/2402.16846", "other_link": "", "title": "GROUNDHOG: Grounding Large Language Models to Holistic Segmentation", "abstract": "Most multimodal large language models (MLLMs) learn language-to-object\ngrounding through causal language modeling where grounded objects are captured\nby bounding boxes as sequences of location tokens. This paradigm lacks\npixel-level representations that are important for fine-grained visual\nunderstanding and diagnosis. 
In this work, we introduce GROUNDHOG, an MLLM\ndeveloped by grounding Large Language Models to holistic segmentation.\nGROUNDHOG incorporates a masked feature extractor and converts extracted\nfeatures into visual entity tokens for the MLLM backbone, which then connects\ngroundable phrases to unified grounding masks by retrieving and merging the\nentity masks. To train GROUNDHOG, we carefully curated M3G2, a grounded visual\ninstruction tuning dataset with Multi-Modal Multi-Grained Grounding, by\nharvesting a collection of segmentation-grounded datasets with rich\nannotations. Our experimental results show that GROUNDHOG achieves superior\nperformance on various language grounding tasks without task-specific\nfine-tuning, and significantly reduces object hallucination. GROUNDHOG also\ndemonstrates better grounding towards complex forms of visual input and\nprovides easy-to-understand diagnosis in failure cases.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Yichi Zhang", "Ziqiao Ma", "Xiaofeng Gao", "Suhaila Shakiah", "Qiaozi Gao", "Joyce Chai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f466"}, "filepath": "data/2312.12423.png", "tags": [], "_media_type": "image", "_rand": 0.9997563630742151, "arXiv_link": "https://arxiv.org/abs/2312.12423", "other_link": "https://shramanpramanick.github.io/VistaLLM/.", "title": "Jack of All Tasks, Master of Many: Designing General-purpose Coarse-to-Fine Vision-Language Model", "abstract": "The ability of large language models (LLMs) to process visual inputs has\ngiven rise to general-purpose vision systems, unifying various vision-language\n(VL) tasks by instruction tuning. However, due to the enormous diversity in\ninput-output formats in the vision domain, existing general-purpose models fail\nto successfully integrate segmentation and multi-image inputs with coarse-level\ntasks into a single framework. In this work, we introduce VistaLLM, a powerful\nvisual system that addresses coarse- and fine-grained VL tasks over single and\nmultiple input images using a unified framework. VistaLLM utilizes an\ninstruction-guided image tokenizer that filters global embeddings using task\ndescriptions to extract compressed and refined features from numerous images.\nMoreover, VistaLLM employs a gradient-aware adaptive sampling technique to\nrepresent binary segmentation masks as sequences, significantly improving over\npreviously used uniform sampling. To bolster the desired capability of\nVistaLLM, we curate CoinIt, a comprehensive coarse-to-fine instruction tuning\ndataset with 6.8M samples. We also address the lack of multi-image grounding\ndatasets by introducing a novel task, AttCoSeg (Attribute-level\nCo-Segmentation), which boosts the model's reasoning and grounding capability\nover multiple input images. 
Extensive experiments on a wide range of V- and VL\ntasks demonstrate the effectiveness of VistaLLM by achieving consistent\nstate-of-the-art performance over strong baselines across all downstream tasks.\nOur project page can be found at https://shramanpramanick.github.io/VistaLLM/.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Shraman Pramanick", "Guangxing Han", "Rui Hou", "Sayan Nag", "Ser-Nam Lim", "Nicolas Ballas", "Qifan Wang", "Rama Chellappa", "Amjad Almahairi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f467"}, "filepath": "data/2403.01619.png", "tags": [], "_media_type": "image", "_rand": 0.9998155684394847, "arXiv_link": "https://arxiv.org/abs/2403.01619", "other_link": "", "title": "Spectrum AUC Difference (SAUCD): Human Aligned 3D Shape Evaluation", "abstract": "Existing 3D mesh shape evaluation metrics mainly focus on the overall shape\nbut are usually less sensitive to local details. This makes them inconsistent\nwith human evaluation, as human perception cares about both overall and\ndetailed shape. In this paper, we propose an analytic metric named Spectrum\nArea Under the Curve Difference (SAUCD) that demonstrates better consistency\nwith human evaluation. To compare the difference between two shapes, we first\ntransform the 3D mesh to the spectrum domain using the discrete\nLaplace-Beltrami operator and Fourier transform. Then, we calculate the Area\nUnder the Curve (AUC) difference between the two spectrums, so that each\nfrequency band that captures either the overall or detailed shape is equitably\nconsidered. Taking human sensitivity across frequency bands into account, we\nfurther extend our metric by learning suitable weights for each frequency band\nwhich better aligns with human perception. To measure the performance of SAUCD,\nwe build a 3D mesh evaluation dataset called Shape Grading, along with manual\nannotations from more than 800 subjects. By measuring the correlation between\nour metric and human evaluation, we demonstrate that SAUCD is well aligned with\nhuman evaluation, and outperforms previous 3D mesh metrics.", "keywords": [], "authors_list": ["Tianyu Luan", "Zhong Li", "Lele Chen", "Xuan Gong", "Lichang Chen", "Yi Xu", "Junsong Yuan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f468"}, "filepath": "data/2403.01818.png", "tags": [], "_media_type": "image", "_rand": 0.9997446733253049, "arXiv_link": "https://arxiv.org/abs/2403.01818", "other_link": "https://github.com/xmed-lab/AllSpark.", "title": "AllSpark: Reborn Labeled Features from Unlabeled in Transformer for Semi-Supervised Semantic Segmentation", "abstract": "Semi-supervised semantic segmentation (SSSS) has been proposed to alleviate\nthe burden of time-consuming pixel-level manual labeling, which leverages\nlimited labeled data along with larger amounts of unlabeled data. Current\nstate-of-the-art methods train the labeled data with ground truths and\nunlabeled data with pseudo labels. However, the two training flows are\nseparate, which allows labeled data to dominate the training process, resulting\nin low-quality pseudo labels and, consequently, sub-optimal results. 
To\nalleviate this issue, we present AllSpark, which reborns the labeled features\nfrom unlabeled ones with the channel-wise cross-attention mechanism. We further\nintroduce a Semantic Memory along with a Channel Semantic Grouping strategy to\nensure that unlabeled features adequately represent labeled features. The\nAllSpark shed new light on the architecture level designs of SSSS rather than\nframework level, which avoids increasingly complicated training pipeline\ndesigns. It can also be regarded as a flexible bottleneck module that can be\nseamlessly integrated into a general transformer-based segmentation model. The\nproposed AllSpark outperforms existing methods across all evaluation protocols\non Pascal, Cityscapes and COCO benchmarks without bells-and-whistles. Code and\nmodel weights are available at: https://github.com/xmed-lab/AllSpark.", "keywords": [], "authors_list": ["Haonan Wang", "Qixiang ZHANG", "Yi Li", "Xiaomeng Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f469"}, "filepath": "data/2403.06862.png", "tags": [], "_media_type": "image", "_rand": 0.9990467601123599, "arXiv_link": "https://arxiv.org/abs/2403.06862", "other_link": "", "title": "Real-Time Simulated Avatar from Head-Mounted Sensors", "abstract": "We present SimXR, a method for controlling a simulated avatar from\ninformation (headset pose and cameras) obtained from AR / VR headsets. Due to\nthe challenging viewpoint of head-mounted cameras, the human body is often\nclipped out of view, making traditional image-based egocentric pose estimation\nchallenging. On the other hand, headset poses provide valuable information\nabout overall body motion, but lack fine-grained details about the hands and\nfeet. To synergize headset poses with cameras, we control a humanoid to track\nheadset movement while analyzing input images to decide body movement. When\nbody parts are seen, the movements of hands and feet will be guided by the\nimages; when unseen, the laws of physics guide the controller to generate\nplausible motion. We design an end-to-end method that does not rely on any\nintermediate representations and learns to directly map from images and headset\nposes to humanoid control signals. To train our method, we also propose a\nlarge-scale synthetic dataset created using camera configurations compatible\nwith a commercially available VR headset (Quest 2) and show promising results\non real-world captures. 
To demonstrate the applicability of our framework, we\nalso test it on an AR headset with a forward-facing camera.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Zhengyi Luo", "Jinkun Cao", "Rawal Khirodkar", "Alexander Winkler", "Jing Huang", "Kris Kitani", "Weipeng Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f46a"}, "filepath": "data/2306.15755.png", "tags": [], "_media_type": "image", "_rand": 0.9995434848622851, "arXiv_link": "https://arxiv.org/abs/2306.15755", "other_link": "", "title": "Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving", "abstract": "In autonomous driving, behavior prediction is fundamental for safe motion\nplanning, hence the security and robustness of prediction models against\nadversarial attacks are of paramount importance. We propose a novel adversarial\nbackdoor attack against trajectory prediction models as a means of studying\ntheir potential vulnerabilities. Our attack affects the victim at training time\nvia naturalistic, hence stealthy, poisoned samples crafted using a novel\ntwo-step approach. First, the triggers are crafted by perturbing the trajectory\nof attacking vehicle and then disguised by transforming the scene using a\nbi-level optimization technique. The proposed attack does not depend on a\nparticular model architecture and operates in a black-box manner, thus can be\neffective without any knowledge of the victim model. We conduct extensive\nempirical studies using state-of-the-art prediction models on two benchmark\ndatasets using metrics customized for trajectory prediction. We show that the\nproposed attack is highly effective, as it can significantly hinder the\nperformance of prediction models, unnoticeable by the victims, and efficient as\nit forces the victim to generate malicious behavior even under constrained\nconditions. Via ablative studies, we analyze the impact of different attack\ndesign choices followed by an evaluation of existing defence mechanisms against\nthe proposed attack.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Mozhgan Pourkeshavarz", "Mohammad Sabokrou", "Amir Rasouli"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f46b"}, "filepath": "data/2311.13657.png", "tags": [], "_media_type": "image", "_rand": 0.9993002754526712, "arXiv_link": "https://arxiv.org/abs/2311.13657", "other_link": "", "title": "KD-DETR: Knowledge Distillation for Detection Transformer with Consistent Distillation Points Sampling", "abstract": "As pretrained transformer language models continue to achieve\nstate-of-the-art performance, the Natural Language Processing community has\npushed for advances in model compression and efficient attention mechanisms to\naddress high computational requirements and limited input sequence length.\nDespite these separate efforts, no investigation has been done into the\nintersection of these two fields. In this work, we provide an evaluation of\nmodel compression via knowledge distillation on efficient attention\ntransformers. 
We provide cost-performance trade-offs for the compression of\nstate-of-the-art efficient attention architectures and the gains made in\nperformance in comparison to their full attention counterparts. Furthermore, we\nintroduce a new long-context Named Entity Recognition dataset, GONERD, to train\nand test the performance of NER models on long sequences. We find that\ndistilled efficient attention transformers can preserve a significant amount of\noriginal model performance, preserving up to 98.6% across short-context tasks\n(GLUE, SQUAD, CoNLL-2003), up to 94.6% across long-context\nQuestion-and-Answering tasks (HotpotQA, TriviaQA), and up to 98.8% on\nlong-context Named Entity Recognition (GONERD), while decreasing inference\ntimes by up to 57.8%. We find that, for most models on most tasks, performing\nknowledge distillation is an effective method to yield high-performing\nefficient attention models with low costs.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yu Wang", "Xin Li", "Shengzhao Wen", "gang zhang", "Haixiao Yue", "Haocheng Feng", "Junyu Han", "Errui Ding"], "category_name": "Computation and Language", "all_categories": ["Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f46c"}, "filepath": "data/2402.05917v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996440630277943, "arXiv_link": "https://arxiv.org/abs/2402.05917v1", "other_link": "https://pointvos.github.io.", "title": "Point-VOS: Pointing Up Video Object Segmentation", "abstract": "Current state-of-the-art Video Object Segmentation (VOS) methods rely on\ndense per-object mask annotations both during training and testing. This\nrequires time-consuming and costly video annotation mechanisms. We propose a\nnovel Point-VOS task with a spatio-temporally sparse point-wise annotation\nscheme that substantially reduces the annotation effort. We apply our\nannotation scheme to two large-scale video datasets with text descriptions and\nannotate over 19M points across 133K objects in 32K videos. Based on our\nannotations, we propose a new Point-VOS benchmark, and a corresponding\npoint-based training mechanism, which we use to establish strong baseline\nresults. We show that existing VOS methods can easily be adapted to leverage\nour point annotations during training, and can achieve results close to the\nfully-supervised performance when trained on pseudo-masks generated from these\npoints. In addition, we show that our data can be used to improve models that\nconnect vision and language, by evaluating it on the Video Narrative Grounding\n(VNG) task. 
We will make our code and annotations available at\nhttps://pointvos.github.io.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Sabarinath Mahadevan", "Idil Esen Zulfikar", "Paul Voigtlaender", "Bastian Leibe"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f46d"}, "filepath": "data/2306.14435.png", "tags": [], "_media_type": "image", "_rand": 0.9997999860640426, "arXiv_link": "https://arxiv.org/abs/2306.14435", "other_link": "https://github.com/Yujun-Shi/DragDiffusion.", "title": "DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing", "abstract": "Accurate and controllable image editing is a challenging task that has\nattracted significant attention recently. Notably, DragGAN is an interactive\npoint-based image editing framework that achieves impressive editing results\nwith pixel-level precision. However, due to its reliance on generative\nadversarial networks (GANs), its generality is limited by the capacity of\npretrained GAN models. In this work, we extend this editing framework to\ndiffusion models and propose a novel approach DragDiffusion. By harnessing\nlarge-scale pretrained diffusion models, we greatly enhance the applicability\nof interactive point-based editing on both real and diffusion-generated images.\nOur approach involves optimizing the diffusion latents to achieve precise\nspatial control. The supervision signal of this optimization process is from\nthe diffusion model's UNet features, which are known to contain rich semantic\nand geometric information. Moreover, we introduce two additional techniques,\nnamely LoRA fine-tuning and latent-MasaCtrl, to further preserve the identity\nof the original image. Lastly, we present a challenging benchmark dataset\ncalled DragBench -- the first benchmark to evaluate the performance of\ninteractive point-based image editing methods. Experiments across a wide range\nof challenging cases (e.g., images with multiple objects, diverse object\ncategories, various styles, etc.) demonstrate the versatility and generality of\nDragDiffusion. Code: https://github.com/Yujun-Shi/DragDiffusion.", "keywords": [], "authors_list": ["Yujun Shi", "Chuhui Xue", "Jun Hao Liew", "Jiachun Pan", "Hanshu Yan", "Wenqing Zhang", "Vincent Y. F. Tan", "Song Bai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f46e"}, "filepath": "data/2401.04727.png", "tags": [], "_media_type": "image", "_rand": 0.9998220577244109, "arXiv_link": "https://arxiv.org/abs/2401.04727", "other_link": "https://github.com/UCSC-VLAA/AdvXL.", "title": "Revisiting Adversarial Training at Scale", "abstract": "The machine learning community has witnessed a drastic change in the training\npipeline, pivoted by those ''foundation models'' with unprecedented scales.\nHowever, the field of adversarial training is lagging behind, predominantly\ncentered around small model sizes like ResNet-50, and tiny and low-resolution\ndatasets like CIFAR-10. To bridge this transformation gap, this paper provides\na modern re-examination with adversarial training, investigating its potential\nbenefits when applied at scale. 
Additionally, we introduce an efficient and\neffective training strategy to enable adversarial training with giant models\nand web-scale data at an affordable computing cost. We denote this newly\nintroduced framework as AdvXL.\n Empirical results demonstrate that AdvXL establishes new state-of-the-art\nrobust accuracy records under AutoAttack on ImageNet-1K. For example, by\ntraining on DataComp-1B dataset, our AdvXL empowers a vanilla ViT-g model to\nsubstantially surpass the previous records of $l_{\\infty}$-, $l_{2}$-, and\n$l_{1}$-robust accuracy by margins of 11.4%, 14.2% and 12.9%, respectively.\nThis achievement posits AdvXL as a pioneering approach, charting a new\ntrajectory for the efficient training of robust visual representations at\nsignificantly larger scales. Our code is available at\nhttps://github.com/UCSC-VLAA/AdvXL.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Zeyu Wang", "Xianhang li", "Hongru Zhu", "Cihang Xie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f46f"}, "filepath": "data/2404.11884.png", "tags": [], "_media_type": "image", "_rand": 0.9995561712939216, "arXiv_link": "https://arxiv.org/abs/2404.11884", "other_link": "https://github.com/Liu-haoyue/NER-Net.", "title": "Seeing Motion at Nighttime with an Event Camera", "abstract": "We focus on a very challenging task: imaging at nighttime dynamic scenes.\nMost previous methods rely on the low-light enhancement of a conventional RGB\ncamera. However, they would inevitably face a dilemma between the long exposure\ntime of nighttime and the motion blur of dynamic scenes. Event cameras react to\ndynamic changes with higher temporal resolution (microsecond) and higher\ndynamic range (120dB), offering an alternative solution. In this work, we\npresent a novel nighttime dynamic imaging method with an event camera.\nSpecifically, we discover that the event at nighttime exhibits temporal\ntrailing characteristics and spatial non-stationary distribution. Consequently,\nwe propose a nighttime event reconstruction network (NER-Net) which mainly\nincludes a learnable event timestamps calibration module (LETC) to align the\ntemporal trailing events and a non-uniform illumination aware module (NIAM) to\nstabilize the spatiotemporal distribution of events. Moreover, we construct a\npaired real low-light event dataset (RLED) through a co-axial imaging system,\nincluding 64,200 spatially and temporally aligned image GTs and low-light\nevents. Extensive experiments demonstrate that the proposed method outperforms\nstate-of-the-art methods in terms of visual quality and generalization ability\non real-world nighttime datasets. 
The project is available at:\nhttps://github.com/Liu-haoyue/NER-Net.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Haoyue Liu", "Shihan Peng", "Lin Zhu", "Yi Chang", "Hanyu Zhou", "Luxin Yan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f470"}, "filepath": "data/2405.09879.png", "tags": [], "_media_type": "image", "_rand": 0.9992512377604243, "arXiv_link": "https://arxiv.org/abs/2405.09879", "other_link": "https://github.com/KHU-AGI/GUIDE.", "title": "Generative Unlearning for Any Identity", "abstract": "Recent advances in generative models trained on large-scale datasets have\nmade it possible to synthesize high-quality samples across various domains.\nMoreover, the emergence of strong inversion networks enables not only a\nreconstruction of real-world images but also the modification of attributes\nthrough various editing methods. However, in certain domains related to privacy\nissues, e.g., human faces, advanced generative models along with strong\ninversion methods can lead to potential misuses. In this paper, we propose an\nessential yet under-explored task called generative identity unlearning, which\nsteers the model not to generate an image of a specific identity. In the\ngenerative identity unlearning, we target the following objectives: (i)\npreventing the generation of images with a certain identity, and (ii)\npreserving the overall quality of the generative model. To satisfy these goals,\nwe propose a novel framework, Generative Unlearning for Any Identity (GUIDE),\nwhich prevents the reconstruction of a specific identity by unlearning the\ngenerator with only a single image. GUIDE consists of two parts: (i) finding a\ntarget point for optimization that un-identifies the source latent code and\n(ii) novel loss functions that facilitate the unlearning procedure while less\naffecting the learned distribution. Our extensive experiments demonstrate that\nour proposed method achieves state-of-the-art performance in the generative\nmachine unlearning task. The code is available at\nhttps://github.com/KHU-AGI/GUIDE.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Juwon Seo", "Sung-Hoon Lee", "Tae-Young Lee", "SeungJun Moon", "Gyeong-Moon Park"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f471"}, "filepath": "data/2402.09181.png", "tags": [], "_media_type": "image", "_rand": 0.9998339308530686, "arXiv_link": "https://arxiv.org/abs/2402.09181", "other_link": "https://github.com/OpenGVLab/Multi-Modality-Arena.", "title": "OmniMedVQA: A New Large-Scale Comprehensive Evaluation Benchmark for Medical LVLM", "abstract": "Large Vision-Language Models (LVLMs) have demonstrated remarkable\ncapabilities in various multimodal tasks. However, their potential in the\nmedical domain remains largely unexplored. A significant challenge arises from\nthe scarcity of diverse medical images spanning various modalities and\nanatomical regions, which is essential in real-world medical applications. To\nsolve this problem, in this paper, we introduce OmniMedVQA, a novel\ncomprehensive medical Visual Question Answering (VQA) benchmark. 
This benchmark\nis collected from 73 different medical datasets, including 12 different\nmodalities and covering more than 20 distinct anatomical regions. Importantly,\nall images in this benchmark are sourced from authentic medical scenarios,\nensuring alignment with the requirements of the medical field and suitability\nfor evaluating LVLMs. Through our extensive experiments, we have found that\nexisting LVLMs struggle to address these medical VQA problems effectively.\nMoreover, what surprises us is that medical-specialized LVLMs even exhibit\ninferior performance to those general-domain models, calling for a more\nversatile and robust LVLM in the biomedical field. The evaluation results not\nonly reveal the current limitations of LVLM in understanding real medical\nimages but also highlight our dataset's significance. Our code with dataset are\navailable at https://github.com/OpenGVLab/Multi-Modality-Arena.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Yutao Hu", "Yutao Hu", "Tianbin", "Quanfeng Lu", "Wenqi Shao", "Junjun He", "Yu Qiao", "Ping Luo"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f472"}, "filepath": "data/2312.00785.png", "tags": [], "_media_type": "image", "_rand": 0.9992568799644248, "arXiv_link": "https://arxiv.org/abs/2312.00785", "other_link": "", "title": "Sequential Modeling Enables Scalable Learning for Large Vision Models", "abstract": "We introduce a novel sequential modeling approach which enables learning a\nLarge Vision Model (LVM) without making use of any linguistic data. To do this,\nwe define a common format, \"visual sentences\", in which we can represent raw\nimages and videos as well as annotated data sources such as semantic\nsegmentations and depth reconstructions without needing any meta-knowledge\nbeyond the pixels. Once this wide variety of visual data (comprising 420\nbillion tokens) is represented as sequences, the model can be trained to\nminimize a cross-entropy loss for next token prediction. By training across\nvarious scales of model architecture and data diversity, we provide empirical\nevidence that our models scale effectively. Many different vision tasks can be\nsolved by designing suitable visual prompts at test time.", "keywords": ["Efficient and scalable vision", "Large multimodal models and prompting techniques"], "authors_list": ["Yutong Bai", "Xinyang Geng", "Xinyang Geng", "Karttikeya Mangalam", "Amir Bar", "Alan L. Yuille", "Trevor Darrell", "Jitendra Malik", "Alexei A. Efros"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f473"}, "filepath": "data/2307.00522.png", "tags": [], "_media_type": "image", "_rand": 0.9996568709950318, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2307.00522", "other_link": "", "title": "An edit friendly ddpm noise space: inversion and manipulations", "abstract": "Recent large-scale text-guided diffusion models provide powerful\nimage-generation capabilities. Currently, a significant effort is given to\nenable the modification of these images using text only as means to offer\nintuitive and versatile editing. 
However, editing proves to be difficult for\nthese generative models due to the inherent nature of editing techniques, which\ninvolves preserving certain content from the original image. Conversely, in\ntext-based models, even minor modifications to the text prompt frequently\nresult in an entirely distinct result, making attaining one-shot generation\nthat accurately corresponds to the users intent exceedingly challenging. In\naddition, to edit a real image using these state-of-the-art tools, one must\nfirst invert the image into the pre-trained models domain - adding another\nfactor affecting the edit quality, as well as latency. In this exploratory\nreport, we propose LEDITS - a combined lightweight approach for real-image\nediting, incorporating the Edit Friendly DDPM inversion technique with Semantic\nGuidance, thus extending Semantic Guidance to real image editing, while\nharnessing the editing capabilities of DDPM inversion as well. This approach\nachieves versatile edits, both subtle and extensive as well as alterations in\ncomposition and style, while requiring no optimization nor extensions to the\narchitecture.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Inbar Huberman-Spiegelglas", "Vladimir Kulikov", "Tomer Michaeli"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f474"}, "filepath": "data/2404.03650v1.png", "tags": [], "_media_type": "image", "_rand": 0.999636883580826, "arXiv_link": "https://arxiv.org/html/2404.03650v1", "other_link": "", "title": "SceneFun3D: Fine-Grained Functionality and Affordance Understanding in 3D Scenes", "abstract": "Large visual-language models (VLMs), like CLIP, enable open-set image\nsegmentation to segment arbitrary concepts from an image in a zero-shot manner.\nThis goes beyond the traditional closed-set assumption, i.e., where models can\nonly segment classes from a pre-defined training set. More recently, first\nworks on open-set segmentation in 3D scenes have appeared in the literature.\nThese methods are heavily influenced by closed-set 3D convolutional approaches\nthat process point clouds or polygon meshes. However, these 3D scene\nrepresentations do not align well with the image-based nature of the\nvisual-language models. Indeed, point cloud and 3D meshes typically have a\nlower resolution than images and the reconstructed 3D scene geometry might not\nproject well to the underlying 2D image sequences used to compute pixel-aligned\nCLIP features. To address these challenges, we propose OpenNeRF which naturally\noperates on posed images and directly encodes the VLM features within the NeRF.\nThis is similar in spirit to LERF, however our work shows that using pixel-wise\nVLM features (instead of global CLIP features) results in an overall less\ncomplex architecture without the need for additional DINO regularization. Our\nOpenNeRF further leverages NeRF's ability to render novel views and extract\nopen-set VLM features from areas that are not well observed in the initial\nposed images. 
For 3D point cloud segmentation on the Replica dataset, OpenNeRF\noutperforms recent open-vocabulary methods such as LERF and OpenScene by at\nleast +4.9 mIoU.", "keywords": ["Multimodal models and vision-language models", "Scene analysis and understanding"], "authors_list": ["Alexandros Delitzas", "Ay\u00e7a Takmaz", "Federico Tombari", "Robert Sumner", "Marc Pollefeys", "Francis Engelmann"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f475"}, "filepath": "data/2312.03203.png", "tags": [], "_media_type": "image", "_rand": 0.999309407874265, "arXiv_link": "https://arxiv.org/abs/2312.03203", "other_link": "https://feature-3dgs.github.io/", "title": "Feature 3DGS: Supercharging 3D Gaussian Splatting to Enable Distilled Feature Fields", "abstract": "3D scene representations have gained immense popularity in recent years.\nMethods that use Neural Radiance fields are versatile for traditional tasks\nsuch as novel view synthesis. In recent times, some work has emerged that aims\nto extend the functionality of NeRF beyond view synthesis, for semantically\naware tasks such as editing and segmentation using 3D feature field\ndistillation from 2D foundation models. However, these methods have two major\nlimitations: (a) they are limited by the rendering speed of NeRF pipelines, and\n(b) implicitly represented feature fields suffer from continuity artifacts\nreducing feature quality. Recently, 3D Gaussian Splatting has shown\nstate-of-the-art performance on real-time radiance field rendering. In this\nwork, we go one step further: in addition to radiance field rendering, we\nenable 3D Gaussian splatting on arbitrary-dimension semantic features via 2D\nfoundation model distillation. This translation is not straightforward: naively\nincorporating feature fields in the 3DGS framework encounters significant\nchallenges, notably the disparities in spatial resolution and channel\nconsistency between RGB images and feature maps. We propose architectural and\ntraining changes to efficiently avert this problem. Our proposed method is\ngeneral, and our experiments showcase novel view semantic segmentation,\nlanguage-guided editing and segment anything through learning feature fields\nfrom state-of-the-art 2D foundation models such as SAM and CLIP-LSeg. Across\nexperiments, our distillation method is able to provide comparable or better\nresults, while being significantly faster to both train and render.\nAdditionally, to the best of our knowledge, we are the first method to enable\npoint and bounding-box prompting for radiance field manipulation, by leveraging\nthe SAM model. 
Project website at: https://feature-3dgs.github.io/", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Shijie Zhou", "Haoran Chang", "Sicheng Jiang", "Zhiwen Fan", "Zehao Zhu", "Dejia Xu", "Dejia Xu", "Pradyumna Chari", "Suya You", "Zhangyang Wang", "Achuta Kadambi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f476"}, "filepath": "data/2401.00909.png", "tags": [], "_media_type": "image", "_rand": 0.9998309463165993, "arXiv_link": "https://arxiv.org/abs/2401.00909", "other_link": "", "title": "Taming Mode Collapse in Score Distillation for Text-to-3D Generation", "abstract": "Despite the remarkable performance of score distillation in text-to-3D\ngeneration, such techniques notoriously suffer from view inconsistency issues,\nalso known as \"Janus\" artifact, where the generated objects fake each view with\nmultiple front faces. Although empirically effective methods have approached\nthis problem via score debiasing or prompt engineering, a more rigorous\nperspective to explain and tackle this problem remains elusive. In this paper,\nwe reveal that the existing score distillation-based text-to-3D generation\nframeworks degenerate to maximal likelihood seeking on each view independently\nand thus suffer from the mode collapse problem, manifesting as the Janus\nartifact in practice. To tame mode collapse, we improve score distillation by\nre-establishing the entropy term in the corresponding variational objective,\nwhich is applied to the distribution of rendered images. Maximizing the entropy\nencourages diversity among different views in generated 3D assets, thereby\nmitigating the Janus problem. Based on this new objective, we derive a new\nupdate rule for 3D score distillation, dubbed Entropic Score Distillation\n(ESD). We theoretically reveal that ESD can be simplified and implemented by\njust adopting the classifier-free guidance trick upon variational score\ndistillation. Although embarrassingly straightforward, our extensive\nexperiments successfully demonstrate that ESD can be an effective treatment for\nJanus artifacts in score distillation.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Peihao Wang", "Dejia Xu", "Dejia Xu", "Zhiwen Fan", "Dilin Wang", "Sreyas Mohan", "Forrest Iandola", "Rakesh Ranjan", "Yilei Li", "Qiang Liu", "Zhangyang Wang", "Vikas Chandra"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f477"}, "filepath": "data/2405.17429.png", "tags": [], "_media_type": "image", "_rand": 0.9996883021938426, "arXiv_link": "https://arxiv.org/abs/2405.17429", "other_link": "https://github.com/huang-yh/GaussianFormer.", "title": "LowRankOcc: Tensor Decomposition and Low-Rank Recovery for Vision-based 3D Semantic Occupancy Prediction", "abstract": "3D semantic occupancy prediction aims to obtain 3D fine-grained geometry and\nsemantics of the surrounding scene and is an important task for the robustness\nof vision-centric autonomous driving. 
Most existing methods employ dense grids\nsuch as voxels as scene representations, which ignore the sparsity of occupancy\nand the diversity of object scales and thus lead to unbalanced allocation of\nresources. To address this, we propose an object-centric representation to\ndescribe 3D scenes with sparse 3D semantic Gaussians where each Gaussian\nrepresents a flexible region of interest and its semantic features. We\naggregate information from images through the attention mechanism and\niteratively refine the properties of 3D Gaussians including position,\ncovariance, and semantics. We then propose an efficient Gaussian-to-voxel\nsplatting method to generate 3D occupancy predictions, which only aggregates\nthe neighboring Gaussians for a certain position. We conduct extensive\nexperiments on the widely adopted nuScenes and KITTI-360 datasets. Experimental\nresults demonstrate that GaussianFormer achieves comparable performance with\nstate-of-the-art methods with only 17.8% - 24.8% of their memory consumption.\nCode is available at: https://github.com/huang-yh/GaussianFormer.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Linqing Zhao", "Xiuwei Xu", "Ziwei Wang", "Yunpeng Zhang", "Borui Zhang", "Wenzhao Zheng", "Dalong Du", "Jie Zhou", "Jiwen Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f478"}, "filepath": "data/2311.04257.png", "tags": [], "_media_type": "image", "_rand": 0.9996296660169526, "arXiv_link": "https://arxiv.org/abs/2311.04257", "other_link": "", "title": "mPLUG-Owl2: Revolutionizing Multi-modal Large Language Model with Modality Collaboration", "abstract": "Multi-modal Large Language Models (MLLMs) have demonstrated impressive\ninstruction abilities across various open-ended tasks. However, previous\nmethods primarily focus on enhancing multi-modal capabilities. In this work, we\nintroduce a versatile multi-modal large language model, mPLUG-Owl2, which\neffectively leverages modality collaboration to improve performance in both\ntext and multi-modal tasks. mPLUG-Owl2 utilizes a modularized network design,\nwith the language decoder acting as a universal interface for managing\ndifferent modalities. Specifically, mPLUG-Owl2 incorporates shared functional\nmodules to facilitate modality collaboration and introduces a modality-adaptive\nmodule that preserves modality-specific features. 
Extensive experiments reveal\nthat mPLUG-Owl2 is capable of generalizing both text tasks and multi-modal\ntasks and achieving state-of-the-art performances with a single generic model.\nNotably, mPLUG-Owl2 is the first MLLM model that demonstrates the modality\ncollaboration phenomenon in both pure-text and multi-modal scenarios, setting a\npioneering path in the development of future multi-modal foundation models.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Qinghao Ye", "Haiyang Xu", "Jiabo Ye", "Ming Yan", "Anwen Hu", "Haowei Liu", "Qi Qian", "Ji Zhang", "Fei Huang", "Fei Huang"], "category_name": "Computation and Language", "all_categories": ["Computation and Language", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f479"}, "filepath": "data/2312.05390.png", "tags": [], "_media_type": "image", "_rand": 0.9991221397399306, "arXiv_link": "https://arxiv.org/abs/2312.05390", "other_link": "", "title": "NoiseCLR: A Contrastive Learning Approach for Unsupervised Discovery of Interpretable Directions in Diffusion Models", "abstract": "Generative models have been very popular in the recent years for their image\ngeneration capabilities. GAN-based models are highly regarded for their\ndisentangled latent space, which is a key feature contributing to their success\nin controlled image editing. On the other hand, diffusion models have emerged\nas powerful tools for generating high-quality images. However, the latent space\nof diffusion models is not as thoroughly explored or understood. Existing\nmethods that aim to explore the latent space of diffusion models usually relies\non text prompts to pinpoint specific semantics. However, this approach may be\nrestrictive in areas such as art, fashion, or specialized fields like medicine,\nwhere suitable text prompts might not be available or easy to conceive thus\nlimiting the scope of existing work. In this paper, we propose an unsupervised\nmethod to discover latent semantics in text-to-image diffusion models without\nrelying on text prompts. Our method takes a small set of unlabeled images from\nspecific domains, such as faces or cats, and a pre-trained diffusion model, and\ndiscovers diverse semantics in unsupervised fashion using a contrastive\nlearning objective. Moreover, the learned directions can be applied\nsimultaneously, either within the same domain (such as various types of facial\nedits) or across different domains (such as applying cat and face edits within\nthe same image) without interfering with each other. 
Our extensive experiments\nshow that our method achieves highly disentangled edits, outperforming existing\napproaches in both diffusion-based and GAN-based latent space editing methods.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Yusuf Dalva", "Pinar Yanardag"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f47a"}, "filepath": "data/2403.01238.png", "tags": [], "_media_type": "image", "_rand": 0.99925913964603, "arXiv_link": "https://arxiv.org/abs/2403.01238", "other_link": "", "title": "On the Road to Portability: Compressing End-to-End Motion Planner for Autonomous Driving", "abstract": "End-to-end motion planning models equipped with deep neural networks have\nshown great potential for enabling full autonomous driving. However, the\noversized neural networks render them impractical for deployment on\nresource-constrained systems, which unavoidably requires more computational\ntime and resources during reference.To handle this, knowledge distillation\noffers a promising approach that compresses models by enabling a smaller\nstudent model to learn from a larger teacher model. Nevertheless, how to apply\nknowledge distillation to compress motion planners has not been explored so\nfar. In this paper, we propose PlanKD, the first knowledge distillation\nframework tailored for compressing end-to-end motion planners. First,\nconsidering that driving scenes are inherently complex, often containing\nplanning-irrelevant or even noisy information, transferring such information is\nnot beneficial for the student planner. Thus, we design an information\nbottleneck based strategy to only distill planning-relevant information, rather\nthan transfer all information indiscriminately. Second, different waypoints in\nan output planned trajectory may hold varying degrees of importance for motion\nplanning, where a slight deviation in certain crucial waypoints might lead to a\ncollision. Therefore, we devise a safety-aware waypoint-attentive distillation\nmodule that assigns adaptive weights to different waypoints based on the\nimportance, to encourage the student to accurately mimic more crucial\nwaypoints, thereby improving overall safety. Experiments demonstrate that our\nPlanKD can boost the performance of smaller planners by a large margin, and\nsignificantly reduce their reference time.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Kaituo Feng", "Changsheng Li", "Dongchun Ren", "Ye Yuan", "Guoren Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f47b"}, "filepath": "data/2309.03542.png", "tags": [], "_media_type": "image", "_rand": 0.999331204030329, "arXiv_link": "https://arxiv.org/abs/2309.03542", "other_link": "https://github.com/jkli1998/T-CAR.", "title": "Leveraging Predicate and Triplet Learning for Scene Graph Generation", "abstract": "Scene Graph Generation (SGG) plays a pivotal role in downstream\nvision-language tasks. Existing SGG methods typically suffer from poor\ncompositional generalizations on unseen triplets. 
They are generally trained on\nincompletely annotated scene graphs that contain dominant triplets and tend to\nbias toward these seen triplets during inference. To address this issue, we\npropose a Triplet Calibration and Reduction (T-CAR) framework in this paper. In\nour framework, a triplet calibration loss is first presented to regularize the\nrepresentations of diverse triplets and to simultaneously excavate the unseen\ntriplets in incompletely annotated training scene graphs. Moreover, the unseen\nspace of scene graphs is usually several times larger than the seen space since\nit contains a huge number of unrealistic compositions. Thus, we propose an\nunseen space reduction loss to shift the attention of excavation to reasonable\nunseen compositions to facilitate the model training. Finally, we propose a\ncontextual encoder to improve the compositional generalizations of unseen\ntriplets by explicitly modeling the relative spatial relations between subjects\nand objects. Extensive experiments show that our approach achieves consistent\nimprovements for zero-shot SGG over state-of-the-art methods. The code is\navailable at https://github.com/jkli1998/T-CAR.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Jiankai Li", "Yunhong Wang", "Xiefan Guo", "Ruijie Yang", "Weixin Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f47c"}, "filepath": "data/2312.00690v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999547993852328, "arXiv_link": "https://arxiv.org/abs/2312.00690v2", "other_link": "https://jcorsetti.github.io/oryon/.", "title": "Open-vocabulary object 6D pose estimation", "abstract": "We introduce the new setting of open-vocabulary object 6D pose estimation, in\nwhich a textual prompt is used to specify the object of interest. In contrast\nto existing approaches, in our setting (i) the object of interest is specified\nsolely through the textual prompt, (ii) no object model (e.g. CAD or video\nsequence) is required at inference, (iii) the object is imaged from two\ndifferent viewpoints of two different scenes, and (iv) the object was not\nobserved during the training phase. To operate in this setting, we introduce a\nnovel approach that leverages a Vision-Language Model to segment the object of\ninterest from two distinct scenes and to estimate its relative 6D pose. The key\nof our approach is a carefully devised strategy to fuse object-level\ninformation provided by the prompt with local image features, resulting in a\nfeature space that can generalize to novel concepts. We validate our approach\non a new benchmark based on two popular datasets, REAL275 and Toyota-Light,\nwhich collectively encompass 39 object instances appearing in four thousand\nimage pairs. The results demonstrate that our approach outperforms both a\nwell-established hand-crafted method and a recent deep learning-based baseline\nin estimating the relative 6D pose of objects in different scenes. 
Project\npage: https://jcorsetti.github.io/oryon/.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Jaime Corsetti", "Davide Boscaini", "Changjae Oh", "Andrea Cavallaro", "Fabio Poiesi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f47d"}, "filepath": "data/2405.09882.png", "tags": [], "_media_type": "image", "_rand": 0.9991820986726353, "arXiv_link": "http://export.arxiv.org/abs/2405.09882", "other_link": "https://github.com/HansSunY/DiffAM.", "title": "DiffAM: Diffusion-based Adversarial Makeup Transfer for Facial Privacy Protection", "abstract": "With the rapid development of face recognition (FR) systems, the privacy of\nface images on social media is facing severe challenges due to the abuse of\nunauthorized FR systems. Some studies utilize adversarial attack techniques to\ndefend against malicious FR systems by generating adversarial examples.\nHowever, the generated adversarial examples, i.e., the protected face images,\ntend to suffer from subpar visual quality and low transferability. In this\npaper, we propose a novel face protection approach, dubbed DiffAM, which\nleverages the powerful generative ability of diffusion models to generate\nhigh-quality protected face images with adversarial makeup transferred from\nreference images. To be specific, we first introduce a makeup removal module to\ngenerate non-makeup images utilizing a fine-tuned diffusion model with guidance\nof textual prompts in CLIP space. As the inverse process of makeup transfer,\nmakeup removal can make it easier to establish the deterministic relationship\nbetween makeup domain and non-makeup domain regardless of elaborate text\nprompts. Then, with this relationship, a CLIP-based makeup loss along with an\nensemble attack strategy is introduced to jointly guide the direction of\nadversarial makeup domain, achieving the generation of protected face images\nwith natural-looking makeup and high black-box transferability. Extensive\nexperiments demonstrate that DiffAM achieves higher visual quality and attack\nsuccess rates with a gain of 12.98% under black-box setting compared with the\nstate of the arts. The code will be available at\nhttps://github.com/HansSunY/DiffAM.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Yuhao Sun", "Lingyun Yu", "Hongtao Xie", "Jiaming Li", "Yongdong Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f47e"}, "filepath": "data/2312.00063.png", "tags": [], "_media_type": "image", "_rand": 0.9999769341813874, "arXiv_link": "https://arxiv.org/abs/2312.00063", "other_link": "", "title": "MoMask: Generative Masked Modeling of 3D Human Motions", "abstract": "We introduce MoMask, a novel masked modeling framework for text-driven 3D\nhuman motion generation. In MoMask, a hierarchical quantization scheme is\nemployed to represent human motion as multi-layer discrete motion tokens with\nhigh-fidelity details. 
Starting at the base layer, with a sequence of motion\ntokens obtained by vector quantization, the residual tokens of increasing\norders are derived and stored at the subsequent layers of the hierarchy. This\nis consequently followed by two distinct bidirectional transformers. For the\nbase-layer motion tokens, a Masked Transformer is designated to predict\nrandomly masked motion tokens conditioned on text input at training stage.\nDuring generation (i.e. inference) stage, starting from an empty sequence, our\nMasked Transformer iteratively fills up the missing tokens; Subsequently, a\nResidual Transformer learns to progressively predict the next-layer tokens\nbased on the results from current layer. Extensive experiments demonstrate that\nMoMask outperforms the state-of-the-art methods on the text-to-motion generation\ntask, with an FID of 0.045 (vs e.g. 0.141 of T2M-GPT) on the HumanML3D dataset,\nand 0.228 (vs 0.514) on KIT-ML, respectively. MoMask can also be seamlessly\napplied in related tasks without further model fine-tuning, such as text-guided\ntemporal inpainting.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models"], "authors_list": ["chuan guo", "Yuxuan Mu", "Muhammad Gohar Javed", "Sen Wang", "Li Cheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f47f"}, "filepath": "data/2404.04627.png", "tags": [], "_media_type": "image", "_rand": 0.999510897306414, "arXiv_link": "https://arxiv.org/abs/2404.04627", "other_link": "https://zaidkhan.me/ViReP", "title": "Self-Training Large Language Models for Improved Visual Program Synthesis With Visual Reinforcement", "abstract": "Visual program synthesis is a promising approach to exploit the reasoning\nabilities of large language models for compositional computer vision tasks.\nPrevious work has used few-shot prompting with frozen LLMs to synthesize visual\nprograms. Training an LLM to write better visual programs is an attractive\nprospect, but it is unclear how to accomplish this. No dataset of visual\nprograms for training exists, and acquisition of a visual program dataset\ncannot be easily crowdsourced due to the need for expert annotators. To get\naround the lack of direct supervision, we explore improving the program\nsynthesis abilities of an LLM using feedback from interactive experience. We\npropose a method where we exploit existing annotations for a vision-language\ntask to improvise a coarse reward signal for that task, treat the LLM as a\npolicy, and apply reinforced self-training to improve the visual program\nsynthesis ability of the LLM for that task. We describe a series of experiments\non object detection, compositional visual question answering, and image-text\nretrieval, and show that in each case, the self-trained LLM outperforms or\nperforms on par with few-shot frozen LLMs that are an order of magnitude\nlarger. 
Website: https://zaidkhan.me/ViReP", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Zaid Khan", "Vijay Kumar BG", "Samuel Schulter", "Yun Fu", "Manmohan Chandraker"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f480"}, "filepath": "data/2312.04567.png", "tags": [], "_media_type": "image", "_rand": 0.9998165963393841, "arXiv_link": "https://arxiv.org/abs/2312.04567", "other_link": "", "title": "Scaling Laws of Synthetic Images for Model Training ... for Now", "abstract": "Recent significant advances in text-to-image models unlock the possibility of\ntraining vision systems using synthetic images, potentially overcoming the\ndifficulty of collecting curated data at scale. It is unclear, however, how\nthese models behave at scale, as more synthetic data is added to the training\nset. In this paper we study the scaling laws of synthetic images generated by\nstate of the art text-to-image models, for the training of supervised models:\nimage classifiers with label supervision, and CLIP with language supervision.\nWe identify several factors, including text prompts, classifier-free guidance\nscale, and types of text-to-image models, that significantly affect scaling\nbehavior. After tuning these factors, we observe that synthetic images\ndemonstrate a scaling trend similar to, but slightly less effective than, real\nimages in CLIP training, while they significantly underperform in scaling when\ntraining supervised image classifiers. Our analysis indicates that the main\nreason for this underperformance is the inability of off-the-shelf\ntext-to-image models to generate certain concepts, a limitation that\nsignificantly impairs the training of image classifiers. Our findings also\nsuggest that scaling synthetic data can be particularly effective in scenarios\nsuch as: (1) when there is a limited supply of real images for a supervised\nproblem (e.g., fewer than 0.5 million images in ImageNet), (2) when the\nevaluation dataset diverges significantly from the training data, indicating\nthe out-of-distribution scenario, or (3) when synthetic data is used in\nconjunction with real images, as demonstrated in the training of CLIP models.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Lijie Fan", "Kaifeng Chen", "Dilip Krishnan", "Dina Katabi", "Phillip Isola", "Yonglong Tian"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f481"}, "filepath": "data/2403.16080.png", "tags": [], "_media_type": "image", "_rand": 0.9992710555946543, "arXiv_link": "https://arxiv.org/abs/2403.16080", "other_link": "", "title": "DyMVHumans: A Multi-View Video Benchmark for High-Fidelity Dynamic Human Modeling", "abstract": "High-quality human reconstruction and photo-realistic rendering of a dynamic\nscene is a long-standing problem in computer vision and graphics. Despite\nconsiderable efforts invested in developing various capture systems and\nreconstruction algorithms, recent advancements still struggle with loose or\noversized clothing and overly complex poses. In part, this is due to the\nchallenges of acquiring high-quality human datasets. 
To facilitate the\ndevelopment of these fields, in this paper, we present PKU-DyMVHumans, a\nversatile human-centric dataset for high-fidelity reconstruction and rendering\nof dynamic human scenarios from dense multi-view videos. It comprises 8.2\nmillion frames captured by more than 56 synchronized cameras across diverse\nscenarios. These sequences comprise 32 human subjects across 45 different\nscenarios, each with a high-detailed appearance and realistic human motion.\nInspired by recent advancements in neural radiance field (NeRF)-based scene\nrepresentations, we carefully set up an off-the-shelf framework that is easy to\nprovide those state-of-the-art NeRF-based implementations and benchmark on\nPKU-DyMVHumans dataset. It is paving the way for various applications like\nfine-grained foreground/background decomposition, high-quality human\nreconstruction and photo-realistic novel view synthesis of a dynamic scene.\nExtensive studies are performed on the benchmark, demonstrating new\nobservations and challenges that emerge from using such high-fidelity dynamic\ndata.", "keywords": ["Biometrics and human analysis", "Image and video generation and manipulation"], "authors_list": ["Xiaoyun Zheng", "Liwei Liao", "Xufeng Li", "Jianbo Jiao", "Rongjie Wang", "Feng Gao", "Shiqi Wang", "Ronggang Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f482"}, "filepath": "data/2401.14391.png", "tags": [], "_media_type": "image", "_rand": 0.9995858970824355, "arXiv_link": "https://arxiv.org/abs/2401.14391", "other_link": "https://crossmae.github.io", "title": "CrossMAE: Cross Modality Masked Autoencoders For Region-Aware Audio-Visual Pre-Training", "abstract": "In this work, we re-examine inter-patch dependencies in the decoding\nmechanism of masked autoencoders (MAE). We decompose this decoding mechanism\nfor masked patch reconstruction in MAE into self-attention and cross-attention.\nOur investigations suggest that self-attention between mask patches is not\nessential for learning good representations. To this end, we propose a novel\npretraining framework: Cross-Attention Masked Autoencoders (CrossMAE).\nCrossMAE's decoder leverages only cross-attention between masked and visible\ntokens, with no degradation in downstream performance. This design also enables\ndecoding only a small subset of mask tokens, boosting efficiency. Furthermore,\neach decoder block can now leverage different encoder features, resulting in\nimproved representation learning. CrossMAE matches MAE in performance with 2.5\nto 3.7$\\times$ less decoding compute. It also surpasses MAE on ImageNet\nclassification and COCO instance segmentation under the same compute. 
Code and\nmodels: https://crossmae.github.io", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yuxin Guo", "Siyang Sun", "Shuailei Ma", "Kecheng Zheng", "Xiaoyi Bao", "Shijie Ma", "Wei Zou", "Yun Zheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f483"}, "filepath": "data/2312.13500.png", "tags": [], "_media_type": "image", "_rand": 0.9992558543733588, "arXiv_link": "https://arxiv.org/abs/2312.13500", "other_link": "", "title": "Traceable Federated Continual Learning", "abstract": "In a privacy-focused era, Federated Learning (FL) has emerged as a promising\nmachine learning technique. However, most existing FL studies assume that the\ndata distribution remains nearly fixed over time, while real-world scenarios\noften involve dynamic and continual changes. To equip FL systems with continual\nmodel evolution capabilities, we focus on an important problem called Federated\nContinual Novel Class Learning (FedCN) in this work. The biggest challenge in\nFedCN is to merge and align novel classes that are discovered and learned by\ndifferent clients without compromising privacy. To address this, we propose a\nGlobal Alignment Learning (GAL) framework that can accurately estimate the\nglobal novel class number and provide effective guidance for local training\nfrom a global perspective, all while maintaining privacy protection.\nSpecifically, GAL first locates high-density regions in the representation\nspace through a bi-level clustering mechanism to estimate the novel class\nnumber, with which the global prototypes corresponding to novel classes can be\nconstructed. Then, GAL uses a novel semantic weighted loss to capture all\npossible correlations between these prototypes and the training data for\nmitigating the impact of pseudo-label noise and data heterogeneity. Extensive\nexperiments on various datasets demonstrate GAL's superior performance over\nstate-of-the-art novel class discovery methods. In particular, GAL achieves\nsignificant improvements in novel-class performance, increasing the accuracy by\n5.1% to 10.6% in the case of one novel class learning stage and by 7.8% to\n17.9% in the case of two novel class learning stages, without sacrificing\nknown-class performance. Moreover, GAL is shown to be effective in equipping a\nvariety of different mainstream FL algorithms with novel class discovery and\nlearning capability, highlighting its potential for many real-world\napplications.", "keywords": [], "authors_list": ["Qiang Wang", "Bingyan Liu", "Yawen Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f484"}, "filepath": "data/2311.13535.png", "tags": [], "_media_type": "image", "_rand": 0.9992063011005771, "arXiv_link": "https://arxiv.org/abs/2311.13535", "other_link": "https://cnnlstm.github.io/DiffusionMat", "title": "PolarMatte: Fully Computational Ground-Truth-Quality Alpha Matte Extraction for Images and Video using Polarized Screen Matting", "abstract": "In this paper, we introduce DiffusionMat, a novel image matting framework\nthat employs a diffusion model for the transition from coarse to refined alpha\nmattes. 
Diverging from conventional methods that utilize trimaps merely as\nloose guidance for alpha matte prediction, our approach treats image matting as\na sequential refinement learning process. This process begins with the addition\nof noise to trimaps and iteratively denoises them using a pre-trained diffusion\nmodel, which incrementally guides the prediction towards a clean alpha matte.\nThe key innovation of our framework is a correction module that adjusts the\noutput at each denoising step, ensuring that the final result is consistent\nwith the input image's structures. We also introduce the Alpha Reliability\nPropagation, a novel technique designed to maximize the utility of available\nguidance by selectively enhancing the trimap regions with confident alpha\ninformation, thus simplifying the correction task. To train the correction\nmodule, we devise specialized loss functions that target the accuracy of the\nalpha matte's edges and the consistency of its opaque and transparent regions.\nWe evaluate our model across several image matting benchmarks, and the results\nindicate that DiffusionMat consistently outperforms existing methods. Project\npage at~\\url{https://cnnlstm.github.io/DiffusionMat", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Kenji Enomoto", "TJ Rhodes", "Brian Price", "Gavin Miller"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f485"}, "filepath": "data/2308.07903.png", "tags": [], "_media_type": "image", "_rand": 0.9999787273351602, "arXiv_link": "https://arxiv.org/abs/2308.07903", "other_link": "", "title": "Relightable and Animatable Neural Avatar from Sparse-View Video", "abstract": "This paper tackles the challenge of creating relightable and animatable\nneural avatars from sparse-view (or even monocular) videos of dynamic humans\nunder unknown illumination. Compared to studio environments, this setting is\nmore practical and accessible but poses an extremely challenging ill-posed\nproblem. Previous neural human reconstruction methods are able to reconstruct\nanimatable avatars from sparse views using deformed Signed Distance Fields\n(SDF) but cannot recover material parameters for relighting. While\ndifferentiable inverse rendering-based methods have succeeded in material\nrecovery of static objects, it is not straightforward to extend them to dynamic\nhumans as it is computationally intensive to compute pixel-surface intersection\nand light visibility on deformed SDFs for inverse rendering. To solve this\nchallenge, we propose a Hierarchical Distance Query (HDQ) algorithm to\napproximate the world space distances under arbitrary human poses.\nSpecifically, we estimate coarse distances based on a parametric human model\nand compute fine distances by exploiting the local deformation invariance of\nSDF. Based on the HDQ algorithm, we leverage sphere tracing to efficiently\nestimate the surface intersection and light visibility. This allows us to\ndevelop the first system to recover animatable and relightable neural avatars\nfrom sparse view (or monocular) inputs. Experiments demonstrate that our\napproach is able to produce superior results compared to state-of-the-art\nmethods. 
Our code will be released for reproducibility.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Zhen Xu", "Sida Peng", "Chen Geng", "Linzhan Mou", "Zihan Yan", "Jiaming Sun", "Hujun Bao", "Xiaowei Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f486"}, "filepath": "data/2312.00858.png", "tags": [], "_media_type": "image", "_rand": 0.9998522686380831, "arXiv_link": "https://arxiv.org/abs/2312.00858", "other_link": "https://github.com/horseee/DeepCache", "title": "DeepCache: Accelerating Diffusion Models for Free", "abstract": "Diffusion models have recently gained unprecedented attention in the field of\nimage synthesis due to their remarkable generative capabilities.\nNotwithstanding their prowess, these models often incur substantial\ncomputational costs, primarily attributed to the sequential denoising process\nand cumbersome model size. Traditional methods for compressing diffusion models\ntypically involve extensive retraining, presenting cost and feasibility\nchallenges. In this paper, we introduce DeepCache, a novel training-free\nparadigm that accelerates diffusion models from the perspective of model\narchitecture. DeepCache capitalizes on the inherent temporal redundancy\nobserved in the sequential denoising steps of diffusion models, which caches\nand retrieves features across adjacent denoising stages, thereby curtailing\nredundant computations. Utilizing the property of the U-Net, we reuse the\nhigh-level features while updating the low-level features in a very cheap way.\nThis innovative strategy, in turn, enables a speedup factor of 2.3$\\times$ for\nStable Diffusion v1.5 with only a 0.05 decline in CLIP Score, and 4.1$\\times$\nfor LDM-4-G with a slight decrease of 0.22 in FID on ImageNet. Our experiments\nalso demonstrate DeepCache's superiority over existing pruning and distillation\nmethods that necessitate retraining and its compatibility with current sampling\ntechniques. Furthermore, we find that under the same throughput, DeepCache\neffectively achieves comparable or even marginally improved results with DDIM\nor PLMS. The code is available at https://github.com/horseee/DeepCache", "keywords": ["Efficient and scalable vision"], "authors_list": ["Xinyin Ma", "Gongfan Fang", "Xinchao Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f487"}, "filepath": "data/2404.02759.png", "tags": [], "_media_type": "image", "_rand": 0.999912620370288, "arXiv_link": "https://arxiv.org/abs/2404.02759", "other_link": "", "title": "Unsupervised Occupancy Learning from Sparse Point Cloud", "abstract": "Implicit Neural Representations have gained prominence as a powerful\nframework for capturing complex data modalities, encompassing a wide range from\n3D shapes to images and audio. Within the realm of 3D shape representation,\nNeural Signed Distance Functions (SDF) have demonstrated remarkable potential\nin faithfully encoding intricate shape geometry. However, learning SDFs from 3D\npoint clouds in the absence of ground truth supervision remains a very\nchallenging task. 
In this paper, we propose a method to infer occupancy fields\ninstead of SDFs as they are easier to learn from sparse inputs. We leverage a\nmargin-based uncertainty measure to differentially sample from the decision\nboundary of the occupancy function and supervise the sampled boundary points\nusing the input point cloud. We further stabilize the optimization process at\nthe early stages of the training by biasing the occupancy function towards\nminimal entropy fields while maximizing its entropy at the input point cloud.\nThrough extensive experiments and evaluations, we illustrate the efficacy of\nour proposed method, highlighting its capacity to improve implicit shape\ninference with respect to baselines and the state-of-the-art using synthetic\nand real data.", "keywords": [], "authors_list": ["Amine Ouasfi", "Adnane Boukhayma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f488"}, "filepath": "data/2403.08748.png", "tags": [], "_media_type": "image", "_rand": 0.9995512649301997, "arXiv_link": "https://arxiv.org/abs/2403.08748", "other_link": "", "title": "SGC-Occ: Semantic-Geometry Consistent 3D Occupancy Prediction for Autonomous Driving", "abstract": "In autonomous vehicles, understanding the surrounding 3D environment of the\nego vehicle in real-time is essential. A compact way to represent scenes while\nencoding geometric distances and semantic object information is via 3D semantic\noccupancy maps. State of the art 3D mapping methods leverage transformers with\ncross-attention mechanisms to elevate 2D vision-centric camera features into\nthe 3D domain. However, these methods encounter significant challenges in\nreal-time applications due to their high computational demands during\ninference. This limitation is particularly problematic in autonomous vehicles,\nwhere GPU resources must be shared with other tasks such as localization and\nplanning. In this paper, we introduce an approach that extracts features from\nfront-view 2D camera images and LiDAR scans, then employs a sparse convolution\nnetwork (Minkowski Engine), for 3D semantic occupancy prediction. Given that\noutdoor scenes in autonomous driving scenarios are inherently sparse, the\nutilization of sparse convolution is particularly apt. By jointly solving the\nproblems of 3D scene completion of sparse scenes and 3D semantic segmentation,\nwe provide a more efficient learning framework suitable for real-time\napplications in autonomous vehicles. We also demonstrate competitive accuracy\non the nuScenes dataset.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Zhiwen Yang", "Xiangteng He", "Yuxin Peng"], "category_name": "Robotics", "all_categories": ["Robotics", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f489"}, "filepath": "data/2404.00931.png", "tags": [], "_media_type": "image", "_rand": 0.9992895435964487, "arXiv_link": "https://arxiv.org/abs/2404.00931", "other_link": "", "title": "GOV-NeSF: Generalizable Open-Vocabulary Neural Semantic Fields", "abstract": "Recent advancements in vision-language foundation models have significantly\nenhanced open-vocabulary 3D scene understanding. 
However, the generalizability\nof existing methods is constrained due to their framework designs and their\nreliance on 3D data. We address this limitation by introducing Generalizable\nOpen-Vocabulary Neural Semantic Fields (GOV-NeSF), a novel approach offering a\ngeneralizable implicit representation of 3D scenes with open-vocabulary\nsemantics. We aggregate the geometry-aware features using a cost volume, and\npropose a Multi-view Joint Fusion module to aggregate multi-view features\nthrough a cross-view attention mechanism, which effectively predicts\nview-specific blending weights for both colors and open-vocabulary features.\nRemarkably, our GOV-NeSF exhibits state-of-the-art performance in both 2D and\n3D open-vocabulary semantic segmentation, eliminating the need for ground truth\nsemantic labels or depth priors, and effectively generalize across scenes and\ndatasets without fine-tuning.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Yunsong Wang", "Hanlin Chen", "Gim Hee Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f48a"}, "filepath": "data/2311.15260.png", "tags": [], "_media_type": "image", "_rand": 0.9995143402870447, "arXiv_link": "https://arxiv.org/abs/2311.15260", "other_link": "https://github.com/georghess/NeuRAD", "title": "NeuRAD: Neural Rendering for Autonomous Driving", "abstract": "Neural radiance fields (NeRFs) have gained popularity in the autonomous\ndriving (AD) community. Recent methods show NeRFs' potential for closed-loop\nsimulation, enabling testing of AD systems, and as an advanced training data\naugmentation technique. However, existing methods often require long training\ntimes, dense semantic supervision, or lack generalizability. This, in turn,\nhinders the application of NeRFs for AD at scale. In this paper, we propose\nNeuRAD, a robust novel view synthesis method tailored to dynamic AD data. Our\nmethod features simple network design, extensive sensor modeling for both\ncamera and lidar -- including rolling shutter, beam divergence and ray dropping\n-- and is applicable to multiple datasets out of the box. We verify its\nperformance on five popular AD datasets, achieving state-of-the-art performance\nacross the board. To encourage further development, we will openly release the\nNeuRAD source code. See https://github.com/georghess/NeuRAD .", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Adam Tonderski", "Carl Lindstr\u00f6m", "Georg Hess", "William Ljungbergh", "Lennart Svensson", "Christoffer Petersson"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f48b"}, "filepath": "data/2404.00801.png", "tags": [], "_media_type": "image", "_rand": 0.9996221927484554, "arXiv_link": "https://arxiv.org/abs/2404.00801", "other_link": "https://github.com/yeliudev/R2-Tuning.", "title": "Enhanced Motion-Text Alignment for Image-to-Video Transfer Learning", "abstract": "Video temporal grounding (VTG) is a fine-grained video understanding problem\nthat aims to ground relevant clips in untrimmed videos given natural language\nqueries. 
Most existing VTG models are built upon frame-wise final-layer CLIP\nfeatures, aided by additional temporal backbones (e.g., SlowFast) with\nsophisticated temporal reasoning mechanisms. In this work, we claim that CLIP\nitself already shows great potential for fine-grained spatial-temporal\nmodeling, as each layer offers distinct yet useful information under different\ngranularity levels. Motivated by this, we propose Reversed Recurrent Tuning\n($R^2$-Tuning), a parameter- and memory-efficient transfer learning framework\nfor video temporal grounding. Our method learns a lightweight $R^2$ Block\ncontaining only 1.5% of the total parameters to perform progressive\nspatial-temporal modeling. Starting from the last layer of CLIP, $R^2$ Block\nrecurrently aggregates spatial features from earlier layers, then refines\ntemporal correlation conditioning on the given query, resulting in a\ncoarse-to-fine scheme. $R^2$-Tuning achieves state-of-the-art performance\nacross three VTG tasks (i.e., moment retrieval, highlight detection, and video\nsummarization) on six public benchmarks (i.e., QVHighlights, Charades-STA,\nEgo4D-NLQ, TACoS, YouTube Highlights, and TVSum) even without the additional\nbackbone, demonstrating the significance and effectiveness of the proposed\nscheme. Our code is available at https://github.com/yeliudev/R2-Tuning.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Wei Zhang", "Chaoqun Wan", "Tongliang Liu", "Xinmei Tian", "Xu Shen", "Jieping Ye"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f48c"}, "filepath": "data/2402.18091.png", "tags": [], "_media_type": "image", "_rand": 0.9996118976867064, "arXiv_link": "https://arxiv.org/abs/2402.18091", "other_link": "", "title": "Polos: Multimodal Metric Learning from Human Feedback for Image Captioning", "abstract": "Establishing an automatic evaluation metric that closely aligns with human\njudgments is essential for effectively developing image captioning models.\nRecent data-driven metrics have demonstrated a stronger correlation with human\njudgments than classic metrics such as CIDEr; however they lack sufficient\ncapabilities to handle hallucinations and generalize across diverse images and\ntexts partially because they compute scalar similarities merely using\nembeddings learned from tasks unrelated to image captioning evaluation. In this\nstudy, we propose Polos, a supervised automatic evaluation metric for image\ncaptioning models. Polos computes scores from multimodal inputs, using a\nparallel feature extraction mechanism that leverages embeddings trained through\nlarge-scale contrastive learning. To train Polos, we introduce Multimodal\nMetric Learning from Human Feedback (M$^2$LHF), a framework for developing\nmetrics based on human feedback. We constructed the Polaris dataset, which\ncomprises 131K human judgments from 550 evaluators, which is approximately ten\ntimes larger than standard datasets. 
Our approach achieved state-of-the-art\nperformance on Composite, Flickr8K-Expert, Flickr8K-CF, PASCAL-50S, FOIL, and\nthe Polaris dataset, thereby demonstrating its effectiveness and robustness.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yuiga Wada", "Kanta Kaneda", "Daichi Saito", "Komei Sugiura"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f48d"}, "filepath": "data/2404.12203.png", "tags": [], "_media_type": "image", "_rand": 0.9995423753666306, "arXiv_link": "https://arxiv.org/abs/2404.12203", "other_link": "", "title": "CLIB-FIQA: Face Image Quality Assessment with Confidence Calibration", "abstract": "Face Image Quality Assessment (FIQA) estimates the utility of face images for\nautomated face recognition (FR) systems. We propose in this work a novel\napproach to assess the quality of face images based on inspecting the required\nchanges in the pre-trained FR model weights to minimize differences between\ntesting samples and the distribution of the FR training dataset. To achieve\nthat, we propose quantifying the discrepancy in Batch Normalization statistics\n(BNS), including mean and variance, between those recorded during FR training\nand those obtained by processing testing samples through the pretrained FR\nmodel. We then generate gradient magnitudes of pretrained FR weights by\nbackpropagating the BNS through the pretrained model. The cumulative absolute\nsum of these gradient magnitudes serves as the FIQ for our approach. Through\ncomprehensive experimentation, we demonstrate the effectiveness of our\ntraining-free and quality labeling-free approach, achieving competitive\nperformance to recent state-of-theart FIQA approaches without relying on\nquality labeling, the need to train regression networks, specialized\narchitectures, or designing and optimizing specific loss functions.", "keywords": [], "authors_list": ["Fu-Zhao Ou", "Fu-Zhao Ou", "Chongyi Li", "Shiqi Wang", "Sam Kwong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f48e"}, "filepath": "data/2311.15879v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999333019478175, "arXiv_link": "https://arxiv.org/abs/2311.15879v2", "other_link": "", "title": "EVCap: Retrieval-Augmented Image Captioning with External Visual--Name Memory for Open-World Comprehension", "abstract": "Large language models (LLMs)-based image captioning has the capability of\ndescribing objects not explicitly observed in training data; yet novel objects\noccur frequently, necessitating the requirement of sustaining up-to-date object\nknowledge for open-world comprehension. Instead of relying on large amounts of\ndata and/or scaling up network parameters, we introduce a highly effective\nretrieval-augmented image captioning method that prompts LLMs with object names\nretrieved from External Visual--name memory (EVCap). We build ever-changing\nobject knowledge memory using objects' visuals and names, enabling us to (i)\nupdate the memory at a minimal cost and (ii) effortlessly augment LLMs with\nretrieved object names by utilizing a lightweight and fast-to-train model. 
Our\nmodel, which was trained only on the COCO dataset, can adapt to out-of-domain\nwithout requiring additional fine-tuning or re-training. Our experiments\nconducted on benchmarks and synthetic commonsense-violating data show that\nEVCap, with only 3.97M trainable parameters, exhibits superior performance\ncompared to other methods based on frozen pre-trained LLMs. Its performance is\nalso competitive to specialist SOTAs that require extensive training.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Efficient and scalable vision"], "authors_list": ["Jiaxuan Li", "Duc Minh Vo", "Akihiro Sugimoto", "Hideki Nakayama"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f48f"}, "filepath": "data/2311.18387v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992162086279683, "arXiv_link": "https://arxiv.org/abs/2311.18387v1", "other_link": "https://smhongok.github.io/inv-dpm.html}.", "title": "On Exact Inversion of DPM-Solvers", "abstract": "Diffusion probabilistic models (DPMs) are a key component in modern\ngenerative models. DPM-solvers have achieved reduced latency and enhanced\nquality significantly, but have posed challenges to find the exact inverse\n(i.e., finding the initial noise from the given image). Here we investigate the\nexact inversions for DPM-solvers and propose algorithms to perform them when\nsamples are generated by the first-order as well as higher-order DPM-solvers.\nFor each explicit denoising step in DPM-solvers, we formulated the inversions\nusing implicit methods such as gradient descent or forward step method to\nensure the robustness to large classifier-free guidance unlike the prior\napproach using fixed-point iteration. Experimental results demonstrated that\nour proposed exact inversion methods significantly reduced the error of both\nimage and noise reconstructions, greatly enhanced the ability to distinguish\ninvisible watermarks and well prevented unintended background changes\nconsistently during image editing. Project page:\n\\url{https://smhongok.github.io/inv-dpm.html}.", "keywords": [], "authors_list": ["Seongmin Hong", "Kyeonghyun Lee", "Suh Yoon Jeon", "Hyewon Bae", "Se Young Chun"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f490"}, "filepath": "data/2308.15984.png", "tags": [], "_media_type": "image", "_rand": 0.9995531690980428, "arXiv_link": "https://arxiv.org/abs/2308.15984", "other_link": "https://github.com/lucasbrynte/gasfm/.", "title": "Learning Structure-from-Motion with Graph Attention Networks", "abstract": "In this paper we tackle the problem of learning Structure-from-Motion (SfM)\nthrough the use of graph attention networks. SfM is a classic computer vision\nproblem that is solved though iterative minimization of reprojection errors,\nreferred to as Bundle Adjustment (BA), starting from a good initialization. In\norder to obtain a good enough initialization to BA, conventional methods rely\non a sequence of sub-problems (such as pairwise pose estimation, pose averaging\nor triangulation) which provide an initial solution that can then be refined\nusing BA. 
In this work we replace these sub-problems by learning a model that\ntakes as input the 2D keypoints detected across multiple views, and outputs the\ncorresponding camera poses and 3D keypoint coordinates. Our model takes\nadvantage of graph neural networks to learn SfM-specific primitives, and we\nshow that it can be used for fast inference of the reconstruction for new and\nunseen sequences. The experimental results show that the proposed model\noutperforms competing learning-based methods, and challenges COLMAP while\nhaving lower runtime. Our code is available at\nhttps://github.com/lucasbrynte/gasfm/.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Lucas Brynte", "Jos\u00e9 Pedro Iglesias", "Carl Olsson", "Fredrik Kahl"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f491"}, "filepath": "data/2403.06135.png", "tags": [], "_media_type": "image", "_rand": 0.9999147576900679, "arXiv_link": "https://arxiv.org/abs/2403.06135", "other_link": "https://github.com/Shilin-LU/MACE.", "title": "MACE: Mass Concept Erasure in Diffusion Models", "abstract": "The rapid expansion of large-scale text-to-image diffusion models has raised\ngrowing concerns regarding their potential misuse in creating harmful or\nmisleading content. In this paper, we introduce MACE, a finetuning framework\nfor the task of mass concept erasure. This task aims to prevent models from\ngenerating images that embody unwanted concepts when prompted. Existing concept\nerasure methods are typically restricted to handling fewer than five concepts\nsimultaneously and struggle to find a balance between erasing concept synonyms\n(generality) and maintaining unrelated concepts (specificity). In contrast,\nMACE differs by successfully scaling the erasure scope up to 100 concepts and\nby achieving an effective balance between generality and specificity. This is\nachieved by leveraging closed-form cross-attention refinement along with LoRA\nfinetuning, collectively eliminating the information of undesirable concepts.\nFurthermore, MACE integrates multiple LoRAs without mutual interference. We\nconduct extensive evaluations of MACE against prior methods across four\ndifferent tasks: object erasure, celebrity erasure, explicit content erasure,\nand artistic style erasure. Our results reveal that MACE surpasses prior\nmethods in all evaluated tasks. 
Code is available at\nhttps://github.com/Shilin-LU/MACE.", "keywords": ["Image and video generation and manipulation", "Efficient and scalable vision"], "authors_list": ["Shilin Lu", "Zilan Wang", "Leyang Li", "Yanzhu Liu", "Adams Wai-Kin Kong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f492"}, "filepath": "data/2312.14667v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990922157602613, "arXiv_link": "https://arxiv.org/html/2312.14667v1", "other_link": "https://github.com/thuiar/TCL-MAP.", "title": "Contextual Augmented Global Contrast for Multimodal Intent Recognition", "abstract": "Multimodal intent recognition aims to leverage diverse modalities such as\nexpressions, body movements and tone of speech to comprehend user's intent,\nconstituting a critical task for understanding human language and behavior in\nreal-world multimodal scenarios. Nevertheless, the majority of existing methods\nignore potential correlations among different modalities and own limitations in\neffectively learning semantic features from nonverbal modalities. In this\npaper, we introduce a token-level contrastive learning method with\nmodality-aware prompting (TCL-MAP) to address the above challenges. To\nestablish an optimal multimodal semantic environment for text modality, we\ndevelop a modality-aware prompting module (MAP), which effectively aligns and\nfuses features from text, video and audio modalities with similarity-based\nmodality alignment and cross-modality attention mechanism. Based on the\nmodality-aware prompt and ground truth labels, the proposed token-level\ncontrastive learning framework (TCL) constructs augmented samples and employs\nNT-Xent loss on the label token. Specifically, TCL capitalizes on the optimal\ntextual semantic insights derived from intent labels to guide the learning\nprocesses of other modalities in return. Extensive experiments show that our\nmethod achieves remarkable improvements compared to state-of-the-art methods.\nAdditionally, ablation analyses demonstrate the superiority of the\nmodality-aware prompt over the handcrafted prompt, which holds substantial\nsignificance for multimodal prompt learning. The codes are released at\nhttps://github.com/thuiar/TCL-MAP.", "keywords": ["Large multimodal models and prompting techniques", "Deep learning architectures and techniques"], "authors_list": ["Kaili Sun", "Zhiwen Xie", "Mang Ye", "Huyin Zhang"], "category_name": "Multimedia", "all_categories": ["Multimedia", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f493"}, "filepath": "data/2401.08741.png", "tags": [], "_media_type": "image", "_rand": 0.9993790063951931, "arXiv_link": "https://arxiv.org/abs/2401.08741", "other_link": "https://lukemelas.github.io/fixed-point-diffusion-models.", "title": "Fixed Point Diffusion Models", "abstract": "We introduce the Fixed Point Diffusion Model (FPDM), a novel approach to\nimage generation that integrates the concept of fixed point solving into the\nframework of diffusion-based generative modeling. Our approach embeds an\nimplicit fixed point solving layer into the denoising network of a diffusion\nmodel, transforming the diffusion process into a sequence of closely-related\nfixed point problems. 
Combined with a new stochastic training method, this\napproach significantly reduces model size, reduces memory usage, and\naccelerates training. Moreover, it enables the development of two new\ntechniques to improve sampling efficiency: reallocating computation across\ntimesteps and reusing fixed point solutions between timesteps. We conduct\nextensive experiments with state-of-the-art models on ImageNet, FFHQ,\nCelebA-HQ, and LSUN-Church, demonstrating substantial improvements in\nperformance and efficiency. Compared to the state-of-the-art DiT model, FPDM\ncontains 87% fewer parameters, consumes 60% less memory during training, and\nimproves image generation quality in situations where sampling computation or\ntime is limited. Our code and pretrained models are available at\nhttps://lukemelas.github.io/fixed-point-diffusion-models.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Luke Melas-Kyriazi", "Xingjian Bai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f494"}, "filepath": "data/2311.10329.png", "tags": [], "_media_type": "image", "_rand": 0.9993874583374467, "arXiv_link": "https://arxiv.org/abs/2311.10329", "other_link": "", "title": "High Fidelity Person-centric Subject-to-Image Synthesis", "abstract": "Current subject-driven image generation methods encounter significant\nchallenges in person-centric image generation. The reason is that they learn\nthe semantic scene and person generation by fine-tuning a common pre-trained\ndiffusion, which involves an irreconcilable training imbalance. Precisely, to\ngenerate realistic persons, they need to sufficiently tune the pre-trained\nmodel, which inevitably causes the model to forget the rich semantic scene\nprior and makes scene generation over-fit to the training data. Moreover, even\nwith sufficient fine-tuning, these methods can still not generate high-fidelity\npersons since joint learning of the scene and person generation also lead to\nquality compromise. In this paper, we propose Face-diffuser, an effective\ncollaborative generation pipeline to eliminate the above training imbalance and\nquality compromise. Specifically, we first develop two specialized pre-trained\ndiffusion models, i.e., Text-driven Diffusion Model (TDM) and Subject-augmented\nDiffusion Model (SDM), for scene and person generation, respectively. The\nsampling process is divided into three sequential stages, i.e., semantic scene\nconstruction, subject-scene fusion, and subject enhancement. The first and last\nstages are performed by TDM and SDM respectively. The subject-scene fusion\nstage, that is the collaboration achieved through a novel and highly effective\nmechanism, Saliency-adaptive Noise Fusion (SNF). Specifically, it is based on\nour key observation that there exists a robust link between classifier-free\nguidance responses and the saliency of generated images. In each time step, SNF\nleverages the unique strengths of each model and allows for the spatial\nblending of predicted noises from both models automatically in a saliency-aware\nmanner. 
Extensive experiments confirm the impressive effectiveness and\nrobustness of the Face-diffuser.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Yibin Wang", "Weizhong Zhang", "Jianwei Zheng", "Cheng Jin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f495"}, "filepath": "data/2404.12391.png", "tags": [], "_media_type": "image", "_rand": 0.9997283414533443, "arXiv_link": "https://arxiv.org/abs/2404.12391", "other_link": "", "title": "On the Content Bias in Fr\u00e9chet Video Distance", "abstract": "Fr\\'echet Video Distance (FVD), a prominent metric for evaluating video\ngeneration models, is known to conflict with human perception occasionally. In\nthis paper, we aim to explore the extent of FVD's bias toward per-frame quality\nover temporal realism and identify its sources. We first quantify the FVD's\nsensitivity to the temporal axis by decoupling the frame and motion quality and\nfind that the FVD increases only slightly with large temporal corruption. We\nthen analyze the generated videos and show that via careful sampling from a\nlarge set of generated videos that do not contain motions, one can drastically\ndecrease FVD without improving the temporal quality. Both studies suggest FVD's\nbias towards the quality of individual frames. We further observe that the bias\ncan be attributed to the features extracted from a supervised video classifier\ntrained on the content-biased dataset. We show that FVD with features extracted\nfrom the recent large-scale self-supervised video models is less biased toward\nimage quality. Finally, we revisit a few real-world examples to validate our\nhypothesis.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Songwei Ge", "Aniruddha Mahapatra", "Gaurav Parmar", "Jun-Yan Zhu", "Jia-Bin Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f496"}, "filepath": "data/2403.11708v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994501739045875, "arXiv_link": "https://arxiv.org/abs/2403.11708v2", "other_link": "https://github.com/1KK077/IDKL.", "title": "Implicit Discriminative Knowledge Learning for Visible-Infrared Person Re-Identification", "abstract": "Visible-Infrared Person Re-identification (VI-ReID) is a challenging\ncross-modal pedestrian retrieval task, due to significant intra-class\nvariations and cross-modal discrepancies among different cameras. Existing\nworks mainly focus on embedding images of different modalities into a unified\nspace to mine modality-shared features. They only seek distinctive information\nwithin these shared features, while ignoring the identity-aware useful\ninformation that is implicit in the modality-specific features. To address this\nissue, we propose a novel Implicit Discriminative Knowledge Learning (IDKL)\nnetwork to uncover and leverage the implicit discriminative information\ncontained within the modality-specific. First, we extract modality-specific and\nmodality-shared features using a novel dual-stream network. 
Then, the\nmodality-specific features undergo purification to reduce their modality style\ndiscrepancies while preserving identity-aware discriminative knowledge.\nSubsequently, this kind of implicit knowledge is distilled into the\nmodality-shared feature to enhance its distinctiveness. Finally, an alignment\nloss is proposed to minimize modality discrepancy on enhanced modality-shared\nfeatures. Extensive experiments on multiple public datasets demonstrate the\nsuperiority of IDKL network over the state-of-the-art methods. Code is\navailable at https://github.com/1KK077/IDKL.", "keywords": ["Multimodal models and vision-language models", "Biometrics and human analysis"], "authors_list": ["kaijie ren", "Lei Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f497"}, "filepath": "data/2312.00703.png", "tags": [], "_media_type": "image", "_rand": 0.9993763171282163, "arXiv_link": "https://arxiv.org/abs/2312.00703", "other_link": "https://github.com/valeoai/PointBeV.", "title": "PointBeV: A Sparse Approach for BeV Predictions", "abstract": "Bird's-eye View (BeV) representations have emerged as the de-facto shared\nspace in driving applications, offering a unified space for sensor data fusion\nand supporting various downstream tasks. However, conventional models use grids\nwith fixed resolution and range and face computational inefficiencies due to\nthe uniform allocation of resources across all cells. To address this, we\npropose PointBeV, a novel sparse BeV segmentation model operating on sparse BeV\ncells instead of dense grids. This approach offers precise control over memory\nusage, enabling the use of long temporal contexts and accommodating\nmemory-constrained platforms. PointBeV employs an efficient two-pass strategy\nfor training, enabling focused computation on regions of interest. At inference\ntime, it can be used with various memory/performance trade-offs and flexibly\nadjusts to new specific use cases. PointBeV achieves state-of-the-art results\non the nuScenes dataset for vehicle, pedestrian, and lane segmentation,\nshowcasing superior performance in static and temporal settings despite being\ntrained solely with sparse signals. We will release our code along with two new\nefficient modules used in the architecture: Sparse Feature Pulling, designed\nfor the effective extraction of features from images to BeV, and Submanifold\nAttention, which enables efficient temporal modeling. Our code is available at\nhttps://github.com/valeoai/PointBeV.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Loick Chambon", "\u00c9loi Zablocki", "Micka\u00ebl Chen", "Florent Bartoccioni", "Patrick P\u00e9rez", "Matthieu Cord"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f498"}, "filepath": "data/2404.03070.png", "tags": [], "_media_type": "image", "_rand": 0.9996169848669683, "arXiv_link": "https://arxiv.org/abs/2404.03070", "other_link": "", "title": "Behind the Veil: Enhanced Indoor 3D Scene Reconstruction with Occluded Surfaces Completion", "abstract": "In this paper, we present a novel indoor 3D reconstruction method with\noccluded surface completion, given a sequence of depth readings. 
Prior\nstate-of-the-art (SOTA) methods only focus on the reconstruction of the visible\nareas in a scene, neglecting the invisible areas due to the occlusions, e.g.,\nthe contact surface between furniture, occluded wall and floor. Our method\ntackles the task of completing the occluded scene surfaces, resulting in a\ncomplete 3D scene mesh. The core idea of our method is learning 3D geometry\nprior from various complete scenes to infer the occluded geometry of an unseen\nscene from solely depth measurements. We design a coarse-fine hierarchical\noctree representation coupled with a dual-decoder architecture, i.e.,\nGeo-decoder and 3D Inpainter, which jointly reconstructs the complete 3D scene\ngeometry. The Geo-decoder with detailed representation at fine levels is\noptimized online for each scene to reconstruct visible surfaces. The 3D\nInpainter with abstract representation at coarse levels is trained offline\nusing various scenes to complete occluded surfaces. As a result, while the\nGeo-decoder is specialized for an individual scene, the 3D Inpainter can be\ngenerally applied across different scenes. We evaluate the proposed method on\nthe 3D Completed Room Scene (3D-CRS) and iTHOR datasets, significantly\noutperforming the SOTA methods by a gain of 16.8% and 24.2% in terms of the\ncompleteness of 3D reconstruction. 3D-CRS dataset including a complete 3D mesh\nof each scene is provided at project webpage.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Su Sun", "Cheng Zhao", "Yuliang Guo", "Ruoyu Wang", "Xinyu Huang", "Yingjie Victor Chen", "Liu Ren"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f499"}, "filepath": "data/2403.14870.png", "tags": [], "_media_type": "image", "_rand": 0.9990554691911802, "arXiv_link": "https://arxiv.org/abs/2403.14870", "other_link": "", "title": "VidLA: Video-Language Alignment at Scale", "abstract": "In this paper, we propose VidLA, an approach for video-language alignment at\nscale. There are two major limitations of previous video-language alignment\napproaches. First, they do not capture both short-range and long-range temporal\ndependencies and typically employ complex hierarchical deep network\narchitectures that are hard to integrate with existing pretrained image-text\nfoundation models. To effectively address this limitation, we instead keep the\nnetwork architecture simple and use a set of data tokens that operate at\ndifferent temporal resolutions in a hierarchical manner, accounting for the\ntemporally hierarchical nature of videos. By employing a simple two-tower\narchitecture, we are able to initialize our video-language model with\npretrained image-text foundation models, thereby boosting the final\nperformance. Second, existing video-language alignment works struggle due to\nthe lack of semantically aligned large-scale training data. To overcome it, we\nleverage recent LLMs to curate the largest video-language dataset to date with\nbetter visual grounding. Furthermore, unlike existing video-text datasets which\nonly contain short clips, our dataset is enriched with video clips of varying\ndurations to aid our temporally hierarchical data tokens in extracting better\nrepresentations at varying temporal scales. 
Overall, empirical results show\nthat our proposed approach surpasses state-of-the-art methods on multiple\nretrieval benchmarks, especially on longer videos, and performs competitively\non classification benchmarks.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Mamshad Nayeem Rizve", "Fan Fei", "Jayakrishnan Unnikrishnan", "Son Dinh Tran", "Benjamin Yao", "Belinda Zeng", "Mubarak Shah", "Trishul Chilimbi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f49a"}, "filepath": "data/2401.02416.png", "tags": [], "_media_type": "image", "_rand": 0.9995077675152453, "arXiv_link": "https://arxiv.org/abs/2401.02416", "other_link": "https://odin-seg.github.io).", "title": "ODIN: A Single Model for 2D and 3D Segmentation", "abstract": "State-of-the-art models on contemporary 3D segmentation benchmarks like\nScanNet consume and label dataset-provided 3D point clouds, obtained through\npost processing of sensed multiview RGB-D images. They are typically trained\nin-domain, forego large-scale 2D pre-training and outperform alternatives that\nfeaturize the posed RGB-D multiview images instead. The gap in performance\nbetween methods that consume posed images versus post-processed 3D point clouds\nhas fueled the belief that 2D and 3D perception require distinct model\narchitectures. In this paper, we challenge this view and propose ODIN\n(Omni-Dimensional INstance segmentation), a model that can segment and label\nboth 2D RGB images and 3D point clouds, using a transformer architecture that\nalternates between 2D within-view and 3D cross-view information fusion. Our\nmodel differentiates 2D and 3D feature operations through the positional\nencodings of the tokens involved, which capture pixel coordinates for 2D patch\ntokens and 3D coordinates for 3D feature tokens. ODIN achieves state-of-the-art\nperformance on ScanNet200, Matterport3D and AI2THOR 3D instance segmentation\nbenchmarks, and competitive performance on ScanNet, S3DIS and COCO. It\noutperforms all previous works by a wide margin when the sensed 3D point cloud\nis used in place of the point cloud sampled from 3D mesh. When used as the 3D\nperception engine in an instructable embodied agent architecture, it sets a new\nstate-of-the-art on the TEACh action-from-dialogue benchmark. 
Our code and\ncheckpoints can be found at the project website (https://odin-seg.github.io).", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Ayush Jain", "Pushkal Katara", "Nikolaos Gkanatsios", "Adam Harley", "Gabriel Sarch", "Kriti Aggarwal", "Vishrav Chaudhary", "Katerina Fragkiadaki"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f49b"}, "filepath": "data/2401.00901.png", "tags": [], "_media_type": "image", "_rand": 0.9994546411204929, "arXiv_link": "https://arxiv.org/abs/2401.00901", "other_link": "", "title": "VideoGrounding-DINO: Towards Open-Vocabulary Spatio-Temporal Video Grounding", "abstract": "Video grounding aims to localize a spatio-temporal section in a video\ncorresponding to an input text query. This paper addresses a critical\nlimitation in current video grounding methodologies by introducing an\nOpen-Vocabulary Spatio-Temporal Video Grounding task. Unlike prevalent\nclosed-set approaches that struggle with open-vocabulary scenarios due to\nlimited training data and predefined vocabularies, our model leverages\npre-trained representations from foundational spatial grounding models. This\nempowers it to effectively bridge the semantic gap between natural language and\ndiverse visual content, achieving strong performance in closed-set and\nopen-vocabulary settings. Our contributions include a novel spatio-temporal\nvideo grounding model, surpassing state-of-the-art results in closed-set\nevaluations on multiple datasets and demonstrating superior performance in\nopen-vocabulary scenarios. Notably, the proposed model outperforms\nstate-of-the-art methods in closed-set settings on VidSTG (Declarative and\nInterrogative) and HC-STVG (V1 and V2) datasets. Furthermore, in\nopen-vocabulary evaluations on HC-STVG V1 and YouCook-Interactions, our model\nsurpasses the recent best-performing models by $4.88$ m_vIoU and $1.83\\%$\naccuracy, demonstrating its efficacy in handling diverse linguistic and visual\nconcepts for improved video understanding. Our codes will be publicly released.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Syed Talal Wasim", "Muzammal Naseer", "Salman Khan", "Ming-Hsuan Yang", "Fahad Shahbaz Khan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f49c"}, "filepath": "data/2403.06495.png", "tags": [], "_media_type": "image", "_rand": 0.9999381574023196, "arXiv_link": "https://arxiv.org/abs/2403.06495", "other_link": "https://github.com/mala-lab/InCTRL.", "title": "Toward Generalist Anomaly Detection via In-context Residual Learning with Few-shot Sample Prompts", "abstract": "This paper explores the problem of Generalist Anomaly Detection (GAD), aiming\nto train one single detection model that can generalize to detect anomalies in\ndiverse datasets from different application domains without any further\ntraining on the target data. 
Some recent studies have shown that large\npre-trained Visual-Language Models (VLMs) like CLIP have strong generalization\ncapabilities on detecting industrial defects from various datasets, but their\nmethods rely heavily on handcrafted text prompts about defects, making them\ndifficult to generalize to anomalies in other applications, e.g., medical image\nanomalies or semantic anomalies in natural images. In this work, we propose to\ntrain a GAD model with few-shot normal images as sample prompts for AD on\ndiverse datasets on the fly. To this end, we introduce a novel approach that\nlearns an in-context residual learning model for GAD, termed InCTRL. It is\ntrained on an auxiliary dataset to discriminate anomalies from normal samples\nbased on a holistic evaluation of the residuals between query images and\nfew-shot normal sample prompts. Regardless of the datasets, per definition of\nanomaly, larger residuals are expected for anomalies than normal samples,\nthereby enabling InCTRL to generalize across different domains without further\ntraining. Comprehensive experiments on nine AD datasets are performed to\nestablish a GAD benchmark that encapsulate the detection of industrial defect\nanomalies, medical anomalies, and semantic anomalies in both one-vs-all and\nmulti-class setting, on which InCTRL is the best performer and significantly\noutperforms state-of-the-art competing methods. Code is available at\nhttps://github.com/mala-lab/InCTRL.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Jiawen Zhu", "Guansong Pang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f49d"}, "filepath": "data/2403.00303.png", "tags": [], "_media_type": "image", "_rand": 0.9997281186173218, "arXiv_link": "https://arxiv.org/abs/2403.00303", "other_link": "https://github.com/PriNing/ODM.", "title": "ODM: A Text-Image Further Alignment Pre-training Approach for Scene Text Detection and Spotting", "abstract": "In recent years, text-image joint pre-training techniques have shown\npromising results in various tasks. However, in Optical Character Recognition\n(OCR) tasks, aligning text instances with their corresponding text regions in\nimages poses a challenge, as it requires effective alignment between text and\nOCR-Text (referring to the text in images as OCR-Text to distinguish from the\ntext in natural language) rather than a holistic understanding of the overall\nimage content. In this paper, we propose a new pre-training method called\nOCR-Text Destylization Modeling (ODM) that transfers diverse styles of text\nfound in images to a uniform style based on the text prompt. With ODM, we\nachieve better alignment between text and OCR-Text and enable pre-trained\nmodels to adapt to the complex and diverse styles of scene text detection and\nspotting tasks. Additionally, we have designed a new labeling generation method\nspecifically for ODM and combined it with our proposed Text-Controller module\nto address the challenge of annotation costs in OCR tasks, allowing a larger\namount of unlabeled data to participate in pre-training. Extensive experiments\non multiple public datasets demonstrate that our method significantly improves\nperformance and outperforms current pre-training methods in scene text\ndetection and spotting tasks. 
Code is available at\nhttps://github.com/PriNing/ODM.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Chen Duan", "Pei Fu", "Shan Guo", "Qianyi Jiang", "Xiaoming Wei"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f49e"}, "filepath": "data/2311.18651.png", "tags": [], "_media_type": "image", "_rand": 0.9996550878949919, "arXiv_link": "https://arxiv.org/abs/2311.18651", "other_link": "", "title": "LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning", "abstract": "Recent advances in Large Multimodal Models (LMM) have made it possible for\nvarious applications in human-machine interactions. However, developing LMMs\nthat can comprehend, reason, and plan in complex and diverse 3D environments\nremains a challenging topic, especially considering the demand for\nunderstanding permutation-invariant point cloud 3D representations of the 3D\nscene. Existing works seek help from multi-view images, and project 2D features\nto 3D space as 3D scene representations. This, however, leads to huge\ncomputational overhead and performance degradation. In this paper, we present\nLL3DA, a Large Language 3D Assistant that takes point cloud as direct input and\nrespond to both textual-instructions and visual-prompts. This help LMMs better\ncomprehend human interactions and further help to remove the ambiguities in\ncluttered 3D scenes. Experiments show that LL3DA achieves remarkable results,\nand surpasses various 3D vision-language models on both 3D Dense Captioning and\n3D Question Answering.", "keywords": ["Large multimodal models and prompting techniques", "Multimodal models and vision-language models", "Scene analysis and understanding"], "authors_list": ["Sijin Chen", "Xin Chen", "Chi Zhang", "Mingsheng Li", "Gang Yu", "Hao Fei", "Hongyuan Zhu", "Jiayuan Fan", "Tao Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f49f"}, "filepath": "data/2403.06452.png", "tags": [], "_media_type": "image", "_rand": 0.9991414750924261, "arXiv_link": "https://arxiv.org/abs/2403.06452", "other_link": "https://github.com/mulns/Text2QR}", "title": "Text2QR: Harmonizing Aesthetic Customization and Scanning Robustness for Text-Guided QR Code Generation", "abstract": "In the digital era, QR codes serve as a linchpin connecting virtual and\nphysical realms. Their pervasive integration across various applications\nhighlights the demand for aesthetically pleasing codes without compromised\nscannability. However, prevailing methods grapple with the intrinsic challenge\nof balancing customization and scannability. Notably, stable-diffusion models\nhave ushered in an epoch of high-quality, customizable content generation. This\npaper introduces Text2QR, a pioneering approach leveraging these advancements\nto address a fundamental challenge: concurrently achieving user-defined\naesthetics and scanning robustness. 
To ensure stable generation of aesthetic QR\ncodes, we introduce the QR Aesthetic Blueprint (QAB) module, generating a\nblueprint image exerting control over the entire generation process.\nSubsequently, the Scannability Enhancing Latent Refinement (SELR) process\nrefines the output iteratively in the latent space, enhancing scanning\nrobustness. This approach harnesses the potent generation capabilities of\nstable-diffusion models, navigating the trade-off between image aesthetics and\nQR code scannability. Our experiments demonstrate the seamless fusion of visual\nappeal with the practical utility of aesthetic QR codes, markedly outperforming\nprior methods. Codes are available at \\url{https://github.com/mulns/Text2QR}", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Guangyang Wu", "Xiaohong Liu", "Jun Jia", "Xuehao Cui", "Guangtao Zhai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4a0"}, "filepath": "data/2312.09147.png", "tags": [], "_media_type": "image", "_rand": 0.9998069244082272, "arXiv_link": "https://arxiv.org/abs/2312.09147", "other_link": "https://zouzx.github.io/TriplaneGaussian/.", "title": "Triplane Meets Gaussian Splatting: Fast and Generalizable Single-View 3D Reconstruction with Transformers", "abstract": "Recent advancements in 3D reconstruction from single images have been driven\nby the evolution of generative models. Prominent among these are methods based\non Score Distillation Sampling (SDS) and the adaptation of diffusion models in\nthe 3D domain. Despite their progress, these techniques often face limitations\ndue to slow optimization or rendering processes, leading to extensive training\nand optimization times. In this paper, we introduce a novel approach for\nsingle-view reconstruction that efficiently generates a 3D model from a single\nimage via feed-forward inference. Our method utilizes two transformer-based\nnetworks, namely a point decoder and a triplane decoder, to reconstruct 3D\nobjects using a hybrid Triplane-Gaussian intermediate representation. This\nhybrid representation strikes a balance, achieving a faster rendering speed\ncompared to implicit representations while simultaneously delivering superior\nrendering quality than explicit representations. The point decoder is designed\nfor generating point clouds from single images, offering an explicit\nrepresentation which is then utilized by the triplane decoder to query Gaussian\nfeatures for each point. This design choice addresses the challenges associated\nwith directly regressing explicit 3D Gaussian attributes characterized by their\nnon-structural nature. Subsequently, the 3D Gaussians are decoded by an MLP to\nenable rapid rendering through splatting. Both decoders are built upon a\nscalable, transformer-based architecture and have been efficiently trained on\nlarge-scale 3D datasets. The evaluations conducted on both synthetic datasets\nand real-world images demonstrate that our method not only achieves higher\nquality but also ensures a faster runtime in comparison to previous\nstate-of-the-art techniques. 
Please see our project page at\nhttps://zouzx.github.io/TriplaneGaussian/.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Zi-Xin Zou", "Zhipeng Yu", "Yuan-Chen Guo", "Yangguang Li", "Yan-Pei Cao", "Ding Liang", "Song-Hai Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4a1"}, "filepath": "data/2405.12509.png", "tags": [], "_media_type": "image", "_rand": 0.9995425255334257, "arXiv_link": "https://arxiv.org/abs/2405.12509", "other_link": "", "title": "Active Object Detection with Knowledge Aggregation and Distillation from Large Models", "abstract": "Accurately detecting active objects undergoing state changes is essential for\ncomprehending human interactions and facilitating decision-making. The existing\nmethods for active object detection (AOD) primarily rely on visual appearance\nof the objects within input, such as changes in size, shape and relationship\nwith hands. However, these visual changes can be subtle, posing challenges,\nparticularly in scenarios with multiple distracting no-change instances of the\nsame category. We observe that the state changes are often the result of an\ninteraction being performed upon the object, thus propose to use informed\npriors about object related plausible interactions (including semantics and\nvisual appearance) to provide more reliable cues for AOD. Specifically, we\npropose a knowledge aggregation procedure to integrate the aforementioned\ninformed priors into oracle queries within the teacher decoder, offering more\nobject affordance commonsense to locate the active object. To streamline the\ninference process and reduce extra knowledge inputs, we propose a knowledge\ndistillation approach that encourages the student decoder to mimic the\ndetection capabilities of the teacher decoder using the oracle query by\nreplicating its predictions and attention. Our proposed framework achieves\nstate-of-the-art performance on four datasets, namely Ego4D, Epic-Kitchens,\nMECCANO, and 100DOH, which demonstrates the effectiveness of our approach in\nimproving AOD.", "keywords": [], "authors_list": ["Dejie Yang", "Yang Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4a2"}, "filepath": "data/2405.11618.png", "tags": [], "_media_type": "image", "_rand": 0.9999270175373697, "arXiv_link": "https://arxiv.org/abs/2405.11618", "other_link": "https://github.com/mahmoodlab/TANGLE.", "title": "Transcriptomics-guided Slide Representation Learning in Computational Pathology", "abstract": "Self-supervised learning (SSL) has been successful in building patch\nembeddings of small histology images (e.g., 224x224 pixels), but scaling these\nmodels to learn slide embeddings from the entirety of giga-pixel whole-slide\nimages (WSIs) remains challenging. Here, we leverage complementary information\nfrom gene expression profiles to guide slide representation learning using\nmultimodal pre-training. Expression profiles constitute highly detailed\nmolecular descriptions of a tissue that we hypothesize offer a strong\ntask-agnostic training signal for learning slide embeddings. 
Our slide and\nexpression (S+E) pre-training strategy, called Tangle, employs\nmodality-specific encoders, the outputs of which are aligned via contrastive\nlearning. Tangle was pre-trained on samples from three different organs: liver\n(n=6,597 S+E pairs), breast (n=1,020), and lung (n=1,012) from two different\nspecies (Homo sapiens and Rattus norvegicus). Across three independent test\ndatasets consisting of 1,265 breast WSIs, 1,946 lung WSIs, and 4,584 liver\nWSIs, Tangle shows significantly better few-shot performance compared to\nsupervised and SSL baselines. When assessed using prototype-based\nclassification and slide retrieval, Tangle also shows a substantial performance\nimprovement over all baselines. Code available at\nhttps://github.com/mahmoodlab/TANGLE.", "keywords": ["Multimodal models and vision-language models", "Medical imaging and biological vision", "Deep learning architectures and techniques"], "authors_list": ["Guillaume Jaume", "Lukas Oldenburg", "Anurag Vaidya", "Richard J. Chen", "Drew F. K. Williamson", "Thomas Peeters", "Andrew Song", "Faisal Mahmood"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4a3"}, "filepath": "data/2404.09001.png", "tags": [], "_media_type": "image", "_rand": 0.999622519609844, "arXiv_link": "https://arxiv.org/abs/2404.09001", "other_link": "", "title": "Smart Help: Strategic Opponent Modeling for Proactive and Adaptive Robot Assistance in Households", "abstract": "Despite the significant demand for assistive technology among vulnerable\ngroups (e.g., the elderly, children, and the disabled) in daily tasks, research\ninto advanced AI-driven assistive solutions that genuinely accommodate their\ndiverse needs remains sparse. Traditional human-machine interaction tasks often\nrequire machines to simply help without nuanced consideration of human\nabilities and feelings, such as their opportunity for practice and learning,\nsense of self-improvement, and self-esteem. Addressing this gap, we define a\npivotal and novel challenge Smart Help, which aims to provide proactive yet\nadaptive support to human agents with diverse disabilities and dynamic goals in\nvarious tasks and environments. To establish this challenge, we leverage\nAI2-THOR to build a new interactive 3D realistic household environment for the\nSmart Help task. We introduce an innovative opponent modeling module that\nprovides a nuanced understanding of the main agent's capabilities and goals, in\norder to optimize the assisting agent's helping policy. Rigorous experiments\nvalidate the efficacy of our model components and show the superiority of our\nholistic approach against established baselines. 
Our findings illustrate the\npotential of AI-imbued assistive robots in improving the well-being of\nvulnerable groups.", "keywords": ["Scene analysis and understanding", "Vision applications for social good and ethics"], "authors_list": ["Zhihao Cao", "ZiDong Wang", "Siwen Xie", "Anji Liu", "Lifeng Fan"], "category_name": "Robotics", "all_categories": ["Robotics", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4a4"}, "filepath": "data/2405.18706.png", "tags": [], "_media_type": "image", "_rand": 0.999939554391193, "arXiv_link": "https://arxiv.org/abs/2405.18706", "other_link": "", "title": "FocSAM: Delving Deeply into Focused Objects in Segmenting Anything", "abstract": "The Segment Anything Model (SAM) marks a notable milestone in segmentation\nmodels, highlighted by its robust zero-shot capabilities and ability to handle\ndiverse prompts. SAM follows a pipeline that separates interactive segmentation\ninto image preprocessing through a large encoder and interactive inference via\na lightweight decoder, ensuring efficient real-time performance. However, SAM\nfaces stability issues in challenging samples upon this pipeline. These issues\narise from two main factors. Firstly, the image preprocessing disables SAM from\ndynamically using image-level zoom-in strategies to refocus on the target\nobject during interaction. Secondly, the lightweight decoder struggles to\nsufficiently integrate interactive information with image embeddings. To\naddress these two limitations, we propose FocSAM with a pipeline redesigned on\ntwo pivotal aspects. First, we propose Dynamic Window Multi-head Self-Attention\n(Dwin-MSA) to dynamically refocus SAM's image embeddings on the target object.\nDwin-MSA localizes attention computations around the target object, enhancing\nobject-related embeddings with minimal computational overhead. Second, we\npropose Pixel-wise Dynamic ReLU (P-DyReLU) to enable sufficient integration of\ninteractive information from a few initial clicks that have significant impacts\non the overall segmentation results. Experimentally, FocSAM augments SAM's\ninteractive segmentation performance to match the existing state-of-the-art\nmethod in segmentation quality, requiring only about 5.6% of this method's\ninference time on CPUs.", "keywords": ["Efficient and scalable vision"], "authors_list": ["You Huang", "Zongyu Lan", "Liujuan Cao", "Xianming Lin", "Shengchuan Zhang", "Guannan Jiang", "Rongrong Ji"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4a5"}, "filepath": "data/2403.06462v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997456939064743, "arXiv_link": "https://arxiv.org/abs/2403.06462v2", "other_link": "https://github.com/Gavinwxy/DDFP.", "title": "Towards the Uncharted: Density-Descending Feature Perturbation for Semi-supervised Semantic Segmentation", "abstract": "Semi-supervised semantic segmentation allows model to mine effective\nsupervision from unlabeled data to complement label-guided training. 
Recent\nresearch has primarily focused on consistency regularization techniques,\nexploring perturbation-invariant training at both the image and feature levels.\nIn this work, we propose a novel feature-level consistency learning framework\nnamed Density-Descending Feature Perturbation (DDFP). Inspired by the\nlow-density separation assumption in semi-supervised learning, our key insight\nis that feature density can shed light on the most promising direction for\nthe segmentation classifier to explore, which is the regions with lower\ndensity. We propose to shift features with confident predictions towards\nlower-density regions by perturbation injection. The perturbed features are\nthen supervised by the predictions on the original features, thereby compelling\nthe classifier to explore less dense regions to effectively regularize the\ndecision boundary. Central to our method is the estimation of feature density.\nTo this end, we introduce a lightweight density estimator based on normalizing\nflow, allowing for efficient capture of the feature density distribution in an\nonline manner. By extracting gradients from the density estimator, we can\ndetermine the direction towards less dense regions for each feature. The\nproposed DDFP outperforms other designs on feature-level perturbations and\nshows state-of-the-art performance on both the Pascal VOC and Cityscapes datasets\nunder various partition protocols. The project is available at\nhttps://github.com/Gavinwxy/DDFP.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xiaoyang Wang", "Huihui Bai", "Limin Yu", "Yao Zhao", "Jimin Xiao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4a6"}, "filepath": "data/2404.00262.png", "tags": [], "_media_type": "image", "_rand": 0.9995853919922748, "arXiv_link": "https://arxiv.org/abs/2404.00262", "other_link": "", "title": "Rethinking the Region Classification in Open-Vocabulary Semantic Segmentation: An Image-to-Image View", "abstract": "Open-vocabulary semantic segmentation (OVS) aims to segment images of\narbitrary categories specified by class labels or captions. However, most\nprevious best-performing methods, whether pixel grouping methods or region\nrecognition methods, suffer from false matches between image features and\ncategory labels. We attribute this to the natural gap between the textual\nfeatures and visual features. In this work, we rethink how to mitigate false\nmatches from the perspective of image-to-image matching and propose a novel\nrelation-aware intra-modal matching (RIM) framework for OVS based on visual\nfoundation models. RIM achieves robust region classification by first\nconstructing diverse image-modal reference features and then matching them with\nregion features based on relation-aware ranking distribution. The proposed RIM\nenjoys several merits. First, the intra-modal reference features are better\naligned, circumventing potential ambiguities that may arise in cross-modal\nmatching. Second, the ranking-based matching process harnesses the structure\ninformation implicit in the inter-class relationships, making it more robust\nthan comparing individually. 
Extensive experiments on three benchmarks\ndemonstrate that RIM outperforms previous state-of-the-art methods by large\nmargins, obtaining a lead of more than 10% in mIoU on PASCAL VOC benchmark.", "keywords": ["Multimodal models and vision-language models", "Deep learning architectures and techniques"], "authors_list": ["Yuan Wang", "Rui Sun", "Naisong Luo", "Yuwen Pan", "Tianzhu Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4a7"}, "filepath": "data/2401.08407.png", "tags": [], "_media_type": "image", "_rand": 0.9998608356093293, "arXiv_link": "https://arxiv.org/abs/2401.08407", "other_link": "https://github.com/niejiahao1998/IFA.", "title": "Addressing Background Context Bias in Few-Shot Segmentation through Iterative Modulation", "abstract": "Cross-Domain Few-Shot Segmentation (CD-FSS) poses the challenge of segmenting\nnovel categories from a distinct domain using only limited exemplars. In this\npaper, we undertake a comprehensive study of CD-FSS and uncover two crucial\ninsights: (i) the necessity of a fine-tuning stage to effectively transfer the\nlearned meta-knowledge across domains, and (ii) the overfitting risk during the\nna\\\"ive fine-tuning due to the scarcity of novel category examples. With these\ninsights, we propose a novel cross-domain fine-tuning strategy that addresses\nthe challenging CD-FSS tasks. We first design Bi-directional Few-shot\nPrediction (BFP), which establishes support-query correspondence in a\nbi-directional manner, crafting augmented supervision to reduce the overfitting\nrisk. Then we further extend BFP into Iterative Few-shot Adaptor (IFA), which\nis a recursive framework to capture the support-query correspondence\niteratively, targeting maximal exploitation of supervisory signals from the\nsparse novel category samples. Extensive empirical evaluations show that our\nmethod significantly outperforms the state-of-the-arts (+7.8\\%), which verifies\nthat IFA tackles the cross-domain challenges and mitigates the overfitting\nsimultaneously. The code is available at: https://github.com/niejiahao1998/IFA.", "keywords": [], "authors_list": ["Lanyun Zhu", "Tianrun Chen", "Jianxiong Yin", "Simon See", "Jun Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4a8"}, "filepath": "data/2311.15826.png", "tags": [], "_media_type": "image", "_rand": 0.9990715039545006, "arXiv_link": "https://arxiv.org/abs/2311.15826", "other_link": "https://github.com/mbzuai-oryx/geochat.", "title": "GeoChat: Grounded Large Vision-Language Model for Remote Sensing", "abstract": "Recent advancements in Large Vision-Language Models (VLMs) have shown great\npromise in natural image domains, allowing users to hold a dialogue about given\nvisual content. However, such general-domain VLMs perform poorly for Remote\nSensing (RS) scenarios, leading to inaccurate or fabricated information when\npresented with RS domain-specific queries. Such a behavior emerges due to the\nunique challenges introduced by RS imagery. For example, to handle\nhigh-resolution RS imagery with diverse scale changes across categories and\nmany small objects, region-level reasoning is necessary alongside holistic\nscene interpretation. 
Furthermore, the lack of domain-specific multimodal\ninstruction following data as well as strong backbone models for RS make it\nhard for the models to align their behavior with user queries. To address these\nlimitations, we propose GeoChat - the first versatile remote sensing VLM that\noffers multitask conversational capabilities with high-resolution RS images.\nSpecifically, GeoChat can not only answer image-level queries but also accepts\nregion inputs to hold region-specific dialogue. Furthermore, it can visually\nground objects in its responses by referring to their spatial coordinates. To\naddress the lack of domain-specific datasets, we generate a novel RS multimodal\ninstruction-following dataset by extending image-text pairs from existing\ndiverse RS datasets. We establish a comprehensive benchmark for RS multitask\nconversations and compare with a number of baseline methods. GeoChat\ndemonstrates robust zero-shot performance on various RS tasks, e.g., image and\nregion captioning, visual question answering, scene classification, visually\ngrounded conversations and referring detection. Our code is available at\nhttps://github.com/mbzuai-oryx/geochat.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Kartik Kuckreja", "Muhammad Sohail Danish", "Muzammal Naseer", "Abhijit Das", "Salman Khan", "Fahad Shahbaz Khan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4a9"}, "filepath": "data/2312.08631.png", "tags": [], "_media_type": "image", "_rand": 0.999153134293832, "arXiv_link": "https://arxiv.org/abs/2312.08631", "other_link": "", "title": "RankMatch: Exploring the Better Consistency Regularization for Semi-supervised Semantic Segmentation", "abstract": "Semi-supervised semantic segmentation aims to utilize limited labeled images\nand abundant unlabeled images to achieve label-efficient learning, wherein the\nweak-to-strong consistency regularization framework, popularized by FixMatch,\nis widely used as a benchmark scheme. Despite its effectiveness, we observe\nthat such scheme struggles with satisfactory segmentation for the local\nregions. This can be because it originally stems from the image classification\ntask and lacks specialized mechanisms to capture fine-grained local semantics\nthat prioritizes in dense prediction. To address this issue, we propose a novel\nframework called \\texttt{MaskMatch}, which enables fine-grained locality\nlearning to achieve better dense segmentation. On top of the original\nteacher-student framework, we design a masked modeling proxy task that\nencourages the student model to predict the segmentation given the unmasked\nimage patches (even with 30\\% only) and enforces the predictions to be\nconsistent with pseudo-labels generated by the teacher model using the complete\nimage. Such design is motivated by the intuition that if the predictions are\nmore consistent given insufficient neighboring information, stronger\nfine-grained locality perception is achieved. Besides, recognizing the\nimportance of reliable pseudo-labels in the above locality learning and the\noriginal consistency learning scheme, we design a multi-scale ensembling\nstrategy that considers context at different levels of abstraction for\npseudo-label generation. 
Extensive experiments on benchmark datasets\ndemonstrate the superiority of our method against previous approaches and its\nplug-and-play flexibility.", "keywords": [], "authors_list": ["Huayu Mai", "Rui Sun", "Tianzhu Zhang", "Feng Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4aa"}, "filepath": "data/2403.19490.png", "tags": [], "_media_type": "image", "_rand": 0.999047726623135, "arXiv_link": "https://arxiv.org/abs/2403.19490", "other_link": "", "title": "Jointly Training and Pruning CNNs via Learnable Agent Guidance and Alignment", "abstract": "Structural model pruning is a prominent approach used for reducing the\ncomputational cost of Convolutional Neural Networks (CNNs) before their\ndeployment on resource-constrained devices. Yet, the majority of proposed ideas\nrequire a pretrained model before pruning, which is costly to secure. In this\npaper, we propose a novel structural pruning approach to jointly learn the\nweights and structurally prune architectures of CNN models. The core element of\nour method is a Reinforcement Learning (RL) agent whose actions determine the\npruning ratios of the CNN model's layers, and the resulting model's accuracy\nserves as its reward. We conduct the joint training and pruning by iteratively\ntraining the model's weights and the agent's policy, and we regularize the\nmodel's weights to align with the selected structure by the agent. The evolving\nmodel's weights result in a dynamic reward function for the agent, which\nprevents using prominent episodic RL methods with stationary environment\nassumption for our purpose. We address this challenge by designing a mechanism\nto model the complex changing dynamics of the reward function and provide a\nrepresentation of it to the RL agent. To do so, we take a learnable embedding\nfor each training epoch and employ a recurrent model to calculate a\nrepresentation of the changing environment. We train the recurrent model and\nembeddings using a decoder model to reconstruct observed rewards. Such a design\nempowers our agent to effectively leverage episodic observations along with the\nenvironment representations to learn a proper policy to determine performant\nsub-networks of the CNN model. Our extensive experiments on CIFAR-10 and\nImageNet using ResNets and MobileNets demonstrate the effectiveness of our\nmethod.", "keywords": [], "authors_list": ["Alireza Ganjdanesh", "Shangqian Gao", "Heng Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4ab"}, "filepath": "data/2312.07531.png", "tags": [], "_media_type": "image", "_rand": 0.9993483813854868, "arXiv_link": "https://arxiv.org/abs/2312.07531", "other_link": "http://wham.is.tue.mpg.de/", "title": "WHAM: Reconstructing World-grounded Humans with Accurate 3D Motion", "abstract": "The estimation of 3D human motion from video has progressed rapidly but\ncurrent methods still have several key limitations. First, most methods\nestimate the human in camera coordinates. Second, prior work on estimating\nhumans in global coordinates often assumes a flat ground plane and produces\nfoot sliding. 
Third, the most accurate methods rely on computationally\nexpensive optimization pipelines, limiting their use to offline applications.\nFinally, existing video-based methods are surprisingly less accurate than\nsingle-frame methods. We address these limitations with WHAM (World-grounded\nHumans with Accurate Motion), which accurately and efficiently reconstructs 3D\nhuman motion in a global coordinate system from video. WHAM learns to lift 2D\nkeypoint sequences to 3D using motion capture data and fuses this with video\nfeatures, integrating motion context and visual information. WHAM exploits\ncamera angular velocity estimated from a SLAM method together with human motion\nto estimate the body's global trajectory. We combine this with a contact-aware\ntrajectory refinement method that lets WHAM capture human motion in diverse\nconditions, such as climbing stairs. WHAM outperforms all existing 3D human\nmotion recovery methods across multiple in-the-wild benchmarks. Code will be\navailable for research purposes at http://wham.is.tue.mpg.de/", "keywords": ["Efficient and scalable vision", "Biometrics and human analysis"], "authors_list": ["Soyong Shin", "Juyong Kim", "Eni Halilaj", "Michael J. Black"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4ac"}, "filepath": "data/2403.18922.png", "tags": [], "_media_type": "image", "_rand": 0.9997217518200929, "arXiv_link": "https://arxiv.org/abs/2403.18922", "other_link": "", "title": "Lift3D: Zero-Shot Lifting of Any 2D Vision Model to 3D", "abstract": "In recent years, there has been an explosion of 2D vision models for numerous\ntasks such as semantic segmentation, style transfer or scene editing, enabled\nby large-scale 2D image datasets. At the same time, there has been renewed\ninterest in 3D scene representations such as neural radiance fields from\nmulti-view images. However, the availability of 3D or multiview data is still\nsubstantially limited compared to 2D image datasets, making extending 2D vision\nmodels to 3D data highly desirable but also very challenging. Indeed, extending\na single 2D vision operator like scene editing to 3D typically requires a\nhighly creative method specialized to that task and often requires per-scene\noptimization. In this paper, we ask the question of whether any 2D vision model\ncan be lifted to make 3D consistent predictions. We answer this question in the\naffirmative; our new Lift3D method trains to predict unseen views on feature\nspaces generated by a few visual models (i.e. DINO and CLIP), but then\ngeneralizes to novel vision operators and tasks, such as style transfer,\nsuper-resolution, open vocabulary segmentation and image colorization; for some\nof these tasks, there is no comparable previous 3D method. 
In many cases, we\neven outperform state-of-the-art methods specialized for the task in question.\nMoreover, Lift3D is a zero-shot method, in the sense that it requires no\ntask-specific training, nor scene-specific optimization.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Mukund Varma T", "Peihao Wang", "Zhiwen Fan", "Zhangyang Wang", "Hao Su", "Ravi Ramamoorthi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4ad"}, "filepath": "data/2405.20654v1.png", "tags": [], "_media_type": "image", "_rand": 0.999309527756148, "arXiv_link": "https://arxiv.org/html/2405.20654v1", "other_link": "", "title": "Querying as Prompt: Parameter-Efficient Learning for Multimodal Language Model", "abstract": "Effective passage retrieval and reranking methods have been widely utilized\nto identify suitable candidates in open-domain question answering tasks, recent\nstudies have resorted to LLMs for reranking the retrieved passages by the\nlog-likelihood of the question conditioned on each passage. Although these\nmethods have demonstrated promising results, the performance is notably\nsensitive to the human-written prompt (or hard prompt), and fine-tuning LLMs\ncan be computationally intensive and time-consuming. Furthermore, this approach\nlimits the leverage of question-passage relevance pairs and passage-specific\nknowledge to enhance the ranking capabilities of LLMs. In this paper, we\npropose passage-specific prompt tuning for reranking in open-domain question\nanswering (PSPT): a parameter-efficient method that fine-tunes learnable\npassage-specific soft prompts, incorporating passage-specific knowledge from a\nlimited set of question-passage relevance pairs. The method involves ranking\nretrieved passages based on the log-likelihood of the model generating the\nquestion conditioned on each passage and the learned soft prompt. We conducted\nextensive experiments utilizing the Llama-2-chat-7B model across three publicly\navailable open-domain question answering datasets and the results demonstrate\nthe effectiveness of the proposed approach.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Tian Liang", "Jing Huang", "Ming Kong", "Luyuan Chen", "Qiang Zhu"], "category_name": "Computation and Language", "all_categories": ["Computation and Language", "Information Retrieval"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4ae"}, "filepath": "data/2404.08450.png", "tags": [], "_media_type": "image", "_rand": 0.9999019487323267, "arXiv_link": "https://arxiv.org/abs/2404.08450", "other_link": "https://github.com/Xianhua-He/cvpr2024-face-anti-spoofing-challenge.", "title": "PairDETR : Joint Detection and Association of Human Bodies and Faces", "abstract": "Face recognition systems are frequently subjected to a variety of physical\nand digital attacks of different types. Previous methods have achieved\nsatisfactory performance in scenarios that address physical attacks and digital\nattacks, respectively. However, few methods are considered to integrate a model\nthat simultaneously addresses both physical and digital attacks, implying the\nnecessity to develop and maintain multiple models. 
To jointly detect physical\nand digital attacks within a single model, we propose an innovative approach\nthat can adapt to any network architecture. Our approach mainly contains two\ntypes of data augmentation, which we call Simulated Physical Spoofing Clues\naugmentation (SPSC) and Simulated Digital Spoofing Clues augmentation (SDSC).\nSPSC and SDSC augment live samples into simulated attack samples by simulating\nspoofing clues of physical and digital attacks, respectively, which\nsignificantly improve the capability of the model to detect \"unseen\" attack\ntypes. Extensive experiments show that SPSC and SDSC can achieve\nstate-of-the-art generalization in Protocols 2.1 and 2.2 of the UniAttackData\ndataset, respectively. Our method won first place in \"Unified Physical-Digital\nFace Attack Detection\" of the 5th Face Anti-spoofing Challenge@CVPR2024. Our\nfinal submission obtains 3.75% APCER, 0.93% BPCER, and 2.34% ACER,\nrespectively. Our code is available at\nhttps://github.com/Xianhua-He/cvpr2024-face-anti-spoofing-challenge.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Ammar Ali", "Georgii Gaikov", "Denis Rybalchenko", "Alexander Chigorin", "Ivan Laptev", "Sergey Zagoruyko"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4af"}, "filepath": "data/2402.17364v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992873525852247, "arXiv_link": "https://arxiv.org/abs/2402.17364v1", "other_link": "", "title": "Learning Dynamic Tetrahedra for High-Quality Talking Head Synthesis", "abstract": "Recent works in implicit representations, such as Neural Radiance Fields\n(NeRF), have advanced the generation of realistic and animatable head avatars\nfrom video sequences. These implicit methods are still confronted by visual\nartifacts and jitters, since the lack of explicit geometric constraints poses a\nfundamental challenge in accurately modeling complex facial deformations. In\nthis paper, we introduce Dynamic Tetrahedra (DynTet), a novel hybrid\nrepresentation that encodes explicit dynamic meshes by neural networks to\nensure geometric consistency across various motions and viewpoints. DynTet is\nparameterized by the coordinate-based networks which learn signed distance,\ndeformation, and material texture, anchoring the training data into a\npredefined tetrahedra grid. Leveraging Marching Tetrahedra, DynTet efficiently\ndecodes textured meshes with a consistent topology, enabling fast rendering\nthrough a differentiable rasterizer and supervision via a pixel loss. To\nenhance training efficiency, we incorporate classical 3D Morphable Models to\nfacilitate geometry learning and define a canonical space for simplifying\ntexture learning. These advantages are readily achievable owing to the\neffective geometric representation employed in DynTet. Compared with prior\nworks, DynTet demonstrates significant improvements in fidelity, lip\nsynchronization, and real-time performance according to various metrics. 
Beyond\nproducing stable and visually appealing synthesis videos, our method also\noutputs the dynamic meshes which is promising to enable many emerging\napplications.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Zicheng Zhang", "RUOBING ZHENG", "Bonan Li", "Congying Han", "Tianqi Li", "Meng Wang", "Tiande Guo", "Jingdong Chen", "Ziwen Liu", "Ming Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4b0"}, "filepath": "data/2403.16997.png", "tags": [], "_media_type": "image", "_rand": 0.9995499750246001, "arXiv_link": "https://arxiv.org/abs/2403.16997", "other_link": "https://github.com/OmkarThawakar/composed-video-retrieval}", "title": "Composed Video Retrieval via Enriched Context and Discriminative Embeddings", "abstract": "Composed video retrieval (CoVR) is a challenging problem in computer vision\nwhich has recently highlighted the integration of modification text with visual\nqueries for more sophisticated video search in large databases. Existing works\npredominantly rely on visual queries combined with modification text to\ndistinguish relevant videos. However, such a strategy struggles to fully\npreserve the rich query-specific context in retrieved target videos and only\nrepresents the target video using visual embedding. We introduce a novel CoVR\nframework that leverages detailed language descriptions to explicitly encode\nquery-specific contextual information and learns discriminative embeddings of\nvision only, text only and vision-text for better alignment to accurately\nretrieve matched target videos. Our proposed framework can be flexibly employed\nfor both composed video (CoVR) and image (CoIR) retrieval tasks. Experiments on\nthree datasets show that our approach obtains state-of-the-art performance for\nboth CovR and zero-shot CoIR tasks, achieving gains as high as around 7% in\nterms of recall@K=1 score. Our code, models, detailed language descriptions for\nWebViD-CoVR dataset are available at\n\\url{https://github.com/OmkarThawakar/composed-video-retrieval}", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Omkar Thawakar", "Muzammal Naseer", "Rao Anwer", "Salman Khan", "Michael Felsberg", "Mubarak Shah", "Fahad Shahbaz Khan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4b1"}, "filepath": "data/2401.06129.png", "tags": [], "_media_type": "image", "_rand": 0.9992889658156684, "arXiv_link": "https://arxiv.org/abs/2401.06129", "other_link": "", "title": "Distilling Vision-Language Models on Millions of Videos", "abstract": "The recent advance in vision-language models is largely attributed to the\nabundance of image-text data. We aim to replicate this success for\nvideo-language models, but there simply is not enough human-curated video-text\ndata available. We thus resort to fine-tuning a video-language model from a\nstrong image-language baseline with synthesized instructional data. The\nresulting video model by video-instruction-tuning (VIIT) is then used to\nauto-label millions of videos to generate high-quality captions. 
We show the\nadapted video-language model performs well on a wide range of video-language\nbenchmarks. For instance, it surpasses the best prior result on open-ended\nNExT-QA by 2.8%. Besides, our model generates detailed descriptions for\npreviously unseen videos, which provide better textual supervision than\nexisting methods. Experiments show that a video-language dual-encoder model\ncontrastively trained on these auto-generated captions is 3.8% better than the\nstrongest baseline that also leverages vision-language models. Our best model\noutperforms state-of-the-art methods on MSR-VTT zero-shot text-to-video\nretrieval by 6%. As a side product, we generate the largest video caption\ndataset to date.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Yue Zhao", "Long Zhao", "Xingyi Zhou", "Jialin Wu", "Chun-Te Chu", "Hui Miao", "Florian Schroff", "Hartwig Adam", "Ting Liu", "Boqing Gong", "Philipp Kr\u00e4henb\u00fchl", "Liangzhe Yuan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4b2"}, "filepath": "data/2405.20161v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992216079750602, "arXiv_link": "https://arxiv.org/html/2405.20161v1", "other_link": "", "title": "Multi-modal learning for geospatial vegetation forecasting", "abstract": "Landslides are one of the most critical and destructive geohazards.\nWidespread development of human activities and settlements combined with the\neffects of climate change on weather are resulting in a high increase in the\nfrequency and destructive power of landslides, making them a major threat to\nhuman life and the economy. In this paper, we explore methodologies to map\nnewly-occurred landslides using Sentinel-2 imagery automatically. All\napproaches presented are framed as a bi-temporal change detection problem,\nrequiring only a pair of Sentinel-2 images, taken respectively before and after\na landslide-triggering event. Furthermore, we introduce a novel deep learning\narchitecture for fusing Sentinel-2 bi-temporal image pairs with Digital\nElevation Model (DEM) data, showcasing its promising performances w.r.t. other\nchange detection models in the literature. As a parallel task, we address\nlimitations in existing datasets by creating a novel geodatabase, which\nincludes manually validated open-access landslide inventories over\nheterogeneous ecoregions of the world. 
We release both code and dataset with an\nopen-source license.", "keywords": ["Remote sensing and photogrammetry"], "authors_list": ["Vitus Benson", "Claire Robin", "Christian Requena-Mesa", "LAZARO ALONSO SILVA", "M\u00e9lanie Weynants", "Nora Linscheid", "Jose Cortes", "Zhihan Gao", "Nuno Carvalhais", "Markus Reichstein"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4b3"}, "filepath": "data/2308.00692.png", "tags": [], "_media_type": "image", "_rand": 0.999594310121755, "arXiv_link": "https://arxiv.org/abs/2308.00692", "other_link": "https://github.com/dvlab-research/LISA.", "title": "LISA: Reasoning Segmentation via Large Language Model", "abstract": "Although perception systems have made remarkable advancements in recent\nyears, they still rely on explicit human instruction or pre-defined categories\nto identify the target objects before executing visual recognition tasks. Such\nsystems cannot actively reason and comprehend implicit user intention. In this\nwork, we propose a new segmentation task -- reasoning segmentation. The task is\ndesigned to output a segmentation mask given a complex and implicit query text.\nFurthermore, we establish a benchmark comprising over one thousand\nimage-instruction-mask data samples, incorporating intricate reasoning and\nworld knowledge for evaluation purposes. Finally, we present LISA: large\nLanguage Instructed Segmentation Assistant, which inherits the language\ngeneration capabilities of multimodal Large Language Models (LLMs) while also\npossessing the ability to produce segmentation masks. We expand the original\nvocabulary with a token and propose the embedding-as-mask paradigm to\nunlock the segmentation capability. Remarkably, LISA can handle cases involving\ncomplex reasoning and world knowledge. Also, it demonstrates robust zero-shot\ncapability when trained exclusively on reasoning-free datasets. In addition,\nfine-tuning the model with merely 239 reasoning segmentation data samples\nresults in further performance enhancement. Both quantitative and qualitative\nexperiments show our method effectively unlocks new reasoning segmentation\ncapabilities for multimodal LLMs. Code, models, and data are available at\nhttps://github.com/dvlab-research/LISA.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Xin Lai", "Zhuotao Tian", "Yukang Chen", "Yanwei Li", "Yuhui Yuan", "Shu Liu", "Jiaya Jia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4b4"}, "filepath": "data/2306.09551.png", "tags": [], "_media_type": "image", "_rand": 0.9996731410168896, "arXiv_link": "https://arxiv.org/abs/2306.09551", "other_link": "", "title": "Instruct 4D-to-4D: Editing 4D Scenes as Pseudo-3D Scenes Using 2D Diffusion", "abstract": "Recent research has demonstrated that the combination of pretrained diffusion\nmodels with neural radiance fields (NeRFs) has emerged as a promising approach\nfor text-to-3D generation. 
Simply coupling NeRF with diffusion models will\nresult in cross-view inconsistency and degradation of stylized view syntheses.\nTo address this challenge, we propose the Edit-DiffNeRF framework, which is\ncomposed of a frozen diffusion model, a proposed delta module to edit the\nlatent semantic space of the diffusion model, and a NeRF. Instead of training\nthe entire diffusion for each scene, our method focuses on editing the latent\nsemantic space in frozen pretrained diffusion models by the delta module. This\nfundamental change to the standard diffusion framework enables us to make\nfine-grained modifications to the rendered views and effectively consolidate\nthese instructions in a 3D scene via NeRF training. As a result, we are able to\nproduce an edited 3D scene that faithfully aligns to input text instructions.\nFurthermore, to ensure semantic consistency across different viewpoints, we\npropose a novel multi-view semantic consistency loss that extracts a latent\nsemantic embedding from the input view as a prior, and aim to reconstruct it in\ndifferent views. Our proposed method has been shown to effectively edit\nreal-world 3D scenes, resulting in 25% improvement in the alignment of the\nperformed 3D edits with text instructions compared to prior work.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Linzhan Mou", "Junkun Chen", "Yu-Xiong Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4b5"}, "filepath": "data/2312.00863.png", "tags": [], "_media_type": "image", "_rand": 0.9991944056817864, "arXiv_link": "https://arxiv.org/abs/2312.00863", "other_link": "", "title": "EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything", "abstract": "Segment Anything Model (SAM) has emerged as a powerful tool for numerous\nvision applications. A key component that drives the impressive performance for\nzero-shot transfer and high versatility is a super large Transformer model\ntrained on the extensive high-quality SA-1B dataset. While beneficial, the huge\ncomputation cost of SAM model has limited its applications to wider real-world\napplications. To address this limitation, we propose EfficientSAMs,\nlight-weight SAM models that exhibits decent performance with largely reduced\ncomplexity. Our idea is based on leveraging masked image pretraining, SAMI,\nwhich learns to reconstruct features from SAM image encoder for effective\nvisual representation learning. Further, we take SAMI-pretrained light-weight\nimage encoders and mask decoder to build EfficientSAMs, and finetune the models\non SA-1B for segment anything task. We perform evaluations on multiple vision\ntasks including image classification, object detection, instance segmentation,\nand semantic object detection, and find that our proposed pretraining method,\nSAMI, consistently outperforms other masked image pretraining methods. 
On\nsegment anything task such as zero-shot instance segmentation, our\nEfficientSAMs with SAMI-pretrained lightweight image encoders perform favorably\nwith a significant gain (e.g., ~4 AP on COCO/LVIS) over other fast SAM models.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yunyang Xiong", "Balakrishnan Varadarajan", "Lemeng Wu", "Xiaoyu Xiang", "Fanyi Xiao", "Chenchen Zhu", "Xiaoliang Dai", "Dilin Wang", "Fei Sun", "Forrest Iandola", "Raghuraman Krishnamoorthi", "Vikas Chandra"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4b6"}, "filepath": "data/2308.02897.png", "tags": [], "_media_type": "image", "_rand": 0.9995111467073088, "arXiv_link": "https://arxiv.org/abs/2308.02897", "other_link": "", "title": "Strong Transferable Adversarial Attacks via Ensembled Asymptotically Normal Distribution Learning", "abstract": "While the transferability property of adversarial examples allows the\nadversary to perform black-box attacks (i.e., the attacker has no knowledge\nabout the target model), the transfer-based adversarial attacks have gained\ngreat attention. Previous works mostly study gradient variation or image\ntransformations to amplify the distortion on critical parts of inputs. These\nmethods can work on transferring across models with limited differences, i.e.,\nfrom CNNs to CNNs, but always fail in transferring across models with wide\ndifferences, such as from CNNs to ViTs. Alternatively, model ensemble\nadversarial attacks are proposed to fuse outputs from surrogate models with\ndiverse architectures to get an ensemble loss, making the generated adversarial\nexample more likely to transfer to other models as it can fool multiple models\nconcurrently. However, existing ensemble attacks simply fuse the outputs of the\nsurrogate models evenly, thus are not efficacious to capture and amplify the\nintrinsic transfer information of adversarial examples. In this paper, we\npropose an adaptive ensemble attack, dubbed AdaEA, to adaptively control the\nfusion of the outputs from each model, via monitoring the discrepancy ratio of\ntheir contributions towards the adversarial objective. Furthermore, an extra\ndisparity-reduced filter is introduced to further synchronize the update\ndirection. As a result, we achieve considerable improvement over the existing\nensemble attacks on various datasets, and the proposed AdaEA can also boost\nexisting transfer-based attacks, which further demonstrates its efficacy and\nversatility.", "keywords": [], "authors_list": ["Zhengwei Fang", "Rui Wang", "Tao Huang", "Liping Jing"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4b7"}, "filepath": "data/2312.03628.png", "tags": [], "_media_type": "image", "_rand": 0.9994422169862124, "arXiv_link": "https://arxiv.org/abs/2312.03628", "other_link": "", "title": "AlignSAM: Aligning Segment Anything Model to Open Context via Reinforcement Learning", "abstract": "The recent Segment Anything Model (SAM) has emerged as a new paradigmatic\nvision foundation model, showcasing potent zero-shot generalization and\nflexible prompting. 
Despite SAM finding applications and adaptations in various\ndomains, its primary limitation lies in the inability to grasp object\nsemantics. In this paper, we present Sambor to seamlessly integrate SAM with\nthe open-vocabulary object detector in an end-to-end framework. While retaining\nall the remarkable capabilities inherent to SAM, we enhance it with the\ncapacity to detect arbitrary objects based on human inputs like category names\nor reference expressions. To accomplish this, we introduce a novel SideFormer\nmodule that extracts SAM features to facilitate zero-shot object localization\nand inject comprehensive semantic information for open-vocabulary recognition.\nIn addition, we devise an open-set region proposal network (Open-set RPN),\nenabling the detector to acquire the open-set proposals generated by SAM.\nSambor demonstrates superior zero-shot performance across benchmarks, including\nCOCO and LVIS, proving highly competitive against previous SoTA methods. We\naspire for this work to serve as a meaningful endeavor in endowing SAM to\nrecognize diverse object categories and advancing open-vocabulary learning with\nthe support of vision foundation models.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Duojun Huang", "Xinyu Xiong", "Jie Ma", "Jichang Li", "Zequn Jie", "Lin Ma", "Guanbin Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4b8"}, "filepath": "data/2404.10193.png", "tags": [], "_media_type": "image", "_rand": 0.9990707980197738, "arXiv_link": "https://arxiv.org/abs/2404.10193", "other_link": "", "title": "Consistency and Uncertainty: Identifying Unreliable Responses From Black-Box Vision-Language Models for Selective Visual Question Answering", "abstract": "The goal of selective prediction is to allow a model to abstain when it\nmay not be able to deliver a reliable prediction, which is important in\nsafety-critical contexts. Existing approaches to selective prediction typically\nrequire access to the internals of a model, require retraining a model, or study\nonly unimodal models. However, the most powerful models (e.g. GPT-4) are\ntypically only available as black boxes with inaccessible internals, are not\nretrainable by end-users, and are frequently used for multimodal tasks. We\nstudy the possibility of selective prediction for vision-language models in a\nrealistic, black-box setting. We propose using the principle of\n\\textit{neighborhood consistency} to identify unreliable responses from a\nblack-box vision-language model in question answering tasks. We hypothesize\nthat given only a visual question and model response, the consistency of the\nmodel's responses over the neighborhood of a visual question will indicate\nreliability. It is impossible to directly sample neighbors in feature space in\na black-box setting. Instead, we show that it is possible to use a smaller\nproxy model to approximately sample from the neighborhood. 
We find that\nneighborhood consistency can be used to identify model responses to visual\nquestions that are likely unreliable, even in adversarial settings or settings\nthat are out-of-distribution to the proxy model.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Zaid Khan", "Yun Fu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4b9"}, "filepath": "data/2405.19005.png", "tags": [], "_media_type": "image", "_rand": 0.9994916116390054, "arXiv_link": "https://arxiv.org/abs/2405.19005", "other_link": "", "title": "Distribution-aware Knowledge Prototyping for Non-exemplar Lifelong Person Re-identification", "abstract": "Lifelong Person Re-Identification (LReID) extends traditional ReID by\nrequiring systems to continually learn from non-overlapping datasets across\ndifferent times and locations, adapting to new identities while preserving\nknowledge of previous ones. Existing approaches, either rehearsal-free or\nrehearsal-based, still suffer from the problem of catastrophic forgetting since\nthey try to cram diverse knowledge into one fixed model. To overcome this\nlimitation, we introduce a novel framework, AdalReID, which adopts knowledge\nadapters and a parameter-free auto-selection mechanism for lifelong learning.\nConcretely, we incrementally build distinct adapters to learn domain-specific\nknowledge at each step, which can effectively learn and preserve knowledge\nacross different datasets. Meanwhile, the proposed auto-selection strategy\nadaptively calculates the knowledge similarity between the input set and the\nadapters. On the one hand, the appropriate adapters are selected for the inputs\nto process ReID, and on the other hand, the knowledge interaction and fusion\nbetween adapters are enhanced to improve the generalization ability of the\nmodel. Extensive experiments are conducted to demonstrate the superiority of\nour AdalReID, which significantly outperforms SOTAs by about 10$\\sim$20\\% mAP\non both seen and unseen domains.", "keywords": [], "authors_list": ["Kunlun Xu", "Xu Zou", "Yuxin Peng", "Jiahuan Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4ba"}, "filepath": "data/2311.14897.png", "tags": [], "_media_type": "image", "_rand": 0.9991411397596726, "arXiv_link": "https://arxiv.org/abs/2311.14897", "other_link": "https://github.com/Chopper-233/Anomaly-ShapeNet", "title": "Looking 3D: Anomaly Detection with 2D-3D Alignment", "abstract": "Recently, 3D anomaly detection, a crucial problem involving fine-grained\ngeometry discrimination, is getting more attention. However, the lack of\nabundant real 3D anomaly data limits the scalability of current models. To\nenable scalable anomaly data collection, we propose a 3D anomaly synthesis\npipeline to adapt existing large-scale 3D models for 3D anomaly detection.\nSpecifically, we construct a synthetic dataset, i.e., Anomaly-ShapeNet, based on\nShapeNet. 
Anomaly-ShapeNet consists of 1600 point cloud samples under 40\ncategories, which provides a rich and varied collection of data, enabling\nefficient training and enhancing adaptability to industrial scenarios.\nMeanwhile, to enable scalable representation learning for 3D anomaly\nlocalization, we propose a self-supervised method, i.e., Iterative Mask\nReconstruction Network (IMRNet). During training, we propose a geometry-aware\nsample module to preserve potentially anomalous local regions during point\ncloud down-sampling. Then, we randomly mask out point patches and send the\nvisible patches to a transformer for reconstruction-based self-supervision.\nDuring testing, the point cloud repeatedly goes through the Mask Reconstruction\nNetwork, with each iteration's output becoming the next input. By merging and\ncontrasting the final reconstructed point cloud with the initial input, our\nmethod successfully locates anomalies. Experiments show that IMRNet outperforms\nprevious state-of-the-art methods, achieving 66.1% I-AUC on the Anomaly-ShapeNet\ndataset and 72.5% I-AUC on the Real3D-AD dataset. Our dataset will be released\nat https://github.com/Chopper-233/Anomaly-ShapeNet", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Ankan Kumar Bhunia", "Changjian Li", "Hakan Bilen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4bb"}, "filepath": "data/2401.10891.png", "tags": [], "_media_type": "image", "_rand": 0.999090628333838, "arXiv_link": "https://arxiv.org/abs/2401.10891", "other_link": "https://github.com/LiheYoung/Depth-Anything.", "title": "Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data", "abstract": "This work presents Depth Anything, a highly practical solution for robust\nmonocular depth estimation. Without pursuing novel technical modules, we aim to\nbuild a simple yet powerful foundation model dealing with any images under any\ncircumstances. To this end, we scale up the dataset by designing a data engine\nto collect and automatically annotate large-scale unlabeled data (~62M), which\nsignificantly enlarges the data coverage and thus is able to reduce the\ngeneralization error. We investigate two simple yet effective strategies that\nmake data scaling-up promising. First, a more challenging optimization target\nis created by leveraging data augmentation tools. It compels the model to\nactively seek extra visual knowledge and acquire robust representations.\nSecond, an auxiliary supervision is developed to enforce the model to inherit\nrich semantic priors from pre-trained encoders. We evaluate its zero-shot\ncapabilities extensively, including six public datasets and randomly captured\nphotos. It demonstrates impressive generalization ability. Further, through\nfine-tuning it with metric depth information from NYUv2 and KITTI, new SOTAs\nare set. Our better depth model also results in a better depth-conditioned\nControlNet. 
Our models are released at\nhttps://github.com/LiheYoung/Depth-Anything.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Lihe Yang", "Bingyi Kang", "Zilong Huang", "Xiaogang Xu", "Jiashi Feng", "Hengshuang Zhao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4bc"}, "filepath": "data/2312.02976.png", "tags": [], "_media_type": "image", "_rand": 0.9999107084914048, "arXiv_link": "https://arxiv.org/abs/2312.02976", "other_link": "", "title": "SPOC: Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World", "abstract": "Reinforcement learning (RL) with dense rewards and imitation learning (IL)\nwith human-generated trajectories are the most widely used approaches for\ntraining modern embodied agents. RL requires extensive reward shaping and\nauxiliary losses and is often too slow and ineffective for long-horizon tasks.\nWhile IL with human supervision is effective, collecting human trajectories at\nscale is extremely expensive. In this work, we show that imitating\nshortest-path planners in simulation produces agents that, given a language\ninstruction, can proficiently navigate, explore, and manipulate objects in both\nsimulation and in the real world using only RGB sensors (no depth map or GPS\ncoordinates). This surprising result is enabled by our end-to-end,\ntransformer-based, SPOC architecture, powerful visual encoders paired with\nextensive image augmentation, and the dramatic scale and diversity of our\ntraining data: millions of frames of shortest-path-expert trajectories\ncollected inside approximately 200,000 procedurally generated houses containing\n40,000 unique 3D assets. Our models, data, training code, and newly proposed\n10-task benchmarking suite CHORES will be open-sourced.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Kiana Ehsani", "Tanmay Gupta", "Rose Hendrix", "Jordi Salvador", "Luca Weihs", "Kuo-Hao Zeng", "Kunal Singh Singh", "Yejin Kim", "Winson Han", "Alvaro Herrasti", "Ranjay Krishna", "Dustin Schwenk", "Eli VanderBilt", "Aniruddha Kembhavi"], "category_name": "Robotics", "all_categories": ["Robotics", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4bd"}, "filepath": "data/2404.01243.png", "tags": [], "_media_type": "image", "_rand": 0.9994425831962872, "arXiv_link": "https://arxiv.org/abs/2404.01243", "other_link": "", "title": "A Unified and Interpretable Emotion Representation and Expression Generation", "abstract": "Canonical emotions, such as happy, sad, and fearful, are easy to understand\nand annotate. However, emotions are often compound, e.g. happily surprised, and\ncan be mapped to the action units (AUs) used for expressing emotions, and\ntrivially to the canonical ones. Intuitively, emotions are continuous as\nrepresented by the arousal-valence (AV) model. An interpretable unification of\nthese four modalities - namely, Canonical, Compound, AUs, and AV - is highly\ndesirable, for a better representation and understanding of emotions. However,\nsuch unification remains to be unknown in the current literature. In this work,\nwe propose an interpretable and unified emotion model, referred as C2A2. 
We\nalso develop a method that leverages labels of the non-unified models to\nannotate the novel unified one. Finally, we modify the text-conditional\ndiffusion models to understand continuous numbers, which are then used to\ngenerate continuous expressions using our unified emotion model. Through\nquantitative and qualitative experiments, we show that our generated images are\nrich and capture subtle expressions. Our work allows a fine-grained generation\nof expressions in conjunction with other textual inputs and offers a new label\nspace for emotions at the same time.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Reni Paskaleva", "Mykyta Holubakha", "Andela Ilic", "Saman Motamed", "Luc Van Gool", "Danda Paudel"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4be"}, "filepath": "data/2311.02072.png", "tags": [], "_media_type": "image", "_rand": 0.9993696656945508, "arXiv_link": "https://arxiv.org/abs/2311.02072", "other_link": "https://github.com/WenRuiCai/HIPTrack.", "title": "HIPTrack: Visual Tracking with Historical Prompts", "abstract": "Trackers that follow Siamese paradigm utilize similarity matching between\ntemplate and search region features for tracking. Many methods have been\nexplored to enhance tracking performance by incorporating tracking history to\nbetter handle scenarios involving target appearance variations such as\ndeformation and occlusion. However, the utilization of historical information\nin existing methods is insufficient and incomprehensive, which typically\nrequires repetitive training and introduces a large amount of computation. In\nthis paper, we show that by providing a tracker that follows Siamese paradigm\nwith precise and updated historical information, a significant performance\nimprovement can be achieved with completely unchanged parameters. Based on\nthis, we propose a historical prompt network that uses refined historical\nforeground masks and historical visual features of the target to provide\ncomprehensive and precise prompts for the tracker. We build a novel tracker\ncalled HIPTrack based on the historical prompt network, which achieves\nconsiderable performance improvements without the need to retrain the entire\nmodel. We conduct experiments on seven datasets and experimental results\ndemonstrate that our method surpasses the current state-of-the-art trackers on\nLaSOT, LaSOText, GOT-10k and NfS. Furthermore, the historical prompt network\ncan seamlessly integrate as a plug-and-play module into existing trackers,\nproviding performance enhancements. 
The source code is available at\nhttps://github.com/WenRuiCai/HIPTrack.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Wenrui Cai", "Qingjie Liu", "Yunhong Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4bf"}, "filepath": "data/2403.03740.png", "tags": [], "_media_type": "image", "_rand": 0.9996862164917417, "arXiv_link": "https://arxiv.org/abs/2403.03740", "other_link": "", "title": "Self-supervised Representation Learning from Arbitrary Scenarios", "abstract": "In the domain of image layout representation learning, the critical process\nof translating image layouts into succinct vector forms is increasingly\nsignificant across diverse applications, such as image retrieval, manipulation,\nand generation. Most approaches in this area heavily rely on costly labeled\ndatasets and notably lack in adapting their modeling and learning methods to\nthe specific nuances of photographic image layouts. This shortfall makes the\nlearning process for photographic image layouts suboptimal. In our research, we\ndirectly address these challenges. We innovate by defining basic layout\nprimitives that encapsulate various levels of layout information and by mapping\nthese, along with their interconnections, onto a heterogeneous graph structure.\nThis graph is meticulously engineered to capture the intricate layout\ninformation within the pixel domain explicitly. Advancing further, we introduce\nnovel pretext tasks coupled with customized loss functions, strategically\ndesigned for effective self-supervised learning of these layout graphs.\nBuilding on this foundation, we develop an autoencoder-based network\narchitecture skilled in compressing these heterogeneous layout graphs into\nprecise, dimensionally-reduced layout representations. Additionally, we\nintroduce the LODB dataset, which features a broader range of layout categories\nand richer semantics, serving as a comprehensive benchmark for evaluating the\neffectiveness of layout representation learning methods. Our extensive\nexperimentation on this dataset demonstrates the superior performance of our\napproach in the realm of photographic image layout representation learning.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Zhaowen Li", "Yousong Zhu", "Zhiyang Chen", "Zongxin Gao", "Rui Zhao", "Chaoyang Zhao", "Ming Tang", "Jinqiao Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4c0"}, "filepath": "data/2404.19752.png", "tags": [], "_media_type": "image", "_rand": 0.9999895787831401, "arXiv_link": "https://arxiv.org/abs/2404.19752", "other_link": "", "title": "Visual Fact Checker: Enabling High-Fidelity Detailed Caption Generation", "abstract": "Existing automatic captioning methods for visual content face challenges such\nas lack of detail, content hallucination, and poor instruction following. In\nthis work, we propose VisualFactChecker (VFC), a flexible training-free\npipeline that generates high-fidelity and detailed captions for both 2D images\nand 3D objects. 
VFC consists of three steps: 1) proposal, where image-to-text\ncaptioning models propose multiple initial captions; 2) verification, where a\nlarge language model (LLM) utilizes tools such as object detection and VQA\nmodels to fact-check proposed captions; 3) captioning, where an LLM generates\nthe final caption by summarizing caption proposals and the fact check\nverification results. In this step, VFC can flexibly generate captions in\nvarious styles following complex instructions. We conduct comprehensive\ncaptioning evaluations using four metrics: 1) CLIP-Score for image-text\nsimilarity; 2) CLIP-Image-Score for measuring the image-image similarity\nbetween the original and the reconstructed image generated by a text-to-image\nmodel using the caption. 3) human study on Amazon Mechanical Turk; 4) GPT-4V\nfor fine-grained evaluation. Evaluation results show that VFC outperforms\nstate-of-the-art open-sourced captioning methods for 2D images on the COCO\ndataset and 3D assets on the Objaverse dataset. Our study demonstrates that by\ncombining open-source models into a pipeline, we can attain captioning\ncapability comparable to proprietary models such as GPT-4V, despite being over\n10x smaller in model size.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Yunhao Ge", "Xiaohui Zeng", "Jacob Huffman", "Tsung-Yi Lin", "Ming-Yu Liu", "Yin Cui"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4c1"}, "filepath": "data/2312.08568.png", "tags": [], "_media_type": "image", "_rand": 0.9992171678332447, "arXiv_link": "https://arxiv.org/abs/2312.08568", "other_link": "https://wbjang.github.io/nvist_webpage.", "title": "NViST: In the Wild New View Synthesis from a Single Image with Transformers", "abstract": "We propose NViST, a transformer-based model for efficient and generalizable\nnovel-view synthesis from a single image for real-world scenes. In contrast to\nmany methods that are trained on synthetic data, object-centred scenarios, or\nin a category-specific manner, NViST is trained on MVImgNet, a large-scale\ndataset of casually-captured real-world videos of hundreds of object categories\nwith diverse backgrounds. NViST transforms image inputs directly into a\nradiance field, conditioned on camera parameters via adaptive layer\nnormalisation. In practice, NViST exploits fine-tuned masked autoencoder (MAE)\nfeatures and translates them to 3D output tokens via cross-attention, while\naddressing occlusions with self-attention. To move away from object-centred\ndatasets and enable full scene synthesis, NViST adopts a 6-DOF camera pose\nmodel and only requires relative pose, dropping the need for canonicalization\nof the training data, which removes a substantial barrier to it being used on\ncasually captured datasets. We show results on unseen objects and categories\nfrom MVImgNet and even generalization to casual phone captures. We conduct\nqualitative and quantitative evaluations on MVImgNet and ShapeNet to show that\nour model represents a step forward towards enabling true in-the-wild\ngeneralizable novel-view synthesis from a single image. 
Project webpage:\nhttps://wbjang.github.io/nvist_webpage.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Wonbong Jang", "Lourdes Agapito"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4c2"}, "filepath": "data/2311.17009.png", "tags": [], "_media_type": "image", "_rand": 0.9996368075772271, "arXiv_link": "https://arxiv.org/abs/2311.17009", "other_link": "", "title": "Space-time Diffusion Features for Zero-shot Text-driven Motion Transfer", "abstract": "We present a new method for text-driven motion transfer - synthesizing a\nvideo that complies with an input text prompt describing the target objects and\nscene while maintaining an input video's motion and scene layout. Prior methods\nare confined to transferring motion across two subjects within the same or\nclosely related object categories and are applicable for limited domains (e.g.,\nhumans). In this work, we consider a significantly more challenging setting in\nwhich the target and source objects differ drastically in shape and\nfine-grained motion characteristics (e.g., translating a jumping dog into a\ndolphin). To this end, we leverage a pre-trained and fixed text-to-video\ndiffusion model, which provides us with generative and motion priors. The\npillar of our method is a new space-time feature loss derived directly from the\nmodel. This loss guides the generation process to preserve the overall motion\nof the input video while complying with the target object in terms of shape and\nfine-grained motion traits.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Rafail Fridman", "Danah Yatim", "Omer Bar-Tal", "Yoni Kasten", "Tali Dekel"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4c3"}, "filepath": "data/2312.07504.png", "tags": [], "_media_type": "image", "_rand": 0.9998744566583805, "arXiv_link": "https://arxiv.org/abs/2312.07504", "other_link": "https://oasisyang.github.io/colmap-free-3dgs", "title": "COLMAP-Free 3D Gaussian Splatting", "abstract": "While neural rendering has led to impressive advances in scene reconstruction\nand novel view synthesis, it relies heavily on accurately pre-computed camera\nposes. To relax this constraint, multiple efforts have been made to train\nNeural Radiance Fields (NeRFs) without pre-processed camera poses. However, the\nimplicit representations of NeRFs provide extra challenges to optimize the 3D\nstructure and camera poses at the same time. On the other hand, the recently\nproposed 3D Gaussian Splatting provides new opportunities given its explicit\npoint cloud representations. This paper leverages both the explicit geometric\nrepresentation and the continuity of the input video stream to perform novel\nview synthesis without any SfM preprocessing. We process the input frames in a\nsequential manner and progressively grow the 3D Gaussians set by taking one\ninput frame at a time, without the need to pre-compute the camera poses. Our\nmethod significantly improves over previous approaches in view synthesis and\ncamera pose estimation under large motion changes. 
Our project page is\nhttps://oasisyang.github.io/colmap-free-3dgs", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Yang Fu", "Sifei Liu", "Amey Kulkarni", "Jan Kautz", "Alexei A. Efros", "Xiaolong Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4c4"}, "filepath": "data/2404.07177v1.png", "tags": [], "_media_type": "image", "_rand": 0.99912454338015, "arXiv_link": "https://arxiv.org/abs/2404.07177v1", "other_link": "https://github.com/locuslab/scaling_laws_data_filtering.", "title": "Scaling Laws for Data Filtering: Data Curation cannot be Compute Agnostic", "abstract": "Vision-language models (VLMs) are trained for thousands of GPU hours on\ncarefully curated web datasets. In recent times, data curation has gained\nprominence with several works developing strategies to retain 'high-quality'\nsubsets of 'raw' scraped data. For instance, the LAION public dataset retained\nonly 10% of the total crawled data. However, these strategies are typically\ndeveloped agnostic of the available compute for training. In this paper, we\nfirst demonstrate that making filtering decisions independent of training\ncompute is often suboptimal: the limited high-quality data rapidly loses its\nutility when repeated, eventually requiring the inclusion of 'unseen' but\n'lower-quality' data. To address this quality-quantity tradeoff\n($\\texttt{QQT}$), we introduce neural scaling laws that account for the\nnon-homogeneous nature of web data, an angle ignored in existing literature.\nOur scaling laws (i) characterize the $\\textit{differing}$ 'utility' of various\nquality subsets of web data; (ii) account for how utility diminishes for a data\npoint at its 'nth' repetition; and (iii) formulate the mutual interaction of\nvarious data pools when combined, enabling the estimation of model performance\non a combination of multiple data pools without ever jointly training on them.\nOur key message is that data curation $\\textit{cannot}$ be agnostic of the\ntotal compute that a model will be trained for. Our scaling laws allow us to\ncurate the best possible pool for achieving top performance on Datacomp at\nvarious compute budgets, carving out a pareto-frontier for data curation. Code\nis available at https://github.com/locuslab/scaling_laws_data_filtering.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Sachin Goyal", "Pratyush Maini", "Zachary Lipton", "Aditi Raghunathan", "Zico Kolter"], "category_name": "Machine Learning", "all_categories": ["Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4c5"}, "filepath": "data/2311.17977.png", "tags": [], "_media_type": "image", "_rand": 0.9997396343662968, "arXiv_link": "https://arxiv.org/abs/2311.17977", "other_link": "", "title": "GaussianShader: 3D Gaussian Splatting with Shading Functions for Reflective Surfaces", "abstract": "The advent of neural 3D Gaussians has recently brought about a revolution in\nthe field of neural rendering, facilitating the generation of high-quality\nrenderings at real-time speeds. However, the explicit and discrete\nrepresentation encounters challenges when applied to scenes featuring\nreflective surfaces. 
In this paper, we present GaussianShader, a novel method\nthat applies a simplified shading function on 3D Gaussians to enhance the\nneural rendering in scenes with reflective surfaces while preserving the\ntraining and rendering efficiency. The main challenge in applying the shading\nfunction lies in the accurate normal estimation on discrete 3D Gaussians.\nSpecifically, we propose a novel normal estimation framework based on the\nshortest axis directions of 3D Gaussians with a delicately designed loss to\nmake the consistency between the normals and the geometries of Gaussian\nspheres. Experiments show that GaussianShader strikes a commendable balance\nbetween efficiency and visual quality. Our method surpasses Gaussian Splatting\nin PSNR on specular object datasets, exhibiting an improvement of 1.57dB. When\ncompared to prior works handling reflective surfaces, such as Ref-NeRF, our\noptimization time is significantly accelerated (23h vs. 0.58h). Please click on\nour project website to see more results.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yingwenqi Jiang", "Jiadong Tu", "Yuan Liu", "Xifeng Gao", "Xiaoxiao Long", "Wenping Wang", "Yuexin Ma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4c6"}, "filepath": "data/2312.02136.png", "tags": [], "_media_type": "image", "_rand": 0.9994347566008964, "arXiv_link": "https://arxiv.org/abs/2312.02136", "other_link": "https://zqh0253.github.io/BerfScene/.", "title": "BerfScene: Bev-conditioned Equivariant Radiance Fields for Infinite 3D Scene Generation", "abstract": "Generating large-scale 3D scenes cannot simply apply existing 3D object\nsynthesis technique since 3D scenes usually hold complex spatial configurations\nand consist of a number of objects at varying scales. We thus propose a\npractical and efficient 3D representation that incorporates an equivariant\nradiance field with the guidance of a bird's-eye view (BEV) map. Concretely,\nobjects of synthesized 3D scenes could be easily manipulated through steering\nthe corresponding BEV maps. Moreover, by adequately incorporating positional\nencoding and low-pass filters into the generator, the representation becomes\nequivariant to the given BEV map. Such equivariance allows us to produce\nlarge-scale, even infinite-scale, 3D scenes via synthesizing local scenes and\nthen stitching them with smooth consistency. Extensive experiments on 3D scene\ndatasets demonstrate the effectiveness of our approach. Our project website is\nat https://zqh0253.github.io/BerfScene/.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Qihang Zhang", "Yinghao Xu", "Yujun Shen", "Bo Dai", "Bolei Zhou", "Ceyuan Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4c7"}, "filepath": "data/2403.12728.png", "tags": [], "_media_type": "image", "_rand": 0.9990670134792285, "arXiv_link": "https://arxiv.org/abs/2403.12728", "other_link": "", "title": "L4D-Track: Language-to-4D Modeling Towards 6-DoF Tracking and Shape Reconstruction in 3D Point Cloud Stream", "abstract": "Fully-supervised category-level pose estimation aims to determine the 6-DoF\nposes of unseen instances from known categories, requiring expensive manual\nlabeling costs. 
Recently, various self-supervised category-level pose\nestimation methods have been proposed to reduce the requirement of the\nannotated datasets. However, most methods rely on synthetic data or 3D CAD\nmodel for self-supervised training, and they are typically limited to\naddressing single-object pose problems without considering multi-objective\ntasks or shape reconstruction. To overcome these challenges and limitations, we\nintroduce a diffusion-driven self-supervised network for multi-object shape\nreconstruction and categorical pose estimation, only leveraging the shape\npriors. Specifically, to capture the SE(3)-equivariant pose features and 3D\nscale-invariant shape information, we present a Prior-Aware Pyramid 3D Point\nTransformer in our network. This module adopts a point convolutional layer with\nradial-kernels for pose-aware learning and a 3D scale-invariant graph\nconvolution layer for object-level shape representation, respectively.\nFurthermore, we introduce a pretrain-to-refine self-supervised training\nparadigm to train our network. It enables the proposed network to capture the\nassociations between shape priors and observations, addressing the challenge of\nintra-class shape variations by utilising the diffusion mechanism. Extensive\nexperiments conducted on four public datasets and a self-built dataset\ndemonstrate that our method significantly outperforms state-of-the-art\nself-supervised category-level baselines and even surpasses some\nfully-supervised instance-level and category-level methods.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Jingtao Sun", "Yaonan Wang", "Mingtao Feng", "Yulan Guo", "Ajmal Mian", "Mike Zheng Shou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4c8"}, "filepath": "data/2310.12790.png", "tags": [], "_media_type": "image", "_rand": 0.9990340962081136, "arXiv_link": "https://arxiv.org/abs/2310.12790", "other_link": "https://github.com/mala-lab/AHL.", "title": "Anomaly Heterogeneity Learning for Open-set Supervised Anomaly Detection", "abstract": "Open-set supervised anomaly detection (OSAD) - a recently emerging anomaly\ndetection area - aims at utilizing a few samples of anomaly classes seen during\ntraining to detect unseen anomalies (i.e., samples from open-set anomaly\nclasses), while effectively identifying the seen anomalies. Benefiting from the\nprior knowledge illustrated by the seen anomalies, current OSAD methods can\noften largely reduce false positive errors. However, these methods are trained\nin a closed-set setting and treat the anomaly examples as from a homogeneous\ndistribution, rendering them less effective in generalizing to unseen anomalies\nthat can be drawn from any distribution. This paper proposes to learn\nheterogeneous anomaly distributions using the limited anomaly examples to\naddress this issue. To this end, we introduce a novel approach, namely Anomaly\nHeterogeneity Learning (AHL), that simulates a diverse set of heterogeneous\nanomaly distributions and then utilizes them to learn a unified heterogeneous\nabnormality model in surrogate open-set environments. Further, AHL is a generic\nframework that existing OSAD models can plug and play for enhancing their\nabnormality modeling. 
Extensive experiments on nine real-world anomaly\ndetection datasets show that AHL can 1) substantially enhance different\nstate-of-the-art OSAD models in detecting seen and unseen anomalies, and 2)\neffectively generalize to unseen anomalies in new domains. Code is available at\nhttps://github.com/mala-lab/AHL.", "keywords": [], "authors_list": ["Jiawen Zhu", "Choubo Ding", "Yu Tian", "Guansong Pang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4c9"}, "filepath": "data/2311.16833.png", "tags": [], "_media_type": "image", "_rand": 0.999552338448313, "arXiv_link": "https://arxiv.org/abs/2311.16833", "other_link": "https://github.com/berndprach/1LipschitzLayersCompared.", "title": "1-Lipschitz Layers Compared: Memory, Speed, and Certifiable Robustness", "abstract": "The robustness of neural networks against input perturbations with bounded\nmagnitude represents a serious concern in the deployment of deep learning\nmodels in safety-critical systems. Recently, the scientific community has\nfocused on enhancing certifiable robustness guarantees by crafting 1-Lipschitz\nneural networks that leverage Lipschitz bounded dense and convolutional layers.\nAlthough different methods have been proposed in the literature to achieve this\ngoal, understanding the performance of such methods is not straightforward,\nsince different metrics can be relevant (e.g., training time, memory usage,\naccuracy, certifiable robustness) for different applications. For this reason,\nthis work provides a thorough theoretical and empirical comparison between\nmethods by evaluating them in terms of memory usage, speed, and certifiable\nrobust accuracy. The paper also provides some guidelines and recommendations to\nsupport the user in selecting the methods that work best depending on the\navailable resources. We provide code at\nhttps://github.com/berndprach/1LipschitzLayersCompared.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Bernd Prach", "Fabio Brau", "Giorgio Buttazzo", "Christoph Lampert"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition", "Neural and Evolutionary Computing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4ca"}, "filepath": "data/2312.04519.png", "tags": [], "_media_type": "image", "_rand": 0.9993773749886645, "arXiv_link": "https://arxiv.org/abs/2312.04519", "other_link": "https://github.com/yiduohao/Radical}.", "title": "Bootstrapping Autonomous Driving Radars with Self-Supervised Learning", "abstract": "The perception of autonomous vehicles using radars has attracted increased\nresearch interest due its ability to operate in fog and bad weather. However,\ntraining radar models is hindered by the cost and difficulty of annotating\nlarge-scale radar data. To overcome this bottleneck, we propose a\nself-supervised learning framework to leverage the large amount of unlabeled\nradar data to pre-train radar-only embeddings for self-driving perception\ntasks. The proposed method combines radar-to-radar and radar-to-vision\ncontrastive losses to learn a general representation from unlabeled radar\nheatmaps paired with their corresponding camera images. 
When used for\ndownstream object detection, we demonstrate that the proposed self-supervision\nframework can improve the accuracy of state-of-the-art supervised baselines by\n$5.8\\%$ in mAP. Code is available at \\url{https://github.com/yiduohao/Radical}.", "keywords": [], "authors_list": ["Yiduo Hao", "Sohrab Madani", "Junfeng Guan", "Mo Alloulah", "Saurabh Gupta", "Haitham Al Hassanieh"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4cb"}, "filepath": "data/2312.06059v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995342112145641, "arXiv_link": "https://arxiv.org/abs/2312.06059v1", "other_link": "", "title": "CONFORM: Contrast is All You Need for High-Fidelity Text-to-Image Diffusion Models", "abstract": "Images produced by text-to-image diffusion models might not always faithfully\nrepresent the semantic intent of the provided text prompt, where the model\nmight overlook or entirely fail to produce certain objects. Existing solutions\noften require customly tailored functions for each of these problems, leading\nto sub-optimal results, especially for complex prompts. Our work introduces a\nnovel perspective by tackling this challenge in a contrastive context. Our\napproach intuitively promotes the segregation of objects in attention maps\nwhile also maintaining that pairs of related attributes are kept close to each\nother. We conduct extensive experiments across a wide variety of scenarios,\neach involving unique combinations of objects, attributes, and scenes. These\nexperiments effectively showcase the versatility, efficiency, and flexibility\nof our method in working with both latent and pixel-based diffusion models,\nincluding Stable Diffusion and Imagen. Moreover, we publicly share our source\ncode to facilitate further research.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Tuna Han Salih Meral", "Enis Simsar", "Federico Tombari", "Pinar Yanardag"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4cc"}, "filepath": "data/2403.11157.png", "tags": [], "_media_type": "image", "_rand": 0.9991761143139183, "arXiv_link": "https://arxiv.org/abs/2403.11157", "other_link": "https://github.com/iSEE-Laboratory/DiffUIR", "title": "Selective Hourglass Mapping for Universal Image Restoration Based on Diffusion Model", "abstract": "Universal image restoration is a practical and potential computer vision task\nfor real-world applications. The main challenge of this task is handling the\ndifferent degradation distributions at once. Existing methods mainly utilize\ntask-specific conditions (e.g., prompt) to guide the model to learn different\ndistributions separately, named multi-partite mapping. However, it is not\nsuitable for universal model learning as it ignores the shared information\nbetween different tasks. In this work, we propose an advanced selective\nhourglass mapping strategy based on diffusion model, termed DiffUIR. Two novel\nconsiderations make our DiffUIR non-trivial. Firstly, we equip the model with\nstrong condition guidance to obtain accurate generation direction of diffusion\nmodel (selective). 
More importantly, DiffUIR integrates a flexible shared\ndistribution term (SDT) into the diffusion algorithm elegantly and naturally,\nwhich gradually maps different distributions into a shared one. In the reverse\nprocess, combined with SDT and strong condition guidance, DiffUIR iteratively\nguides the shared distribution to the task-specific distribution with high\nimage quality (hourglass). Without bells and whistles, by only modifying the\nmapping strategy, we achieve state-of-the-art performance on five image\nrestoration tasks, 22 benchmarks in the universal setting and zero-shot\ngeneralization setting. Surprisingly, by only using a lightweight model (only\n0.89M), we could achieve outstanding performance. The source code and\npre-trained models are available at https://github.com/iSEE-Laboratory/DiffUIR", "keywords": ["Low-level vision", "Image and video generation and manipulation"], "authors_list": ["Dian Zheng", "Xiao-Ming Wu", "Shuzhou Yang", "Jian Zhang", "Jian-Fang Hu", "Wei-Shi Zheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4cd"}, "filepath": "data/2311.07634.png", "tags": [], "_media_type": "image", "_rand": 0.9993014906402277, "arXiv_link": "https://arxiv.org/abs/2311.07634", "other_link": "", "title": "ActiveDC: Distribution Calibration for Active Finetuning", "abstract": "The pretraining-finetuning paradigm has gained popularity in various computer\nvision tasks. In this paradigm, the emergence of active finetuning arises due\nto the abundance of large-scale data and costly annotation requirements. Active\nfinetuning involves selecting a subset of data from an unlabeled pool for\nannotation, facilitating subsequent finetuning. However, the use of a limited\nnumber of training samples can lead to a biased distribution, potentially\nresulting in model overfitting. In this paper, we propose a new method called\nActiveDC for the active finetuning tasks. Firstly, we select samples for\nannotation by optimizing the distribution similarity between the subset to be\nselected and the entire unlabeled pool in continuous space. Secondly, we\ncalibrate the distribution of the selected samples by exploiting implicit\ncategory information in the unlabeled pool. The feature visualization provides\nan intuitive sense of the effectiveness of our approach to distribution\ncalibration. We conducted extensive experiments on three image classification\ndatasets with different sampling ratios. The results indicate that ActiveDC\nconsistently outperforms the baseline performance in all image classification\ntasks. The improvement is particularly significant when the sampling ratio is\nlow, with performance gains of up to 10%. 
Our code will be released.", "keywords": [], "authors_list": ["Wenshuai Xu", "Zhenghui Hu", "Yu Lu", "Jinzhou Meng", "Qingjie Liu", "Yunhong Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4ce"}, "filepath": "data/2405.20729.png", "tags": [], "_media_type": "image", "_rand": 0.9992410465454905, "arXiv_link": "https://arxiv.org/abs/2405.20729", "other_link": "", "title": "Extreme Point Supervised Instance Segmentation", "abstract": "This paper introduces a novel approach to learning instance segmentation\nusing extreme points, i.e., the topmost, leftmost, bottommost, and rightmost\npoints, of each object. These points are readily available in the modern\nbounding box annotation process while offering strong clues for precise\nsegmentation, and thus allows to improve performance at the same annotation\ncost with box-supervised methods. Our work considers extreme points as a part\nof the true instance mask and propagates them to identify potential foreground\nand background points, which are all together used for training a pseudo label\ngenerator. Then pseudo labels given by the generator are in turn used for\nsupervised learning of our final model. On three public benchmarks, our method\nsignificantly outperforms existing box-supervised methods, further narrowing\nthe gap with its fully supervised counterpart. In particular, our model\ngenerates high-quality masks when a target object is separated into multiple\nparts, where previous box-supervised methods often fail.", "keywords": [], "authors_list": ["Hyeonjun Lee", "Sehyun Hwang", "Suha Kwak"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4cf"}, "filepath": "data/2404.02242.png", "tags": [], "_media_type": "image", "_rand": 0.999490586103589, "arXiv_link": "https://arxiv.org/abs/2404.02242", "other_link": "", "title": "Towards Robust 3D Pose Transfer with Adversarial Learning", "abstract": "3D pose transfer that aims to transfer the desired pose to a target mesh is\none of the most challenging 3D generation tasks. Previous attempts rely on\nwell-defined parametric human models or skeletal joints as driving pose\nsources. However, to obtain those clean pose sources, cumbersome but necessary\npre-processing pipelines are inevitable, hindering implementations of the\nreal-time applications. This work is driven by the intuition that the\nrobustness of the model can be enhanced by introducing adversarial samples into\nthe training, leading to a more invulnerable model to the noisy inputs, which\neven can be further extended to directly handling the real-world data like raw\npoint clouds/scans without intermediate processing. Furthermore, we propose a\nnovel 3D pose Masked Autoencoder (3D-PoseMAE), a customized MAE that\neffectively learns 3D extrinsic presentations (i.e., pose). 3D-PoseMAE\nfacilitates learning from the aspect of extrinsic attributes by simultaneously\ngenerating adversarial samples that perturb the model and learning the\narbitrary raw noisy poses via a multi-scale masking strategy. Both qualitative\nand quantitative studies show that the transferred meshes given by our network\nresult in much better quality. 
Besides, we demonstrate the strong\ngeneralizability of our method on various poses, different domains, and even\nraw scans. Experimental results also show meaningful insights that the\nintermediate adversarial samples generated in the training can successfully\nattack the existing pose transfer models.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Haoyu Chen", "Hao Tang", "Ehsan Adeli", "Guoying Zhao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4d0"}, "filepath": "data/2312.17334.png", "tags": [], "_media_type": "image", "_rand": 0.9990873415946765, "arXiv_link": "https://arxiv.org/abs/2312.17334", "other_link": "https://github.com/mrluin/TextualDegRemoval}.", "title": "Improving Image Restoration through Removing Degradations in Textual Representations", "abstract": "In this paper, we introduce a new perspective for improving image restoration\nby removing degradation in the textual representations of a given degraded\nimage. Intuitively, restoration is much easier on text modality than image one.\nFor example, it can be easily conducted by removing degradation-related words\nwhile keeping the content-aware words. Hence, we combine the advantages of\nimages in detail description and ones of text in degradation removal to perform\nrestoration. To address the cross-modal assistance, we propose to map the\ndegraded images into textual representations for removing the degradations, and\nthen convert the restored textual representations into a guidance image for\nassisting image restoration. In particular, we ingeniously embed an\nimage-to-text mapper and text restoration module into CLIP-equipped\ntext-to-image models to generate the guidance. Then, we adopt a simple\ncoarse-to-fine approach to dynamically inject multi-scale information from\nguidance to image restoration networks. Extensive experiments are conducted on\nvarious image restoration tasks, including deblurring, dehazing, deraining, and\ndenoising, and all-in-one image restoration. The results showcase that our\nmethod outperforms state-of-the-art ones across all these tasks. The codes and\nmodels are available at \\url{https://github.com/mrluin/TextualDegRemoval}.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Jingbo Lin", "Zhilu Zhang", "Yuxiang Wei", "Dongwei Ren", "Dongsheng Jiang", "Qi Tian", "Wangmeng Zuo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4d1"}, "filepath": "data/2311.18287.png", "tags": [], "_media_type": "image", "_rand": 0.9991611290246936, "arXiv_link": "https://arxiv.org/abs/2311.18287", "other_link": "", "title": "Dispersed Structured Light for Hyperspectral 3D Imaging", "abstract": "Hyperspectral 3D imaging aims to acquire both depth and spectral information\nof a scene. However, existing methods are either prohibitively expensive and\nbulky or compromise on spectral and depth accuracy. In this work, we present\nDispersed Structured Light (DSL), a cost-effective and compact method for\naccurate hyperspectral 3D imaging. DSL modifies a traditional projector-camera\nsystem by placing a sub-millimeter thick diffraction grating film in front of the\nprojector. 
The grating disperses structured light based on light wavelength. To\nutilize the dispersed structured light, we devise a model for dispersive\nprojection image formation and a per-pixel hyperspectral 3D reconstruction\nmethod. We validate DSL by instantiating a compact experimental prototype. DSL\nachieves spectral accuracy of 18.8nm full-width half-maximum (FWHM) and depth\nerror of 1mm. We demonstrate that DSL outperforms prior work on practical\nhyperspectral 3D imaging. DSL promises accurate and practical hyperspectral 3D\nimaging for diverse application domains, including computer vision and\ngraphics, cultural heritage, geology, and biology.", "keywords": ["Computational imaging and physics-based vision", "Vision systems and graphics integration"], "authors_list": ["Suhyun Shin", "Seokjun Choi", "Felix Heide", "Seung-Hwan Baek"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4d2"}, "filepath": "data/2306.13361.png", "tags": [], "_media_type": "image", "_rand": 0.9990914768122248, "arXiv_link": "https://arxiv.org/abs/2306.13361", "other_link": "", "title": "TurboSL: Dense, Accurate and Fast 3D by Neural Inverse Structured Light", "abstract": "Structured light has proven instrumental in 3D imaging, LiDAR, and\nholographic light projection. Metasurfaces, comprised of sub-wavelength-sized\nnanostructures, facilitate 180$^\\circ$ field-of-view (FoV) structured light,\ncircumventing the restricted FoV inherent in traditional optics like\ndiffractive optical elements. However, extant metasurface-facilitated\nstructured light exhibits sub-optimal performance in downstream tasks, due to\nheuristic pattern designs such as periodic dots that do not consider the\nobjectives of the end application. In this paper, we present neural 360$^\\circ$\nstructured light, driven by learned metasurfaces. We propose a differentiable\nframework, that encompasses a computationally-efficient 180$^\\circ$ wave\npropagation model and a task-specific reconstructor, and exploits both\ntransmission and reflection channels of the metasurface. Leveraging a\nfirst-order optimizer within our differentiable framework, we optimize the\nmetasurface design, thereby realizing neural 360$^\\circ$ structured light. We\nhave utilized neural 360$^\\circ$ structured light for holographic light\nprojection and 3D imaging. Specifically, we demonstrate the first 360$^\\circ$\nlight projection of complex patterns, enabled by our propagation model that can\nbe computationally evaluated 50,000$\\times$ faster than the Rayleigh-Sommerfeld\npropagation. For 3D imaging, we improve depth-estimation accuracy by\n5.09$\\times$ in RMSE compared to the heuristically-designed structured light.\nNeural 360$^\\circ$ structured light promises robust 360$^\\circ$ imaging and\ndisplay for robotics, extended-reality systems, and human-computer\ninteractions.", "keywords": ["Computational imaging and physics-based vision", "Efficient and scalable vision"], "authors_list": ["Parsa Mirdehghan", "Maxx Wu", "Wenzheng Chen", "Wenzheng Chen", "David B. 
Lindell", "Kiriakos Kutulakos"], "category_name": "", "all_categories": ["Unknown", "Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4d3"}, "filepath": "data/2310.18285.png", "tags": [], "_media_type": "image", "_rand": 0.9998433834152363, "arXiv_link": "https://arxiv.org/abs/2310.18285", "other_link": "", "title": "Unlocking the Potential of Prompt-Tuning in Bridging Generalized and Personalized Federated Learning", "abstract": "Vision Transformers (ViT) and Visual Prompt Tuning (VPT) achieve\nstate-of-the-art performance with improved efficiency in various computer\nvision tasks. This suggests a promising paradigm shift of adapting pre-trained\nViT models to Federated Learning (FL) settings. However, the challenge of data\nheterogeneity among FL clients presents a significant hurdle in effectively\ndeploying ViT models. Existing Generalized FL (GFL) and Personalized FL (PFL)\nmethods have limitations in balancing performance across both global and local\ndata distributions. In this paper, we present a novel algorithm, SGPT, that\nintegrates GFL and PFL approaches by employing a unique combination of both\nshared and group-specific prompts. This design enables SGPT to capture both\ncommon and group-specific features. A key feature of SGPT is its prompt\nselection module, which facilitates the training of a single global model\ncapable of automatically adapting to diverse local client data distributions\nwithout the need for local fine-tuning. To effectively train the prompts, we\nutilize block coordinate descent (BCD), learning from common feature\ninformation (shared prompts), and then more specialized knowledge (group\nprompts) iteratively. Theoretically, we justify that learning the proposed\nprompts can reduce the gap between global and local performance. Empirically,\nwe conduct experiments on both label and feature heterogeneity settings in\ncomparison with state-of-the-art baselines, along with extensive ablation\nstudies, to substantiate the superior performance of SGPT.", "keywords": ["Efficient and scalable vision"], "authors_list": ["wenlong deng", "Christos Thrampoulidis", "Xiaoxiao Li"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4d4"}, "filepath": "data/2312.06685.png", "tags": [], "_media_type": "image", "_rand": 0.9995236739048493, "arXiv_link": "https://arxiv.org/abs/2312.06685", "other_link": "", "title": "Causal-CoG: A Causal-Effect Look at Context Generation for Boosting Multi-modal Language Models", "abstract": "While Multi-modal Language Models (MLMs) demonstrate impressive multimodal\nability, they still struggle on providing factual and precise responses for\ntasks like visual question answering (VQA). In this paper, we address this\nchallenge from the perspective of contextual information. We propose Causal\nContext Generation, Causal-CoG, which is a prompting strategy that engages\ncontextual information to enhance precise VQA during inference. Specifically,\nwe prompt MLMs to generate contexts, i.e, text description of an image, and\nengage the generated contexts for question answering. Moreover, we investigate\nthe advantage of contexts on VQA from a causality perspective, introducing\ncausality filtering to select samples for which contextual information is\nhelpful. 
To show the effectiveness of Causal-CoG, we run extensive experiments\non 10 multimodal benchmarks and show consistent improvements, e.g., +6.30% on\nPOPE, +13.69% on Vizwiz and +6.43% on VQAv2 compared to direct decoding,\nsurpassing existing methods. We hope Causal-CoG inspires explorations of\ncontext knowledge in multimodal models, and serves as a plug-and-play strategy\nfor MLM decoding.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Shitian Zhao", "Zhuowan Li", "Yadong Lu", "Alan L. Yuille", "Yan Wang"], "category_name": "Artificial Intelligence", "all_categories": ["Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4d5"}, "filepath": "data/2404.02041.png", "tags": [], "_media_type": "image", "_rand": 0.9990282243856436, "arXiv_link": "https://arxiv.org/abs/2404.02041", "other_link": "https://github.com/CAMMA-public/SelfPose3D}", "title": "Person-in-WiFi 3D: End-to-End Multi-Person 3D Pose Estimation with Wi-Fi", "abstract": "We present a new self-supervised approach, SelfPose3d, for estimating 3d\nposes of multiple persons from multiple camera views. Unlike current\nstate-of-the-art fully-supervised methods, our approach does not require any 2d\nor 3d ground-truth poses and uses only the multi-view input images from a\ncalibrated camera setup and 2d pseudo poses generated from an off-the-shelf 2d\nhuman pose estimator. We propose two self-supervised learning objectives:\nself-supervised person localization in 3d space and self-supervised 3d pose\nestimation. We achieve self-supervised 3d person localization by training the\nmodel on synthetically generated 3d points, serving as 3d person root\npositions, and on the projected root-heatmaps in all the views. We then model\nthe 3d poses of all the localized persons with a bottleneck representation, map\nthem onto all views obtaining 2d joints, and render them using 2d Gaussian\nheatmaps in an end-to-end differentiable manner. Afterwards, we use the\ncorresponding 2d joints and heatmaps from the pseudo 2d poses for learning. To\nalleviate the intrinsic inaccuracy of the pseudo labels, we propose an adaptive\nsupervision attention mechanism to guide the self-supervision. Our experiments\nand analysis on three public benchmark datasets, including Panoptic, Shelf, and\nCampus, show the effectiveness of our approach, which is comparable to\nfully-supervised methods. Code is available at\n\\url{https://github.com/CAMMA-public/SelfPose3D}", "keywords": [], "authors_list": ["Kangwei Yan", "Fei Wang", "Bo Qian", "Han Ding", "Jinsong Han", "Xing Wei"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4d6"}, "filepath": "data/2401.12694.png", "tags": [], "_media_type": "image", "_rand": 0.9993101312046765, "arXiv_link": "https://arxiv.org/abs/2401.12694", "other_link": "", "title": "Multi-agent Collaborative Perception via Motion-aware Robust Communication Network", "abstract": "Collaborative perception allows each agent to enhance its perceptual\nabilities by exchanging messages with others. It inherently results in a\ntrade-off between perception ability and communication costs. Previous works\ntransmit complete full-frame high-dimensional feature maps among agents,\nresulting in substantial communication costs. 
To promote communication\nefficiency, we propose only transmitting the information needed for the\ncollaborator's downstream task. This pragmatic communication strategy focuses\non three key aspects: i) pragmatic message selection, which selects\ntask-critical parts from the complete data, resulting in spatially and\ntemporally sparse feature vectors; ii) pragmatic message representation, which\nachieves pragmatic approximation of high-dimensional feature vectors with a\ntask-adaptive dictionary, enabling communicating with integer indices; iii)\npragmatic collaborator selection, which identifies beneficial collaborators,\npruning unnecessary communication links. Following this strategy, we first\nformulate a mathematical optimization framework for the\nperception-communication trade-off and then propose PragComm, a multi-agent\ncollaborative perception system with two key components: i) single-agent\ndetection and tracking and ii) pragmatic collaboration. The proposed PragComm\npromotes pragmatic communication and adapts to a wide range of communication\nconditions. We evaluate PragComm for both collaborative 3D object detection and\ntracking tasks in both real-world, V2V4Real, and simulation datasets, OPV2V and\nV2X-SIM2.0. PragComm consistently outperforms previous methods with more than\n32.7K times lower communication volume on OPV2V. Code is available at\ngithub.com/PhyllisH/PragComm.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Shixin Hong", "Yu LIU", "Zhi Li", "Shaohui Li", "You He"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4d7"}, "filepath": "data/2312.00786.png", "tags": [], "_media_type": "image", "_rand": 0.9994436224129235, "arXiv_link": "https://arxiv.org/abs/2312.00786", "other_link": "https://16lemoing.github.io/dot", "title": "Dense Optical Tracking: Connecting the Dots", "abstract": "Recent approaches to point tracking are able to recover the trajectory of any\nscene point through a large portion of a video despite the presence of\nocclusions. They are, however, too slow in practice to track every point\nobserved in a single frame in a reasonable amount of time. This paper\nintroduces DOT, a novel, simple and efficient method for solving this problem.\nIt first extracts a small set of tracks from key regions at motion boundaries\nusing an off-the-shelf point tracking algorithm. Given source and target\nframes, DOT then computes rough initial estimates of a dense flow field and\nvisibility mask through nearest-neighbor interpolation, before refining them\nusing a learnable optical flow estimator that explicitly handles occlusions and\ncan be trained on synthetic data with ground-truth correspondences. We show\nthat DOT is significantly more accurate than current optical flow techniques,\noutperforms sophisticated \"universal\" trackers like OmniMotion, and is on par\nwith, or better than, the best point tracking algorithms like CoTracker while\nbeing at least two orders of magnitude faster. Quantitative and qualitative\nexperiments with synthetic and real videos validate the promise of the proposed\napproach. 
Code, data, and videos showcasing the capabilities of our approach\nare available in the project webpage: https://16lemoing.github.io/dot .", "keywords": ["Low-level vision"], "authors_list": ["Guillaume Le Moing", "Jean Ponce", "Cordelia Schmid"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4d8"}, "filepath": "data/2311.06322.png", "tags": [], "_media_type": "image", "_rand": 0.9994898405056107, "arXiv_link": "https://arxiv.org/abs/2311.06322", "other_link": "", "title": "Enhancing Post-training Quantization Calibration through Contrastive Learning", "abstract": "Diffusion models have achieved great success due to their remarkable\ngeneration ability. However, their high computational overhead is still a\ntroublesome problem. Recent studies have leveraged post-training quantization\n(PTQ) to compress diffusion models. However, most of them only focus on\nunconditional models, leaving the quantization of widely used large pretrained\ntext-to-image models, e.g., Stable Diffusion, largely unexplored. In this\npaper, we propose a novel post-training quantization method PCR (Progressive\nCalibration and Relaxing) for text-to-image diffusion models, which consists of\na progressive calibration strategy that considers the accumulated quantization\nerror across timesteps, and an activation relaxing strategy that improves the\nperformance with negligible cost. Additionally, we demonstrate the previous\nmetrics for text-to-image diffusion model quantization are not accurate due to\nthe distribution gap. To tackle the problem, we propose a novel QDiffBench\nbenchmark, which utilizes data in the same domain for more accurate evaluation.\nBesides, QDiffBench also considers the generalization performance of the\nquantized model outside the calibration dataset. Extensive experiments on\nStable Diffusion and Stable Diffusion XL demonstrate the superiority of our\nmethod and benchmark. Moreover, we are the first to achieve quantization for\nStable Diffusion XL while maintaining the performance.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Yuzhang Shang", "Gaowen Liu", "Ramana Kompella", "Yan Yan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4d9"}, "filepath": "data/2312.01964.png", "tags": [], "_media_type": "image", "_rand": 0.999194702340719, "arXiv_link": "https://arxiv.org/abs/2312.01964", "other_link": "", "title": "Semantics-aware Motion Retargeting with Vision-Language Models", "abstract": "Capturing and preserving motion semantics is essential to motion retargeting\nbetween animation characters. However, most of the previous works neglect the\nsemantic information or rely on human-designed joint-level representations.\nHere, we present a novel Semantics-aware Motion reTargeting (SMT) method with\nthe advantage of vision-language models to extract and maintain meaningful\nmotion semantics. We utilize a differentiable module to render 3D motions. Then\nthe high-level motion semantics are incorporated into the motion retargeting\nprocess by feeding the vision-language model with the rendered images and\naligning the extracted semantic embeddings. 
To ensure the preservation of\nfine-grained motion details and high-level semantics, we adopt a two-stage\npipeline consisting of skeleton-aware pre-training and fine-tuning with\nsemantics and geometry constraints. Experimental results show the effectiveness\nof the proposed method in producing high-quality motion retargeting results\nwhile accurately preserving motion semantics.", "keywords": ["Multimodal models and vision-language models", "Biometrics and human analysis"], "authors_list": ["Haodong Zhang", "ZhiKe Chen", "Haocheng Xu", "Lei Hao", "Xiaofei Wu", "Songcen Xu", "Zhensong Zhang", "Yue Wang", "Rong Xiong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4da"}, "filepath": "data/2403.01693.png", "tags": [], "_media_type": "image", "_rand": 0.9997046819883114, "arXiv_link": "https://arxiv.org/abs/2403.01693", "other_link": "", "title": "HanDiffuser: Text-to-Image Generation With Realistic Hand Appearances", "abstract": "Text-to-image generative models can generate high-quality humans, but realism\nis lost when generating hands. Common artifacts include irregular hand poses,\nshapes, incorrect numbers of fingers, and physically implausible finger\norientations. To generate images with realistic hands, we propose a novel\ndiffusion-based architecture called HanDiffuser that achieves realism by\ninjecting hand embeddings in the generative process. HanDiffuser consists of\ntwo components: a Text-to-Hand-Params diffusion model to generate SMPL-Body and\nMANO-Hand parameters from input text prompts, and a Text-Guided\nHand-Params-to-Image diffusion model to synthesize images by conditioning on\nthe prompts and hand parameters generated by the previous component. We\nincorporate multiple aspects of hand representation, including 3D shapes and\njoint-level finger positions, orientations and articulations, for robust\nlearning and reliable performance during inference. We conduct extensive\nquantitative and qualitative experiments and perform user studies to\ndemonstrate the efficacy of our method in generating images with high-quality\nhands.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Supreeth Narasimhaswamy", "Uttaran Bhattacharya", "Xiang Chen", "Ishita Dasgupta", "Saayan Mitra", "Minh Hoai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4db"}, "filepath": "data/2405.04966.png", "tags": [], "_media_type": "image", "_rand": 0.9995190422603215, "arXiv_link": "https://arxiv.org/abs/2405.04966", "other_link": "https://github.com/PhyllisH/CodeFilling.", "title": "Communication-Efficient Collaborative Perception via Information Filling with Codebook", "abstract": "Collaborative perception empowers each agent to improve its perceptual\nability through the exchange of perceptual messages with other agents. It\ninherently results in a fundamental trade-off between perception ability and\ncommunication cost. To address this bottleneck issue, our core idea is to\noptimize the collaborative messages from two key aspects: representation and\nselection. 
The proposed codebook-based message representation enables the\ntransmission of integer codes, rather than high-dimensional feature maps. The\nproposed information-filling-driven message selection optimizes local messages\nto collectively fill each agent's information demand, preventing information\noverflow among multiple agents. By integrating these two designs, we propose\nCodeFilling, a novel communication-efficient collaborative perception system,\nwhich significantly advances the perception-communication trade-off and is\ninclusive to both homogeneous and heterogeneous collaboration settings. We\nevaluate CodeFilling in both a real-world dataset, DAIR-V2X, and a new\nsimulation dataset, OPV2VH+. Results show that CodeFilling outperforms previous\nSOTA Where2comm on DAIR-V2X/OPV2VH+ with 1,333/1,206 times lower communication\nvolume. Our code is available at https://github.com/PhyllisH/CodeFilling.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yue Hu", "Juntong Peng", "Sifei Liu", "Junhao Ge", "Si Liu", "Siheng Chen"], "category_name": "Information Theory", "all_categories": ["Information Theory", "Computer Vision and Pattern Recognition", "Multiagent Systems", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4dc"}, "filepath": "data/2312.00739.png", "tags": [], "_media_type": "image", "_rand": 0.999613469403032, "arXiv_link": "https://arxiv.org/abs/2312.00739", "other_link": "https://github.com/2y7c3/ASD.", "title": "Adversarial Score Distillation: When score distillation meets GAN", "abstract": "Existing score distillation methods are sensitive to classifier-free guidance\n(CFG) scale: manifested as over-smoothness or instability at small CFG scales,\nwhile over-saturation at large ones. To explain and analyze these issues, we\nrevisit the derivation of Score Distillation Sampling (SDS) and decipher\nexisting score distillation with the Wasserstein Generative Adversarial Network\n(WGAN) paradigm. With the WGAN paradigm, we find that existing score\ndistillation either employs a fixed sub-optimal discriminator or conducts\nincomplete discriminator optimization, resulting in the scale-sensitive issue.\nWe propose the Adversarial Score Distillation (ASD), which maintains an\noptimizable discriminator and updates it using the complete optimization\nobjective. Experiments show that the proposed ASD performs favorably in 2D\ndistillation and text-to-3D tasks against existing methods. Furthermore, to\nexplore the generalization ability of our WGAN paradigm, we extend ASD to the\nimage editing task, which achieves competitive results. 
The project page and\ncode are at https://github.com/2y7c3/ASD.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Min Wei", "Jingkai Zhou", "Junyao Sun", "Xuesong Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4dd"}, "filepath": "data/2311.06242.png", "tags": [], "_media_type": "image", "_rand": 0.9990715267550876, "arXiv_link": "https://arxiv.org/abs/2311.06242", "other_link": "", "title": "Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks", "abstract": "We introduce Florence-2, a novel vision foundation model with a unified,\nprompt-based representation for a variety of computer vision and\nvision-language tasks. While existing large vision models excel in transfer\nlearning, they struggle to perform a diversity of tasks with simple\ninstructions, a capability that implies handling the complexity of various\nspatial hierarchy and semantic granularity. Florence-2 was designed to take\ntext-prompt as task instructions and generate desirable results in text forms,\nwhether it be captioning, object detection, grounding or segmentation. This\nmulti-task learning setup demands large-scale, high-quality annotated data. To\nthis end, we co-developed FLD-5B that consists of 5.4 billion comprehensive\nvisual annotations on 126 million images, using an iterative strategy of\nautomated image annotation and model refinement. We adopted a\nsequence-to-sequence structure to train Florence-2 to perform versatile and\ncomprehensive vision tasks. Extensive evaluations on numerous tasks\ndemonstrated Florence-2 to be a strong vision foundation model contender with\nunprecedented zero-shot and fine-tuning capabilities.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Bin Xiao", "Haiping Wu", "Weijian Xu", "Xiyang Dai", "Houdong Hu", "Yumao Lu", "Michael Zeng", "Ce Liu", "Lu Yuan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4de"}, "filepath": "data/2404.06609.png", "tags": [], "_media_type": "image", "_rand": 0.9990491332793755, "arXiv_link": "https://arxiv.org/abs/2404.06609", "other_link": "", "title": "GOAT-Bench: A Benchmark for Multi-modal Lifelong Navigation", "abstract": "The Embodied AI community has made significant strides in visual navigation\ntasks, exploring targets from 3D coordinates, objects, language descriptions,\nand images. However, these navigation models often handle only a single input\nmodality as the target. With the progress achieved so far, it is time to move\ntowards universal navigation models capable of handling various goal types,\nenabling more effective user interaction with robots. To facilitate this goal,\nwe propose GOAT-Bench, a benchmark for the universal navigation task referred\nto as GO to AnyThing (GOAT). In this task, the agent is directed to navigate to\na sequence of targets specified by the category name, language description, or\nimage in an open-vocabulary fashion. 
We benchmark monolithic RL and modular\nmethods on the GOAT task, analyzing their performance across modalities, the\nrole of explicit and implicit scene memories, their robustness to noise in goal\nspecifications, and the impact of memory in lifelong scenarios.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Mukul Khanna", "Ram Ramrakhya", "Gunjan Chhablani", "Sriram Yenamandra", "Theo Gervet", "Matthew Chang", "Zsolt Kira", "Devendra Singh Chaplot", "Dhruv Batra", "Roozbeh Mottaghi"], "category_name": "Artificial Intelligence", "all_categories": ["Artificial Intelligence", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4df"}, "filepath": "data/2311.16926.png", "tags": [], "_media_type": "image", "_rand": 0.9993886799907998, "arXiv_link": "https://arxiv.org/abs/2311.16926", "other_link": "", "title": "LLaFS: When Large Language Models Meet Few-Shot Segmentation", "abstract": "This paper proposes LLaFS, the first attempt to leverage large language\nmodels (LLMs) in few-shot segmentation. In contrast to the conventional\nfew-shot segmentation methods that only rely on the limited and biased\ninformation from the annotated support images, LLaFS leverages the vast prior\nknowledge gained by LLM as an effective supplement and directly uses the LLM to\nsegment images in a few-shot manner. To enable the text-based LLM to handle\nimage-related tasks, we carefully design an input instruction that allows the\nLLM to produce segmentation results represented as polygons, and propose a\nregion-attribute table to simulate the human visual mechanism and provide\nmulti-modal guidance. We also synthesize pseudo samples and use curriculum\nlearning for pretraining to augment data and achieve better optimization. LLaFS\nachieves state-of-the-art results on multiple datasets, showing the potential\nof using LLMs for few-shot computer vision tasks.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Lanyun Zhu", "Tianrun Chen", "Deyi Ji", "Deyi Ji", "Jieping Ye", "Jun Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4e0"}, "filepath": "data/2405.12057.png", "tags": [], "_media_type": "image", "_rand": 0.9994188525502926, "arXiv_link": "https://arxiv.org/abs/2405.12057", "other_link": "", "title": "MVCPS-NeuS: Multi-view Constrained Photometric Stereo for Neural Surface Reconstruction", "abstract": "In this work we present a novel multi-view photometric stereo (PS) method.\nLike many works in 3D reconstruction we are leveraging neural shape\nrepresentations and learnt renderers. However, our work differs from\nstate-of-the-art multi-view PS methods such as PS-NeRF or SuperNormal in that we\nexplicitly leverage per-pixel intensity renderings rather than relying mainly on\nestimated normals.\n We model point light attenuation and explicitly raytrace cast shadows in\norder to best approximate each point's incoming radiance. This is used as input\nto a fully neural material renderer that uses minimal prior assumptions and it\nis jointly optimised with the surface. 
Finally, estimated normal and\nsegmentation maps can also be incorporated in order to maximise the surface\naccuracy.\n Our method is among the first to outperform the classical approach of\nDiLiGenT-MV and achieves an average 0.2mm Chamfer distance for objects imaged at\napproximately 1.5m away with approximately 400x400 resolution. Moreover, we\nshow robustness to poor normals in a low light count scenario, achieving a 0.27mm\nChamfer distance when pixel rendering is used instead of estimated normals.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Hiroaki Santo", "Fumio Okura", "Yasuyuki Matsushita"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4e1"}, "filepath": "data/2404.05621.png", "tags": [], "_media_type": "image", "_rand": 0.9994044296431462, "arXiv_link": "https://arxiv.org/abs/2404.05621", "other_link": "https://github.com/FarinaMatteo/multiflow.", "title": "MULTIFLOW: Shifting Towards Task-Agnostic Vision-Language Pruning", "abstract": "While excellent in transfer learning, Vision-Language models (VLMs) come with\nhigh computational costs due to their large number of parameters. To address\nthis issue, removing parameters via model pruning is a viable solution.\nHowever, existing techniques for VLMs are task-specific, and thus require\npruning the network from scratch for each new task of interest. In this work,\nwe explore a new direction: Task-Agnostic Vision-Language Pruning (TA-VLP).\nGiven a pretrained VLM, the goal is to find a unique pruned counterpart\ntransferable to multiple unknown downstream tasks. In this challenging setting,\nthe transferable representations already encoded in the pretrained model are a\nkey aspect to preserve. Thus, we propose Multimodal Flow Pruning (MULTIFLOW), a\nfirst, gradient-free, pruning framework for TA-VLP where: (i) the importance of\na parameter is expressed in terms of its magnitude and its information flow, by\nincorporating the saliency of the neurons it connects; and (ii) pruning is\ndriven by the emergent (multimodal) distribution of the VLM parameters after\npretraining. We benchmark eight state-of-the-art pruning algorithms in the\ncontext of TA-VLP, experimenting with two VLMs, three vision-language tasks,\nand three pruning ratios. Our experimental results show that MULTIFLOW\noutperforms recent sophisticated, combinatorial competitors in the vast\nmajority of the cases, paving the way towards addressing TA-VLP. The code is\npublicly available at https://github.com/FarinaMatteo/multiflow.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Matteo Farina", "Massimiliano Mancini", "Elia Cunegatti", "Gaowen Liu", "Giovanni Iacca", "Elisa Ricci"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4e2"}, "filepath": "data/2403.15789.png", "tags": [], "_media_type": "image", "_rand": 0.9994805338346101, "arXiv_link": "https://arxiv.org/abs/2403.15789", "other_link": "https://github.com/tiny-smart/in-context-matting", "title": "In-Context Matting", "abstract": "We introduce in-context matting, a novel task setting of image matting. 
Given\na reference image of a certain foreground and guided priors such as points,\nscribbles, and masks, in-context matting enables automatic alpha estimation on\na batch of target images of the same foreground category, without additional\nauxiliary input. This setting marries good performance in auxiliary input-based\nmatting and ease of use in automatic matting, which finds a good trade-off\nbetween customization and automation. To overcome the key challenge of accurate\nforeground matching, we introduce IconMatting, an in-context matting model\nbuilt upon a pre-trained text-to-image diffusion model. Conditioned on inter-\nand intra-similarity matching, IconMatting can make full use of reference\ncontext to generate accurate target alpha mattes. To benchmark the task, we\nalso introduce a novel testing dataset ICM-$57$, covering 57 groups of\nreal-world images. Quantitative and qualitative results on the ICM-57 testing\nset show that IconMatting rivals the accuracy of trimap-based matting while\nretaining the automation level akin to automatic matting. Code is available at\nhttps://github.com/tiny-smart/in-context-matting", "keywords": [], "authors_list": ["He Guo", "Zixuan Ye", "Zhiguo Cao", "Hao Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4e3"}, "filepath": "data/2403.02628.png", "tags": [], "_media_type": "image", "_rand": 0.9998575035070947, "arXiv_link": "https://arxiv.org/abs/2403.02628", "other_link": "", "title": "Interactive Continual Learning: Fast and Slow Thinking", "abstract": "Advanced life forms, sustained by the synergistic interaction of neural\ncognitive mechanisms, continually acquire and transfer knowledge throughout\ntheir lifespan. In contrast, contemporary machine learning paradigms exhibit\nlimitations in emulating the facets of continual learning (CL). Nonetheless,\nthe emergence of large language models (LLMs) presents promising avenues for\nrealizing CL via interactions with these models. Drawing on Complementary\nLearning System theory, this paper presents a novel Interactive Continual\nLearning (ICL) framework, enabled by collaborative interactions among models of\nvarious sizes. Specifically, we assign the ViT model as System1 and multimodal\nLLM as System2. To enable the memory module to deduce tasks from class\ninformation and enhance Set2Set retrieval, we propose the Class-Knowledge-Task\nMulti-Head Attention (CKT-MHA). Additionally, to improve memory retrieval in\nSystem1 through enhanced geometric representation, we introduce the CL-vMF\nmechanism, based on the von Mises-Fisher (vMF) distribution. Meanwhile, we\nintroduce the von Mises-Fisher Outlier Detection and Interaction (vMF-ODI)\nstrategy to identify hard examples, thus enhancing collaboration between\nSystem1 and System2 for complex reasoning realization. Comprehensive evaluation\nof our proposed ICL demonstrates significant resistance to forgetting and\nsuperior performance relative to existing methods. 
Code is available at\ngithub.com/ICL.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Biqing Qi", "Xinquan Chen", "Junqi Gao", "Dong Li", "Jianxing Liu", "Ligang Wu", "Bowen Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4e4"}, "filepath": "data/2203.08450.png", "tags": [], "_media_type": "image", "_rand": 0.9991993156838851, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2203.08450", "other_link": "https://github.com/Googolxx/STF.", "title": "The Devil is in the Details: StyleFeatureEditor for Detail-Rich StyleGAN Inversion and High Quality Image Editing", "abstract": "Learned image compression methods have exhibited superior rate-distortion\nperformance than classical image compression standards. Most existing learned\nimage compression models are based on Convolutional Neural Networks (CNNs).\nDespite great contributions, a main drawback of CNN based model is that its\nstructure is not designed for capturing local redundancy, especially the\nnon-repetitive textures, which severely affects the reconstruction quality.\nTherefore, how to make full use of both global structure and local texture\nbecomes the core problem for learning-based image compression. Inspired by\nrecent progresses of Vision Transformer (ViT) and Swin Transformer, we found\nthat combining the local-aware attention mechanism with the global-related\nfeature learning could meet the expectation in image compression. In this\npaper, we first extensively study the effects of multiple kinds of attention\nmechanisms for local features learning, then introduce a more straightforward\nyet effective window-based local attention block. The proposed window-based\nattention is very flexible which could work as a plug-and-play component to\nenhance CNN and Transformer models. Moreover, we propose a novel Symmetrical\nTransFormer (STF) framework with absolute transformer blocks in the\ndown-sampling encoder and up-sampling decoder. Extensive experimental\nevaluations have shown that the proposed method is effective and outperforms\nthe state-of-the-art methods. The code is publicly available at\nhttps://github.com/Googolxx/STF.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Denis Bobkov", "Vadim Titov", "Aibek Alanov", "Dmitry Vetrov"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4e5"}, "filepath": "data/2401.12592.png", "tags": [], "_media_type": "image", "_rand": 0.9996280614130386, "arXiv_link": "https://arxiv.org/abs/2401.12592", "other_link": "https://wildrgbd.github.io/.", "title": "RGBD Objects in the Wild: Scaling Real-World 3D Object Learning from RGB-D Videos", "abstract": "We introduce a new RGB-D object dataset captured in the wild called\nWildRGB-D. Unlike most existing real-world object-centric datasets which only\ncome with RGB capturing, the direct capture of the depth channel allows better\n3D annotations and broader downstream applications. WildRGB-D comprises\nlarge-scale category-level RGB-D object videos, which are taken using an iPhone\nto go around the objects in 360 degrees. 
It contains around 8500 recorded\nobjects and nearly 20000 RGB-D videos across 46 common object categories. These\nvideos are taken with diverse cluttered backgrounds with three setups to cover\nas many real-world scenarios as possible: (i) a single object in one video;\n(ii) multiple objects in one video; and (iii) an object with a static hand in\none video. The dataset is annotated with object masks, real-world scale camera\nposes, and reconstructed aggregated point clouds from RGBD videos. We benchmark\nfour tasks with WildRGB-D including novel view synthesis, camera pose\nestimation, object 6d pose estimation, and object surface reconstruction. Our\nexperiments show that the large-scale capture of RGB-D objects provides a large\npotential to advance 3D object learning. Our project page is\nhttps://wildrgbd.github.io/.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Hongchi Xia", "Yang Fu", "Sifei Liu", "Xiaolong Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4e6"}, "filepath": "data/2403.16003.png", "tags": [], "_media_type": "image", "_rand": 0.9995194992508115, "arXiv_link": "https://arxiv.org/abs/2403.16003", "other_link": "", "title": "Learning Continual Compatible Representation for Re-indexing Free Lifelong Person Re-identification", "abstract": "Lifelong Person Re-Identification (LReID) aims to continuously learn from\nsuccessive data streams, matching individuals across multiple cameras. The key\nchallenge for LReID is how to effectively preserve old knowledge while\nincrementally learning new information, which is caused by task-level domain\ngaps and limited old task datasets. Existing methods based on CNN backbone are\ninsufficient to explore the representation of each instance from different\nperspectives, limiting model performance on limited old task datasets and new\ntask datasets. Unlike these methods, we propose a Diverse Representations\nEmbedding (DRE) framework that first explores a pure transformer for LReID. The\nproposed DRE preserves old knowledge while adapting to new information based on\ninstance-level and task-level layout. Concretely, an Adaptive Constraint Module\n(ACM) is proposed to implement integration and push away operations between\nmultiple overlapping representations generated by transformer-based backbone,\nobtaining rich and discriminative representations for each instance to improve\nadaptive ability of LReID. Based on the processed diverse representations, we\npropose Knowledge Update (KU) and Knowledge Preservation (KP) strategies at the\ntask-level layout by introducing the adjustment model and the learner model. KU\nstrategy enhances the adaptive learning ability of learner models for new\ninformation under the adjustment model prior, and KP strategy preserves old\nknowledge operated by representation-level alignment and logit-level\nsupervision in limited old task datasets while guaranteeing the adaptive\nlearning information capacity of the LReID model. 
Compared to state-of-the-art\nmethods, our method achieves significantly improved performance in holistic,\nlarge-scale, and occluded datasets.", "keywords": [], "authors_list": ["Zhenyu Cui", "Jiahuan Zhou", "Xun Wang", "Manyu Zhu", "Yuxin Peng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4e7"}, "filepath": "data/2403.08019.png", "tags": [], "_media_type": "image", "_rand": 0.9994046435002578, "arXiv_link": "https://arxiv.org/abs/2403.08019", "other_link": "", "title": "6-DoF Pose Estimation with MultiScale Residual Correlation", "abstract": "We propose a single-shot approach to determining 6-DoF pose of an object with\navailable 3D computer-aided design (CAD) model from a single RGB image. Our\nmethod, dubbed MRC-Net, comprises two stages. The first performs pose\nclassification and renders the 3D object in the classified pose. The second\nstage performs regression to predict fine-grained residual pose within class.\nConnecting the two stages is a novel multi-scale residual correlation (MRC)\nlayer that captures high-and-low level correspondences between the input image\nand rendering from first stage. MRC-Net employs a Siamese network with shared\nweights between both stages to learn embeddings for input and rendered images.\nTo mitigate ambiguity when predicting discrete pose class labels on symmetric\nobjects, we use soft probabilistic labels to define pose class in the first\nstage. We demonstrate state-of-the-art accuracy, outperforming all competing\nRGB-based methods on four challenging BOP benchmark datasets: T-LESS, LM-O,\nYCB-V, and ITODD. Our method is non-iterative and requires no complex\npost-processing.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yuelong Li", "Yafei Mao", "Raja Bala", "Sunil Hadap"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4e8"}, "filepath": "data/2405.05605.png", "tags": [], "_media_type": "image", "_rand": 0.9995423388974098, "arXiv_link": "https://arxiv.org/abs/2405.05605", "other_link": "https://github.com/andreadalcin/MinimalPerspectiveAutocalibration", "title": "Minimal Perspective Autocalibration", "abstract": "We introduce a new family of minimal problems for reconstruction from\nmultiple views. Our primary focus is a novel approach to autocalibration, a\nlong-standing problem in computer vision. Traditional approaches to this\nproblem, such as those based on Kruppa's equations or the modulus constraint,\nrely explicitly on the knowledge of multiple fundamental matrices or a\nprojective reconstruction. In contrast, we consider a novel formulation\ninvolving constraints on image points, the unknown depths of 3D points, and a\npartially specified calibration matrix $K$. For $2$ and $3$ views, we present a\ncomprehensive taxonomy of minimal autocalibration problems obtained by relaxing\nsome of these constraints. These problems are organized into classes according\nto the number of views and any assumed prior knowledge of $K$. Within each\nclass, we determine problems with the fewest -- or a relatively small number of\n-- solutions. 
From this zoo of problems, we devise three practical solvers.\nExperiments with synthetic and real data and interfacing our solvers with\nCOLMAP demonstrate that we achieve superior accuracy compared to\nstate-of-the-art calibration methods. The code is available at\nhttps://github.com/andreadalcin/MinimalPerspectiveAutocalibration", "keywords": [], "authors_list": ["Andrea Porfiri Dal Cin", "Timothy Duff", "Luca Magri", "Tomas Pajdla"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4e9"}, "filepath": "data/2404.03635.png", "tags": [], "_media_type": "image", "_rand": 0.999330116582335, "arXiv_link": "https://arxiv.org/abs/2404.03635", "other_link": "", "title": "WorDepth: Variational Language Prior for Monocular Depth Estimation", "abstract": "Three-dimensional (3D) reconstruction from a single image is an ill-posed\nproblem with inherent ambiguities, i.e. scale. Predicting a 3D scene from text\ndescription(s) is similarly ill-posed, i.e. spatial arrangements of objects\ndescribed. We investigate the question of whether two inherently ambiguous\nmodalities can be used in conjunction to produce metric-scaled reconstructions.\nTo test this, we focus on monocular depth estimation, the problem of predicting\na dense depth map from a single image, but with an additional text caption\ndescribing the scene. To this end, we begin by encoding the text caption as a\nmean and standard deviation; using a variational framework, we learn the\ndistribution of the plausible metric reconstructions of 3D scenes corresponding\nto the text captions as a prior. To \"select\" a specific reconstruction or depth\nmap, we encode the given image through a conditional sampler that samples from\nthe latent space of the variational text encoder, which is then decoded to the\noutput depth map. Our approach is trained alternatingly between the text and\nimage branches: in one optimization step, we predict the mean and standard\ndeviation from the text description and sample from a standard Gaussian, and in\nthe other, we sample using a (image) conditional sampler. Once trained, we\ndirectly predict depth from the encoded text using the conditional sampler. We\ndemonstrate our approach on indoor (NYUv2) and outdoor (KITTI) scenarios, where\nwe show that language can consistently improve performance in both.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models"], "authors_list": ["Ziyao Zeng", "Hyoungseob Park", "Fengyu Yang", "Daniel Wang", "Stefano Soatto", "Dong Lao", "Alex Wong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language", "Machine Learning", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4ea"}, "filepath": "data/2310.19512.png", "tags": [], "_media_type": "image", "_rand": 0.9996693070651778, "arXiv_link": "http://export.arxiv.org/abs/2310.19512", "other_link": "", "title": "Hierarchical Patch Diffusion Models for High-Resolution Video Generation", "abstract": "Video generation has increasingly gained interest in both academia and\nindustry. 
Although commercial tools can generate plausible videos, there is a\nlimited number of open-source models available for researchers and engineers.\nIn this work, we introduce two diffusion models for high-quality video\ngeneration, namely text-to-video (T2V) and image-to-video (I2V) models. T2V\nmodels synthesize a video based on a given text input, while I2V models\nincorporate an additional image input. Our proposed T2V model can generate\nrealistic and cinematic-quality videos with a resolution of $1024 \\times 576$,\noutperforming other open-source T2V models in terms of quality. The I2V model\nis designed to produce videos that strictly adhere to the content of the\nprovided reference image, preserving its content, structure, and style. This\nmodel is the first open-source I2V foundation model capable of transforming a\ngiven image into a video clip while maintaining content preservation\nconstraints. We believe that these open-source video generation models will\ncontribute significantly to the technological advancements within the\ncommunity.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Ivan Skorokhodov", "Willi Menapace", "Aliaksandr Siarohin", "Sergey Tulyakov"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4eb"}, "filepath": "data/2311.17241.png", "tags": [], "_media_type": "image", "_rand": 0.9994159822612915, "arXiv_link": "https://arxiv.org/abs/2311.17241", "other_link": "https://github.com/sming256/AdaTAD.", "title": "End-to-End Temporal Action Detection with 1B Parameters Across 1000 Frames", "abstract": "Recently, temporal action detection (TAD) has seen significant performance\nimprovement with end-to-end training. However, due to the memory bottleneck,\nonly models with limited scales and limited data volumes can afford end-to-end\ntraining, which inevitably restricts TAD performance. In this paper, we reduce\nthe memory consumption for end-to-end training, and manage to scale up the TAD\nbackbone to 1 billion parameters and the input video to 1,536 frames, leading\nto significant detection performance. The key to our approach lies in our\nproposed temporal-informative adapter (TIA), which is a novel lightweight\nmodule that reduces training memory. Using TIA, we free the humongous backbone\nfrom learning to adapt to the TAD task by only updating the parameters in TIA.\nTIA also leads to better TAD representation by temporally aggregating context\nfrom adjacent frames throughout the backbone. We evaluate our model across four\nrepresentative datasets. Owing to our efficient design, we are able to train\nend-to-end on VideoMAEv2-giant and achieve 75.4% mAP on THUMOS14, being the\nfirst end-to-end model to outperform the best feature-based methods. 
Code is\navailable at https://github.com/sming256/AdaTAD.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Shuming Liu", "Chenlin Zhang", "Chen Zhao", "Bernard Ghanem"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4ec"}, "filepath": "data/2404.00653.png", "tags": [], "_media_type": "image", "_rand": 0.999616174682834, "arXiv_link": "https://arxiv.org/abs/2404.00653", "other_link": "", "title": "Dual DETRs for Multi-Label Temporal Action Detection", "abstract": "Temporal Action Detection (TAD) aims to identify the action boundaries and\nthe corresponding category within untrimmed videos. Inspired by the success of\nDETR in object detection, several methods have adapted the query-based\nframework to the TAD task. However, these approaches primarily followed DETR to\npredict actions at the instance level (i.e., identify each action by its center\npoint), leading to sub-optimal boundary localization. To address this issue, we\npropose a new Dual-level query-based TAD framework, namely DualDETR, to detect\nactions from both instance-level and boundary-level. Decoding at different\nlevels requires semantics of different granularity, therefore we introduce a\ntwo-branch decoding structure. This structure builds distinctive decoding\nprocesses for different levels, facilitating explicit capture of temporal cues\nand semantics at each level. On top of the two-branch design, we present a\njoint query initialization strategy to align queries from both levels.\nSpecifically, we leverage encoder proposals to match queries from each level in\na one-to-one manner. Then, the matched queries are initialized using position\nand content prior from the matched action proposal. The aligned dual-level\nqueries can refine the matched proposal with complementary cues during\nsubsequent decoding. We evaluate DualDETR on three challenging multi-label TAD\nbenchmarks. The experimental results demonstrate the superior performance of\nDualDETR to the existing state-of-the-art methods, achieving a substantial\nimprovement under det-mAP and delivering impressive results under seg-mAP.", "keywords": [], "authors_list": ["Yuhan Zhu", "Guozhen Zhang", "Jing Tan", "Gangshan Wu", "Limin Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4ed"}, "filepath": "data/2405.18416v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995516642681981, "arXiv_link": "https://arxiv.org/html/2405.18416v1", "other_link": "https://streetunveiler.github.io", "title": "LeftRefill: Filling Right Canvas based on Left Reference through Generalized Text-to-Image Diffusion Model", "abstract": "Unveiling an empty street from crowded observations captured by in-car\ncameras is crucial for autonomous driving. However, removing all temporary\nstatic objects, such as stopped vehicles and standing pedestrians, presents a\nsignificant challenge. Unlike object-centric 3D inpainting, which relies on\nthorough observation in a small scene, street scenes involve long trajectories\nthat differ from previous 3D inpainting tasks. The camera-centric moving\nenvironment of captured videos further complicates the task due to the limited\ndegree and time duration of object observation. 
To address these obstacles, we\nintroduce StreetUnveiler to reconstruct an empty street. StreetUnveiler learns\na 3D representation of the empty street from crowded observations. Our\nrepresentation is based on the hard-label semantic 2D Gaussian Splatting (2DGS)\nfor its scalability and ability to identify Gaussians to be removed. We inpaint\nrendered image after removing unwanted Gaussians to provide pseudo-labels and\nsubsequently re-optimize the 2DGS. Given its temporal continuous movement, we\ndivide the empty street scene into observed, partial-observed, and unobserved\nregions, which we propose to locate through a rendered alpha map. This\ndecomposition helps us to minimize the regions that need to be inpainted. To\nenhance the temporal consistency of the inpainting, we introduce a novel\ntime-reversal framework to inpaint frames in reverse order and use later frames\nas references for earlier frames to fully utilize the long-trajectory\nobservations. Our experiments conducted on the street scene dataset\nsuccessfully reconstructed a 3D representation of the empty street. The mesh\nrepresentation of the empty street can be extracted for further applications.\nProject page and more visualizations can be found at:\nhttps://streetunveiler.github.io", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Chenjie Cao", "Yunuo Cai", "Qiaole Dong", "Yikai Wang", "Yanwei Fu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4ee"}, "filepath": "data/2311.04391.png", "tags": [], "_media_type": "image", "_rand": 0.9992446586661868, "arXiv_link": "https://arxiv.org/abs/2311.04391", "other_link": "", "title": "3DiffTection: 3D Object Detection with Geometry-aware Diffusion Features", "abstract": "We present 3DiffTection, a state-of-the-art method for 3D object detection\nfrom single images, leveraging features from a 3D-aware diffusion model.\nAnnotating large-scale image data for 3D detection is resource-intensive and\ntime-consuming. Recently, pretrained large image diffusion models have become\nprominent as effective feature extractors for 2D perception tasks. However,\nthese features are initially trained on paired text and image data, which are\nnot optimized for 3D tasks, and often exhibit a domain gap when applied to the\ntarget data. Our approach bridges these gaps through two specialized tuning\nstrategies: geometric and semantic. For geometric tuning, we fine-tune a\ndiffusion model to perform novel view synthesis conditioned on a single image,\nby introducing a novel epipolar warp operator. This task meets two essential\ncriteria: the necessity for 3D awareness and reliance solely on posed image\ndata, which are readily available (e.g., from videos) and does not require\nmanual annotation. For semantic refinement, we further train the model on\ntarget data with detection supervision. Both tuning phases employ ControlNet to\npreserve the integrity of the original feature capabilities. In the final step,\nwe harness these enhanced capabilities to conduct a test-time prediction\nensemble across multiple virtual viewpoints. Through our methodology, we obtain\n3D-aware features that are tailored for 3D detection and excel in identifying\ncross-view point correspondences. 
Consequently, our model emerges as a powerful\n3D detector, substantially surpassing previous benchmarks, e.g., Cube-RCNN, a\nprecedent in single-view 3D detection by 9.43\\% in AP3D on the\nOmni3D-ARkitscene dataset. Furthermore, 3DiffTection showcases robust data\nefficiency and generalization to cross-domain data.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Chenfeng Xu", "Huan Ling", "Sanja Fidler", "Or Litany"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4ef"}, "filepath": "data/2307.01421.png", "tags": [], "_media_type": "image", "_rand": 0.9996318308501915, "arXiv_link": "https://arxiv.org/abs/2307.01421", "other_link": "", "title": "Unsupervised Feature Learning with Emergent Data-Driven Prototypicality", "abstract": "Given an image set without any labels, our goal is to train a model that maps\neach image to a point in a feature space such that, not only proximity\nindicates visual similarity, but where it is located directly encodes how\nprototypical the image is according to the dataset.\n Our key insight is to perform unsupervised feature learning in hyperbolic\ninstead of Euclidean space, where the distance between points still reflect\nimage similarity, and yet we gain additional capacity for representing\nprototypicality with the location of the point: The closer it is to the origin,\nthe more prototypical it is. The latter property is simply emergent from\noptimizing the usual metric learning objective: The image similar to many\ntraining instances is best placed at the center of corresponding points in\nEuclidean space, but closer to the origin in hyperbolic space.\n We propose an unsupervised feature learning algorithm in Hyperbolic space\nwith sphere pACKing. HACK first generates uniformly packed particles in the\nPoincar\\'e ball of hyperbolic space and then assigns each image uniquely to\neach particle. Images after congealing are regarded more typical of the dataset\nit belongs to. With our feature mapper simply trained to spread out training\ninstances in hyperbolic space, we observe that images move closer to the origin\nwith congealing, validating our idea of unsupervised prototypicality discovery.\nWe demonstrate that our data-driven prototypicality provides an easy and\nsuperior unsupervised instance selection to reduce sample complexity, increase\nmodel generalization with atypical instances and robustness with typical ones.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yunhui Guo", "Youren Zhang", "Yubei Chen", "Stella X. Yu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4f0"}, "filepath": "data/2311.13601.png", "tags": [], "_media_type": "image", "_rand": 0.9993376377154343, "arXiv_link": "https://arxiv.org/abs/2311.13601", "other_link": "https://github.com/UX-Decoder/DINOv.", "title": "Visual In-Context Prompting", "abstract": "In-context prompting in large language models (LLMs) has become a prevalent\napproach to improve zero-shot capabilities, but this idea is less explored in\nthe vision domain. 
Existing visual prompting methods focus on referring\nsegmentation to segment the most relevant object, falling short of addressing\nmany generic vision tasks like open-set segmentation and detection. In this\npaper, we introduce a universal visual in-context prompting framework for both\ntasks. In particular, we build on top of an encoder-decoder architecture, and\ndevelop a versatile prompt encoder to support a variety of prompts like\nstrokes, boxes, and points. We further enhance it to take an arbitrary number\nof reference image segments as the context. Our extensive explorations show\nthat the proposed visual in-context prompting elicits extraordinary referring\nand generic segmentation capabilities to refer and detect, yielding competitive\nperformance to close-set in-domain datasets and showing promising results on\nmany open-set segmentation datasets. By joint training on COCO and SA-1B, our\nmodel achieves $57.7$ PQ on COCO and $23.2$ PQ on ADE20K. Code will be\navailable at https://github.com/UX-Decoder/DINOv.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Feng Li", "Qing Jiang", "Hao Zhang", "Shilong Liu", "Huaizhe Xu", "Xueyan Zou", "Tianhe Ren", "Hongyang Li", "Lei Zhang", "Chunyuan Li", "Jianwei Yang", "Jianfeng Gao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4f1"}, "filepath": "data/2405.16585.png", "tags": [], "_media_type": "image", "_rand": 0.9995100453318023, "arXiv_link": "https://arxiv.org/abs/2405.16585", "other_link": "", "title": "Fair Federated Learning under Domain Skew with Local Consistency and Domain Diversity", "abstract": "Federated learning (FL) has emerged as a new paradigm for privacy-preserving\ncollaborative training. Under domain skew, the current FL approaches are biased\nand face two fairness problems. 1) Parameter Update Conflict: data disparity\namong clients leads to varying parameter importance and inconsistent update\ndirections. These two disparities cause important parameters to potentially be\noverwhelmed by unimportant ones of dominant updates. It consequently results in\nsignificant performance decreases for lower-performing clients. 2) Model\nAggregation Bias: existing FL approaches introduce unfair weight allocation and\nneglect domain diversity. It leads to biased model convergence objective and\ndistinct performance among domains. We discover a pronounced directional update\nconsistency in Federated Learning and propose a novel framework to tackle above\nissues. First, leveraging the discovered characteristic, we selectively discard\nunimportant parameter updates to prevent updates from clients with lower\nperformance overwhelmed by unimportant parameters, resulting in fairer\ngeneralization performance. Second, we propose a fair aggregation objective to\nprevent global model bias towards some domains, ensuring that the global model\ncontinuously aligns with an unbiased model. 
The proposed method is generic and\ncan be combined with other existing FL methods to enhance fairness.\nComprehensive experiments on Digits and Office-Caltech demonstrate the high\nfairness and performance of our method.", "keywords": ["Vision applications for social good and ethics"], "authors_list": ["Yuhang Chen", "Wenke Huang", "Mang Ye"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4f2"}, "filepath": "data/2310.14729.png", "tags": [], "_media_type": "image", "_rand": 0.9999600241139497, "arXiv_link": "https://arxiv.org/abs/2310.14729", "other_link": "https://guytevet.github.io/mas-page/", "title": "MAS: Multi-view Ancestral Sampling for 3D motion generation using 2D diffusion", "abstract": "We introduce Multi-view Ancestral Sampling (MAS), a method for 3D motion\ngeneration, using 2D diffusion models that were trained on motions obtained\nfrom in-the-wild videos. As such, MAS opens opportunities to exciting and\ndiverse fields of motion previously under-explored as 3D data is scarce and\nhard to collect. MAS works by simultaneously denoising multiple 2D motion\nsequences representing different views of the same 3D motion. It ensures\nconsistency across all views at each diffusion step by combining the individual\ngenerations into a unified 3D sequence, and projecting it back to the original\nviews. We demonstrate MAS on 2D pose data acquired from videos depicting\nprofessional basketball maneuvers, rhythmic gymnastic performances featuring a\nball apparatus, and horse races. In each of these domains, 3D motion capture is\narduous, and yet, MAS generates diverse and realistic 3D sequences. Unlike the\nScore Distillation approach, which optimizes each sample by repeatedly applying\nsmall fixes, our method uses a sampling process that was constructed for the\ndiffusion framework. As we demonstrate, MAS avoids common issues such as\nout-of-domain sampling and mode-collapse. https://guytevet.github.io/mas-page/", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Roy Kapon", "Guy Tevet", "Daniel Cohen-Or", "Amit H. Bermano"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4f3"}, "filepath": "data/2312.07509.png", "tags": [], "_media_type": "image", "_rand": 0.999907069668529, "arXiv_link": "https://arxiv.org/abs/2312.07509", "other_link": "", "title": "PEEKABOO: Interactive Video Generation via Masked-Diffusion", "abstract": "Modern video generation models like Sora have achieved remarkable success in\nproducing high-quality videos. However, a significant limitation is their\ninability to offer interactive control to users, a feature that promises to\nopen up unprecedented applications and creativity. In this work, we introduce\nthe first solution to equip diffusion-based video generation models with\nspatio-temporal control. We present Peekaboo, a novel masked attention module,\nwhich seamlessly integrates with current video generation models offering\ncontrol without the need for additional training or inference overhead. To\nfacilitate future research, we also introduce a comprehensive benchmark for\ninteractive video generation. 
This benchmark offers a standardized framework\nfor the community to assess the efficacy of emerging interactive video\ngeneration models. Our extensive qualitative and quantitative assessments\nreveal that Peekaboo achieves up to a 3.8x improvement in mIoU over baseline\nmodels, all while maintaining the same latency. Code and benchmark are\navailable on the webpage.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Yash Jain", "Anshul Nasery", "Vibhav Vineet", "Harkirat Behl"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4f4"}, "filepath": "data/2403.15605.png", "tags": [], "_media_type": "image", "_rand": 0.9991841306559811, "arXiv_link": "https://arxiv.org/abs/2403.15605", "other_link": "", "title": "Efficiently Assemble Normalization Layers and Regularization for Federated Domain Generalization", "abstract": "Domain shift is a formidable issue in Machine Learning that causes a model to\nsuffer from performance degradation when tested on unseen domains. Federated\nDomain Generalization (FedDG) attempts to train a global model using\ncollaborative clients in a privacy-preserving manner that can generalize well\nto unseen clients possibly with domain shift. However, most existing FedDG\nmethods either cause additional privacy risks of data leakage or induce\nsignificant costs in client communication and computation, which are major\nconcerns in the Federated Learning paradigm. To circumvent these challenges,\nhere we introduce a novel architectural method for FedDG, namely gPerXAN, which\nrelies on a normalization scheme working with a guiding regularizer. In\nparticular, we carefully design Personalized eXplicitly Assembled Normalization\nto enforce client models selectively filtering domain-specific features that\nare biased towards local data while retaining discrimination of those features.\nThen, we incorporate a simple yet effective regularizer to guide these models\nin directly capturing domain-invariant representations that the global model's\nclassifier can leverage. Extensive experimental results on two benchmark\ndatasets, i.e., PACS and Office-Home, and a real-world medical dataset,\nCamelyon17, indicate that our proposed method outperforms other existing\nmethods in addressing this particular problem.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Khiem Le", "Tuan Long Ho", "Cuong Do", "Danh Le-Phuoc", "KOK SENG WONG"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4f5"}, "filepath": "data/2403.09107.png", "tags": [], "_media_type": "image", "_rand": 0.9991706527913267, "arXiv_link": "https://arxiv.org/abs/2403.09107", "other_link": "https://github.com/longzhen520/S2MVTC.", "title": "S$^2$MVTC: a Simple yet Efficient Scalable Multi-View Tensor Clustering", "abstract": "Anchor-based large-scale multi-view clustering has attracted considerable\nattention for its effectiveness in handling massive datasets. 
However, current\nmethods mainly seek the consensus embedding feature for clustering by exploring\nglobal correlations between anchor graphs or projection matrices. In this paper,\nwe propose a simple yet efficient scalable multi-view tensor clustering\n(S^2MVTC) approach, where our focus is on learning correlations of embedding\nfeatures within and across views. Specifically, we first construct the\nembedding feature tensor by stacking the embedding features of different views\ninto a tensor and rotating it. Additionally, we build a novel tensor\nlow-frequency approximation (TLFA) operator, which incorporates graph\nsimilarity into embedding feature learning, efficiently achieving smooth\nrepresentation of embedding features within different views. Furthermore,\nconsensus constraints are applied to embedding features to ensure inter-view\nsemantic consistency. Experimental results on six large-scale multi-view\ndatasets demonstrate that S^2MVTC significantly outperforms state-of-the-art\nalgorithms in terms of clustering performance and CPU execution time,\nespecially when handling massive data. The code of S^2MVTC is publicly\navailable at https://github.com/longzhen520/S2MVTC.", "keywords": [], "authors_list": ["Zhen Long", "Qiyuan Wang", "Yazhou Ren", "Yipeng Liu", "Ce Zhu"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4f6"}, "filepath": "data/2403.05854.png", "tags": [], "_media_type": "image", "_rand": 0.9998529795306033, "arXiv_link": "https://arxiv.org/abs/2403.05854", "other_link": "", "title": "LTGC: Long-tail Recognition via Leveraging LLMs-driven Generated Content", "abstract": "Long-tail recognition is challenging because it requires the model to learn\ngood representations from tail categories and address imbalances across all\ncategories. In this paper, we propose a novel generative and fine-tuning\nframework, LTGC, to handle long-tail recognition via leveraging generated\ncontent. Firstly, inspired by the rich implicit knowledge in large-scale models\n(e.g., large language models, LLMs), LTGC leverages the power of these models\nto parse and reason over the original tail data to produce diverse tail-class\ncontent. We then propose several novel designs for LTGC to ensure the quality\nof the generated data and to efficiently fine-tune the model using both the\ngenerated and original data. The visualization demonstrates the effectiveness\nof the generation module in LTGC, which produces accurate and diverse tail\ndata. 
Additionally, the experimental results demonstrate that our LTGC\noutperforms existing state-of-the-art methods on popular long-tailed\nbenchmarks.", "keywords": [], "authors_list": ["Qihao Zhao", "Yalun Dai", "Hao Li", "Wei Hu", "Fan Zhang", "Jun Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4f7"}, "filepath": "data/2312.06886.png", "tags": [], "_media_type": "image", "_rand": 0.9995210793753411, "arXiv_link": "https://arxiv.org/abs/2312.06886", "other_link": "", "title": "Relightful Harmonization: Lighting-aware Portrait Background Replacement", "abstract": "Portrait harmonization aims to composite a subject into a new background,\nadjusting its lighting and color to ensure harmony with the background scene.\nExisting harmonization techniques often only focus on adjusting the global\ncolor and brightness of the foreground and ignore crucial illumination cues\nfrom the background such as apparent lighting direction, leading to unrealistic\ncompositions. We introduce Relightful Harmonization, a lighting-aware diffusion\nmodel designed to seamlessly harmonize sophisticated lighting effect for the\nforeground portrait using any background image. Our approach unfolds in three\nstages. First, we introduce a lighting representation module that allows our\ndiffusion model to encode lighting information from target image background.\nSecond, we introduce an alignment network that aligns lighting features learned\nfrom image background with lighting features learned from panorama environment\nmaps, which is a complete representation for scene illumination. Last, to\nfurther boost the photorealism of the proposed method, we introduce a novel\ndata simulation pipeline that generates synthetic training pairs from a diverse\nrange of natural images, which are used to refine the model. Our method\noutperforms existing benchmarks in visual fidelity and lighting coherence,\nshowing superior generalization in real-world testing scenarios, highlighting\nits versatility and practicality.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Mengwei Ren", "Wei Xiong", "Jae Shin Yoon", "Zhixin Shu", "Jianming Zhang", "HyunJoon Jung", "Guido Gerig", "He Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4f8"}, "filepath": "data/2310.10413.png", "tags": [], "_media_type": "image", "_rand": 0.9995773930925002, "arXiv_link": "https://arxiv.org/abs/2310.10413", "other_link": "https://github.com/hellloxiaotian/DSRNet.", "title": "Image Processing GNN: Breaking Rigidity in Super-Resolution", "abstract": "Convolutional neural networks (CNNs) depend on deep network architectures to\nextract accurate information for image super-resolution. However, obtained\ninformation of these CNNs cannot completely express predicted high-quality\nimages for complex scenes. In this paper, we present a dynamic network for\nimage super-resolution (DSRNet), which contains a residual enhancement block,\nwide enhancement block, feature refinement block and construction block. The\nresidual enhancement block is composed of a residual enhanced architecture to\nfacilitate hierarchical features for image super-resolution. 
To enhance\nrobustness of obtained super-resolution model for complex scenes, a wide\nenhancement block achieves a dynamic architecture to learn more robust\ninformation to enhance applicability of an obtained super-resolution model for\nvarying scenes. To prevent interference of components in a wide enhancement\nblock, a refinement block utilizes a stacked architecture to accurately learn\nobtained features. Also, a residual learning operation is embedded in the\nrefinement block to prevent long-term dependency problem. Finally, a\nconstruction block is responsible for reconstructing high-quality images.\nDesigned heterogeneous architecture can not only facilitate richer structural\ninformation, but also be lightweight, which is suitable for mobile digital\ndevices. Experimental results show that our method is more competitive in\nterms of performance and recovering time of image super-resolution and\ncomplexity. The code of DSRNet can be obtained at\nhttps://github.com/hellloxiaotian/DSRNet.", "keywords": ["Low-level vision", "Efficient and scalable vision"], "authors_list": ["Yuchuan Tian", "Hanting Chen", "Chao Xu", "Yunhe Wang"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4f9"}, "filepath": "data/2312.11461.png", "tags": [], "_media_type": "image", "_rand": 0.9993818115591505, "arXiv_link": "https://arxiv.org/abs/2312.11461", "other_link": "", "title": "GAvatar: Animatable 3D Gaussian Avatars with Implicit Mesh Learning", "abstract": "Gaussian splatting has emerged as a powerful 3D representation that harnesses\nthe advantages of both explicit (mesh) and implicit (NeRF) 3D representations.\nIn this paper, we seek to leverage Gaussian splatting to generate realistic\nanimatable avatars from textual descriptions, addressing the limitations (e.g.,\nflexibility and efficiency) imposed by mesh or NeRF-based representations.\nHowever, a naive application of Gaussian splatting cannot generate high-quality\nanimatable avatars and suffers from learning instability; it also cannot\ncapture fine avatar geometries and often leads to degenerate body parts. To\ntackle these problems, we first propose a primitive-based 3D Gaussian\nrepresentation where Gaussians are defined inside pose-driven primitives to\nfacilitate animation. Second, to stabilize and amortize the learning of\nmillions of Gaussians, we propose to use neural implicit fields to predict the\nGaussian attributes (e.g., colors). Finally, to capture fine avatar geometries\nand extract detailed meshes, we propose a novel SDF-based implicit mesh\nlearning approach for 3D Gaussians that regularizes the underlying geometries\nand extracts highly detailed textured meshes. Our proposed method, GAvatar,\nenables the large-scale generation of diverse animatable avatars using only\ntext prompts. 
GAvatar significantly surpasses existing methods in terms of both\nappearance and geometry quality, and achieves extremely fast rendering (100\nfps) at 1K resolution.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Ye Yuan", "Xueting Li", "Yangyi Huang", "Shalini De Mello", "Koki Nagano", "Jan Kautz", "Umar Iqbal"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4fa"}, "filepath": "data/2312.04117.png", "tags": [], "_media_type": "image", "_rand": 0.9997943268435957, "arXiv_link": "https://arxiv.org/abs/2312.04117", "other_link": "", "title": "Instance Tracking in 3D Scenes from Egocentric Videos", "abstract": "Egocentric sensors such as AR/VR devices capture human-object interactions\nand offer the potential to provide task-assistance by recalling 3D locations of\nobjects of interest in the surrounding environment. This capability requires\ninstance tracking in real-world 3D scenes from egocentric videos (IT3DEgo). We\nexplore this problem by first introducing a new benchmark dataset, consisting\nof RGB and depth videos, per-frame camera pose, and instance-level annotations\nin both 2D camera and 3D world coordinates. We present an evaluation protocol\nwhich evaluates tracking performance in 3D coordinates with two settings for\nenrolling instances to track: (1) single-view online enrollment where an\ninstance is specified on-the-fly based on the human wearer's interactions. and\n(2) multi-view pre-enrollment where images of an instance to be tracked are\nstored in memory ahead of time. To address IT3DEgo, we first re-purpose methods\nfrom relevant areas, e.g., single object tracking (SOT) -- running SOT methods\nto track instances in 2D frames and lifting them to 3D using camera pose and\ndepth. We also present a simple method that leverages pretrained segmentation\nand detection models to generate proposals from RGB frames and match proposals\nwith enrolled instance images. Perhaps surprisingly, our extensive experiments\nshow that our method (with no finetuning) significantly outperforms SOT-based\napproaches. We conclude by arguing that the problem of egocentric instance\ntracking is made easier by leveraging camera pose and using a 3D allocentric\n(world) coordinate representation.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Yunhan Zhao", "Haoyu Ma", "Shu Kong", "Charless Fowlkes"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4fb"}, "filepath": "data/2311.16081.png", "tags": [], "_media_type": "image", "_rand": 0.9990938213942612, "arXiv_link": "https://arxiv.org/abs/2311.16081", "other_link": "https://github.com/TencentARC/ViT-Lens.", "title": "ViT-Lens: Towards Omni-modal Representations", "abstract": "Aiming to advance AI agents, large foundation models significantly improve\nreasoning and instruction execution, yet the current focus on vision and\nlanguage neglects the potential of perceiving diverse modalities in open-world\nenvironments. However, the success of data-driven vision and language models is\ncostly or even infeasible to be reproduced for rare modalities. 
In this paper,\nwe present ViT-Lens-2 that facilitates efficient omni-modal representation\nlearning by perceiving novel modalities with a pretrained ViT and aligning them\nto a pre-defined space. Specifically, the modality-specific lens is tuned to\nproject any-modal signals to an intermediate embedding space, which are then\nprocessed by a strong ViT with pre-trained visual knowledge. The encoded\nrepresentations are optimized toward aligning with the modal-independent space,\npre-defined by off-the-shelf foundation models. ViT-Lens-2 provides a unified\nsolution for representation learning of increasing modalities with two\nappealing advantages: (i) Unlocking the great potential of pretrained ViTs to\nnovel modalities effectively with efficient data regime; (ii) Enabling emergent\ndownstream capabilities through modality alignment and shared ViT parameters.\nWe tailor ViT-Lens-2 to learn representations for 3D point cloud, depth, audio,\ntactile and EEG, and set new state-of-the-art results across various\nunderstanding tasks, such as zero-shot classification. By seamlessly\nintegrating ViT-Lens-2 into Multimodal Foundation Models, we enable\nAny-modality to Text and Image Generation in a zero-shot manner. Code and\nmodels are available at https://github.com/TencentARC/ViT-Lens.", "keywords": ["Large multimodal models and prompting techniques", "Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Stan Weixian Lei", "Yixiao Ge", "Kun Yi", "Jianfeng Zhang", "Difei Gao", "Dylan Sun", "Yuying Ge", "Ying Shan", "Mike Zheng Shou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4fc"}, "filepath": "data/2404.00973.png", "tags": [], "_media_type": "image", "_rand": 0.9990603800805474, "arXiv_link": "https://arxiv.org/abs/2404.00973", "other_link": "", "title": "VideoDistill: Language-aware Vision Distillation for Video Question Answering", "abstract": "Significant advancements in video question answering (VideoQA) have been made\nthanks to thriving large image-language pretraining frameworks. Although these\nimage-language models can efficiently represent both video and language\nbranches, they typically employ a goal-free vision perception process and do\nnot interact vision with language well during the answer generation, thus\nomitting crucial visual cues. In this paper, we are inspired by the human\nrecognition and learning pattern and propose VideoDistill, a framework with\nlanguage-aware (i.e., goal-driven) behavior in both vision perception and\nanswer generation process. VideoDistill generates answers only from\nquestion-related visual embeddings and follows a thinking-observing-answering\napproach that closely resembles human behavior, distinguishing it from previous\nresearch. Specifically, we develop a language-aware gating mechanism to replace\nthe standard cross-attention, avoiding language's direct fusion into visual\nrepresentations. We incorporate this mechanism into two key components of the\nentire framework. The first component is a differentiable sparse sampling\nmodule, which selects frames containing the necessary dynamics and semantics\nrelevant to the questions. 
The second component is a vision refinement module\nthat merges existing spatial-temporal attention layers to ensure the extraction\nof multi-grained visual semantics associated with the questions. We conduct\nexperimental evaluations on various challenging video question-answering\nbenchmarks, and VideoDistill achieves state-of-the-art performance in both\ngeneral and long-form VideoQA datasets. In addition, we verify that\nVideoDistill can effectively alleviate the utilization of language shortcut\nsolutions in the EgoTaskQA dataset.", "keywords": ["Multimodal models and vision-language models", "Scene analysis and understanding"], "authors_list": ["Bo Zou", "Chao Yang", "Yu Qiao", "Chengbin Quan", "Youjian Zhao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4fd"}, "filepath": "data/2311.17112.png", "tags": [], "_media_type": "image", "_rand": 0.9992300423681885, "arXiv_link": "https://arxiv.org/abs/2311.17112", "other_link": "", "title": "Parameter Efficient Fine-tuning via Cross Block Orchestration for Segment Anything Model", "abstract": "Parameter-efficient fine-tuning (PEFT) is an effective methodology to unleash\nthe potential of large foundation models in novel scenarios with limited\ntraining data. In the computer vision community, PEFT has shown effectiveness\nin image classification, but little research has studied its ability for image\nsegmentation. Fine-tuning segmentation models usually require a heavier\nadjustment of parameters to align the proper projection directions in the\nparameter space for new scenarios. This raises a challenge to existing PEFT\nalgorithms, as they often inject a limited number of individual parameters into\neach block, which prevents substantial adjustment of the projection direction\nof the parameter space due to the limitation of Hidden Markov Chain along\nblocks. In this paper, we equip PEFT with a cross-block orchestration mechanism\nto enable the adaptation of the Segment Anything Model (SAM) to various\ndownstream scenarios. We introduce a novel inter-block communication module,\nwhich integrates a learnable relation matrix to facilitate communication among\ndifferent coefficient sets of each PEFT block's parameter space. Moreover, we\npropose an intra-block enhancement module, which introduces a linear projection\nhead whose weights are generated from a hyper-complex layer, further enhancing\nthe impact of the adjustment of projection directions on the entire parameter\nspace. 
Extensive experiments on diverse benchmarks demonstrate that our\nproposed approach consistently improves the segmentation performance\nsignificantly on novel scenarios with only around 1K additional parameters.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Zelin Peng", "Zhengqin Xu", "Zhilin Zeng", "Lingxi Xie", "Qi Tian", "Wei Shen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4fe"}, "filepath": "data/2312.04552.png", "tags": [], "_media_type": "image", "_rand": 0.9995186730681838, "arXiv_link": "https://arxiv.org/abs/2312.04552", "other_link": "", "title": "Generating Illustrated Instructions", "abstract": "We introduce the new task of generating Illustrated Instructions, i.e.,\nvisual instructions customized to a user's needs. We identify desiderata unique\nto this task, and formalize it through a suite of automatic and human\nevaluation metrics, designed to measure the validity, consistency, and efficacy\nof the generations. We combine the power of large language models (LLMs)\ntogether with strong text-to-image generation diffusion models to propose a\nsimple approach called StackedDiffusion, which generates such illustrated\ninstructions given text as input. The resulting model strongly outperforms\nbaseline approaches and state-of-the-art multimodal LLMs; and in 30% of cases,\nusers even prefer it to human-generated articles. Most notably, it enables\nvarious new and exciting applications far beyond what static articles on the\nweb can provide, such as personalized instructions complete with intermediate\nsteps and pictures in response to a user's individual situation.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Sachit Menon", "Ishan Misra", "Rohit Girdhar"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f4ff"}, "filepath": "data/2312.11894.png", "tags": [], "_media_type": "image", "_rand": 0.9993804551342296, "arXiv_link": "https://arxiv.org/abs/2312.11894", "other_link": "", "title": "3D-LFM: Lifting Foundation Model", "abstract": "The lifting of 3D structure and camera from 2D landmarks is at the\ncornerstone of the entire discipline of computer vision. Traditional methods\nhave been confined to specific rigid objects, such as those in\nPerspective-n-Point (PnP) problems, but deep learning has expanded our\ncapability to reconstruct a wide range of object classes (e.g. C3DPO and PAUL)\nwith resilience to noise, occlusions, and perspective distortions. All these\ntechniques, however, have been limited by the fundamental need to establish\ncorrespondences across the 3D training data -- significantly limiting their\nutility to applications where one has an abundance of \"in-correspondence\" 3D\ndata. Our approach harnesses the inherent permutation equivariance of\ntransformers to manage varying number of points per 3D data instance,\nwithstands occlusions, and generalizes to unseen categories. We demonstrate\nstate of the art performance across 2D-3D lifting task benchmarks. 
Since our\napproach can be trained across such a broad class of structures we refer to it\nsimply as a 3D Lifting Foundation Model (3D-LFM) -- the first of its kind.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Mosam Dabhi", "L\u00e1szl\u00f3 A. Jeni", "Simon Lucey"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f500"}, "filepath": "data/2404.00913v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995756821578224, "arXiv_link": "https://arxiv.org/abs/2404.00913v1", "other_link": "", "title": "LLaMA-Excitor: General Instruction Tuning via Indirect Feature Interaction", "abstract": "Existing methods to fine-tune LLMs, like Adapter, Prefix-tuning, and LoRA,\nwhich introduce extra modules or additional input sequences to inject new\nskills or knowledge, may compromise the innate abilities of LLMs. In this\npaper, we propose LLaMA-Excitor, a lightweight method that stimulates the LLMs'\npotential to better follow instructions by gradually paying more attention to\nworthwhile information. Specifically, the LLaMA-Excitor does not directly\nchange the intermediate hidden state during the self-attention calculation of\nthe transformer structure. We designed the Excitor block as a bypass module for\nthe similarity score computation in LLMs' self-attention to reconstruct keys\nand change the importance of values by learnable prompts. LLaMA-Excitor ensures\na self-adaptive allocation of additional attention to input instructions, thus\neffectively preserving LLMs' pre-trained knowledge when fine-tuning LLMs on\nlow-quality instruction-following datasets. Furthermore, we unify the modeling\nof multi-modal tuning and language-only tuning, extending LLaMA-Excitor to a\npowerful visual instruction follower without the need for complex multi-modal\nalignment. Our proposed approach is evaluated in language-only and multi-modal\ntuning experimental scenarios. Notably, LLaMA-Excitor is the only method that\nmaintains basic capabilities while achieving a significant improvement (+6%) on\nthe MMLU benchmark. In the visual instruction tuning, we achieve a new\nstate-of-the-art image captioning performance of 157.5 CIDEr on MSCOCO, and a\ncomparable performance (88.39%) on ScienceQA to cutting-edge models with more\nparameters and extensive vision-language pretraining.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Bo Zou", "Chao Yang", "Yu Qiao", "Chengbin Quan", "Youjian Zhao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f501"}, "filepath": "data/2311.09571.png", "tags": [], "_media_type": "image", "_rand": 0.9995328018411853, "arXiv_link": "https://arxiv.org/abs/2311.09571", "other_link": "https://threedle.github.io/3d-paintbrush", "title": "3D Paintbrush: Local Stylization of 3D Shapes with Cascaded Score Distillation", "abstract": "In this work we develop 3D Paintbrush, a technique for automatically\ntexturing local semantic regions on meshes via text descriptions. 
Our method is\ndesigned to operate directly on meshes, producing texture maps which seamlessly\nintegrate into standard graphics pipelines. We opt to simultaneously produce a\nlocalization map (to specify the edit region) and a texture map which conforms\nto it. This synergistic approach improves the quality of both the localization\nand the stylization. To enhance the details and resolution of the textured\narea, we leverage multiple stages of a cascaded diffusion model to supervise\nour local editing technique with generative priors learned from images at\ndifferent resolutions. Our technique, referred to as Cascaded Score\nDistillation (CSD), simultaneously distills scores at multiple resolutions in a\ncascaded fashion, enabling control over both the granularity and global\nunderstanding of the supervision. We demonstrate the effectiveness of 3D\nPaintbrush to locally texture a variety of shapes within different semantic\nregions. Project page: https://threedle.github.io/3d-paintbrush", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Dale Decatur", "Itai Lang", "Kfir Aberman", "Rana Hanocka"], "category_name": "Graphics", "all_categories": ["Graphics", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f502"}, "filepath": "data/2404.10124.png", "tags": [], "_media_type": "image", "_rand": 0.9991022900907334, "arXiv_link": "https://arxiv.org/abs/2404.10124", "other_link": "", "title": "Epistemic Uncertainty Quantification For Pre-trained Neural Networks", "abstract": "Epistemic uncertainty quantification (UQ) identifies where models lack\nknowledge. Traditional UQ methods, often based on Bayesian neural networks, are\nnot suitable for pre-trained non-Bayesian models. Our study addresses\nquantifying epistemic uncertainty for any pre-trained model, which does not\nneed the original training data or model modifications and can ensure broad\napplicability regardless of network architectures or training techniques.\nSpecifically, we propose a gradient-based approach to assess epistemic\nuncertainty, analyzing the gradients of outputs relative to model parameters,\nand thereby indicating necessary model adjustments to accurately represent the\ninputs. We first explore theoretical guarantees of gradient-based methods for\nepistemic UQ, questioning the view that this uncertainty is only calculable\nthrough differences between multiple models. We further improve gradient-driven\nUQ by using class-specific weights for integrating gradients and emphasizing\ndistinct contributions from neural network layers. Additionally, we enhance UQ\naccuracy by combining gradient and perturbation methods to refine the\ngradients. 
We evaluate our approach on out-of-distribution detection,\nuncertainty calibration, and active learning, demonstrating its superiority\nover current state-of-the-art UQ methods for pre-trained models.", "keywords": [], "authors_list": ["Hanjing Wang", "Qiang Ji"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f503"}, "filepath": "data/2403.14552.png", "tags": [], "_media_type": "image", "_rand": 0.9993028692258081, "arXiv_link": "https://arxiv.org/abs/2403.14552", "other_link": "", "title": "Token Transformation Matters: Towards Faithful Post-hoc Explanation for Vision Transformer", "abstract": "While Transformers have rapidly gained popularity in various computer vision\napplications, post-hoc explanations of their internal mechanisms remain largely\nunexplored. Vision Transformers extract visual information by representing\nimage regions as transformed tokens and integrating them via attention weights.\nHowever, existing post-hoc explanation methods merely consider these attention\nweights, neglecting crucial information from the transformed tokens, which\nfails to accurately illustrate the rationales behind the models' predictions.\nTo incorporate the influence of token transformation into interpretation, we\npropose TokenTM, a novel post-hoc explanation method that utilizes our\nintroduced measurement of token transformation effects. Specifically, we\nquantify token transformation effects by measuring changes in token lengths and\ncorrelations in their directions pre- and post-transformation. Moreover, we\ndevelop initialization and aggregation rules to integrate both attention\nweights and token transformation effects across all layers, capturing holistic\ntoken contributions throughout the model. Experimental results on segmentation\nand perturbation tests demonstrate the superiority of our proposed TokenTM\ncompared to state-of-the-art Vision Transformer explanation methods.", "keywords": [], "authors_list": ["Junyi Wu", "Bin Duan", "Weitai Kang", "Hao Tang", "Yan Yan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f504"}, "filepath": "data/2312.08338.png", "tags": [], "_media_type": "image", "_rand": 0.9990568176760496, "arXiv_link": "https://arxiv.org/abs/2312.08338", "other_link": "", "title": "Global Latent Neural Rendering", "abstract": "A recent trend among generalizable novel view synthesis methods is to learn a\nrendering operator acting over single camera rays. This approach is promising\nbecause it removes the need for explicit volumetric rendering, but it\neffectively treats target images as collections of independent pixels. Here, we\npropose to learn a global rendering operator acting over all camera rays\njointly. We show that the right representation to enable such rendering is a\n5-dimensional plane sweep volume consisting of the projection of the input\nimages on a set of planes facing the target camera. Based on this\nunderstanding, we introduce our Convolutional Global Latent Renderer (ConvGLR),\nan efficient convolutional architecture that performs the rendering operation\nglobally in a low-resolution latent space. 
Experiments on various datasets\nunder sparse and generalizable setups show that our approach consistently\noutperforms existing methods by significant margins.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Thomas Tanay", "Matteo Maggioni"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f505"}, "filepath": "data/2311.15475.png", "tags": [], "_media_type": "image", "_rand": 0.9996096660131631, "arXiv_link": "https://arxiv.org/abs/2311.15475", "other_link": "", "title": "MeshGPT: Generating Triangle Meshes with Decoder-Only Transformers", "abstract": "We introduce MeshGPT, a new approach for generating triangle meshes that\nreflects the compactness typical of artist-created meshes, in contrast to dense\ntriangle meshes extracted by iso-surfacing methods from neural fields. Inspired\nby recent advances in powerful large language models, we adopt a sequence-based\napproach to autoregressively generate triangle meshes as sequences of\ntriangles. We first learn a vocabulary of latent quantized embeddings, using\ngraph convolutions, which inform these embeddings of the local mesh geometry\nand topology. These embeddings are sequenced and decoded into triangles by a\ndecoder, ensuring that they can effectively reconstruct the mesh. A transformer\nis then trained on this learned vocabulary to predict the index of the next\nembedding given previous embeddings. Once trained, our model can be\nautoregressively sampled to generate new triangle meshes, directly generating\ncompact meshes with sharp edges, more closely imitating the efficient\ntriangulation patterns of human-crafted meshes. MeshGPT demonstrates a notable\nimprovement over state of the art mesh generation methods, with a 9% increase\nin shape coverage and a 30-point enhancement in FID scores across various\ncategories.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yawar Siddiqui", "Antonio Alliegro", "Alexey Artemov", "Tatiana Tommasi", "Daniele Sirigatti", "Vladislav Rosov", "Angela Dai", "Matthias Nie\u00dfner"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f506"}, "filepath": "data/2312.13746v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997251916408072, "arXiv_link": "https://arxiv.org/abs/2312.13746v1", "other_link": "", "title": "Video Recognition in Portrait Mode", "abstract": "The creation of new datasets often presents new challenges for video\nrecognition and can inspire novel ideas while addressing these challenges.\nWhile existing datasets mainly comprise landscape mode videos, our paper seeks\nto introduce portrait mode videos to the research community and highlight the\nunique challenges associated with this video format. With the growing\npopularity of smartphones and social media applications, recognizing portrait\nmode videos is becoming increasingly important. To this end, we have developed\nthe first dataset dedicated to portrait mode video recognition, namely\nPortraitMode-400. 
The taxonomy of PortraitMode-400 was constructed in a\ndata-driven manner, comprising 400 fine-grained categories, and rigorous\nquality assurance was implemented to ensure the accuracy of human annotations.\nIn addition to the new dataset, we conducted a comprehensive analysis of the\nimpact of video format (portrait mode versus landscape mode) on recognition\naccuracy and spatial bias due to the different formats. Furthermore, we\ndesigned extensive experiments to explore key aspects of portrait mode video\nrecognition, including the choice of data augmentation, evaluation procedure,\nthe importance of temporal information, and the role of audio modality.\nBuilding on the insights from our experimental results and the introduction of\nPortraitMode-400, our paper aims to inspire further research efforts in this\nemerging research area.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Mingfei Han", "Linjie Yang", "Xiaojie Jin", "Jiashi Feng", "Xiaojun Chang", "Heng Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f507"}, "filepath": "data/2312.04563.png", "tags": [], "_media_type": "image", "_rand": 0.9999959523664829, "arXiv_link": "https://arxiv.org/abs/2312.04563", "other_link": "", "title": "VGGSfM: Visual Geometry Grounded Deep Structure From Motion", "abstract": "Structure-from-motion (SfM) is a long-standing problem in the computer vision\ncommunity, which aims to reconstruct the camera poses and 3D structure of a\nscene from a set of unconstrained 2D images. Classical frameworks solve this\nproblem in an incremental manner by detecting and matching keypoints,\nregistering images, triangulating 3D points, and conducting bundle adjustment.\nRecent research efforts have predominantly revolved around harnessing the power\nof deep learning techniques to enhance specific elements (e.g., keypoint\nmatching), but are still based on the original, non-differentiable pipeline.\nInstead, we propose a new deep pipeline VGGSfM, where each component is fully\ndifferentiable and thus can be trained in an end-to-end manner. To this end, we\nintroduce new mechanisms and simplifications. First, we build on recent\nadvances in deep 2D point tracking to extract reliable pixel-accurate tracks,\nwhich eliminates the need for chaining pairwise matches. Furthermore, we\nrecover all cameras simultaneously based on the image and track features\ninstead of gradually registering cameras. Finally, we optimise the cameras and\ntriangulate 3D points via a differentiable bundle adjustment layer. 
We attain\nstate-of-the-art performance on three popular datasets, CO3D, IMC Phototourism,\nand ETH3D.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Jianyuan Wang", "Nikita Karaev", "Christian Rupprecht", "David Novotny"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f508"}, "filepath": "data/2312.12274.png", "tags": [], "_media_type": "image", "_rand": 0.9990440210168426, "arXiv_link": "https://arxiv.org/abs/2312.12274", "other_link": "", "title": "Intrinsic Image Diffusion for Indoor Single-view Material Estimation", "abstract": "We present Intrinsic Image Diffusion, a generative model for appearance\ndecomposition of indoor scenes. Given a single input view, we sample multiple\npossible material explanations represented as albedo, roughness, and metallic\nmaps. Appearance decomposition poses a considerable challenge in computer\nvision due to the inherent ambiguity between lighting and material properties\nand the lack of real datasets. To address this issue, we advocate for a\nprobabilistic formulation, where instead of attempting to directly predict the\ntrue material properties, we employ a conditional generative model to sample\nfrom the solution space. Furthermore, we show that utilizing the strong learned\nprior of recent diffusion models trained on large-scale real-world images can\nbe adapted to material estimation and highly improves the generalization to\nreal images. Our method produces significantly sharper, more consistent, and\nmore detailed materials, outperforming state-of-the-art methods by $1.5dB$ on\nPSNR and by $45\\%$ better FID score on albedo prediction. We demonstrate the\neffectiveness of our approach through experiments on both synthetic and\nreal-world datasets.", "keywords": ["Low-level vision"], "authors_list": ["Peter Kocsis", "Vincent Sitzmann", "Matthias Nie\u00dfner"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Graphics", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f509"}, "filepath": "data/2404.00842v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991665634009521, "arXiv_link": "https://arxiv.org/abs/2404.00842v1", "other_link": "", "title": "An N-Point Linear Solver for Line and Motion Estimation with Event Cameras", "abstract": "Event cameras respond primarily to edges--formed by strong gradients--and are\nthus particularly well-suited for line-based motion estimation. Recent work has\nshown that events generated by a single line each satisfy a polynomial\nconstraint which describes a manifold in the space-time volume. Multiple such\nconstraints can be solved simultaneously to recover the partial linear velocity\nand line parameters. In this work, we show that, with a suitable line\nparametrization, this system of constraints is actually linear in the unknowns,\nwhich allows us to design a novel linear solver. Unlike existing solvers, our\nlinear solver (i) is fast and numerically stable since it does not rely on\nexpensive root finding, (ii) can solve both minimal and overdetermined systems\nwith more than 5 events, and (iii) admits the characterization of all\ndegenerate cases and multiple solutions. 
The found line parameters are\nsingularity-free and have a fixed scale, which eliminates the need for\nauxiliary constraints typically encountered in previous work. To recover the\nfull linear camera velocity we fuse observations from multiple lines with a\nnovel velocity averaging scheme that relies on a geometrically-motivated\nresidual, and thus solves the problem more efficiently than previous schemes\nwhich minimize an algebraic residual. Extensive experiments in synthetic and\nreal-world settings demonstrate that our method surpasses the previous work in\nnumerical stability, and operates over 600 times faster.", "keywords": ["Low-level vision"], "authors_list": ["Ling Gao", "Daniel Gehrig", "Hang Su", "Davide Scaramuzza", "Laurent Kneip"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f50a"}, "filepath": "data/2403.01231.png", "tags": [], "_media_type": "image", "_rand": 0.9994952498279354, "arXiv_link": "https://arxiv.org/abs/2403.01231", "other_link": "https://github.com/PRIS-CV/Pascal-EA.", "title": "Benchmarking Segmentation Models with Mask-Preserved Attribute Editing", "abstract": "When deploying segmentation models in practice, it is critical to evaluate\ntheir behaviors in varied and complex scenes. Different from the previous\nevaluation paradigms only in consideration of global attribute variations (e.g.\nadverse weather), we investigate both local and global attribute variations for\nrobustness evaluation. To achieve this, we construct a mask-preserved attribute\nediting pipeline to edit visual attributes of real images with precise control\nof structural information. Therefore, the original segmentation labels can be\nreused for the edited images. Using our pipeline, we construct a benchmark\ncovering both object and image attributes (e.g. color, material, pattern,\nstyle). We evaluate a broad variety of semantic segmentation models, spanning\nfrom conventional close-set models to recent open-vocabulary large models on\ntheir robustness to different types of variations. We find that both local and\nglobal attribute variations affect segmentation performances, and the\nsensitivity of models diverges across different variation types. We argue that\nlocal attributes have the same importance as global attributes, and should be\nconsidered in the robustness evaluation of segmentation models. Code:\nhttps://github.com/PRIS-CV/Pascal-EA.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Zijin Yin", "Kongming Liang", "Bing Li", "Zhanyu Ma", "Jun Guo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f50b"}, "filepath": "data/2310.19654.png", "tags": [], "_media_type": "image", "_rand": 0.9996619724705054, "arXiv_link": "https://arxiv.org/abs/2310.19654", "other_link": "", "title": "How to Make Cross Encoder a Good Teacher for Efficient Image-Text Retrieval?", "abstract": "Due to the success of large-scale visual-language pretraining (VLP) models\nand the widespread use of image-text retrieval in industry areas, it is now\ncritically necessary to reduce the model size and streamline their\nmobile-device deployment. 
Single- and dual-stream model structures are commonly\nused in image-text retrieval with the goal of closing the semantic gap between\ntextual and visual modalities. While single-stream models use deep feature\nfusion to achieve more accurate cross-model alignment, dual-stream models are\nbetter at offline indexing and fast inference. We propose a Multi-teacher\nCross-modality Alignment Distillation (MCAD) technique to integrate the\nadvantages of single- and dual-stream models. By incorporating the fused\nsingle-stream features into the image and text features of the dual-stream\nmodel, we formulate new modified teacher similarity distributions and features.\nThen, we conduct both distribution and feature distillation to boost the\ncapability of the student dual-stream model, achieving high retrieval\nperformance without increasing inference complexity. Extensive experiments\ndemonstrate the remarkable performance and high efficiency of MCAD on\nimage-text retrieval tasks. Furthermore, we implement a lightweight CLIP model\non Snapdragon/Dimensity chips with only $\\sim$100M running memory and\n$\\sim$8.0ms search latency, achieving the mobile-device application of VLP\nmodels.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Yuxin Chen", "Zongyang Ma", "Ziqi Zhang", "Zhongang Qi", "Chunfeng Yuan", "Bing Li", "Junfu Pu", "Ying Shan", "Xiaojuan Qi", "Weiming Hu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f50c"}, "filepath": "data/2404.04890.png", "tags": [], "_media_type": "image", "_rand": 0.9997176704015934, "arXiv_link": "https://arxiv.org/abs/2404.04890", "other_link": "", "title": "A Unified Diffusion Framework for Scene-aware Human Motion Estimation from Sparse Signals", "abstract": "Estimating full-body human motion via sparse tracking signals from\nhead-mounted displays and hand controllers in 3D scenes is crucial to\napplications in AR/VR. One of the biggest challenges to this task is the\none-to-many mapping from sparse observations to dense full-body motions, which\nendowed inherent ambiguities. To help resolve this ambiguous problem, we\nintroduce a new framework to combine rich contextual information provided by\nscenes to benefit full-body motion tracking from sparse observations. To\nestimate plausible human motions given sparse tracking signals and 3D scenes,\nwe develop $\\text{S}^2$Fusion, a unified framework fusing \\underline{S}cene and\nsparse \\underline{S}ignals with a conditional dif\\underline{Fusion} model.\n$\\text{S}^2$Fusion first extracts the spatial-temporal relations residing in\nthe sparse signals via a periodic autoencoder, and then produces time-alignment\nfeature embedding as additional inputs. Subsequently, by drawing initial noisy\nmotion from a pre-trained prior, $\\text{S}^2$Fusion utilizes conditional\ndiffusion to fuse scene geometry and sparse tracking signals to generate\nfull-body scene-aware motions. The sampling procedure of $\\text{S}^2$Fusion is\nfurther guided by a specially designed scene-penetration loss and\nphase-matching loss, which effectively regularizes the motion of the lower body\neven in the absence of any tracking signals, making the generated motion much\nmore plausible and coherent. 
Extensive experimental results have demonstrated\nthat our $\\text{S}^2$Fusion outperforms the state-of-the-art in terms of\nestimation quality and smoothness.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Jiangnan Tang", "Jingya Wang", "Kaiyang Ji", "Lan Xu", "Jingyi Yu", "Ye Shi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f50d"}, "filepath": "data/2309.13101.png", "tags": [], "_media_type": "image", "_rand": 0.9998607300717294, "arXiv_link": "https://arxiv.org/abs/2309.13101", "other_link": "", "title": "Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction", "abstract": "Implicit neural representation has paved the way for new approaches to\ndynamic scene reconstruction and rendering. Nonetheless, cutting-edge dynamic\nneural rendering methods rely heavily on these implicit representations, which\nfrequently struggle to capture the intricate details of objects in the scene.\nFurthermore, implicit methods have difficulty achieving real-time rendering in\ngeneral dynamic scenes, limiting their use in a variety of tasks. To address\nthe issues, we propose a deformable 3D Gaussians Splatting method that\nreconstructs scenes using 3D Gaussians and learns them in canonical space with\na deformation field to model monocular dynamic scenes. We also introduce an\nannealing smoothing training mechanism with no extra overhead, which can\nmitigate the impact of inaccurate poses on the smoothness of time interpolation\ntasks in real-world datasets. Through a differential Gaussian rasterizer, the\ndeformable 3D Gaussians not only achieve higher rendering quality but also\nreal-time rendering speed. Experiments show that our method outperforms\nexisting methods significantly in terms of both rendering quality and speed,\nmaking it well-suited for tasks such as novel-view synthesis, time\ninterpolation, and real-time rendering.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Ziyi Yang", "Xinyu Gao", "Wen Zhou", "Shaohui Jiao", "Yuqing Zhang", "Xiaogang Jin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f50e"}, "filepath": "data/2402.19470.png", "tags": [], "_media_type": "image", "_rand": 0.9993092678355934, "arXiv_link": "https://arxiv.org/abs/2402.19470", "other_link": "", "title": "Towards Generalizable Tumor Synthesis", "abstract": "Tumor synthesis enables the creation of artificial tumors in medical images,\nfacilitating the training of AI models for tumor detection and segmentation.\nHowever, success in tumor synthesis hinges on creating visually realistic\ntumors that are generalizable across multiple organs and, furthermore, the\nresulting AI models being capable of detecting real tumors in images sourced\nfrom different domains (e.g., hospitals). This paper made a progressive stride\ntoward generalizable tumor synthesis by leveraging a critical observation:\nearly-stage tumors (< 2cm) tend to have similar imaging characteristics in\ncomputed tomography (CT), whether they originate in the liver, pancreas, or\nkidneys. 
We have ascertained that generative AI models, e.g., Diffusion Models,\ncan create realistic tumors generalized to a range of organs even when trained\non a limited number of tumor examples from only one organ. Moreover, we have\nshown that AI models trained on these synthetic tumors can be generalized to\ndetect and segment real tumors from CT volumes, encompassing a broad spectrum\nof patient demographics, imaging protocols, and healthcare facilities.", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Qi Chen", "Xiaoxi Chen", "Haorui Song", "Alan L. Yuille", "Zhiwei Xiong", "Chen Wei", "Zongwei Zhou"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f50f"}, "filepath": "data/2312.07530.png", "tags": [], "_media_type": "image", "_rand": 0.9993707868452196, "arXiv_link": "https://arxiv.org/abs/2312.07530", "other_link": "https://github.com/kuanchihhuang/VG-W3D.", "title": "Prompt3D: Random Prompt Assisted Weakly-Supervised 3D Object Detection", "abstract": "Weakly supervised 3D object detection aims to learn a 3D detector with lower\nannotation cost, e.g., 2D labels. Unlike prior work which still relies on few\naccurate 3D annotations, we propose a framework to study how to leverage\nconstraints between 2D and 3D domains without requiring any 3D labels.\nSpecifically, we employ visual data from three perspectives to establish\nconnections between 2D and 3D domains. First, we design a feature-level\nconstraint to align LiDAR and image features based on object-aware regions.\nSecond, the output-level constraint is developed to enforce the overlap between\n2D and projected 3D box estimations. Finally, the training-level constraint is\nutilized by producing accurate and consistent 3D pseudo-labels that align with\nthe visual data. We conduct extensive experiments on the KITTI dataset to\nvalidate the effectiveness of the proposed three constraints. Without using any\n3D labels, our method achieves favorable performance against state-of-the-art\napproaches and is competitive with the method that uses 500-frame 3D\nannotations. Code and models will be made publicly available at\nhttps://github.com/kuanchihhuang/VG-W3D.", "keywords": [], "authors_list": ["Xiaohong Zhang", "Huisheng Ye", "Jingwen Li", "Qinyu Tang", "Yuanqi Li", "Yanwen Guo", "Jie Guo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f510"}, "filepath": "data/2405.02954.png", "tags": [], "_media_type": "image", "_rand": 0.9997480874068035, "arXiv_link": "https://arxiv.org/abs/2405.02954", "other_link": "", "title": "Discriminative Pattern Calibration Mechanism for Source-Free Domain Adaptation", "abstract": "Source-free domain adaptation (SFDA) aims to adapt a source model trained on\na fully-labeled source domain to a related but unlabeled target domain. While\nthe source model is a key avenue for acquiring target pseudolabels, the\ngenerated pseudolabels may exhibit source bias. In the conventional SFDA\npipeline, a large data (e.g. ImageNet) pre-trained feature extractor is used to\ninitialize the source model at the start of source training, and subsequently\ndiscarded. 
Despite having diverse features important for generalization, the\npre-trained feature extractor can overfit to the source data distribution\nduring source training and forget relevant target domain knowledge. Rather than\ndiscarding this valuable knowledge, we introduce an integrated framework to\nincorporate pre-trained networks into the target adaptation process. The\nproposed framework is flexible and allows us to plug modern pre-trained\nnetworks into the adaptation process to leverage their stronger representation\nlearning capabilities. For adaptation, we propose the Co-learn algorithm to\nimprove target pseudolabel quality collaboratively through the source model and\na pre-trained feature extractor. Building on the recent success of the\nvision-language model CLIP in zero-shot image recognition, we present an\nextension Co-learn++ to further incorporate CLIP's zero-shot classification\ndecisions. We evaluate on 3 benchmark datasets and include more challenging\nscenarios such as open-set, partial-set and open-partial SFDA. Experimental\nresults demonstrate that our proposed strategy improves adaptation performance\nand can be successfully integrated with existing SFDA methods.", "keywords": [], "authors_list": ["Haifeng Xia", "Siyu Xia", "Zhengming Ding"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f511"}, "filepath": "data/2403.17006.png", "tags": [], "_media_type": "image", "_rand": 0.9990767173169629, "arXiv_link": "https://arxiv.org/abs/2403.17006", "other_link": "", "title": "Reconstruction-free Cascaded Adaptive Compressive Sensing", "abstract": "While deep neural networks (NN) significantly advance image compressed\nsensing (CS) by improving reconstruction quality, the necessity of training\ncurrent CS NNs from scratch constrains their effectiveness and hampers rapid\ndeployment. Although recent methods utilize pre-trained diffusion models for\nimage reconstruction, they struggle with slow inference and restricted\nadaptability to CS. To tackle these challenges, this paper proposes Invertible\nDiffusion Models (IDM), a novel efficient, end-to-end diffusion-based CS\nmethod. IDM repurposes a large-scale diffusion sampling process as a\nreconstruction model, and finetunes it end-to-end to recover original images\ndirectly from CS measurements, moving beyond the traditional paradigm of\none-step noise estimation learning. To enable such memory-intensive end-to-end\nfinetuning, we propose a novel two-level invertible design to transform both\n(1) the multi-step sampling process and (2) the noise estimation U-Net in each\nstep into invertible networks. As a result, most intermediate features are\ncleared during training to reduce up to 93.8% GPU memory. In addition, we\ndevelop a set of lightweight modules to inject measurements into noise\nestimator to further facilitate reconstruction. 
Experiments demonstrate that\nIDM outperforms existing state-of-the-art CS networks by up to 2.64dB in PSNR.\nCompared to the recent diffusion model-based approach DDNM, our IDM achieves up\nto 10.09dB PSNR gain and 14.54 times faster inference.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Chenxi Qiu", "Tao Yue", "Xuemei Hu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f512"}, "filepath": "data/2402.17678.png", "tags": [], "_media_type": "image", "_rand": 0.9995157096075491, "arXiv_link": "https://arxiv.org/abs/2402.17678", "other_link": "", "title": "CAD-SIGNet: CAD Language Inference from Point Clouds using Layer-wise Sketch Instance Guided Attention", "abstract": "Reverse engineering in the realm of Computer-Aided Design (CAD) has been a\nlongstanding aspiration, though not yet entirely realized. Its primary aim is\nto uncover the CAD process behind a physical object given its 3D scan. We\npropose CAD-SIGNet, an end-to-end trainable and auto-regressive architecture to\nrecover the design history of a CAD model represented as a sequence of\nsketch-and-extrusion from an input point cloud. Our model learns\nvisual-language representations by layer-wise cross-attention between point\ncloud and CAD language embedding. In particular, a new Sketch instance Guided\nAttention (SGA) module is proposed in order to reconstruct the fine-grained\ndetails of the sketches. Thanks to its auto-regressive nature, CAD-SIGNet not\nonly reconstructs a unique full design history of the corresponding CAD model\ngiven an input point cloud but also provides multiple plausible design choices.\nThis allows for an interactive reverse engineering scenario by providing\ndesigners with multiple next-step choices along with the design process.\nExtensive experiments on publicly available CAD datasets showcase the\neffectiveness of our approach against existing baseline models in two settings,\nnamely, full design history recovery and conditional auto-completion from point\nclouds.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models"], "authors_list": ["Mohammad Sadil Khan", "Elona Dupont", "Sk Aziz Ali", "Kseniya Cherenkova", "Anis Kacem", "Djamila Aouada"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f513"}, "filepath": "data/2404.02638.png", "tags": [], "_media_type": "image", "_rand": 0.9994489069648539, "arXiv_link": "https://arxiv.org/abs/2404.02638", "other_link": "https://github.com/yejy53/SG-BEV.", "title": "SG-BEV: Satellite-Guided BEV Fusion for Cross-View Semantic Segmentation", "abstract": "This paper aims at achieving fine-grained building attribute segmentation in\na cross-view scenario, i.e., using satellite and street-view image pairs. The\nmain challenge lies in overcoming the significant perspective differences\nbetween street views and satellite views. In this work, we introduce SG-BEV, a\nnovel approach for satellite-guided BEV fusion for cross-view semantic\nsegmentation. 
To overcome the limitations of existing cross-view projection\nmethods in capturing the complete building facade features, we innovatively\nincorporate Bird's Eye View (BEV) method to establish a spatially explicit\nmapping of street-view features. Moreover, we fully leverage the advantages of\nmultiple perspectives by introducing a novel satellite-guided reprojection\nmodule, optimizing the uneven feature distribution issues associated with\ntraditional BEV methods. Our method demonstrates significant improvements on\nfour cross-view datasets collected from multiple cities, including New York,\nSan Francisco, and Boston. On average across these datasets, our method\nachieves an increase in mIOU by 10.13% and 5.21% compared with the\nstate-of-the-art satellite-based and cross-view methods. The code and datasets\nof this work will be released at https://github.com/yejy53/SG-BEV.", "keywords": ["Scene analysis and understanding", "Remote sensing and photogrammetry"], "authors_list": ["Junyan Ye", "Qiyan Luo", "Jinhua Yu", "Huaping Zhong", "Zhimeng Zheng", "Conghui He", "Weijia Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f514"}, "filepath": "data/2401.01543.png", "tags": [], "_media_type": "image", "_rand": 0.9994387330284463, "arXiv_link": "https://arxiv.org/abs/2401.01543", "other_link": "", "title": "Retraining-free Model Quantization via One-Shot Weight-Coupling Learning", "abstract": "Quantization is of significance for compressing the over-parameterized deep\nneural models and deploying them on resource-limited devices. Fixed-precision\nquantization suffers from performance drop due to the limited numerical\nrepresentation ability. Conversely, mixed-precision quantization (MPQ) is\nadvocated to compress the model effectively by allocating heterogeneous\nbit-width for layers. MPQ is typically organized into a searching-retraining\ntwo-stage process. Previous works only focus on determining the optimal\nbit-width configuration in the first stage efficiently, while ignoring the\nconsiderable time costs in the second stage. However, retraining always\nconsumes hundreds of GPU-hours on the cutting-edge GPUs, thus hindering\ndeployment efficiency significantly. In this paper, we devise a one-shot\ntraining-searching paradigm for mixed-precision model compression.\nSpecifically, in the first stage, all potential bit-width configurations are\ncoupled and thus optimized simultaneously within a set of shared weights.\nHowever, our observations reveal a previously unseen and severe bit-width\ninterference phenomenon among highly coupled weights during optimization,\nleading to considerable performance degradation under a high compression ratio.\nTo tackle this problem, we first design a bit-width scheduler to dynamically\nfreeze the most turbulent bit-width of layers during training, to ensure the\nrest bit-widths converged properly. 
Then, taking inspiration from information\ntheory, we present an information distortion mitigation technique to align the\nbehaviour of the bad-performing bit-widths to the well-performing ones.", "keywords": [], "authors_list": ["Chen Tang", "Yuan Meng", "Jiacheng Jiang", "Shuzhao Xie", "Rongwei Lu", "Xinzhu Ma", "Zhi Wang", "Wenwu Zhu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f515"}, "filepath": "data/2403.10255.png", "tags": [], "_media_type": "image", "_rand": 0.9990795090621842, "arXiv_link": "https://arxiv.org/abs/2403.10255", "other_link": "", "title": "Arbitrary-Scale Image Generation and Upsampling using Latent Diffusion Model and Implicit Neural Decoder", "abstract": "Super-resolution (SR) and image generation are important tasks in computer\nvision and are widely adopted in real-world applications. Most existing\nmethods, however, generate images only at fixed-scale magnification and suffer\nfrom over-smoothing and artifacts. Additionally, they do not offer enough\ndiversity of output images nor image consistency at different scales. Most\nrelevant work applied Implicit Neural Representation (INR) to the denoising\ndiffusion model to obtain continuous-resolution yet diverse and high-quality SR\nresults. Since this model operates in the image space, the larger the\nresolution of image is produced, the more memory and inference time is\nrequired, and it also does not maintain scale-specific consistency. We propose\na novel pipeline that can super-resolve an input image or generate from a\nrandom noise a novel image at arbitrary scales. The method consists of a\npretrained auto-encoder, a latent diffusion model, and an implicit neural\ndecoder, and their learning strategies. The proposed method adopts diffusion\nprocesses in a latent space, thus efficient, yet aligned with output image\nspace decoded by MLPs at arbitrary scales. More specifically, our\narbitrary-scale decoder is designed by the symmetric decoder w/o up-scaling\nfrom the pretrained auto-encoder, and Local Implicit Image Function (LIIF) in\nseries. The latent diffusion process is learnt by the denoising and the\nalignment losses jointly. Errors in output images are backpropagated via the\nfixed decoder, improving the quality of output images. In the extensive\nexperiments using multiple public benchmarks on the two tasks i.e. image\nsuper-resolution and novel image generation at arbitrary scales, the proposed\nmethod outperforms relevant methods in metrics of image quality, diversity and\nscale consistency. It is significantly better than the relevant prior-art in\nthe inference speed and memory usage.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Jinseok Kim", "Tae-Kyun Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f516"}, "filepath": "data/2312.09866.png", "tags": [], "_media_type": "image", "_rand": 0.9995178128597787, "arXiv_link": "https://arxiv.org/abs/2312.09866", "other_link": "", "title": "PLGSLAM: Progressive Neural Scene Representation with Local to Global Bundle Adjustment", "abstract": "Neural implicit scene representations have recently shown encouraging results\nin dense visual SLAM. 
However, existing methods produce low-quality scene\nreconstruction and low-accuracy localization performance when scaling up to\nlarge indoor scenes and long sequences. These limitations are mainly due to\ntheir single, global radiance field with finite capacity, which does not adapt\nto large scenarios. Their end-to-end pose networks are also not robust enough\nwith the growth of cumulative errors in large scenes. To this end, we introduce\nPLGSLAM, a neural visual SLAM system capable of high-fidelity surface\nreconstruction and robust camera tracking in real-time. To handle large-scale\nindoor scenes, PLGSLAM proposes a progressive scene representation method which\ndynamically allocates new local scene representation trained with frames within\na local sliding window. This allows us to scale up to larger indoor scenes and\nimproves robustness (even under pose drifts). In local scene representation,\nPLGSLAM utilizes tri-planes for local high-frequency features with multi-layer\nperceptron (MLP) networks for the low-frequency feature, achieving smoothness\nand scene completion in unobserved areas. Moreover, we propose local-to-global\nbundle adjustment method with a global keyframe database to address the\nincreased pose drifts on long sequences. Experimental results demonstrate that\nPLGSLAM achieves state-of-the-art scene reconstruction results and tracking\nperformance across various datasets and scenarios (both in small and\nlarge-scale indoor environments).", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Tianchen Deng", "Guole Shen", "Tong Qin", "jianyu wang", "Wentao Zhao", "Jingchuan Wang", "Danwei Wang", "Weidong Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f517"}, "filepath": "data/2404.00312.png", "tags": [], "_media_type": "image", "_rand": 0.9991811684208016, "arXiv_link": "https://arxiv.org/abs/2404.00312", "other_link": "", "title": "Bayesian Exploration of Pre-trained Models for Low-shot Image Classification", "abstract": "Low-shot image classification is a fundamental task in computer vision, and\nthe emergence of large-scale vision-language models such as CLIP has greatly\nadvanced the forefront of research in this field. However, most existing\nCLIP-based methods lack the flexibility to effectively incorporate other\npre-trained models that encompass knowledge distinct from CLIP. To bridge the\ngap, this work proposes a simple and effective probabilistic model ensemble\nframework based on Gaussian processes, which have previously demonstrated\nremarkable efficacy in processing small data. We achieve the integration of\nprior knowledge by specifying the mean function with CLIP and the kernel\nfunction with an ensemble of deep kernels built upon various pre-trained\nmodels. By regressing the classification label directly, our framework enables\nanalytical inference, straightforward uncertainty quantification, and\nprincipled hyper-parameter tuning. Through extensive experiments on standard\nbenchmarks, we demonstrate that our method consistently outperforms competitive\nensemble baselines regarding predictive performance. Additionally, we assess\nthe robustness of our method and the quality of the yielded uncertainty\nestimates on out-of-distribution datasets. 
We also illustrate that our method,\ndespite relying on label regression, still enjoys superior model calibration\ncompared to most deterministic baselines.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Yibo Miao", "Yu lei", "Feng Zhou", "Zhijie Deng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f518"}, "filepath": "data/2310.06627.png", "tags": [], "_media_type": "image", "_rand": 0.9999819477911824, "arXiv_link": "https://arxiv.org/abs/2310.06627", "other_link": "https://bzhao.me/C-VQA/.", "title": "What If the TV Was Off? Examining Counterfactual Reasoning Abilities of Multi-modal Language Models", "abstract": "Counterfactual reasoning, a fundamental aspect of human cognition, involves\ncontemplating alternatives to established facts or past events, significantly\nenhancing our abilities in planning and decision-making. In light of the\nadvancements in current multi-modal large language models, we explore their\neffectiveness in counterfactual reasoning. To facilitate this investigation, we\nintroduce a novel dataset, C-VQA, specifically designed to test the\ncounterfactual reasoning capabilities of modern multi-modal large language\nmodels. This dataset is constructed by infusing original questions with\ncounterfactual presuppositions, spanning various types such as numerical and\nboolean queries. It encompasses a mix of real and synthetic data, representing\na wide range of difficulty levels. Our thorough evaluations of contemporary\nvision-language models using this dataset have revealed substantial performance\ndrops, with some models showing up to a 40% decrease, highlighting a\nsignificant gap between current models and human-like vision reasoning\ncapabilities. We hope our dataset will serve as a vital benchmark for\nevaluating the counterfactual reasoning capabilities of models. Code and\ndataset are publicly available at https://bzhao.me/C-VQA/.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Letian Zhang", "Xiaotong Zhai", "Zhongkai Zhao", "Yongshuo Zong", "Xin Wen", "Bingchen Zhao"], "category_name": "Computation and Language", "all_categories": ["Computation and Language", "Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f519"}, "filepath": "data/2404.10241.png", "tags": [], "_media_type": "image", "_rand": 0.9993356145845355, "arXiv_link": "https://arxiv.org/abs/2404.10241", "other_link": "https://github.com/CrystalSixone/VLN-GOAT.", "title": "Vision-and-Language Navigation via Causal Learning", "abstract": "In the pursuit of robust and generalizable environment perception and\nlanguage understanding, the ubiquitous challenge of dataset bias continues to\nplague vision-and-language navigation (VLN) agents, hindering their performance\nin unseen environments. This paper introduces the generalized cross-modal\ncausal transformer (GOAT), a pioneering solution rooted in the paradigm of\ncausal inference. By delving into both observable and unobservable confounders\nwithin vision, language, and history, we propose the back-door and front-door\nadjustment causal learning (BACL and FACL) modules to promote unbiased learning\nby comprehensively mitigating potential spurious correlations. 
Additionally, to\ncapture global confounder features, we propose a cross-modal feature pooling\n(CFP) module supervised by contrastive learning, which is also shown to be\neffective in improving cross-modal representations during pre-training.\nExtensive experiments across multiple VLN datasets (R2R, REVERIE, RxR, and\nSOON) underscore the superiority of our proposed method over previous\nstate-of-the-art approaches. Code is available at\nhttps://github.com/CrystalSixone/VLN-GOAT.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Liuyi Wang", "Zongtao He", "Ronghao Dang", "mengjiao shen", "Chengju Liu", "Qijun Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f51a"}, "filepath": "data/2404.05559.png", "tags": [], "_media_type": "image", "_rand": 0.999364625481537, "arXiv_link": "https://arxiv.org/abs/2404.05559", "other_link": "https://github.com/JacobChalk/TIM", "title": "TIM: A Time Interval Machine for Audio-Visual Action Recognition", "abstract": "Diverse actions give rise to rich audio-visual signals in long videos. Recent\nworks showcase that the two modalities of audio and video exhibit different\ntemporal extents of events and distinct labels. We address the interplay\nbetween the two modalities in long videos by explicitly modelling the temporal\nextents of audio and visual events. We propose the Time Interval Machine (TIM)\nwhere a modality-specific time interval poses as a query to a transformer\nencoder that ingests a long video input. The encoder then attends to the\nspecified interval, as well as the surrounding context in both modalities, in\norder to recognise the ongoing action.\n We test TIM on three long audio-visual video datasets: EPIC-KITCHENS,\nPerception Test, and AVE, reporting state-of-the-art (SOTA) for recognition. On\nEPIC-KITCHENS, we beat previous SOTA that utilises LLMs and significantly\nlarger pre-training by 2.9% top-1 action recognition accuracy. Additionally, we\nshow that TIM can be adapted for action detection, using dense multi-scale\ninterval queries, outperforming SOTA on EPIC-KITCHENS-100 for most metrics, and\nshowing strong performance on the Perception Test. Our ablations show the\ncritical role of integrating the two modalities and modelling their time\nintervals in achieving this performance. 
Code and models at:\nhttps://github.com/JacobChalk/TIM", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Jacob Chalk", "Jaesung Huh", "Evangelos Kazakos", "Andrew Zisserman", "Dima Damen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f51b"}, "filepath": "data/2404.05687.png", "tags": [], "_media_type": "image", "_rand": 0.9999703374738771, "arXiv_link": "https://arxiv.org/abs/2404.05687", "other_link": "https://github.com/mlvlab/RALF", "title": "Retrieval-Augmented Open-Vocabulary Object Detection", "abstract": "Open-vocabulary object detection (OVD) has been studied with Vision-Language\nModels (VLMs) to detect novel objects beyond the pre-trained categories.\nPrevious approaches improve the generalization ability to expand the knowledge\nof the detector, using 'positive' pseudo-labels with additional 'class' names,\ne.g., sock, iPod, and alligator. To extend the previous methods in two aspects,\nwe propose Retrieval-Augmented Losses and visual Features (RALF). Our method\nretrieves related 'negative' classes and augments loss functions. Also, visual\nfeatures are augmented with 'verbalized concepts' of classes, e.g., worn on the\nfeet, handheld music player, and sharp teeth. Specifically, RALF consists of\ntwo modules: Retrieval Augmented Losses (RAL) and Retrieval-Augmented visual\nFeatures (RAF). RAL constitutes two losses reflecting the semantic similarity\nwith negative vocabularies. In addition, RAF augments visual features with the\nverbalized concepts from a large language model (LLM). Our experiments\ndemonstrate the effectiveness of RALF on COCO and LVIS benchmark datasets. We\nachieve improvement up to 3.4 box AP$_{50}^{\\text{N}}$ on novel categories of\nthe COCO dataset and 3.6 mask AP$_{\\text{r}}$ gains on the LVIS dataset. Code\nis available at https://github.com/mlvlab/RALF .", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Deep learning architectures and techniques"], "authors_list": ["Jooyeon Kim", "Eulrang Cho", "Sehyung Kim", "Hyunwoo J. Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f51c"}, "filepath": "data/2311.11908v3.png", "tags": [], "_media_type": "image", "_rand": 0.9993105177188721, "arXiv_link": "https://arxiv.org/html/2311.11908v3", "other_link": "", "title": "Continual Motion Prediction Learning Framework via Meta-Representation Learning and Optimal Memory Buffer Retention Strategy", "abstract": "Continual learning is a subfield of machine learning, which aims to allow\nmachine learning models to continuously learn on new data, by accumulating\nknowledge without forgetting what was learned in the past. In this work, we\ntake a step back, and ask: \"Why should one care about continual learning in the\nfirst place?\". We set the stage by examining recent continual learning papers\npublished at four major machine learning conferences, and show that\nmemory-constrained settings dominate the field. Then, we discuss five open\nproblems in machine learning, and even though they might seem unrelated to\ncontinual learning at first sight, we show that continual learning will\ninevitably be part of their solution. 
These problems are model editing,\npersonalization and specialization, on-device learning, faster (re-)training\nand reinforcement learning. Finally, by comparing the desiderata from these\nunsolved problems and the current assumptions in continual learning, we\nhighlight and discuss four future directions for continual learning research.\nWe hope that this work offers an interesting perspective on the future of\ncontinual learning, while displaying its potential value and the paths we have\nto pursue in order to make it successful. This work is the result of the many\ndiscussions the authors had at the Dagstuhl seminar on Deep Continual Learning,\nin March 2023.", "keywords": [], "authors_list": ["Dae Jun Kang", "Dongsuk Kum", "Sanmin Kim"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f51d"}, "filepath": "data/2402.19122.png", "tags": [], "_media_type": "image", "_rand": 0.9997656988195258, "arXiv_link": "https://arxiv.org/abs/2402.19122", "other_link": "https://github.com/ShiqiYu/OpenGait.", "title": "Learning Visual Prompt for Gait Recognition", "abstract": "Gait recognition stands as one of the most pivotal remote identification\ntechnologies and progressively expands across research and industry\ncommunities. However, existing gait recognition methods heavily rely on\ntask-specific upstream driven by supervised learning to provide explicit gait\nrepresentations like silhouette sequences, which inevitably introduce expensive\nannotation costs and potential error accumulation. Escaping from this trend,\nthis work explores effective gait representations based on the all-purpose\nknowledge produced by task-agnostic Large Vision Models (LVMs) and proposes a\nsimple yet efficient gait framework, termed BigGait. Specifically, the Gait\nRepresentation Extractor (GRE) within BigGait draws upon design principles from\nestablished gait representations, effectively transforming all-purpose\nknowledge into implicit gait representations without requiring third-party\nsupervision signals. Experiments on CCPG, CAISA-B* and SUSTech1K indicate that\nBigGait significantly outperforms the previous methods in both within-domain\nand cross-domain tasks in most cases, and provides a more practical paradigm\nfor learning the next-generation gait representation. Finally, we delve into\nprospective challenges and promising directions in LVMs-based gait recognition,\naiming to inspire future work in this emerging topic. 
The source code is\navailable at https://github.com/ShiqiYu/OpenGait.", "keywords": ["Biometrics and human analysis", "Large multimodal models and prompting techniques"], "authors_list": ["Kang Ma", "Ying Fu", "Chunshui Cao", "Saihui Hou", "Yongzhen Huang", "Dezhi Zheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f51e"}, "filepath": "data/2403.12933.png", "tags": [], "_media_type": "image", "_rand": 0.9992520848283128, "arXiv_link": "https://arxiv.org/abs/2403.12933", "other_link": "http://daooshee.github.io/QuadPrior-Website/", "title": "Zero-Reference Low-Light Enhancement via Physical Quadruple Priors", "abstract": "Understanding illumination and reducing the need for supervision pose a\nsignificant challenge in low-light enhancement. Current approaches are highly\nsensitive to data usage during training and illumination-specific\nhyper-parameters, limiting their ability to handle unseen scenarios. In this\npaper, we propose a new zero-reference low-light enhancement framework\ntrainable solely with normal light images. To accomplish this, we devise an\nillumination-invariant prior inspired by the theory of physical light transfer.\nThis prior serves as the bridge between normal and low-light images. Then, we\ndevelop a prior-to-image framework trained without low-light data. During\ntesting, this framework is able to restore our illumination-invariant prior\nback to images, automatically achieving low-light enhancement. Within this\nframework, we leverage a pretrained generative diffusion model for model\nability, introduce a bypass decoder to handle detail distortion, as well as\noffer a lightweight version for practicality. Extensive experiments demonstrate\nour framework's superiority in various scenarios as well as good\ninterpretability, robustness, and efficiency. Code is available on our project\nhomepage: http://daooshee.github.io/QuadPrior-Website/", "keywords": ["Efficient and scalable vision"], "authors_list": ["Wenjing Wang", "Huan Yang", "Jianlong Fu", "Jiaying Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f51f"}, "filepath": "data/2403.15681.png", "tags": [], "_media_type": "image", "_rand": 0.9998718164825028, "arXiv_link": "https://arxiv.org/abs/2403.15681", "other_link": "", "title": "Differentiable Information Bottleneck for Deterministic Multi-view Clustering", "abstract": "In recent several years, the information bottleneck (IB) principle provides\nan information-theoretic framework for deep multi-view clustering (MVC) by\ncompressing multi-view observations while preserving the relevant information\nof multiple views. 
Although existing IB-based deep MVC methods have achieved\nhuge success, they rely on variational approximation and distribution\nassumption to estimate the lower bound of mutual information, which is a\nnotoriously hard and impractical problem in high-dimensional multi-view spaces.\nIn this work, we propose a new differentiable information bottleneck (DIB)\nmethod, which provides a deterministic and analytical MVC solution by fitting\nthe mutual information without the necessity of variational approximation.\nSpecifically, we first propose to directly fit the mutual information of\nhigh-dimensional spaces by leveraging normalized kernel Gram matrix, which does\nnot require any auxiliary neural estimator to estimate the lower bound of\nmutual information. Then, based on the new mutual information measurement, a\ndeterministic multi-view neural network with analytical gradients is explicitly\ntrained to parameterize IB principle, which derives a deterministic compression\nof input variables from different views. Finally, a triplet consistency\ndiscovery mechanism is devised, which is capable of mining the feature\nconsistency, cluster consistency and joint consistency based on the\ndeterministic and compact representations. Extensive experimental results show\nthe superiority of our DIB method on 6 benchmarks compared with 13\nstate-of-the-art baselines.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Xiaoqiang Yan", "Zhixiang Jin", "Fengshou Han", "Yangdong Ye"], "category_name": "Information Theory", "all_categories": ["Information Theory", "Machine Learning", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f520"}, "filepath": "data/2312.03025.png", "tags": [], "_media_type": "image", "_rand": 0.9995207385115779, "arXiv_link": "https://arxiv.org/abs/2312.03025", "other_link": "", "title": "Training on Synthetic Data Beats Real Data in Multimodal Relation Extraction", "abstract": "The task of multimodal relation extraction has attracted significant research\nattention, but progress is constrained by the scarcity of available training\ndata. One natural thought is to extend existing datasets with cross-modal\ngenerative models. In this paper, we consider a novel problem setting, where\nonly unimodal data, either text or image, are available during training. We aim\nto train a multimodal classifier from synthetic data that perform well on real\nmultimodal test data. However, training with synthetic data suffers from two\nobstacles: lack of data diversity and label information loss. To alleviate the\nissues, we propose Mutual Information-aware Multimodal Iterated Relational dAta\nGEneration (MI2RAGE), which applies Chained Cross-modal Generation (CCG) to\npromote diversity in the generated data and exploits a teacher network to\nselect valuable training samples with high mutual information with the\nground-truth labels. Comparing our method to direct training on synthetic data,\nwe observed a significant improvement of 24.06% F1 with synthetic text and\n26.42% F1 with synthetic images. Notably, our best model trained on completely\nsynthetic images outperforms prior state-of-the-art models trained on real\nmultimodal data by a margin of 3.76% in F1. 
Our codebase will be made available\nupon acceptance.", "keywords": ["Multimodal models and vision-language models", "Deep learning architectures and techniques"], "authors_list": ["Zilin Du", "Haoxin Li", "Xu Guo", "Boyang Li"], "category_name": "Artificial Intelligence", "all_categories": ["Artificial Intelligence", "Computation and Language", "Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f521"}, "filepath": "data/2403.15389.png", "tags": [], "_media_type": "image", "_rand": 0.9996262332290453, "arXiv_link": "https://arxiv.org/abs/2403.15389", "other_link": "https://prismformore.github.io/diffusionmtl/.", "title": "DiffusionMTL: Learning Multi-Task Denoising Diffusion Model from Partially Annotated Data", "abstract": "Recently, there has been an increased interest in the practical problem of\nlearning multiple dense scene understanding tasks from partially annotated\ndata, where each training sample is only labeled for a subset of the tasks. The\nmissing of task labels in training leads to low-quality and noisy predictions,\nas can be observed from state-of-the-art methods. To tackle this issue, we\nreformulate the partially-labeled multi-task dense prediction as a pixel-level\ndenoising problem, and propose a novel multi-task denoising diffusion framework\ncoined as DiffusionMTL. It designs a joint diffusion and denoising paradigm to\nmodel a potential noisy distribution in the task prediction or feature maps and\ngenerate rectified outputs for different tasks. To exploit multi-task\nconsistency in denoising, we further introduce a Multi-Task Conditioning\nstrategy, which can implicitly utilize the complementary nature of the tasks to\nhelp learn the unlabeled tasks, leading to an improvement in the denoising\nperformance of the different tasks. Extensive quantitative and qualitative\nexperiments demonstrate that the proposed multi-task denoising diffusion model\ncan significantly improve multi-task prediction maps, and outperform the\nstate-of-the-art methods on three challenging multi-task benchmarks, under two\ndifferent partial-labeling evaluation settings. The code is available at\nhttps://prismformore.github.io/diffusionmtl/.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Hanrong Ye", "Dan Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f522"}, "filepath": "data/2404.11699.png", "tags": [], "_media_type": "image", "_rand": 0.9999659633081704, "arXiv_link": "https://arxiv.org/abs/2404.11699", "other_link": "", "title": "Retrieval-Augmented Embodied Agents", "abstract": "Embodied agents operating in complex and uncertain environments face\nconsiderable challenges. While some advanced agents handle complex manipulation\ntasks with proficiency, their success often hinges on extensive training data\nto develop their capabilities. In contrast, humans typically rely on recalling\npast experiences and analogous situations to solve new problems. Aiming to\nemulate this human approach in robotics, we introduce the Retrieval-Augmented\nEmbodied Agent (RAEA). This innovative system equips robots with a form of\nshared memory, significantly enhancing their performance. 
Our approach\nintegrates a policy retriever, allowing robots to access relevant strategies\nfrom an external policy memory bank based on multi-modal inputs. Additionally,\na policy generator is employed to assimilate these strategies into the learning\nprocess, enabling robots to formulate effective responses to tasks. Extensive\ntesting of RAEA in both simulated and real-world scenarios demonstrates its\nsuperior performance over traditional methods, representing a major leap\nforward in robotic technology.", "keywords": ["Multimodal models and vision-language models", "Deep learning architectures and techniques"], "authors_list": ["Yichen Zhu", "Zhicai Ou", "Xiaofeng Mou", "Jian Tang"], "category_name": "Robotics", "all_categories": ["Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f523"}, "filepath": "data/2312.01409.png", "tags": [], "_media_type": "image", "_rand": 0.9994753380666734, "arXiv_link": "https://arxiv.org/abs/2312.01409", "other_link": "", "title": "Generative Rendering: Controllable 4D-Guided Video Generation with 2D Diffusion Models", "abstract": "Traditional 3D content creation tools empower users to bring their\nimagination to life by giving them direct control over a scene's geometry,\nappearance, motion, and camera path. Creating computer-generated videos,\nhowever, is a tedious manual process, which can be automated by emerging\ntext-to-video diffusion models. Despite great promise, video diffusion models\nare difficult to control, hindering a user to apply their own creativity rather\nthan amplifying it. To address this challenge, we present a novel approach that\ncombines the controllability of dynamic 3D meshes with the expressivity and\neditability of emerging diffusion models. For this purpose, our approach takes\nan animated, low-fidelity rendered mesh as input and injects the ground truth\ncorrespondence information obtained from the dynamic mesh into various stages\nof a pre-trained text-to-image generation model to output high-quality and\ntemporally consistent frames. We demonstrate our approach on various examples\nwhere motion can be obtained by animating rigged assets or changing the camera\npath.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Shengqu Cai", "Duygu Ceylan", "Matheus Gadelha", "Chun-Hao P. Huang", "Tuanfeng Y. Wang", "Gordon Wetzstein"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f524"}, "filepath": "data/2311.17911.png", "tags": [], "_media_type": "image", "_rand": 0.9996422161017727, "arXiv_link": "https://arxiv.org/abs/2311.17911", "other_link": "https://github.com/shikiw/OPERA.", "title": "OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation", "abstract": "Hallucination, posed as a pervasive challenge of multi-modal large language\nmodels (MLLMs), has significantly impeded their real-world usage that demands\nprecise judgment. Existing methods mitigate this issue with either training\nwith specific designed data or inferencing with external knowledge from other\nsources, incurring inevitable additional costs. 
In this paper, we present\nOPERA, a novel MLLM decoding method grounded in an Over-trust Penalty and a\nRetrospection-Allocation strategy, serving as a nearly free lunch to alleviate\nthe hallucination issue without additional data, knowledge, or training. Our\napproach begins with an interesting observation that, most hallucinations are\nclosely tied to the knowledge aggregation patterns manifested in the\nself-attention matrix, i.e., MLLMs tend to generate new tokens by focusing on a\nfew summary tokens, but not all the previous tokens. Such partial over-trust\ninclination results in the neglecting of image tokens and describes the image\ncontent with hallucination. Based on the observation, OPERA introduces a\npenalty term on the model logits during the beam-search decoding to mitigate\nthe over-trust issue, along with a rollback strategy that retrospects the\npresence of summary tokens in the previously generated tokens, and re-allocate\nthe token selection if necessary. With extensive experiments, OPERA shows\nsignificant hallucination-mitigating performance on different MLLMs and\nmetrics, proving its effectiveness and generality. Our code is available at:\nhttps://github.com/shikiw/OPERA.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Qidong Huang", "Xiaoyi Dong", "Pan Zhang", "Bin Wang", "Conghui He", "Jiaqi Wang", "Dahua Lin", "Weiming Zhang", "Nenghai Yu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f525"}, "filepath": "data/2403.15679.png", "tags": [], "_media_type": "image", "_rand": 0.9990188011214204, "arXiv_link": "https://arxiv.org/abs/2403.15679", "other_link": "https://haoyan14.github.io/DS-NeRV.", "title": "Combining Frame and GOP Embeddings for Neural Video Representation", "abstract": "Implicit neural representations for video (NeRV) have recently become a novel\nway for high-quality video representation. However, existing works employ a\nsingle network to represent the entire video, which implicitly confuse static\nand dynamic information. This leads to an inability to effectively compress the\nredundant static information and lack the explicitly modeling of global\ntemporal-coherent dynamic details. To solve above problems, we propose DS-NeRV,\nwhich decomposes videos into sparse learnable static codes and dynamic codes\nwithout the need for explicit optical flow or residual supervision. By setting\ndifferent sampling rates for two codes and applying weighted sum and\ninterpolation sampling methods, DS-NeRV efficiently utilizes redundant static\ninformation while maintaining high-frequency details. Additionally, we design a\ncross-channel attention-based (CCA) fusion module to efficiently fuse these two\ncodes for frame decoding. Our approach achieves a high quality reconstruction\nof 31.2 PSNR with only 0.35M parameters thanks to separate static and dynamic\ncodes representation and outperforms existing NeRV methods in many downstream\ntasks. 
Our project website is at https://haoyan14.github.io/DS-NeRV.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Jens Eirik Saethre", "Roberto Azevedo", "Christopher Schroers"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f526"}, "filepath": "data/2405.14677.png", "tags": [], "_media_type": "image", "_rand": 0.9998138387684207, "arXiv_link": "https://arxiv.org/abs/2405.14677", "other_link": "https://github.com/feifeiobama/RectifID.", "title": "FlowIE\uff1aEfficient Image Enhancement via Rectified Flow", "abstract": "Customizing diffusion models to generate identity-preserving images from\nuser-provided reference images is an intriguing new problem. The prevalent\napproaches typically require training on extensive domain-specific images to\nachieve identity preservation, which lacks flexibility across different use\ncases. To address this issue, we exploit classifier guidance, a training-free\ntechnique that steers diffusion models using an existing classifier, for\npersonalized image generation. Our study shows that based on a recent rectified\nflow framework, the major limitation of vanilla classifier guidance in\nrequiring a special classifier can be resolved with a simple fixed-point\nsolution, allowing flexible personalization with off-the-shelf image\ndiscriminators. Moreover, its solving procedure proves to be stable when\nanchored to a reference flow trajectory, with a convergence guarantee. The\nderived method is implemented on rectified flow with different off-the-shelf\nimage discriminators, delivering advantageous personalization results for human\nfaces, live subjects, and certain objects. Code is available at\nhttps://github.com/feifeiobama/RectifID.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Yixuan Zhu", "Wenliang Zhao", "Ao Li", "Yansong Tang", "Jie Zhou", "Jiwen Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f527"}, "filepath": "data/2309.11718.png", "tags": [], "_media_type": "image", "_rand": 0.9991149671760237, "arXiv_link": "https://arxiv.org/abs/2309.11718", "other_link": "", "title": "CPR-Coach: Recognizing Composite Error Actions based on Single-class Training", "abstract": "The fine-grained medical action analysis task has received considerable\nattention from pattern recognition communities recently, but it faces the\nproblems of data and algorithm shortage. Cardiopulmonary Resuscitation (CPR) is\nan essential skill in emergency treatment. Currently, the assessment of CPR\nskills mainly depends on dummies and trainers, leading to high training costs\nand low efficiency. For the first time, this paper constructs a vision-based\nsystem to complete error action recognition and skill assessment in CPR.\nSpecifically, we define 13 types of single-error actions and 74 types of\ncomposite error actions during external cardiac compression and then develop a\nvideo dataset named CPR-Coach. By taking the CPR-Coach as a benchmark, this\npaper thoroughly investigates and compares the performance of existing action\nrecognition models based on different data modalities. 
To solve the unavoidable\nSingle-class Training & Multi-class Testing problem, we propose a\nhuman-cognition-inspired framework named ImagineNet to improve the model's\nmulti-error recognition performance under restricted supervision. Extensive\nexperiments verify the effectiveness of the framework. We hope this work could\nadvance research toward fine-grained medical action analysis and skill\nassessment. The CPR-Coach dataset and the code of ImagineNet are publicly\navailable on Github.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Shunli Wang", "Shuaibing Wang", "Dingkang Yang", "Mingcheng Li", "Haopeng Kuang", "Xiao Zhao", "Liuzhen Su", "Peng Zhai", "Lihua Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f528"}, "filepath": "data/2307.16121.png", "tags": [], "_media_type": "image", "_rand": 0.999413154281408, "arXiv_link": "https://arxiv.org/abs/2307.16121", "other_link": "", "title": "Embracing Unimodal Aleatoric Uncertainty for Robust Multimodal Fusion", "abstract": "Multi-modal fusion has shown initial promising results for object detection\nof autonomous driving perception. However, many existing fusion schemes do not\nconsider the quality of each fusion input and may suffer from adverse\nconditions on one or more sensors. While predictive uncertainty has been\napplied to characterize single-modal object detection performance at run time,\nincorporating uncertainties into the multi-modal fusion still lacks effective\nsolutions due primarily to the uncertainty's cross-modal incomparability and\ndistinct sensitivities to various adverse conditions. To fill this gap, this\npaper proposes Uncertainty-Encoded Mixture-of-Experts (UMoE) that explicitly\nincorporates single-modal uncertainties into LiDAR-camera fusion. UMoE uses\nindividual expert network to process each sensor's detection result together\nwith encoded uncertainty. Then, the expert networks' outputs are analyzed by a\ngating network to determine the fusion weights. The proposed UMoE module can be\nintegrated into any proposal fusion pipeline. Evaluation shows that UMoE\nachieves a maximum of 10.67%, 3.17%, and 5.40% performance gain compared with\nthe state-of-the-art proposal-level multi-modal object detectors under extreme\nweather, adversarial, and blinding attack scenarios.", "keywords": [], "authors_list": ["Zixian Gao", "Xun Jiang", "Xing Xu", "Fumin Shen", "Yujie Li", "Heng Tao Shen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f529"}, "filepath": "data/2405.18437.png", "tags": [], "_media_type": "image", "_rand": 0.9995764430550967, "arXiv_link": "https://arxiv.org/abs/2405.18437", "other_link": "https://github.com/SegoleneMartin/transductive-CLIP.", "title": "Transductive Zero-Shot $\\&$ Few-Shot CLIP", "abstract": "Transductive inference has been widely investigated in few-shot image\nclassification, but completely overlooked in the recent, fast growing\nliterature on adapting vision-language models like CLIP. 
This paper addresses\nthe transductive zero-shot and few-shot CLIP classification challenge, in which\ninference is performed jointly across a mini-batch of unlabeled query samples,\nrather than treating each instance independently. We initially construct\ninformative vision-text probability features, leading to a classification\nproblem on the unit simplex set. Inspired by Expectation-Maximization (EM), our\noptimization-based classification objective models the data probability\ndistribution for each class using a Dirichlet law. The minimization problem is\nthen tackled with a novel block Majorization-Minimization algorithm, which\nsimultaneously estimates the distribution parameters and class assignments.\nExtensive numerical experiments on 11 datasets underscore the benefits and\nefficacy of our batch inference approach.On zero-shot tasks with test batches\nof 75 samples, our approach yields near 20% improvement in ImageNet accuracy\nover CLIP's zero-shot performance. Additionally, we outperform state-of-the-art\nmethods in the few-shot setting. The code is available at:\nhttps://github.com/SegoleneMartin/transductive-CLIP.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["S\u00e9gol\u00e8ne Martin", "Yunshi HUANG", "Fereshteh Shakeri", "Jean-Christophe Pesquet", "Ismail Ben Ayed"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f52a"}, "filepath": "data/2311.18840.png", "tags": [], "_media_type": "image", "_rand": 0.9999155055114858, "arXiv_link": "https://arxiv.org/abs/2311.18840", "other_link": "", "title": "Just Add $\\pi$! Pose Induced Video Transformers for Understanding Activities of Daily Living", "abstract": "Video transformers have become the de facto standard for human action\nrecognition, yet their exclusive reliance on the RGB modality still limits\ntheir adoption in certain domains. One such domain is Activities of Daily\nLiving (ADL), where RGB alone is not sufficient to distinguish between visually\nsimilar actions, or actions observed from multiple viewpoints. To facilitate\nthe adoption of video transformers for ADL, we hypothesize that the\naugmentation of RGB with human pose information, known for its sensitivity to\nfine-grained motion and multiple viewpoints, is essential. Consequently, we\nintroduce the first Pose Induced Video Transformer: PI-ViT (or $\\pi$-ViT), a\nnovel approach that augments the RGB representations learned by video\ntransformers with 2D and 3D pose information. The key elements of $\\pi$-ViT are\ntwo plug-in modules, 2D Skeleton Induction Module and 3D Skeleton Induction\nModule, that are responsible for inducing 2D and 3D pose information into the\nRGB representations. These modules operate by performing pose-aware auxiliary\ntasks, a design choice that allows $\\pi$-ViT to discard the modules during\ninference. 
Notably, $\\pi$-ViT achieves the state-of-the-art performance on\nthree prominent ADL datasets, encompassing both real-world and large-scale\nRGB-D datasets, without requiring poses or additional computational overhead at\ninference.", "keywords": ["Biometrics and human analysis", "Scene analysis and understanding"], "authors_list": ["Dominick Reilly", "Srijan Das"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f52b"}, "filepath": "data/2404.01998.png", "tags": [], "_media_type": "image", "_rand": 0.9997287237617913, "arXiv_link": "https://arxiv.org/abs/2404.01998", "other_link": "", "title": "Specularity Factorization for Low Light Enhancement", "abstract": "We present a new additive image factorization technique that treats images to\nbe composed of multiple latent specular components which can be simply\nestimated recursively by modulating the sparsity during decomposition. Our\nmodel-driven {\\em RSFNet} estimates these factors by unrolling the optimization\ninto network layers requiring only a few scalars to be learned. The resultant\nfactors are interpretable by design and can be fused for different image\nenhancement tasks via a network or combined directly by the user in a\ncontrollable fashion. Based on RSFNet, we detail a zero-reference Low Light\nEnhancement (LLE) application trained without paired or unpaired supervision.\nOur system improves the state-of-the-art performance on standard benchmarks and\nachieves better generalization on multiple other datasets. We also integrate\nour factors with other task specific fusion networks for applications like\nderaining, deblurring and dehazing with negligible overhead thereby\nhighlighting the multi-domain and multi-task generalizability of our proposed\nRSFNet. The code and data is released for reproducibility on the project\nhomepage.", "keywords": ["Low-level vision", "Efficient and scalable vision"], "authors_list": ["Saurabh Saini", "P. J. Narayanan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f52c"}, "filepath": "data/2405.15188.png", "tags": [], "_media_type": "image", "_rand": 0.9995992795450919, "arXiv_link": "https://arxiv.org/abs/2405.15188", "other_link": "", "title": "Draw Step by Step: Reconstructing CAD Construction Sequences from Point Clouds via Multimodal Diffusion.", "abstract": "Reverse engineering CAD models from raw geometry is a classic but challenging\nresearch problem. In particular, reconstructing the CAD modeling sequence from\npoint clouds provides great interpretability and convenience for editing. To\nimprove upon this problem, we introduce geometric guidance into the\nreconstruction network. Our proposed model, PS-CAD, reconstructs the CAD\nmodeling sequence one step at a time. At each step, we provide two forms of\ngeometric guidance. First, we provide the geometry of surfaces where the\ncurrent reconstruction differs from the complete model as a point cloud. This\nhelps the framework to focus on regions that still need work. Second, we use\ngeometric analysis to extract a set of planar prompts, that correspond to\ncandidate surfaces where a CAD extrusion step could be started. Our framework\nhas three major components. 
Geometric guidance computation extracts the two\ntypes of geometric guidance. Single-step reconstruction computes a single\ncandidate CAD modeling step for each provided prompt. Single-step selection\nselects among the candidate CAD modeling steps. The process continues until the\nreconstruction is completed. Our quantitative results show a significant\nimprovement across all metrics. For example, on the dataset DeepCAD, PS-CAD\nimproves upon the best published SOTA method by reducing the geometry errors\n(CD and HD) by 10%, and the structural error (ECD metric) by about 15%.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Weijian Ma", "Shuaiqi Chen", "Yunzhong Lou", "Xueyang Li", "Xiangdong Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f52d"}, "filepath": "data/2403.03881.png", "tags": [], "_media_type": "image", "_rand": 0.9996091067673064, "arXiv_link": "https://arxiv.org/abs/2403.03881", "other_link": "", "title": "D$^4$M: Dataset Distillation via Disentangled Diffusion Model", "abstract": "The efficacy of machine learning has traditionally relied on the availability\nof increasingly larger datasets. However, large datasets pose storage\nchallenges and contain non-influential samples, which could be ignored during\ntraining without impacting the final accuracy of the model. In response to\nthese limitations, the concept of distilling the information on a dataset into\na condensed set of (synthetic) samples, namely a distilled dataset, emerged.\nOne crucial aspect is the selected architecture (usually ConvNet) for linking\nthe original and synthetic datasets. However, the final accuracy is lower if\nthe employed model architecture differs from the model used during\ndistillation. Another challenge is the generation of high-resolution images,\ne.g., 128x128 and higher. In this paper, we propose Latent Dataset Distillation\nwith Diffusion Models (LD3M) that combine diffusion in latent space with\ndataset distillation to tackle both challenges. LD3M incorporates a novel\ndiffusion process tailored for dataset distillation, which improves the\ngradient norms for learning synthetic images. By adjusting the number of\ndiffusion steps, LD3M also offers a straightforward way of controlling the\ntrade-off between speed and accuracy. We evaluate our approach in several\nImageNet subsets and for high-resolution images (128x128 and 256x256). As a\nresult, LD3M consistently outperforms state-of-the-art distillation techniques\nby up to 4.8 p.p. and 4.2 p.p. for 1 and 10 images per class, respectively.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Duo Su", "Junjie Hou", "Weizhi Gao", "Yingjie Tian", "Bowen Tang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f52e"}, "filepath": "data/2404.08636.png", "tags": [], "_media_type": "image", "_rand": 0.9998125036108522, "arXiv_link": "https://arxiv.org/abs/2404.08636", "other_link": "https://github.com/mbanani/probe3d.", "title": "Probing the 3D Awareness of Visual Foundation Models", "abstract": "Recent advances in large-scale pretraining have yielded visual foundation\nmodels with strong capabilities. 
Not only can recent models generalize to\narbitrary images for their training task, their intermediate representations\nare useful for other visual tasks such as detection and segmentation. Given\nthat such models can classify, delineate, and localize objects in 2D, we ask\nwhether they also represent their 3D structure? In this work, we analyze the 3D\nawareness of visual foundation models. We posit that 3D awareness implies that\nrepresentations (1) encode the 3D structure of the scene and (2) consistently\nrepresent the surface across views. We conduct a series of experiments using\ntask-specific probes and zero-shot inference procedures on frozen features. Our\nexperiments reveal several limitations of the current models. Our code and\nanalysis can be found at https://github.com/mbanani/probe3d.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Mohamed El Banani", "Amit Raj", "Kevis-kokitsi Maninis", "Abhishek Kar", "Yuanzhen Li", "Michael Rubinstein", "Deqing Sun", "Leonidas Guibas", "Justin Johnson", "Varun Jampani"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f52f"}, "filepath": "data/2307.06949.png", "tags": [], "_media_type": "image", "_rand": 0.9994381092313459, "arXiv_link": "https://arxiv.org/abs/2307.06949", "other_link": "https://hyperdreambooth.github.io", "title": "HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models", "abstract": "Personalization has emerged as a prominent aspect within the field of\ngenerative AI, enabling the synthesis of individuals in diverse contexts and\nstyles, while retaining high-fidelity to their identities. However, the process\nof personalization presents inherent challenges in terms of time and memory\nrequirements. Fine-tuning each personalized model needs considerable GPU time\ninvestment, and storing a personalized model per subject can be demanding in\nterms of storage capacity. To overcome these challenges, we propose\nHyperDreamBooth-a hypernetwork capable of efficiently generating a small set of\npersonalized weights from a single image of a person. By composing these\nweights into the diffusion model, coupled with fast finetuning, HyperDreamBooth\ncan generate a person's face in various contexts and styles, with high subject\ndetails while also preserving the model's crucial knowledge of diverse styles\nand semantic modifications. Our method achieves personalization on faces in\nroughly 20 seconds, 25x faster than DreamBooth and 125x faster than Textual\nInversion, using as few as one reference image, with the same quality and style\ndiversity as DreamBooth. Also our method yields a model that is 10000x smaller\nthan a normal DreamBooth model. 
Project page: https://hyperdreambooth.github.io", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Nataniel Ruiz", "Yuanzhen Li", "Varun Jampani", "Wei Wei", "Tingbo Hou", "Yael Pritch", "Neal Wadhwa", "Michael Rubinstein", "Kfir Aberman"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f530"}, "filepath": "data/2312.01407.png", "tags": [], "_media_type": "image", "_rand": 0.9998663668105654, "arXiv_link": "https://arxiv.org/abs/2312.01407", "other_link": "", "title": "VideoRF: Rendering Dynamic Radiance Fields as 2D Feature Video Streams", "abstract": "Neural Radiance Fields (NeRFs) excel in photorealistically rendering static\nscenes. However, rendering dynamic, long-duration radiance fields on ubiquitous\ndevices remains challenging, due to data storage and computational constraints.\nIn this paper, we introduce VideoRF, the first approach to enable real-time\nstreaming and rendering of dynamic radiance fields on mobile platforms. At the\ncore is a serialized 2D feature image stream representing the 4D radiance field\nall in one. We introduce a tailored training scheme directly applied to this 2D\ndomain to impose the temporal and spatial redundancy of the feature image\nstream. By leveraging the redundancy, we show that the feature image stream can\nbe efficiently compressed by 2D video codecs, which allows us to exploit video\nhardware accelerators to achieve real-time decoding. On the other hand, based\non the feature image stream, we propose a novel rendering pipeline for VideoRF,\nwhich has specialized space mappings to query radiance properties efficiently.\nPaired with a deferred shading model, VideoRF has the capability of real-time\nrendering on mobile devices thanks to its efficiency. We have developed a\nreal-time interactive player that enables online streaming and rendering of\ndynamic scenes, offering a seamless and immersive free-viewpoint experience\nacross a range of devices, from desktops to mobile phones.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Liao Wang", "Kaixin Yao", "Chengcheng Guo", "Zhirui Zhang", "Qiang Hu", "Jingyi Yu", "Lan Xu", "Minye Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f531"}, "filepath": "data/2311.03356v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992317459599078, "arXiv_link": "https://arxiv.org/abs/2311.03356v1", "other_link": "https://mbzuai-oryx.github.io/groundingLMM.", "title": "GLaMM: Pixel Grounding Large Multimodal Model", "abstract": "Large Multimodal Models (LMMs) extend Large Language Models to the vision\ndomain. Initial efforts towards LMMs used holistic images and text prompts to\ngenerate ungrounded textual responses. Very recently, region-level LMMs have\nbeen used to generate visually grounded responses. However, they are limited to\nonly referring a single object category at a time, require users to specify the\nregions in inputs, or cannot offer dense pixel-wise object grounding. 
In this\nwork, we present Grounding LMM (GLaMM), the first model that can generate\nnatural language responses seamlessly intertwined with corresponding object\nsegmentation masks. GLaMM not only grounds objects appearing in the\nconversations but is flexible enough to accept both textual and optional visual\nprompts (region of interest) as input. This empowers users to interact with the\nmodel at various levels of granularity, both in textual and visual domains. Due\nto the lack of standard benchmarks for the novel setting of generating visually\ngrounded detailed conversations, we introduce a comprehensive evaluation\nprotocol with our curated grounded conversations. Our proposed Grounded\nConversation Generation (GCG) task requires densely grounded concepts in\nnatural scenes at a large-scale. To this end, we propose a densely annotated\nGrounding-anything Dataset (GranD) using our proposed automated annotation\npipeline that encompasses 7.5M unique concepts grounded in a total of 810M\nregions available with segmentation masks. Besides GCG, GLaMM also performs\neffectively on several downstream tasks e.g., referring expression\nsegmentation, image and region-level captioning and vision-language\nconversations. Project Page: https://mbzuai-oryx.github.io/groundingLMM.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Hanoona Rasheed", "Muhammad Maaz", "Sahal Shaji Mullappilly", "Abdelrahman Shaker", "Salman Khan", "Hisham Cholakkal", "Rao Anwer", "Eric P. Xing", "Ming-Hsuan Yang", "Fahad Shahbaz Khan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f532"}, "filepath": "data/2401.14398.png", "tags": [], "_media_type": "image", "_rand": 0.9990080295704163, "arXiv_link": "https://arxiv.org/abs/2401.14398", "other_link": "", "title": "pix2gestalt: Amodal Segmentation by Synthesizing Wholes", "abstract": "We introduce pix2gestalt, a framework for zero-shot amodal segmentation,\nwhich learns to estimate the shape and appearance of whole objects that are\nonly partially visible behind occlusions. By capitalizing on large-scale\ndiffusion models and transferring their representations to this task, we learn\na conditional diffusion model for reconstructing whole objects in challenging\nzero-shot cases, including examples that break natural and physical priors,\nsuch as art. As training data, we use a synthetically curated dataset\ncontaining occluded objects paired with their whole counterparts. Experiments\nshow that our approach outperforms supervised baselines on established\nbenchmarks. 
Our model can furthermore be used to significantly improve the\nperformance of existing object recognition and 3D reconstruction methods in the\npresence of occlusions.", "keywords": [], "authors_list": ["Ege Ozguroglu", "Ruoshi Liu", "D\u00eddac Sur\u00eds", "Dian Chen", "Achal Dave", "Pavel Tokmakov", "Carl Vondrick"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f533"}, "filepath": "data/2404.03925.png", "tags": [], "_media_type": "image", "_rand": 0.9996500849494124, "arXiv_link": "https://arxiv.org/abs/2404.03925", "other_link": "", "title": "LightOctree: Lightweight 3D Spatially-Coherent Indoor Lighting Estimation", "abstract": "We present a lightweight solution for estimating spatially-coherent indoor\nlighting from a single RGB image. Previous methods for estimating illumination\nusing volumetric representations have overlooked the sparse distribution of\nlight sources in space, necessitating substantial memory and computational\nresources for achieving high-quality results. We introduce a unified, voxel\noctree-based illumination estimation framework to produce 3D spatially-coherent\nlighting. Additionally, a differentiable voxel octree cone tracing rendering\nlayer is proposed to eliminate regular volumetric representation throughout the\nentire process and ensure the retention of features across different frequency\ndomains. This reduction significantly decreases spatial usage and required\nfloating-point operations without substantially compromising precision.\nExperimental results demonstrate that our approach achieves high-quality\ncoherent estimation with minimal cost compared to previous methods.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding", "Computational imaging and physics-based vision"], "authors_list": ["Xuecan Wang", "Shibang Xiao", "Xiaohui Liang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f534"}, "filepath": "data/2404.06270.png", "tags": [], "_media_type": "image", "_rand": 0.9999744140337999, "arXiv_link": "https://arxiv.org/abs/2404.06270", "other_link": "https://npucvr.github.io/GaGS/", "title": "3D Geometry-aware Deformable Gaussian Splatting for Dynamic View Synthesis", "abstract": "In this paper, we propose a 3D geometry-aware deformable Gaussian Splatting\nmethod for dynamic view synthesis. Existing neural radiance fields (NeRF) based\nsolutions learn the deformation in an implicit manner, which cannot incorporate\n3D scene geometry. Therefore, the learned deformation is not necessarily\ngeometrically coherent, which results in unsatisfactory dynamic view synthesis\nand 3D dynamic reconstruction. Recently, 3D Gaussian Splatting provides a new\nrepresentation of the 3D scene, building upon which the 3D geometry could be\nexploited in learning the complex 3D deformation. Specifically, the scenes are\nrepresented as a collection of 3D Gaussian, where each 3D Gaussian is optimized\nto move and rotate over time to model the deformation. To enforce the 3D scene\ngeometry constraint during deformation, we explicitly extract 3D geometry\nfeatures and integrate them in learning the 3D deformation. 
In this way, our\nsolution achieves 3D geometry-aware deformation modeling, which enables\nimproved dynamic view synthesis and 3D dynamic reconstruction. Extensive\nexperimental results on both synthetic and real datasets prove the superiority\nof our solution, which achieves new state-of-the-art performance.\n The project is available at https://npucvr.github.io/GaGS/", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Zhicheng Lu", "xiang guo", "Le Hui", "Tianrui Chen", "Min Yang", "Xiao Tang", "feng zhu", "Yuchao Dai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f535"}, "filepath": "data/2404.05231.png", "tags": [], "_media_type": "image", "_rand": 0.9998997162764574, "arXiv_link": "https://arxiv.org/abs/2404.05231", "other_link": "", "title": "PromptAD: Learning Prompts with only Normal Samples for Few-Shot Anomaly Detection", "abstract": "The vision-language model has brought great improvement to few-shot\nindustrial anomaly detection, which usually needs to design of hundreds of\nprompts through prompt engineering. For automated scenarios, we first use\nconventional prompt learning with many-class paradigm as the baseline to\nautomatically learn prompts but found that it can not work well in one-class\nanomaly detection. To address the above problem, this paper proposes a\none-class prompt learning method for few-shot anomaly detection, termed\nPromptAD. First, we propose semantic concatenation which can transpose normal\nprompts into anomaly prompts by concatenating normal prompts with anomaly\nsuffixes, thus constructing a large number of negative samples used to guide\nprompt learning in one-class setting. Furthermore, to mitigate the training\nchallenge caused by the absence of anomaly images, we introduce the concept of\nexplicit anomaly margin, which is used to explicitly control the margin between\nnormal prompt features and anomaly prompt features through a hyper-parameter.\nFor image-level/pixel-level anomaly detection, PromptAD achieves first place in\n11/12 few-shot settings on MVTec and VisA.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Qihang Ma", "Zhizhong Zhang", "Xin Tan", "Yanyun Qu", "Chengwei Chen", "Yuan Xie", "Lizhuang Ma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f536"}, "filepath": "data/2307.08873.png", "tags": [], "_media_type": "image", "_rand": 0.9998893221906388, "arXiv_link": "https://arxiv.org/abs/2307.08873", "other_link": "", "title": "From Variance to Veracity: Unbundling and Mitigating Gradient Variance in Differentiable Bundle Adjustment Layers", "abstract": "Restricting the variance of a policy's return is a popular choice in\nrisk-averse Reinforcement Learning (RL) due to its clear mathematical\ndefinition and easy interpretability. Traditional methods directly restrict the\ntotal return variance. Recent methods restrict the per-step reward variance as\na proxy. We thoroughly examine the limitations of these variance-based methods,\nsuch as sensitivity to numerical scale and hindering of policy learning, and\npropose to use an alternative risk measure, Gini deviation, as a substitute. 
We\nstudy various properties of this new risk measure and derive a policy gradient\nalgorithm to minimize it. Empirical evaluation in domains where risk-aversion\ncan be clearly defined, shows that our algorithm can mitigate the limitations\nof variance-based risk measures and achieves high return with low risk in terms\nof variance and Gini deviation when others fail to learn a reasonable policy.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Swaminathan Gurumurthy", "Karnik Ram", "Bingqing Chen", "Zachary Manchester", "Zico Kolter"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f537"}, "filepath": "data/2404.11156.png", "tags": [], "_media_type": "image", "_rand": 0.9994335181805802, "arXiv_link": "https://arxiv.org/abs/2404.11156", "other_link": "", "title": "Learning SO(3)-Invariant Semantic Correspondence via Local Shape Transform", "abstract": "Establishing accurate 3D correspondences between shapes stands as a pivotal\nchallenge with profound implications for computer vision and robotics. However,\nexisting self-supervised methods for this problem assume perfect input shape\nalignment, restricting their real-world applicability. In this work, we\nintroduce a novel self-supervised Rotation-Invariant 3D correspondence learner\nwith Local Shape Transform, dubbed RIST, that learns to establish dense\ncorrespondences between shapes even under challenging intra-class variations\nand arbitrary orientations. Specifically, RIST learns to dynamically formulate\nan SO(3)-invariant local shape transform for each point, which maps the\nSO(3)-equivariant global shape descriptor of the input shape to a local shape\ndescriptor. These local shape descriptors are provided as inputs to our decoder\nto facilitate point cloud self- and cross-reconstruction. Our proposed\nself-supervised training pipeline encourages semantically corresponding points\nfrom different shapes to be mapped to similar local shape descriptors, enabling\nRIST to establish dense point-wise correspondences. RIST demonstrates\nstate-of-the-art performances on 3D part label transfer and semantic keypoint\ntransfer given arbitrarily rotated point cloud pairs, outperforming existing\nmethods by significant margins.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Chunghyun Park", "Seungwook Kim", "Jaesik Park", "Minsu Cho"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f538"}, "filepath": "data/2403.15019.png", "tags": [], "_media_type": "image", "_rand": 0.9992909627237869, "arXiv_link": "https://arxiv.org/abs/2403.15019", "other_link": "https://github.com/peoplelu/BSNet}{https://github.com/peoplelu/BSNet}.", "title": "BSNet: Box-Supervised Simulation-assisted Mean Teacher for 3D Instance Segmentation", "abstract": "3D instance segmentation (3DIS) is a crucial task, but point-level\nannotations are tedious in fully supervised settings. Thus, using bounding\nboxes (bboxes) as annotations has shown great potential. 
The current mainstream\napproach is a two-step process, involving the generation of pseudo-labels from\nbox annotations and the training of a 3DIS network with the pseudo-labels.\nHowever, due to the presence of intersections among bboxes, not every point has\na determined instance label, especially in overlapping areas. To generate\nhigher quality pseudo-labels and achieve more precise weakly supervised 3DIS\nresults, we propose the Box-Supervised Simulation-assisted Mean Teacher for 3D\nInstance Segmentation (BSNet), which devises a novel pseudo-labeler called\nSimulation-assisted Transformer. The labeler consists of two main components.\nThe first is Simulation-assisted Mean Teacher, which introduces Mean Teacher\nfor the first time in this task and constructs simulated samples to assist the\nlabeler in acquiring prior knowledge about overlapping areas. To better model\nlocal-global structure, we also propose Local-Global Aware Attention as the\ndecoder for teacher and student labelers. Extensive experiments conducted on\nthe ScanNetV2 and S3DIS datasets verify the superiority of our designs. Code is\navailable at\n\\href{https://github.com/peoplelu/BSNet}{https://github.com/peoplelu/BSNet}.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Jiahao Lu", "Jiacheng Deng", "Tianzhu Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f539"}, "filepath": "data/2404.00103.png", "tags": [], "_media_type": "image", "_rand": 0.9994105648358628, "arXiv_link": "https://arxiv.org/abs/2404.00103", "other_link": "", "title": "PikeLPN: Mitigating Overlooked Inefficiencies of Low-Precision Neural Networks", "abstract": "Low-precision quantization is recognized for its efficacy in neural network\noptimization. Our analysis reveals that non-quantized elementwise operations\nwhich are prevalent in layers such as parameterized activation functions, batch\nnormalization, and quantization scaling dominate the inference cost of\nlow-precision models. These non-quantized elementwise operations are commonly\noverlooked in SOTA efficiency metrics such as Arithmetic Computation Effort\n(ACE). In this paper, we propose ACEv2 - an extended version of ACE which\noffers a better alignment with the inference cost of quantized models and their\nenergy consumption on ML hardware. Moreover, we introduce PikeLPN, a model that\naddresses these efficiency issues by applying quantization to both elementwise\noperations and multiply-accumulate operations. In particular, we present a\nnovel quantization technique for batch normalization layers named QuantNorm\nwhich allows for quantizing the batch normalization parameters without\ncompromising the model performance. Additionally, we propose applying Double\nQuantization where the quantization scaling parameters are quantized.\nFurthermore, we recognize and resolve the issue of distribution mismatch in\nSeparable Convolution layers by introducing Distribution-Heterogeneous\nQuantization which enables quantizing them to low-precision. 
PikeLPN achieves\nPareto-optimality in efficiency-accuracy trade-off with up to 3X efficiency\nimprovement compared to SOTA low-precision models.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Marina Neseem", "Conor McCullough", "Randy Hsin", "Chas Leichner", "Shan Li", "In Suk Chong", "Andrew Howard", "Lukasz Lew", "Sherief Reda", "Ville-Mikko Rautio", "Daniele Moro"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f53a"}, "filepath": "data/2403.04765.png", "tags": [], "_media_type": "image", "_rand": 0.9994861902185868, "arXiv_link": "https://arxiv.org/abs/2403.04765", "other_link": "https://zju3dv.github.io/efficientloftr.", "title": "Efficient LoFTR: Semi-Dense Local Feature Matching with Sparse-Like Speed", "abstract": "We present a novel method for efficiently producing semi-dense matches across\nimages. Previous detector-free matcher LoFTR has shown remarkable matching\ncapability in handling large-viewpoint change and texture-poor scenarios but\nsuffers from low efficiency. We revisit its design choices and derive multiple\nimprovements for both efficiency and accuracy. One key observation is that\nperforming the transformer over the entire feature map is redundant due to\nshared local information, therefore we propose an aggregated attention\nmechanism with adaptive token selection for efficiency. Furthermore, we find\nspatial variance exists in LoFTR's fine correlation module, which is adverse to\nmatching accuracy. A novel two-stage correlation layer is proposed to achieve\naccurate subpixel correspondences for accuracy improvement. Our efficiency\noptimized model is $\\sim 2.5\\times$ faster than LoFTR which can even surpass\nstate-of-the-art efficient sparse matching pipeline SuperPoint + LightGlue.\nMoreover, extensive experiments show that our method can achieve higher\naccuracy compared with competitive semi-dense matchers, with considerable\nefficiency benefits. This opens up exciting prospects for large-scale or\nlatency-sensitive applications such as image retrieval and 3D reconstruction.\nProject page: https://zju3dv.github.io/efficientloftr.", "keywords": [], "authors_list": ["Yifan Wang", "Xingyi He", "Sida Peng", "Dongli Tan", "Xiaowei Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f53b"}, "filepath": "data/2405.00181.png", "tags": [], "_media_type": "image", "_rand": 0.9991748286482857, "arXiv_link": "https://arxiv.org/abs/2405.00181", "other_link": "https://github.com/fesvhtr/CUVA.", "title": "Uncovering What, Why and How: A Comprehensive Benchmark for Causation Understanding of Video Anomaly", "abstract": "Video anomaly understanding (VAU) aims to automatically comprehend unusual\noccurrences in videos, thereby enabling various applications such as traffic\nsurveillance and industrial manufacturing. While existing VAU benchmarks\nprimarily concentrate on anomaly detection and localization, our focus is on\nmore practicality, prompting us to raise the following crucial questions: \"what\nanomaly occurred?\", \"why did it happen?\", and \"how severe is this abnormal\nevent?\". In pursuit of these answers, we present a comprehensive benchmark for\nCausation Understanding of Video Anomaly (CUVA). 
Specifically, each instance of\nthe proposed benchmark involves three sets of human annotations to indicate the\n\"what\", \"why\" and \"how\" of an anomaly, including 1) anomaly type, start and end\ntimes, and event descriptions, 2) natural language explanations for the cause\nof an anomaly, and 3) free text reflecting the effect of the abnormality. In\naddition, we also introduce MMEval, a novel evaluation metric designed to\nbetter align with human preferences for CUVA, facilitating the measurement of\nexisting LLMs in comprehending the underlying cause and corresponding effect of\nvideo anomalies. Finally, we propose a novel prompt-based method that can serve\nas a baseline approach for the challenging CUVA. We conduct extensive\nexperiments to show the superiority of our evaluation metric and the\nprompt-based approach. Our code and dataset are available at\nhttps://github.com/fesvhtr/CUVA.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Hang Du", "Sicheng Zhang", "Binzhu Xie", "Guoshun Nan", "Jiayang Zhang", "Junrui Xu", "Hangyu Liu", "Sicong Leng", "Jiangming Liu", "Hehe Fan", "Dajiu Huang", "Jing Feng", "Linli Chen", "Can Zhang", "Xuhuan Li", "Hao Zhang", "Jianhang Chen", "Qimei Cui", "Xiaofeng Tao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f53c"}, "filepath": "data/2311.16099.png", "tags": [], "_media_type": "image", "_rand": 0.999126191881061, "arXiv_link": "https://arxiv.org/abs/2311.16099", "other_link": "", "title": "GART: Gaussian Articulated Template Models", "abstract": "We introduce Gaussian Articulated Template Model GART, an explicit,\nefficient, and expressive representation for non-rigid articulated subject\ncapturing and rendering from monocular videos. GART utilizes a mixture of\nmoving 3D Gaussians to explicitly approximate a deformable subject's geometry\nand appearance. It takes advantage of a categorical template model prior (SMPL,\nSMAL, etc.) with learnable forward skinning while further generalizing to more\ncomplex non-rigid deformations with novel latent bones. GART can be\nreconstructed via differentiable rendering from monocular videos in seconds or\nminutes and rendered in novel poses faster than 150fps.", "keywords": ["Efficient and scalable vision", "Biometrics and human analysis", "Image and video generation and manipulation"], "authors_list": ["Jiahui Lei", "Yufu Wang", "Georgios Pavlakos", "Lingjie Liu", "Kostas Daniilidis"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f53d"}, "filepath": "data/2312.00109.png", "tags": [], "_media_type": "image", "_rand": 0.9990560853570628, "arXiv_link": "https://arxiv.org/abs/2312.00109", "other_link": "", "title": "Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering", "abstract": "Neural rendering methods have significantly advanced photo-realistic 3D scene\nrendering in various academic and industrial applications. The recent 3D\nGaussian Splatting method has achieved the state-of-the-art rendering quality\nand speed combining the benefits of both primitive-based representations and\nvolumetric representations. 
However, it often leads to heavily redundant\nGaussians that try to fit every training view, neglecting the underlying scene\ngeometry. Consequently, the resulting model becomes less robust to significant\nview changes, texture-less area and lighting effects. We introduce Scaffold-GS,\nwhich uses anchor points to distribute local 3D Gaussians, and predicts their\nattributes on-the-fly based on viewing direction and distance within the view\nfrustum. Anchor growing and pruning strategies are developed based on the\nimportance of neural Gaussians to reliably improve the scene coverage. We show\nthat our method effectively reduces redundant Gaussians while delivering\nhigh-quality rendering. We also demonstrates an enhanced capability to\naccommodate scenes with varying levels-of-detail and view-dependent\nobservations, without sacrificing the rendering speed.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding", "Efficient and scalable vision"], "authors_list": ["Tao Lu", "Mulin Yu", "Linning Xu", "Yuanbo Xiangli", "Limin Wang", "Dahua Lin", "Bo Dai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f53e"}, "filepath": "data/2404.00874.png", "tags": [], "_media_type": "image", "_rand": 0.9991423739315661, "arXiv_link": "https://arxiv.org/abs/2404.00874", "other_link": "", "title": "DiSR-NeRF: Diffusion-Guided View-Consistent Super-Resolution NeRF", "abstract": "We present DiSR-NeRF, a diffusion-guided framework for view-consistent\nsuper-resolution (SR) NeRF. Unlike prior works, we circumvent the requirement\nfor high-resolution (HR) reference images by leveraging existing powerful 2D\nsuper-resolution models. Nonetheless, independent SR 2D images are often\ninconsistent across different views. We thus propose Iterative 3D\nSynchronization (I3DS) to mitigate the inconsistency problem via the inherent\nmulti-view consistency property of NeRF. Specifically, our I3DS alternates\nbetween upscaling low-resolution (LR) rendered images with diffusion models,\nand updating the underlying 3D representation with standard NeRF training. We\nfurther introduce Renoised Score Distillation (RSD), a novel score-distillation\nobjective for 2D image resolution. Our RSD combines features from ancestral\nsampling and Score Distillation Sampling (SDS) to generate sharp images that\nare also LR-consistent. Qualitative and quantitative results on both synthetic\nand real-world datasets demonstrate that our DiSR-NeRF can achieve better\nresults on NeRF super-resolution compared with existing works. 
Code and video\nresults available at the project website.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Jie Long Lee", "Chen Li", "Gim Hee Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f53f"}, "filepath": "data/2311.12754.png", "tags": [], "_media_type": "image", "_rand": 0.9990995490826198, "arXiv_link": "https://arxiv.org/abs/2311.12754", "other_link": "https://github.com/huang-yh/SelfOcc.", "title": "SelfOcc: Self-Supervised Vision-Based 3D Occupancy Prediction", "abstract": "3D occupancy prediction is an important task for the robustness of\nvision-centric autonomous driving, which aims to predict whether each point is\noccupied in the surrounding 3D space. Existing methods usually require 3D\noccupancy labels to produce meaningful results. However, it is very laborious\nto annotate the occupancy status of each voxel. In this paper, we propose\nSelfOcc to explore a self-supervised way to learn 3D occupancy using only video\nsequences. We first transform the images into the 3D space (e.g., bird's eye\nview) to obtain 3D representation of the scene. We directly impose constraints\non the 3D representations by treating them as signed distance fields. We can\nthen render 2D images of previous and future frames as self-supervision signals\nto learn the 3D representations. We propose an MVS-embedded strategy to\ndirectly optimize the SDF-induced weights with multiple depth proposals. Our\nSelfOcc outperforms the previous best method SceneRF by 58.7% using a single\nframe as input on SemanticKITTI and is the first self-supervised work that\nproduces reasonable 3D occupancy for surround cameras on nuScenes. SelfOcc\nproduces high-quality depth and achieves state-of-the-art results on novel\ndepth synthesis, monocular depth estimation, and surround-view depth estimation\non the SemanticKITTI, KITTI-2015, and nuScenes, respectively. Code:\nhttps://github.com/huang-yh/SelfOcc.", "keywords": [], "authors_list": ["Yuanhui Huang", "Wenzhao Zheng", "Borui Zhang", "Jie Zhou", "Jiwen Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f540"}, "filepath": "data/2401.07770.png", "tags": [], "_media_type": "image", "_rand": 0.9992725224522117, "arXiv_link": "https://arxiv.org/abs/2401.07770", "other_link": "", "title": "Seeing the Unseen: Visual Common Sense for Semantic Placement", "abstract": "Computer vision tasks typically involve describing what is present in an\nimage (e.g. classification, detection, segmentation, and captioning). We study\na visual common sense task that requires understanding what is not present.\nSpecifically, given an image (e.g. of a living room) and name of an object\n(\"cushion\"), a vision system is asked to predict semantically-meaningful\nregions (masks or bounding boxes) in the image where that object could be\nplaced or is likely be placed by humans (e.g. on the sofa). We call this task:\nSemantic Placement (SP) and believe that such common-sense visual understanding\nis critical for assitive robots (tidying a house), and AR devices\n(automatically rendering an object in the user's space). 
Studying the invisible\nis hard. Datasets for image description are typically constructed by curating\nrelevant images and asking humans to annotate the contents of the image;\nneither of those two steps are straightforward for objects not present in the\nimage. We overcome this challenge by operating in the opposite direction: we\nstart with an image of an object in context from web, and then remove that\nobject from the image via inpainting. This automated pipeline converts\nunstructured web data into a dataset comprising pairs of images with/without\nthe object. Using this, we collect a novel dataset, with ${\\sim}1.3$M images\nacross $9$ object categories, and train a SP prediction model called CLIP-UNet.\nCLIP-UNet outperforms existing VLMs and baselines that combine semantic priors\nwith object detectors on real-world and simulated images. In our user studies,\nwe find that the SP masks predicted by CLIP-UNet are favored $43.7\\%$ and\n$31.3\\%$ times when comparing against the $4$ SP baselines on real and\nsimulated images. In addition, we demonstrate leveraging SP mask predictions\nfrom CLIP-UNet enables downstream applications like building tidying robots in\nindoor environments.", "keywords": ["Vision applications for social good and ethics"], "authors_list": ["Ram Ramrakhya", "Aniruddha Kembhavi", "Dhruv Batra", "Zsolt Kira", "Kuo-Hao Zeng", "Luca Weihs"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f541"}, "filepath": "data/2403.02249.png", "tags": [], "_media_type": "image", "_rand": 0.9994205994152076, "arXiv_link": "https://arxiv.org/abs/2403.02249", "other_link": "", "title": "Non-autoregressive Sequence-to-Sequence Vision-Language Models", "abstract": "Sequence-to-sequence vision-language models are showing promise, but their\napplicability is limited by their inference latency due to their autoregressive\nway of generating predictions. We propose a parallel decoding\nsequence-to-sequence vision-language model, trained with a Query-CTC loss, that\nmarginalizes over multiple inference paths in the decoder. This allows us to\nmodel the joint distribution of tokens, rather than restricting to conditional\ndistribution as in an autoregressive model. 
The resulting model, NARVL,\nachieves performance on-par with its state-of-the-art autoregressive\ncounterpart, but is faster at inference time, reducing from the linear\ncomplexity associated with the sequential generation of tokens to a paradigm of\nconstant time joint inference.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Kunyu Shi", "Qi Dong", "Luis Goncalves", "Zhuowen Tu", "Stefano Soatto"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f542"}, "filepath": "data/2312.11269.png", "tags": [], "_media_type": "image", "_rand": 0.9992672163780333, "arXiv_link": "https://arxiv.org/abs/2312.11269", "other_link": "", "title": "Spherical Mask: Coarse-to-Fine 3D Point Cloud Instance Segmentation with Spherical Representation", "abstract": "Coarse-to-fine 3D instance segmentation methods show weak performances\ncompared to recent Grouping-based, Kernel-based and Transformer-based methods.\nWe argue that this is due to two limitations: 1) Instance size overestimation\nby axis-aligned bounding box(AABB) 2) False negative error accumulation from\ninaccurate box to the refinement phase. In this work, we introduce Spherical\nMask, a novel coarse-to-fine approach based on spherical representation,\novercoming those two limitations with several benefits. Specifically, our\ncoarse detection estimates each instance with a 3D polygon using a center and\nradial distance predictions, which avoids excessive size estimation of AABB. To\ncut the error propagation in the existing coarse-to-fine approaches, we\nvirtually migrate points based on the polygon, allowing all foreground points,\nincluding false negatives, to be refined. During inference, the proposal and\npoint migration modules run in parallel and are assembled to form binary masks\nof instances. We also introduce two margin-based losses for the point migration\nto enforce corrections for the false positives/negatives and cohesion of\nforeground points, significantly improving the performance. Experimental\nresults from three datasets, such as ScanNetV2, S3DIS, and STPLS3D, show that\nour proposed method outperforms existing works, demonstrating the effectiveness\nof the new instance representation with spherical coordinates.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Sangyun Shin", "Kaichen Zhou", "Madhu Vankadari", "Andrew Markham", "Niki Trigoni"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f543"}, "filepath": "data/2311.11417.png", "tags": [], "_media_type": "image", "_rand": 0.9995138499762167, "arXiv_link": "https://arxiv.org/abs/2311.11417", "other_link": "", "title": "DiffSCI: Zero-Shot Snapshot Compressive Imaging via Iterative Spectral Diffusion Model", "abstract": "This paper endeavors to advance the precision of snapshot compressive imaging\n(SCI) reconstruction for multispectral image (MSI). To achieve this, we\nintegrate the advantageous attributes of established SCI techniques and an\nimage generative model, propose a novel structured zero-shot diffusion model,\ndubbed DiffSCI. 
DiffSCI leverages the structural insights from the deep prior\nand optimization-based methodologies, complemented by the generative\ncapabilities offered by the contemporary denoising diffusion model.\nSpecifically, firstly, we employ a pre-trained diffusion model, which has been\ntrained on a substantial corpus of RGB images, as the generative denoiser\nwithin the Plug-and-Play framework for the first time. This integration allows\nfor the successful completion of SCI reconstruction, especially in the case\nthat current methods struggle to address effectively. Secondly, we\nsystematically account for spectral band correlations and introduce a robust\nmethodology to mitigate wavelength mismatch, thus enabling seamless adaptation\nof the RGB diffusion model to MSIs. Thirdly, an accelerated algorithm is\nimplemented to expedite the resolution of the data subproblem. This\naugmentation not only accelerates the convergence rate but also elevates the\nquality of the reconstruction process. We present extensive testing to show\nthat DiffSCI exhibits discernible performance enhancements over prevailing\nself-supervised and zero-shot approaches, surpassing even supervised\ntransformer counterparts across both simulated and real datasets. Our code will\nbe available.", "keywords": ["Computational imaging and physics-based vision"], "authors_list": ["Zhenghao Pan", "Haijin Zeng", "Jiezhang Cao", "Kai Zhang", "Yongyong Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f544"}, "filepath": "data/2312.07661.png", "tags": [], "_media_type": "image", "_rand": 0.9995251484399102, "arXiv_link": "https://arxiv.org/abs/2312.07661", "other_link": "", "title": "CLIP as RNN: Segment Countless Visual Concepts without Training Endeavor", "abstract": "Existing open-vocabulary image segmentation methods require a fine-tuning\nstep on mask labels and/or image-text datasets. Mask labels are\nlabor-intensive, which limits the number of categories in segmentation\ndatasets. Consequently, the vocabulary capacity of pre-trained VLMs is severely\nreduced after fine-tuning. However, without fine-tuning, VLMs trained under\nweak image-text supervision tend to make suboptimal mask predictions. To\nalleviate these issues, we introduce a novel recurrent framework that\nprogressively filters out irrelevant texts and enhances mask quality without\ntraining efforts. The recurrent unit is a two-stage segmenter built upon a\nfrozen VLM. Thus, our model retains the VLM's broad vocabulary space and equips\nit with segmentation ability. Experiments show that our method outperforms not\nonly the training-free counterparts, but also those fine-tuned with millions of\ndata samples, and sets the new state-of-the-art records for both zero-shot\nsemantic and referring segmentation. Concretely, we improve the current record\nby 28.8, 16.0, and 6.9 mIoU on Pascal VOC, COCO Object, and Pascal Context.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Shuyang Sun", "Runjia Li", "Philip H.S. 
Torr", "Xiuye Gu", "Siyang Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language", "Machine Learning", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f545"}, "filepath": "data/2403.12236.png", "tags": [], "_media_type": "image", "_rand": 0.9999560554192554, "arXiv_link": "https://arxiv.org/abs/2403.12236", "other_link": "", "title": "Improving Generalization via Meta-Learning on Hard Samples", "abstract": "Learned reweighting (LRW) approaches to supervised learning use an\noptimization criterion to assign weights for training instances, in order to\nmaximize performance on a representative validation dataset. We pose and\nformalize the problem of optimized selection of the validation set used in LRW\ntraining, to improve classifier generalization. In particular, we show that\nusing hard-to-classify instances in the validation set has both a theoretical\nconnection to, and strong empirical evidence of generalization. We provide an\nefficient algorithm for training this meta-optimized model, as well as a simple\ntrain-twice heuristic for careful comparative study. We demonstrate that LRW\nwith easy validation data performs consistently worse than LRW with hard\nvalidation data, establishing the validity of our meta-optimization problem.\nOur proposed algorithm outperforms a wide range of baselines on a range of\ndatasets and domain shift challenges (Imagenet-1K, CIFAR-100, Clothing-1M,\nCAMELYON, WILDS, etc.), with ~1% gains using VIT-B on Imagenet. We also show\nthat using naturally hard examples for validation (Imagenet-R / Imagenet-A) in\nLRW training for Imagenet improves performance on both clean and naturally hard\ntest instances by 1-2%. Secondary analyses show that using hard validation data\nin an LRW framework improves margins on test data, hinting at the mechanism\nunderlying our empirical gains. We believe this work opens up new research\ndirections for the meta-optimization of meta-learning in a supervised learning\ncontext.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Nishant Jain", "Arun Suggala", "Pradeep Shenoy"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f546"}, "filepath": "data/2312.04534.png", "tags": [], "_media_type": "image", "_rand": 0.9999338589203658, "arXiv_link": "https://arxiv.org/abs/2312.04534", "other_link": "", "title": "PICTURE: PhotorealistIC virtual Try-on from UnconstRained dEsigns", "abstract": "In this paper, we propose a novel virtual try-on from unconstrained designs\n(ucVTON) task to enable photorealistic synthesis of personalized composite\nclothing on input human images. Unlike prior arts constrained by specific input\ntypes, our method allows flexible specification of style (text or image) and\ntexture (full garment, cropped sections, or texture patches) conditions. To\naddress the entanglement challenge when using full garment images as\nconditions, we develop a two-stage pipeline with explicit disentanglement of\nstyle and texture. In the first stage, we generate a human parsing map\nreflecting the desired style conditioned on the input. In the second stage, we\ncomposite textures onto the parsing map areas based on the texture input. 
To\nrepresent complex and non-stationary textures that have never been achieved in\nprevious fashion editing works, we first propose extracting hierarchical and\nbalanced CLIP features and applying position encoding in VTON. Experiments\ndemonstrate superior synthesis quality and personalization enabled by our\nmethod. The flexible control over style and texture mixing brings virtual\ntry-on to a new level of user experience for online shopping and fashion\ndesign.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis", "Multimodal models and vision-language models"], "authors_list": ["Shuliang Ning", "Duomin Wang", "Yipeng Qin", "Zirong Jin", "Baoyuan Wang", "Xiaoguang Han"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f547"}, "filepath": "data/2405.13194.png", "tags": [], "_media_type": "image", "_rand": 0.9995339771168462, "arXiv_link": "https://arxiv.org/abs/2405.13194", "other_link": "", "title": "KPConvX: Modernizing Kernel Point Convolution with Kernel Attention", "abstract": "In the field of deep point cloud understanding, KPConv is a unique\narchitecture that uses kernel points to locate convolutional weights in space,\ninstead of relying on Multi-Layer Perceptron (MLP) encodings. While it\ninitially achieved success, it has since been surpassed by recent MLP networks\nthat employ updated designs and training strategies. Building upon the kernel\npoint principle, we present two novel designs: KPConvD (depthwise KPConv), a\nlighter design that enables the use of deeper architectures, and KPConvX, an\ninnovative design that scales the depthwise convolutional weights of KPConvD\nwith kernel attention values. Using KPConvX with a modern architecture and\ntraining strategy, we are able to outperform current state-of-the-art\napproaches on the ScanObjectNN, Scannetv2, and S3DIS datasets. We validate our\ndesign choices through ablation studies and release our code and models.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Hugues Thomas", "Yao-Hung Hubert Tsai", "Timothy Barfoot", "Jian Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f548"}, "filepath": "data/2312.00081.png", "tags": [], "_media_type": "image", "_rand": 0.9995832384976294, "arXiv_link": "https://arxiv.org/abs/2312.00081", "other_link": "https://github.com/wjpoom/SPEC.", "title": "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding", "abstract": "Vision language models (VLM) have demonstrated remarkable performance across\nvarious downstream tasks. However, understanding fine-grained visual-linguistic\nconcepts, such as attributes and inter-object relationships, remains a\nsignificant challenge. While several benchmarks aim to evaluate VLMs in finer\ngranularity, their primary focus remains on the linguistic aspect, neglecting\nthe visual dimension. Here, we highlight the importance of evaluating VLMs from\nboth a textual and visual perspective. We introduce a progressive pipeline to\nsynthesize images that vary in a specific attribute while ensuring consistency\nin all other aspects. 
Utilizing this data engine, we carefully design a\nbenchmark, SPEC, to diagnose the comprehension of object size, position,\nexistence, and count. Subsequently, we conduct a thorough evaluation of four\nleading VLMs on SPEC. Surprisingly, their performance is close to random guess,\nrevealing significant limitations. With this in mind, we propose a simple yet\neffective approach to optimize VLMs in fine-grained understanding, achieving\nsignificant improvements on SPEC without compromising the zero-shot\nperformance. Results on two additional fine-grained benchmarks also show\nconsistent improvements, further validating the transferability of our\napproach. Code and data are available at https://github.com/wjpoom/SPEC.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Wujian Peng", "Sicheng Xie", "Zuyao You", "Shiyi Lan", "Zuxuan Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f549"}, "filepath": "data/2403.18920.png", "tags": [], "_media_type": "image", "_rand": 0.9991014566105724, "arXiv_link": "https://arxiv.org/abs/2403.18920", "other_link": "", "title": "CPR: Retrieval Augmented Generation for Copyright Protection", "abstract": "Retrieval Augmented Generation (RAG) is emerging as a flexible and robust\ntechnique to adapt models to private users data without training, to handle\ncredit attribution, and to allow efficient machine unlearning at scale.\nHowever, RAG techniques for image generation may lead to parts of the retrieved\nsamples being copied in the model's output. To reduce risks of leaking private\ninformation contained in the retrieved set, we introduce Copy-Protected\ngeneration with Retrieval (CPR), a new method for RAG with strong copyright\nprotection guarantees in a mixed-private setting for diffusion models.CPR\nallows to condition the output of diffusion models on a set of retrieved\nimages, while also guaranteeing that unique identifiable information about\nthose example is not exposed in the generated outputs. In particular, it does\nso by sampling from a mixture of public (safe) distribution and private (user)\ndistribution by merging their diffusion scores at inference. We prove that CPR\nsatisfies Near Access Freeness (NAF) which bounds the amount of information an\nattacker may be able to extract from the generated images. We provide two\nalgorithms for copyright protection, CPR-KL and CPR-Choose. Unlike previously\nproposed rejection-sampling-based NAF methods, our methods enable efficient\ncopyright-protected sampling with a single run of backward diffusion. We show\nthat our method can be applied to any pre-trained conditional diffusion model,\nsuch as Stable Diffusion or unCLIP. 
In particular, we empirically show that\napplying CPR on top of unCLIP improves quality and text-to-image alignment of\nthe generated results (81.4 to 83.17 on TIFA benchmark), while enabling credit\nattribution, copy-right protection, and deterministic, constant time,\nunlearning.", "keywords": ["Image and video generation and manipulation", "Efficient and scalable vision"], "authors_list": ["Aditya Golatkar", "Alessandro Achille", "Luca Zancato", "Yu-Xiang Wang", "Ashwin Swaminathan", "Stefano Soatto", "Stefano Soatto"], "category_name": "Cryptography and Security", "all_categories": ["Cryptography and Security", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f54a"}, "filepath": "data/2312.08459.png", "tags": [], "_media_type": "image", "_rand": 0.9998305562825303, "arXiv_link": "https://arxiv.org/abs/2312.08459", "other_link": "", "title": "FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models", "abstract": "We introduce FaceTalk, a novel generative approach designed for synthesizing\nhigh-fidelity 3D motion sequences of talking human heads from input audio\nsignal. To capture the expressive, detailed nature of human heads, including\nhair, ears, and finer-scale eye movements, we propose to couple speech signal\nwith the latent space of neural parametric head models to create high-fidelity,\ntemporally coherent motion sequences. We propose a new latent diffusion model\nfor this task, operating in the expression space of neural parametric head\nmodels, to synthesize audio-driven realistic head sequences. In the absence of\na dataset with corresponding NPHM expressions to audio, we optimize for these\ncorrespondences to produce a dataset of temporally-optimized NPHM expressions\nfit to audio-video recordings of people talking. To the best of our knowledge,\nthis is the first work to propose a generative approach for realistic and\nhigh-quality motion synthesis of volumetric human heads, representing a\nsignificant advancement in the field of audio-driven 3D animation. Notably, our\napproach stands out in its ability to generate plausible motion sequences that\ncan produce high-fidelity head animation coupled with the NPHM shape space. Our\nexperimental results substantiate the effectiveness of FaceTalk, consistently\nachieving superior and visually natural motion, encompassing diverse facial\nexpressions and styles, outperforming existing methods by 75% in perceptual\nuser study evaluation.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Shivangi Aneja", "Justus Thies", "Angela Dai", "Matthias Nie\u00dfner"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Graphics", "Sound", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f54b"}, "filepath": "data/2401.18084.png", "tags": [], "_media_type": "image", "_rand": 0.9997741678583579, "arXiv_link": "https://arxiv.org/abs/2401.18084", "other_link": "https://cfeng16.github.io/UniTouch/", "title": "Binding Touch to Everything: Learning Unified Multimodal Tactile Representations", "abstract": "The ability to associate touch with other modalities has huge implications\nfor humans and computational systems. 
However, multimodal learning with touch\nremains challenging due to the expensive data collection process and\nnon-standardized sensor outputs. We introduce UniTouch, a unified tactile model\nfor vision-based touch sensors connected to multiple modalities, including\nvision, language, and sound. We achieve this by aligning our UniTouch\nembeddings to pretrained image embeddings already associated with a variety of\nother modalities. We further propose learnable sensor-specific tokens, allowing\nthe model to learn from a set of heterogeneous tactile sensors, all at the same\ntime. UniTouch is capable of conducting various touch sensing tasks in the\nzero-shot setting, from robot grasping prediction to touch image question\nanswering. To the best of our knowledge, UniTouch is the first to demonstrate\nsuch capabilities. Project page: https://cfeng16.github.io/UniTouch/", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Fengyu Yang", "Chao Feng", "Ziyang Chen", "Hyoungseob Park", "Daniel Wang", "Yiming Dou", "Ziyao Zeng", "xien chen", "Suchisrit Gangopadhyay", "Andrew Owens", "Alex Wong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f54c"}, "filepath": "data/2312.11666.png", "tags": [], "_media_type": "image", "_rand": 0.9990180724139774, "arXiv_link": "https://arxiv.org/abs/2312.11666", "other_link": "", "title": "Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles", "abstract": "We present HAAR, a new strand-based generative model for 3D human hairstyles.\nSpecifically, based on textual inputs, HAAR produces 3D hairstyles that could\nbe used as production-level assets in modern computer graphics engines. Current\nAI-based generative models take advantage of powerful 2D priors to reconstruct\n3D content in the form of point clouds, meshes, or volumetric functions.\nHowever, by using the 2D priors, they are intrinsically limited to only\nrecovering the visual parts. Highly occluded hair structures can not be\nreconstructed with those methods, and they only model the ''outer shell'',\nwhich is not ready to be used in physics-based rendering or simulation\npipelines. In contrast, we propose a first text-guided generative method that\nuses 3D hair strands as an underlying representation. Leveraging 2D visual\nquestion-answering (VQA) systems, we automatically annotate synthetic hair\nmodels that are generated from a small set of artist-created hairstyles. This\nallows us to train a latent diffusion model that operates in a common hairstyle\nUV space. In qualitative and quantitative studies, we demonstrate the\ncapabilities of the proposed model and compare it to existing hairstyle\ngeneration approaches.", "keywords": ["Multimodal models and vision-language models", "Biometrics and human analysis"], "authors_list": ["Vanessa Sklyarova", "Egor Zakharov", "Otmar Hilliges", "Michael J. 
Black", "Justus Thies"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f54d"}, "filepath": "data/2310.02110.png", "tags": [], "_media_type": "image", "_rand": 0.9996721655974721, "arXiv_link": "https://arxiv.org/abs/2310.02110", "other_link": "", "title": "Sieve: Multimodal Dataset Pruning using Image-Captioning Models", "abstract": "Vision-Language Models (VLMs) are pretrained on large, diverse, and noisy\nweb-crawled datasets. This underscores the critical need for dataset pruning,\nas the quality of these datasets is strongly correlated with the performance of\nVLMs on downstream tasks. Using CLIPScore from a pretrained model to only train\nmodels using highly-aligned samples is one of the most successful methods for\npruning. We argue that this approach suffers from multiple limitations\nincluding: false positives and negatives due to CLIP's pretraining on noisy\nlabels. We propose a pruning signal, Sieve, that employs synthetic captions\ngenerated by image-captioning models pretrained on small, diverse, and\nwell-aligned image-text pairs to evaluate the alignment of noisy image-text\npairs. To bridge the gap between the limited diversity of generated captions\nand the high diversity of alternative text (alt-text), we estimate the semantic\ntextual similarity in the embedding space of a language model pretrained on\nunlabeled text corpus. Using DataComp, a multimodal dataset filtering\nbenchmark, when evaluating on 38 downstream tasks, our pruning approach,\nsurpasses CLIPScore by 2.6\\% and 1.7\\% on medium and large scale respectively.\nIn addition, on retrieval tasks, Sieve leads to a significant improvement of\n2.7% and 4.5% on medium and large scale respectively.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Anas Mahmoud", "Mostafa Elhoushi", "Amro Abbas", "Yu Yang", "Newsha Ardalani", "Hugh Leather", "Ari Morcos"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f54e"}, "filepath": "data/2404.01297.png", "tags": [], "_media_type": "image", "_rand": 0.9993749625391702, "arXiv_link": "https://arxiv.org/abs/2404.01297", "other_link": "https://github.com/google-research/scenic.", "title": "Streaming Dense Video Captioning", "abstract": "An ideal model for dense video captioning -- predicting captions localized\ntemporally in a video -- should be able to handle long input videos, predict\nrich, detailed textual descriptions, and be able to produce outputs before\nprocessing the entire video. Current state-of-the-art models, however, process\na fixed number of downsampled frames, and make a single full prediction after\nseeing the whole video. We propose a streaming dense video captioning model\nthat consists of two novel components: First, we propose a new memory module,\nbased on clustering incoming tokens, which can handle arbitrarily long videos\nas the memory is of a fixed size. Second, we develop a streaming decoding\nalgorithm that enables our model to make predictions before the entire video\nhas been processed. Our model achieves this streaming ability, and\nsignificantly improves the state-of-the-art on three dense video captioning\nbenchmarks: ActivityNet, YouCook2 and ViTT. 
Our code is released at\nhttps://github.com/google-research/scenic.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Xingyi Zhou", "Anurag Arnab", "Shyamal Buch", "Shen Yan", "Austin Myers", "Xuehan Xiong", "Arsha Nagrani", "Cordelia Schmid"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f54f"}, "filepath": "data/2312.01531.png", "tags": [], "_media_type": "image", "_rand": 0.999824022211017, "arXiv_link": "https://arxiv.org/abs/2312.01531", "other_link": "https://lyclyc52.github.io/SANeRF-HQ/.", "title": "SANeRF-HQ: Segment Anything for NeRF in High Quality", "abstract": "Recently, the Segment Anything Model (SAM) has showcased remarkable\ncapabilities of zero-shot segmentation, while NeRF (Neural Radiance Fields) has\ngained popularity as a method for various 3D problems beyond novel view\nsynthesis. Though there exist initial attempts to incorporate these two methods\ninto 3D segmentation, they face the challenge of accurately and consistently\nsegmenting objects in complex scenarios. In this paper, we introduce the\nSegment Anything for NeRF in High Quality (SANeRF-HQ) to achieve high-quality\n3D segmentation of any target object in a given scene. SANeRF-HQ utilizes SAM\nfor open-world object segmentation guided by user-supplied prompts, while\nleveraging NeRF to aggregate information from different viewpoints. To overcome\nthe aforementioned challenges, we employ density field and RGB similarity to\nenhance the accuracy of segmentation boundary during the aggregation.\nEmphasizing on segmentation accuracy, we evaluate our method on multiple NeRF\ndatasets where high-quality ground-truths are available or manually annotated.\nSANeRF-HQ shows a significant quality improvement over state-of-the-art methods\nin NeRF object segmentation, provides higher flexibility for object\nlocalization, and enables more consistent object segmentation across multiple\nviews. Results and code are available at the project site:\nhttps://lyclyc52.github.io/SANeRF-HQ/.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Yichen Liu", "Benran Hu", "Chi-Keung Tang", "Yu-Wing Tai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f550"}, "filepath": "data/2403.00483.png", "tags": [], "_media_type": "image", "_rand": 0.9991544207604476, "arXiv_link": "https://arxiv.org/abs/2403.00483", "other_link": "https://corleone-huang.github.io/realcustom/.", "title": "\\emph{RealCustom}: Narrowing Real Text Word for Real-Time Open-Domain Text-to-Image Customization", "abstract": "Text-to-image customization, which aims to synthesize text-driven images for\nthe given subjects, has recently revolutionized content creation. Existing\nworks follow the pseudo-word paradigm, i.e., represent the given subjects as\npseudo-words and then compose them with the given text. However, the inherent\nentangled influence scope of pseudo-words with the given text results in a\ndual-optimum paradox, i.e., the similarity of the given subjects and the\ncontrollability of the given text could not be optimal simultaneously. 
We\npresent RealCustom that, for the first time, disentangles similarity from\ncontrollability by precisely limiting subject influence to relevant parts only,\nachieved by gradually narrowing real text word from its general connotation to\nthe specific subject and using its cross-attention to distinguish relevance.\nSpecifically, RealCustom introduces a novel \"train-inference\" decoupled\nframework: (1) during training, RealCustom learns general alignment between\nvisual conditions to original textual conditions by a novel adaptive scoring\nmodule to adaptively modulate influence quantity; (2) during inference, a novel\nadaptive mask guidance strategy is proposed to iteratively update the influence\nscope and influence quantity of the given subjects to gradually narrow the\ngeneration of the real text word. Comprehensive experiments demonstrate the\nsuperior real-time customization ability of RealCustom in the open domain,\nachieving both unprecedented similarity of the given subjects and\ncontrollability of the given text for the first time. The project page is\nhttps://corleone-huang.github.io/realcustom/.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Mengqi Huang", "Zhendong Mao", "Mingcong Liu", "Qian HE", "Yongdong Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f551"}, "filepath": "data/2310.08255.png", "tags": [], "_media_type": "image", "_rand": 0.9999137828044652, "arXiv_link": "https://arxiv.org/abs/2310.08255", "other_link": "", "title": "Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification", "abstract": "Vision-Language Models (VLMs) such as CLIP are trained on large amounts of\nimage-text pairs, resulting in remarkable generalization across several data\ndistributions. However, in several cases, their expensive training and data\ncollection/curation costs do not justify the end application. This motivates a\nvendor-client paradigm, where a vendor trains a large-scale VLM and grants only\ninput-output access to clients on a pay-per-query basis in a black-box setting.\nThe client aims to minimize inference cost by distilling the VLM to a student\nmodel using the limited available task-specific data, and further deploying\nthis student model in the downstream application. While naive distillation\nlargely improves the In-Domain (ID) accuracy of the student, it fails to\ntransfer the superior out-of-distribution (OOD) generalization of the VLM\nteacher using the limited available labeled images. To mitigate this, we\npropose Vision-Language to Vision - Align, Distill, Predict (VL2V-ADiP), which\nfirst aligns the vision and language modalities of the teacher model with the\nvision modality of a pre-trained student model, and further distills the\naligned VLM representations to the student. This maximally retains the\npre-trained features of the student, while also incorporating the rich\nrepresentations of the VLM image encoder and the superior generalization of the\ntext embeddings. 
The proposed approach achieves state-of-the-art results on the\nstandard Domain Generalization benchmarks in a black-box teacher setting as\nwell as a white-box setting where the weights of the VLM are accessible.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Sravanti Addepalli", "Ashish Asokan", "Lakshay Sharma", "R. Venkatesh Babu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f552"}, "filepath": "data/2401.13082.png", "tags": [], "_media_type": "image", "_rand": 0.9992068587679986, "arXiv_link": "https://arxiv.org/abs/2401.13082", "other_link": "", "title": "TransLoc4D: Transformer-based 4D Radar Place Recognition", "abstract": "Visual place recognition is a challenging task in the field of computer\nvision, and autonomous robotics and vehicles, which aims to identify a location\nor a place from visual inputs. Contemporary methods in visual place recognition\nemploy convolutional neural networks and utilize every region within the image\nfor the place recognition task. However, the presence of dynamic and\ndistracting elements in the image may impact the effectiveness of the place\nrecognition process. Therefore, it is meaningful to focus on task-relevant\nregions of the image for improved recognition. In this paper, we present\nPlaceFormer, a novel transformer-based approach for visual place recognition.\nPlaceFormer employs patch tokens from the transformer to create global image\ndescriptors, which are then used for image retrieval. To re-rank the retrieved\nimages, PlaceFormer merges the patch tokens from the transformer to form\nmulti-scale patches. Utilizing the transformer's self-attention mechanism, it\nselects patches that correspond to task-relevant areas in an image. These\nselected patches undergo geometric verification, generating similarity scores\nacross different patch sizes. Subsequently, spatial scores from each patch size\nare fused to produce a final similarity score. This score is then used to\nre-rank the images initially retrieved using global image descriptors.\nExtensive experiments on benchmark datasets demonstrate that PlaceFormer\noutperforms several state-of-the-art methods in terms of accuracy and\ncomputational efficiency, requiring less time and memory.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Guohao Peng", "Heshan Li", "Yangyang Zhao", "Jun Zhang", "Zhenyu Wu", "Pengyu Zheng", "Danwei Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f553"}, "filepath": "data/2312.05387.png", "tags": [], "_media_type": "image", "_rand": 0.9991045234173561, "arXiv_link": "https://arxiv.org/abs/2312.05387", "other_link": "", "title": "Domain Gap Embeddings for Generative Dataset Augmentation", "abstract": "Despite the huge effort in developing novel regularizers for Domain\nGeneralization (DG), adding simple data augmentation to the vanilla ERM which\nis a practical implementation of the Vicinal Risk Minimization principle (VRM)\n\\citep{chapelle2000vicinal} outperforms or stays competitive with many of the\nproposed regularizers. 
The VRM reduces the estimation error in ERM by replacing\nthe point-wise kernel estimates with a more precise estimation of true data\ndistribution that reduces the gap between data points \\textbf{within each\ndomain}. However, in the DG setting, the estimation error of true data\ndistribution by ERM is mainly caused by the distribution shift \\textbf{between\ndomains} which cannot be fully addressed by simple data augmentation techniques\nwithin each domain. Inspired by this limitation of VRM, we propose a novel data\naugmentation named Cross Domain Generative Augmentation (CDGA) that replaces\nthe pointwise kernel estimates in ERM with new density estimates in the\n\\textbf{vicinity of domain pairs} so that the gap between domains is further\nreduced. To this end, CDGA, which is built upon latent diffusion models (LDM),\ngenerates synthetic images to fill the gap between all domains and as a result,\nreduces the non-iidness. We show that CDGA outperforms SOTA DG methods under\nthe Domainbed benchmark. To explain the effectiveness of CDGA, we generate more\nthan 5 Million synthetic images and perform extensive ablation studies\nincluding data scaling laws, distribution visualization, domain shift\nquantification, adversarial robustness, and loss landscape analysis.", "keywords": [], "authors_list": ["Yinong Wang", "Younjoon Chung", "Chen Henry Wu", "Fernando De la Torre"], "category_name": "Machine Learning", "all_categories": ["Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f554"}, "filepath": "data/2401.01823.png", "tags": [], "_media_type": "image", "_rand": 0.9994199743275395, "arXiv_link": "https://arxiv.org/abs/2401.01823", "other_link": "", "title": "Detours for Navigating Instructional Videos", "abstract": "We introduce the video detours problem for navigating instructional videos.\nGiven a source video and a natural language query asking to alter the how-to\nvideo's current path of execution in a certain way, the goal is to find a\nrelated ''detour video'' that satisfies the requested alteration. To address\nthis challenge, we propose VidDetours, a novel video-language approach that\nlearns to retrieve the targeted temporal segments from a large repository of\nhow-to's using video-and-text conditioned queries. Furthermore, we devise a\nlanguage-based pipeline that exploits how-to video narration text to create\nweakly supervised training data. We demonstrate our idea applied to the domain\nof how-to cooking videos, where a user can detour from their current recipe to\nfind steps with alternate ingredients, tools, and techniques. 
Validating on a\nground truth annotated dataset of 16K samples, we show our model's significant\nimprovements over the best available methods for video retrieval and question\nanswering, with recall rates exceeding the state of the art by 35%.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Kumar Ashutosh", "Zihui Xue", "Tushar Nagarajan", "Kristen Grauman"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f555"}, "filepath": "data/2404.02145.png", "tags": [], "_media_type": "image", "_rand": 0.9990252323756164, "arXiv_link": "https://arxiv.org/abs/2404.02145", "other_link": "", "title": "Iterated Learning Improves Compositionality in Large Vision-Language Models", "abstract": "A fundamental characteristic common to both human vision and natural language\nis their compositional nature. Yet, despite the performance gains contributed\nby large vision and language pretraining, recent investigations find that\nmost, if not all, of our state-of-the-art vision-language models struggle at\ncompositionality. They are unable to distinguish between images of \"a girl in\nwhite facing a man in black\" and \"a girl in black facing a man in white\".\nMoreover, prior work suggests that compositionality doesn't arise with scale:\nlarger model sizes or training data don't help. This paper develops a new\niterated training algorithm that incentivizes compositionality. We draw on\ndecades of cognitive science research that identifies cultural transmission, the\nneed to teach a new generation, as a necessary inductive prior that incentivizes\nhumans to develop compositional languages. Specifically, we reframe\nvision-language contrastive learning as the Lewis Signaling Game between a\nvision agent and a language agent, and operationalize cultural transmission by\niteratively resetting one of the agent's weights during training. After every\niteration, this training paradigm induces representations that become \"easier\nto learn\", a property of compositional languages: e.g. our model trained on\nCC3M and CC12M improves standard CLIP by 4.7% and 4.0% respectively in the\nSugarCrepe benchmark.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Chenhao Zheng", "Jieyu Zhang", "Aniruddha Kembhavi", "Ranjay Krishna"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f556"}, "filepath": "data/2404.09451.png", "tags": [], "_media_type": "image", "_rand": 0.9993635534295402, "arXiv_link": "https://arxiv.org/abs/2404.09451", "other_link": "", "title": "Contrastive Mean-Shift Learning for Generalized Category Discovery", "abstract": "We address the problem of generalized category discovery (GCD) that aims to\npartition a partially labeled collection of images; only a small part of the\ncollection is labeled and the total number of target classes is unknown. To\naddress this generalized image clustering problem, we revisit the mean-shift\nalgorithm, i.e., a classic, powerful technique for mode seeking, and\nincorporate it into a contrastive learning framework.
The proposed method,\ndubbed Contrastive Mean-Shift (CMS) learning, trains an image encoder to\nproduce representations with better clustering properties by an iterative\nprocess of mean shift and contrastive update. Experiments demonstrate that our\nmethod, both in settings with and without the total number of clusters being\nknown, achieves state-of-the-art performance on six public GCD benchmarks\nwithout bells and whistles.", "keywords": [], "authors_list": ["Sua Choi", "Dahyun Kang", "Minsu Cho"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f557"}, "filepath": "data/2403.14158v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995478555046189, "arXiv_link": "https://arxiv.org/abs/2403.14158v1", "other_link": "", "title": "Volumetric Environment Representation for Vision-Language Navigation", "abstract": "Vision-language navigation (VLN) requires an agent to navigate through an 3D\nenvironment based on visual observations and natural language instructions. It\nis clear that the pivotal factor for successful navigation lies in the\ncomprehensive scene understanding. Previous VLN agents employ monocular\nframeworks to extract 2D features of perspective views directly. Though\nstraightforward, they struggle for capturing 3D geometry and semantics, leading\nto a partial and incomplete environment representation. To achieve a\ncomprehensive 3D representation with fine-grained details, we introduce a\nVolumetric Environment Representation (VER), which voxelizes the physical world\ninto structured 3D cells. For each cell, VER aggregates multi-view 2D features\ninto such a unified 3D space via 2D-3D sampling. Through coarse-to-fine feature\nextraction and multi-task learning for VER, our agent predicts 3D occupancy, 3D\nroom layout, and 3D bounding boxes jointly. Based on online collected VERs, our\nagent performs volume state estimation and builds episodic memory for\npredicting the next step. Experimental results show our environment\nrepresentations from multi-task learning lead to evident performance gains on\nVLN. Our model achieves state-of-the-art performance across VLN benchmarks\n(R2R, REVERIE, and R4R).", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Liu", "Wenguan Wang", "Yi Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f558"}, "filepath": "data/2405.14832.png", "tags": [], "_media_type": "image", "_rand": 0.9990157608312639, "arXiv_link": "https://arxiv.org/abs/2405.14832", "other_link": "https://nju-3dv.github.io/projects/Direct3D/.", "title": "DIRECT-3D: Learning Direct Text-to-3D Generation on Massive Noisy 3D Data", "abstract": "Generating high-quality 3D assets from text and images has long been\nchallenging, primarily due to the absence of scalable 3D representations\ncapable of capturing intricate geometry distributions. In this work, we\nintroduce Direct3D, a native 3D generative model scalable to in-the-wild input\nimages, without requiring a multiview diffusion model or SDS optimization. 
Our\napproach comprises two primary components: a Direct 3D Variational Auto-Encoder\n(D3D-VAE) and a Direct 3D Diffusion Transformer (D3D-DiT). D3D-VAE efficiently\nencodes high-resolution 3D shapes into a compact and continuous latent triplane\nspace. Notably, our method directly supervises the decoded geometry using a\nsemi-continuous surface sampling strategy, diverging from previous methods\nrelying on rendered images as supervision signals. D3D-DiT models the\ndistribution of encoded 3D latents and is specifically designed to fuse\npositional information from the three feature maps of the triplane latent,\nenabling a native 3D generative model scalable to large-scale 3D datasets.\nAdditionally, we introduce an innovative image-to-3D generation pipeline\nincorporating semantic and pixel-level image conditions, allowing the model to\nproduce 3D shapes consistent with the provided conditional image input.\nExtensive experiments demonstrate the superiority of our large-scale\npre-trained Direct3D over previous image-to-3D approaches, achieving\nsignificantly better generation quality and generalization ability, thus\nestablishing a new state-of-the-art for 3D content creation. Project page:\nhttps://nju-3dv.github.io/projects/Direct3D/.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Qihao Liu", "Yi Zhang", "Song Bai", "Adam Kortylewski", "Alan L. Yuille"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f559"}, "filepath": "data/2403.01849.png", "tags": [], "_media_type": "image", "_rand": 0.9997239125398099, "arXiv_link": "https://arxiv.org/abs/2403.01849", "other_link": "https://github.com/TreeLLi/APT.", "title": "One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models", "abstract": "Large pre-trained Vision-Language Models (VLMs) like CLIP, despite having\nremarkable generalization ability, are highly vulnerable to adversarial\nexamples. This work studies the adversarial robustness of VLMs from the novel\nperspective of the text prompt instead of the extensively studied model weights\n(frozen in this work). We first show that the effectiveness of both adversarial\nattack and defense are sensitive to the used text prompt. Inspired by this, we\npropose a method to improve resilience to adversarial attacks by learning a\nrobust text prompt for VLMs. The proposed method, named Adversarial Prompt\nTuning (APT), is effective while being both computationally and data efficient.\nExtensive experiments are conducted across 15 datasets and 4 data sparsity\nschemes (from 1-shot to full training data settings) to show APT's superiority\nover hand-engineered prompts and other state-of-the-art adaption methods. APT\ndemonstrated excellent abilities in terms of the in-distribution performance\nand the generalization under input distribution shift and across datasets.\nSurprisingly, by simply adding one learned word to the prompts, APT can\nsignificantly boost the accuracy and robustness (epsilon=4/255) over the\nhand-engineered prompts by +13% and +8.5% on average respectively. The\nimprovement further increases, in our most effective setting, to +26.4% for\naccuracy and +16.7% for robustness. 
Code is available at\nhttps://github.com/TreeLLi/APT.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Lin Li", "Haoyan Guan", "Jianing Qiu", "Michael Spratling"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f55a"}, "filepath": "data/2403.10064.png", "tags": [], "_media_type": "image", "_rand": 0.9996772959068471, "arXiv_link": "https://arxiv.org/abs/2403.10064", "other_link": "", "title": "Progressive Divide-and-Conquer via Subsampling Decomposition for Accelerated MRI", "abstract": "Deep unfolding networks (DUN) have emerged as a popular iterative framework\nfor accelerated magnetic resonance imaging (MRI) reconstruction. However,\nconventional DUN aims to reconstruct all the missing information within the\nentire null space in each iteration. Thus it could be challenging when dealing\nwith highly ill-posed degradation, usually leading to unsatisfactory\nreconstruction. In this work, we propose a Progressive Divide-And-Conquer\n(PDAC) strategy, aiming to break down the subsampling process in the actual\nsevere degradation and thus perform reconstruction sequentially. Starting from\ndecomposing the original maximum-a-posteriori problem of accelerated MRI, we\npresent a rigorous derivation of the proposed PDAC framework, which could be\nfurther unfolded into an end-to-end trainable network. Specifically, each\niterative stage in PDAC focuses on recovering a distinct moderate degradation\naccording to the decomposition. Furthermore, as part of the PDAC iteration,\nsuch decomposition is adaptively learned as an auxiliary task through a\ndegradation predictor which provides an estimation of the decomposed sampling\nmask. Following this prediction, the sampling mask is further integrated via a\nseverity conditioning module to ensure awareness of the degradation severity at\neach stage. Extensive experiments demonstrate that our proposed method achieves\nsuperior performance on the publicly available fastMRI and Stanford2D FSE\ndatasets in both multi-coil and single-coil settings.", "keywords": ["Medical imaging and biological vision", "Computational imaging and physics-based vision"], "authors_list": ["Chong Wang", "Lanqing Guo", "Yufei Wang", "Hao Cheng", "Yi Yu", "Bihan Wen"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f55b"}, "filepath": "data/2404.19417.png", "tags": [], "_media_type": "image", "_rand": 0.999289027995041, "arXiv_link": "http://export.arxiv.org/abs/2404.19417", "other_link": "", "title": "Physical Backdoor: Towards Temperature-based Backdoor Attacks in the Physical World", "abstract": "Backdoor attacks have been well-studied in visible light object detection\n(VLOD) in recent years. However, VLOD can not effectively work in dark and\ntemperature-sensitive scenarios. Instead, thermal infrared object detection\n(TIOD) is the most accessible and practical in such environments. In this\npaper, our team is the first to investigate the security vulnerabilities\nassociated with TIOD in the context of backdoor attacks, spanning both the\ndigital and physical realms. 
We introduce two novel types of backdoor attacks\non TIOD, each offering unique capabilities: Object-affecting Attack and\nRange-affecting Attack. We conduct a comprehensive analysis of key factors\ninfluencing trigger design, which include temperature, size, material, and\nconcealment. These factors, especially temperature, significantly impact the\nefficacy of backdoor attacks on TIOD. A thorough understanding of these factors\nwill serve as a foundation for designing physical triggers and temperature\ncontrolling experiments. Our study includes extensive experiments conducted in\nboth digital and physical environments. In the digital realm, we evaluate our\napproach using benchmark datasets for TIOD, achieving an Attack Success Rate\n(ASR) of up to 98.21%. In the physical realm, we test our approach in two\nreal-world settings: a traffic intersection and a parking lot, using a thermal\ninfrared camera. Here, we attain an ASR of up to 98.38%.", "keywords": [], "authors_list": ["Wen Yin", "Jian Lou", "Pan Zhou", "Yulai Xie", "Dan Feng", "Yuhua Sun", "Tailai Zhang", "Lichao Sun"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f55c"}, "filepath": "data/2311.12908.png", "tags": [], "_media_type": "image", "_rand": 0.9994713598376842, "arXiv_link": "https://arxiv.org/abs/2311.12908", "other_link": "", "title": "Diffusion Model Alignment Using Direct Preference Optimization", "abstract": "Large language models (LLMs) are fine-tuned using human comparison data with\nReinforcement Learning from Human Feedback (RLHF) methods to make them better\naligned with users' preferences. In contrast to LLMs, human preference learning\nhas not been widely explored in text-to-image diffusion models; the best\nexisting approach is to fine-tune a pretrained model using carefully curated\nhigh quality images and captions to improve visual appeal and text alignment.\nWe propose Diffusion-DPO, a method to align diffusion models to human\npreferences by directly optimizing on human comparison data. Diffusion-DPO is\nadapted from the recently developed Direct Preference Optimization (DPO), a\nsimpler alternative to RLHF which directly optimizes a policy that best\nsatisfies human preferences under a classification objective. We re-formulate\nDPO to account for a diffusion model notion of likelihood, utilizing the\nevidence lower bound to derive a differentiable objective. Using the Pick-a-Pic\ndataset of 851K crowdsourced pairwise preferences, we fine-tune the base model\nof the state-of-the-art Stable Diffusion XL (SDXL)-1.0 model with\nDiffusion-DPO. Our fine-tuned base model significantly outperforms both base\nSDXL-1.0 and the larger SDXL-1.0 model consisting of an additional refinement\nmodel in human evaluation, improving visual appeal and prompt alignment. 
We\nalso develop a variant that uses AI feedback and has comparable performance to\ntraining on human preferences, opening the door for scaling of diffusion model\nalignment methods.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Bram Wallace", "Meihua Dang", "Rafael Rafailov", "Linqi Zhou", "Aaron Lou", "Senthil Purushwalkam", "Stefano Ermon", "Caiming Xiong", "Shafiq Joty", "Nikhil Naik"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f55d"}, "filepath": "data/2310.10769.png", "tags": [], "_media_type": "image", "_rand": 0.9990578238599939, "arXiv_link": "https://arxiv.org/abs/2310.10769", "other_link": "https://rq-wu.github.io/projects/LAMP.", "title": "LAMP: Learn A Motion Pattern for Few-Shot Video Generation", "abstract": "With the impressive progress in diffusion-based text-to-image generation,\nextending such powerful generative ability to text-to-video raises enormous\nattention. Existing methods either require large-scale text-video pairs and a\nlarge number of training resources or learn motions that are precisely aligned\nwith template videos. It is non-trivial to balance a trade-off between the\ndegree of generation freedom and the resource costs for video generation. In\nour study, we present a few-shot-based tuning framework, LAMP, which enables\ntext-to-image diffusion model Learn A specific Motion Pattern with 8~16 videos\non a single GPU. Specifically, we design a first-frame-conditioned pipeline\nthat uses an off-the-shelf text-to-image model for content generation so that\nour tuned video diffusion model mainly focuses on motion learning. The\nwell-developed text-to-image techniques can provide visually pleasing and\ndiverse content as generation conditions, which highly improves video quality\nand generation freedom. To capture the features of temporal dimension, we\nexpand the pretrained 2D convolution layers of the T2I model to our novel\ntemporal-spatial motion learning layers and modify the attention blocks to the\ntemporal level. Additionally, we develop an effective inference trick,\nshared-noise sampling, which can improve the stability of videos with\ncomputational costs. Our method can also be flexibly applied to other tasks,\ne.g. real-world image animation and video editing. Extensive experiments\ndemonstrate that LAMP can effectively learn the motion pattern on limited data\nand generate high-quality videos. 
The code and models are available at\nhttps://rq-wu.github.io/projects/LAMP.", "keywords": ["Image and video generation and manipulation", "Efficient and scalable vision"], "authors_list": ["Rui-Qi Wu", "Liangyu Chen", "Tong Yang", "Chun-Le Guo", "Chongyi Li", "Xiangyu Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f55e"}, "filepath": "data/2403.19527.png", "tags": [], "_media_type": "image", "_rand": 0.9994623636344669, "arXiv_link": "https://arxiv.org/abs/2403.19527", "other_link": "", "title": "Instance-Adaptive and Geometric-Aware Keypoint Learning for Category-Level 6D Object Pose Estimation", "abstract": "Category-level 6D object pose estimation aims to estimate the rotation,\ntranslation and size of unseen instances within specific categories. In this\narea, dense correspondence-based methods have achieved leading performance.\nHowever, they do not explicitly consider the local and global geometric\ninformation of different instances, resulting in poor generalization ability to\nunseen instances with significant shape variations. To deal with this problem,\nwe propose a novel Instance-Adaptive and Geometric-Aware Keypoint Learning\nmethod for category-level 6D object pose estimation (AG-Pose), which includes\ntwo key designs: (1) The first design is an Instance-Adaptive Keypoint\nDetection module, which can adaptively detect a set of sparse keypoints for\nvarious instances to represent their geometric structures. (2) The second\ndesign is a Geometric-Aware Feature Aggregation module, which can efficiently\nintegrate the local and global geometric information into keypoint features.\nThese two modules can work together to establish robust keypoint-level\ncorrespondences for unseen instances, thus enhancing the generalization ability\nof the model.Experimental results on CAMERA25 and REAL275 datasets show that\nthe proposed AG-Pose outperforms state-of-the-art methods by a large margin\nwithout category-specific shape priors.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Xiao Lin", "Wenfei Yang", "Yuan Gao", "Tianzhu Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f55f"}, "filepath": "data/2312.00849.png", "tags": [], "_media_type": "image", "_rand": 0.999637523413345, "arXiv_link": "https://arxiv.org/abs/2312.00849", "other_link": "https://github.com/RLHF-V/RLHF-V.", "title": "RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback", "abstract": "Multimodal Large Language Models (MLLMs) have recently demonstrated\nimpressive capabilities in multimodal understanding, reasoning, and\ninteraction. However, existing MLLMs prevalently suffer from serious\nhallucination problems, generating text that is not factually grounded in\nassociated images. The problem makes existing MLLMs untrustworthy and thus\nimpractical in real-world (especially high-stakes) applications. To address the\nchallenge, we present RLHF-V, which enhances MLLM trustworthiness via behavior\nalignment from fine-grained correctional human feedback. 
Specifically, RLHF-V\ncollects human preference in the form of segment-level corrections on\nhallucinations, and performs dense direct preference optimization over the\nhuman feedback. Comprehensive experiments on five benchmarks in both automatic\nand human evaluation show that, RLHF-V can enable substantially more\ntrustworthy MLLM behaviors with promising data and computation efficiency.\nRemarkably, using 1.4k annotated data samples, RLHF-V significantly reduces the\nhallucination rate of the base MLLM by 34.8%, outperforming the concurrent\nLLaVA-RLHF trained on 10k annotated data. The final model achieves\nstate-of-the-art performance in trustworthiness among open-source MLLMs, and\nshows better robustness than GPT-4V in preventing hallucinations aroused from\nover-generalization. We open-source our code, model, and data at\nhttps://github.com/RLHF-V/RLHF-V.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Tianyu Yu", "Yuan Yao", "Haoye Zhang", "Taiwen He", "Yifeng Han", "Ganqu Cui", "Jinyi Hu", "Zhiyuan Liu", "Hai-Tao Zheng", "Maosong Sun"], "category_name": "Computation and Language", "all_categories": ["Computation and Language", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f560"}, "filepath": "data/2402.18956.png", "tags": [], "_media_type": "image", "_rand": 0.9997592341197633, "arXiv_link": "https://arxiv.org/abs/2402.18956", "other_link": "", "title": "WWW: A Unified Framework for Explaining What, Where and Why of Neural Networks by Interpretation of Neuron Concept", "abstract": "Recent advancements in neural networks have showcased their remarkable\ncapabilities across various domains. Despite these successes, the \"black box\"\nproblem still remains. Addressing this, we propose a novel framework, WWW, that\noffers the 'what', 'where', and 'why' of the neural network decisions in\nhuman-understandable terms. Specifically, WWW utilizes adaptive selection for\nconcept discovery, employing adaptive cosine similarity and thresholding\ntechniques to effectively explain 'what'. To address the 'where' and 'why', we\nproposed a novel combination of neuron activation maps (NAMs) with Shapley\nvalues, generating localized concept maps and heatmaps for individual inputs.\nFurthermore, WWW introduces a method for predicting uncertainty, leveraging\nheatmap similarities to estimate 'how' reliable the prediction is. Experimental\nevaluations of WWW demonstrate superior performance in both quantitative and\nqualitative metrics, outperforming existing methods in interpretability. 
WWW\nprovides a unified solution for explaining 'what', 'where', and 'why',\nintroducing a method for localized explanations from global interpretations and\noffering a plug-and-play solution adaptable to various architectures.", "keywords": [], "authors_list": ["Yong Hyun Ahn", "Hyeon Bae Kim", "Seong Tae Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f561"}, "filepath": "data/2404.00368.png", "tags": [], "_media_type": "image", "_rand": 0.9995916842641024, "arXiv_link": "https://arxiv.org/abs/2404.00368", "other_link": "https://feifeifeiliu.github.io/probtalk/.", "title": "Towards Variable and Coordinated Holistic Co-Speech Motion Generation", "abstract": "This paper addresses the problem of generating lifelike holistic co-speech\nmotions for 3D avatars, focusing on two key aspects: variability and\ncoordination. Variability allows the avatar to exhibit a wide range of motions\neven with similar speech content, while coordination ensures a harmonious\nalignment among facial expressions, hand gestures, and body poses. We aim to\nachieve both with ProbTalk, a unified probabilistic framework designed to\njointly model facial, hand, and body movements in speech. ProbTalk builds on\nthe variational autoencoder (VAE) architecture and incorporates three core\ndesigns. First, we introduce product quantization (PQ) to the VAE, which\nenriches the representation of complex holistic motion. Second, we devise a\nnovel non-autoregressive model that embeds 2D positional encoding into the\nproduct-quantized representation, thereby preserving essential structure\ninformation of the PQ codes. Last, we employ a secondary stage to refine the\npreliminary prediction, further sharpening the high-frequency details. Coupling\nthese three designs enables ProbTalk to generate natural and diverse holistic\nco-speech motions, outperforming several state-of-the-art methods in\nqualitative and quantitative evaluations, particularly in terms of realism. Our\ncode and model will be released for research purposes at\nhttps://feifeifeiliu.github.io/probtalk/.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Yifei Liu", "Qiong Cao", "Yandong Wen", "Huaiguang Jiang", "Changxing Ding"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f562"}, "filepath": "data/2403.01517.png", "tags": [], "_media_type": "image", "_rand": 0.9999968555617885, "arXiv_link": "https://arxiv.org/abs/2403.01517", "other_link": "", "title": "MatchU: Matching Unseen Objects for 6D Pose Estimation from RGB-D Images", "abstract": "Recent learning methods for object pose estimation require resource-intensive\ntraining for each individual object instance or category, hampering their\nscalability in real applications when confronted with previously unseen\nobjects. In this paper, we propose MatchU, a Fuse-Describe-Match strategy for\n6D pose estimation from RGB-D images. MatchU is a generic approach that fuses\n2D texture and 3D geometric cues for 6D pose prediction of unseen objects. We\nrely on learning geometric 3D descriptors that are rotation-invariant by\ndesign. 
By encoding pose-agnostic geometry, the learned descriptors naturally\ngeneralize to unseen objects and capture symmetries. To tackle ambiguous\nassociations using 3D geometry only, we fuse additional RGB information into\nour descriptor. This is achieved through a novel attention-based mechanism that\nfuses cross-modal information, together with a matching loss that leverages the\nlatent space learned from RGB data to guide the descriptor learning process.\nExtensive experiments reveal the generalizability of both the RGB-D fusion\nstrategy and the descriptor efficacy. Benefiting from the novel designs,\nMatchU surpasses all existing methods by a significant margin in terms of both\naccuracy and speed, even without the requirement of expensive re-training or\nrendering.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Junwen Huang", "Hao Yu", "Kuan-Ting Yu", "Nassir Navab", "Slobodan Ilic", "Benjamin Busam"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f563"}, "filepath": "data/2404.01156.png", "tags": [], "_media_type": "image", "_rand": 0.9993589329008998, "arXiv_link": "https://arxiv.org/abs/2404.01156", "other_link": "", "title": "SyncMask: Synchronized Attentional Masking for Fashion-centric Vision-Language Pretraining", "abstract": "Vision-language models (VLMs) have made significant strides in cross-modal\nunderstanding through large-scale paired datasets. However, in the fashion domain,\ndatasets often exhibit a disparity between the information conveyed in image\nand text. This issue stems from datasets containing multiple images of a single\nfashion item all paired with one text, leading to cases where some textual\ndetails are not visible in individual images. This mismatch, particularly when\nnon-co-occurring elements are masked, undermines the training of conventional\nVLM objectives like Masked Language Modeling and Masked Image Modeling, thereby\nhindering the model's ability to accurately align fine-grained visual and\ntextual features. Addressing this problem, we propose Synchronized attentional\nMasking (SyncMask), which generates masks that pinpoint the image patches and\nword tokens where the information co-occurs in both image and text. This\nsynchronization is accomplished by harnessing cross-attentional features\nobtained from a momentum model, ensuring a precise alignment between the two\nmodalities. Additionally, we enhance grouped batch sampling with semi-hard\nnegatives, effectively mitigating false negative issues in Image-Text Matching\nand Image-Text Contrastive learning objectives within fashion datasets.
Our\nexperiments demonstrate the effectiveness of the proposed approach,\noutperforming existing methods in three downstream tasks.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Chull Hwan Song", "Taebaek Hwang", "Jooyoung Yoon", "Shunghyun Choi", "Yeong Hyeon Gu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f564"}, "filepath": "data/2403.02969.png", "tags": [], "_media_type": "image", "_rand": 0.9997205036854055, "arXiv_link": "https://arxiv.org/abs/2403.02969", "other_link": "", "title": "Multi-modal Instruction Tuned LLMs with Fine-grained Visual Perception", "abstract": "Multimodal Large Language Model (MLLMs) leverages Large Language Models as a\ncognitive framework for diverse visual-language tasks. Recent efforts have been\nmade to equip MLLMs with visual perceiving and grounding capabilities. However,\nthere still remains a gap in providing fine-grained pixel-level perceptions and\nextending interactions beyond text-specific inputs. In this work, we propose\n{\\bf{AnyRef}}, a general MLLM model that can generate pixel-wise object\nperceptions and natural language descriptions from multi-modality references,\nsuch as texts, boxes, images, or audio. This innovation empowers users with\ngreater flexibility to engage with the model beyond textual and regional\nprompts, without modality-specific designs. Through our proposed refocusing\nmechanism, the generated grounding output is guided to better focus on the\nreferenced object, implicitly incorporating additional pixel-level supervision.\nThis simple modification utilizes attention scores generated during the\ninference of LLM, eliminating the need for extra computations while exhibiting\nperformance enhancements in both grounding masks and referring expressions.\nWith only publicly available training data, our model achieves state-of-the-art\nresults across multiple benchmarks, including diverse modality referring\nsegmentation and region-level referring expression generation.", "keywords": ["Large multimodal models and prompting techniques", "Deep learning architectures and techniques"], "authors_list": ["Junwen He", "Yifan Wang", "Lijun Wang", "Huchuan Lu", "Bin Luo", "Jun-Yan He", "Jin-Peng Lan", "Xuansong Xie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f565"}, "filepath": "data/2311.15637.png", "tags": [], "_media_type": "image", "_rand": 0.9996401479271664, "arXiv_link": "https://arxiv.org/abs/2311.15637", "other_link": "http://buaavrcg.github.io/Neural3DStrokes.", "title": "Neural 3D Strokes: Creating Stylized 3D Scenes with Vectorized 3D Strokes", "abstract": "We present Neural 3D Strokes, a novel technique to generate stylized images\nof a 3D scene at arbitrary novel views from multi-view 2D images. Different\nfrom existing methods which apply stylization to trained neural radiance fields\nat the voxel level, our approach draws inspiration from image-to-painting\nmethods, simulating the progressive painting process of human artwork with\nvector strokes. 
We develop a palette of stylized 3D strokes from basic\nprimitives and splines, and consider the 3D scene stylization task as a\nmulti-view reconstruction process based on these 3D stroke primitives. Instead\nof directly searching for the parameters of these 3D strokes, which would be\ntoo costly, we introduce a differentiable renderer that allows optimizing\nstroke parameters using gradient descent, and propose a training scheme to\nalleviate the vanishing gradient issue. The extensive evaluation demonstrates\nthat our approach effectively synthesizes 3D scenes with significant geometric\nand aesthetic stylization while maintaining a consistent appearance across\ndifferent views. Our method can be further integrated with style loss and\nimage-text contrastive models to extend its applications, including color\ntransfer and text-driven 3D scene drawing. Results and code are available at\nhttp://buaavrcg.github.io/Neural3DStrokes.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation", "Scene analysis and understanding"], "authors_list": ["Haobin Duan", "Miao Wang", "Yanxun Li", "Yong-Liang Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f566"}, "filepath": "data/2404.10603.png", "tags": [], "_media_type": "image", "_rand": 0.9999975228868728, "arXiv_link": "https://arxiv.org/abs/2404.10603", "other_link": "", "title": "Enhancing 3D Fidelity of Text-to-3D using Cross-View Correspondences", "abstract": "Leveraging multi-view diffusion models as priors for 3D optimization have\nalleviated the problem of 3D consistency, e.g., the Janus face problem or the\ncontent drift problem, in zero-shot text-to-3D models. However, the 3D\ngeometric fidelity of the output remains an unresolved issue; albeit the\nrendered 2D views are realistic, the underlying geometry may contain errors\nsuch as unreasonable concavities. In this work, we propose CorrespondentDream,\nan effective method to leverage annotation-free, cross-view correspondences\nyielded from the diffusion U-Net to provide additional 3D prior to the NeRF\noptimization process. We find that these correspondences are strongly\nconsistent with human perception, and by adopting it in our loss design, we are\nable to produce NeRF models with geometries that are more coherent with common\nsense, e.g., more smoothed object surface, yielding higher 3D fidelity. 
We\ndemonstrate the efficacy of our approach through various comparative\nqualitative results and a solid user study.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Seungwook Kim", "Kejie Li", "Xueqing Deng", "Yichun Shi", "Minsu Cho", "Peng Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f567"}, "filepath": "data/2403.10191.png", "tags": [], "_media_type": "image", "_rand": 0.9996313607997034, "arXiv_link": "https://arxiv.org/abs/2403.10191", "other_link": "", "title": "Generative Region-Language Pretraining for Open-Ended Object Detection", "abstract": "In recent research, significant attention has been devoted to the\nopen-vocabulary object detection task, aiming to generalize beyond the limited\nnumber of classes labeled during training and detect objects described by\narbitrary category names at inference. Compared with conventional object\ndetection, open vocabulary object detection largely extends the object\ndetection categories. However, it relies on calculating the similarity between\nimage regions and a set of arbitrary category names with a pretrained\nvision-and-language model. This implies that, despite its open-set nature, the\ntask still needs the predefined object categories during the inference stage.\nThis raises the question: What if we do not have exact knowledge of object\ncategories during inference? In this paper, we call such a new setting as\ngenerative open-ended object detection, which is a more general and practical\nproblem. To address it, we formulate object detection as a generative problem\nand propose a simple framework named GenerateU, which can detect dense objects\nand generate their names in a free-form way. Particularly, we employ Deformable\nDETR as a region proposal generator with a language model translating visual\nregions to object names. To assess the free-form object detection task, we\nintroduce an evaluation method designed to quantitatively measure the\nperformance of generative outcomes. Extensive experiments demonstrate strong\nzero-shot detection performance of our GenerateU. For example, on the LVIS\ndataset, our GenerateU achieves comparable results to the open-vocabulary\nobject detection method GLIP, even though the category names are not seen by\nGenerateU during inference. Code is available at: https://\ngithub.com/FoundationVision/GenerateU .", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Chuang Lin", "Yi Jiang", "Lizhen Qu", "Zehuan Yuan", "Jianfei Cai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f568"}, "filepath": "data/2403.09639.png", "tags": [], "_media_type": "image", "_rand": 0.9992710147150893, "arXiv_link": "https://arxiv.org/abs/2403.09639", "other_link": "", "title": "GroupContrast: Semantic-aware Self-supervised Representation Learning for 3D Understanding", "abstract": "Self-supervised 3D representation learning aims to learn effective\nrepresentations from large-scale unlabeled point clouds. Most existing\napproaches adopt point discrimination as the pretext task, which assigns\nmatched points in two distinct views as positive pairs and unmatched points as\nnegative pairs. 
However, this approach often results in semantically identical\npoints having dissimilar representations, leading to a high number of false\nnegatives and introducing a \"semantic conflict\" problem. To address this issue,\nwe propose GroupContrast, a novel approach that combines segment grouping and\nsemantic-aware contrastive learning. Segment grouping partitions points into\nsemantically meaningful regions, which enhances semantic coherence and provides\nsemantic guidance for the subsequent contrastive representation learning.\nSemantic-aware contrastive learning augments the semantic information extracted\nfrom segment grouping and helps to alleviate the issue of \"semantic conflict\".\nWe conducted extensive experiments on multiple 3D scene understanding tasks.\nThe results demonstrate that GroupContrast learns semantically meaningful\nrepresentations and achieves promising transfer learning performance.", "keywords": ["Scene analysis and understanding", "Deep learning architectures and techniques"], "authors_list": ["Chengyao Wang", "Li Jiang", "Xiaoyang Wu", "Zhuotao Tian", "Bohao Peng", "Hengshuang Zhao", "Jiaya Jia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f569"}, "filepath": "data/2312.04554v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992081103247225, "arXiv_link": "https://arxiv.org/abs/2312.04554v1", "other_link": "", "title": "Improved Visual Grounding through Self-Consistent Explanations", "abstract": "Vision-and-language models trained to match images with text can be combined\nwith visual explanation methods to point to the locations of specific objects\nin an image. Our work shows that the localization --\"grounding\"-- abilities of\nthese models can be further improved by finetuning for self-consistent visual\nexplanations. We propose a strategy for augmenting existing text-image datasets\nwith paraphrases using a large language model, and SelfEQ, a weakly-supervised\nstrategy on visual explanation maps for paraphrases that encourages\nself-consistency. Specifically, for an input textual phrase, we attempt to\ngenerate a paraphrase and finetune the model so that the phrase and paraphrase\nmap to the same region in the image. We posit that this both expands the\nvocabulary that the model is able to handle, and improves the quality of the\nobject locations highlighted by gradient-based visual explanation methods (e.g.\nGradCAM). 
We demonstrate that SelfEQ improves performance on Flickr30k,\nReferIt, and RefCOCO+ over a strong baseline method and several prior works.\nParticularly, comparing to other methods that do not use any type of box\nannotations, we obtain 84.07% on Flickr30k (an absolute improvement of 4.69%),\n67.40% on ReferIt (an absolute improvement of 7.68%), and 75.10%, 55.49% on\nRefCOCO+ test sets A and B respectively (an absolute improvement of 3.74% on\naverage).", "keywords": [], "authors_list": ["Ruozhen He", "Paola Cascante-Bonilla", "Ziyan Yang", "Alex Berg", "Vicente Ordonez"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f56a"}, "filepath": "data/2312.06640.png", "tags": [], "_media_type": "image", "_rand": 0.9993448759436699, "arXiv_link": "https://arxiv.org/abs/2312.06640", "other_link": "", "title": "Upscale-A-Video: Temporal-Consistent Diffusion Model for Real-World Video Super-Resolution", "abstract": "Text-based diffusion models have exhibited remarkable success in generation\nand editing, showing great promise for enhancing visual content with their\ngenerative prior. However, applying these models to video super-resolution\nremains challenging due to the high demands for output fidelity and temporal\nconsistency, which is complicated by the inherent randomness in diffusion\nmodels. Our study introduces Upscale-A-Video, a text-guided latent diffusion\nframework for video upscaling. This framework ensures temporal coherence\nthrough two key mechanisms: locally, it integrates temporal layers into U-Net\nand VAE-Decoder, maintaining consistency within short sequences; globally,\nwithout training, a flow-guided recurrent latent propagation module is\nintroduced to enhance overall video stability by propagating and fusing latent\nacross the entire sequences. Thanks to the diffusion paradigm, our model also\noffers greater flexibility by allowing text prompts to guide texture creation\nand adjustable noise levels to balance restoration and generation, enabling a\ntrade-off between fidelity and quality. Extensive experiments show that\nUpscale-A-Video surpasses existing methods in both synthetic and real-world\nbenchmarks, as well as in AI-generated videos, showcasing impressive visual\nrealism and temporal consistency.", "keywords": ["Image and video generation and manipulation", "Low-level vision"], "authors_list": ["Shangchen Zhou", "Peiqing Yang", "Jianyi Wang", "Yihang Luo", "Chen Change Loy"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f56b"}, "filepath": "data/2310.08337.png", "tags": [], "_media_type": "image", "_rand": 0.9999320763010555, "arXiv_link": "https://arxiv.org/abs/2310.08337", "other_link": "", "title": "Image Neural Field Diffusion Models", "abstract": "Diffusion models have shown remarkable performance on many generative tasks.\nDespite recent success, most diffusion models are restricted in that they only\nallow linear transformation of the data distribution. 
In contrast, broader\nfamily of transformations can potentially help train generative distributions\nmore efficiently, simplifying the reverse process and closing the gap between\nthe true negative log-likelihood and the variational approximation. In this\npaper, we present Neural Diffusion Models (NDMs), a generalization of\nconventional diffusion models that enables defining and learning time-dependent\nnon-linear transformations of data. We show how to optimise NDMs using a\nvariational bound in a simulation-free setting. Moreover, we derive a\ntime-continuous formulation of NDMs, which allows fast and reliable inference\nusing off-the-shelf numerical ODE and SDE solvers. Finally, we demonstrate the\nutility of NDMs with learnable transformations through experiments on standard\nimage generation benchmarks, including CIFAR-10, downsampled versions of\nImageNet and CelebA-HQ. NDMs outperform conventional diffusion models in terms\nof likelihood and produce high-quality samples.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Yinbo Chen", "Oliver Wang", "Richard Zhang", "Eli Shechtman", "Xiaolong Wang", "Micha\u00ebl Gharbi"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f56c"}, "filepath": "data/2404.02132.png", "tags": [], "_media_type": "image", "_rand": 0.9990870530146935, "arXiv_link": "https://arxiv.org/abs/2404.02132", "other_link": "", "title": "ViTamin: Designing Scalable Vision Models in the Vision-Language Era", "abstract": "Recent breakthroughs in vision-language models (VLMs) start a new page in the\nvision community. The VLMs provide stronger and more generalizable feature\nembeddings compared to those from ImageNet-pretrained models, thanks to the\ntraining on the large-scale Internet image-text pairs. However, despite the\namazing achievement from the VLMs, vanilla Vision Transformers (ViTs) remain\nthe default choice for the image encoder. Although pure transformer proves its\neffectiveness in the text encoding area, it remains questionable whether it is\nalso the case for image encoding, especially considering that various types of\nnetworks are proposed on the ImageNet benchmark, which, unfortunately, are\nrarely studied in VLMs. Due to small data/model scale, the original conclusions\nof model design on ImageNet can be limited and biased. In this paper, we aim at\nbuilding an evaluation protocol of vision models in the vision-language era\nunder the contrastive language-image pretraining (CLIP) framework. We provide a\ncomprehensive way to benchmark different vision models, covering their\nzero-shot performance and scalability in both model and training data sizes. To\nthis end, we introduce ViTamin, a new vision models tailored for VLMs.\nViTamin-L significantly outperforms ViT-L by 2.0% ImageNet zero-shot accuracy,\nwhen using the same publicly available DataComp-1B dataset and the same\nOpenCLIP training scheme. ViTamin-L presents promising results on 60 diverse\nbenchmarks, including classification, retrieval, open-vocabulary detection and\nsegmentation, and large multi-modal models. 
When further scaling up the model\nsize, our ViTamin-XL with only 436M parameters attains 82.9% ImageNet zero-shot\naccuracy, surpassing 82.0% achieved by EVA-E that has ten times more parameters\n(4.4B).", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Jieneng Chen", "Qihang Yu", "Xiaohui Shen", "Alan L. Yuille", "Liang-Chieh Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f56d"}, "filepath": "data/2403.05419.png", "tags": [], "_media_type": "image", "_rand": 0.9994338687947409, "arXiv_link": "https://web3.arxiv.org/abs/2403.05419", "other_link": "https://github.com/techmn/satmae_pp}.", "title": "Rethinking Transformers Pre-training for Multi-Spectral Satellite Imagery", "abstract": "Recent advances in unsupervised learning have demonstrated the ability of\nlarge vision models to achieve promising results on downstream tasks by\npre-training on large amount of unlabelled data. Such pre-training techniques\nhave also been explored recently in the remote sensing domain due to the\navailability of large amount of unlabelled data. Different from standard\nnatural image datasets, remote sensing data is acquired from various sensor\ntechnologies and exhibit diverse range of scale variations as well as\nmodalities. Existing satellite image pre-training methods either ignore the\nscale information present in the remote sensing imagery or restrict themselves\nto use only a single type of data modality. In this paper, we re-visit\ntransformers pre-training and leverage multi-scale information that is\neffectively utilized with multiple modalities. Our proposed approach, named\nSatMAE++, performs multi-scale pre-training and utilizes convolution based\nupsampling blocks to reconstruct the image at higher scales making it\nextensible to include more scales. Compared to existing works, the proposed\nSatMAE++ with multi-scale pre-training is equally effective for both optical as\nwell as multi-spectral imagery. Extensive experiments on six datasets reveal\nthe merits of proposed contributions, leading to state-of-the-art performance\non all datasets. SatMAE++ achieves mean average precision (mAP) gain of 2.5\\%\nfor multi-label classification task on BigEarthNet dataset. Our code and\npre-trained models are available at \\url{https://github.com/techmn/satmae_pp}.", "keywords": ["Remote sensing and photogrammetry"], "authors_list": ["Mubashir Noman", "Muzammal Naseer", "Hisham Cholakkal", "Rao Anwer", "Salman Khan", "Fahad Shahbaz Khan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f56e"}, "filepath": "data/2403.02886.png", "tags": [], "_media_type": "image", "_rand": 0.9990868726807848, "arXiv_link": "https://arxiv.org/abs/2403.02886", "other_link": "https://github.com/Impression2805/FMFP}.", "title": "RCL: Reliable Continual Learning for Unified Failure Detection", "abstract": "Reliable confidence estimation is a challenging yet fundamental requirement\nin many risk-sensitive applications. However, modern deep neural networks are\noften overconfident for their incorrect predictions, i.e., misclassified\nsamples from known classes, and out-of-distribution (OOD) samples from unknown\nclasses. 
In recent years, many confidence calibration and OOD detection methods\nhave been developed. In this paper, we find a general, widely existing but\nactually-neglected phenomenon that most confidence estimation methods are\nharmful for detecting misclassification errors. We investigate this problem and\nreveal that popular calibration and OOD detection methods often lead to worse\nconfidence separation between correctly classified and misclassified examples,\nmaking it difficult to decide whether to trust a prediction or not. Finally, we\npropose to enlarge the confidence gap by finding flat minima, which yields\nstate-of-the-art failure prediction performance under various settings\nincluding balanced, long-tailed, and covariate-shift classification scenarios.\nOur study not only provides a strong baseline for reliable confidence\nestimation but also acts as a bridge between understanding calibration, OOD\ndetection, and failure prediction. The code is available at\n\\url{https://github.com/Impression2805/FMFP}.", "keywords": [], "authors_list": ["Fei Zhu", "Zhen Cheng", "Xu-Yao Zhang", "Cheng-Lin Liu", "Zhaoxiang Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f56f"}, "filepath": "data/2402.18152.png", "tags": [], "_media_type": "image", "_rand": 0.9990436701418688, "arXiv_link": "https://arxiv.org/abs/2402.18152", "other_link": "https://github.com/Xinjie-Q/Boosting-NeRV.", "title": "Boosting Neural Representations for Videos with a Conditional Decoder", "abstract": "Implicit neural representations (INRs) have emerged as a promising approach\nfor video storage and processing, showing remarkable versatility across various\nvideo tasks. However, existing methods often fail to fully leverage their\nrepresentation capabilities, primarily due to inadequate alignment of\nintermediate features during target frame decoding. This paper introduces a\nuniversal boosting framework for current implicit video representation\napproaches. Specifically, we utilize a conditional decoder with a\ntemporal-aware affine transform module, which uses the frame index as a prior\ncondition to effectively align intermediate features with target frames.\nBesides, we introduce a sinusoidal NeRV-like block to generate diverse\nintermediate features and achieve a more balanced parameter distribution,\nthereby enhancing the model's capacity. With a high-frequency\ninformation-preserving reconstruction loss, our approach successfully boosts\nmultiple baseline INRs in the reconstruction quality and convergence speed for\nvideo regression, and exhibits superior inpainting and interpolation results.\nFurther, we integrate a consistent entropy minimization technique and develop\nvideo codecs based on these boosted INRs. Experiments on the UVG dataset\nconfirm that our enhanced codecs significantly outperform baseline INRs and\noffer competitive rate-distortion performance compared to traditional and\nlearning-based codecs. 
Code is available at\nhttps://github.com/Xinjie-Q/Boosting-NeRV.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["XINJIE ZHANG", "Ren Yang", "Dailan He", "Xingtong Ge", "Tongda Xu", "Yan Wang", "Hongwei Qin", "Jun Zhang"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f570"}, "filepath": "data/2312.07804.png", "tags": [], "_media_type": "image", "_rand": 0.9994703859279372, "arXiv_link": "https://arxiv.org/abs/2312.07804", "other_link": "", "title": "Uncertainty Visualization via Low-Dimensional Posterior Projections", "abstract": "In ill-posed inverse problems, it is commonly desirable to obtain insight\ninto the full spectrum of plausible solutions, rather than extracting only a\nsingle reconstruction. Information about the plausible solutions and their\nlikelihoods is encoded in the posterior distribution. However, for\nhigh-dimensional data, this distribution is challenging to visualize. In this\nwork, we introduce a new approach for estimating and visualizing posteriors by\nemploying energy-based models (EBMs) over low-dimensional subspaces.\nSpecifically, we train a conditional EBM that receives an input measurement and\na set of directions that span some low-dimensional subspace of solutions, and\noutputs the probability density function of the posterior within that space. We\ndemonstrate the effectiveness of our method across a diverse range of datasets\nand image restoration problems, showcasing its strength in uncertainty\nquantification and visualization. As we show, our method outperforms a baseline\nthat projects samples from a diffusion-based posterior sampler, while being\norders of magnitude faster. Furthermore, it is more accurate than a baseline\nthat assumes a Gaussian posterior.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Omer Yair", "Tomer Michaeli", "Elias Nehme"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f571"}, "filepath": "data/2311.18822.png", "tags": [], "_media_type": "image", "_rand": 0.9992469123142943, "arXiv_link": "https://arxiv.org/abs/2311.18822", "other_link": "https://elasticdiffusion.github.io/", "title": "ElasticDiffusion: Training-free Arbitrary Size Image Generation", "abstract": "Diffusion models have revolutionized image generation in recent years, yet\nthey are still limited to a few sizes and aspect ratios. We propose\nElasticDiffusion, a novel training-free decoding method that enables pretrained\ntext-to-image diffusion models to generate images with various sizes.\nElasticDiffusion attempts to decouple the generation trajectory of a pretrained\nmodel into local and global signals. The local signal controls low-level pixel\ninformation and can be estimated on local patches, while the global signal is\nused to maintain overall structural consistency and is estimated with a\nreference image. We test our method on CelebA-HQ (faces) and LAION-COCO\n(objects/indoor/outdoor scenes). Our experiments and qualitative results show\nsuperior image coherence quality across aspect ratios compared to\nMultiDiffusion and the standard decoding strategy of Stable Diffusion. 
Project\npage: https://elasticdiffusion.github.io/", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Moayed Haji Ali", "Guha Balakrishnan", "Vicente Ordonez"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f572"}, "filepath": "data/2311.18832.png", "tags": [], "_media_type": "image", "_rand": 0.9996847366168282, "arXiv_link": "https://arxiv.org/abs/2311.18832", "other_link": "", "title": "Exploiting Diffusion Prior for Generalizable Dense Prediction", "abstract": "Contents generated by recent advanced Text-to-Image (T2I) diffusion models\nare sometimes too imaginative for existing off-the-shelf dense predictors to\nestimate due to the immitigable domain gap. We introduce DMP, a pipeline\nutilizing pre-trained T2I models as a prior for dense prediction tasks. To\naddress the misalignment between deterministic prediction tasks and stochastic\nT2I models, we reformulate the diffusion process through a sequence of\ninterpolations, establishing a deterministic mapping between input RGB images\nand output prediction distributions. To preserve generalizability, we use\nlow-rank adaptation to fine-tune pre-trained models. Extensive experiments\nacross five tasks, including 3D property estimation, semantic segmentation, and\nintrinsic image decomposition, showcase the efficacy of the proposed method.\nDespite limited-domain training data, the approach yields faithful estimations\nfor arbitrary images, surpassing existing state-of-the-art algorithms.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Hsin-Ying Lee", "Hung-Yu Tseng", "Hsin-Ying Lee", "Ming-Hsuan Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f573"}, "filepath": "data/2403.16897.png", "tags": [], "_media_type": "image", "_rand": 0.9997400695581932, "arXiv_link": "https://arxiv.org/abs/2403.16897", "other_link": "", "title": "Make-It-Vivid: Dressing Your Animatable Biped Cartoon Characters from Text", "abstract": "Creating and animating 3D biped cartoon characters is crucial and valuable in\nvarious applications. Compared with geometry, the diverse texture design plays\nan important role in making 3D biped cartoon characters vivid and charming.\nTherefore, we focus on automatic texture design for cartoon characters based on\ninput instructions. This is challenging for domain-specific requirements and a\nlack of high-quality data. To address this challenge, we propose Make-It-Vivid,\nthe first attempt to enable high-quality texture generation from text in UV\nspace. We prepare a detailed text-texture paired data for 3D characters by\nusing vision-question-answering agents. Then we customize a pretrained\ntext-to-image model to generate texture map with template structure while\npreserving the natural 2D image knowledge. Furthermore, to enhance fine-grained\ndetails, we propose a novel adversarial learning scheme to shorten the domain\ngap between original dataset and realistic texture domain. 
Extensive\nexperiments show that our approach outperforms current texture generation\nmethods, resulting in efficient character texturing and faithful generation\nwith prompts. Besides, we showcase various applications such as out of domain\ngeneration and texture stylization. We also provide an efficient generation\nsystem for automatic text-guided textured character generation and animation.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Junshu Tang", "Yanhong Zeng", "Ke Fan", "Xuheng Wang", "Bo Dai", "Kai Chen", "Lizhuang Ma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f574"}, "filepath": "data/2311.11278v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993531965328813, "arXiv_link": "https://arxiv.org/abs/2311.11278v1", "other_link": "", "title": "Transcending Forgery Specificity with Latent Space Augmentation for Generalizable Deepfake Detection", "abstract": "Deepfake detection faces a critical generalization hurdle, with performance\ndeteriorating when there is a mismatch between the distributions of training\nand testing data. A broadly received explanation is the tendency of these\ndetectors to be overfitted to forgery-specific artifacts, rather than learning\nfeatures that are widely applicable across various forgeries. To address this\nissue, we propose a simple yet effective detector called LSDA\n(\\underline{L}atent \\underline{S}pace \\underline{D}ata\n\\underline{A}ugmentation), which is based on a heuristic idea: representations\nwith a wider variety of forgeries should be able to learn a more generalizable\ndecision boundary, thereby mitigating the overfitting of method-specific\nfeatures (see Figure. 1). Following this idea, we propose to enlarge the\nforgery space by constructing and simulating variations within and across\nforgery features in the latent space. This approach encompasses the acquisition\nof enriched, domain-specific features and the facilitation of smoother\ntransitions between different forgery types, effectively bridging domain gaps.\nOur approach culminates in refining a binary classifier that leverages the\ndistilled knowledge from the enhanced features, striving for a generalizable\ndeepfake detector. Comprehensive experiments show that our proposed method is\nsurprisingly effective and transcends state-of-the-art detectors across several\nwidely used benchmarks.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Zhiyuan Yan", "Yuhao Luo", "Siwei Lyu", "Qingshan Liu", "Baoyuan Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f575"}, "filepath": "data/2401.07114.png", "tags": [], "_media_type": "image", "_rand": 0.9991883616321815, "arXiv_link": "https://arxiv.org/abs/2401.07114", "other_link": "", "title": "Revisiting Sampson Approximations for Geometric Estimation Problems", "abstract": "Many problems in computer vision can be formulated as geometric estimation\nproblems, i.e. given a collection of measurements (e.g. point correspondences)\nwe wish to fit a model (e.g. an essential matrix) that agrees with our\nobservations. 
This necessitates some measure of how much an observation\n``agrees\" with a given model. A natural choice is to consider the smallest\nperturbation that makes the observation exactly satisfy the constraints.\nHowever, for many problems, this metric is expensive or otherwise intractable\nto compute. The so-called Sampson error approximates this geometric error\nthrough a linearization scheme. For epipolar geometry, the Sampson error is a\npopular choice and in practice known to yield very tight approximations of the\ncorresponding geometric residual (the reprojection error).\n In this paper we revisit the Sampson approximation and provide new\ntheoretical insights as to why and when this approximation works, as well as\nprovide explicit bounds on the tightness under some mild assumptions. Our\ntheoretical results are validated in several experiments on real data and in\nthe context of different geometric estimation tasks.", "keywords": [], "authors_list": ["Felix Rydell", "Angelica Torres", "Viktor Larsson"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Unknown", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f576"}, "filepath": "data/2403.03063v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993166783033123, "arXiv_link": "https://arxiv.org/html/2403.03063v1", "other_link": "https://github.com/zy1296/CrackNex.", "title": "Mind marginal non-crack regions: Clustering-inspired representation learning for crack segmentation", "abstract": "Routine visual inspections of concrete structures are imperative for\nupholding the safety and integrity of critical infrastructure. Such visual\ninspections sometimes happen under low-light conditions, e.g., checking for\nbridge health. Crack segmentation under such conditions is challenging due to\nthe poor contrast between cracks and their surroundings. However, most deep\nlearning methods are designed for well-illuminated crack images and hence their\nperformance drops dramatically in low-light scenes. In addition, conventional\napproaches require many annotated low-light crack images which is\ntime-consuming. In this paper, we address these challenges by proposing\nCrackNex, a framework that utilizes reflectance information based on Retinex\nTheory to help the model learn a unified illumination-invariant representation.\nFurthermore, we utilize few-shot segmentation to solve the inefficient training\ndata problem. In CrackNex, both a support prototype and a reflectance prototype\nare extracted from the support set. Then, a prototype fusion module is designed\nto integrate the features from both prototypes. CrackNex outperforms the SOTA\nmethods on multiple datasets. Additionally, we present the first benchmark\ndataset, LCSD, for low-light crack segmentation. LCSD consists of 102\nwell-illuminated crack images and 41 low-light crack images. 
The dataset and\ncode are available at https://github.com/zy1296/CrackNex.", "keywords": ["Low-level vision"], "authors_list": ["zhuangzhuang chen", "Zhuonan Lai", "Jie Chen", "Jianqiang Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f577"}, "filepath": "data/2405.19465.png", "tags": [], "_media_type": "image", "_rand": 0.9993109140172414, "arXiv_link": "https://arxiv.org/abs/2405.19465", "other_link": "", "title": "MV-Adapter: Exploring Parameter Efficient Learning for Video Text Retrieval", "abstract": "Text-Video Retrieval (TVR) aims to align relevant video content with natural\nlanguage queries. To date, most state-of-the-art TVR methods learn\nimage-to-video transfer learning based on large-scale pre-trained\nvisionlanguage models (e.g., CLIP). However, fully fine-tuning these\npre-trained models for TVR incurs prohibitively expensive computation costs. To\nthis end, we propose to conduct efficient text-video Retrieval with a\nsparse-andcorrelated AdaPter (RAP), i.e., fine-tuning the pre-trained model\nwith a few parameterized layers. To accommodate the text-video scenario, we\nequip our RAP with two indispensable characteristics: temporal sparsity and\ncorrelation. Specifically, we propose a low-rank modulation module to refine\nthe per-image features from the frozen CLIP backbone, which accentuates salient\nframes within the video features while alleviating temporal redundancy.\nBesides, we introduce an asynchronous self-attention mechanism that first\nselects the top responsive visual patches and augments the correlation modeling\nbetween them with learnable temporal and patch offsets. Extensive experiments\non four TVR datasets demonstrate that RAP achieves superior or comparable\nperformance compared to the fully fine-tuned counterpart and other\nparameter-efficient fine-tuning methods.", "keywords": ["Efficient and scalable vision"], "authors_list": ["bowen zhang", "Xiaojie Jin", "Weibo Gong", "Kai Xu", "Xueqing Deng", "Peng Wang", "Zhao Zhang", "Xiaohui Shen", "Jiashi Feng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f578"}, "filepath": "data/2403.02649.png", "tags": [], "_media_type": "image", "_rand": 0.9997719619187554, "arXiv_link": "https://arxiv.org/abs/2403.02649", "other_link": "https://github.com/yue-zhongqi/tif.", "title": "Few-shot Learner Parameterization by Diffusion Time-steps", "abstract": "Even when using large multi-modal foundation models, few-shot learning is\nstill challenging -- if there is no proper inductive bias, it is nearly\nimpossible to keep the nuanced class attributes while removing the visually\nprominent attributes that spuriously correlate with class labels. To this end,\nwe find an inductive bias that the time-steps of a Diffusion Model (DM) can\nisolate the nuanced class attributes, i.e., as the forward diffusion adds noise\nto an image at each time-step, nuanced attributes are usually lost at an\nearlier time-step than the spurious attributes that are visually prominent.\nBuilding on this, we propose Time-step Few-shot (TiF) learner. 
We train\nclass-specific low-rank adapters for a text-conditioned DM to make up for the\nlost attributes, such that images can be accurately reconstructed from their\nnoisy ones given a prompt. Hence, at a small time-step, the adapter and prompt\nare essentially a parameterization of only the nuanced class attributes. For a\ntest image, we can use the parameterization to only extract the nuanced class\nattributes for classification. TiF learner significantly outperforms OpenCLIP\nand its adapters on a variety of fine-grained and customized few-shot learning\ntasks. Codes are in https://github.com/yue-zhongqi/tif.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Zhongqi Yue", "Pan Zhou", "Richang Hong", "Hanwang Zhang", "Qianru Sun"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f579"}, "filepath": "data/2405.07991.png", "tags": [], "_media_type": "image", "_rand": 0.9991404670118226, "arXiv_link": "https://arxiv.org/abs/2405.07991", "other_link": "https://spin-robot.github.io/", "title": "SPIN: Simultaneous Perception, Interaction and Navigation", "abstract": "While there has been remarkable progress recently in the fields of\nmanipulation and locomotion, mobile manipulation remains a long-standing\nchallenge. Compared to locomotion or static manipulation, a mobile system must\nmake a diverse range of long-horizon tasks feasible in unstructured and dynamic\nenvironments. While the applications are broad and interesting, there are a\nplethora of challenges in developing these systems such as coordination between\nthe base and arm, reliance on onboard perception for perceiving and interacting\nwith the environment, and most importantly, simultaneously integrating all\nthese parts together. Prior works approach the problem using disentangled\nmodular skills for mobility and manipulation that are trivially tied together.\nThis causes several limitations such as compounding errors, delays in\ndecision-making, and no whole-body coordination. In this work, we present a\nreactive mobile manipulation framework that uses an active visual system to\nconsciously perceive and react to its environment. Similar to how humans\nleverage whole-body and hand-eye coordination, we develop a mobile manipulator\nthat exploits its ability to move and see, more specifically -- to move in\norder to see and to see in order to move. This allows it to not only move\naround and interact with its environment but also, choose \"when\" to perceive\n\"what\" using an active visual system. 
We observe that such an agent learns to\nnavigate around complex cluttered scenarios while displaying agile whole-body\ncoordination using only ego-vision without needing to create environment maps.\nResults visualizations and videos at https://spin-robot.github.io/", "keywords": ["Scene analysis and understanding"], "authors_list": ["Shagun Uppal", "Ananye Agarwal", "Haoyu Xiong", "Kenneth Shaw", "Deepak Pathak"], "category_name": "Robotics", "all_categories": ["Robotics", "Artificial Intelligence", "Computer Vision and Pattern Recognition", "Machine Learning", "Systems and Control", "Systems and Control"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f57a"}, "filepath": "data/2403.16194.png", "tags": [], "_media_type": "image", "_rand": 0.9998840732615442, "arXiv_link": "https://arxiv.org/abs/2403.16194", "other_link": "", "title": "Pose-Guided Self-Training with Two-Stage Clustering for Unsupervised Landmark Discovery", "abstract": "Unsupervised landmarks discovery (ULD) for an object category is a\nchallenging computer vision problem. In pursuit of developing a robust ULD\nframework, we explore the potential of a recent paradigm of self-supervised\nlearning algorithms, known as diffusion models. Some recent works have shown\nthat these models implicitly contain important correspondence cues. Towards\nharnessing the potential of diffusion models for the ULD task, we make the\nfollowing core contributions. First, we propose a ZeroShot ULD baseline based\non simple clustering of random pixel locations with nearest neighbour matching.\nIt delivers better results than existing ULD methods. Second, motivated by the\nZeroShot performance, we develop a ULD algorithm based on diffusion features\nusing self-training and clustering which also outperforms prior methods by\nnotable margins. Third, we introduce a new proxy task based on generating\nlatent pose codes and also propose a two-stage clustering mechanism to\nfacilitate effective pseudo-labeling, resulting in a significant performance\nimprovement. Overall, our approach consistently outperforms state-of-the-art\nmethods on four challenging benchmarks AFLW, MAFL, CatHeads and LS3D by\nsignificant margins.", "keywords": [], "authors_list": ["Siddharth Tourani", "Ahmed Alwheibi", "Arif Mahmood", "Muhammad Haris Khan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f57b"}, "filepath": "data/2311.15383.png", "tags": [], "_media_type": "image", "_rand": 0.999897747343267, "arXiv_link": "https://arxiv.org/abs/2311.15383", "other_link": "", "title": "Towards CLIP-driven Language-free 3D Visual Grounding via 2D-3D Relational Enhancement and Consistency", "abstract": "3D Visual Grounding (3DVG) aims at localizing 3D object based on textual\ndescriptions. Conventional supervised methods for 3DVG often necessitate\nextensive annotations and a predefined vocabulary, which can be restrictive. To\naddress this issue, we propose a novel visual programming approach for\nzero-shot open-vocabulary 3DVG, leveraging the capabilities of large language\nmodels (LLMs). Our approach begins with a unique dialog-based method, engaging\nwith LLMs to establish a foundational understanding of zero-shot 3DVG. Building\non this, we design a visual program that consists of three types of modules,\ni.e., view-independent, view-dependent, and functional modules. 
These modules,\nspecifically tailored for 3D scenarios, work collaboratively to perform complex\nreasoning and inference. Furthermore, we develop an innovative language-object\ncorrelation module to extend the scope of existing 3D object detectors into\nopen-vocabulary scenarios. Extensive experiments demonstrate that our zero-shot\napproach can outperform some supervised baselines, marking a significant stride\ntowards effective 3DVG.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Yuqi Zhang", "Han Luo", "Yinjie Lei"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f57c"}, "filepath": "data/2405.03178.png", "tags": [], "_media_type": "image", "_rand": 0.9991158762621374, "arXiv_link": "https://arxiv.org/abs/2405.03178", "other_link": "https://github.com/Luke-Luo1/POPDG.", "title": "POPDG: Popular 3D Dance Generation with PopDanceSet", "abstract": "Generating dances that are both lifelike and well-aligned with music\ncontinues to be a challenging task in the cross-modal domain. This paper\nintroduces PopDanceSet, the first dataset tailored to the preferences of young\naudiences, enabling the generation of aesthetically oriented dances. And it\nsurpasses the AIST++ dataset in music genre diversity and the intricacy and\ndepth of dance movements. Moreover, the proposed POPDG model within the iDDPM\nframework enhances dance diversity and, through the Space Augmentation\nAlgorithm, strengthens spatial physical connections between human body joints,\nensuring that increased diversity does not compromise generation quality. A\nstreamlined Alignment Module is also designed to improve the temporal alignment\nbetween dance and music. Extensive experiments show that POPDG achieves SOTA\nresults on two datasets. Furthermore, the paper also expands on current\nevaluation metrics. The dataset and code are available at\nhttps://github.com/Luke-Luo1/POPDG.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Zhenye Luo", "Min Ren", "Xuecai Hu", "Yongzhen Huang", "Li Yao"], "category_name": "Sound", "all_categories": ["Sound", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f57d"}, "filepath": "data/2311.17083.png", "tags": [], "_media_type": "image", "_rand": 0.9996121306701308, "arXiv_link": "https://arxiv.org/abs/2311.17083", "other_link": "", "title": "CLiC: Concept Learning in Context", "abstract": "This paper addresses the challenge of learning a local visual pattern of an\nobject from one image, and generating images depicting objects with that\npattern. Learning a localized concept and placing it on an object in a target\nimage is a nontrivial task, as the objects may have different orientations and\nshapes. Our approach builds upon recent advancements in visual concept\nlearning. It involves acquiring a visual concept (e.g., an ornament) from a\nsource image and subsequently applying it to an object (e.g., a chair) in a\ntarget image. Our key idea is to perform in-context concept learning, acquiring\nthe local visual concept within the broader context of the objects they belong\nto. 
To localize the concept learning, we employ soft masks that contain both\nthe concept within the mask and the surrounding image area. We demonstrate our\napproach through object generation within an image, showcasing plausible\nembedding of in-context learned concepts. We also introduce methods for\ndirecting acquired concepts to specific locations within target images,\nemploying cross-attention mechanisms, and establishing correspondences between\nsource and target objects. The effectiveness of our method is demonstrated\nthrough quantitative and qualitative experiments, along with comparisons\nagainst baseline techniques.", "keywords": [], "authors_list": ["Mehdi Safaee", "Aryan Mikaeili", "Or Patashnik", "Daniel Cohen-Or", "Ali Mahdavi Amiri"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f57e"}, "filepath": "data/2402.19298.png", "tags": [], "_media_type": "image", "_rand": 0.9994900953542804, "arXiv_link": "https://arxiv.org/abs/2402.19298", "other_link": "https://github.com/OMGGGGG/mmdg.", "title": "Suppress and Rebalance: Towards Generalized Multi-Modal Face Anti-Spoofing", "abstract": "Face Anti-Spoofing (FAS) is crucial for securing face recognition systems\nagainst presentation attacks. With advancements in sensor manufacture and\nmulti-modal learning techniques, many multi-modal FAS approaches have emerged.\nHowever, they face challenges in generalizing to unseen attacks and deployment\nconditions. These challenges arise from (1) modality unreliability, where some\nmodality sensors like depth and infrared undergo significant domain shifts in\nvarying environments, leading to the spread of unreliable information during\ncross-modal feature fusion, and (2) modality imbalance, where training overly\nrelies on a dominant modality hinders the convergence of others, reducing\neffectiveness against attack types that are indistinguishable sorely using the\ndominant modality. To address modality unreliability, we propose the\nUncertainty-Guided Cross-Adapter (U-Adapter) to recognize unreliably detected\nregions within each modality and suppress the impact of unreliable regions on\nother modalities. For modality imbalance, we propose a Rebalanced Modality\nGradient Modulation (ReGrad) strategy to rebalance the convergence speed of all\nmodalities by adaptively adjusting their gradients. Besides, we provide the\nfirst large-scale benchmark for evaluating multi-modal FAS performance under\ndomain generalization scenarios. Extensive experiments demonstrate that our\nmethod outperforms state-of-the-art methods. Source code and protocols will be\nreleased on https://github.com/OMGGGGG/mmdg.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Xun Lin", "Shuai Wang", "RIZHAO CAI", "Yizhong Liu", "Ying Fu", "Wenzhong Tang", "Zitong YU", "Alex C. 
Kot"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f57f"}, "filepath": "data/2312.02970.png", "tags": [], "_media_type": "image", "_rand": 0.9999903063124285, "arXiv_link": "https://arxiv.org/abs/2312.02970", "other_link": "", "title": "Alchemist: Parametric Control of Material Properties with Diffusion Models", "abstract": "We propose a method to control material attributes of objects like roughness,\nmetallic, albedo, and transparency in real images. Our method capitalizes on\nthe generative prior of text-to-image models known for photorealism, employing\na scalar value and instructions to alter low-level material properties.\nAddressing the lack of datasets with controlled material attributes, we\ngenerated an object-centric synthetic dataset with physically-based materials.\nFine-tuning a modified pre-trained text-to-image model on this synthetic\ndataset enables us to edit material properties in real-world images while\npreserving all other attributes. We show the potential application of our model\nto material edited NeRFs.", "keywords": ["Image and video generation and manipulation", "Computational imaging and physics-based vision"], "authors_list": ["Prafull Sharma", "Varun Jampani", "Yuanzhen Li", "Xuhui Jia", "Dmitry Lagun", "Fredo Durand", "William Freeman", "Mark Matthews"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f580"}, "filepath": "data/2403.19539.png", "tags": [], "_media_type": "image", "_rand": 0.9996205405369911, "arXiv_link": "https://arxiv.org/abs/2403.19539", "other_link": "", "title": "Small Scale Data-Free Knowledge Distillation", "abstract": "Data-Free Knowledge Distillation (DFKD) is a promising task to train\nhigh-performance small models to enhance actual deployment without relying on\nthe original training data. Existing methods commonly avoid relying on private\ndata by utilizing synthetic or sampled data. However, a long-overlooked issue\nis that the severe distribution shifts between their substitution and original\ndata, which manifests as huge differences in the quality of images and class\nproportions. The harmful shifts are essentially the confounder that\nsignificantly causes performance bottlenecks. To tackle the issue, this paper\nproposes a novel perspective with causal inference to disentangle the student\nmodels from the impact of such shifts. By designing a customized causal graph,\nwe first reveal the causalities among the variables in the DFKD task.\nSubsequently, we propose a Knowledge Distillation Causal Intervention (KDCI)\nframework based on the backdoor adjustment to de-confound the confounder. 
KDCI\ncan be flexibly combined with most existing state-of-the-art baselines.\nExperiments in combination with six representative DFKD methods demonstrate the\neffectiveness of our KDCI, which can obviously help existing methods under\nalmost all settings, \\textit{e.g.}, improving the baseline by up to 15.54\\%\naccuracy on the CIFAR-100 dataset.", "keywords": ["Efficient and scalable vision"], "authors_list": ["He Liu", "Yikai Wang", "Huaping Liu", "Fuchun Sun", "Anbang Yao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f581"}, "filepath": "data/2405.14136.png", "tags": [], "_media_type": "image", "_rand": 0.999309805583772, "arXiv_link": "https://arxiv.org/abs/2405.14136", "other_link": "https://github.com/42Shawn/BiMTDP.", "title": "Efficient Multitask Dense Predictor via Binarization", "abstract": "Multi-task learning for dense prediction has emerged as a pivotal area in\ncomputer vision, enabling simultaneous processing of diverse yet interrelated\npixel-wise prediction tasks. However, the substantial computational demands of\nstate-of-the-art (SoTA) models often limit their widespread deployment. This\npaper addresses this challenge by introducing network binarization to compress\nresource-intensive multi-task dense predictors. Specifically, our goal is to\nsignificantly accelerate multi-task dense prediction models via Binary Neural\nNetworks (BNNs) while maintaining and even improving model performance at the\nsame time. To reach this goal, we propose a Binary Multi-task Dense Predictor,\nBi-MTDP, and several variants of Bi-MTDP, in which a multi-task dense predictor\nis constructed via specified binarized modules. Our systematical analysis of\nthis predictor reveals that performance drop from binarization is primarily\ncaused by severe information degradation. To address this issue, we introduce a\ndeep information bottleneck layer that enforces representations for downstream\ntasks satisfying Gaussian distribution in forward propagation. Moreover, we\nintroduce a knowledge distillation mechanism to correct the direction of\ninformation flow in backward propagation. Intriguingly, one variant of Bi-MTDP\noutperforms full-precision (FP) multi-task dense prediction SoTAs, ARTC\n(CNN-based) and InvPT (ViT-Based). This result indicates that Bi-MTDP is not\nmerely a naive trade-off between performance and efficiency, but is rather a\nbenefit of the redundant information flow thanks to the multi-task\narchitecture. Code is available at https://github.com/42Shawn/BiMTDP.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yuzhang Shang", "Dan Xu", "Gaowen Liu", "Ramana Kompella", "Yan Yan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f582"}, "filepath": "data/2308.06699.png", "tags": [], "_media_type": "image", "_rand": 0.9997413909212433, "arXiv_link": "https://arxiv.org/abs/2308.06699", "other_link": "", "title": "Neural Super-Resolution for Real-time Rendering with Radiance Demodulation", "abstract": "It is time-consuming to render high-resolution images in applications such as\nvideo games and virtual reality, and thus super-resolution technologies become\nincreasingly popular for real-time rendering. 
However, it is challenging to\npreserve sharp texture details, keep the temporal stability and avoid the\nghosting artifacts in real-time super-resolution rendering. To address this\nissue, we introduce radiance demodulation to separate the rendered image or\nradiance into a lighting component and a material component, considering the\nfact that the light component is smoother than the rendered image so that the\nhigh-resolution material component with detailed textures can be easily\nobtained. We perform the super-resolution on the lighting component only and\nre-modulate it with the high-resolution material component to obtain the final\nsuper-resolution image with more texture details. A reliable warping module is\nproposed by explicitly marking the occluded regions to avoid the ghosting\nartifacts. To further enhance the temporal stability, we design a\nframe-recurrent neural network and a temporal loss to aggregate the previous\nand current frames, which can better capture the spatial-temporal consistency\namong reconstructed frames. As a result, our method is able to produce\ntemporally stable results in real-time rendering with high-quality details,\neven in the challenging 4 $\\times$ 4 super-resolution scenarios.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Jia Li", "Ziling Chen", "Xiaolong Wu", "Lu Wang", "Beibei Wang", "Lei Zhang"], "category_name": "Graphics", "all_categories": ["Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f583"}, "filepath": "data/2311.10983.png", "tags": [], "_media_type": "image", "_rand": 0.9995559433204378, "arXiv_link": "https://arxiv.org/abs/2311.10983", "other_link": "https://github.com/XunshanMan/MVGFormer.", "title": "Multiple View Geometry Transformers for 3D Human Pose Estimation", "abstract": "In this work, we aim to improve the 3D reasoning ability of Transformers in\nmulti-view 3D human pose estimation. Recent works have focused on end-to-end\nlearning-based transformer designs, which struggle to resolve geometric\ninformation accurately, particularly during occlusion. Instead, we propose a\nnovel hybrid model, MVGFormer, which has a series of geometric and appearance\nmodules organized in an iterative manner. The geometry modules are\nlearning-free and handle all viewpoint-dependent 3D tasks geometrically which\nnotably improves the model's generalization ability. The appearance modules are\nlearnable and are dedicated to estimating 2D poses from image signals\nend-to-end which enables them to achieve accurate estimates even when occlusion\noccurs, leading to a model that is both accurate and generalizable to new\ncameras and geometries. We evaluate our approach for both in-domain and\nout-of-domain settings, where our model consistently outperforms\nstate-of-the-art methods, and especially does so by a significant margin in the\nout-of-domain setting. We will release the code and models:\nhttps://github.com/XunshanMan/MVGFormer.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Ziwei Liao", "jialiang zhu", "Chunyu Wang", "Han Hu", "Steven L. 
Waslander"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f584"}, "filepath": "data/2312.17133.png", "tags": [], "_media_type": "image", "_rand": 0.9998298669682959, "arXiv_link": "https://arxiv.org/abs/2312.17133", "other_link": "", "title": "ARTrackV2: Prompting Autoregressive Tracker Where to Look and How to Describe", "abstract": "We present ARTrackV2, which integrates two pivotal aspects of tracking:\ndetermining where to look (localization) and how to describe (appearance\nanalysis) the target object across video frames. Building on the foundation of\nits predecessor, ARTrackV2 extends the concept by introducing a unified\ngenerative framework to \"read out\" object's trajectory and \"retell\" its\nappearance in an autoregressive manner. This approach fosters a time-continuous\nmethodology that models the joint evolution of motion and visual features,\nguided by previous estimates. Furthermore, ARTrackV2 stands out for its\nefficiency and simplicity, obviating the less efficient intra-frame\nautoregression and hand-tuned parameters for appearance updates. Despite its\nsimplicity, ARTrackV2 achieves state-of-the-art performance on prevailing\nbenchmark datasets while demonstrating remarkable efficiency improvement. In\nparticular, ARTrackV2 achieves AO score of 79.5\\% on GOT-10k, and AUC of 86.1\\%\non TrackingNet while being $3.6 \\times$ faster than ARTrack. The code will be\nreleased.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yifan Bai", "Zeyang Zhao", "Yihong Gong", "Xing Wei"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f585"}, "filepath": "data/2310.10343.png", "tags": [], "_media_type": "image", "_rand": 0.9996872988192207, "arXiv_link": "https://arxiv.org/abs/2310.10343", "other_link": "https://github.com/JiayuYANG/ConsistNet", "title": "ConsistNet: Enforcing 3D Consistency for Multi-view Images Diffusion", "abstract": "Given a single image of a 3D object, this paper proposes a novel method\n(named ConsistNet) that is able to generate multiple images of the same object,\nas if seen they are captured from different viewpoints, while the 3D\n(multi-view) consistencies among those multiple generated images are\neffectively exploited. Central to our method is a multi-view consistency block\nwhich enables information exchange across multiple single-view diffusion\nprocesses based on the underlying multi-view geometry principles. ConsistNet is\nan extension to the standard latent diffusion model, and consists of two\nsub-modules: (a) a view aggregation module that unprojects multi-view features\ninto global 3D volumes and infer consistency, and (b) a ray aggregation module\nthat samples and aggregate 3D consistent features back to each view to enforce\nconsistency. Our approach departs from previous methods in multi-view image\ngeneration, in that it can be easily dropped-in pre-trained LDMs without\nrequiring explicit pixel correspondences or depth prediction. Experiments show\nthat our method effectively learns 3D consistency over a frozen Zero123\nbackbone and can generate 16 surrounding views of the object within 40 seconds\non a single A100 GPU. 
Our code will be made available on\nhttps://github.com/JiayuYANG/ConsistNet", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Jiayu Yang", "Ziang Cheng", "Yunfei Duan", "Pan Ji", "Hongdong Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f586"}, "filepath": "data/2404.00562.png", "tags": [], "_media_type": "image", "_rand": 0.9999370447015875, "arXiv_link": "https://arxiv.org/abs/2404.00562", "other_link": "https://github.com/JunukCha/Text2HOI.", "title": "Text2HOI: Text-guided 3D Motion Generation for Hand-Object Interaction", "abstract": "This paper introduces the first text-guided work for generating the sequence\nof hand-object interaction in 3D. The main challenge arises from the lack of\nlabeled data where existing ground-truth datasets are nowhere near\ngeneralizable in interaction type and object category, which inhibits the\nmodeling of diverse 3D hand-object interaction with the correct physical\nimplication (e.g., contacts and semantics) from text prompts. To address this\nchallenge, we propose to decompose the interaction generation task into two\nsubtasks: hand-object contact generation; and hand-object motion generation.\nFor contact generation, a VAE-based network takes as input a text and an object\nmesh, and generates the probability of contacts between the surfaces of hands\nand the object during the interaction. The network learns a variety of local\ngeometry structure of diverse objects that is independent of the objects'\ncategory, and thus, it is applicable to general objects. For motion generation,\na Transformer-based diffusion model utilizes this 3D contact map as a strong\nprior for generating physically plausible hand-object motion as a function of\ntext prompts by learning from the augmented labeled dataset; where we annotate\ntext labels from many existing 3D hand and object motion data. Finally, we\nfurther introduce a hand refiner module that minimizes the distance between the\nobject surface and hand joints to improve the temporal stability of the\nobject-hand contacts and to suppress the penetration artifacts. In the\nexperiments, we demonstrate that our method can generate more realistic and\ndiverse interactions compared to other baseline methods. We also show that our\nmethod is applicable to unseen objects. We will release our model and newly\nlabeled data as a strong foundation for future research. Codes and data are\navailable in: https://github.com/JunukCha/Text2HOI.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models"], "authors_list": ["Junuk Cha", "Jihyeon Kim", "Jae Shin Yoon", "Seungryul Baek"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f587"}, "filepath": "data/2403.09632.png", "tags": [], "_media_type": "image", "_rand": 0.999768337676515, "arXiv_link": "https://arxiv.org/abs/2403.09632", "other_link": "", "title": "Holo-Relighting: Controllable Volumetric Portrait Relighting from a Single Image", "abstract": "At the core of portrait photography is the search for ideal lighting and\nviewpoint. The process often requires advanced knowledge in photography and an\nelaborate studio setup. 
In this work, we propose Holo-Relighting, a volumetric\nrelighting method that is capable of synthesizing novel viewpoints, and novel\nlighting from a single image. Holo-Relighting leverages the pretrained 3D GAN\n(EG3D) to reconstruct geometry and appearance from an input portrait as a set\nof 3D-aware features. We design a relighting module conditioned on a given\nlighting to process these features, and predict a relit 3D representation in\nthe form of a tri-plane, which can render to an arbitrary viewpoint through\nvolume rendering. Besides viewpoint and lighting control, Holo-Relighting also\ntakes the head pose as a condition to enable head-pose-dependent lighting\neffects. With these novel designs, Holo-Relighting can generate complex\nnon-Lambertian lighting effects (e.g., specular highlights and cast shadows)\nwithout using any explicit physical lighting priors. We train Holo-Relighting\nwith data captured with a light stage, and propose two data-rendering\ntechniques to improve the data quality for training the volumetric relighting\nsystem. Through quantitative and qualitative experiments, we demonstrate\nHolo-Relighting can achieve state-of-the-arts relighting quality with better\nphotorealism, 3D consistency and controllability.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Yiqun Mei", "Yu Zeng", "He Zhang", "Zhixin Shu", "Xuaner Zhang", "Sai Bi", "Jianming Zhang", "HyunJoon Jung", "Vishal M. Patel"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f588"}, "filepath": "data/2307.15973.png", "tags": [], "_media_type": "image", "_rand": 0.9990228468476564, "arXiv_link": "https://arxiv.org/abs/2307.15973", "other_link": "", "title": "Positive-Unlabeled Learning by Latent Group-Aware Meta Disambiguation", "abstract": "Learning contrastive representations from pairwise comparisons has achieved\nremarkable success in various fields, such as natural language processing,\ncomputer vision, and information retrieval. Collaborative filtering algorithms\nbased on pairwise learning also rooted in this paradigm. A significant concern\nis the absence of labels for negative instances in implicit feedback data,\nwhich often results in the random selected negative instances contains false\nnegatives and inevitably, biased embeddings. To address this issue, we\nintroduce a novel correction method for sampling bias that yields a modified\nloss for pairwise learning called debiased pairwise loss (DPL). The key idea\nunderlying DPL is to correct the biased probability estimates that result from\nfalse negatives, thereby correcting the gradients to approximate those of fully\nsupervised data. The implementation of DPL only requires a small modification\nof the codes. 
Experimental studies on five public datasets validate the\neffectiveness of proposed learning method.", "keywords": [], "authors_list": ["Lin Long", "Haobo Wang", "Zhijie Jiang", "Lei Feng", "Chang Yao", "Gang Chen", "Junbo Zhao"], "category_name": "Information Retrieval", "all_categories": ["Information Retrieval"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f589"}, "filepath": "data/2308.12462.png", "tags": [], "_media_type": "image", "_rand": 0.9996357024417527, "arXiv_link": "https://arxiv.org/abs/2308.12462", "other_link": "", "title": "Overcoming Generic Knowledge Loss with Selective Parameter Update", "abstract": "Foundation models encompass an extensive knowledge base and offer remarkable\ntransferability. However, this knowledge becomes outdated or insufficient over\ntime. The challenge lies in continuously updating foundation models to\naccommodate novel information while retaining their original capabilities.\nLeveraging the fact that foundation models have initial knowledge on various\ntasks and domains, we propose a novel approach that, instead of updating all\nparameters equally, localizes the updates to a sparse set of parameters\nrelevant to the task being learned. We strike a balance between efficiency and\nnew task performance, while maintaining the transferability and\ngeneralizability of foundation models. We extensively evaluate our method on\nfoundational vision-language models with a diverse spectrum of continual\nlearning tasks. Our method achieves improvements on the accuracy of the newly\nlearned tasks up to 7% while preserving the pretraining knowledge with a\nnegligible decrease of 0.9% on a representative control set accuracy.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Wenxuan Zhang", "Paul Janson", "Rahaf Aljundi", "Mohamed Elhoseiny"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f58a"}, "filepath": "data/2309.07439.png", "tags": [], "_media_type": "image", "_rand": 0.9999308189531935, "arXiv_link": "https://arxiv.org/abs/2309.07439", "other_link": "https://github.com/Koorye/DePT.", "title": "DePT: Decoupled Prompt Tuning", "abstract": "This work breaks through the Base-New Tradeoff (BNT)dilemma in prompt tuning,\ni.e., the better the tuned model generalizes to the base (or target) task, the\nworse it generalizes to new tasks, and vice versa. Specifically, through an\nin-depth analysis of the learned features of the base and new tasks, we observe\nthat the BNT stems from a channel bias issue, i.e., the vast majority of\nfeature channels are occupied by base-specific knowledge, resulting in the\ncollapse of taskshared knowledge important to new tasks. To address this, we\npropose the Decoupled Prompt Tuning (DePT) framework, which decouples\nbase-specific knowledge from feature channels into an isolated feature space\nduring prompt tuning, so as to maximally preserve task-shared knowledge in the\noriginal feature space for achieving better zero-shot generalization on new\ntasks. Importantly, our DePT is orthogonal to existing prompt tuning methods,\nhence it can improve all of them. Extensive experiments on 11 datasets show the\nstrong flexibility and effectiveness of DePT. 
Our code and pretrained models\nare available at https://github.com/Koorye/DePT.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Ji Zhang", "Shihan Wu", "Lianli Gao", "Heng Tao Shen", "Jingkuan Song"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f58b"}, "filepath": "data/2310.00816.png", "tags": [], "_media_type": "image", "_rand": 0.9992111376215342, "arXiv_link": "https://arxiv.org/abs/2310.00816", "other_link": "", "title": "Sharingan: A Transformer Architecture for Multi-Person Gaze Following", "abstract": "Gaze is a powerful form of non-verbal communication and social interaction\nthat humans develop from an early age. As such, modeling this behavior is an\nimportant task that can benefit a broad set of application domains ranging from\nrobotics to sociology. In particular, Gaze Following is defined as the\nprediction of the pixel-wise 2D location where a person in the image is\nlooking. Prior efforts in this direction have focused primarily on CNN-based\narchitectures to perform the task. In this paper, we introduce a novel\ntransformer-based architecture for 2D gaze prediction. We experiment with 2\nvariants: the first one retains the same task formulation of predicting a gaze\nheatmap for one person at a time, while the second one casts the problem as a\n2D point regression and allows us to perform multi-person gaze prediction with\na single forward pass. This new architecture achieves state-of-the-art results\non the GazeFollow and VideoAttentionTarget datasets. The code for this paper\nwill be made publicly available.", "keywords": ["Biometrics and human analysis", "Scene analysis and understanding"], "authors_list": ["Samy Tafasca", "Anshul Gupta", "Jean-marc Odobez"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f58c"}, "filepath": "data/2309.10058.png", "tags": [], "_media_type": "image", "_rand": 0.9993973473839964, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2309.10058", "other_link": "", "title": "Fully Exploiting Every Real Sample: Super-Pixel Sample Gradient Model Stealing", "abstract": "Existing data-free model stealing methods use a generator to produce samples\nin order to train a student model to match the target model outputs. To this\nend, the two main challenges are estimating gradients of the target model\nwithout access to its parameters, and generating a diverse set of training\nsamples that thoroughly explores the input space. We propose a Dual Student\nmethod where two students are symmetrically trained in order to provide the\ngenerator a criterion to generate samples that the two students disagree on. On\none hand, disagreement on a sample implies at least one student has classified\nthe sample incorrectly when compared to the target model. This incentive\ntowards disagreement implicitly encourages the generator to explore more\ndiverse regions of the input space. On the other hand, our method utilizes\ngradients of student models to indirectly estimate gradients of the target\nmodel. We show that this novel training objective for the generator network is\nequivalent to optimizing a lower bound on the generator's loss if we had access\nto the target model gradients. 
We show that our new optimization framework\nprovides more accurate gradient estimation of the target model and better\naccuracies on benchmark classification datasets. Additionally, our approach\nbalances improved query efficiency with training computation cost. Finally, we\ndemonstrate that our method serves as a better proxy model for transfer-based\nadversarial attacks than existing data-free model stealing methods.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yunlong Zhao", "Xiaoheng Deng", "Yijing Liu", "Xinjun Pei", "Jiazhi Xia", "Wei Chen"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Cryptography and Security"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f58d"}, "filepath": "data/2401.07745.png", "tags": [], "_media_type": "image", "_rand": 0.9997791588068978, "arXiv_link": "https://arxiv.org/abs/2401.07745", "other_link": "https://pku-epic.github.io/MaskClustering.", "title": "MaskClustering: View Consensus based Mask Graph Clustering for Open-Vocabulary 3D Instance Segmentation", "abstract": "Open-vocabulary 3D instance segmentation is cutting-edge for its ability to\nsegment 3D instances without predefined categories. However, progress in 3D\nlags behind its 2D counterpart due to limited annotated 3D data. To address\nthis, recent works first generate 2D open-vocabulary masks through 2D models\nand then merge them into 3D instances based on metrics calculated between two\nneighboring frames. In contrast to these local metrics, we propose a novel\nmetric, view consensus rate, to enhance the utilization of multi-view\nobservations. The key insight is that two 2D masks should be deemed part of the\nsame 3D instance if a significant number of other 2D masks from different views\ncontain both these two masks. Using this metric as edge weight, we construct a\nglobal mask graph where each mask is a node. Through iterative clustering of\nmasks showing high view consensus, we generate a series of clusters, each\nrepresenting a distinct 3D instance. Notably, our model is training-free.\nThrough extensive experiments on publicly available datasets, including\nScanNet++, ScanNet200 and MatterPort3D, we demonstrate that our method achieves\nstate-of-the-art performance in open-vocabulary 3D instance segmentation. Our\nproject page is at https://pku-epic.github.io/MaskClustering.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Mi Yan", "Jiazhao Zhang", "Yan Zhu", "He Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f58e"}, "filepath": "data/2312.00968.png", "tags": [], "_media_type": "image", "_rand": 0.999030075283977, "arXiv_link": "https://arxiv.org/abs/2312.00968", "other_link": "", "title": "Omni-SMoLA: Boosting Generalist Multimodal Models with Soft Mixture of Low-rank Experts", "abstract": "Large multi-modal models (LMMs) exhibit remarkable performance across\nnumerous tasks. However, generalist LMMs often suffer from performance\ndegradation when tuned over a large collection of tasks. Recent research\nsuggests that Mixture of Experts (MoE) architectures are useful for instruction\ntuning, but for LMMs of parameter size around O(50-100B), the prohibitive cost\nof replicating and storing the expert models severely limits the number of\nexperts we can use. 
We propose Omni-SMoLA, an architecture that uses the Soft\nMoE approach to (softly) mix many multimodal low rank experts, and avoids\nintroducing a significant number of new parameters compared to conventional MoE\nmodels. The core intuition here is that the large model provides a foundational\nbackbone, while different lightweight experts residually learn specialized\nknowledge, either per-modality or multimodally. Extensive experiments\ndemonstrate that the SMoLA approach helps improve the generalist performance\nacross a broad range of generative vision-and-language tasks, achieving new\nSoTA generalist performance that often matches or outperforms single\nspecialized LMM baselines, as well as new SoTA specialist performance.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Jialin Wu", "Xia Hu", "Yaqing Wang", "Bo Pang", "Radu Soricut"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f58f"}, "filepath": "data/2312.05239.png", "tags": [], "_media_type": "image", "_rand": 0.9990327298939802, "arXiv_link": "https://arxiv.org/abs/2312.05239", "other_link": "", "title": "SwiftBrush: One-Step Text-to-Image Diffusion Model with Variational Score Distillation", "abstract": "Despite their ability to generate high-resolution and diverse images from\ntext prompts, text-to-image diffusion models often suffer from slow iterative\nsampling processes. Model distillation is one of the most effective directions\nto accelerate these models. However, previous distillation methods fail to\nretain the generation quality while requiring a significant amount of images\nfor training, either from real data or synthetically generated by the teacher\nmodel. In response to this limitation, we present a novel image-free\ndistillation scheme named $\\textbf{SwiftBrush}$. Drawing inspiration from\ntext-to-3D synthesis, in which a 3D neural radiance field that aligns with the\ninput prompt can be obtained from a 2D text-to-image diffusion prior via a\nspecialized loss without the use of any 3D data ground-truth, our approach\nre-purposes that same loss for distilling a pretrained multi-step text-to-image\nmodel to a student network that can generate high-fidelity images with just a\nsingle inference step. In spite of its simplicity, our model stands as one of\nthe first one-step text-to-image generators that can produce images of\ncomparable quality to Stable Diffusion without reliance on any training image\ndata. 
Remarkably, SwiftBrush achieves an FID score of $\\textbf{16.67}$ and a\nCLIP score of $\\textbf{0.29}$ on the COCO-30K benchmark, achieving competitive\nresults or even substantially surpassing existing state-of-the-art distillation\ntechniques.", "keywords": ["Image and video generation and manipulation", "Efficient and scalable vision"], "authors_list": ["Thuan Nguyen", "Anh Tran"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f590"}, "filepath": "data/2312.13763.png", "tags": [], "_media_type": "image", "_rand": 0.9996282162459058, "arXiv_link": "https://arxiv.org/abs/2312.13763", "other_link": "", "title": "Align Your Gaussians: Text-to-4D with Dynamic 3D Gaussians and Composed Diffusion Models", "abstract": "Text-guided diffusion models have revolutionized image and video generation\nand have also been successfully used for optimization-based 3D object\nsynthesis. Here, we instead focus on the underexplored text-to-4D setting and\nsynthesize dynamic, animated 3D objects using score distillation methods with\nan additional temporal dimension. Compared to previous work, we pursue a novel\ncompositional generation-based approach, and combine text-to-image,\ntext-to-video, and 3D-aware multiview diffusion models to provide feedback\nduring 4D object optimization, thereby simultaneously enforcing temporal\nconsistency, high-quality visual appearance and realistic geometry. Our method,\ncalled Align Your Gaussians (AYG), leverages dynamic 3D Gaussian Splatting with\ndeformation fields as 4D representation. Crucial to AYG is a novel method to\nregularize the distribution of the moving 3D Gaussians and thereby stabilize\nthe optimization and induce motion. We also propose a motion amplification\nmechanism as well as a new autoregressive synthesis scheme to generate and\ncombine multiple 4D sequences for longer generation. These techniques allow us\nto synthesize vivid dynamic scenes, outperform previous work qualitatively and\nquantitatively and achieve state-of-the-art text-to-4D performance. Due to the\nGaussian 4D representation, different 4D animations can be seamlessly combined,\nas we demonstrate. AYG opens up promising avenues for animation, simulation and\ndigital content creation as well as synthetic data generation.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Huan Ling", "Seung Wook Kim", "Antonio Torralba", "Sanja Fidler", "Karsten Kreis"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f591"}, "filepath": "data/2403.19314.png", "tags": [], "_media_type": "image", "_rand": 0.9999916787374833, "arXiv_link": "https://arxiv.org/abs/2403.19314", "other_link": "https://github.com/CVMI-Lab/Total-Decom.git.", "title": "Total-Decom: Decomposed 3D Scene Reconstruction with Minimal Interaction", "abstract": "Scene reconstruction from multi-view images is a fundamental problem in\ncomputer vision and graphics. Recent neural implicit surface reconstruction\nmethods have achieved high-quality results; however, editing and manipulating\nthe 3D geometry of reconstructed scenes remains challenging due to the absence\nof naturally decomposed object entities and complex object/background\ncompositions. 
In this paper, we present Total-Decom, a novel method for\ndecomposed 3D reconstruction with minimal human interaction. Our approach\nseamlessly integrates the Segment Anything Model (SAM) with hybrid\nimplicit-explicit neural surface representations and a mesh-based\nregion-growing technique for accurate 3D object decomposition. Total-Decom\nrequires minimal human annotations while providing users with real-time control\nover the granularity and quality of decomposition. We extensively evaluate our\nmethod on benchmark datasets and demonstrate its potential for downstream\napplications, such as animation and scene editing. The code is available at\nhttps://github.com/CVMI-Lab/Total-Decom.git.", "keywords": ["Scene analysis and understanding", "Deep learning architectures and techniques"], "authors_list": ["Xiaoyang Lyu", "Chirui Chang", "Peng Dai", "Yangtian Sun", "Xiaojuan Qi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f592"}, "filepath": "data/2403.10066.png", "tags": [], "_media_type": "image", "_rand": 0.9992856896979784, "arXiv_link": "https://arxiv.org/abs/2403.10066", "other_link": "", "title": "Contrastive Pre-Training with Multi-View Fusion for No-Reference Point Cloud Quality Assessment", "abstract": "No-reference point cloud quality assessment (NR-PCQA) aims to automatically\nevaluate the perceptual quality of distorted point clouds without available\nreference, which have achieved tremendous improvements due to the utilization\nof deep neural networks. However, learning-based NR-PCQA methods suffer from\nthe scarcity of labeled data and usually perform suboptimally in terms of\ngeneralization. To solve the problem, we propose a novel contrastive\npre-training framework tailored for PCQA (CoPA), which enables the pre-trained\nmodel to learn quality-aware representations from unlabeled data. To obtain\nanchors in the representation space, we project point clouds with different\ndistortions into images and randomly mix their local patches to form mixed\nimages with multiple distortions. Utilizing the generated anchors, we constrain\nthe pre-training process via a quality-aware contrastive loss following the\nphilosophy that perceptual quality is closely related to both content and\ndistortion. Furthermore, in the model fine-tuning stage, we propose a\nsemantic-guided multi-view fusion module to effectively integrate the features\nof projected images from multiple perspectives. 
Extensive experiments show that\nour method outperforms the state-of-the-art PCQA methods on popular benchmarks.\nFurther investigations demonstrate that CoPA can also benefit existing\nlearning-based PCQA models.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Ziyu Shan", "Yujie Zhang", "Qi Yang", "Haichen Yang", "Yiling Xu", "Jenq-Neng Hwang", "Xiaozhong Xu", "Shan Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f593"}, "filepath": "data/2310.18698.png", "tags": [], "_media_type": "image", "_rand": 0.9991059836249987, "arXiv_link": "http://export.arxiv.org/abs/2310.18698", "other_link": "", "title": "PredToken: Predicting Unknown Tokens and Beyond with Coarse-to-Fine Iterative Decoding", "abstract": "Spatiotemporal predictive learning offers a self-supervised learning paradigm\nthat enables models to learn both spatial and temporal patterns by predicting\nfuture sequences based on historical sequences. Mainstream methods are\ndominated by recurrent units, yet they are limited by their lack of\nparallelization and often underperform in real-world scenarios. To improve\nprediction quality while maintaining computational efficiency, we propose an\ninnovative triplet attention transformer designed to capture both inter-frame\ndynamics and intra-frame static features. Specifically, the model incorporates\nthe Triplet Attention Module (TAM), which replaces traditional recurrent units\nby exploring self-attention mechanisms in temporal, spatial, and channel\ndimensions. In this configuration: (i) temporal tokens contain abstract\nrepresentations of inter-frame, facilitating the capture of inherent temporal\ndependencies; (ii) spatial and channel attention combine to refine the\nintra-frame representation by performing fine-grained interactions across\nspatial and channel dimensions. Alternating temporal, spatial, and\nchannel-level attention allows our approach to learn more complex short- and\nlong-range spatiotemporal dependencies. Extensive experiments demonstrate\nperformance surpassing existing recurrent-based and recurrent-free methods,\nachieving state-of-the-art under multi-scenario examination including moving\nobject trajectory prediction, traffic flow prediction, driving scene\nprediction, and human motion capture.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Xuesong Nie", "Haoyuan Jin", "Yunfeng Yan", "Xi Chen", "Zhihang Zhu", "Donglian Qi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f594"}, "filepath": "data/2403.20231.png", "tags": [], "_media_type": "image", "_rand": 0.9998544168727321, "arXiv_link": "https://arxiv.org/abs/2403.20231", "other_link": "", "title": "U-VAP: User-specified Visual Appearance Personalization via Decoupled Self Augmentation", "abstract": "Concept personalization methods enable large text-to-image models to learn\nspecific subjects (e.g., objects/poses/3D models) and synthesize renditions in\nnew contexts. 
Given that the image references are highly biased towards visual\nattributes, state-of-the-art personalization models tend to overfit the whole\nsubject and cannot disentangle visual characteristics in pixel space. In this\nstudy, we proposed a more challenging setting, namely fine-grained visual\nappearance personalization. Different from existing methods, we allow users to\nprovide a sentence describing the desired attributes. A novel decoupled\nself-augmentation strategy is proposed to generate target-related and\nnon-target samples to learn user-specified visual attributes. These augmented\ndata allow for refining the model's understanding of the target attribute while\nmitigating the impact of unrelated attributes. At the inference stage,\nadjustments are conducted on semantic space through the learned target and\nnon-target embeddings to further enhance the disentanglement of target\nattributes. Extensive experiments on various kinds of visual attributes with\nSOTA personalization methods show the ability of the proposed method to mimic\ntarget visual appearance in novel contexts, thus improving the controllability\nand flexibility of personalization.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["You Wu", "Kean Liu", "Xiaoyue Mi", "Fan Tang", "Juan Cao", "Jintao Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f595"}, "filepath": "data/2306.05493.png", "tags": [], "_media_type": "image", "_rand": 0.9993421364998641, "arXiv_link": "https://arxiv.org/abs/2306.05493", "other_link": "", "title": "OVMR: Open-Vocabulary Recognition with Multi-Modal References", "abstract": "The goal of this paper is open-vocabulary object detection (OVOD)\n$\\unicode{x2013}$ building a model that can detect objects beyond the set of\ncategories seen at training, thus enabling the user to specify categories of\ninterest at inference without the need for model retraining. We adopt a\nstandard two-stage object detector architecture, and explore three ways for\nspecifying novel categories: via language descriptions, via image exemplars, or\nvia a combination of the two. We make three contributions: first, we prompt a\nlarge language model (LLM) to generate informative language descriptions for\nobject classes, and construct powerful text-based classifiers; second, we\nemploy a visual aggregator on image exemplars that can ingest any number of\nimages as input, forming vision-based classifiers; and third, we provide a\nsimple method to fuse information from language descriptions and image\nexemplars, yielding a multi-modal classifier. 
When evaluating on the\nchallenging LVIS open-vocabulary benchmark we demonstrate that: (i) our\ntext-based classifiers outperform all previous OVOD works; (ii) our\nvision-based classifiers perform as well as text-based classifiers in prior\nwork; (iii) using multi-modal classifiers perform better than either modality\nalone; and finally, (iv) our text-based and multi-modal classifiers yield\nbetter performance than a fully-supervised detector.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Zehong Ma", "Shiliang Zhang", "Longhui Wei", "Qi Tian"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f596"}, "filepath": "data/2404.04095.png", "tags": [], "_media_type": "image", "_rand": 0.9993719805960167, "arXiv_link": "https://arxiv.org/abs/2404.04095", "other_link": "https://github.com/Mowenyii/PAE.", "title": "Dynamic Prompt Optimizing for Text-to-Image Generation", "abstract": "Text-to-image generative models, specifically those based on diffusion models\nlike Imagen and Stable Diffusion, have made substantial advancements. Recently,\nthere has been a surge of interest in the delicate refinement of text prompts.\nUsers assign weights or alter the injection time steps of certain words in the\ntext prompts to improve the quality of generated images. However, the success\nof fine-control prompts depends on the accuracy of the text prompts and the\ncareful selection of weights and time steps, which requires significant manual\nintervention. To address this, we introduce the \\textbf{P}rompt\n\\textbf{A}uto-\\textbf{E}diting (PAE) method. Besides refining the original\nprompts for image generation, we further employ an online reinforcement\nlearning strategy to explore the weights and injection time steps of each word,\nleading to the dynamic fine-control prompts. The reward function during\ntraining encourages the model to consider aesthetic score, semantic\nconsistency, and user preferences. Experimental results demonstrate that our\nproposed method effectively improves the original prompts, generating visually\nmore appealing images while maintaining semantic alignment. Code is available\nat https://github.com/Mowenyii/PAE.", "keywords": ["Image and video generation and manipulation", "Large multimodal models and prompting techniques", "Deep learning architectures and techniques"], "authors_list": ["Wenyi Mo", "Tianyu Zhang", "Yalong Bai", "Bing Su", "Ji-Rong Wen", "Qing Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f597"}, "filepath": "data/2404.01424.png", "tags": [], "_media_type": "image", "_rand": 0.9999230193486566, "arXiv_link": "https://arxiv.org/abs/2404.01424", "other_link": "", "title": "DPMesh: Exploiting Diffusion Prior for Occluded Human Mesh Recovery", "abstract": "The recovery of occluded human meshes presents challenges for current methods\ndue to the difficulty in extracting effective image features under severe\nocclusion. 
In this paper, we introduce DPMesh, an innovative framework for\noccluded human mesh recovery that capitalizes on the profound diffusion prior\nabout object structure and spatial relationships embedded in a pre-trained\ntext-to-image diffusion model. Unlike previous methods reliant on conventional\nbackbones for vanilla feature extraction, DPMesh seamlessly integrates the\npre-trained denoising U-Net with potent knowledge as its image backbone and\nperforms a single-step inference to provide occlusion-aware information. To\nenhance the perception capability for occluded poses, DPMesh incorporates\nwell-designed guidance via condition injection, which produces effective\ncontrols from 2D observations for the denoising U-Net. Furthermore, we explore\na dedicated noisy key-point reasoning approach to mitigate disturbances arising\nfrom occlusion and crowded scenarios. This strategy fully unleashes the\nperceptual capability of the diffusion prior, thereby enhancing accuracy.\nExtensive experiments affirm the efficacy of our framework, as we outperform\nstate-of-the-art methods on both occlusion-specific and standard datasets. The\npersuasive results underscore its ability to achieve precise and robust 3D\nhuman mesh recovery, particularly in challenging scenarios involving occlusion\nand crowded scenes.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Yixuan Zhu", "Ao Li", "Yansong Tang", "Wenliang Zhao", "Jie Zhou", "Jiwen Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f598"}, "filepath": "data/2403.18342.png", "tags": [], "_media_type": "image", "_rand": 0.9998635122375552, "arXiv_link": "https://arxiv.org/abs/2403.18342", "other_link": "", "title": "Learning Inclusion Matching for Animation Paint Bucket Colorization", "abstract": "Colorizing line art is a pivotal task in the production of hand-drawn cel\nanimation. This typically involves digital painters using a paint bucket tool\nto manually color each segment enclosed by lines, based on RGB values\npredetermined by a color designer. This frame-by-frame process is both arduous\nand time-intensive. Current automated methods mainly focus on segment matching.\nThis technique migrates colors from a reference to the target frame by aligning\nfeatures within line-enclosed segments across frames. However, issues like\nocclusion and wrinkles in animations often disrupt these direct\ncorrespondences, leading to mismatches. In this work, we introduce a new\nlearning-based inclusion matching pipeline, which directs the network to\ncomprehend the inclusion relationships between segments rather than relying\nsolely on direct visual correspondences. Our method features a two-stage\npipeline that integrates a coarse color warping module with an inclusion\nmatching module, enabling more nuanced and accurate colorization. To facilitate\nthe training of our network, we also develope a unique dataset, referred to as\nPaintBucket-Character. This dataset includes rendered line arts alongside their\ncolorized counterparts, featuring various 3D characters. 
Extensive experiments\ndemonstrate the effectiveness and superiority of our method over existing\ntechniques.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Yuekun Dai", "Shangchen Zhou", "Blake Li", "Chongyi Li", "Chen Change Loy"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f599"}, "filepath": "data/2312.06505.png", "tags": [], "_media_type": "image", "_rand": 0.9997282457312486, "arXiv_link": "https://arxiv.org/abs/2312.06505", "other_link": "https://github.com/Becomebright/GroundVQA.", "title": "Grounded Question-Answering in Long Egocentric Videos", "abstract": "Existing approaches to video understanding, mainly designed for short videos\nfrom a third-person perspective, are limited in their applicability in certain\nfields, such as robotics. In this paper, we delve into open-ended\nquestion-answering (QA) in long, egocentric videos, which allows individuals or\nrobots to inquire about their own past visual experiences. This task presents\nunique challenges, including the complexity of temporally grounding queries\nwithin extensive video content, the high resource demands for precise data\nannotation, and the inherent difficulty of evaluating open-ended answers due to\ntheir ambiguous nature. Our proposed approach tackles these challenges by (i)\nintegrating query grounding and answering within a unified model to reduce\nerror propagation; (ii) employing large language models for efficient and\nscalable data synthesis; and (iii) introducing a close-ended QA task for\nevaluation, to manage answer ambiguity. Extensive experiments demonstrate the\neffectiveness of our method, which also achieves state-of-the-art performance\non the QaEgo4D and Ego4D-NLQ benchmarks. Code, data, and models are available\nat https://github.com/Becomebright/GroundVQA.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Shangzhe Di", "Weidi Xie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f59a"}, "filepath": "data/2312.07865.png", "tags": [], "_media_type": "image", "_rand": 0.9990248482366303, "arXiv_link": "https://arxiv.org/abs/2312.07865", "other_link": "https://github.com/somuchtome/SimAC.", "title": "SimAC: A Simple Anti-Customization Method for Protecting Face Privacy against Text-to-Image Synthesis of Diffusion Models", "abstract": "Despite the success of diffusion-based customization methods on visual\ncontent creation, increasing concerns have been raised about such techniques\nfrom both privacy and political perspectives. To tackle this issue, several\nanti-customization methods have been proposed in very recent months,\npredominantly grounded in adversarial attacks. 
Unfortunately, most of these\nmethods adopt straightforward designs, such as end-to-end optimization with a\nfocus on adversarially maximizing the original training loss, thereby\nneglecting nuanced internal properties intrinsic to the diffusion model, and\neven leading to ineffective optimization in some diffusion time steps.In this\npaper, we strive to bridge this gap by undertaking a comprehensive exploration\nof these inherent properties, to boost the performance of current\nanti-customization approaches. Two aspects of properties are investigated: 1)\nWe examine the relationship between time step selection and the model's\nperception in the frequency domain of images and find that lower time steps can\ngive much more contributions to adversarial noises. This inspires us to propose\nan adaptive greedy search for optimal time steps that seamlessly integrates\nwith existing anti-customization methods. 2) We scrutinize the roles of\nfeatures at different layers during denoising and devise a sophisticated\nfeature-based optimization framework for anti-customization.Experiments on\nfacial benchmarks demonstrate that our approach significantly increases\nidentity disruption, thereby protecting user privacy and copyright. Our code is\navailable at: https://github.com/somuchtome/SimAC.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Feifei Wang", "Zhentao Tan", "Tianyi Wei", "Yue Wu", "Qidong Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f59b"}, "filepath": "data/2405.08533.png", "tags": [], "_media_type": "image", "_rand": 0.9995177717495246, "arXiv_link": "https://arxiv.org/abs/2405.08533", "other_link": "", "title": "DYSON: Dynamic Feature Space Self-Organization for Online Task-Free Class Incremental Learning", "abstract": "Class-incremental learning (CIL) has emerged as a means to learn new classes\nincrementally without catastrophic forgetting of previous classes. Recently,\nCIL has undergone a paradigm shift towards dynamic architectures due to their\nsuperior performance. However, these models are still limited by the following\naspects: (i) Data augmentation (DA), which are tightly coupled with CIL,\nremains under-explored in dynamic architecture scenarios. (ii) Feature\nrepresentation. The discriminativeness of dynamic feature are sub-optimal and\npossess potential for refinement. (iii) Classifier. The misalignment between\ndynamic feature and classifier constrains the capabilities of the model. To\ntackle the aforementioned drawbacks, we propose the Dynamic Feature Learning\nand Matching (DFLM) model in this paper from above three perspectives.\nSpecifically, we firstly introduce class weight information and non-stationary\nfunctions to extend the mix DA method for dynamically adjusting the focus on\nmemory during training. Then, von Mises-Fisher (vMF) classifier is employed to\neffectively model the dynamic feature distribution and implicitly learn their\ndiscriminative properties. Finally, the matching loss is proposed to facilitate\nthe alignment between the learned dynamic features and the classifier by\nminimizing the distribution distance. 
Extensive experiments on CIL benchmarks\nvalidate that our proposed model achieves significant performance improvements\nover existing methods.", "keywords": [], "authors_list": ["Yuhang He", "YingJie Chen", "Yuhan Jin", "Songlin Dong", "Xing Wei", "Yihong Gong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f59c"}, "filepath": "data/2403.08182.png", "tags": [], "_media_type": "image", "_rand": 0.9995563150197734, "arXiv_link": "https://arxiv.org/abs/2403.08182", "other_link": "", "title": "G$^3$-LQ: Marrying Hyperbolic Alignment with Explicit Semantic-Geometric Modeling for 3D Visual Grounding", "abstract": "3D visual grounding aims to automatically locate the 3D region of the\nspecified object given the corresponding textual description. Existing works\nfail to distinguish similar objects especially when multiple referred objects\nare involved in the description. Experiments show that direct matching of\nlanguage and visual modal has limited capacity to comprehend complex\nreferential relationships in utterances. It is mainly due to the interference\ncaused by redundant visual information in cross-modal alignment. To strengthen\nrelation-orientated mapping between different modalities, we propose SeCG, a\nsemantic-enhanced relational learning model based on a graph network with our\ndesigned memory graph attention layer. Our method replaces original\nlanguage-independent encoding with cross-modal encoding in visual analysis.\nMore text-related feature expressions are obtained through the guidance of\nglobal semantics and implicit relationships. Experimental results on ReferIt3D\nand ScanRefer benchmarks show that the proposed method outperforms the existing\nstate-of-the-art methods, particularly improving the localization performance\nfor the multi-relation challenges.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models", "Scene analysis and understanding"], "authors_list": ["Yuan Wang", "Yali Li", "Shengjin Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f59d"}, "filepath": "data/2306.02416.png", "tags": [], "_media_type": "image", "_rand": 0.9992661066663852, "arXiv_link": "https://arxiv.org/abs/2306.02416", "other_link": "https://github.com/yhygao/universal-medical-image-segmentation.", "title": "Training Like a Medical Resident: Context-Prior Learning Toward Universal Medical Image Segmentation", "abstract": "A major focus of clinical imaging workflow is disease diagnosis and\nmanagement, leading to medical imaging datasets strongly tied to specific\nclinical objectives. This scenario has led to the prevailing practice of\ndeveloping task-specific segmentation models, without gaining insights from\nwidespread imaging cohorts. Inspired by the training program of medical\nradiology residents, we propose a shift towards universal medical image\nsegmentation, a paradigm aiming to build medical image understanding foundation\nmodels by leveraging the diversity and commonality across clinical targets,\nbody regions, and imaging modalities. 
Towards this goal, we develop Hermes, a\nnovel context-prior learning approach to address the challenges of data\nheterogeneity and annotation differences in medical image segmentation. In a\nlarge collection of eleven diverse datasets (2,438 3D images) across five\nmodalities (CT, PET, T1, T2 and cine MRI) and multiple body regions, we\ndemonstrate the merit of the universal paradigm over the traditional paradigm\non addressing multiple tasks within a single model. By exploiting the synergy\nacross tasks, Hermes achieves state-of-the-art performance on all testing\ndatasets and shows superior model scalability. Results on two additional\ndatasets reveals Hermes' strong performance for transfer learning, incremental\nlearning, and generalization to downstream tasks. Hermes's learned priors\ndemonstrate an appealing trait to reflect the intricate relations among tasks\nand modalities, which aligns with the established anatomical and imaging\nprinciples in radiology. The code is available:\nhttps://github.com/yhygao/universal-medical-image-segmentation.", "keywords": ["Efficient and scalable vision", "Medical imaging and biological vision"], "authors_list": ["Yunhe Gao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f59e"}, "filepath": "data/2403.19066.png", "tags": [], "_media_type": "image", "_rand": 0.9990122793539727, "arXiv_link": "https://arxiv.org/abs/2403.19066", "other_link": "https://vishal-s-p.github.io/projects/2023/generative_quanta_color.html.", "title": "Generative Quanta Color Imaging", "abstract": "The astonishing development of single-photon cameras has created an\nunprecedented opportunity for scientific and industrial imaging. However, the\nhigh data throughput generated by these 1-bit sensors creates a significant\nbottleneck for low-power applications. In this paper, we explore the\npossibility of generating a color image from a single binary frame of a\nsingle-photon camera. We evidently find this problem being particularly\ndifficult to standard colorization approaches due to the substantial degree of\nexposure variation. The core innovation of our paper is an exposure synthesis\nmodel framed under a neural ordinary differential equation (Neural ODE) that\nallows us to generate a continuum of exposures from a single observation. This\ninnovation ensures consistent exposure in binary images that colorizers take\non, resulting in notably enhanced colorization. We demonstrate applications of\nthe method in single-image and burst colorization and show superior generative\nperformance over baselines. Project website can be found at\nhttps://vishal-s-p.github.io/projects/2023/generative_quanta_color.html.", "keywords": ["Efficient and scalable vision", "Low-level vision"], "authors_list": ["Vishal Purohit", "Junjie Luo", "Yiheng Chi", "Qi Guo", "Stanley H. 
Chan", "Qiang Qiu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f59f"}, "filepath": "data/2403.06403.png", "tags": [], "_media_type": "image", "_rand": 0.9990004851852211, "arXiv_link": "https://arxiv.org/abs/2403.06403", "other_link": "", "title": "MirageRoom: 3D Scene Segmentation with 2D Pre-trained Models by Mirage Projection", "abstract": "Recent success of vision foundation models have shown promising performance\nfor the 2D perception tasks. However, it is difficult to train a 3D foundation\nnetwork directly due to the limited dataset and it remains under explored\nwhether existing foundation models can be lifted to 3D space seamlessly. In\nthis paper, we present PointSeg, a novel training-free paradigm that leverages\noff-the-shelf vision foundation models to address 3D scene perception tasks.\nPointSeg can segment anything in 3D scene by acquiring accurate 3D prompts to\nalign their corresponding pixels across frames. Concretely, we design a\ntwo-branch prompts learning structure to construct the 3D point-box prompts\npairs, combining with the bidirectional matching strategy for accurate point\nand proposal prompts generation. Then, we perform the iterative post-refinement\nadaptively when cooperated with different vision foundation models. Moreover,\nwe design a affinity-aware merging algorithm to improve the final ensemble\nmasks. PointSeg demonstrates impressive segmentation performance across various\ndatasets, all without training. Specifically, our approach significantly\nsurpasses the state-of-the-art specialist model by 13.4$\\%$, 11.3$\\%$, and\n12$\\%$ mAP on ScanNet, ScanNet++, and KITTI-360 datasets, respectively. On top\nof that, PointSeg can incorporate with various segmentation models and even\nsurpasses the supervised methods.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Haowen Sun", "Yueqi Duan", "Juncheng Yan", "Yifan Liu", "Jiwen Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5a0"}, "filepath": "data/2210.06998.png", "tags": [], "_media_type": "image", "_rand": 0.9992931897630656, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2210.06998", "other_link": "", "title": "FakeInversion: Learning to Detect Images from Unseen Text-to-Image Models by Inverting Stable Diffusion", "abstract": "Text-to-image generation models that generate images based on prompt\ndescriptions have attracted an increasing amount of attention during the past\nfew months. Despite their encouraging performance, these models raise concerns\nabout the misuse of their generated fake images. To tackle this problem, we\npioneer a systematic study on the detection and attribution of fake images\ngenerated by text-to-image generation models. Concretely, we first build a\nmachine learning classifier to detect the fake images generated by various\ntext-to-image generation models. We then attribute these fake images to their\nsource models, such that model owners can be held responsible for their models'\nmisuse. We further investigate how prompts that generate fake images affect\ndetection and attribution. 
We conduct extensive experiments on four popular\ntext-to-image generation models, including DALL$\\cdot$E 2, Stable Diffusion,\nGLIDE, and Latent Diffusion, and two benchmark prompt-image datasets. Empirical\nresults show that (1) fake images generated by various models can be\ndistinguished from real ones, as there exists a common artifact shared by fake\nimages from different models; (2) fake images can be effectively attributed to\ntheir source models, as different models leave unique fingerprints in their\ngenerated images; (3) prompts with the ``person'' topic or a length between 25\nand 75 enable models to generate fake images with higher authenticity. All\nfindings contribute to the community's insight into the threats caused by\ntext-to-image generation models. We appeal to the community's consideration of\nthe counterpart solutions, like ours, against the rapidly-evolving fake image\ngeneration.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["George Cazenavette", "Avneesh Sud", "Thomas Leung", "Ben Usman"], "category_name": "Cryptography and Security", "all_categories": ["Cryptography and Security", "Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5a1"}, "filepath": "data/2404.04936.png", "tags": [], "_media_type": "image", "_rand": 0.9995196010177628, "arXiv_link": "https://arxiv.org/abs/2404.04936", "other_link": "", "title": "Bootstrapping Chest CT Image Understanding by Distilling Knowledge from X-ray Expert Models", "abstract": "Radiologists highly desire fully automated versatile AI for medical imaging\ninterpretation. However, the lack of extensively annotated large-scale\nmulti-disease datasets has hindered the achievement of this goal. In this\npaper, we explore the feasibility of leveraging language as a naturally\nhigh-quality supervision for chest CT imaging. In light of the limited\navailability of image-report pairs, we bootstrap the understanding of 3D chest\nCT images by distilling chest-related diagnostic knowledge from an extensively\npre-trained 2D X-ray expert model. Specifically, we propose a language-guided\nretrieval method to match each 3D CT image with its semantically closest 2D\nX-ray image, and perform pair-wise and semantic relation knowledge\ndistillation. Subsequently, we use contrastive learning to align images and\nreports within the same patient while distinguishing them from the other\npatients. However, the challenge arises when patients have similar semantic\ndiagnoses, such as healthy patients, potentially confusing if treated as\nnegatives. We introduce a robust contrastive learning that identifies and\ncorrects these false negatives. We train our model with over 12,000 pairs of\nchest CT images and radiology reports. Extensive experiments across multiple\nscenarios, including zero-shot learning, report generation, and fine-tuning\nprocesses, demonstrate the model's feasibility in interpreting chest CT images.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models"], "authors_list": ["Weiwei Cao", "Jianpeng Zhang", "Yingda Xia", "Tony C. W. 
MOK", "Zi Li", "Xianghua Ye", "Le Lu", "Jian Zheng", "Yuxing Tang", "Ling Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5a2"}, "filepath": "data/2404.03477.png", "tags": [], "_media_type": "image", "_rand": 0.9990236885976113, "arXiv_link": "https://arxiv.org/abs/2404.03477", "other_link": "", "title": "Towards Automated Movie Trailer Generation", "abstract": "Movie trailers are an essential tool for promoting films and attracting\naudiences. However, the process of creating trailers can be time-consuming and\nexpensive. To streamline this process, we propose an automatic trailer\ngeneration framework that generates plausible trailers from a full movie by\nautomating shot selection and composition. Our approach draws inspiration from\nmachine translation techniques and models the movies and trailers as sequences\nof shots, thus formulating the trailer generation problem as a\nsequence-to-sequence task. We introduce Trailer Generation Transformer (TGT), a\ndeep-learning framework utilizing an encoder-decoder architecture. TGT movie\nencoder is tasked with contextualizing each movie shot representation via\nself-attention, while the autoregressive trailer decoder predicts the feature\nrepresentation of the next trailer shot, accounting for the relevance of shots'\ntemporal order in trailers. Our TGT significantly outperforms previous methods\non a comprehensive suite of metrics.", "keywords": [], "authors_list": ["Dawit Argaw Argaw", "Mattia Soldan", "Alejandro Pardo", "Chen Zhao", "Fabian Caba Heilbron", "Joon Chung", "Bernard Ghanem"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5a3"}, "filepath": "data/2405.17104v1.png", "tags": [], "_media_type": "image", "_rand": 0.999594267739788, "arXiv_link": "https://arxiv.org/html/2405.17104v1", "other_link": "", "title": "Investigating Compositional Challenges in Vision-Language Models for Visual Grounding", "abstract": "Visual grounding is an essential tool that links user-provided text queries\nwith query-specific regions within an image. Despite advancements in visual\ngrounding models, their ability to comprehend complex queries remains limited.\nTo overcome this limitation, we introduce LLM-Optic, an innovative method that\nutilizes Large Language Models (LLMs) as an optical lens to enhance existing\nvisual grounding models in comprehending complex text queries involving\nintricate text structures, multiple objects, or object spatial relationships,\nsituations that current models struggle with. LLM-Optic first employs an LLM as\na Text Grounder to interpret complex text queries and accurately identify\nobjects the user intends to locate. Then a pre-trained visual grounding model\nis used to generate candidate bounding boxes given the refined query by the\nText Grounder. After that, LLM-Optic annotates the candidate bounding boxes\nwith numerical marks to establish a connection between text and specific image\nregions, thereby linking two distinct modalities. Finally, it employs a Large\nMultimodal Model (LMM) as a Visual Grounder to select the marked candidate\nobjects that best correspond to the original text query. 
Through LLM-Optic, we\nhave achieved universal visual grounding, which allows for the detection of\narbitrary objects specified by arbitrary human language input. Importantly, our\nmethod achieves this enhancement without requiring additional training or\nfine-tuning. Extensive experiments across various challenging benchmarks\ndemonstrate that LLM-Optic achieves state-of-the-art zero-shot visual grounding\ncapabilities.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Yunan Zeng", "Yan Huang", "Jinjin Zhang", "Zequn Jie", "Zhenhua Chai", "Liang Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5a4"}, "filepath": "data/2403.14513.png", "tags": [], "_media_type": "image", "_rand": 0.9999220617998794, "arXiv_link": "https://arxiv.org/abs/2403.14513", "other_link": "https://github.com/LinlyAC/VDT-AGPReID", "title": "View-decoupled Transformer for Person Re-identification under Aerial-ground Camera Network", "abstract": "Existing person re-identification methods have achieved remarkable advances\nin appearance-based identity association across homogeneous cameras, such as\nground-ground matching. However, as a more practical scenario, aerial-ground\nperson re-identification (AGPReID) among heterogeneous cameras has received\nminimal attention. To alleviate the disruption of discriminative identity\nrepresentation by dramatic view discrepancy as the most significant challenge\nin AGPReID, the view-decoupled transformer (VDT) is proposed as a simple yet\neffective framework. Two major components are designed in VDT to decouple\nview-related and view-unrelated features, namely hierarchical subtractive\nseparation and orthogonal loss, where the former separates these two features\ninside the VDT, and the latter constrains these two to be independent. In\naddition, we contribute a large-scale AGPReID dataset called CARGO, consisting\nof five/eight aerial/ground cameras, 5,000 identities, and 108,563 images.\nExperiments on two datasets show that VDT is a feasible and effective solution\nfor AGPReID, surpassing the previous method on mAP/Rank1 by up to 5.0%/2.7% on\nCARGO and 3.7%/5.2% on AG-ReID, keeping the same magnitude of computational\ncomplexity. Our project is available at https://github.com/LinlyAC/VDT-AGPReID", "keywords": ["Biometrics and human analysis"], "authors_list": ["Quan Zhang", "Lei Wang", "Vishal M. Patel", "Xiaohua Xie", "Jianhuang Lai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5a5"}, "filepath": "data/2404.05662.png", "tags": [], "_media_type": "image", "_rand": 0.9991828559811089, "arXiv_link": "https://arxiv.org/abs/2404.05662", "other_link": "", "title": "Towards Accurate Post-training Quantization for Diffusion Models", "abstract": "With the advancement of diffusion models (DMs) and the substantially\nincreased computational requirements, quantization emerges as a practical\nsolution to obtain compact and efficient low-bit DMs. However, the highly\ndiscrete representation leads to severe accuracy degradation, hindering the\nquantization of diffusion models to ultra-low bit-widths. 
This paper proposes a\nnovel quantization-aware training approach for DMs, namely BinaryDM. The\nproposed method pushes DMs' weights toward accurate and efficient binarization,\nconsidering the representation and computation properties. From the\nrepresentation perspective, we present a Learnable Multi-basis Binarizer (LMB)\nto recover the representations generated by the binarized DM. The LMB enhances\ndetailed information through the flexible combination of dual binary bases\nwhile applying to parameter-sparse locations of DM architectures to achieve\nminor burdens. From the optimization perspective, a Low-rank Representation\nMimicking (LRM) is applied to assist the optimization of binarized DMs. The LRM\nmimics the representations of full-precision DMs in low-rank space, alleviating\nthe direction ambiguity of the optimization process caused by fine-grained\nalignment. Moreover, a quick progressive warm-up is applied to BinaryDM,\navoiding convergence difficulties by layerwisely progressive quantization at\nthe beginning of training. Comprehensive experiments demonstrate that BinaryDM\nachieves significant accuracy and efficiency gains compared to SOTA\nquantization methods of DMs under ultra-low bit-widths. With 1.1-bit weight and\n4-bit activation (W1.1A4), BinaryDM achieves as low as 7.11 FID and saves the\nperformance from collapse (baseline FID 39.69). As the first binarization\nmethod for diffusion models, W1.1A4 BinaryDM achieves impressive 9.3 times OPs\nand 24.8 times model size savings, showcasing its substantial potential for\nedge deployment.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Changyuan Wang", "Ziwei Wang", "Xiuwei Xu", "Yansong Tang", "Jie Zhou", "Jiwen Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5a6"}, "filepath": "data/2312.04913.png", "tags": [], "_media_type": "image", "_rand": 0.9991763692130645, "arXiv_link": "https://arxiv.org/abs/2312.04913", "other_link": "", "title": "Improving Transferable Targeted Adversarial Attacks with Model Self-Enhancement", "abstract": "Current Visual-Language Pre-training (VLP) models are vulnerable to\nadversarial examples. These adversarial examples present substantial security\nrisks to VLP models, as they can leverage inherent weaknesses in the models,\nresulting in incorrect predictions. In contrast to white-box adversarial\nattacks, transfer attacks (where the adversary crafts adversarial examples on a\nwhite-box model to fool another black-box model) are more reflective of\nreal-world scenarios, thus making them more meaningful for research. By\nsummarizing and analyzing existing research, we identified two factors that can\ninfluence the efficacy of transfer attacks on VLP models: inter-modal\ninteraction and data diversity. Based on these insights, we propose a\nself-augment-based transfer attack method, termed SA-Attack. Specifically,\nduring the generation of adversarial images and adversarial texts, we apply\ndifferent data augmentation methods to the image modality and text modality,\nrespectively, with the aim of improving the adversarial transferability of the\ngenerated adversarial images and texts. Experiments conducted on the FLickr30K\nand COCO datasets have validated the effectiveness of our method. 
Our code will\nbe available after this paper is accepted.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Han Wu", "Guanyan Ou", "Weibin Wu", "Zibin Zheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Cryptography and Security", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5a7"}, "filepath": "data/2310.09469.png", "tags": [], "_media_type": "image", "_rand": 0.9991244349720839, "arXiv_link": "https://arxiv.org/abs/2310.09469", "other_link": "", "title": "Towards More Accurate Diffusion Model Acceleration with A Timestep Aligner", "abstract": "A diffusion model, which is formulated to produce an image using thousands of\ndenoising steps, usually suffers from a slow inference speed. Existing\nacceleration algorithms simplify the sampling by skipping most steps yet\nexhibit considerable performance degradation. By viewing the generation of\ndiffusion models as a discretized integrating process, we argue that the\nquality drop is partly caused by applying an inaccurate integral direction to a\ntimestep interval. To rectify this issue, we propose a timestep aligner that\nhelps find a more accurate integral direction for a particular interval at the\nminimum cost. Specifically, at each denoising step, we replace the original\nparameterization by conditioning the network on a new timestep, which is\nobtained by aligning the sampling distribution to the real distribution.\nExtensive experiments show that our plug-in design can be trained efficiently\nand boost the inference performance of various state-of-the-art acceleration\nmethods, especially when there are few denoising steps. For example, when using\n10 denoising steps on the popular LSUN Bedroom dataset, we improve the FID of\nDDIM from 9.65 to 6.07, simply by adopting our method for a more appropriate\nset of timesteps. Code will be made publicly available.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Mengfei Xia", "Yujun Shen", "Changsong Lei", "Yu Zhou", "Deli Zhao", "Ran Yi", "Wenping Wang", "Yong-Jin Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5a8"}, "filepath": "data/2404.14412v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991476764020071, "arXiv_link": "https://arxiv.org/abs/2404.14412v1", "other_link": "", "title": "AutoAD III: The Prequel -- Back to the Pixels", "abstract": "Generating Audio Description (AD) for movies is a challenging task that\nrequires fine-grained visual understanding and an awareness of the characters\nand their names. Currently, visual language models for AD generation are\nlimited by a lack of suitable training data, and also their evaluation is\nhampered by using performance measures not specialized to the AD domain. In\nthis paper, we make three contributions: (i) We propose two approaches for\nconstructing AD datasets with aligned video data, and build training and\nevaluation datasets using these. 
These datasets will be publicly released; (ii)\nWe develop a Q-former-based architecture which ingests raw video and generates\nAD, using frozen pre-trained visual encoders and large language models; and\n(iii) We provide new evaluation metrics to benchmark AD quality that are\nwell-matched to human performance. Taken together, we improve the state of the\nart on AD generation.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Tengda Han", "Max Bain", "Arsha Nagrani", "G\u00fcl Varol", "Weidi Xie", "Andrew Zisserman"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5a9"}, "filepath": "data/2402.18330.png", "tags": [], "_media_type": "image", "_rand": 0.9994416119240358, "arXiv_link": "https://arxiv.org/abs/2402.18330", "other_link": "", "title": "Attention-Propagation Network for Egocentric Heatmap to 3D Pose Lifting", "abstract": "We present EgoTAP, a heatmap-to-3D pose lifting method for highly accurate\nstereo egocentric 3D pose estimation. Severe self-occlusion and out-of-view\nlimbs in egocentric camera views make accurate pose estimation a challenging\nproblem. To address the challenge, prior methods employ joint\nheatmaps-probabilistic 2D representations of the body pose, but heatmap-to-3D\npose conversion still remains an inaccurate process. We propose a novel\nheatmap-to-3D lifting method composed of the Grid ViT Encoder and the\nPropagation Network. The Grid ViT Encoder summarizes joint heatmaps into\neffective feature embedding using self-attention. Then, the Propagation Network\nestimates the 3D pose by utilizing skeletal information to better estimate the\nposition of obscure joints. Our method significantly outperforms the previous\nstate-of-the-art qualitatively and quantitatively demonstrated by a 23.9\\%\nreduction of error in an MPJPE metric. Our source code is available in GitHub.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Taeho Kang", "Youngki Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5aa"}, "filepath": "data/2311.05304v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999259070161034, "arXiv_link": "https://arxiv.org/abs/2311.05304v2", "other_link": "", "title": "Data Valuation and Detections in Federated Learning", "abstract": "Federated Learning (FL) enables collaborative model training while preserving\nthe privacy of raw data. A challenge in this framework is the fair and\nefficient valuation of data, which is crucial for incentivizing clients to\ncontribute high-quality data in the FL task. In scenarios involving numerous\ndata clients within FL, it is often the case that only a subset of clients and\ndatasets are pertinent to a specific learning task, while others might have\neither a negative or negligible impact on the model training process. This\npaper introduces a novel privacy-preserving method for evaluating client\ncontributions and selecting relevant datasets without a pre-specified training\nalgorithm in an FL task. Our proposed approach FedBary, utilizes Wasserstein\ndistance within the federated context, offering a new solution for data\nvaluation in the FL framework. 
This method ensures transparent data valuation\nand efficient computation of the Wasserstein barycenter and reduces the\ndependence on validation datasets. Through extensive empirical experiments and\ntheoretical analyses, we demonstrate the potential of this data valuation\nmethod as a promising avenue for FL research.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Wenqian Li", "Shuran Fu", "Fengrui Zhang", "Yan Pang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Cryptography and Security"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5ab"}, "filepath": "data/2403.19022.png", "tags": [], "_media_type": "image", "_rand": 0.999013473233333, "arXiv_link": "https://arxiv.org/abs/2403.19022", "other_link": "", "title": "WALT3D: Generating Realistic Training Data from Time-Lapse Imagery for Reconstructing Dynamic Objects under Occlusion", "abstract": "Current methods for 2D and 3D object understanding struggle with severe\nocclusions in busy urban environments, partly due to the lack of large-scale\nlabeled ground-truth annotations for learning occlusion. In this work, we\nintroduce a novel framework for automatically generating a large, realistic\ndataset of dynamic objects under occlusions using freely available time-lapse\nimagery. By leveraging off-the-shelf 2D (bounding box, segmentation, keypoint)\nand 3D (pose, shape) predictions as pseudo-groundtruth, unoccluded 3D objects\nare identified automatically and composited into the background in a clip-art\nstyle, ensuring realistic appearances and physically accurate occlusion\nconfigurations. The resulting clip-art image with pseudo-groundtruth enables\nefficient training of object reconstruction methods that are robust to\nocclusions. Our method demonstrates significant improvements in both 2D and 3D\nreconstruction, particularly in scenarios with heavily occluded objects like\nvehicles and people in urban scenes.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Khiem Vuong", "N. Dinesh Reddy", "Robert Tamburo", "Srinivasa G. Narasimhan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5ac"}, "filepath": "data/2312.03052.png", "tags": [], "_media_type": "image", "_rand": 0.9994203102394452, "arXiv_link": "https://arxiv.org/abs/2312.03052", "other_link": "", "title": "Visual Program Distillation: Distilling Tools and Programmatic Reasoning into Vision-Language Models", "abstract": "Solving complex visual tasks such as \"Who invented the musical instrument on\nthe right?\" involves a composition of skills: understanding space, recognizing\ninstruments, and also retrieving prior knowledge. Recent work shows promise by\ndecomposing such tasks using a large language model (LLM) into an executable\nprogram that invokes specialized vision models. However, generated programs are\nerror-prone: they omit necessary steps, include spurious ones, and are unable\nto recover when the specialized models give incorrect outputs. Moreover, they\nrequire loading multiple models, incurring high latency and computation costs.\nWe propose Visual Program Distillation (VPD), an instruction tuning framework\nthat produces a vision-language model (VLM) capable of solving complex visual\ntasks with a single forward pass. 
VPD distills the reasoning ability of LLMs by\nusing them to sample multiple candidate programs, which are then executed and\nverified to identify a correct one. It translates each correct program into a\nlanguage description of the reasoning steps, which are then distilled into a\nVLM. Extensive experiments show that VPD improves the VLM's ability to count,\nunderstand spatial relations, and reason compositionally. Our VPD-trained\nPaLI-X outperforms all prior VLMs, achieving state-of-the-art performance\nacross complex vision tasks, including MMBench, OK-VQA, A-OKVQA, TallyQA, POPE,\nand Hateful Memes. An evaluation with human annotators also confirms that VPD\nimproves model response factuality and consistency. Finally, experiments on\ncontent moderation demonstrate that VPD is also helpful for adaptation to\nreal-world applications with limited data.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Yushi Hu", "Otilia Stretcu", "Chun-Ta Lu", "Krishnamurthy Viswanathan", "Kenji Hata", "Enming Luo", "Ranjay Krishna", "Ariel Fuxman"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5ad"}, "filepath": "data/2404.13534.png", "tags": [], "_media_type": "image", "_rand": 0.9996785389096299, "arXiv_link": "https://arxiv.org/abs/2404.13534", "other_link": "", "title": "IQ-VFI: Implicit Quadratic Motion Estimation for Video Frame Interpolation", "abstract": "With the advancement of AIGC, video frame interpolation (VFI) has become a\ncrucial component in existing video generation frameworks, attracting\nwidespread research interest. For the VFI task, the motion estimation between\nneighboring frames plays a crucial role in avoiding motion ambiguity. However,\nexisting VFI methods always struggle to accurately predict the motion\ninformation between consecutive frames, and this imprecise estimation leads to\nblurred and visually incoherent interpolated frames. In this paper, we propose\na novel diffusion framework, motion-aware latent diffusion models (MADiff),\nwhich is specifically designed for the VFI task. By incorporating motion priors\nbetween the conditional neighboring frames with the target interpolated frame\npredicted throughout the diffusion sampling procedure, MADiff progressively\nrefines the intermediate outcomes, culminating in generating both visually\nsmooth and realistic results. 
Extensive experiments conducted on benchmark\ndatasets demonstrate that our method achieves state-of-the-art performance\nsignificantly outperforming existing approaches, especially under challenging\nscenarios involving dynamic textures with complex motion.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Mengshun Hu", "Kui Jiang", "Zhihang Zhong", "Zheng Wang", "Yinqiang Zheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5ae"}, "filepath": "data/2404.03656.png", "tags": [], "_media_type": "image", "_rand": 0.9993953206498631, "arXiv_link": "https://arxiv.org/abs/2404.03656", "other_link": "", "title": "MVD-Fusion: Single-view 3D via Depth-consistent Multi-view Generation", "abstract": "We present MVD-Fusion: a method for single-view 3D inference via generative\nmodeling of multi-view-consistent RGB-D images. While recent methods pursuing\n3D inference advocate learning novel-view generative models, these generations\nare not 3D-consistent and require a distillation process to generate a 3D\noutput. We instead cast the task of 3D inference as directly generating\nmutually-consistent multiple views and build on the insight that additionally\ninferring depth can provide a mechanism for enforcing this consistency.\nSpecifically, we train a denoising diffusion model to generate multi-view RGB-D\nimages given a single RGB input image and leverage the (intermediate noisy)\ndepth estimates to obtain reprojection-based conditioning to maintain\nmulti-view consistency. We train our model using large-scale synthetic dataset\nObajverse as well as the real-world CO3D dataset comprising of generic camera\nviewpoints. We demonstrate that our approach can yield more accurate synthesis\ncompared to recent state-of-the-art, including distillation-based 3D inference\nand prior multi-view generation methods. We also evaluate the geometry induced\nby our multi-view depth prediction and find that it yields a more accurate\nrepresentation than other direct 3D inference approaches.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Hanzhe Hu", "Zhizhuo Zhou", "Varun Jampani", "Shubham Tulsiani"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5af"}, "filepath": "data/2402.13250.png", "tags": [], "_media_type": "image", "_rand": 0.9998300736651239, "arXiv_link": "https://arxiv.org/abs/2402.13250", "other_link": "https://sites.google.com/view/vidrecap", "title": "Video ReCap: Recursive Captioning of Hour-Long Videos", "abstract": "Most video captioning models are designed to process short video clips of few\nseconds and output text describing low-level visual concepts (e.g., objects,\nscenes, atomic actions). However, most real-world videos last for minutes or\nhours and have a complex hierarchical structure spanning different temporal\ngranularities. We propose Video ReCap, a recursive video captioning model that\ncan process video inputs of dramatically different lengths (from 1 second to 2\nhours) and output video captions at multiple hierarchy levels. The recursive\nvideo-language architecture exploits the synergy between different video\nhierarchies and can process hour-long videos efficiently. 
We utilize a\ncurriculum learning training scheme to learn the hierarchical structure of\nvideos, starting from clip-level captions describing atomic actions, then\nfocusing on segment-level descriptions, and concluding with generating\nsummaries for hour-long videos. Furthermore, we introduce Ego4D-HCap dataset by\naugmenting Ego4D with 8,267 manually collected long-range video summaries. Our\nrecursive model can flexibly generate captions at different hierarchy levels\nwhile also being useful for other complex video understanding tasks, such as\nVideoQA on EgoSchema. Data, code, and models are available at:\nhttps://sites.google.com/view/vidrecap", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Md Mohaiminul Islam", "Vu Bao Ngan Ho", "Xitong Yang", "Tushar Nagarajan", "Lorenzo Torresani", "Gedas Bertasius"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5b0"}, "filepath": "data/2405.02962v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997329945040215, "arXiv_link": "https://arxiv.org/html/2405.02962v1", "other_link": "", "title": "SuperSVG: Superpixel-based Scalable Vector Graphics Synthesis", "abstract": "We propose a novel method, VectorPainter, for the task of stylized vector\ngraphics synthesis. Given a text prompt and a reference style image,\nVectorPainter generates a vector graphic that aligns in content with the text\nprompt and remains faithful in style to the reference image. We recognize that\nthe key to this task lies in fully leveraging the intrinsic properties of\nvector graphics. Innovatively, we conceptualize the stylization process as the\nrearrangement of vectorized strokes extracted from the reference image.\nVectorPainter employs an optimization-based pipeline. It begins by extracting\nvectorized strokes from the reference image, which are then used to initialize\nthe synthesis process. To ensure fidelity to the reference style, a novel style\npreservation loss is introduced. Extensive experiments have been conducted to\ndemonstrate that our method is capable of aligning with the text description\nwhile remaining faithful to the reference image.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Teng Hu", "Ran Yi", "Baihong Qian", "Jiangning Zhang", "Paul L. Rosin", "Yu-Kun Lai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5b1"}, "filepath": "data/2403.19220.png", "tags": [], "_media_type": "image", "_rand": 0.9992628791225837, "arXiv_link": "https://arxiv.org/abs/2403.19220", "other_link": "", "title": "GeoAuxNet: Towards Universal 3D Representation Learning for Multi-sensor Point Clouds", "abstract": "Point clouds captured by different sensors such as RGB-D cameras and LiDAR\npossess non-negligible domain gaps. Most existing methods design different\nnetwork architectures and train separately on point clouds from various\nsensors. Typically, point-based methods achieve outstanding performances on\neven-distributed dense point clouds from RGB-D cameras, while voxel-based\nmethods are more efficient for large-range sparse LiDAR point clouds. 
In this\npaper, we propose geometry-to-voxel auxiliary learning to enable voxel\nrepresentations to access point-level geometric information, which supports\nbetter generalisation of the voxel-based backbone with additional\ninterpretations of multi-sensor point clouds. Specifically, we construct\nhierarchical geometry pools generated by a voxel-guided dynamic point network,\nwhich efficiently provide auxiliary fine-grained geometric information adapted\nto different stages of voxel features. We conduct experiments on joint\nmulti-sensor datasets to demonstrate the effectiveness of GeoAuxNet. Enjoying\nelaborate geometric information, our method outperforms other models\ncollectively trained on multi-sensor datasets, and achieve competitive results\nwith the-state-of-art experts on each single dataset.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Shengjun Zhang", "Xin Fei", "Yueqi Duan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5b2"}, "filepath": "data/2312.07937.png", "tags": [], "_media_type": "image", "_rand": 0.9991270952628861, "arXiv_link": "https://arxiv.org/abs/2312.07937", "other_link": "", "title": "BOTH2Hands: Inferring 3D Hands from Both Text Prompts and Body Dynamics", "abstract": "The recently emerging text-to-motion advances have spired numerous attempts\nfor convenient and interactive human motion generation. Yet, existing methods\nare largely limited to generating body motions only without considering the\nrich two-hand motions, let alone handling various conditions like body dynamics\nor texts. To break the data bottleneck, we propose BOTH57M, a novel multi-modal\ndataset for two-hand motion generation. Our dataset includes accurate motion\ntracking for the human body and hands and provides pair-wised finger-level hand\nannotations and body descriptions. We further provide a strong baseline method,\nBOTH2Hands, for the novel task: generating vivid two-hand motions from both\nimplicit body dynamics and explicit text prompts. We first warm up two parallel\nbody-to-hand and text-to-hand diffusion models and then utilize the\ncross-attention transformer for motion blending. Extensive experiments and\ncross-validations demonstrate the effectiveness of our approach and dataset for\ngenerating convincing two-hand motions from the hybrid body-and-textual\nconditions. Our dataset and code will be disseminated to the community for\nfuture research.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Wenqian Zhang", "Molin Huang", "Yuxuan Zhou", "Juze Zhang", "Jingyi Yu", "Jingya Wang", "Lan Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5b3"}, "filepath": "data/2312.13913.png", "tags": [], "_media_type": "image", "_rand": 0.9998735268870732, "arXiv_link": "https://arxiv.org/abs/2312.13913", "other_link": "", "title": "Paint3D: Paint Anything 3D with Lighting-less Texture Diffusion Models", "abstract": "This paper presents Paint3D, a novel coarse-to-fine generative framework that\nis capable of producing high-resolution, lighting-less, and diverse 2K UV\ntexture maps for untextured 3D meshes conditioned on text or image inputs. 
The\nkey challenge addressed is generating high-quality textures without embedded\nillumination information, which allows the textures to be re-lighted or\nre-edited within modern graphics pipelines. To achieve this, our method first\nleverages a pre-trained depth-aware 2D diffusion model to generate\nview-conditional images and perform multi-view texture fusion, producing an\ninitial coarse texture map. However, as 2D models cannot fully represent 3D\nshapes and disable lighting effects, the coarse texture map exhibits incomplete\nareas and illumination artifacts. To resolve this, we train separate UV\nInpainting and UVHD diffusion models specialized for the shape-aware refinement\nof incomplete areas and the removal of illumination artifacts. Through this\ncoarse-to-fine process, Paint3D can produce high-quality 2K UV textures that\nmaintain semantic consistency while being lighting-less, significantly\nadvancing the state-of-the-art in texturing 3D objects.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Xianfang Zeng", "Xin Chen", "Zhongqi Qi", "Wen Liu", "Zibo Zhao", "Zhibin Wang", "Bin Fu", "Yong Liu", "Gang Yu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5b4"}, "filepath": "data/2304.05370.png", "tags": [], "_media_type": "image", "_rand": 0.9992853018470815, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2304.05370", "other_link": "", "title": "Overload: Latency Attacks on Object Detection for Edge Devices", "abstract": "Nowadays, the deployment of deep learning-based applications is an essential\ntask owing to the increasing demands on intelligent services. In this paper, we\ninvestigate latency attacks on deep learning applications. Unlike common\nadversarial attacks for misclassification, the goal of latency attacks is to\nincrease the inference time, which may stop applications from responding to the\nrequests within a reasonable time. This kind of attack is ubiquitous for\nvarious applications, and we use object detection to demonstrate how such kind\nof attacks work. We also design a framework named Overload to generate latency\nattacks at scale. Our method is based on a newly formulated optimization\nproblem and a novel technique, called spatial attention. This attack serves to\nescalate the required computing costs during the inference time, consequently\nleading to an extended inference time for object detection. It presents a\nsignificant threat, especially to systems with limited computing resources. We\nconducted experiments using YOLOv5 models on Nvidia NX. Compared to existing\nmethods, our method is simpler and more effective. The experimental results\nshow that with latency attacks, the inference time of a single image can be\nincreased ten times longer in reference to the normal setting. 
Moreover, our\nfindings pose a potential new threat to all object detection tasks requiring\nnon-maximum suppression (NMS), as our attack is NMS-agnostic.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Erh-Chung Chen", "Pin-Yu Chen", "I-Hsin Chung", "Che-Rung Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5b5"}, "filepath": "data/2405.12979.png", "tags": [], "_media_type": "image", "_rand": 0.9993811306259432, "arXiv_link": "https://arxiv.org/abs/2405.12979", "other_link": "https://hwjiang1510.github.io/OmniGlue", "title": "OmniGlue: Generalizable Feature Matching with Foundation Model Guidance", "abstract": "The image matching field has been witnessing a continuous emergence of novel\nlearnable feature matching techniques, with ever-improving performance on\nconventional benchmarks. However, our investigation shows that despite these\ngains, their potential for real-world applications is restricted by their\nlimited generalization capabilities to novel image domains. In this paper, we\nintroduce OmniGlue, the first learnable image matcher that is designed with\ngeneralization as a core principle. OmniGlue leverages broad knowledge from a\nvision foundation model to guide the feature matching process, boosting\ngeneralization to domains not seen at training time. Additionally, we propose a\nnovel keypoint position-guided attention mechanism which disentangles spatial\nand appearance information, leading to enhanced matching descriptors. We\nperform comprehensive experiments on a suite of $7$ datasets with varied image\ndomains, including scene-level, object-centric and aerial images. OmniGlue's\nnovel components lead to relative gains on unseen domains of $20.9\\%$ with\nrespect to a directly comparable reference model, while also outperforming the\nrecent LightGlue method by $9.5\\%$ relatively.Code and model can be found at\nhttps://hwjiang1510.github.io/OmniGlue", "keywords": [], "authors_list": ["Hanwen Jiang", "Arjun Karpur", "Bingyi Cao", "Qixing Huang", "Andr\u00e9 Araujo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5b6"}, "filepath": "data/2402.05937.png", "tags": [], "_media_type": "image", "_rand": 0.9993936150063003, "arXiv_link": "https://arxiv.org/abs/2402.05937", "other_link": "https://fcjian.github.io/InstaGen.", "title": "InstaGen: Enhancing Object Detection by Training on Synthetic Dataset", "abstract": "In this paper, we present a novel paradigm to enhance the ability of object\ndetector, e.g., expanding categories or improving detection performance, by\ntraining on synthetic dataset generated from diffusion models. Specifically, we\nintegrate an instance-level grounding head into a pre-trained, generative\ndiffusion model, to augment it with the ability of localising instances in the\ngenerated images. The grounding head is trained to align the text embedding of\ncategory names with the regional visual feature of the diffusion model, using\nsupervision from an off-the-shelf object detector, and a novel self-training\nscheme on (novel) categories not covered by the detector. 
We conduct thorough\nexperiments to show that, this enhanced version of diffusion model, termed as\nInstaGen, can serve as a data synthesizer, to enhance object detectors by\ntraining on its generated samples, demonstrating superior performance over\nexisting state-of-the-art methods in open-vocabulary (+4.5 AP) and data-sparse\n(+1.2 to 5.2 AP) scenarios. Project page with code:\nhttps://fcjian.github.io/InstaGen.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Chengjian Feng", "Yujie Zhong", "Zequn Jie", "Weidi Xie", "Lin Ma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5b7"}, "filepath": "data/2404.15891v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993252307810648, "arXiv_link": "https://arxiv.org/html/2404.15891v2", "other_link": "https://github.com/CrystalWlz/OMEGAS", "title": "LTM: Lightweight Textured Mesh Extraction and Refinement of Large Unbounded Scenes for Efficient Storage and Real-time Rendering", "abstract": "Recent advancements in 3D reconstruction technologies have paved the way for\nhigh-quality and real-time rendering of complex 3D scenes. Despite these\nachievements, a notable challenge persists: it is difficult to precisely\nreconstruct specific objects from large scenes. Current scene reconstruction\ntechniques frequently result in the loss of object detail textures and are\nunable to reconstruct object portions that are occluded or unseen in views. To\naddress this challenge, we delve into the meticulous 3D reconstruction of\nspecific objects within large scenes and propose a framework termed OMEGAS:\nObject Mesh Extraction from Large Scenes Guided by GAussian Segmentation.\nOMEGAS employs a multi-step approach, grounded in several excellent\noff-the-shelf methodologies. Specifically, initially, we utilize the Segment\nAnything Model (SAM) to guide the segmentation of 3D Gaussian Splatting (3DGS),\nthereby creating a basic 3DGS model of the target object. Then, we leverage\nlarge-scale diffusion priors to further refine the details of the 3DGS model,\nespecially aimed at addressing invisible or occluded object portions from the\noriginal scene views. Subsequently, by re-rendering the 3DGS model onto the\nscene views, we achieve accurate object segmentation and effectively remove the\nbackground. Finally, these target-only images are used to improve the 3DGS\nmodel further and extract the definitive 3D object mesh by the SuGaR model. In\nvarious scenarios, our experiments demonstrate that OMEGAS significantly\nsurpasses existing scene reconstruction methods. 
Our project page is at:\nhttps://github.com/CrystalWlz/OMEGAS", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Jaehoon Choi", "Rajvi Shah", "Qinbo Li", "Yipeng Wang", "Ayush Saraf", "Changil Kim", "Jia-Bin Huang", "Dinesh Manocha", "Suhib Alsisan", "Johannes Kopf"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5b8"}, "filepath": "data/2403.14291v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994002148107122, "arXiv_link": "https://arxiv.org/abs/2403.14291v1", "other_link": "", "title": "Open-Vocabulary Attention Maps with Token Optimization for Semantic Segmentation in Diffusion Models", "abstract": "Diffusion models represent a new paradigm in text-to-image generation. Beyond\ngenerating high-quality images from text prompts, models such as Stable\nDiffusion have been successfully extended to the joint generation of semantic\nsegmentation pseudo-masks. However, current extensions primarily rely on\nextracting attentions linked to prompt words used for image synthesis. This\napproach limits the generation of segmentation masks derived from word tokens\nnot contained in the text prompt. In this work, we introduce Open-Vocabulary\nAttention Maps (OVAM)-a training-free method for text-to-image diffusion models\nthat enables the generation of attention maps for any word. In addition, we\npropose a lightweight optimization process based on OVAM for finding tokens\nthat generate accurate attention maps for an object class with a single\nannotation. We evaluate these tokens within existing state-of-the-art Stable\nDiffusion extensions. The best-performing model improves its mIoU from 52.1 to\n86.6 for the synthetic images' pseudo-masks, demonstrating that our optimized\ntokens are an efficient way to improve the performance of existing methods\nwithout architectural changes or retraining.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Pablo Marcos-Manch\u00f3n", "Roberto Alcover-Couso", "Juan SanMiguel", "Jose M. Martinez"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5b9"}, "filepath": "data/2404.09490.png", "tags": [], "_media_type": "image", "_rand": 0.9998731206218735, "arXiv_link": "https://arxiv.org/abs/2404.09490", "other_link": "https://github.com/naver-ai/tc-clip", "title": "Leveraging Frame Affinity for sRGB-to-RAW Video De-rendering", "abstract": "Pretrained vision-language models have shown effectiveness in video\nunderstanding. However, recent studies have not sufficiently leveraged\nessential temporal information from videos, simply averaging frame-wise\nrepresentations or referencing consecutive frames. We introduce Temporally\nContextualized CLIP (TC-CLIP), a pioneering framework for video understanding\nthat effectively and efficiently leverages comprehensive video information. We\npropose Temporal Contextualization (TC), a novel layer-wise temporal\ninformation infusion mechanism for video that extracts core information from\neach frame, interconnects relevant information across the video to summarize\ninto context tokens, and ultimately leverages the context tokens during the\nfeature encoding process. 
Furthermore, our Video-conditional Prompting (VP)\nmodule manufactures context tokens to generate informative prompts in text\nmodality. We conduct extensive experiments in zero-shot, few-shot,\nbase-to-novel, and fully-supervised action recognition to validate the\nsuperiority of our TC-CLIP. Ablation studies for TC and VP guarantee our design\nchoices. Code is available at https://github.com/naver-ai/tc-clip", "keywords": ["Large multimodal models and prompting techniques", "Scene analysis and understanding", "Deep learning architectures and techniques"], "authors_list": ["Chen Zhang", "Wencheng Han", "Yang Zhou", "Jianbing Shen", "Cheng-Zhong Xu", "Wentao Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5ba"}, "filepath": "data/2404.06851.png", "tags": [], "_media_type": "image", "_rand": 0.9997577997084458, "arXiv_link": "https://arxiv.org/abs/2404.06851", "other_link": "https://weiqi-zhang.github.io/UDiFF.", "title": "UDiFF: Generating Conditional Unsigned Distance Fields with Optimal Wavelet Diffusion", "abstract": "Diffusion models have shown remarkable results for image generation, editing\nand inpainting. Recent works explore diffusion models for 3D shape generation\nwith neural implicit functions, i.e., signed distance function and occupancy\nfunction. However, they are limited to shapes with closed surfaces, which\nprevents them from generating diverse 3D real-world contents containing open\nsurfaces. In this work, we present UDiFF, a 3D diffusion model for unsigned\ndistance fields (UDFs) which is capable to generate textured 3D shapes with\nopen surfaces from text conditions or unconditionally. Our key idea is to\ngenerate UDFs in spatial-frequency domain with an optimal wavelet\ntransformation, which produces a compact representation space for UDF\ngeneration. Specifically, instead of selecting an appropriate wavelet\ntransformation which requires expensive manual efforts and still leads to large\ninformation loss, we propose a data-driven approach to learn the optimal\nwavelet transformation for UDFs. We evaluate UDiFF to show our advantages by\nnumerical and visual comparisons with the latest methods on widely used\nbenchmarks. Page: https://weiqi-zhang.github.io/UDiFF.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Junsheng Zhou", "Weiqi Zhang", "Baorui Ma", "Kanle Shi", "Yu-Shen Liu", "Zhizhong Han"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5bb"}, "filepath": "data/2404.01409.png", "tags": [], "_media_type": "image", "_rand": 0.9994611290289299, "arXiv_link": "https://arxiv.org/abs/2404.01409", "other_link": "", "title": "OVFoodSeg: Elevating Open-Vocabulary Food Image Segmentation via Image-Informed Textual Representation", "abstract": "In the realm of food computing, segmenting ingredients from images poses\nsubstantial challenges due to the large intra-class variance among the same\ningredients, the emergence of new ingredients, and the high annotation costs\nassociated with large food segmentation datasets. Existing approaches primarily\nutilize a closed-vocabulary and static text embeddings setting. 
These methods\noften fall short in effectively handling the ingredients, particularly new and\ndiverse ones. In response to these limitations, we introduce OVFoodSeg, a\nframework that adopts an open-vocabulary setting and enhances text embeddings\nwith visual context. By integrating vision-language models (VLMs), our approach\nenriches text embedding with image-specific information through two innovative\nmodules, eg, an image-to-text learner FoodLearner and an Image-Informed Text\nEncoder. The training process of OVFoodSeg is divided into two stages: the\npre-training of FoodLearner and the subsequent learning phase for segmentation.\nThe pre-training phase equips FoodLearner with the capability to align visual\ninformation with corresponding textual representations that are specifically\nrelated to food, while the second phase adapts both the FoodLearner and the\nImage-Informed Text Encoder for the segmentation task. By addressing the\ndeficiencies of previous models, OVFoodSeg demonstrates a significant\nimprovement, achieving an 4.9\\% increase in mean Intersection over Union (mIoU)\non the FoodSeg103 dataset, setting a new milestone for food image segmentation.", "keywords": ["Multimodal models and vision-language models", "Deep learning architectures and techniques"], "authors_list": ["Xiongwei Wu", "Sicheng Yu", "Ee-Peng Lim", "Chong Wah Ngo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5bc"}, "filepath": "data/2401.08036.png", "tags": [], "_media_type": "image", "_rand": 0.9995081029731747, "arXiv_link": "https://arxiv.org/abs/2401.08036", "other_link": "", "title": "LaneCPP: Continuous 3D Lane Detection using Physical Priors", "abstract": "3D lanes offer a more comprehensive understanding of the road surface\ngeometry than 2D lanes, thereby providing crucial references for driving\ndecisions and trajectory planning. While many efforts aim to improve prediction\naccuracy, we recognize that an efficient network can bring results closer to\nlane modeling. However, if the modeling data is imprecise, the results might\nnot accurately capture the real-world scenario. Therefore, accurate lane\nmodeling is essential to align prediction results closely with the environment.\nThis study centers on efficient and accurate lane modeling, proposing a joint\nmodeling approach that combines Bezier curves and interpolation methods.\nFurthermore, based on this lane modeling approach, we developed a Global2Local\nLane Matching method with Bezier Control-Point and Key-Point, which serve as a\ncomprehensive solution that leverages hierarchical features with two\nmathematical models to ensure a precise match. We also introduce a novel 3D\nSpatial Encoder, representing an exploration of 3D surround-view lane detection\nresearch. The framework is suitable for front-view or surround-view 3D lane\ndetection. By directly outputting the key points of lanes in 3D space, it\novercomes the limitations of anchor-based methods, enabling accurate prediction\nof closed-loop or U-shaped lanes and effective adaptation to complex road\nconditions. 
This innovative method establishes a new benchmark in front-view 3D\nlane detection on the Openlane dataset and achieves competitive performance in\nsurround-view 2D lane detection on the Argoverse2 dataset.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Maximilian Pittner", "Joel Janai", "Alexandru Paul Condurache"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5bd"}, "filepath": "data/2312.06740.png", "tags": [], "_media_type": "image", "_rand": 0.9995534263952186, "arXiv_link": "https://arxiv.org/abs/2312.06740", "other_link": "", "title": "MonoNPHM: Dynamic Head Reconstruction from Monocular Videos", "abstract": "We present Monocular Neural Parametric Head Models (MonoNPHM) for dynamic 3D\nhead reconstructions from monocular RGB videos. To this end, we propose a\nlatent appearance space that parameterizes a texture field on top of a neural\nparametric model. We constrain predicted color values to be correlated with the\nunderlying geometry such that gradients from RGB effectively influence latent\ngeometry codes during inverse rendering. To increase the representational\ncapacity of our expression space, we augment our backward deformation field\nwith hyper-dimensions, thus improving color and geometry representation in\ntopologically challenging expressions. Using MonoNPHM as a learned prior, we\napproach the task of 3D head reconstruction using signed distance field based\nvolumetric rendering. By numerically inverting our backward deformation field,\nwe incorporated a landmark loss using facial anchor points that are closely\ntied to our canonical geometry representation. To evaluate the task of dynamic\nface reconstruction from monocular RGB videos we record 20 challenging Kinect\nsequences under casual conditions. MonoNPHM outperforms all baselines with a\nsignificant margin, and makes an important step towards easily accessible\nneural parametric face models through RGB tracking.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis", "Image and video generation and manipulation"], "authors_list": ["Simon Giebenhain", "Tobias Kirschstein", "Markos Georgopoulos", "Martin R\u00fcnz", "Lourdes Agapito", "Matthias Nie\u00dfner"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5be"}, "filepath": "data/2401.00789.png", "tags": [], "_media_type": "image", "_rand": 0.9995068716030286, "arXiv_link": "https://arxiv.org/abs/2401.00789", "other_link": "", "title": "Retrieval-Augmented Egocentric Video Captioning", "abstract": "Understanding human actions from videos of first-person view poses\nsignificant challenges. Most prior approaches explore representation learning\non egocentric videos only, while overlooking the potential benefit of\nexploiting existing large-scale third-person videos. In this paper, (1) we\ndevelop EgoInstructor, a retrieval-augmented multimodal captioning model that\nautomatically retrieves semantically relevant third-person instructional videos\nto enhance the video captioning of egocentric videos. (2) For training the\ncross-view retrieval module, we devise an automatic pipeline to discover\nego-exo video pairs from distinct large-scale egocentric and exocentric\ndatasets. 
(3) We train the cross-view retrieval module with a novel EgoExoNCE\nloss that pulls egocentric and exocentric video features closer by aligning\nthem to shared text features that describe similar actions. (4) Through\nextensive experiments, our cross-view retrieval module demonstrates superior\nperformance across seven benchmarks. Regarding egocentric video captioning,\nEgoInstructor exhibits significant improvements by leveraging third-person\nvideos as references.", "keywords": ["Multimodal models and vision-language models", "Scene analysis and understanding"], "authors_list": ["Jilan Xu", "Yifei Huang", "Junlin Hou", "Guo Chen", "Yuejie Zhang", "Rui Feng", "Weidi Xie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5bf"}, "filepath": "data/2401.04928.png", "tags": [], "_media_type": "image", "_rand": 0.9999925539654599, "arXiv_link": "https://arxiv.org/abs/2401.04928", "other_link": "", "title": "Relaxed Contrastive Learning for Federated Learning", "abstract": "We propose a novel contrastive learning framework to effectively address the\nchallenges of data heterogeneity in federated learning. We first analyze the\ninconsistency of gradient updates across clients during local training and\nestablish its dependence on the distribution of feature representations,\nleading to the derivation of the supervised contrastive learning (SCL)\nobjective to mitigate local deviations. In addition, we show that a na\\\"ive\nadoption of SCL in federated learning leads to representation collapse,\nresulting in slow convergence and limited performance gains. To address this\nissue, we introduce a relaxed contrastive learning loss that imposes a\ndivergence penalty on excessively similar sample pairs within each class. This\nstrategy prevents collapsed representations and enhances feature\ntransferability, facilitating collaborative training and leading to significant\nperformance improvements. Our framework outperforms all existing federated\nlearning approaches by huge margins on the standard benchmarks through\nextensive experimental results.", "keywords": [], "authors_list": ["Seonguk Seo", "Jinkyu Kim", "Geeho Kim", "Bohyung Han"], "category_name": "Machine Learning", "all_categories": ["Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5c0"}, "filepath": "data/2403.19967.png", "tags": [], "_media_type": "image", "_rand": 0.9991626332021148, "arXiv_link": "https://arxiv.org/abs/2403.19967", "other_link": "https://github.com/ma-xu/Rewrite-the-Stars.", "title": "Rewrite the stars", "abstract": "Recent studies have drawn attention to the untapped potential of the \"star\noperation\" (element-wise multiplication) in network design. While intuitive\nexplanations abound, the foundational rationale behind its application remains\nlargely unexplored. Our study attempts to reveal the star operation's ability\nto map inputs into high-dimensional, non-linear feature spaces -- akin to\nkernel tricks -- without widening the network. We further introduce StarNet, a\nsimple yet powerful prototype, demonstrating impressive performance and low\nlatency under compact network structure and efficient budget. Like stars in the\nsky, the star operation appears unremarkable but holds a vast universe of\npotential. 
Our work encourages further exploration across tasks, with codes\navailable at https://github.com/ma-xu/Rewrite-the-Stars.", "keywords": [], "authors_list": ["Xu Ma", "Xiyang Dai", "Yue Bai", "Yizhou Wang", "Yun Fu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5c1"}, "filepath": "data/2404.03566v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991824187349343, "arXiv_link": "https://arxiv.org/abs/2404.03566v1", "other_link": "", "title": "PointInfinity: Resolution-Invariant Point Diffusion Models", "abstract": "We present PointInfinity, an efficient family of point cloud diffusion\nmodels. Our core idea is to use a transformer-based architecture with a\nfixed-size, resolution-invariant latent representation. This enables efficient\ntraining with low-resolution point clouds, while allowing high-resolution point\nclouds to be generated during inference. More importantly, we show that scaling\nthe test-time resolution beyond the training resolution improves the fidelity\nof generated point clouds and surfaces. We analyze this phenomenon and draw a\nlink to classifier-free guidance commonly used in diffusion models,\ndemonstrating that both allow trading off fidelity and variability during\ninference. Experiments on CO3D show that PointInfinity can efficiently generate\nhigh-resolution point clouds (up to 131k points, 31 times more than Point-E)\nwith state-of-the-art quality.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Zixuan Huang", "Justin Johnson", "Shoubhik Debnath", "James Rehg", "Chao-Yuan Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5c2"}, "filepath": "data/2307.04725v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991378588390465, "arXiv_link": "https://arxiv.org/html/2307.04725v2", "other_link": "https://github.com/guoyww/AnimateDiff.", "title": "JeDi: Joint-Image Diffusion Models for Finetuning-Free Personalized Text-to-Image Generation", "abstract": "With the advance of text-to-image (T2I) diffusion models (e.g., Stable\nDiffusion) and corresponding personalization techniques such as DreamBooth and\nLoRA, everyone can manifest their imagination into high-quality images at an\naffordable cost. However, adding motion dynamics to existing high-quality\npersonalized T2Is and enabling them to generate animations remains an open\nchallenge. In this paper, we present AnimateDiff, a practical framework for\nanimating personalized T2I models without requiring model-specific tuning. At\nthe core of our framework is a plug-and-play motion module that can be trained\nonce and seamlessly integrated into any personalized T2Is originating from the\nsame base T2I. Through our proposed training strategy, the motion module\neffectively learns transferable motion priors from real-world videos. Once\ntrained, the motion module can be inserted into a personalized T2I model to\nform a personalized animation generator. We further propose MotionLoRA, a\nlightweight fine-tuning technique for AnimateDiff that enables a pre-trained\nmotion module to adapt to new motion patterns, such as different shot types, at\na low training and data collection cost. 
We evaluate AnimateDiff and MotionLoRA\non several public representative personalized T2I models collected from the\ncommunity. The results demonstrate that our approaches help these models\ngenerate temporally smooth animation clips while preserving the visual quality\nand motion diversity. Codes and pre-trained weights are available at\nhttps://github.com/guoyww/AnimateDiff.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Yu Zeng", "Vishal M. Patel", "Haochen Wang", "Xun Huang", "Ting-Chun Wang", "Ming-Yu Liu", "Yogesh Balaji"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5c3"}, "filepath": "data/2312.12416.png", "tags": [], "_media_type": "image", "_rand": 0.9999350558809736, "arXiv_link": "https://arxiv.org/abs/2312.12416", "other_link": "", "title": "Prompting Hard or Hardly Prompting: Prompt Inversion for Text-to-Image Diffusion Models", "abstract": "The quality of the prompts provided to text-to-image diffusion models\ndetermines how faithful the generated content is to the user's intent, often\nrequiring `prompt engineering'. To harness visual concepts from target images\nwithout prompt engineering, current approaches largely rely on embedding\ninversion by optimizing and then mapping them to pseudo-tokens. However,\nworking with such high-dimensional vector representations is challenging\nbecause they lack semantics and interpretability, and only allow simple vector\noperations when using them. Instead, this work focuses on inverting the\ndiffusion model to obtain interpretable language prompts directly. The\nchallenge of doing this lies in the fact that the resulting optimization\nproblem is fundamentally discrete and the space of prompts is exponentially\nlarge; this makes using standard optimization techniques, such as stochastic\ngradient descent, difficult. To this end, we utilize a delayed projection\nscheme to optimize for prompts representative of the vocabulary space in the\nmodel. Further, we leverage the findings that different timesteps of the\ndiffusion process cater to different levels of detail in an image. The later,\nnoisy, timesteps of the forward diffusion process correspond to the semantic\ninformation, and therefore, prompt inversion in this range provides tokens\nrepresentative of the image semantics. We show that our approach can identify\nsemantically interpretable and meaningful prompts for a target image which can\nbe used to synthesize diverse images with similar content. 
We further\nillustrate the application of the optimized prompts in evolutionary image\ngeneration and concept removal.", "keywords": ["Image and video generation and manipulation", "Large multimodal models and prompting techniques", "Multimodal models and vision-language models"], "authors_list": ["Shweta Mahajan", "Tanzila Rahman", "Kwang Moo Yi", "Leonid Sigal"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5c4"}, "filepath": "data/2312.09237.png", "tags": [], "_media_type": "image", "_rand": 0.9993734801439145, "arXiv_link": "https://arxiv.org/abs/2312.09237", "other_link": "https://jerryxu.net/PixelLLM", "title": "Pixel Aligned Language Models", "abstract": "Large language models have achieved great success in recent years, as have\ntheir variants in vision. Existing vision-language models can describe images\nin natural languages, answer visual-related questions, or perform complex\nreasoning about the image. However, it is yet unclear how localization tasks,\nsuch as word grounding or referring localization, can be performed using large\nlanguage models. In this work, we aim to develop a vision-language model that\ncan take locations, for example, a set of points or boxes, as either inputs or\noutputs. When taking locations as inputs, the model performs\nlocation-conditioned captioning, which generates captions for the indicated\nobject or region. When generating locations as outputs, our model regresses\npixel coordinates for each output word generated by the language model, and\nthus performs dense word grounding. Our model is pre-trained on the Localized\nNarrative dataset, which contains pixel-word-aligned captioning from human\nattention. We show our model can be applied to various location-aware\nvision-language tasks, including referring localization, location-conditioned\ncaptioning, and dense object captioning, achieving state-of-the-art performance\non RefCOCO and Visual Genome. Project page: https://jerryxu.net/PixelLLM .", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Jiarui Xu", "Xingyi Zhou", "Shen Yan", "Xiuye Gu", "Anurag Arnab", "Chen Sun", "Xiaolong Wang", "Cordelia Schmid"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5c5"}, "filepath": "data/2312.03703.png", "tags": [], "_media_type": "image", "_rand": 0.999787717654296, "arXiv_link": "https://arxiv.org/abs/2312.03703", "other_link": "", "title": "Skeleton-in-Context: Unified Skeleton Sequence Modeling with In-Context Learning", "abstract": "In-context learning provides a new perspective for multi-task modeling for\nvision and NLP. Under this setting, the model can perceive tasks from prompts\nand accomplish them without any extra task-specific head predictions or model\nfine-tuning. However, skeleton sequence modeling via in-context learning\nremains unexplored. Directly applying existing in-context models from other\nareas onto skeleton sequences fails due to the inter-frame and cross-task pose\nsimilarity that makes it outstandingly hard to perceive the task correctly from\na subtle context. 
To address this challenge, we propose Skeleton-in-Context\n(SiC), an effective framework for in-context skeleton sequence modeling. Our\nSiC is able to handle multiple skeleton-based tasks simultaneously after a\nsingle training process and accomplish each task from context according to the\ngiven prompt. It can further generalize to new, unseen tasks according to\ncustomized prompts. To facilitate context perception, we additionally propose a\ntask-unified prompt, which adaptively learns tasks of different natures, such\nas partial joint-level generation, sequence-level prediction, or 2D-to-3D\nmotion prediction. We conduct extensive experiments to evaluate the\neffectiveness of our SiC on multiple tasks, including motion prediction, pose\nestimation, joint completion, and future pose estimation. We also evaluate its\ngeneralization capability on unseen tasks such as motion-in-between. These\nexperiments show that our model achieves state-of-the-art multi-task\nperformance and even outperforms single-task methods on certain tasks.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Xinshun Wang", "Zhongbin Fang", "Xia Li", "Xiangtai Li", "Chen Chen", "Mengyuan Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5c6"}, "filepath": "data/2404.01123.png", "tags": [], "_media_type": "image", "_rand": 0.9991838956416385, "arXiv_link": "https://arxiv.org/abs/2404.01123", "other_link": "", "title": "CLIPtone: Unsupervised Learning for Text-based Image Tone Adjustment", "abstract": "Recent image tone adjustment (or enhancement) approaches have predominantly\nadopted supervised learning for learning human-centric perceptual assessment.\nHowever, these approaches are constrained by intrinsic challenges of supervised\nlearning. Primarily, the requirement for expertly-curated or retouched images\nescalates the data acquisition expenses. Moreover, their coverage of target\nstyle is confined to stylistic variants inferred from the training data. To\nsurmount the above challenges, we propose an unsupervised learning-based\napproach for text-based image tone adjustment method, CLIPtone, that extends an\nexisting image enhancement method to accommodate natural language descriptions.\nSpecifically, we design a hyper-network to adaptively modulate the pretrained\nparameters of the backbone model based on text description. To assess whether\nthe adjusted image aligns with the text description without ground truth image,\nwe utilize CLIP, which is trained on a vast set of language-image pairs and\nthus encompasses knowledge of human perception. The major advantages of our\napproach are three fold: (i) minimal data collection expenses, (ii) support for\na range of adjustments, and (iii) the ability to handle novel text descriptions\nunseen in training. 
Our approach's efficacy is demonstrated through\ncomprehensive experiments, including a user study.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Hyeongmin Lee", "Kyoungkook Kang", "Jungseul Ok", "Sunghyun Cho"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5c7"}, "filepath": "data/2403.06668.png", "tags": [], "_media_type": "image", "_rand": 0.9998930647175757, "arXiv_link": "https://arxiv.org/abs/2403.06668", "other_link": "https://github.com/jaewonalive/PeerAiD.", "title": "PeerAiD: Improving Adversarial Distillation from a Specialized Peer Tutor", "abstract": "Adversarial robustness of the neural network is a significant concern when it\nis applied to security-critical domains. In this situation, adversarial\ndistillation is a promising option which aims to distill the robustness of the\nteacher network to improve the robustness of a small student network. Previous\nworks pretrain the teacher network to make it robust against the adversarial\nexamples aimed at itself. However, the adversarial examples are dependent on\nthe parameters of the target network. The fixed teacher network inevitably\ndegrades its robustness against the unseen transferred adversarial examples\nwhich target the parameters of the student network in the adversarial\ndistillation process. We propose PeerAiD to make a peer network learn the\nadversarial examples of the student network instead of adversarial examples\naimed at itself. PeerAiD is an adversarial distillation that trains the peer\nnetwork and the student network simultaneously in order to specialize the peer\nnetwork for defending the student network. We observe that such peer networks\nsurpass the robustness of the pretrained robust teacher model against\nadversarial examples aimed at the student network. With this peer network and\nadversarial distillation, PeerAiD achieves significantly higher robustness of\nthe student network with AutoAttack (AA) accuracy by up to 1.66%p and improves\nthe natural accuracy of the student network by up to 4.72%p with ResNet-18 on\nTinyImageNet dataset. Code is available at\nhttps://github.com/jaewonalive/PeerAiD.", "keywords": [], "authors_list": ["Jaewon Jung", "Hongsun Jang", "Jaeyong Song", "Jinho Lee"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5c8"}, "filepath": "data/2403.02991.png", "tags": [], "_media_type": "image", "_rand": 0.999639159873231, "arXiv_link": "https://arxiv.org/abs/2403.02991", "other_link": "", "title": "MADTP: Multimodal Alignment-Guided Dynamic Token Pruning for Accelerating Vision-Language Transformer", "abstract": "Vision-Language Transformers (VLTs) have shown great success recently, but\nare meanwhile accompanied by heavy computation costs, where a major reason can\nbe attributed to the large number of visual and language tokens. Existing token\npruning research for compressing VLTs mainly follows a single-modality-based\nscheme yet ignores the critical role of aligning different modalities for\nguiding the token pruning process, causing the important tokens for one\nmodality to be falsely pruned in another modality branch. 
Meanwhile, existing\nVLT pruning works also lack the flexibility to dynamically compress each layer\nbased on different input samples. To this end, we propose a novel framework\nnamed Multimodal Alignment-Guided Dynamic Token Pruning (MADTP) for\naccelerating various VLTs. Specifically, we first introduce a well-designed\nMulti-modality Alignment Guidance (MAG) module that can align features of the\nsame semantic concept from different modalities, to ensure the pruned tokens\nare less important for all modalities. We further design a novel Dynamic Token\nPruning (DTP) module, which can adaptively adjust the token compression ratio\nin each layer based on different input instances. Extensive experiments on\nvarious benchmarks demonstrate that MADTP significantly reduces the\ncomputational complexity of kinds of multimodal models while preserving\ncompetitive performance. Notably, when applied to the BLIP model in the NLVR2\ndataset, MADTP can reduce the GFLOPs by 80% with less than 4% performance\ndegradation.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Jianjian Cao", "Peng Ye", "Shengze Li", "Chong Yu", "Yansong Tang", "Jiwen Lu", "Tao Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5c9"}, "filepath": "data/2312.00845.png", "tags": [], "_media_type": "image", "_rand": 0.9997913570109627, "arXiv_link": "https://arxiv.org/abs/2312.00845", "other_link": "https://video-motion-customization.github.io", "title": "VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models", "abstract": "Text-to-video diffusion models have advanced video generation significantly.\nHowever, customizing these models to generate videos with tailored motions\npresents a substantial challenge. In specific, they encounter hurdles in (a)\naccurately reproducing motion from a target video, and (b) creating diverse\nvisual variations. For example, straightforward extensions of static image\ncustomization methods to video often lead to intricate entanglements of\nappearance and motion data. To tackle this, here we present the Video Motion\nCustomization (VMC) framework, a novel one-shot tuning approach crafted to\nadapt temporal attention layers within video diffusion models. Our approach\nintroduces a novel motion distillation objective using residual vectors between\nconsecutive frames as a motion reference. The diffusion process then preserves\nlow-frequency motion trajectories while mitigating high-frequency\nmotion-unrelated noise in image space. We validate our method against\nstate-of-the-art video generative models across diverse real-world motions and\ncontexts. 
Our codes, data and the project demo can be found at\nhttps://video-motion-customization.github.io", "keywords": [], "authors_list": ["Hyeonho Jeong", "Geon Yeong Park", "Jong Chul Ye"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5ca"}, "filepath": "data/2310.08129.png", "tags": [], "_media_type": "image", "_rand": 0.9995839355055267, "arXiv_link": "https://arxiv.org/abs/2310.08129", "other_link": "https://github.com/zzjchen/Tailored-Visions.", "title": "Tailored Visions: Enhancing Text-to-Image Generation with Personalized Prompt Rewriting", "abstract": "Despite significant progress in the field, it is still challenging to create\npersonalized visual representations that align closely with the desires and\npreferences of individual users. This process requires users to articulate\ntheir ideas in words that are both comprehensible to the models and accurately\ncapture their vision, posing difficulties for many users. In this paper, we\ntackle this challenge by leveraging historical user interactions with the\nsystem to enhance user prompts. We propose a novel approach that involves\nrewriting user prompts based on a newly collected large-scale text-to-image\ndataset with over 300k prompts from 3115 users. Our rewriting model enhances\nthe expressiveness and alignment of user prompts with their intended visual\noutputs. Experimental results demonstrate the superiority of our methods over\nbaseline approaches, as evidenced in our new offline evaluation method and\nonline tests. Our code and dataset are available at\nhttps://github.com/zzjchen/Tailored-Visions.", "keywords": ["Image and video generation and manipulation", "Large multimodal models and prompting techniques", "Multimodal models and vision-language models"], "authors_list": ["Zijie Chen", "Lichao Zhang", "Fangsheng Weng", "Lili Pan", "ZHENZHONG Lan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5cb"}, "filepath": "data/2312.00777.png", "tags": [], "_media_type": "image", "_rand": 0.9996092419501376, "arXiv_link": "https://arxiv.org/abs/2312.00777", "other_link": "", "title": "VideoBooth: Diffusion-based Video Generation with Image Prompts", "abstract": "Text-driven video generation witnesses rapid progress. However, merely using\ntext prompts is not enough to depict the desired subject appearance that\naccurately aligns with users' intents, especially for customized content\ncreation. In this paper, we study the task of video generation with image\nprompts, which provide more accurate and direct content control beyond the text\nprompts. Specifically, we propose a feed-forward framework VideoBooth, with two\ndedicated designs: 1) We propose to embed image prompts in a coarse-to-fine\nmanner. Coarse visual embeddings from image encoder provide high-level\nencodings of image prompts, while fine visual embeddings from the proposed\nattention injection module provide multi-scale and detailed encoding of image\nprompts. These two complementary embeddings can faithfully capture the desired\nappearance. 
2) In the attention injection module at fine level, multi-scale\nimage prompts are fed into different cross-frame attention layers as additional\nkeys and values. This extra spatial information refines the details in the\nfirst frame and then it is propagated to the remaining frames, which maintains\ntemporal consistency. Extensive experiments demonstrate that VideoBooth\nachieves state-of-the-art performance in generating customized high-quality\nvideos with subjects specified in image prompts. Notably, VideoBooth is a\ngeneralizable framework where a single model works for a wide range of image\nprompts with feed-forward pass.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Yuming Jiang", "Tianxing Wu", "Shuai Yang", "Chenyang Si", "Dahua Lin", "Yu Qiao", "Chen Change Loy", "Ziwei Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5cc"}, "filepath": "data/2309.11497.png", "tags": [], "_media_type": "image", "_rand": 0.9999196557675919, "arXiv_link": "https://arxiv.org/abs/2309.11497", "other_link": "https://chenyangsi.top/FreeU/.", "title": "FreeU: Free Lunch in Diffusion U-Net", "abstract": "In this paper, we uncover the untapped potential of diffusion U-Net, which\nserves as a \"free lunch\" that substantially improves the generation quality on\nthe fly. We initially investigate the key contributions of the U-Net\narchitecture to the denoising process and identify that its main backbone\nprimarily contributes to denoising, whereas its skip connections mainly\nintroduce high-frequency features into the decoder module, causing the network\nto overlook the backbone semantics. Capitalizing on this discovery, we propose\na simple yet effective method-termed \"FreeU\" - that enhances generation quality\nwithout additional training or finetuning. Our key insight is to strategically\nre-weight the contributions sourced from the U-Net's skip connections and\nbackbone feature maps, to leverage the strengths of both components of the\nU-Net architecture. Promising results on image and video generation tasks\ndemonstrate that our FreeU can be readily integrated to existing diffusion\nmodels, e.g., Stable Diffusion, DreamBooth, ModelScope, Rerender and ReVersion,\nto improve the generation quality with only a few lines of code. All you need\nis to adjust two scaling factors during inference. Project page:\nhttps://chenyangsi.top/FreeU/.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Chenyang Si", "Ziqi Huang", "Yuming Jiang", "Ziwei Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5cd"}, "filepath": "data/2402.17275.png", "tags": [], "_media_type": "image", "_rand": 0.9997468238805909, "arXiv_link": "https://arxiv.org/abs/2402.17275", "other_link": "", "title": "One-Shot Structure-Aware Stylized Image Synthesis", "abstract": "While GAN-based models have been successful in image stylization tasks, they\noften struggle with structure preservation while stylizing a wide range of\ninput images. Recently, diffusion models have been adopted for image\nstylization but still lack the capability to maintain the original quality of\ninput images. 
Building on this, we propose OSASIS: a novel one-shot stylization\nmethod that is robust in structure preservation. We show that OSASIS is able to\neffectively disentangle the semantics from the structure of an image, allowing\nit to control the level of content and style implemented to a given input. We\napply OSASIS to various experimental settings, including stylization with\nout-of-domain reference images and stylization with text-driven manipulation.\nResults show that OSASIS outperforms other stylization methods, especially for\ninput images that were rarely encountered during training, providing a\npromising solution to stylization via diffusion models.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Hansam Cho", "Jonghyun Lee", "Seunggyu Chang", "Yonghyun Jeong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5ce"}, "filepath": "data/2403.18550.png", "tags": [], "_media_type": "image", "_rand": 0.9992236922060632, "arXiv_link": "https://arxiv.org/abs/2403.18550", "other_link": "https://github.com/noorahmedds/OrCo", "title": "OrCo: Towards Better Generalization via Orthogonality and Contrast for Few-Shot Class-Incremental Learning", "abstract": "Few-Shot Class-Incremental Learning (FSCIL) introduces a paradigm in which\nthe problem space expands with limited data. FSCIL methods inherently face the\nchallenge of catastrophic forgetting as data arrives incrementally, making\nmodels susceptible to overwriting previously acquired knowledge. Moreover,\ngiven the scarcity of labeled samples available at any given time, models may\nbe prone to overfitting and find it challenging to strike a balance between\nextensive pretraining and the limited incremental data. To address these\nchallenges, we propose the OrCo framework built on two core principles:\nfeatures' orthogonality in the representation space, and contrastive learning.\nIn particular, we improve the generalization of the embedding space by\nemploying a combination of supervised and self-supervised contrastive losses\nduring the pretraining phase. Additionally, we introduce OrCo loss to address\nchallenges arising from data limitations during incremental sessions. Through\nfeature space perturbations and orthogonality between classes, the OrCo loss\nmaximizes margins and reserves space for the following incremental data. This,\nin turn, ensures the accommodation of incoming classes in the feature space\nwithout compromising previously acquired knowledge. Our experimental results\nshowcase state-of-the-art performance across three benchmark datasets,\nincluding mini-ImageNet, CIFAR100, and CUB datasets. 
Code is available at\nhttps://github.com/noorahmedds/OrCo", "keywords": [], "authors_list": ["Noor Ahmed", "Anna Kukleva", "Bernt Schiele"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5cf"}, "filepath": "data/2312.14198.png", "tags": [], "_media_type": "image", "_rand": 0.9992456356144048, "arXiv_link": "https://arxiv.org/abs/2312.14198", "other_link": "", "title": "ZeroShape: Regression-based Zero-shot Shape Reconstruction", "abstract": "We study the problem of single-image zero-shot 3D shape reconstruction.\nRecent works learn zero-shot shape reconstruction through generative modeling\nof 3D assets, but these models are computationally expensive at train and\ninference time. In contrast, the traditional approach to this problem is\nregression-based, where deterministic models are trained to directly regress\nthe object shape. Such regression methods possess much higher computational\nefficiency than generative methods. This raises a natural question: is\ngenerative modeling necessary for high performance, or conversely, are\nregression-based approaches still competitive? To answer this, we design a\nstrong regression-based model, called ZeroShape, based on the converging\nfindings in this field and a novel insight. We also curate a large real-world\nevaluation benchmark, with objects from three different real-world 3D datasets.\nThis evaluation benchmark is more diverse and an order of magnitude larger than\nwhat prior works use to quantitatively evaluate their models, aiming at\nreducing the evaluation variance in our field. We show that ZeroShape not only\nachieves superior performance over state-of-the-art methods, but also\ndemonstrates significantly higher computational and data efficiency.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Zixuan Huang", "Stefan Stojanov", "Anh Thai", "Varun Jampani", "James Rehg"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5d0"}, "filepath": "data/2311.16304.png", "tags": [], "_media_type": "image", "_rand": 0.9996444398812521, "arXiv_link": "https://arxiv.org/abs/2311.16304", "other_link": "", "title": "Robust Self-calibration of Focal Lengths from the Fundamental Matrix", "abstract": "The problem of self-calibration of two cameras from a given fundamental\nmatrix is one of the basic problems in geometric computer vision. Under the\nassumption of known principal points and square pixels, the well-known Bougnoux\nformula offers a means to compute the two unknown focal lengths. However, in\nmany practical situations, the formula yields inaccurate results due to\ncommonly occurring singularities. Moreover, the estimates are sensitive to\nnoise in the computed fundamental matrix and to the assumed positions of the\nprincipal points. In this paper, we therefore propose an efficient and robust\niterative method to estimate the focal lengths along with the principal points\nof the cameras given a fundamental matrix and priors for the estimated camera\nparameters. In addition, we study a computationally efficient check of models\ngenerated within RANSAC that improves the accuracy of the estimated models\nwhile reducing the total computational time. 
Extensive experiments on real and\nsynthetic data show that our iterative method brings significant improvements\nin terms of the accuracy of the estimated focal lengths over the Bougnoux\nformula and other state-of-the-art methods, even when relying on inaccurate\npriors.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Viktor Kocur", "Daniel Kyselica", "Zuzana Kukelova"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5d1"}, "filepath": "data/2309.02685.png", "tags": [], "_media_type": "image", "_rand": 0.9996674726073224, "arXiv_link": "https://arxiv.org/abs/2309.02685", "other_link": "https://sites.google.com/view/diffusion-edfs/home", "title": "Diffusion-EDFs: Bi-equivariant Denoising Generative Modeling on SE(3) for Visual Robotic Manipulation", "abstract": "Diffusion generative modeling has become a promising approach for learning\nrobotic manipulation tasks from stochastic human demonstrations. In this paper,\nwe present Diffusion-EDFs, a novel SE(3)-equivariant diffusion-based approach\nfor visual robotic manipulation tasks. We show that our proposed method\nachieves remarkable data efficiency, requiring only 5 to 10 human\ndemonstrations for effective end-to-end training in less than an hour.\nFurthermore, our benchmark experiments demonstrate that our approach has\nsuperior generalizability and robustness compared to state-of-the-art methods.\nLastly, we validate our methods with real hardware experiments. Project\nWebsite: https://sites.google.com/view/diffusion-edfs/home", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Hyunwoo Ryu", "Jiwoo Kim", "Hyunseok An", "Junwoo Chang", "Joohwan Seo", "Taehan Kim", "Yubin Kim", "Chaewon Hwang", "Jongeun Choi", "Roberto Horowitz"], "category_name": "Robotics", "all_categories": ["Robotics", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5d2"}, "filepath": "data/2312.00210.png", "tags": [], "_media_type": "image", "_rand": 0.9997104020884962, "arXiv_link": "https://arxiv.org/abs/2312.00210", "other_link": "", "title": "DREAM: Diffusion Rectification and Estimation-Adaptive Models", "abstract": "We present DREAM, a novel training framework representing Diffusion\nRectification and Estimation Adaptive Models, requiring minimal code changes\n(just three lines) yet significantly enhancing the alignment of training with\nsampling in diffusion models. DREAM features two components: diffusion\nrectification, which adjusts training to reflect the sampling process, and\nestimation adaptation, which balances perception against distortion. When\napplied to image super-resolution (SR), DREAM adeptly navigates the tradeoff\nbetween minimizing distortion and preserving high image quality. Experiments\ndemonstrate DREAM's superiority over standard diffusion-based SR methods,\nshowing a $2$ to $3\\times $ faster training convergence and a $10$ to\n$20\\times$ reduction in sampling steps to achieve comparable results. 
We hope\nDREAM will inspire a rethinking of diffusion model training paradigms.", "keywords": ["Efficient and scalable vision", "Low-level vision"], "authors_list": ["Jinxin Zhou", "Tianyu Ding", "Tianyi Chen", "Jiachen Jiang", "Ilya Zharkov", "Zhihui Zhu", "Luming Liang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5d3"}, "filepath": "data/2305.17328.png", "tags": [], "_media_type": "image", "_rand": 0.9995837677009608, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2305.17328", "other_link": "https://jha-lab.github.io/zerotprune.", "title": "Zero-TPrune: Zero-Shot Token Pruning through Leveraging of the Attention Graph in Pre-Trained Transformers", "abstract": "Deployment of Transformer models on edge devices is becoming increasingly\nchallenging due to the exponentially growing inference cost that scales\nquadratically with the number of tokens in the input sequence. Token pruning is\nan emerging solution to address this challenge due to its ease of deployment on\nvarious Transformer backbones. However, most token pruning methods require\ncomputationally expensive fine-tuning, which is undesirable in many edge\ndeployment cases. In this work, we propose Zero-TPrune, the first zero-shot\nmethod that considers both the importance and similarity of tokens in\nperforming token pruning. It leverages the attention graph of pre-trained\nTransformer models to produce an importance distribution for tokens via our\nproposed Weighted Page Rank (WPR) algorithm. This distribution further guides\ntoken partitioning for efficient similarity-based pruning. Due to the\nelimination of the fine-tuning overhead, Zero-TPrune can prune large models at\nnegligible computational cost, switch between different pruning configurations\nat no computational cost, and perform hyperparameter tuning efficiently. We\nevaluate the performance of Zero-TPrune on vision tasks by applying it to\nvarious vision Transformer backbones and testing them on ImageNet. Without any\nfine-tuning, Zero-TPrune reduces the FLOPs cost of DeiT-S by 34.7% and improves\nits throughput by 45.3% with only 0.4% accuracy loss. Compared with\nstate-of-the-art pruning methods that require fine-tuning, Zero-TPrune not only\neliminates the need for fine-tuning after pruning but also does so with only\n0.1% accuracy loss. Compared with state-of-the-art fine-tuning-free pruning\nmethods, Zero-TPrune reduces accuracy loss by up to 49% with similar FLOPs\nbudgets. Project webpage: https://jha-lab.github.io/zerotprune.", "keywords": [], "authors_list": ["Hongjie Wang", "Bhishma Dedhia", "Niraj Jha"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5d4"}, "filepath": "data/2312.08344.png", "tags": [], "_media_type": "image", "_rand": 0.9992850950514919, "arXiv_link": "https://arxiv.org/abs/2312.08344", "other_link": "https://nvlabs.github.io/FoundationPose/", "title": "FoundationPose: Unified 6D Pose Estimation and Tracking of Novel Objects", "abstract": "We present FoundationPose, a unified foundation model for 6D object pose\nestimation and tracking, supporting both model-based and model-free setups. 
Our\napproach can be instantly applied at test-time to a novel object without\nfine-tuning, as long as its CAD model is given, or a small number of reference\nimages are captured. We bridge the gap between these two setups with a neural\nimplicit representation that allows for effective novel view synthesis, keeping\nthe downstream pose estimation modules invariant under the same unified\nframework. Strong generalizability is achieved via large-scale synthetic\ntraining, aided by a large language model (LLM), a novel transformer-based\narchitecture, and contrastive learning formulation. Extensive evaluation on\nmultiple public datasets involving challenging scenarios and objects indicates\nour unified approach outperforms existing methods specialized for each task by\na large margin. In addition, it even achieves comparable results to\ninstance-level methods despite the reduced assumptions. Project page:\nhttps://nvlabs.github.io/FoundationPose/", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Bowen Wen", "Wei Yang", "Jan Kautz", "Stan Birchfield"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5d5"}, "filepath": "data/2404.01925v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994707880249534, "arXiv_link": "https://arxiv.org/abs/2404.01925v1", "other_link": "https://github.com/happytianhao/TaDe.", "title": "Improving Bird\u2019s Eye View Semantic Segmentation by Task Decomposition", "abstract": "Semantic segmentation in bird's eye view (BEV) plays a crucial role in\nautonomous driving. Previous methods usually follow an end-to-end pipeline,\ndirectly predicting the BEV segmentation map from monocular RGB inputs.\nHowever, the challenge arises when the RGB inputs and BEV targets come from distinct\nperspectives, making the direct point-to-point prediction hard to optimize. In\nthis paper, we decompose the original BEV segmentation task into two stages,\nnamely BEV map reconstruction and RGB-BEV feature alignment. In the first\nstage, we train a BEV autoencoder to reconstruct the BEV segmentation maps\ngiven corrupted noisy latent representation, which urges the decoder to learn\nfundamental knowledge of typical BEV patterns. The second stage involves\nmapping RGB input images into the BEV latent space of the first stage, directly\noptimizing the correlations between the two views at the feature level. Our\napproach simplifies the complexity of combining perception and generation into\ndistinct steps, equipping the model to handle intricate and challenging scenes\neffectively. Besides, we propose to transform the BEV segmentation map from the\nCartesian to the polar coordinate system to establish the column-wise\ncorrespondence between RGB images and BEV maps. Moreover, our method requires\nneither multi-scale features nor camera intrinsic parameters for depth\nestimation and saves computational overhead. Extensive experiments on nuScenes\nand Argoverse show the effectiveness and efficiency of our method. 
Code is\navailable at https://github.com/happytianhao/TaDe.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Tianhao Zhao", "Yongcan Chen", "Yu Wu", "Tianyang Liu", "Bo Du", "Peilun Xiao", "shi qiu", "Hongda Yang", "Guozhen Li", "yi yang", "Yutian Lin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5d6"}, "filepath": "data/2311.15937.png", "tags": [], "_media_type": "image", "_rand": 0.9999261605885067, "arXiv_link": "https://arxiv.org/abs/2311.15937", "other_link": "https://github.com/serizba/salad.", "title": "Optimal Transport Aggregation for Visual Place Recognition", "abstract": "The task of Visual Place Recognition (VPR) aims to match a query image\nagainst references from an extensive database of images from different places,\nrelying solely on visual cues. State-of-the-art pipelines focus on the\naggregation of features extracted from a deep backbone, in order to form a\nglobal descriptor for each image. In this context, we introduce SALAD (Sinkhorn\nAlgorithm for Locally Aggregated Descriptors), which reformulates NetVLAD's\nsoft-assignment of local features to clusters as an optimal transport problem.\nIn SALAD, we consider both feature-to-cluster and cluster-to-feature relations\nand we also introduce a 'dustbin' cluster, designed to selectively discard\nfeatures deemed non-informative, enhancing the overall descriptor quality.\nAdditionally, we leverage and fine-tune DINOv2 as a backbone, which provides\nenhanced description power for the local features, and dramatically reduces the\nrequired training time. As a result, our single-stage method not only surpasses\nsingle-stage baselines in public VPR datasets, but also surpasses two-stage\nmethods that add a re-ranking with significantly higher cost. Code and models\nare available at https://github.com/serizba/salad.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Sergio Izquierdo", "Javier Civera"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5d7"}, "filepath": "data/2401.11078.png", "tags": [], "_media_type": "image", "_rand": 0.9998232482847137, "arXiv_link": "https://arxiv.org/abs/2401.11078", "other_link": "", "title": "UltrAvatar: A Realistic Animatable 3D Avatar Diffusion Model with Authenticity Guided Textures", "abstract": "Recent advances in 3D avatar generation have gained significant attentions.\nThese breakthroughs aim to produce more realistic animatable avatars, narrowing\nthe gap between virtual and real-world experiences. Most of existing works\nemploy Score Distillation Sampling (SDS) loss, combined with a differentiable\nrenderer and text condition, to guide a diffusion model in generating 3D\navatars. However, SDS often generates oversmoothed results with few facial\ndetails, thereby lacking the diversity compared with ancestral sampling. On the\nother hand, other works generate 3D avatar from a single image, where the\nchallenges of unwanted lighting effects, perspective views, and inferior image\nquality make them difficult to reliably reconstruct the 3D face meshes with the\naligned complete textures. 
In this paper, we propose a novel 3D avatar\ngeneration approach termed UltrAvatar with enhanced fidelity of geometry, and\nsuperior quality of physically based rendering (PBR) textures without unwanted\nlighting. To this end, the proposed approach presents a diffuse color\nextraction model and an authenticity guided texture diffusion model. The former\nremoves the unwanted lighting effects to reveal true diffuse colors so that the\ngenerated avatars can be rendered under various lighting conditions. The latter\nfollows two gradient-based guidances for generating PBR textures to render\ndiverse face-identity features and details better aligning with 3D mesh\ngeometry. We demonstrate the effectiveness and robustness of the proposed\nmethod, outperforming the state-of-the-art methods by a large margin in the\nexperiments.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Mingyuan Zhou", "Rakib Hyder", "Ziwei Xuan", "Guo-Jun Qi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5d8"}, "filepath": "data/2312.05210.png", "tags": [], "_media_type": "image", "_rand": 0.9998710481248531, "arXiv_link": "https://arxiv.org/abs/2312.05210", "other_link": "", "title": "IntrinsicAvatar: Physically Based Inverse Rendering of Dynamic Humans from Monocular Videos via Explicit Ray Tracing", "abstract": "We present IntrinsicAvatar, a novel approach to recovering the intrinsic\nproperties of clothed human avatars including geometry, albedo, material, and\nenvironment lighting from only monocular videos. Recent advancements in\nhuman-based neural rendering have enabled high-quality geometry and appearance\nreconstruction of clothed humans from just monocular videos. However, these\nmethods bake intrinsic properties such as albedo, material, and environment\nlighting into a single entangled neural representation. On the other hand, only\na handful of works tackle the problem of estimating geometry and disentangled\nappearance properties of clothed humans from monocular videos. They usually\nachieve limited quality and disentanglement due to approximations of secondary\nshading effects via learned MLPs. In this work, we propose to model secondary\nshading effects explicitly via Monte-Carlo ray tracing. We model the rendering\nprocess of clothed humans as a volumetric scattering process, and combine ray\ntracing with body articulation. Our approach can recover high-quality geometry,\nalbedo, material, and lighting properties of clothed humans from a single\nmonocular video, without requiring supervised pre-training using ground truth\nmaterials. 
Furthermore, since we explicitly model the volumetric scattering\nprocess and ray tracing, our model naturally generalizes to novel poses,\nenabling animation of the reconstructed avatar in novel lighting conditions.", "keywords": ["Biometrics and human analysis", "Computational imaging and physics-based vision"], "authors_list": ["Shaofei Wang", "Bozidar Antic", "Andreas Geiger", "Siyu Tang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5d9"}, "filepath": "data/2307.04570v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990280195698598, "arXiv_link": "https://arxiv.org/abs/2307.04570v2", "other_link": "https://github.com/paplhjak/Facial-Age-Estimation-Benchmark.", "title": "A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark", "abstract": "Comparing different age estimation methods poses a challenge due to the\nunreliability of published results stemming from inconsistencies in the\nbenchmarking process. Previous studies have reported continuous performance\nimprovements over the past decade using specialized methods; however, our\nfindings challenge these claims. This paper identifies two trivial, yet\npersistent issues with the currently used evaluation protocol and describes how\nto resolve them. We describe our evaluation protocol in detail and provide\nspecific examples of how the protocol should be used. We utilize the protocol\nto offer an extensive comparative analysis for state-of-the-art facial age\nestimation methods. Surprisingly, we find that the performance differences\nbetween the methods are negligible compared to the effect of other factors,\nsuch as facial alignment, facial coverage, image resolution, model\narchitecture, or the amount of data used for pretraining. We use the gained\ninsights to propose using FaRL as the backbone model and demonstrate its\nefficiency. The results emphasize the importance of consistent data\npreprocessing practices for reliable and meaningful comparisons. We make our\nsource code public at\nhttps://github.com/paplhjak/Facial-Age-Estimation-Benchmark.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Jakub Paplham", "Vojtech Franc"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5da"}, "filepath": "data/2404.11151.png", "tags": [], "_media_type": "image", "_rand": 0.9998910095851091, "arXiv_link": "https://arxiv.org/abs/2404.11151", "other_link": "https://chaoyuesong.github.io/REACTO.", "title": "REACTO: Reconstructing Articulated Objects from a Single Video", "abstract": "In this paper, we address the challenge of reconstructing general articulated\n3D objects from a single video. Existing works employing dynamic neural\nradiance fields have advanced the modeling of articulated objects like humans\nand animals from videos, but face challenges with piece-wise rigid general\narticulated objects due to limitations in their deformation models. To tackle\nthis, we propose Quasi-Rigid Blend Skinning, a novel deformation model that\nenhances the rigidity of each part while maintaining flexible deformation of\nthe joints. 
Our primary insight combines three distinct approaches: 1) an\nenhanced bone rigging system for improved component modeling, 2) the use of\nquasi-sparse skinning weights to boost part rigidity and reconstruction\nfidelity, and 3) the application of geodesic point assignment for precise\nmotion and seamless deformation. Our method outperforms previous works in\nproducing higher-fidelity 3D reconstructions of general articulated objects, as\ndemonstrated on both real and synthetic datasets. Project page:\nhttps://chaoyuesong.github.io/REACTO.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Chaoyue Song", "Jiacheng Wei", "Chuan-Sheng Foo", "Guosheng Lin", "Fayao Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5db"}, "filepath": "data/2401.04747.png", "tags": [], "_media_type": "image", "_rand": 0.9998159255823511, "arXiv_link": "https://arxiv.org/abs/2401.04747", "other_link": "", "title": "DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation", "abstract": "We propose DiffSHEG, a Diffusion-based approach for Speech-driven Holistic 3D\nExpression and Gesture generation with arbitrary length. While previous works\nfocused on co-speech gesture or expression generation individually, the joint\ngeneration of synchronized expressions and gestures remains barely explored. To\naddress this, our diffusion-based co-speech motion generation transformer\nenables uni-directional information flow from expression to gesture,\nfacilitating improved matching of joint expression-gesture distributions.\nFurthermore, we introduce an outpainting-based sampling strategy for arbitrary\nlong sequence generation in diffusion models, offering flexibility and\ncomputational efficiency. Our method provides a practical solution that\nproduces high-quality synchronized expression and gesture generation driven by\nspeech. Evaluated on two public datasets, our approach achieves\nstate-of-the-art performance both quantitatively and qualitatively.\nAdditionally, a user study confirms the superiority of DiffSHEG over prior\napproaches. By enabling the real-time generation of expressive and synchronized\nmotions, DiffSHEG showcases its potential for various applications in the\ndevelopment of digital humans and embodied agents.", "keywords": ["Biometrics and human analysis", "Multimodal models and vision-language models"], "authors_list": ["Junming Chen", "Yunfei Liu", "Jianan Wang", "Ailing Zeng", "Yu Li", "Qifeng Chen"], "category_name": "Sound", "all_categories": ["Sound", "Artificial Intelligence", "Computer Vision and Pattern Recognition", "Graphics", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5dc"}, "filepath": "data/2311.03149.png", "tags": [], "_media_type": "image", "_rand": 0.9993676341773519, "arXiv_link": "https://arxiv.org/abs/2311.03149", "other_link": "https://github.com/MCG-NJU/AMD.", "title": "Building Vision-Language Models on Solid Foundations with Masked Distillation", "abstract": "Self-supervised foundation models have shown great potential in computer\nvision thanks to the pre-training paradigm of masked autoencoding. Scale is a\nprimary factor influencing the performance of these foundation models. 
However,\nthese large foundation models often result in high computational cost. This\npaper focuses on pre-training relatively small vision transformer models that\ncould be efficiently adapted to downstream tasks. Specifically, taking\ninspiration from knowledge distillation in model compression, we propose a new\nasymmetric masked distillation (AMD) framework for pre-training relatively\nsmall models with autoencoding. The core of AMD is to devise an asymmetric\nmasking strategy, where the teacher model is enabled to see more context\ninformation with a lower masking ratio, while the student model is still\nequipped with a high masking ratio. We design customized multi-layer feature\nalignment between the teacher encoder and student encoder to regularize the\npre-training of student MAE. To demonstrate the effectiveness and versatility\nof AMD, we apply it to both ImageMAE and VideoMAE for pre-training relatively\nsmall ViT models. AMD achieved 84.6% classification accuracy on IN1K using the\nViT-B model. And AMD achieves 73.3% classification accuracy using the ViT-B\nmodel on the Something-in-Something V2 dataset, a 3.7% improvement over the\noriginal ViT-B model from VideoMAE. We also transfer AMD pre-trained models to\ndownstream tasks and obtain consistent performance improvement over the\noriginal masked autoencoding. The code and models are available at\nhttps://github.com/MCG-NJU/AMD.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Sepehr Sameni", "Kushal Kafle", "Hao Tan", "Simon Jenni"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5dd"}, "filepath": "data/2310.07997.png", "tags": [], "_media_type": "image", "_rand": 0.9994717983468995, "arXiv_link": "http://export.arxiv.org/abs/2310.07997", "other_link": "", "title": "Small Steps and Level Sets: Fitting Neural Surface Models with Point Guidance", "abstract": "Recently, learning multi-view neural surface reconstruction with the\nsupervision of point clouds or depth maps has been a promising way. However,\ndue to the underutilization of prior information, current methods still\nstruggle with the challenges of limited accuracy and excessive time complexity.\nIn addition, prior data perturbation is also an important but rarely considered\nissue. To address these challenges, we propose a novel point-guided method\nnamed PG-NeuS, which achieves accurate and efficient reconstruction while\nrobustly coping with point noise. Specifically, aleatoric uncertainty of the\npoint cloud is modeled to capture the distribution of noise, leading to noise\nrobustness. Furthermore, a Neural Projection module connecting points and\nimages is proposed to add geometric constraints to implicit surface, achieving\nprecise point guidance. To better compensate for geometric bias between volume\nrendering and point modeling, high-fidelity points are filtered into a Bias\nNetwork to further improve details representation. Benefiting from the\neffective point guidance, even with a lightweight network, the proposed PG-NeuS\nachieves fast convergence with an impressive 11x speedup compared to NeuS.\nExtensive experiments show that our method yields high-quality surfaces with\nhigh efficiency, especially for fine-grained details and smooth regions,\noutperforming the state-of-the-art methods. 
Moreover, it exhibits strong\nrobustness to noisy data and sparse data.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Chamin Hewa Koneputugodage", "Yizhak Ben-Shabat", "Dylan Campbell", "Stephen Gould"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5de"}, "filepath": "data/2311.16090.png", "tags": [], "_media_type": "image", "_rand": 0.9996501161445677, "arXiv_link": "https://arxiv.org/abs/2311.16090", "other_link": "", "title": "Self-correcting LLM-controlled Diffusion", "abstract": "Text-to-image generation has witnessed significant progress with the advent\nof diffusion models. Despite the ability to generate photorealistic images,\ncurrent text-to-image diffusion models still often struggle to accurately\ninterpret and follow complex input text prompts. In contrast to existing models\nthat aim to generate images only with their best effort, we introduce\nSelf-correcting LLM-controlled Diffusion (SLD). SLD is a framework that\ngenerates an image from the input prompt, assesses its alignment with the\nprompt, and performs self-corrections on the inaccuracies in the generated\nimage. Steered by an LLM controller, SLD turns text-to-image generation into an\niterative closed-loop process, ensuring correctness in the resulting image. SLD\nis not only training-free but can also be seamlessly integrated with diffusion\nmodels behind API access, such as DALL-E 3, to further boost the performance of\nstate-of-the-art diffusion models. Experimental results show that our approach\ncan rectify a majority of incorrect generations, particularly in generative\nnumeracy, attribute binding, and spatial relationships. Furthermore, by simply\nadjusting the instructions to the LLM, SLD can perform image editing tasks,\nbridging the gap between text-to-image generation and image editing pipelines.\nWe will make our code available for future research and applications.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Tsung-Han Wu", "Long Lian", "Joseph Gonzalez", "Boyi Li", "Trevor Darrell"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5df"}, "filepath": "data/2403.16005.png", "tags": [], "_media_type": "image", "_rand": 0.999216763170085, "arXiv_link": "https://arxiv.org/abs/2403.16005", "other_link": "", "title": "Knowledge-Enhanced Dual-stream Zero-shot Composed Image Retrieval", "abstract": "We study the zero-shot Composed Image Retrieval (ZS-CIR) task, which is to\nretrieve the target image given a reference image and a description without\ntraining on the triplet datasets. Previous works generate pseudo-word tokens by\nprojecting the reference image features to the text embedding space. However,\nthey focus on the global visual representation, ignoring the representation of\ndetailed attributes, e.g., color, object number and layout. To address this\nchallenge, we propose a Knowledge-Enhanced Dual-stream zero-shot composed image\nretrieval framework (KEDs). KEDs implicitly models the attributes of the\nreference images by incorporating a database. 
The database enriches the\npseudo-word tokens by providing relevant images and captions, emphasizing\nshared attribute information in various aspects. In this way, KEDs recognizes\nthe reference image from diverse perspectives. Moreover, KEDs adopts an extra\nstream that aligns pseudo-word tokens with textual concepts, leveraging\npseudo-triplets mined from image-text pairs. The pseudo-word tokens generated\nin this stream are explicitly aligned with fine-grained semantics in the text\nembedding space. Extensive experiments on widely used benchmarks, i.e.\nImageNet-R, COCO object, Fashion-IQ and CIRR, show that KEDs outperforms\nprevious zero-shot composed image retrieval methods.", "keywords": [], "authors_list": ["Yucheng Suo", "Fan Ma", "Linchao Zhu", "Yi Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5e0"}, "filepath": "data/2311.17049.png", "tags": [], "_media_type": "image", "_rand": 0.9997807164310649, "arXiv_link": "https://arxiv.org/abs/2311.17049", "other_link": "https://github.com/apple/ml-mobileclip", "title": "MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training", "abstract": "Contrastive pretraining of image-text foundation models, such as CLIP,\ndemonstrated excellent zero-shot performance and improved robustness on a wide\nrange of downstream tasks. However, these models utilize large\ntransformer-based encoders with significant memory and latency overhead which\npose challenges for deployment on mobile devices. In this work, we introduce\nMobileCLIP -- a new family of efficient image-text models optimized for runtime\nperformance along with a novel and efficient training approach, namely\nmulti-modal reinforced training. The proposed training approach leverages\nknowledge transfer from an image captioning model and an ensemble of strong\nCLIP encoders to improve the accuracy of efficient models. Our approach avoids\ntrain-time compute overhead by storing the additional knowledge in a reinforced\ndataset. MobileCLIP sets a new state-of-the-art latency-accuracy tradeoff for\nzero-shot classification and retrieval tasks on several datasets. Our\nMobileCLIP-S2 variant is 2.3$\\times$ faster while more accurate compared to\nprevious best CLIP model based on ViT-B/16. 
We further demonstrate the\neffectiveness of our multi-modal reinforced training by training a CLIP model\nbased on ViT-B/16 image backbone and achieving +2.9% average performance\nimprovement on 38 evaluation benchmarks compared to the previous best.\nMoreover, we show that the proposed approach achieves 10$\\times$-1000$\\times$\nimproved learning efficiency when compared with non-reinforced CLIP training.\nCode and models are available at https://github.com/apple/ml-mobileclip .", "keywords": ["Multimodal models and vision-language models", "Deep learning architectures and techniques"], "authors_list": ["Pavan Kumar Anasosalu Vasu", "Hadi Pouransari", "Fartash Faghri", "Raviteja Vemulapalli", "Oncel Tuzel"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5e1"}, "filepath": "data/2401.02937.png", "tags": [], "_media_type": "image", "_rand": 0.9990669969351352, "arXiv_link": "https://arxiv.org/abs/2401.02937", "other_link": "https://github.com/michaeltrs/LAMM.", "title": "Locally Adaptive Neural 3D Morphable Models", "abstract": "We present the Locally Adaptive Morphable Model (LAMM), a highly flexible\nAuto-Encoder (AE) framework for learning to generate and manipulate 3D meshes.\nWe train our architecture following a simple self-supervised training scheme in\nwhich input displacements over a set of sparse control vertices are used to\noverwrite the encoded geometry in order to transform one training sample into\nanother. During inference, our model produces a dense output that adheres\nlocally to the specified sparse geometry while maintaining the overall\nappearance of the encoded object. This approach results in state-of-the-art\nperformance in both disentangling manipulated geometry and 3D mesh\nreconstruction. To the best of our knowledge LAMM is the first end-to-end\nframework that enables direct local control of 3D vertex geometry in a single\nforward pass. A very efficient computational graph allows our network to train\nwith only a fraction of the memory required by previous methods and run faster\nduring inference, generating 12k vertex meshes at $>$60fps on a single CPU\nthread. We further leverage local geometry control as a primitive for higher\nlevel editing operations and present a set of derivative capabilities such as\nswapping and sampling object parts. Code and pretrained models can be found at\nhttps://github.com/michaeltrs/LAMM.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Michail Tarasiou", "Rolandos Alexandros Potamias", "Eimear O' Sullivan", "Stylianos Ploumpis", "Stefanos Zafeiriou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5e2"}, "filepath": "data/2404.03999.png", "tags": [], "_media_type": "image", "_rand": 0.9992620803788279, "arXiv_link": "https://arxiv.org/abs/2404.03999", "other_link": "", "title": "Finsler-Laplace-Beltrami Operators with Application to Shape Analysis", "abstract": "The Laplace-Beltrami operator (LBO) emerges from studying manifolds equipped\nwith a Riemannian metric. 
It is often called the Swiss army knife of geometry\nprocessing as it allows to capture intrinsic shape information and gives rise\nto heat diffusion, geodesic distances, and a multitude of shape descriptors. It\nalso plays a central role in geometric deep learning. In this work, we explore\nFinsler manifolds as a generalization of Riemannian manifolds. We revisit the\nFinsler heat equation and derive a Finsler heat kernel and a\nFinsler-Laplace-Beltrami Operator (FLBO): a novel theoretically justified\nanisotropic Laplace-Beltrami operator (ALBO). In experimental evaluations we\ndemonstrate that the proposed FLBO is a valuable alternative to the traditional\nRiemannian-based LBO and ALBOs for spatial filtering and shape correspondence\nestimation. We hope that the proposed Finsler heat kernel and the FLBO will\ninspire further exploration of Finsler geometry in the computer vision\ncommunity.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Simon Weber", "Thomas Dag\u00e8s", "Maolin Gao", "Daniel Cremers"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5e3"}, "filepath": "data/2312.14238.png", "tags": [], "_media_type": "image", "_rand": 0.9994095208119587, "arXiv_link": "https://arxiv.org/abs/2312.14238", "other_link": "https://github.com/OpenGVLab/InternVL.", "title": "InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks", "abstract": "The exponential growth of large language models (LLMs) has opened up numerous\npossibilities for multimodal AGI systems. However, the progress in vision and\nvision-language foundation models, which are also critical elements of\nmulti-modal AGI, has not kept pace with LLMs. In this work, we design a\nlarge-scale vision-language foundation model (InternVL), which scales up the\nvision foundation model to 6 billion parameters and progressively aligns it\nwith the LLM, using web-scale image-text data from various sources. This model\ncan be broadly applied to and achieve state-of-the-art performance on 32\ngeneric visual-linguistic benchmarks including visual perception tasks such as\nimage-level or pixel-level recognition, vision-language tasks such as zero-shot\nimage/video classification, zero-shot image/video-text retrieval, and link with\nLLMs to create multi-modal dialogue systems. It has powerful visual\ncapabilities and can be a good alternative to the ViT-22B. We hope that our\nresearch could contribute to the development of multi-modal large models. 
Code\nand models are available at https://github.com/OpenGVLab/InternVL.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Zhe Chen", "Jiannan Wu", "Wenhai Wang", "Weijie Su", "Guo Chen", "Sen Xing", "Zhong Muyan", "Qing-Long Zhang", "Xizhou Zhu", "Lewei Lu", "Bin Li", "Ping Luo", "Tong Lu", "Yu Qiao", "Jifeng Dai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5e4"}, "filepath": "data/2401.01887v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999034874548651, "arXiv_link": "https://arxiv.org/abs/2401.01887v1", "other_link": "", "title": "LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry", "abstract": "Visual odometry estimates the motion of a moving camera based on visual\ninput. Existing methods, mostly focusing on two-view point tracking, often\nignore the rich temporal context in the image sequence, thereby overlooking the\nglobal motion patterns and providing no assessment of the full trajectory\nreliability. These shortcomings hinder performance in scenarios with occlusion,\ndynamic objects, and low-texture areas. To address these challenges, we present\nthe Long-term Effective Any Point Tracking (LEAP) module. LEAP innovatively\ncombines visual, inter-track, and temporal cues with mindfully selected anchors\nfor dynamic track estimation. Moreover, LEAP's temporal probabilistic\nformulation integrates distribution updates into a learnable iterative\nrefinement module to reason about point-wise uncertainty. Based on these\ntraits, we develop LEAP-VO, a robust visual odometry system adept at handling\nocclusions and dynamic scenes. Our mindful integration showcases a novel\npractice by employing long-term point tracking as the front-end. Extensive\nexperiments demonstrate that the proposed pipeline significantly outperforms\nexisting baselines across various visual odometry benchmarks.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Weirong Chen", "Le Chen", "Rui Wang", "Marc Pollefeys"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5e5"}, "filepath": "data/2404.05726v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992482538999913, "arXiv_link": "https://arxiv.org/html/2404.05726v2", "other_link": "https://boheumd.github.io/MA-LMM/.", "title": "MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding", "abstract": "With the success of large language models (LLMs), integrating the vision\nmodel into LLMs to build vision-language foundation models has gained much more\ninterest recently. However, existing LLM-based large multimodal models (e.g.,\nVideo-LLaMA, VideoChat) can only take in a limited number of frames for short\nvideo understanding. In this study, we mainly focus on designing an efficient\nand effective model for long-term video understanding. Instead of trying to\nprocess more frames simultaneously like most existing work, we propose to\nprocess videos in an online manner and store past video information in a memory\nbank. This allows our model to reference historical video content for long-term\nanalysis without exceeding LLMs' context length constraints or GPU memory\nlimits. 
Our memory bank can be seamlessly integrated into current multimodal\nLLMs in an off-the-shelf manner. We conduct extensive experiments on various\nvideo understanding tasks, such as long-video understanding, video question\nanswering, and video captioning, and our model can achieve state-of-the-art\nperformances across multiple datasets. Code available at\nhttps://boheumd.github.io/MA-LMM/.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Bo He", "Hengduo Li", "Young Kyun Jang", "Menglin Jia", "Xuefei Cao", "Ashish Shah", "Abhinav Shrivastava", "Ser-Nam Lim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5e6"}, "filepath": "data/2404.11291.png", "tags": [], "_media_type": "image", "_rand": 0.9997162475305125, "arXiv_link": "https://arxiv.org/abs/2404.11291", "other_link": "https://github.com/boycehbz/HumanInteraction}.", "title": "Closely Interactive Human Reconstruction with Proxemics and Physics-Guided Adaption", "abstract": "Existing multi-person human reconstruction approaches mainly focus on\nrecovering accurate poses or avoiding penetration, but overlook the modeling of\nclose interactions. In this work, we tackle the task of reconstructing closely\ninteractive humans from a monocular video. The main challenge of this task\ncomes from insufficient visual information caused by depth ambiguity and severe\ninter-person occlusion. In view of this, we propose to leverage knowledge from\nproxemic behavior and physics to compensate the lack of visual information.\nThis is based on the observation that human interaction has specific patterns\nfollowing the social proxemics. Specifically, we first design a latent\nrepresentation based on Vector Quantised-Variational AutoEncoder (VQ-VAE) to\nmodel human interaction. A proxemics and physics guided diffusion model is then\nintroduced to denoise the initial distribution. We design the diffusion model\nas dual branch with each branch representing one individual such that the\ninteraction can be modeled via cross attention. With the learned priors of\nVQ-VAE and physical constraint as the additional information, our proposed\napproach is capable of estimating accurate poses that are also proxemics and\nphysics plausible. Experimental results on Hi4D, 3DPW, and CHI3D demonstrate\nthat our method outperforms existing approaches. The code is available at\n\\url{https://github.com/boycehbz/HumanInteraction}.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis", "Scene analysis and understanding"], "authors_list": ["Buzhen Huang", "Chen Li", "Chongyang Xu", "Liang Pan", "Yangang Wang", "Gim Hee Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5e7"}, "filepath": "data/2404.10438v1.png", "tags": [], "_media_type": "image", "_rand": 0.999557036853063, "arXiv_link": "https://arxiv.org/abs/2404.10438v1", "other_link": "https://github.com/ga1i13o/mcloc_poseref", "title": "The Unreasonable Effectiveness of Pre-Trained Features for Camera Pose Refinement", "abstract": "Pose refinement is an interesting and practically relevant research\ndirection. 
Pose refinement can be used to (1) obtain a more accurate pose\nestimate from an initial prior (e.g., from retrieval), (2) as pre-processing,\ni.e., to provide a better starting point to a more expensive pose estimator,\n(3) as post-processing of a more accurate localizer. Existing approaches focus\non learning features / scene representations for the pose refinement task. This\ninvolves training an implicit scene representation or learning features while\noptimizing a camera pose-based loss. A natural question is whether training\nspecific features / representations is truly necessary or whether similar\nresults can be already achieved with more generic features. In this work, we\npresent a simple approach that combines pre-trained features with a particle\nfilter and a renderable representation of the scene. Despite its simplicity, it\nachieves state-of-the-art results, demonstrating that one can easily build a\npose refiner without the need for specific training. The code is at\nhttps://github.com/ga1i13o/mcloc_poseref", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Gabriele Trivigno", "Carlo Masone", "Barbara Caputo", "Torsten Sattler"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5e8"}, "filepath": "data/2404.00095.png", "tags": [], "_media_type": "image", "_rand": 0.9994486445421205, "arXiv_link": "https://arxiv.org/abs/2404.00095", "other_link": "", "title": "GDA: Generalized Diffusion for Robust Test-time Adaptation", "abstract": "Machine learning models struggle with generalization when encountering\nout-of-distribution (OOD) samples with unexpected distribution shifts. For\nvision tasks, recent studies have shown that test-time adaptation employing\ndiffusion models can achieve state-of-the-art accuracy improvements on OOD\nsamples by generating new samples that align with the model's domain without\nthe need to modify the model's weights. Unfortunately, those studies have\nprimarily focused on pixel-level corruptions, thereby lacking the\ngeneralization to adapt to a broader range of OOD types. We introduce\nGeneralized Diffusion Adaptation (GDA), a novel diffusion-based test-time\nadaptation method robust against diverse OOD types. Specifically, GDA\niteratively guides the diffusion by applying a marginal entropy loss derived\nfrom the model, in conjunction with style and content preservation losses\nduring the reverse sampling process. In other words, GDA considers the model's\noutput behavior with the semantic information of the samples as a whole, which\ncan reduce ambiguity in downstream tasks during the generation process.\nEvaluation across various popular model architectures and OOD benchmarks shows\nthat GDA consistently outperforms prior work on diffusion-driven adaptation.\nNotably, it achieves the highest classification accuracy improvements, ranging\nfrom 4.4\\% to 5.02\\% on ImageNet-C and 2.5\\% to 7.4\\% on Rendition, Sketch, and\nStylized benchmarks. 
This performance highlights GDA's generalization to a\nbroader range of OOD benchmarks.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yun-Yun Tsai", "Fu-Chen Chen", "Albert Chen", "Junfeng Yang", "Che-Chun Su", "Min Sun", "Cheng-Hao Kuo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5e9"}, "filepath": "data/2403.19164.png", "tags": [], "_media_type": "image", "_rand": 0.999040860772469, "arXiv_link": "https://arxiv.org/abs/2403.19164", "other_link": "https://github.com/lhaippp/RecDiffusion.", "title": "RecDiffusion: Rectangling for Image Stitching with Diffusion Models", "abstract": "Image stitching from different captures often results in non-rectangular\nboundaries, which is often considered unappealing. To solve non-rectangular\nboundaries, current solutions involve cropping, which discards image content,\ninpainting, which can introduce unrelated content, or warping, which can\ndistort non-linear features and introduce artifacts. To overcome these issues,\nwe introduce a novel diffusion-based learning framework, \\textbf{RecDiffusion},\nfor image stitching rectangling. This framework combines Motion Diffusion\nModels (MDM) to generate motion fields, effectively transitioning from the\nstitched image's irregular borders to a geometrically corrected intermediary.\nFollowed by Content Diffusion Models (CDM) for image detail refinement.\nNotably, our sampling process utilizes a weighted map to identify regions\nneeding correction during each iteration of CDM. Our RecDiffusion ensures\ngeometric accuracy and overall visual appeal, surpassing all previous methods\nin both quantitative and qualitative measures when evaluated on public\nbenchmarks. Code is released at https://github.com/lhaippp/RecDiffusion.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Tianhao Zhou", "Li Haipeng", "Ziyi Wang", "Ao Luo", "Chenlin Zhang", "Jiajun Li", "Bing Zeng", "Shuaicheng Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5ea"}, "filepath": "data/2404.00741.png", "tags": [], "_media_type": "image", "_rand": 0.9994147765496029, "arXiv_link": "https://arxiv.org/abs/2404.00741", "other_link": "", "title": "Rethinking Interactive Image Segmentation with Low Latency, High Quality, and Diverse Prompts", "abstract": "The goal of interactive image segmentation is to delineate specific regions\nwithin an image via visual or language prompts. Low-latency and high-quality\ninteractive segmentation with diverse prompts remain challenging for existing\nspecialist and generalist models. Specialist models, with their limited prompts\nand task-specific designs, experience high latency because the image must be\nrecomputed every time the prompt is updated, due to the joint encoding of image\nand visual prompts. Generalist models, exemplified by the Segment Anything\nModel (SAM), have recently excelled in prompt diversity and efficiency, lifting\nimage segmentation to the foundation model era. However, for high-quality\nsegmentations, SAM still lags behind state-of-the-art specialist models despite\nSAM being trained with x100 more segmentation masks. 
In this work, we delve\ndeep into the architectural differences between the two types of models. We\nobserve that dense representation and fusion of visual prompts are the key\ndesign choices contributing to the high segmentation quality of specialist\nmodels. In light of this, we reintroduce this dense design into the generalist\nmodels, to facilitate the development of generalist models with high\nsegmentation quality. To densely represent diverse visual prompts, we propose\nto use a dense map to capture five types: clicks, boxes, polygons, scribbles,\nand masks. Thus, we propose SegNext, a next-generation interactive segmentation\napproach offering low latency, high quality, and diverse prompt support. Our\nmethod outperforms current state-of-the-art methods on HQSeg-44K and DAVIS,\nboth quantitatively and qualitatively.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Qin Liu", "Jaemin Cho", "Mohit Bansal", "Marc Niethammer"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5eb"}, "filepath": "data/2312.06655.png", "tags": [], "_media_type": "image", "_rand": 0.9996327123695379, "arXiv_link": "https://arxiv.org/abs/2312.06655", "other_link": "", "title": "Sherpa3D: Boosting High-Fidelity Text-to-3D Generation via Coarse 3D Prior", "abstract": "Recently, 3D content creation from text prompts has demonstrated remarkable\nprogress by utilizing 2D and 3D diffusion models. While 3D diffusion models\nensure great multi-view consistency, their ability to generate high-quality and\ndiverse 3D assets is hindered by the limited 3D data. In contrast, 2D diffusion\nmodels find a distillation approach that achieves excellent generalization and\nrich details without any 3D data. However, 2D lifting methods suffer from\ninherent view-agnostic ambiguity thereby leading to serious multi-face Janus\nissues, where text prompts fail to provide sufficient guidance to learn\ncoherent 3D results. Instead of retraining a costly viewpoint-aware model, we\nstudy how to fully exploit easily accessible coarse 3D knowledge to enhance the\nprompts and guide 2D lifting optimization for refinement. In this paper, we\npropose Sherpa3D, a new text-to-3D framework that achieves high-fidelity,\ngeneralizability, and geometric consistency simultaneously. Specifically, we\ndesign a pair of guiding strategies derived from the coarse 3D prior generated\nby the 3D diffusion model: a structural guidance for geometric fidelity and a\nsemantic guidance for 3D coherence. Employing the two types of guidance, the 2D\ndiffusion model enriches the 3D content with diversified and high-quality\nresults. 
Extensive experiments show the superiority of our Sherpa3D over the\nstate-of-the-art text-to-3D methods in terms of quality and 3D consistency.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Fangfu Liu", "Diankun Wu", "Yi Wei", "Yongming Rao", "Yueqi Duan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5ec"}, "filepath": "data/2401.03785.png", "tags": [], "_media_type": "image", "_rand": 0.9990427997687643, "arXiv_link": "https://arxiv.org/abs/2401.03785", "other_link": "https://github.com/KosukeSumiyasu/MoXI.", "title": "Identifying Important Group of Pixels using Interactions", "abstract": "To better understand the behavior of image classifiers, it is useful to\nvisualize the contribution of individual pixels to the model prediction. In\nthis study, we propose a method, MoXI ($\\textbf{Mo}$del e$\\textbf{X}$planation\nby $\\textbf{I}$nteractions), that efficiently and accurately identifies a group\nof pixels with high prediction confidence. The proposed method employs\ngame-theoretic concepts, Shapley values and interactions, taking into account\nthe effects of individual pixels and the cooperative influence of pixels on\nmodel confidence. Theoretical analysis and experiments demonstrate that our\nmethod better identifies the pixels that are highly contributing to the model\noutputs than widely-used visualization by Grad-CAM, Attention rollout, and\nShapley value. While prior studies have suffered from the exponential\ncomputational cost in the computation of Shapley value and interactions, we\nshow that this can be reduced to quadratic cost for our task. The code is\navailable at https://github.com/KosukeSumiyasu/MoXI.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Kosuke Sumiyasu", "Kazuhiko Kawamoto", "Hiroshi Kera"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5ed"}, "filepath": "data/2306.16927v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997541416517227, "arXiv_link": "https://arxiv.org/html/2306.16927v2", "other_link": "https://github.com/OpenDriveLab/End-to-end-Autonomous-Driving.", "title": "DualAD: Disentangling the Dynamic and Static World for End-to-End Driving", "abstract": "The autonomous driving community has witnessed a rapid growth in approaches\nthat embrace an end-to-end algorithm framework, utilizing raw sensor input to\ngenerate vehicle motion plans, instead of concentrating on individual tasks\nsuch as detection and motion prediction. End-to-end systems, in comparison to\nmodular pipelines, benefit from joint feature optimization for perception and\nplanning. This field has flourished due to the availability of large-scale\ndatasets, closed-loop evaluation, and the increasing need for autonomous\ndriving algorithms to perform effectively in challenging scenarios. In this\nsurvey, we provide a comprehensive analysis of more than 270 papers, covering\nthe motivation, roadmap, methodology, challenges, and future trends in\nend-to-end autonomous driving. 
We delve into several critical challenges,\nincluding multi-modality, interpretability, causal confusion, robustness, and\nworld models, amongst others. Additionally, we discuss current advancements in\nfoundation models and visual pre-training, as well as how to incorporate these\ntechniques within the end-to-end driving framework. we maintain an active\nrepository that contains up-to-date literature and open-source projects at\nhttps://github.com/OpenDriveLab/End-to-end-Autonomous-Driving.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Simon Doll", "Niklas Hanselmann", "Lukas Schneider", "Richard Schulz", "Marius Cordts", "Markus Enzweiler", "Hendrik Lensch"], "category_name": "Robotics", "all_categories": ["Robotics", "Artificial Intelligence", "Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5ee"}, "filepath": "data/2312.05856.png", "tags": [], "_media_type": "image", "_rand": 0.9999039536399581, "arXiv_link": "https://arxiv.org/abs/2312.05856", "other_link": "https://stem-inv.github.io/page/.", "title": "A Video is Worth 256 Bases: Spatial-Temporal Expectation-Maximization Inversion for Zero-Shot Video Editing", "abstract": "This paper presents a video inversion approach for zero-shot video editing,\nwhich models the input video with low-rank representation during the inversion\nprocess. The existing video editing methods usually apply the typical 2D DDIM\ninversion or naive spatial-temporal DDIM inversion before editing, which\nleverages time-varying representation for each frame to derive noisy latent.\nUnlike most existing approaches, we propose a Spatial-Temporal\nExpectation-Maximization (STEM) inversion, which formulates the dense video\nfeature under an expectation-maximization manner and iteratively estimates a\nmore compact basis set to represent the whole video. Each frame applies the\nfixed and global representation for inversion, which is more friendly for\ntemporal consistency during reconstruction and editing. Extensive qualitative\nand quantitative experiments demonstrate that our STEM inversion can achieve\nconsistent improvement on two state-of-the-art video editing methods. Project\npage: https://stem-inv.github.io/page/.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Li Maomao", "Yu Li", "Tianyu Yang", "Yunfei Liu", "Dongxu Yue", "Zhihui Lin", "Dong Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5ef"}, "filepath": "data/2402.02887.png", "tags": [], "_media_type": "image", "_rand": 0.999199011774648, "arXiv_link": "https://arxiv.org/abs/2402.02887", "other_link": "", "title": "Time-, Memory- and Parameter-Efficient Visual Adaptation", "abstract": "As foundation models become more popular, there is a growing need to\nefficiently finetune them for downstream tasks. Although numerous adaptation\nmethods have been proposed, they are designed to be efficient only in terms of\nhow many parameters are trained. They, however, typically still require\nbackpropagating gradients throughout the model, meaning that their\ntraining-time and -memory cost does not reduce as significantly. 
We propose an\nadaptation method which does not backpropagate gradients through the backbone.\nWe achieve this by designing a lightweight network in parallel that operates on\nfeatures from the frozen, pretrained backbone. As a result, our method is\nefficient not only in terms of parameters, but also in training-time and memory\nusage. Our approach achieves state-of-the-art accuracy-parameter trade-offs on\nthe popular VTAB benchmark, and we further show how we outperform prior works\nwith respect to training-time and -memory usage too. We further demonstrate the\ntraining efficiency and scalability of our method by adapting a vision\ntransformer backbone of 4 billion parameters for the computationally demanding\ntask of video classification, without any intricate model parallelism. Here, we\noutperform a prior adaptor-based method which could only scale to a 1 billion\nparameter backbone, or fully-finetuning a smaller backbone, with the same GPU\nand less training time.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Otniel-Bogdan Mercea", "Alexey Gritsenko", "Cordelia Schmid", "Anurag Arnab"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5f0"}, "filepath": "data/2311.12956.png", "tags": [], "_media_type": "image", "_rand": 0.9994270546360472, "arXiv_link": "https://arxiv.org/abs/2311.12956", "other_link": "https://github.com/SashaMatsun/LSKDiffDet", "title": "WildlifeMapper: Aerial Image Analysis for Multi-Species Detection and Identification", "abstract": "In the realm of aerial image analysis, object detection plays a pivotal role,\nwith significant implications for areas such as remote sensing, urban planning,\nand disaster management. This study addresses the inherent challenges in this\ndomain, notably the detection of small objects, managing densely packed\nelements, and accounting for diverse orientations. We present an in-depth\nevaluation of an object detection model that integrates the Large Selective\nKernel Network (LSKNet)as its backbone with the DiffusionDet head, utilizing\nthe iSAID dataset for empirical analysis. Our approach encompasses the\nintroduction of novel methodologies and extensive ablation studies. These\nstudies critically assess various aspects such as loss functions, box\nregression techniques, and classification strategies to refine the model's\nprecision in object detection. The paper details the experimental application\nof the LSKNet backbone in synergy with the DiffusionDet heads, a combination\ntailored to meet the specific challenges in aerial image object detection. The\nfindings of this research indicate a substantial enhancement in the model's\nperformance, especially in the accuracy-time tradeoff. The proposed model\nachieves a mean average precision (MAP) of approximately 45.7%, which is a\nsignificant improvement, outperforming the RCNN model by 4.7% on the same\ndataset. This advancement underscores the effectiveness of the proposed\nmodifications and sets a new benchmark in aerial image analysis, paving the way\nfor more accurate and efficient object detection methodologies. 
The code is\npublicly available at https://github.com/SashaMatsun/LSKDiffDet", "keywords": ["Remote sensing and photogrammetry"], "authors_list": ["Satish Kumar", "Bowen Zhang", "Chandrakanth Gudavalli", "Connor Levenson", "Lacey Hughey", "Jared Stabach", "Irene Amoke", "Gordon Ojwang", "Joseph Mukeka", "Howard Frederick", "Stephen Mwiu", "Joseph Ochieng Ogutu", "B S Manjunath"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5f1"}, "filepath": "data/2404.03778.png", "tags": [], "_media_type": "image", "_rand": 0.9998037036316388, "arXiv_link": "https://arxiv.org/abs/2404.03778", "other_link": "", "title": "Flattening the Parent Bias: Hierarchical Semantic Segmentation in the Poincar\u00e9 Ball", "abstract": "Hierarchy is a natural representation of semantic taxonomies, including the\nones routinely used in image segmentation. Indeed, recent work on semantic\nsegmentation reports improved accuracy from supervised training leveraging\nhierarchical label structures. Encouraged by these results, we revisit the\nfundamental assumptions behind that work. We postulate and then empirically\nverify that the reasons for the observed improvement in segmentation accuracy\nmay be entirely unrelated to the use of the semantic hierarchy. To demonstrate\nthis, we design a range of cross-domain experiments with a representative\nhierarchical approach. We find that on the new testing domains, a flat\n(non-hierarchical) segmentation network, in which the parents are inferred from\nthe children, has superior segmentation accuracy to the hierarchical approach\nacross the board. Complementing these findings and inspired by the intrinsic\nproperties of hyperbolic spaces, we study a more principled approach to\nhierarchical segmentation using the Poincar\\'e ball model. The hyperbolic\nrepresentation largely outperforms the previous (Euclidean) hierarchical\napproach as well and is on par with our flat Euclidean baseline in terms of\nsegmentation accuracy. However, it additionally exhibits surprisingly strong\ncalibration quality of the parent nodes in the semantic hierarchy, especially\non the more challenging domains. Our combined analysis suggests that the\nestablished practice of hierarchical segmentation may be limited to in-domain\nsettings, whereas flat classifiers generalize substantially better, especially\nif they are modeled in the hyperbolic space.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Simon Weber", "Bar\u0131\u015f Z\u00f6ng\u00fcr", "Nikita Araslanov", "Daniel Cremers"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5f2"}, "filepath": "data/2403.07518v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998117988034005, "arXiv_link": "https://arxiv.org/html/2403.07518v1", "other_link": "", "title": "OTE: Exploring Accurate Scene Text Recognition Using One Token", "abstract": "Scene text recognition is an important and challenging task in computer\nvision. 
However, most prior works focus on recognizing pre-defined words, while\nthere are various out-of-vocabulary (OOV) words in real-world applications.\n In this paper, we propose a novel open-vocabulary text recognition framework,\nPseudo-OCR, to recognize OOV words. The key challenge in this task is the lack\nof OOV training data. To solve this problem, we first propose a pseudo label\ngeneration module that leverages character detection and image inpainting to\nproduce substantial pseudo OOV training data from real-world images. Unlike\nprevious synthetic data, our pseudo OOV data contains real characters and\nbackgrounds to simulate real-world applications. Secondly, to reduce noises in\npseudo data, we present a semantic checking mechanism to filter semantically\nmeaningful data. Thirdly, we introduce a quality-aware margin loss to boost the\ntraining with pseudo data. Our loss includes a margin-based part to enhance the\nclassification ability, and a quality-aware part to penalize low-quality\nsamples in both real and pseudo data.\n Extensive experiments demonstrate that our approach outperforms the\nstate-of-the-art on eight datasets and achieves the first rank in the ICDAR2022\nchallenge.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Jianjun Xu", "Yuxin Wang", "Hongtao Xie", "Yongdong Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5f3"}, "filepath": "data/2404.16622.png", "tags": [], "_media_type": "image", "_rand": 0.9997132401607217, "arXiv_link": "https://arxiv.org/abs/2404.16622", "other_link": "", "title": "DAVE -- A Detect-and-Verify Paradigm for Low-Shot Counting", "abstract": "Low-shot counters estimate the number of objects corresponding to a selected\ncategory, based on only few or no exemplars annotated in the image. The current\nstate-of-the-art estimates the total counts as the sum over the object location\ndensity map, but does not provide individual object locations and sizes, which\nare crucial for many applications. This is addressed by detection-based\ncounters, which, however fall behind in the total count accuracy. Furthermore,\nboth approaches tend to overestimate the counts in the presence of other object\nclasses due to many false positives. We propose DAVE, a low-shot counter based\non a detect-and-verify paradigm, that avoids the aforementioned issues by first\ngenerating a high-recall detection set and then verifying the detections to\nidentify and remove the outliers. This jointly increases the recall and\nprecision, leading to accurate counts. 
DAVE outperforms the top density-based\ncounters by ~20% in the total count MAE, it outperforms the most recent\ndetection-based counter by ~20% in detection quality and sets a new\nstate-of-the-art in zero-shot as well as text-prompt-based counting.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Jer Pelhan", "Alan Lukezic", "Vitjan Zavrtanik", "Matej Kristan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5f4"}, "filepath": "data/2308.10638v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995954666807834, "arXiv_link": "https://arxiv.org/html/2308.10638v2", "other_link": "https://sculpt.is.tue.mpg.de.", "title": "SCULPT: Shape-Conditioned Unpaired Learning of Pose-dependent Clothed and Textured Human Meshes", "abstract": "We present SCULPT, a novel 3D generative model for clothed and textured 3D\nmeshes of humans. Specifically, we devise a deep neural network that learns to\nrepresent the geometry and appearance distribution of clothed human bodies.\nTraining such a model is challenging, as datasets of textured 3D meshes for\nhumans are limited in size and accessibility. Our key observation is that there\nexist medium-sized 3D scan datasets like CAPE, as well as large-scale 2D image\ndatasets of clothed humans and multiple appearances can be mapped to a single\ngeometry. To effectively learn from the two data modalities, we propose an\nunpaired learning procedure for pose-dependent clothed and textured human\nmeshes. Specifically, we learn a pose-dependent geometry space from 3D scan\ndata. We represent this as per vertex displacements w.r.t. the SMPL model.\nNext, we train a geometry conditioned texture generator in an unsupervised way\nusing the 2D image data. We use intermediate activations of the learned\ngeometry model to condition our texture generator. To alleviate entanglement\nbetween pose and clothing type, and pose and clothing appearance, we condition\nboth the texture and geometry generators with attribute labels such as clothing\ntypes for the geometry, and clothing colors for the texture generator. We\nautomatically generated these conditioning labels for the 2D images based on\nthe visual question answering model BLIP and CLIP. We validate our method on\nthe SCULPT dataset, and compare to state-of-the-art 3D generative models for\nclothed human bodies. Our code and data can be found at\nhttps://sculpt.is.tue.mpg.de.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Soubhik Sanyal", "Partha Ghosh", "Jinlong Yang", "Michael J. Black", "Justus Thies", "Timo Bolkart"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5f5"}, "filepath": "data/2204.07845.png", "tags": [], "_media_type": "image", "_rand": 0.9996992777197956, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2204.07845", "other_link": "https://zengxianyu.github.io/objpaint}", "title": "Brush2Prompt: Contextual Prompt Generator for Object Inpainting", "abstract": "Previous works on image inpainting mainly focus on inpainting background or\npartially missing objects, while the problem of inpainting an entire missing\nobject remains unexplored. 
This work studies a new image inpainting task, i.e.\nshape-guided object inpainting. Given an incomplete input image, the goal is to\nfill in the hole by generating an object based on the context and implicit\nguidance given by the hole shape. Since previous methods for image inpainting\nare mainly designed for background inpainting, they are not suitable for this\ntask. Therefore, we propose a new data preparation method and a novel\nContextual Object Generator (CogNet) for the object inpainting task. On the\ndata side, we incorporate object priors into training data by using object\ninstances as holes. The CogNet has a two-stream architecture that combines the\nstandard bottom-up image completion process with a top-down object generation\nprocess. A predictive class embedding module bridges the two streams by\npredicting the class of the missing object from the bottom-up features, from\nwhich a semantic object map is derived as the input of the top-down stream.\nExperiments demonstrate that the proposed method can generate realistic objects\nthat fit the context in terms of both visual appearance and semantic meanings.\nCode can be found at the project page\n\\url{https://zengxianyu.github.io/objpaint}", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Mang Tik Chiu", "Yuqian Zhou", "Lingzhi Zhang", "Zhe Lin", "Connelly Barnes", "Sohrab Amirghodsi", "Eli Shechtman", "Humphrey Shi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5f6"}, "filepath": "data/2312.00834.png", "tags": [], "_media_type": "image", "_rand": 0.9999862194092933, "arXiv_link": "https://arxiv.org/abs/2312.00834", "other_link": "https://www.youtube.com/watch?v=tTsKhviukAE.", "title": "AV-RIR: Audio-Visual Room Impulse Response Estimation", "abstract": "Accurate estimation of Room Impulse Response (RIR), which captures an\nenvironment's acoustic properties, is important for speech processing and AR/VR\napplications. We propose AV-RIR, a novel multi-modal multi-task learning\napproach to accurately estimate the RIR from a given reverberant speech signal\nand the visual cues of its corresponding environment. AV-RIR builds on a novel\nneural codec-based architecture that effectively captures environment geometry\nand materials properties and solves speech dereverberation as an auxiliary task\nby using multi-task learning. We also propose Geo-Mat features that augment\nmaterial information into visual cues and CRIP that improves late reverberation\ncomponents in the estimated RIR via image-to-RIR retrieval by 86%. Empirical\nresults show that AV-RIR quantitatively outperforms previous audio-only and\nvisual-only approaches by achieving 36% - 63% improvement across various\nacoustic metrics in RIR estimation. Additionally, it also achieves higher\npreference scores in human evaluation. As an auxiliary benefit, dereverbed\nspeech from AV-RIR shows competitive performance with the state-of-the-art in\nvarious spoken language processing tasks and outperforms reverberation time\nerror score in the real-world AVSpeech dataset. 
Qualitative examples of both\nsynthesized reverberant speech and enhanced speech can be found at\nhttps://www.youtube.com/watch?v=tTsKhviukAE.", "keywords": ["Multimodal models and vision-language models", "Scene analysis and understanding"], "authors_list": ["Anton Ratnarajah", "Sreyan Ghosh", "Sonal Kumar", "Purva Chiniya", "Dinesh Manocha"], "category_name": "Sound", "all_categories": ["Sound", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5f7"}, "filepath": "data/2402.09812.png", "tags": [], "_media_type": "image", "_rand": 0.9991146632734053, "arXiv_link": "https://arxiv.org/abs/2402.09812", "other_link": "", "title": "DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization", "abstract": "The objective of text-to-image (T2I) personalization is to customize a\ndiffusion model to a user-provided reference concept, generating diverse images\nof the concept aligned with the target prompts. Conventional methods\nrepresenting the reference concepts using unique text embeddings often fail to\naccurately mimic the appearance of the reference. To address this, one solution\nmay be explicitly conditioning the reference images into the target denoising\nprocess, known as key-value replacement. However, prior works are constrained\nto local editing since they disrupt the structure path of the pre-trained T2I\nmodel. To overcome this, we propose a novel plug-in method, called\nDreamMatcher, which reformulates T2I personalization as semantic matching.\nSpecifically, DreamMatcher replaces the target values with reference values\naligned by semantic matching, while leaving the structure path unchanged to\npreserve the versatile capability of pre-trained T2I models for generating\ndiverse structures. We also introduce a semantic-consistent masking strategy to\nisolate the personalized concept from irrelevant regions introduced by the\ntarget prompts. Compatible with existing T2I models, DreamMatcher shows\nsignificant improvements in complex scenarios. Intensive analyses demonstrate\nthe effectiveness of our approach.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Jisu Nam", "Heesu Kim", "DongJae Lee", "Siyoon Jin", "Seungryong Kim", "Seunggyu Chang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5f8"}, "filepath": "data/2404.09736.png", "tags": [], "_media_type": "image", "_rand": 0.9996110419813897, "arXiv_link": "https://arxiv.org/abs/2404.09736", "other_link": "", "title": "FSRT: Facial Scene Representation Transformer for Face Reenactment from Factorized Appearance, Head-pose, and Facial Expression Features", "abstract": "The task of face reenactment is to transfer the head motion and facial\nexpressions from a driving video to the appearance of a source image, which may\nbe of a different person (cross-reenactment). Most existing methods are\nCNN-based and estimate optical flow from the source image to the current\ndriving frame, which is then inpainted and refined to produce the output\nanimation. We propose a transformer-based encoder for computing a set-latent\nrepresentation of the source image(s). 
We then predict the output color of a\nquery pixel using a transformer-based decoder, which is conditioned with\nkeypoints and a facial expression vector extracted from the driving frame.\nLatent representations of the source person are learned in a self-supervised\nmanner that factorize their appearance, head pose, and facial expressions.\nThus, they are perfectly suited for cross-reenactment. In contrast to most\nrelated work, our method naturally extends to multiple source images and can\nthus adapt to person-specific facial dynamics. We also propose data\naugmentation and regularization schemes that are necessary to prevent\noverfitting and support generalizability of the learned representations. We\nevaluated our approach in a randomized user study. The results indicate\nsuperior performance compared to the state-of-the-art in terms of motion\ntransfer quality and temporal consistency.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Andre Rochow", "Max Schwarz", "Sven Behnke"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5f9"}, "filepath": "data/2405.05252.png", "tags": [], "_media_type": "image", "_rand": 0.9994705389920147, "arXiv_link": "https://arxiv.org/abs/2405.05252", "other_link": "https://atedm.github.io.", "title": "Attention-Driven Training-Free Efficiency Enhancement of Diffusion Models", "abstract": "Diffusion Models (DMs) have exhibited superior performance in generating\nhigh-quality and diverse images. However, this exceptional performance comes at\nthe cost of expensive architectural design, particularly due to the attention\nmodule heavily used in leading models. Existing works mainly adopt a retraining\nprocess to enhance DM efficiency. This is computationally expensive and not\nvery scalable. To this end, we introduce the Attention-driven Training-free\nEfficient Diffusion Model (AT-EDM) framework that leverages attention maps to\nperform run-time pruning of redundant tokens, without the need for any\nretraining. Specifically, for single-denoising-step pruning, we develop a novel\nranking algorithm, Generalized Weighted Page Rank (G-WPR), to identify\nredundant tokens, and a similarity-based recovery method to restore tokens for\nthe convolution operation. In addition, we propose a Denoising-Steps-Aware\nPruning (DSAP) approach to adjust the pruning budget across different denoising\ntimesteps for better generation quality. Extensive evaluations show that AT-EDM\nperforms favorably against prior art in terms of efficiency (e.g., 38.8% FLOPs\nsaving and up to 1.53x speed-up over Stable Diffusion XL) while maintaining\nnearly the same FID and CLIP scores as the full model. 
Project webpage:\nhttps://atedm.github.io.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Hongjie Wang", "Difan Liu", "Yan Kang", "Yijun Li", "Zhe Lin", "Niraj Jha", "Yuchen Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning", "Image and Video Processing", "Signal Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5fa"}, "filepath": "data/2403.16258.png", "tags": [], "_media_type": "image", "_rand": 0.999951958276732, "arXiv_link": "https://arxiv.org/abs/2403.16258", "other_link": "", "title": "Laplacian-guided Entropy Model in Neural Codec with Blur-dissipated Synthesis", "abstract": "While replacing Gaussian decoders with a conditional diffusion model enhances\nthe perceptual quality of reconstructions in neural image compression, their\nlack of inductive bias for image data restricts their ability to achieve\nstate-of-the-art perceptual levels. To address this limitation, we adopt a\nnon-isotropic diffusion model at the decoder side. This model imposes an\ninductive bias aimed at distinguishing between frequency contents, thereby\nfacilitating the generation of high-quality images. Moreover, our framework is\nequipped with a novel entropy model that accurately models the probability\ndistribution of latent representation by exploiting spatio-channel correlations\nin latent space, while accelerating the entropy decoding step. This\nchannel-wise entropy model leverages both local and global spatial contexts\nwithin each channel chunk. The global spatial context is built upon the\nTransformer, which is specifically designed for image compression tasks. The\ndesigned Transformer employs a Laplacian-shaped positional encoding, the\nlearnable parameters of which are adaptively adjusted for each channel cluster.\nOur experiments demonstrate that our proposed framework yields better\nperceptual quality compared to cutting-edge generative-based codecs, and the\nproposed entropy model contributes to notable bitrate savings.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Atefeh Khoshkhahtinat", "Ali Zafari", "Piyush Mehta", "Nasser Nasrabadi"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition", "Information Theory", "Machine Learning", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5fb"}, "filepath": "data/2403.08381.png", "tags": [], "_media_type": "image", "_rand": 0.9993491817990247, "arXiv_link": "https://arxiv.org/abs/2403.08381", "other_link": "", "title": "Tackling the Singularities at the Endpoints of Time Intervals in Diffusion Models", "abstract": "Most diffusion models assume that the reverse process adheres to a Gaussian\ndistribution. However, this approximation has not been rigorously validated,\nespecially at singularities, where t=0 and t=1. Improperly dealing with such\nsingularities leads to an average brightness issue in applications, and limits\nthe generation of images with extreme brightness or darkness. We primarily\nfocus on tackling singularities from both theoretical and practical\nperspectives. Initially, we establish the error bounds for the reverse process\napproximation, and showcase its Gaussian characteristics at singularity time\nsteps. 
Based on this theoretical insight, we confirm the singularity at t=1 is\nconditionally removable while it at t=0 is an inherent property. Upon these\nsignificant conclusions, we propose a novel plug-and-play method SingDiffusion\nto address the initial singular time step sampling, which not only effectively\nresolves the average brightness issue for a wide range of diffusion models\nwithout extra training efforts, but also enhances their generation capability\nin achieving notable lower FID scores.", "keywords": [], "authors_list": ["Pengze Zhang", "Hubery Yin", "Chen Li", "Xiaohua Xie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5fc"}, "filepath": "data/2404.16035v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992190530035218, "arXiv_link": "https://arxiv.org/abs/2404.16035v1", "other_link": "", "title": "MaGGIe: Masked Guided Gradual Human Instance Matting", "abstract": "Human matting is a foundation task in image and video processing, where human\nforeground pixels are extracted from the input. Prior works either improve the\naccuracy by additional guidance or improve the temporal consistency of a single\ninstance across frames. We propose a new framework MaGGIe, Masked Guided\nGradual Human Instance Matting, which predicts alpha mattes progressively for\neach human instances while maintaining the computational cost, precision, and\nconsistency. Our method leverages modern architectures, including transformer\nattention and sparse convolution, to output all instance mattes simultaneously\nwithout exploding memory and latency. Although keeping constant inference costs\nin the multiple-instance scenario, our framework achieves robust and versatile\nperformance on our proposed synthesized benchmarks. With the higher quality\nimage and video matting benchmarks, the novel multi-instance synthesis approach\nfrom publicly available sources is introduced to increase the generalization of\nmodels in real-world scenarios.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Chuong Huynh", "Seoung Wug Oh", "Abhinav Shrivastava", "Joon-Young Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5fd"}, "filepath": "data/2312.02153v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998241622023478, "arXiv_link": "https://arxiv.org/abs/2312.02153v1", "other_link": "https://github.com/shenyunhang/APE.", "title": "Aligning and Prompting Everything All at Once for Universal Visual Perception", "abstract": "Vision foundation models have been explored recently to build general-purpose\nvision systems. However, predominant paradigms, driven by casting\ninstance-level tasks as an object-word alignment, bring heavy cross-modality\ninteraction, which is not effective in prompting object detection and visual\ngrounding. Another line of work that focuses on pixel-level tasks often\nencounters a large annotation gap of things and stuff, and suffers from mutual\ninterference between foreground-object and background-class segmentation. 
In\nstark contrast to the prevailing methods, we present APE, a universal visual\nperception model for aligning and prompting everything all at once in an image\nto perform diverse tasks, i.e., detection, segmentation, and grounding, as an\ninstance-level sentence-object matching paradigm. Specifically, APE advances\nthe convergence of detection and grounding by reformulating language-guided\ngrounding as open-vocabulary detection, which efficiently scales up model\nprompting to thousands of category vocabularies and region descriptions while\nmaintaining the effectiveness of cross-modality fusion. To bridge the\ngranularity gap of different pixel-level tasks, APE equalizes semantic and\npanoptic segmentation to proxy instance learning by considering any isolated\nregions as individual instances. APE aligns vision and language representation\non broad data with natural and challenging characteristics all at once without\ntask-specific fine-tuning. The extensive experiments on over 160 datasets\ndemonstrate that, with only one-suit of weights, APE outperforms (or is on par\nwith) the state-of-the-art models, proving that an effective yet universal\nperception for anything aligning and prompting is indeed feasible. Codes and\ntrained models are released at https://github.com/shenyunhang/APE.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Yunhang Shen", "Chaoyou Fu", "Peixian Chen", "Mengdan Zhang", "Ke Li", "Xing Sun", "Yunsheng Wu", "Shaohui Lin", "Rongrong Ji"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5fe"}, "filepath": "data/2404.00672v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995634381009209, "arXiv_link": "https://arxiv.org/abs/2404.00672v1", "other_link": "https://github.com/Osilly/TokenExpansion", "title": "A General and Efficient Training for Transformer via Token Expansion", "abstract": "The remarkable performance of Vision Transformers (ViTs) typically requires\nan extremely large training cost. Existing methods have attempted to accelerate\nthe training of ViTs, yet typically disregard method universality with accuracy\ndropping. Meanwhile, they break the training consistency of the original\ntransformers, including the consistency of hyper-parameters, architecture, and\nstrategy, which prevents them from being widely applied to different\nTransformer networks. In this paper, we propose a novel token growth scheme\nToken Expansion (termed ToE) to achieve consistent training acceleration for\nViTs. We introduce an \"initialization-expansion-merging\" pipeline to maintain\nthe integrity of the intermediate feature distribution of original\ntransformers, preventing the loss of crucial learnable information in the\ntraining process. ToE can not only be seamlessly integrated into the training\nand fine-tuning process of transformers (e.g., DeiT and LV-ViT), but also\neffective for efficient training frameworks (e.g., EfficientTrain), without\ntwisting the original training hyper-parameters, architecture, and introducing\nadditional training strategies. Extensive experiments demonstrate that ToE\nachieves about 1.3x faster for the training of ViTs in a lossless manner, or\neven with performance gains over the full-token training baselines. 
Code is\navailable at https://github.com/Osilly/TokenExpansion .", "keywords": [], "authors_list": ["Wenxuan Huang", "Yunhang Shen", "Jiao Xie", "Baochang Zhang", "Gaoqi He", "Ke Li", "Xing Sun", "Shaohui Lin"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Computation and Language", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f5ff"}, "filepath": "data/2403.01795.png", "tags": [], "_media_type": "image", "_rand": 0.9997278175310153, "arXiv_link": "https://arxiv.org/abs/2403.01795", "other_link": "https://ranked-cvpr24.github.io.", "title": "RankED: Addressing Imbalance and Uncertainty in Edge Detection Using Ranking-based Losses", "abstract": "Detecting edges in images suffers from the problems of (P1) heavy imbalance\nbetween positive and negative classes as well as (P2) label uncertainty owing\nto disagreement between different annotators. Existing solutions address P1\nusing class-balanced cross-entropy loss and dice loss and P2 by only predicting\nedges agreed upon by most annotators. In this paper, we propose RankED, a\nunified ranking-based approach that addresses both the imbalance problem (P1)\nand the uncertainty problem (P2). RankED tackles these two problems with two\ncomponents: One component which ranks positive pixels over negative pixels, and\nthe second which promotes high confidence edge pixels to have more label\ncertainty. We show that RankED outperforms previous studies and sets a new\nstate-of-the-art on NYUD-v2, BSDS500 and Multi-cue datasets. Code is available\nat https://ranked-cvpr24.github.io.", "keywords": ["Low-level vision"], "authors_list": ["bedrettin cetinkaya", "Sinan Kalkan", "Emre Akbas"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f600"}, "filepath": "data/2308.12112.png", "tags": [], "_media_type": "image", "_rand": 0.9990268874778717, "arXiv_link": "https://arxiv.org/abs/2308.12112", "other_link": "", "title": "Solving the Catastrophic Forgetting Problem in Generalized Category Discovery", "abstract": "Most of Continual Learning (CL) methods push the limit of supervised learning\nsettings, where an agent is expected to learn new labeled tasks and not forget\nprevious knowledge. However, these settings are not well aligned with real-life\nscenarios, where a learning agent has access to a vast amount of unlabeled data\nencompassing both novel (entirely unlabeled) classes and examples from known\nclasses. Drawing inspiration from Generalized Category Discovery (GCD), we\nintroduce a novel framework that relaxes this assumption. Precisely, in any\ntask, we allow for the existence of novel and known classes, and one must use\ncontinual version of unsupervised learning methods to discover them. We call\nthis setting Generalized Continual Category Discovery (GCCD). It unifies CL and\nGCD, bridging the gap between synthetic benchmarks and real-life scenarios.\nWith a series of experiments, we present that existing methods fail to\naccumulate knowledge from subsequent tasks in which unlabeled samples of novel\nclasses are present. In light of these limitations, we propose a method that\nincorporates both supervised and unsupervised signals and mitigates the\nforgetting through the use of centroid adaptation. 
Our method surpasses strong\nCL methods adopted for GCD techniques and presents a superior representation\nlearning performance.", "keywords": [], "authors_list": ["Xinzi Cao", "Xiawu Zheng", "Guanhong Wang", "Weijiang Yu", "Yunhang Shen", "Ke Li", "Yutong Lu", "Yonghong Tian"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f601"}, "filepath": "data/2405.19074.png", "tags": [], "_media_type": "image", "_rand": 0.9998753137164285, "arXiv_link": "https://arxiv.org/abs/2405.19074", "other_link": "https://github.com/dipamgoswami/ADC.", "title": "Resurrecting Old Classes with New Data for Exemplar-Free Continual Learning", "abstract": "Continual learning methods are known to suffer from catastrophic forgetting,\na phenomenon that is particularly hard to counter for methods that do not store\nexemplars of previous tasks. Therefore, to reduce potential drift in the\nfeature extractor, existing exemplar-free methods are typically evaluated in\nsettings where the first task is significantly larger than subsequent tasks.\nTheir performance drops drastically in more challenging settings starting with\na smaller first task. To address this problem of feature drift estimation for\nexemplar-free methods, we propose to adversarially perturb the current samples\nsuch that their embeddings are close to the old class prototypes in the old\nmodel embedding space. We then estimate the drift in the embedding space from\nthe old to the new model using the perturbed images and compensate the\nprototypes accordingly. We exploit the fact that adversarial samples are\ntransferable from the old to the new feature space in a continual learning\nsetting. The generation of these images is simple and computationally cheap. We\ndemonstrate in our experiments that the proposed approach better tracks the\nmovement of prototypes in embedding space and outperforms existing methods on\nseveral standard continual learning benchmarks as well as on fine-grained\ndatasets. Code is available at https://github.com/dipamgoswami/ADC.", "keywords": [], "authors_list": ["Dipam Goswami", "Albin Soutif", "Yuyang Liu", "Sandesh Kamath", "Bart\u0142omiej Twardowski", "Joost van de Weijer"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f602"}, "filepath": "data/2312.00084.png", "tags": [], "_media_type": "image", "_rand": 0.9995610926165747, "arXiv_link": "https://arxiv.org/abs/2312.00084", "other_link": "", "title": "Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?", "abstract": "Stable Diffusion has established itself as a foundation model in generative\nAI artistic applications, receiving widespread research and application. Some\nrecent fine-tuning methods have made it feasible for individuals to implant\npersonalized concepts onto the basic Stable Diffusion model with minimal\ncomputational costs on small datasets. However, these innovations have also\ngiven rise to issues like facial privacy forgery and artistic copyright\ninfringement. 
In recent studies, researchers have explored the addition of\nimperceptible adversarial perturbations to images to prevent potential\nunauthorized exploitation and infringements when personal data is used for\nfine-tuning Stable Diffusion. Although these studies have demonstrated the\nability to protect images, it is essential to consider that these methods may\nnot be entirely applicable in real-world scenarios. In this paper, we\nsystematically evaluate the use of perturbations to protect images within a\npractical threat model. The results suggest that these approaches may not be\nsufficient to safeguard image privacy and copyright effectively. Furthermore,\nwe introduce a purification method capable of removing protected perturbations\nwhile preserving the original image structure to the greatest extent possible.\nExperiments reveal that Stable Diffusion can effectively learn from purified\nimages over all protective methods.", "keywords": ["Image and video generation and manipulation", "Vision applications for social good and ethics"], "authors_list": ["Zhengyue Zhao", "Jinhao Duan", "Kaidi Xu", "Chenan Wang", "Rui Zhang", "Zidong Du", "Qi Guo", "Xing Hu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f603"}, "filepath": "data/2404.19531.png", "tags": [], "_media_type": "image", "_rand": 0.9996444182246784, "arXiv_link": "http://export.arxiv.org/abs/2404.19531", "other_link": "", "title": "MoST: Multi-modality Scene Tokenization for Motion Prediction", "abstract": "Many existing motion prediction approaches rely on symbolic perception\noutputs to generate agent trajectories, such as bounding boxes, road graph\ninformation and traffic lights. This symbolic representation is a high-level\nabstraction of the real world, which may render the motion prediction model\nvulnerable to perception errors (e.g., failures in detecting open-vocabulary\nobstacles) while missing salient information from the scene context (e.g., poor\nroad conditions). An alternative paradigm is end-to-end learning from raw\nsensors. However, this approach suffers from the lack of interpretability and\nrequires significantly more training resources. In this work, we propose\ntokenizing the visual world into a compact set of scene elements and then\nleveraging pre-trained image foundation models and LiDAR neural networks to\nencode all the scene elements in an open-vocabulary manner. The image\nfoundation model enables our scene tokens to encode the general knowledge of\nthe open world while the LiDAR neural network encodes geometry information. Our\nproposed representation can efficiently encode the multi-frame multi-modality\nobservations with a few hundred tokens and is compatible with most\ntransformer-based architectures. To evaluate our method, we have augmented\nWaymo Open Motion Dataset with camera embeddings. Experiments over Waymo Open\nMotion Dataset show that our approach leads to significant performance\nimprovements over the state-of-the-art.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Norman Mu", "Jingwei Ji", "Zhenpei Yang", "Nathan Harada", "Haotian Tang", "Kan Chen", "Charles R. 
Qi", "Runzhou Ge", "Kratarth Goel", "Zoey Yang", "Scott Ettinger", "Rami Al-Rfou", "Dragomir Anguelov", "Yin Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f604"}, "filepath": "data/2404.02189.png", "tags": [], "_media_type": "image", "_rand": 0.9994360764672602, "arXiv_link": "https://arxiv.org/abs/2404.02189", "other_link": "", "title": "Insights from the Use of Previously Unseen Neural Architecture Search Datasets", "abstract": "The boundless possibility of neural networks which can be used to solve a\nproblem -- each with different performance -- leads to a situation where a Deep\nLearning expert is required to identify the best neural network. This goes\nagainst the hope of removing the need for experts. Neural Architecture Search\n(NAS) offers a solution to this by automatically identifying the best\narchitecture. However, to date, NAS work has focused on a small set of datasets\nwhich we argue are not representative of real-world problems. We introduce\neight new datasets created for a series of NAS Challenges: AddNIST, Language,\nMultNIST, CIFARTile, Gutenberg, Isabella, GeoClassing, and Chesseract. These\ndatasets and challenges are developed to direct attention to issues in NAS\ndevelopment and to encourage authors to consider how their models will perform\non datasets unknown to them at development time. We present experimentation\nusing standard Deep Learning methods as well as the best results from challenge\nparticipants.", "keywords": [], "authors_list": ["Rob Geada", "David Towers", "Matthew Forshaw", "Amir Atapour-Abarghouei", "Stephen McGough"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f605"}, "filepath": "data/2403.19067.png", "tags": [], "_media_type": "image", "_rand": 0.9996710852254508, "arXiv_link": "https://arxiv.org/abs/2403.19067", "other_link": "https://github.com/zstarN70/RLRR.git}{https://github.com/zstarN70/RLRR.git}.", "title": "Low-Rank Rescaled Vision Transformer Fine-Tuning: A Residual Design Approach", "abstract": "Parameter-efficient fine-tuning for pre-trained Vision Transformers aims to\nadeptly tailor a model to downstream tasks by learning a minimal set of new\nadaptation parameters while preserving the frozen majority of pre-trained\nparameters. Striking a balance between retaining the generalizable\nrepresentation capacity of the pre-trained model and acquiring task-specific\nfeatures poses a key challenge. Currently, there is a lack of focus on guiding\nthis delicate trade-off. In this study, we approach the problem from the\nperspective of Singular Value Decomposition (SVD) of pre-trained parameter\nmatrices, providing insights into the tuning dynamics of existing methods.\nBuilding upon this understanding, we propose a Residual-based Low-Rank\nRescaling (RLRR) fine-tuning strategy. This strategy not only enhances\nflexibility in parameter tuning but also ensures that new parameters do not\ndeviate excessively from the pre-trained model through a residual design.\nExtensive experiments demonstrate that our method achieves competitive\nperformance across various downstream image classification tasks, all while\nmaintaining comparable new parameters. 
We believe this work takes a step\nforward in offering a unified perspective for interpreting existing methods and\nserves as motivation for the development of new approaches that move closer to\neffectively considering the crucial trade-off mentioned above. Our code is\navailable at\n\\href{https://github.com/zstarN70/RLRR.git}{https://github.com/zstarN70/RLRR.git}.", "keywords": [], "authors_list": ["Wei Dong", "Xing Zhang", "Bihui Chen", "Dawei Yan", "Zhijun Lin", "Qingsen Yan", "Peng Wang", "Yang Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f606"}, "filepath": "data/2311.11837v1.png", "tags": [], "_media_type": "image", "_rand": 0.999214117256821, "arXiv_link": "https://arxiv.org/abs/2311.11837v1", "other_link": "", "title": "Kandinsky Conformal Prediction: Efficient Calibration of Image Segmentation Algorithms", "abstract": "Image segmentation algorithms can be understood as a collection of pixel\nclassifiers, for which the outcomes of nearby pixels are correlated. Classifier\nmodels can be calibrated using Inductive Conformal Prediction, but this\nrequires holding back a sufficiently large calibration dataset for computing\nthe distribution of non-conformity scores of the model's predictions. If one\nonly requires only marginal calibration on the image level, this calibration\nset consists of all individual pixels in the images available for calibration.\nHowever, if the goal is to attain proper calibration for each individual pixel\nclassifier, the calibration set consists of individual images. In a scenario\nwhere data are scarce (such as the medical domain), it may not always be\npossible to set aside sufficiently many images for this pixel-level\ncalibration. The method we propose, dubbed ``Kandinsky calibration'', makes use\nof the spatial structure present in the distribution of natural images to\nsimultaneously calibrate the classifiers of ``similar'' pixels. This can be\nseen as an intermediate approach between marginal (imagewise) and conditional\n(pixelwise) calibration, where non-conformity scores are aggregated over\nsimilar image regions, thereby making more efficient use of the images\navailable for calibration. We run experiments on segmentation algorithms\ntrained and calibrated on subsets of the public MS-COCO and Medical Decathlon\ndatasets, demonstrating that Kandinsky calibration method can significantly\nimprove the coverage. When compared to both pixelwise and imagewise calibration\non little data, the Kandinsky method achieves much lower coverage errors,\nindicating the data efficiency of the Kandinsky calibration.", "keywords": ["Efficient and scalable vision", "Medical imaging and biological vision"], "authors_list": ["Joren Brunekreef", "Eric Marcus", "Ray Sheombarsing", "Jan-Jakob Sonke", "Jonas Teuwen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f607"}, "filepath": "data/2402.19463.png", "tags": [], "_media_type": "image", "_rand": 0.9999893167782029, "arXiv_link": "https://arxiv.org/abs/2402.19463", "other_link": "", "title": "What Moves Together Belongs Together", "abstract": "We tackle semi-supervised object detection based on motion cues. 
Recent\nresults suggest that heuristic-based clustering methods in conjunction with\nobject trackers can be used to pseudo-label instances of moving objects and use\nthese as supervisory signals to train 3D object detectors in Lidar data without\nmanual supervision. We re-think this approach and suggest that both, object\ndetection, as well as motion-inspired pseudo-labeling, can be tackled in a\ndata-driven manner. We leverage recent advances in scene flow estimation to\nobtain point trajectories from which we extract long-term, class-agnostic\nmotion patterns. Revisiting correlation clustering in the context of message\npassing networks, we learn to group those motion patterns to cluster points to\nobject instances. By estimating the full extent of the objects, we obtain\nper-scan 3D bounding boxes that we use to supervise a Lidar object detection\nnetwork. Our method not only outperforms prior heuristic-based approaches (57.5\nAP, +14 improvement over prior work), more importantly, we show we can\npseudo-label and train object detectors across datasets.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Jenny Seidenschwarz", "Aljo\u0161a O\u0161ep", "Francesco Ferroni", "Simon Lucey", "Laura Leal-Taixe"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f608"}, "filepath": "data/2405.20786.png", "tags": [], "_media_type": "image", "_rand": 0.9991994565184535, "arXiv_link": "https://arxiv.org/abs/2405.20786", "other_link": "", "title": "Stratified Avatar Generation from Sparse Observations", "abstract": "Estimating 3D full-body avatars from AR/VR devices is essential for creating\nimmersive experiences in AR/VR applications. This task is challenging due to\nthe limited input from Head Mounted Devices, which capture only sparse\nobservations from the head and hands. Predicting the full-body avatars,\nparticularly the lower body, from these sparse observations presents\nsignificant difficulties. In this paper, we are inspired by the inherent\nproperty of the kinematic tree defined in the Skinned Multi-Person Linear\n(SMPL) model, where the upper body and lower body share only one common\nancestor node, bringing the potential of decoupled reconstruction. We propose a\nstratified approach to decouple the conventional full-body avatar\nreconstruction pipeline into two stages, with the reconstruction of the upper\nbody first and a subsequent reconstruction of the lower body conditioned on the\nprevious stage. To implement this straightforward idea, we leverage the latent\ndiffusion model as a powerful probabilistic generator, and train it to follow\nthe latent distribution of decoupled motions explored by a VQ-VAE\nencoder-decoder model. 
Extensive experiments on AMASS mocap dataset demonstrate\nour state-of-the-art performance in the reconstruction of full-body motions.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Han Feng", "Wenchao Ma", "Quankai Gao", "Xianwei Zheng", "Nan Xue", "Huijuan Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Human-Computer Interaction"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f609"}, "filepath": "data/2401.05577.png", "tags": [], "_media_type": "image", "_rand": 0.9993437544696989, "arXiv_link": "https://arxiv.org/abs/2401.05577", "other_link": "", "title": "VLP: Vision Language Planning for Autonomous Driving", "abstract": "Autonomous driving is a complex and challenging task that aims at safe motion\nplanning through scene understanding and reasoning. While vision-only\nautonomous driving methods have recently achieved notable performance, through\nenhanced scene understanding, several key issues, including lack of reasoning,\nlow generalization performance and long-tail scenarios, still need to be\naddressed. In this paper, we present VLP, a novel Vision-Language-Planning\nframework that exploits language models to bridge the gap between linguistic\nunderstanding and autonomous driving. VLP enhances autonomous driving systems\nby strengthening both the source memory foundation and the self-driving car's\ncontextual understanding. VLP achieves state-of-the-art end-to-end planning\nperformance on the challenging NuScenes dataset by achieving 35.9\\% and 60.5\\%\nreduction in terms of average L2 error and collision rates, respectively,\ncompared to the previous best method. Moreover, VLP shows improved performance\nin challenging long-tail scenarios and strong generalization capabilities when\nfaced with new urban environments.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Chenbin Pan", "Burhaneddin Yaman", "Tommaso Nesti", "Abhirup Mallik", "Alessandro G Allievi", "Senem Velipasalar", "Liu Ren"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f60a"}, "filepath": "data/2312.09625.png", "tags": [], "_media_type": "image", "_rand": 0.9993242497082219, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2312.09625", "other_link": "", "title": "Omni-Q: Omni-Directional Scene Understanding for Unsupervised Visual Grounding", "abstract": "Learning to ground natural language queries to target objects or regions in\n3D point clouds is quite essential for 3D scene understanding. Nevertheless,\nexisting 3D visual grounding approaches require a substantial number of\nbounding box annotations for text queries, which is time-consuming and\nlabor-intensive to obtain. In this paper, we propose \\textbf{3D-VLA}, a weakly\nsupervised approach for \\textbf{3D} visual grounding based on \\textbf{V}isual\n\\textbf{L}inguistic \\textbf{A}lignment. 
Our 3D-VLA exploits the superior\nability of current large-scale vision-language models (VLMs) on aligning the\nsemantics between texts and 2D images, as well as the naturally existing\ncorrespondences between 2D images and 3D point clouds, and thus implicitly\nconstructs correspondences between texts and 3D point clouds with no need for\nfine-grained box annotations in the training procedure. During the inference\nstage, the learned text-3D correspondence will help us ground the text queries\nto the 3D target objects even without 2D images. To the best of our knowledge,\nthis is the first work to investigate 3D visual grounding in a weakly\nsupervised manner by involving large scale vision-language models, and\nextensive experiments on ReferIt3D and ScanRefer datasets demonstrate that our\n3D-VLA achieves comparable and even superior results over the fully supervised\nmethods.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Sai Wang", "Yutian Lin", "Yu Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f60b"}, "filepath": "data/2404.04823.png", "tags": [], "_media_type": "image", "_rand": 0.9993935496110935, "arXiv_link": "https://arxiv.org/abs/2404.04823", "other_link": "https://github.com/opendatalab/MLS-BRN.git.", "title": "3D Building Reconstruction from Monocular Remote Sensing Images with Multi-level Supervisions", "abstract": "3D building reconstruction from monocular remote sensing images is an\nimportant and challenging research problem that has received increasing\nattention in recent years, owing to its low cost of data acquisition and\navailability for large-scale applications. However, existing methods rely on\nexpensive 3D-annotated samples for fully-supervised training, restricting their\napplication to large-scale cross-city scenarios. In this work, we propose\nMLS-BRN, a multi-level supervised building reconstruction network that can\nflexibly utilize training samples with different annotation levels to achieve\nbetter reconstruction results in an end-to-end manner. To alleviate the demand\non full 3D supervision, we design two new modules, Pseudo Building Bbox\nCalculator and Roof-Offset guided Footprint Extractor, as well as new tasks and\ntraining strategies for different types of samples. Experimental results on\nseveral public and new datasets demonstrate that our proposed MLS-BRN achieves\ncompetitive performance using much fewer 3D-annotated samples, and\nsignificantly improves the footprint extraction and 3D reconstruction\nperformance compared with current state-of-the-art. 
The code and datasets of\nthis work will be released at https://github.com/opendatalab/MLS-BRN.git.", "keywords": ["Remote sensing and photogrammetry", "Deep learning architectures and techniques"], "authors_list": ["Weijia Li", "Haote Yang", "Zhenghao Hu", "Juepeng Zheng", "Gui-Song Xia", "Conghui He"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f60c"}, "filepath": "data/2403.20318.png", "tags": [], "_media_type": "image", "_rand": 0.9993882058787744, "arXiv_link": "https://arxiv.org/abs/2403.20318", "other_link": "https://github.com/abhi1kumar/SeaBird", "title": "SeaBird: Segmentation in Bird\u2019s View with Dice Loss Improves Monocular 3D Detection of Large Objects", "abstract": "Monocular 3D detectors achieve remarkable performance on cars and smaller\nobjects. However, their performance drops on larger objects, leading to fatal\naccidents. Some attribute the failures to training data scarcity or their\nreceptive field requirements of large objects. In this paper, we highlight this\nunderstudied problem of generalization to large objects. We find that modern\nfrontal detectors struggle to generalize to large objects even on nearly\nbalanced datasets. We argue that the cause of failure is the sensitivity of\ndepth regression losses to noise of larger objects. To bridge this gap, we\ncomprehensively investigate regression and dice losses, examining their\nrobustness under varying error levels and object sizes. We mathematically prove\nthat the dice loss leads to superior noise-robustness and model convergence for\nlarge objects compared to regression losses for a simplified case. Leveraging\nour theoretical insights, we propose SeaBird (Segmentation in Bird's View) as\nthe first step towards generalizing to large objects. SeaBird effectively\nintegrates BEV segmentation on foreground objects for 3D detection, with the\nsegmentation head trained with the dice loss. SeaBird achieves SoTA results on\nthe KITTI-360 leaderboard and improves existing detectors on the nuScenes\nleaderboard, particularly for large objects. Code and models at\nhttps://github.com/abhi1kumar/SeaBird", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Abhinav Kumar", "Yuliang Guo", "Xinyu Huang", "Liu Ren", "Xiaoming Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f60d"}, "filepath": "data/2307.08727.png", "tags": [], "_media_type": "image", "_rand": 0.9998885003118347, "arXiv_link": "https://web3.arxiv.org/abs/2307.08727", "other_link": "", "title": "Learning to Count without Annotations", "abstract": "While recent supervised methods for reference-based object counting continue\nto improve the performance on benchmark datasets, they have to rely on small\ndatasets due to the cost associated with manually annotating dozens of objects\nin images. We propose UnCounTR, a model that can learn this task without\nrequiring any manual annotations. To this end, we construct \"Self-Collages\",\nimages with various pasted objects as training samples, that provide a rich\nlearning signal covering arbitrary object types and counts. 
Our method builds\non existing unsupervised representations and segmentation techniques to\nsuccessfully demonstrate for the first time the ability of reference-based\ncounting without manual supervision. Our experiments show that our method not\nonly outperforms simple baselines and generic models such as FasterRCNN and\nDETR, but also matches the performance of supervised counting models in some\ndomains.", "keywords": [], "authors_list": ["Lukas Knobel", "Tengda Han", "Yuki Asano"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f60e"}, "filepath": "data/2312.06709.png", "tags": [], "_media_type": "image", "_rand": 0.9998644618014852, "arXiv_link": "https://arxiv.org/abs/2312.06709", "other_link": "https://github.com/NVlabs/RADIO", "title": "AM-RADIO: Agglomerative Models - Reduce All Domains Into One", "abstract": "A handful of visual foundation models (VFMs) have recently emerged as the\nbackbones for numerous downstream tasks. VFMs like CLIP, DINOv2, SAM are\ntrained with distinct objectives, exhibiting unique characteristics for various\ndownstream tasks. We find that despite their conceptual differences, these\nmodels can be effectively merged into a unified model through multi-teacher\ndistillation. We name this approach AM-RADIO (Agglomerative Model -- Reduce All\nDomains Into One). This integrative approach not only surpasses the performance\nof individual teacher models but also amalgamates their distinctive features,\nsuch as zero-shot vision-language comprehension, detailed pixel-level\nunderstanding, and open vocabulary segmentation capabilities. In pursuit of the\nmost hardware-efficient backbone, we evaluated numerous architectures in our\nmulti-teacher distillation pipeline using the same training recipe. This led to\nthe development of a novel architecture (E-RADIO) that exceeds the performance\nof its predecessors and is at least 7x faster than the teacher models. Our\ncomprehensive benchmarking process covers downstream tasks including ImageNet\nclassification, ADE20k semantic segmentation, COCO object detection and\nLLaVa-1.5 framework.\n Code: https://github.com/NVlabs/RADIO", "keywords": ["Efficient and scalable vision"], "authors_list": ["Mike Ranzinger", "Greg Heinrich", "Jan Kautz", "Pavlo Molchanov"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f60f"}, "filepath": "data/2403.17360.png", "tags": [], "_media_type": "image", "_rand": 0.9999424775106202, "arXiv_link": "https://arxiv.org/abs/2403.17360", "other_link": "https://github.com/sacrcv/Activity-Biometrics/}", "title": "Activity-Biometrics: Person Identification from Daily Activities", "abstract": "In this work, we study a novel problem which focuses on person identification\nwhile performing daily activities. Learning biometric features from RGB videos\nis challenging due to spatio-temporal complexity and presence of appearance\nbiases such as clothing color and background. 
We propose ABNet, a novel\nframework which leverages disentanglement of biometric and non-biometric\nfeatures to perform effective person identification from daily activities.\nABNet relies on a bias-less teacher to learn biometric features from RGB videos\nand explicitly disentangle non-biometric features with the help of biometric\ndistortion. In addition, ABNet also exploits activity prior for biometrics\nwhich is enabled by joint biometric and activity learning. We perform\ncomprehensive evaluation of the proposed approach across five different\ndatasets which are derived from existing activity recognition benchmarks.\nFurthermore, we extensively compare ABNet with existing works in person\nidentification and demonstrate its effectiveness for activity-based biometrics\nacross all five datasets. The code and dataset can be accessed at:\n\\url{https://github.com/sacrcv/Activity-Biometrics/}", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Shehreen Azad", "Yogesh S. Rawat"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f610"}, "filepath": "data/2403.16412.png", "tags": [], "_media_type": "image", "_rand": 0.9995625183838263, "arXiv_link": "https://arxiv.org/abs/2403.16412", "other_link": "", "title": "Unsupervised Template-assisted Point Cloud Shape Correspondence Network", "abstract": "Unsupervised point cloud shape correspondence aims to establish point-wise\ncorrespondences between source and target point clouds. Existing methods obtain\ncorrespondences directly by computing point-wise feature similarity between\npoint clouds. However, non-rigid objects possess strong deformability and\nunusual shapes, making it a longstanding challenge to directly establish\ncorrespondences between point clouds with unconventional shapes. To address\nthis challenge, we propose an unsupervised Template-Assisted point cloud shape\ncorrespondence Network, termed TANet, including a template generation module\nand a template assistance module. The proposed TANet enjoys several merits.\nFirstly, the template generation module establishes a set of learnable\ntemplates with explicit structures. 
Secondly, we introduce a template\nassistance module that extensively leverages the generated templates to\nestablish more accurate shape correspondences from multiple perspectives.\nExtensive experiments on four human and animal datasets demonstrate that TANet\nachieves favorable performance against state-of-the-art methods.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Jiacheng Deng", "Jiahao Lu", "Tianzhu Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f611"}, "filepath": "data/2404.10766v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998206210105399, "arXiv_link": "https://arxiv.org/html/2404.10766v1", "other_link": "", "title": "Real-time Acquisition and Reconstruction of Dynamic Volumes with Neural Structured Illumination", "abstract": "Two-dimensional (2D) freehand ultrasonography is one of the most commonly\nused medical imaging modalities, particularly in obstetrics and gynaecology.\nHowever, it only captures 2D cross-sectional views of inherently 3D anatomies,\nlosing valuable contextual information. As an alternative to requiring costly\nand complex 3D ultrasound scanners, 3D volumes can be constructed from 2D scans\nusing machine learning. However this usually requires long computational time.\nHere, we propose RapidVol: a neural representation framework to speed up\nslice-to-volume ultrasound reconstruction. We use tensor-rank decomposition, to\ndecompose the typical 3D volume into sets of tri-planes, and store those\ninstead, as well as a small neural network. A set of 2D ultrasound scans, with\ntheir ground truth (or estimated) 3D position and orientation (pose) is all\nthat is required to form a complete 3D reconstruction. Reconstructions are\nformed from real fetal brain scans, and then evaluated by requesting novel\ncross-sectional views. When compared to prior approaches based on fully\nimplicit representation (e.g. neural radiance fields), our method is over 3x\nquicker, 46% more accurate, and if given inaccurate poses is more robust.\nFurther speed-up is also possible by reconstructing from a structural prior\nrather than from scratch.", "keywords": ["Computational imaging and physics-based vision", "Efficient and scalable vision"], "authors_list": ["Yixin Zeng", "Zoubin Bi", "Yin Mingrui", "Xiang Feng", "Kun Zhou", "Hongzhi Wu"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f612"}, "filepath": "data/2403.16143.png", "tags": [], "_media_type": "image", "_rand": 0.9994446645480436, "arXiv_link": "https://arxiv.org/abs/2403.16143", "other_link": "", "title": "CFAT: Unleashing Triangular Windows for Image Super-resolution", "abstract": "Transformer-based models have revolutionized the field of image\nsuper-resolution (SR) by harnessing their inherent ability to capture complex\ncontextual features. The overlapping rectangular shifted window technique used\nin transformer architecture nowadays is a common practice in super-resolution\nmodels to improve the quality and robustness of image upscaling. However, it\nsuffers from distortion at the boundaries and has limited unique shifting\nmodes. 
To overcome these weaknesses, we propose a non-overlapping triangular\nwindow technique that synchronously works with the rectangular one to mitigate\nboundary-level distortion and allows the model to access more unique sifting\nmodes. In this paper, we propose a Composite Fusion Attention Transformer\n(CFAT) that incorporates triangular-rectangular window-based local attention\nwith a channel-based global attention technique in image super-resolution. As a\nresult, CFAT enables attention mechanisms to be activated on more image pixels\nand captures long-range, multi-scale features to improve SR performance. The\nextensive experimental results and ablation study demonstrate the effectiveness\nof CFAT in the SR domain. Our proposed model shows a significant 0.7 dB\nperformance improvement over other state-of-the-art SR architectures.", "keywords": ["Low-level vision", "Efficient and scalable vision"], "authors_list": ["Abhisek Ray", "Gaurav Kumar", "Maheshkumar Kolekar"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition", "Machine Learning", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f613"}, "filepath": "data/2312.06716.png", "tags": [], "_media_type": "image", "_rand": 0.9992117250212396, "arXiv_link": "https://arxiv.org/abs/2312.06716", "other_link": "", "title": "Deciphering \u2018What\u2019 and \u2018Where\u2019 Visual Pathways from Spectral Clustering of Layer-Distributed Neural Representations", "abstract": "We present an approach for analyzing grouping information contained within a\nneural network's activations, permitting extraction of spatial layout and\nsemantic segmentation from the behavior of large pre-trained vision models.\nUnlike prior work, our method conducts a wholistic analysis of a network's\nactivation state, leveraging features from all layers and obviating the need to\nguess which part of the model contains relevant information. Motivated by\nclassic spectral clustering, we formulate this analysis in terms of an\noptimization objective involving a set of affinity matrices, each formed by\ncomparing features within a different layer. Solving this optimization problem\nusing gradient descent allows our technique to scale from single images to\ndataset-level analysis, including, in the latter, both intra- and inter-image\nrelationships. Analyzing a pre-trained generative transformer provides insight\ninto the computational strategy learned by such models. Equating affinity with\nkey-query similarity across attention layers yields eigenvectors encoding scene\nspatial layout, whereas defining affinity by value vector similarity yields\neigenvectors encoding object identity. 
This result suggests that key and query\nvectors coordinate attentional information flow according to spatial proximity\n(a `where' pathway), while value vectors refine a semantic category\nrepresentation (a `what' pathway).", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Xiao Zhang", "David Yunis", "Michael Maire"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f614"}, "filepath": "data/2403.03896v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996931688008989, "arXiv_link": "https://arxiv.org/abs/2403.03896v1", "other_link": "", "title": "DART: Implicit Doppler Tomography for Radar Novel View Synthesis", "abstract": "Simulation is an invaluable tool for radio-frequency system designers that\nenables rapid prototyping of various algorithms for imaging, target detection,\nclassification, and tracking. However, simulating realistic radar scans is a\nchallenging task that requires an accurate model of the scene, radio frequency\nmaterial properties, and a corresponding radar synthesis function. Rather than\nspecifying these models explicitly, we propose DART - Doppler Aided Radar\nTomography, a Neural Radiance Field-inspired method which uses radar-specific\nphysics to create a reflectance and transmittance-based rendering pipeline for\nrange-Doppler images. We then evaluate DART by constructing a custom data\ncollection platform and collecting a novel radar dataset together with accurate\nposition and instantaneous velocity measurements from lidar-based localization.\nIn comparison to state-of-the-art baselines, DART synthesizes superior radar\nrange-Doppler images from novel views across all datasets and additionally can\nbe used to generate high quality tomographic images.", "keywords": ["Remote sensing and photogrammetry", "Deep learning architectures and techniques"], "authors_list": ["Tianshu Huang", "John Miller", "Akarsh Prabhakara", "Tao Jin", "Tarana Laroia", "Zico Kolter", "Anthony Rowe"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f615"}, "filepath": "data/2405.20324.png", "tags": [], "_media_type": "image", "_rand": 0.999699740661895, "arXiv_link": "https://arxiv.org/abs/2405.20324", "other_link": "", "title": "Don\u2019t drop your samples! Coherence-aware training benefits Conditional diffusion", "abstract": "Conditional diffusion models are powerful generative models that can leverage\nvarious types of conditional information, such as class labels, segmentation\nmasks, or text captions. However, in many real-world scenarios, conditional\ninformation may be noisy or unreliable due to human annotation errors or weak\nalignment. In this paper, we propose the Coherence-Aware Diffusion (CAD), a\nnovel method that integrates coherence in conditional information into\ndiffusion models, allowing them to learn from noisy annotations without\ndiscarding data. We assume that each data point has an associated coherence\nscore that reflects the quality of the conditional information. We then\ncondition the diffusion model on both the conditional information and the\ncoherence score. 
In this way, the model learns to ignore or discount the\nconditioning when the coherence is low. We show that CAD is theoretically sound\nand empirically effective on various conditional generation tasks. Moreover, we\nshow that leveraging coherence generates realistic and diverse samples that\nrespect conditional information better than models trained on cleaned datasets\nwhere samples with low coherence have been discarded.", "keywords": [], "authors_list": ["Nicolas Dufour", "Victor Besnier", "Vicky Kalogeiton", "David Picard"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f616"}, "filepath": "data/2311.15773.png", "tags": [], "_media_type": "image", "_rand": 0.9993037215361609, "arXiv_link": "https://arxiv.org/abs/2311.15773", "other_link": "https://simm-t2i.github.io/SimM.", "title": "Check, Locate, Rectify: A Training-Free Layout Calibration System for Text-to-Image Generation", "abstract": "Diffusion models have recently achieved remarkable progress in generating\nrealistic images. However, challenges remain in accurately understanding and\nsynthesizing the layout requirements in the textual prompts. To align the\ngenerated image with layout instructions, we present a training-free layout\ncalibration system SimM that intervenes in the generative process on the fly\nduring inference time. Specifically, following a \"check-locate-rectify\"\npipeline, the system first analyses the prompt to generate the target layout\nand compares it with the intermediate outputs to automatically detect errors.\nThen, by moving the located activations and making intra- and inter-map\nadjustments, the rectification process can be performed with negligible\ncomputational overhead. To evaluate SimM over a range of layout requirements,\nwe present a benchmark SimMBench that compensates for the lack of superlative\nspatial relations in existing datasets. And both quantitative and qualitative\nresults demonstrate the effectiveness of the proposed SimM in calibrating the\nlayout inconsistencies. Our project page is at https://simm-t2i.github.io/SimM.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Biao Gong", "Siteng Huang", "Yutong Feng", "Shiwei Zhang", "Yuyuan Li", "Yu Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f617"}, "filepath": "data/2312.07322.png", "tags": [], "_media_type": "image", "_rand": 0.999521896139154, "arXiv_link": "https://arxiv.org/abs/2312.07322", "other_link": "", "title": "GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos", "abstract": "We address the task of generating temporally consistent and physically\nplausible images of actions and object state transformations. Given an input\nimage and a text prompt describing the targeted transformation, our generated\nimages preserve the environment and transform objects in the initial image. Our\ncontributions are threefold. First, we leverage a large body of instructional\nvideos and automatically mine a dataset of triplets of consecutive frames\ncorresponding to initial object states, actions, and resulting object\ntransformations. 
Second, equipped with this data, we develop and train a\nconditioned diffusion model dubbed GenHowTo. Third, we evaluate GenHowTo on a\nvariety of objects and actions and show superior performance compared to\nexisting methods. In particular, we introduce a quantitative evaluation where\nGenHowTo achieves 88% and 74% on seen and unseen interaction categories,\nrespectively, outperforming prior work by a large margin.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Tomas Soucek", "Dima Damen", "Michael Wray", "Ivan Laptev", "Josef Sivic"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f618"}, "filepath": "data/2307.07214.png", "tags": [], "_media_type": "image", "_rand": 0.9993248527286323, "arXiv_link": "https://arxiv.org/abs/2307.07214", "other_link": "", "title": "From Coarse to Fine-Grained Open-Set Recognition", "abstract": "Open-set image recognition is a challenging topic in computer vision. Most of\nthe existing works in literature focus on learning more discriminative features\nfrom the input images, however, they are usually insensitive to the high- or\nlow-frequency components in features, resulting in a decreasing performance on\nfine-grained image recognition. To address this problem, we propose a\nComplementary Frequency-varying Awareness Network that could better capture\nboth high-frequency and low-frequency information, called CFAN. The proposed\nCFAN consists of three sequential modules: (i) a feature extraction module is\nintroduced for learning preliminary features from the input images; (ii) a\nfrequency-varying filtering module is designed to separate out both high- and\nlow-frequency components from the preliminary features in the frequency domain\nvia a frequency-adjustable filter; (iii) a complementary temporal aggregation\nmodule is designed for aggregating the high- and low-frequency components via\ntwo Long Short-Term Memory networks into discriminative features. Based on\nCFAN, we further propose an open-set fine-grained image recognition method,\ncalled CFAN-OSFGR, which learns image features via CFAN and classifies them via\na linear classifier. Experimental results on 3 fine-grained datasets and 2\ncoarse-grained datasets demonstrate that CFAN-OSFGR performs significantly\nbetter than 9 state-of-the-art methods in most cases.", "keywords": [], "authors_list": ["Nico Lang", "V\u00e9steinn Sn\u00e6bjarnarson", "Elijah Cole", "Oisin Mac Aodha", "Christian Igel", "Serge Belongie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f619"}, "filepath": "data/2403.14737.png", "tags": [], "_media_type": "image", "_rand": 0.999078296051483, "arXiv_link": "https://arxiv.org/abs/2403.14737", "other_link": "", "title": "FedMef: Towards Memory-efficient Federated Dynamic Pruning", "abstract": "Federated learning (FL) promotes decentralized training while prioritizing\ndata confidentiality. However, its application on resource-constrained devices\nis challenging due to the high demand for computation and memory resources to\ntrain deep learning models. 
Neural network pruning techniques, such as dynamic\npruning, could enhance model efficiency, but directly adopting them in FL still\nposes substantial challenges, including post-pruning performance degradation,\nhigh activation memory usage, etc. To address these challenges, we propose\nFedMef, a novel and memory-efficient federated dynamic pruning framework.\nFedMef comprises two key components. First, we introduce the budget-aware\nextrusion that maintains pruning efficiency while preserving post-pruning\nperformance by salvaging crucial information from parameters marked for pruning\nwithin a given budget. Second, we propose scaled activation pruning to\neffectively reduce activation memory footprints, which is particularly\nbeneficial for deploying FL to memory-limited devices. Extensive experiments\ndemonstrate the effectiveness of our proposed FedMef. In particular, it\nachieves a significant reduction of 28.5% in memory footprint compared to\nstate-of-the-art methods while obtaining superior accuracy.", "keywords": [], "authors_list": ["Hong Huang", "Weiming Zhuang", "Chen Chen", "Lingjuan Lyu"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Distributed, Parallel, and Cluster Computing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f61a"}, "filepath": "data/2312.07536.png", "tags": [], "_media_type": "image", "_rand": 0.9992148096232804, "arXiv_link": "https://arxiv.org/abs/2312.07536", "other_link": "", "title": "FreeControl: Training-Free Spatial Control of Any Text-to-Image Diffusion Model with Any Condition", "abstract": "Recent approaches such as ControlNet offer users fine-grained spatial control\nover text-to-image (T2I) diffusion models. However, auxiliary modules have to\nbe trained for each type of spatial condition, model architecture, and\ncheckpoint, putting them at odds with the diverse intents and preferences a\nhuman designer would like to convey to the AI models during the content\ncreation process. In this work, we present FreeControl, a training-free\napproach for controllable T2I generation that supports multiple conditions,\narchitectures, and checkpoints simultaneously. FreeControl designs structure\nguidance to facilitate the structure alignment with a guidance image, and\nappearance guidance to enable the appearance sharing between images generated\nusing the same seed. Extensive qualitative and quantitative experiments\ndemonstrate the superior performance of FreeControl across a variety of\npre-trained T2I models. 
In particular, FreeControl facilitates convenient\ntraining-free control over many different architectures and checkpoints, allows\nthe challenging input conditions on which most of the existing training-free\nmethods fail, and achieves competitive synthesis quality with training-based\napproaches.", "keywords": [], "authors_list": ["Sicheng Mo", "Fangzhou Mu", "Kuan Heng Lin", "Yanli Liu", "Bochen Guan", "Yin Li", "Bolei Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f61b"}, "filepath": "data/2307.08672.png", "tags": [], "_media_type": "image", "_rand": 0.9997584995946674, "arXiv_link": "https://arxiv.org/abs/2307.08672", "other_link": "", "title": "Revamping Federated Learning Security from a Defender's Perspective: A Unified Defense with Homomorphic Encrypted Data Space", "abstract": "Federated Learning (FL) is a privacy-preserving distributed machine learning\ntechnique that enables individual clients (e.g., user participants, edge\ndevices, or organizations) to train a model on their local data in a secure\nenvironment and then share the trained model with an aggregator to build a\nglobal model collaboratively. In this work, we propose FedDefender, a defense\nmechanism against targeted poisoning attacks in FL by leveraging differential\ntesting. Our proposed method fingerprints the neuron activations of clients'\nmodels on the same input and uses differential testing to identify a\npotentially malicious client containing a backdoor. We evaluate FedDefender\nusing MNIST and FashionMNIST datasets with 20 and 30 clients, and our results\ndemonstrate that FedDefender effectively mitigates such attacks, reducing the\nattack success rate (ASR) to 10\\% without deteriorating the global model\nperformance.", "keywords": [], "authors_list": ["Naveen Kumar Kummari", "Reshmi Mitra", "Krishna Mohan Chalavadi"], "category_name": "Cryptography and Security", "all_categories": ["Cryptography and Security", "Artificial Intelligence", "Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f61c"}, "filepath": "data/2403.12202.png", "tags": [], "_media_type": "image", "_rand": 0.9990350534480892, "arXiv_link": "https://arxiv.org/abs/2403.12202", "other_link": "", "title": "DeCoTR: Enhancing Depth Completion with 2D and 3D Attentions", "abstract": "In this paper, we introduce a novel approach that harnesses both 2D and 3D\nattentions to enable highly accurate depth completion without requiring\niterative spatial propagations. Specifically, we first enhance a baseline\nconvolutional depth completion model by applying attention to 2D features in\nthe bottleneck and skip connections. This effectively improves the performance\nof this simple network and sets it on par with the latest, complex\ntransformer-based models. Leveraging the initial depths and features from this\nnetwork, we uplift the 2D features to form a 3D point cloud and construct a 3D\npoint transformer to process it, allowing the model to explicitly learn and\nexploit 3D geometric features. In addition, we propose normalization techniques\nto process the point cloud, which improves learning and leads to better\naccuracy than directly using point transformers off the shelf. 
Furthermore, we\nincorporate global attention on downsampled point cloud features, which enables\nlong-range context while still being computationally feasible. We evaluate our\nmethod, DeCoTR, on established depth completion benchmarks, including NYU Depth\nV2 and KITTI, showcasing that it sets new state-of-the-art performance. We\nfurther conduct zero-shot evaluations on ScanNet and DDAD benchmarks and\ndemonstrate that DeCoTR has superior generalizability compared to existing\napproaches.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Yunxiao Shi", "Manish Singh", "Hong Cai", "Fatih Porikli"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f61d"}, "filepath": "data/2404.11120.png", "tags": [], "_media_type": "image", "_rand": 0.9994184123338883, "arXiv_link": "https://arxiv.org/abs/2404.11120", "other_link": "https://github.com/SherryXTChen/TiNO-Edit.", "title": "TiNO-Edit: Timestep and Noise Optimization for Robust Diffusion-Based Image Editing", "abstract": "Despite many attempts to leverage pre-trained text-to-image models (T2I) like\nStable Diffusion (SD) for controllable image editing, producing good\npredictable results remains a challenge. Previous approaches have focused on\neither fine-tuning pre-trained T2I models on specific datasets to generate\ncertain kinds of images (e.g., with a specific object or person), or on\noptimizing the weights, text prompts, and/or learning features for each input\nimage in an attempt to coax the image generator to produce the desired result.\nHowever, these approaches all have shortcomings and fail to produce good\nresults in a predictable and controllable manner. To address this problem, we\npresent TiNO-Edit, an SD-based method that focuses on optimizing the noise\npatterns and diffusion timesteps during editing, something previously\nunexplored in the literature. With this simple change, we are able to generate\nresults that both better align with the original images and reflect the desired\nresult. Furthermore, we propose a set of new loss functions that operate in the\nlatent domain of SD, greatly speeding up the optimization when compared to\nprior approaches, which operate in the pixel domain. Our method can be easily\napplied to variations of SD including Textual Inversion and DreamBooth that\nencode new concepts and incorporate them into the edited results. We present a\nhost of image-editing capabilities enabled by our approach. Our code is\npublicly available at https://github.com/SherryXTChen/TiNO-Edit.", "keywords": [], "authors_list": ["Sherry X. Chen", "Yaron Vaxman", "Elad Ben Baruch", "David Asulin", "Aviad Moreshet", "Kuo-Chin Lien", "Misha Sra", "Pradeep Sen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f61e"}, "filepath": "data/2404.00330.png", "tags": [], "_media_type": "image", "_rand": 0.9993609942411951, "arXiv_link": "https://arxiv.org/abs/2404.00330", "other_link": "", "title": "Memory-Scalable and Simplified Functional Map Learning", "abstract": "Deep functional maps have emerged in recent years as a prominent\nlearning-based framework for non-rigid shape matching problems. 
While early\nmethods in this domain only focused on learning in the functional domain, the\nlatest techniques have demonstrated that by promoting consistency between\nfunctional and pointwise maps leads to significant improvements in accuracy.\nUnfortunately, existing approaches rely heavily on the computation of large\ndense matrices arising from soft pointwise maps, which compromises their\nefficiency and scalability. To address this limitation, we introduce a novel\nmemory-scalable and efficient functional map learning pipeline. By leveraging\nthe specific structure of functional maps, we offer the possibility to achieve\nidentical results without ever storing the pointwise map in memory.\nFurthermore, based on the same approach, we present a differentiable map\nrefinement layer adapted from an existing axiomatic refinement algorithm.\nUnlike many functional map learning methods, which use this algorithm at a\npost-processing step, ours can be easily used at train time, enabling to\nenforce consistency between the refined and initial versions of the map. Our\nresulting approach is both simpler, more efficient and more numerically stable,\nby avoiding differentiation through a linear system, while achieving close to\nstate-of-the-art results in challenging scenarios.", "keywords": ["Efficient and scalable vision", "Efficient and scalable vision"], "authors_list": ["Robin Magnet", "Maks Ovsjanikov"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f61f"}, "filepath": "data/2405.06887.png", "tags": [], "_media_type": "image", "_rand": 0.9992869741669439, "arXiv_link": "https://arxiv.org/abs/2405.06887", "other_link": "https://github.com/PKU-ICST-MIPL/FineParser_CVPR2024}.", "title": "FineParser: A Fine-grained Spatio-temporal Action Parser for Human-centric Action Quality Assessment", "abstract": "Existing action quality assessment (AQA) methods mainly learn deep\nrepresentations at the video level for scoring diverse actions. Due to the lack\nof a fine-grained understanding of actions in videos, they harshly suffer from\nlow credibility and interpretability, thus insufficient for stringent\napplications, such as Olympic diving events. We argue that a fine-grained\nunderstanding of actions requires the model to perceive and parse actions in\nboth time and space, which is also the key to the credibility and\ninterpretability of the AQA technique. Based on this insight, we propose a new\nfine-grained spatial-temporal action parser named \\textbf{FineParser}. It\nlearns human-centric foreground action representations by focusing on target\naction regions within each frame and exploiting their fine-grained alignments\nin time and space to minimize the impact of invalid backgrounds during the\nassessment. In addition, we construct fine-grained annotations of human-centric\nforeground action masks for the FineDiving dataset, called\n\\textbf{FineDiving-HM}. With refined annotations on diverse target action\nprocedures, FineDiving-HM can promote the development of real-world AQA\nsystems. Through extensive experiments, we demonstrate the effectiveness of\nFineParser, which outperforms state-of-the-art methods while supporting more\ntasks of fine-grained action understanding. 
Data and code are available at\n\\url{https://github.com/PKU-ICST-MIPL/FineParser_CVPR2024}.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Jinglin Xu", "Sibo Yin", "Guohao Zhao", "Zishuo Wang", "Yuxin Peng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f620"}, "filepath": "data/2403.09486.png", "tags": [], "_media_type": "image", "_rand": 0.9994495341480678, "arXiv_link": "https://arxiv.org/abs/2403.09486", "other_link": "https://github.com/chenkang455/S-SDM}.", "title": "Spike-guided Motion Deblurring with Unknown Modal Spatiotemporal Alignment", "abstract": "Reconstructing a sequence of sharp images from the blurry input is crucial\nfor enhancing our insights into the captured scene and poses a significant\nchallenge due to the limited temporal features embedded in the image. Spike\ncameras, sampling at rates up to 40,000 Hz, have proven effective in capturing\nmotion features and beneficial for solving this ill-posed problem. Nonetheless,\nexisting methods fall into the supervised learning paradigm, which suffers from\nnotable performance degradation when applied to real-world scenarios that\ndiverge from the synthetic training data domain. Moreover, the quality of\nreconstructed images is capped by the generated images based on motion analysis\ninterpolation, which inherently differs from the actual scene, affecting the\ngeneralization ability of these methods in real high-speed scenarios. To\naddress these challenges, we propose the first self-supervised framework for\nthe task of spike-guided motion deblurring. Our approach begins with the\nformulation of a spike-guided deblurring model that explores the theoretical\nrelationships among spike streams, blurry images, and their corresponding sharp\nsequences. We subsequently develop a self-supervised cascaded framework to\nalleviate the issues of spike noise and spatial-resolution mismatching\nencountered in the deblurring model. With knowledge distillation and\nre-blurring loss, we further design a lightweight deblur network to generate\nhigh-quality sequences with brightness and texture consistency with the\noriginal input. Quantitative and qualitative experiments conducted on our\nreal-world and synthetic datasets with spikes validate the superior\ngeneralization of the proposed framework. Our code, data and trained models\nwill be available at \\url{https://github.com/chenkang455/S-SDM}.", "keywords": [], "authors_list": ["Jiyuan Zhang", "Shiyan Chen", "Yajing Zheng", "Zhaofei Yu", "Tiejun Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f621"}, "filepath": "data/2312.03585.png", "tags": [], "_media_type": "image", "_rand": 0.9997481867087783, "arXiv_link": "https://arxiv.org/abs/2312.03585", "other_link": "https://github.com/HAL-42/FMA-WSSS.git.", "title": "From SAM to CAMs: Exploring Segment Anything Model for Weakly Supervised Semantic Segmentation", "abstract": "This work aims to leverage pre-trained foundation models, such as contrastive\nlanguage-image pre-training (CLIP) and segment anything model (SAM), to address\nweakly supervised semantic segmentation (WSSS) using image-level labels. 
To\nthis end, we propose a coarse-to-fine framework based on CLIP and SAM for\ngenerating high-quality segmentation seeds. Specifically, we construct an image\nclassification task and a seed segmentation task, which are jointly performed\nby CLIP with frozen weights and two sets of learnable task-specific prompts. A\nSAM-based seeding (SAMS) module is designed and applied to each task to produce\neither coarse or fine seed maps. Moreover, we design a multi-label contrastive\nloss supervised by image-level labels and a CAM activation loss supervised by\nthe generated coarse seed map. These losses are used to learn the prompts,\nwhich are the only parts need to be learned in our framework. Once the prompts\nare learned, we input each image along with the learned segmentation-specific\nprompts into CLIP and the SAMS module to produce high-quality segmentation\nseeds. These seeds serve as pseudo labels to train an off-the-shelf\nsegmentation network like other two-stage WSSS methods. Experiments show that\nour method achieves the state-of-the-art performance on PASCAL VOC 2012 and\ncompetitive results on MS COCO 2014. Code is available at\nhttps://github.com/HAL-42/FMA-WSSS.git.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Hyeokjun Kweon", "Kuk-Jin Yoon"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f622"}, "filepath": "data/2312.01616.png", "tags": [], "_media_type": "image", "_rand": 0.9997797017346381, "arXiv_link": "https://arxiv.org/abs/2312.01616", "other_link": "https://github.com/bytedance/SchurVINS.", "title": "SchurVINS: Schur Complement-Based Lightweight Visual Inertial Navigation System", "abstract": "Accuracy and computational efficiency are the most important metrics to\nVisual Inertial Navigation System (VINS). The existing VINS algorithms with\neither high accuracy or low computational complexity, are difficult to provide\nthe high precision localization in resource-constrained devices. To this end,\nwe propose a novel filter-based VINS framework named SchurVINS, which could\nguarantee both high accuracy by building a complete residual model and low\ncomputational complexity with Schur complement. Technically, we first formulate\nthe full residual model where Gradient, Hessian and observation covariance are\nexplicitly modeled. Then Schur complement is employed to decompose the full\nmodel into ego-motion residual model and landmark residual model. Finally,\nExtended Kalman Filter (EKF) update is implemented in these two models with\nhigh efficiency. Experiments on EuRoC and TUM-VI datasets show that our method\nnotably outperforms state-of-the-art (SOTA) methods in both accuracy and\ncomputational complexity. 
The experimental code of SchurVINS is available at\nhttps://github.com/bytedance/SchurVINS.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yunfei Fan", "Tianyu Zhao", "Guidong Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f623"}, "filepath": "data/2312.09570.png", "tags": [], "_media_type": "image", "_rand": 0.9997138486068894, "arXiv_link": "https://arxiv.org/abs/2312.09570", "other_link": "http://youtu.be/cH_rbKbyTpE", "title": "CAGE: Controllable Articulation GEneration", "abstract": "We address the challenge of generating 3D articulated objects in a\ncontrollable fashion. Currently, modeling articulated 3D objects is either\nachieved through laborious manual authoring, or using methods from prior work\nthat are hard to scale and control directly. We leverage the interplay between\npart shape, connectivity, and motion using a denoising diffusion-based method\nwith attention modules designed to extract correlations between part\nattributes. Our method takes an object category label and a part connectivity\ngraph as input and generates an object's geometry and motion parameters. The\ngenerated objects conform to user-specified constraints on the object category,\npart shape, and part articulation. Our experiments show that our method\noutperforms the state-of-the-art in articulated object generation, producing\nmore realistic objects while conforming better to user constraints.\n Video Summary at: http://youtu.be/cH_rbKbyTpE", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Jiayi Liu", "Hou In Ivan Tam", "Ali Mahdavi Amiri", "Manolis Savva"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f624"}, "filepath": "data/2404.17620.png", "tags": [], "_media_type": "image", "_rand": 0.9995916301818571, "arXiv_link": "https://arxiv.org/abs/2404.17620", "other_link": "", "title": "Neural Modes: Self-supervised Learning of Nonlinear Modal Subspaces", "abstract": "We propose a self-supervised approach for learning physics-based subspaces\nfor real-time simulation. Existing learning-based methods construct subspaces\nby approximating pre-defined simulation data in a purely geometric way.\nHowever, this approach tends to produce high-energy configurations, leads to\nentangled latent space dimensions, and generalizes poorly beyond the training\nset. To overcome these limitations, we propose a self-supervised approach that\ndirectly minimizes the system's mechanical energy during training. 
We show that\nour method leads to learned subspaces that reflect physical equilibrium\nconstraints, resolve overfitting issues of previous methods, and offer\ninterpretable latent space parameters.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Jiahong Wang", "Yinwei DU", "Stelian Coros", "Bernhard Thomaszewski"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f625"}, "filepath": "data/2404.12235.png", "tags": [], "_media_type": "image", "_rand": 0.9991739529538987, "arXiv_link": "https://arxiv.org/abs/2404.12235", "other_link": "", "title": "Beyond Average: Individualized Visual Scanpath Prediction", "abstract": "Understanding how attention varies across individuals has significant\nscientific and societal impacts. However, existing visual scanpath models treat\nattention uniformly, neglecting individual differences. To bridge this gap,\nthis paper focuses on individualized scanpath prediction (ISP), a new attention\nmodeling task that aims to accurately predict how different individuals shift\ntheir attention in diverse visual tasks. It proposes an ISP method featuring\nthree novel technical components: (1) an observer encoder to characterize and\nintegrate an observer's unique attention traits, (2) an observer-centric\nfeature integration approach that holistically combines visual features, task\nguidance, and observer-specific characteristics, and (3) an adaptive fixation\nprioritization mechanism that refines scanpath predictions by dynamically\nprioritizing semantic feature maps based on individual observers' attention\ntraits. These novel components allow scanpath models to effectively address the\nattention variations across different observers. Our method is generally\napplicable to different datasets, model architectures, and visual tasks,\noffering a comprehensive tool for transforming general scanpath models into\nindividualized ones. Comprehensive evaluations using value-based and\nranking-based metrics verify the method's effectiveness and generalizability.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Xianyu Chen", "Ming Jiang", "Qi Zhao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f626"}, "filepath": "data/2403.08919.png", "tags": [], "_media_type": "image", "_rand": 0.9996141673462647, "arXiv_link": "https://arxiv.org/abs/2403.08919", "other_link": "", "title": "CLIP-BEVFormer: Enhancing Multi-View Image-Based BEV Detector with Ground Truth Flow", "abstract": "Autonomous driving stands as a pivotal domain in computer vision, shaping the\nfuture of transportation. Within this paradigm, the backbone of the system\nplays a crucial role in interpreting the complex environment. However, a\nnotable challenge has been the loss of clear supervision when it comes to\nBird's Eye View elements. To address this limitation, we introduce\nCLIP-BEVFormer, a novel approach that leverages the power of contrastive\nlearning techniques to enhance the multi-view image-derived BEV backbones with\nground truth information flow. We conduct extensive experiments on the\nchallenging nuScenes dataset and showcase significant and consistent\nimprovements over the SOTA. 
Specifically, CLIP-BEVFormer achieves an impressive\n8.5\\% and 9.2\\% enhancement in terms of NDS and mAP, respectively, over the\nprevious best BEV model on the 3D object detection task.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Chenbin Pan", "Burhaneddin Yaman", "Senem Velipasalar", "Liu Ren"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f627"}, "filepath": "data/2403.19128.png", "tags": [], "_media_type": "image", "_rand": 0.999072975212249, "arXiv_link": "https://arxiv.org/abs/2403.19128", "other_link": "https://github.com/AlibabaResearch/AdvancedLiterateMachinery.", "title": "OmniParser: A Unified Framework for Text Spotting, Key Information Extraction and Table Recognition", "abstract": "Recently, visually-situated text parsing (VsTP) has experienced notable\nadvancements, driven by the increasing demand for automated document\nunderstanding and the emergence of Generative Large Language Models (LLMs)\ncapable of processing document-based questions. Various methods have been\nproposed to address the challenging problem of VsTP. However, due to the\ndiversified targets and heterogeneous schemas, previous works usually design\ntask-specific architectures and objectives for individual tasks, which\ninadvertently leads to modal isolation and complex workflow. In this paper, we\npropose a unified paradigm for parsing visually-situated text across diverse\nscenarios. Specifically, we devise a universal model, called OmniParser, which\ncan simultaneously handle three typical visually-situated text parsing tasks:\ntext spotting, key information extraction, and table recognition. In\nOmniParser, all tasks share the unified encoder-decoder architecture, the\nunified objective: point-conditioned text generation, and the unified input &\noutput representation: prompt & structured sequences. Extensive experiments\ndemonstrate that the proposed OmniParser achieves state-of-the-art (SOTA) or\nhighly competitive performances on 7 datasets for the three visually-situated\ntext parsing tasks, despite its unified, concise design. The code is available\nat https://github.com/AlibabaResearch/AdvancedLiterateMachinery.", "keywords": ["Document analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Jianqiang Wan", "Sibo Song", "Wenwen Yu", "Yuliang Liu", "Wenqing Cheng", "Fei Huang", "Xiang Bai", "Cong Yao", "Zhibo Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f628"}, "filepath": "data/2310.17994.png", "tags": [], "_media_type": "image", "_rand": 0.9993015900882855, "arXiv_link": "https://arxiv.org/abs/2310.17994", "other_link": "http://kylesargent.github.io/zeronvs/", "title": "ZeroNVS: Zero-Shot 360-Degree View Synthesis from a Single Image", "abstract": "We introduce a 3D-aware diffusion model, ZeroNVS, for single-image novel view\nsynthesis for in-the-wild scenes. While existing methods are designed for\nsingle objects with masked backgrounds, we propose new techniques to address\nchallenges introduced by in-the-wild multi-object scenes with complex\nbackgrounds. 
Specifically, we train a generative prior on a mixture of data\nsources that capture object-centric, indoor, and outdoor scenes. To address\nissues from data mixture such as depth-scale ambiguity, we propose a novel\ncamera conditioning parameterization and normalization scheme. Further, we\nobserve that Score Distillation Sampling (SDS) tends to truncate the\ndistribution of complex backgrounds during distillation of 360-degree scenes,\nand propose \"SDS anchoring\" to improve the diversity of synthesized novel\nviews. Our model sets a new state-of-the-art result in LPIPS on the DTU dataset\nin the zero-shot setting, even outperforming methods specifically trained on\nDTU. We further adapt the challenging Mip-NeRF 360 dataset as a new benchmark\nfor single-image novel view synthesis, and demonstrate strong performance in\nthis setting. Our code and data are at http://kylesargent.github.io/zeronvs/", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Kyle Sargent", "Zizhang Li", "Tanmay Shah", "Charles Herrmann", "Hong-Xing Yu", "Yunzhi Zhang", "Eric Ryan Chan", "Dmitry Lagun", "Li Fei-Fei", "Deqing Sun", "Jiajun Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f629"}, "filepath": "data/2305.08275.png", "tags": [], "_media_type": "image", "_rand": 0.9998833770107428, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2305.08275", "other_link": "https://github.com/salesforce/ULIP.", "title": "ULIP-2: Towards Scalable Multimodal Pre-training for 3D Understanding", "abstract": "Recent advancements in multimodal pre-training have shown promising efficacy\nin 3D representation learning by aligning multimodal features across 3D shapes,\ntheir 2D counterparts, and language descriptions. However, the methods used by\nexisting frameworks to curate such multimodal data, in particular language\ndescriptions for 3D shapes, are not scalable, and the collected language\ndescriptions are not diverse. To address this, we introduce ULIP-2, a simple\nyet effective tri-modal pre-training framework that leverages large multimodal\nmodels to automatically generate holistic language descriptions for 3D shapes.\nIt only needs 3D data as input, eliminating the need for any manual 3D\nannotations, and is therefore scalable to large datasets. ULIP-2 is also\nequipped with scaled-up backbones for better multimodal representation\nlearning. We conduct experiments on two large-scale 3D datasets, Objaverse and\nShapeNet, and augment them with tri-modal datasets of 3D point clouds, images,\nand language for training ULIP-2. Experiments show that ULIP-2 demonstrates\nsubstantial benefits in three downstream tasks: zero-shot 3D classification,\nstandard 3D classification with fine-tuning, and 3D captioning (3D-to-language\ngeneration). It achieves a new SOTA of 50.6% (top-1) on Objaverse-LVIS and\n84.7% (top-1) on ModelNet40 in zero-shot classification. In the ScanObjectNN\nbenchmark for standard fine-tuning, ULIP-2 reaches an overall accuracy of 91.5%\nwith a compact model of only 1.4 million parameters. ULIP-2 sheds light on a\nnew paradigm for scalable multimodal 3D representation learning without human\nannotations and shows significant improvements over existing baselines. 
The\ncode and datasets are released at https://github.com/salesforce/ULIP.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Le Xue", "Ning Yu", "Shu Zhang", "Artemis Panagopoulou", "Junnan Li", "Roberto Mart\u00edn-Mart\u00edn", "Jiajun Wu", "Caiming Xiong", "Ran Xu", "Juan Carlos Niebles", "Silvio Savarese"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f62a"}, "filepath": "data/2405.02266.png", "tags": [], "_media_type": "image", "_rand": 0.9994760800249819, "arXiv_link": "https://arxiv.org/abs/2405.02266", "other_link": "", "title": "On the test-time zero-shot generalization of vision-language models: Do we really need prompt learning?", "abstract": "The development of large vision-language models, notably CLIP, has catalyzed\nresearch into effective adaptation techniques, with a particular focus on soft\nprompt tuning. Conjointly, test-time augmentation, which utilizes multiple\naugmented views of a single image to enhance zero-shot generalization, is\nemerging as a significant area of interest. This has predominantly directed\nresearch efforts toward test-time prompt tuning. In contrast, we introduce a\nrobust MeanShift for Test-time Augmentation (MTA), which surpasses prompt-based\nmethods without requiring this intensive training procedure. This positions MTA\nas an ideal solution for both standalone and API-based applications.\nAdditionally, our method does not rely on ad hoc rules (e.g., confidence\nthreshold) used in some previous test-time augmentation techniques to filter\nthe augmented views. Instead, MTA incorporates a quality assessment variable\nfor each view directly into its optimization process, termed as the inlierness\nscore. This score is jointly optimized with a density mode seeking process,\nleading to an efficient training- and hyperparameter-free approach. We\nextensively benchmark our method on 15 datasets and demonstrate MTA's\nsuperiority and computational efficiency. Deployed easily as plug-and-play\nmodule on top of zero-shot models and state-of-the-art few-shot methods, MTA\nshows systematic and consistent improvements.", "keywords": [], "authors_list": ["Maxime Zanella", "Ismail Ben Ayed"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f62b"}, "filepath": "data/2401.01578.png", "tags": [], "_media_type": "image", "_rand": 0.9998650402874574, "arXiv_link": "https://arxiv.org/abs/2401.01578", "other_link": "https://github.com/HengLan/CGSTVG.", "title": "Context-Guided Spatio-Temporal Video Grounding", "abstract": "Spatio-temporal video grounding (or STVG) task aims at locating a\nspatio-temporal tube for a specific instance given a text query. Despite\nadvancements, current methods easily suffer the distractors or heavy object\nappearance variations in videos due to insufficient object information from the\ntext, leading to degradation. Addressing this, we propose a novel framework,\ncontext-guided STVG (CG-STVG), which mines discriminative instance context for\nobject in videos and applies it as a supplementary guidance for target\nlocalization. 
The key of CG-STVG lies in two specially designed modules,\nincluding instance context generation (ICG), which focuses on discovering\nvisual context information (in both appearance and motion) of the instance, and\ninstance context refinement (ICR), which aims to improve the instance context\nfrom ICG by eliminating irrelevant or even harmful information from the\ncontext. During grounding, ICG, together with ICR, are deployed at each\ndecoding stage of a Transformer architecture for instance context learning.\nParticularly, instance context learned from one decoding stage is fed to the\nnext stage, and leveraged as a guidance containing rich and discriminative\nobject feature to enhance the target-awareness in decoding feature, which\nconversely benefits generating better new instance context for improving\nlocalization finally. Compared to existing methods, CG-STVG enjoys object\ninformation in text query and guidance from mined instance visual context for\nmore accurate target localization. In our experiments on three benchmarks,\nincluding HCSTVG-v1/-v2 and VidSTG, CG-STVG sets new state-of-the-arts in\nm_tIoU and m_vIoU on all of them, showing its efficacy. The code will be\nreleased at https://github.com/HengLan/CGSTVG.", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Xin Gu", "Heng Fan", "Yan Huang", "Tiejian Luo", "Libo Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f62c"}, "filepath": "data/2311.16037.png", "tags": [], "_media_type": "image", "_rand": 0.9997192725435329, "arXiv_link": "https://arxiv.org/abs/2311.16037", "other_link": "", "title": "GaussianEditor: Editing 3D Gaussians Delicately with Text Instructions", "abstract": "Recently, impressive results have been achieved in 3D scene editing with text\ninstructions based on a 2D diffusion model. However, current diffusion models\nprimarily generate images by predicting noise in the latent space, and the\nediting is usually applied to the whole image, which makes it challenging to\nperform delicate, especially localized, editing for 3D scenes. Inspired by\nrecent 3D Gaussian splatting, we propose a systematic framework, named\nGaussianEditor, to edit 3D scenes delicately via 3D Gaussians with text\ninstructions. Benefiting from the explicit property of 3D Gaussians, we design\na series of techniques to achieve delicate editing. Specifically, we first\nextract the region of interest (RoI) corresponding to the text instruction,\naligning it to 3D Gaussians. The Gaussian RoI is further used to control the\nediting process. 
Our framework can achieve more delicate and precise editing of\n3D scenes than previous methods while enjoying much faster training speed, i.e.\nwithin 20 minutes on a single V100 GPU, more than twice as fast as\nInstruct-NeRF2NeRF (45 minutes -- 2 hours).", "keywords": ["Scene analysis and understanding", "Image and video generation and manipulation"], "authors_list": ["Junjie Wang", "Jiemin Fang", "Xiaopeng Zhang", "Lingxi Xie", "Qi Tian"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f62d"}, "filepath": "data/2312.02155.png", "tags": [], "_media_type": "image", "_rand": 0.9993089757866159, "arXiv_link": "https://arxiv.org/abs/2312.02155", "other_link": "", "title": "GPS-Gaussian: Generalizable Pixel-wise 3D Gaussian Splatting for Real-time Human Novel View Synthesis", "abstract": "We present a new approach, termed GPS-Gaussian, for synthesizing novel views\nof a character in a real-time manner. The proposed method enables 2K-resolution\nrendering under a sparse-view camera setting. Unlike the original Gaussian\nSplatting or neural implicit rendering methods that necessitate per-subject\noptimizations, we introduce Gaussian parameter maps defined on the source views\nand regress directly Gaussian Splatting properties for instant novel view\nsynthesis without any fine-tuning or optimization. To this end, we train our\nGaussian parameter regression module on a large amount of human scan data,\njointly with a depth estimation module to lift 2D parameter maps to 3D space.\nThe proposed framework is fully differentiable and experiments on several\ndatasets demonstrate that our method outperforms state-of-the-art methods while\nachieving an exceeding rendering speed.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Shunyuan Zheng", "Boyao ZHOU", "Ruizhi Shao", "Boning Liu", "Shengping Zhang", "Liqiang Nie", "Yebin Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f62e"}, "filepath": "data/2404.18873v1.png", "tags": [], "_media_type": "image", "_rand": 0.999697958778733, "arXiv_link": "https://arxiv.org/abs/2404.18873v1", "other_link": "https://github.com/gastruc/osv5m.", "title": "OpenStreetView-5M: The Many Roads to Global Visual Geolocation", "abstract": "Determining the location of an image anywhere on Earth is a complex visual\ntask, which makes it particularly relevant for evaluating computer vision\nalgorithms. Yet, the absence of standard, large-scale, open-access datasets\nwith reliably localizable images has limited its potential. To address this\nissue, we introduce OpenStreetView-5M, a large-scale, open-access dataset\ncomprising over 5.1 million geo-referenced street view images, covering 225\ncountries and territories. In contrast to existing benchmarks, we enforce a\nstrict train/test separation, allowing us to evaluate the relevance of learned\ngeographical features beyond mere memorization. To demonstrate the utility of\nour dataset, we conduct an extensive benchmark of various state-of-the-art\nimage encoders, spatial representations, and training strategies. 
All\nassociated codes and models can be found at https://github.com/gastruc/osv5m.", "keywords": ["Remote sensing and photogrammetry"], "authors_list": ["Guillaume Astruc", "Nicolas Dufour", "Ioannis Siglidis", "Constantin Aronssohn", "Nacim Bouia", "Stephanie Fu", "Romain Loiseau", "Van Nguyen Nguyen", "Charles Raude", "Elliot Vincent", "Lintao XU", "Hongyu Zhou", "Loic Landrieu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f62f"}, "filepath": "data/2401.10786.png", "tags": [], "_media_type": "image", "_rand": 0.9999859399327223, "arXiv_link": "https://arxiv.org/abs/2401.10786", "other_link": "", "title": "Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion", "abstract": "Directly generating scenes from satellite imagery offers exciting\npossibilities for integration into applications like games and map services.\nHowever, challenges arise from significant view changes and scene scale.\nPrevious efforts mainly focused on image or video generation, lacking\nexploration into the adaptability of scene generation for arbitrary views.\nExisting 3D generation works either operate at the object level or are\ndifficult to utilize the geometry obtained from satellite imagery. To overcome\nthese limitations, we propose a novel architecture for direct 3D scene\ngeneration by introducing diffusion models into 3D sparse representations and\ncombining them with neural rendering techniques. Specifically, our approach\ngenerates texture colors at the point level for a given geometry using a 3D\ndiffusion model first, which is then transformed into a scene representation in\na feed-forward manner. The representation can be utilized to render arbitrary\nviews which would excel in both single-frame quality and inter-frame\nconsistency. Experiments in two city-scale datasets show that our model\ndemonstrates proficiency in generating photo-realistic street-view image\nsequences and cross-view urban scenes from satellite imagery.", "keywords": ["Remote sensing and photogrammetry", "Scene analysis and understanding"], "authors_list": ["Zuoyue Li", "Zhenqiang Li", "Zhaopeng Cui", "Marc Pollefeys", "Martin R. Oswald"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f630"}, "filepath": "data/2311.16703.png", "tags": [], "_media_type": "image", "_rand": 0.9997506479329529, "arXiv_link": "https://arxiv.org/abs/2311.16703", "other_link": "https://enigma-li.github.io/CADTalk/.", "title": "CADTalk: An Algorithm and Benchmark for Semantic Commenting of CAD Programs", "abstract": "CAD programs are a popular way to compactly encode shapes as a sequence of\noperations that are easy to parametrically modify. However, without sufficient\nsemantic comments and structure, such programs can be challenging to\nunderstand, let alone modify. We introduce the problem of semantic commenting\nCAD programs, wherein the goal is to segment the input program into code blocks\ncorresponding to semantically meaningful shape parts and assign a semantic\nlabel to each block. We solve the problem by combining program parsing with\nvisual-semantic analysis afforded by recent advances in foundational language\nand vision models. 
Specifically, by executing the input programs, we create\nshapes, which we use to generate conditional photorealistic images to make use\nof semantic annotators for such images. We then distill the information across\nthe images and link back to the original programs to semantically comment on\nthem. Additionally, we collected and annotated a benchmark dataset, CADTalk,\nconsisting of 5,288 machine-made programs and 45 human-made programs with\nground truth semantic comments. We extensively evaluated our approach, compared\nit to a GPT-based baseline, and an open-set shape segmentation baseline, and\nreported an 83.24% accuracy on the new CADTalk dataset. Code and data:\nhttps://enigma-li.github.io/CADTalk/.", "keywords": [], "authors_list": ["Haocheng Yuan", "Jing Xu", "Hao Pan", "Adrien Bousseau", "Niloy J. Mitra", "Changjian Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f631"}, "filepath": "data/2403.20320.png", "tags": [], "_media_type": "image", "_rand": 0.999711276798345, "arXiv_link": "https://arxiv.org/abs/2403.20320", "other_link": "", "title": "MTLoRA: Low-Rank Adaptation Approach for Efficient Multi-Task Learning", "abstract": "Adapting models pre-trained on large-scale datasets to a variety of\ndownstream tasks is a common strategy in deep learning. Consequently,\nparameter-efficient fine-tuning methods have emerged as a promising way to\nadapt pre-trained models to different tasks while training only a minimal\nnumber of parameters. While most of these methods are designed for single-task\nadaptation, parameter-efficient training in Multi-Task Learning (MTL)\narchitectures is still unexplored. In this paper, we introduce MTLoRA, a novel\nframework for parameter-efficient training of MTL models. MTLoRA employs\nTask-Agnostic and Task-Specific Low-Rank Adaptation modules, which effectively\ndisentangle the parameter space in MTL fine-tuning, thereby enabling the model\nto adeptly handle both task specialization and interaction within MTL contexts.\nWe applied MTLoRA to hierarchical-transformer-based MTL architectures, adapting\nthem to multiple downstream dense prediction tasks. Our extensive experiments\non the PASCAL dataset show that MTLoRA achieves higher accuracy on downstream\ntasks compared to fully fine-tuning the MTL model while reducing the number of\ntrainable parameters by 3.6x. Furthermore, MTLoRA establishes a Pareto-optimal\ntrade-off between the number of trainable parameters and the accuracy of the\ndownstream tasks, outperforming current state-of-the-art parameter-efficient\ntraining methods in both accuracy and efficiency. 
Our code is publicly\navailable.", "keywords": [], "authors_list": ["Ahmed Agiza", "Marina Neseem", "Sherief Reda"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f632"}, "filepath": "data/2404.17340.png", "tags": [], "_media_type": "image", "_rand": 0.9999908535728326, "arXiv_link": "https://arxiv.org/abs/2404.17340", "other_link": "", "title": "View-Category Interactive Sharing Transformer for Incomplete Multi-View Multi-Label Learning", "abstract": "Multi-view learning has become a popular research topic in recent years, but\nresearch on the cross-application of classic multi-label classification and\nmulti-view learning is still in its early stages. In this paper, we focus on\nthe complex yet highly realistic task of incomplete multi-view weak multi-label\nlearning and propose a masked two-channel decoupling framework based on deep\nneural networks to solve this problem. The core innovation of our method lies\nin decoupling the single-channel view-level representation, which is common in\ndeep multi-view learning methods, into a shared representation and a\nview-proprietary representation. We also design a cross-channel contrastive\nloss to enhance the semantic property of the two channels. Additionally, we\nexploit supervised information to design a label-guided graph regularization\nloss, helping the extracted embedding features preserve the geometric structure\namong samples. Inspired by the success of masking mechanisms in image and text\nanalysis, we develop a random fragment masking strategy for vector features to\nimprove the learning ability of encoders. Finally, it is important to emphasize\nthat our model is fully adaptable to arbitrary view and label absences while\nalso performing well on the ideal full data. We have conducted sufficient and\nconvincing experiments to confirm the effectiveness and advancement of our\nmodel.", "keywords": [], "authors_list": ["Shilong Ou", "Zhe Xue", "Yawen Li", "Meiyu Liang", "Yuanqiang Cai", "junjiang wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f633"}, "filepath": "data/2402.04754.png", "tags": [], "_media_type": "image", "_rand": 0.9996630622978513, "arXiv_link": "https://web3.arxiv.org/abs/2402.04754", "other_link": "", "title": "Visual Layout Composer: Image-Vector Dual Diffusion Model for Design Layout Generation", "abstract": "Controllable layout generation refers to the process of creating a plausible\nvisual arrangement of elements within a graphic design (e.g., document and web\ndesigns) with constraints representing design intentions. Although recent\ndiffusion-based models have achieved state-of-the-art FID scores, they tend to\nexhibit more pronounced misalignment compared to earlier transformer-based\nmodels. In this work, we propose the $\\textbf{LA}$yout $\\textbf{C}$onstraint\ndiffusion mod$\\textbf{E}$l (LACE), a unified model to handle a broad range of\nlayout generation tasks, such as arranging elements with specified attributes\nand refining or completing a coarse layout design. The model is based on\ncontinuous diffusion models. 
Compared with existing methods that use discrete\ndiffusion models, continuous state-space design can enable the incorporation of\ndifferentiable aesthetic constraint functions in training. For conditional\ngeneration, we introduce conditions via masked input. Extensive experiment\nresults show that LACE produces high-quality layouts and outperforms existing\nstate-of-the-art baselines.", "keywords": ["Document analysis and understanding"], "authors_list": ["Mohammad Amin Shabani", "Zhaowen Wang", "Difan Liu", "Nanxuan Zhao", "Jimei Yang", "Yasutaka Furukawa"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f634"}, "filepath": "data/2405.02781.png", "tags": [], "_media_type": "image", "_rand": 0.9997029843214342, "arXiv_link": "https://arxiv.org/abs/2405.02781", "other_link": "", "title": "Instantaneous Perception of Moving Objects in 3D", "abstract": "The perception of 3D motion of surrounding traffic participants is crucial\nfor driving safety. While existing works primarily focus on general large\nmotions, we contend that the instantaneous detection and quantification of\nsubtle motions is equally important as they indicate the nuances in driving\nbehavior that may be safety critical, such as behaviors near a stop sign of\nparking positions. We delve into this under-explored task, examining its unique\nchallenges and developing our solution, accompanied by a carefully designed\nbenchmark. Specifically, due to the lack of correspondences between consecutive\nframes of sparse Lidar point clouds, static objects might appear to be moving -\nthe so-called swimming effect. This intertwines with the true object motion,\nthereby posing ambiguity in accurate estimation, especially for subtle motions.\nTo address this, we propose to leverage local occupancy completion of object\npoint clouds to densify the shape cue, and mitigate the impact of swimming\nartifacts. The occupancy completion is learned in an end-to-end fashion\ntogether with the detection of moving objects and the estimation of their\nmotion, instantaneously as soon as objects start to move. Extensive experiments\ndemonstrate superior performance compared to standard 3D motion estimation\napproaches, particularly highlighting our method's specialized treatment of\nsubtle motions.", "keywords": [], "authors_list": ["Di Liu", "Bingbing Zhuang", "Dimitris N. Metaxas", "Manmohan Chandraker"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f635"}, "filepath": "data/2403.19976.png", "tags": [], "_media_type": "image", "_rand": 0.9990757642148117, "arXiv_link": "https://arxiv.org/abs/2403.19976", "other_link": "https://eventbasedvision.github.io/eTraM", "title": "eTraM: Event-based Traffic Monitoring Dataset", "abstract": "Event cameras, with their high temporal and dynamic range and minimal memory\nusage, have found applications in various fields. However, their potential in\nstatic traffic monitoring remains largely unexplored. To facilitate this\nexploration, we present eTraM - a first-of-its-kind, fully event-based traffic\nmonitoring dataset. 
eTraM offers 10 hr of data from different traffic scenarios\nin various lighting and weather conditions, providing a comprehensive overview\nof real-world situations. Providing 2M bounding box annotations, it covers\neight distinct classes of traffic participants, ranging from vehicles to\npedestrians and micro-mobility. eTraM's utility has been assessed using\nstate-of-the-art methods for traffic participant detection, including RVT, RED,\nand YOLOv8. We quantitatively evaluate the ability of event-based models to\ngeneralize on nighttime and unseen scenes. Our findings substantiate the\ncompelling potential of leveraging event cameras for traffic monitoring,\nopening new avenues for research and application. eTraM is available at\nhttps://eventbasedvision.github.io/eTraM", "keywords": [], "authors_list": ["Aayush Atul Verma", "Bharatesh Chakravarthi", "Arpitsinh Vaghela", "Hua Wei", "'YZ' Yezhou Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f636"}, "filepath": "data/2311.17216.png", "tags": [], "_media_type": "image", "_rand": 0.9995716935633316, "arXiv_link": "https://arxiv.org/abs/2311.17216", "other_link": "https://interpretdiffusion.github.io}.", "title": "Self-Discovering Interpretable Diffusion Latent Directions for Responsible Text-to-Image Generation", "abstract": "Diffusion-based models have gained significant popularity for text-to-image\ngeneration due to their exceptional image-generation capabilities. A risk with\nthese models is the potential generation of inappropriate content, such as\nbiased or harmful images. However, the underlying reasons for generating such\nundesired content from the perspective of the diffusion model's internal\nrepresentation remain unclear. Previous work interprets vectors in an\ninterpretable latent space of diffusion models as semantic concepts. However,\nexisting approaches cannot discover directions for arbitrary concepts, such as\nthose related to inappropriate concepts. In this work, we propose a novel\nself-supervised approach to find interpretable latent directions for a given\nconcept. With the discovered vectors, we further propose a simple approach to\nmitigate inappropriate generation. Extensive experiments have been conducted to\nverify the effectiveness of our mitigation approach, namely, for fair\ngeneration, safe generation, and responsible text-enhancing generation. Project\npage: \\url{https://interpretdiffusion.github.io}.", "keywords": ["Vision applications for social good and ethics", "Image and video generation and manipulation"], "authors_list": ["Hang Li", "Chengzhi Shen", "Philip H.S. Torr", "Volker Tresp", "Jindong Gu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f637"}, "filepath": "data/2402.16174.png", "tags": [], "_media_type": "image", "_rand": 0.9998403650127246, "arXiv_link": "https://arxiv.org/abs/2402.16174", "other_link": "", "title": "GenNBV: Generalizable Next-Best-View Policy for Active 3D Reconstruction", "abstract": "While recent advances in neural radiance field enable realistic digitization\nfor large-scale scenes, the image-capturing process is still time-consuming and\nlabor-intensive. 
Previous works attempt to automate this process using the\nNext-Best-View (NBV) policy for active 3D reconstruction. However, the existing\nNBV policies heavily rely on hand-crafted criteria, limited action space, or\nper-scene optimized representations. These constraints limit their\ncross-dataset generalizability. To overcome them, we propose GenNBV, an\nend-to-end generalizable NBV policy. Our policy adopts a reinforcement learning\n(RL)-based framework and extends typical limited action space to 5D free space.\nIt empowers our agent drone to scan from any viewpoint, and even interact with\nunseen geometries during training. To boost the cross-dataset generalizability,\nwe also propose a novel multi-source state embedding, including geometric,\nsemantic, and action representations. We establish a benchmark using the Isaac\nGym simulator with the Houses3K and OmniObject3D datasets to evaluate this NBV\npolicy. Experiments demonstrate that our policy achieves a 98.26% and 97.12%\ncoverage ratio on unseen building-scale objects from these datasets,\nrespectively, outperforming prior solutions.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xiao Chen", "Quanyi Li", "Tai Wang", "Tianfan Xue", "Jiangmiao Pang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f638"}, "filepath": "data/2404.08540.png", "tags": [], "_media_type": "image", "_rand": 0.9995385630643905, "arXiv_link": "https://arxiv.org/abs/2404.08540", "other_link": "", "title": "On the Robustness of Language Guidance for Low-Level Vision Tasks: Findings from Depth Estimation", "abstract": "Recent advances in monocular depth estimation have been made by incorporating\nnatural language as additional guidance. Although yielding impressive results,\nthe impact of the language prior, particularly in terms of generalization and\nrobustness, remains unexplored. In this paper, we address this gap by\nquantifying the impact of this prior and introduce methods to benchmark its\neffectiveness across various settings. We generate \"low-level\" sentences that\nconvey object-centric, three-dimensional spatial relationships, incorporate\nthem as additional language priors and evaluate their downstream impact on\ndepth estimation. Our key finding is that current language-guided depth\nestimators perform optimally only with scene-level descriptions and\ncounter-intuitively fare worse with low level descriptions. Despite leveraging\nadditional data, these methods are not robust to directed adversarial attacks\nand decline in performance with an increase in distribution shift. Finally, to\nprovide a foundation for future research, we identify points of failures and\noffer insights to better understand these shortcomings. 
With an increasing\nnumber of methods using language for depth estimation, our findings highlight\nthe opportunities and pitfalls that require careful consideration for effective\ndeployment in real-world settings", "keywords": ["Low-level vision", "Multimodal models and vision-language models"], "authors_list": ["Agneet Chatterjee", "Tejas Gokhale", "Chitta Baral", "'YZ' Yezhou Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f639"}, "filepath": "data/2312.04334.png", "tags": [], "_media_type": "image", "_rand": 0.9995714567429537, "arXiv_link": "https://arxiv.org/abs/2312.04334", "other_link": "", "title": "Towards a Perceptual Evaluation Framework for Lighting Estimation", "abstract": "Progress in lighting estimation is tracked by computing existing image\nquality assessment (IQA) metrics on images from standard datasets. While this\nmay appear to be a reasonable approach, we demonstrate that doing so does not\ncorrelate to human preference when the estimated lighting is used to relight a\nvirtual scene into a real photograph. To study this, we design a controlled\npsychophysical experiment where human observers must choose their preference\namongst rendered scenes lit using a set of lighting estimation algorithms\nselected from the recent literature, and use it to analyse how these algorithms\nperform according to human perception. Then, we demonstrate that none of the\nmost popular IQA metrics from the literature, taken individually, correctly\nrepresent human perception. Finally, we show that by learning a combination of\nexisting IQA metrics, we can more accurately represent human preference. This\nprovides a new perceptual framework to help evaluate future lighting estimation\nalgorithms.", "keywords": ["Computational imaging and physics-based vision"], "authors_list": ["Justine Giroux", "Mohammad Reza Karimi Dastjerdi", "Yannick Hold-Geoffroy", "Javier Vazquez-Corral", "Jean-Fran\u00e7ois Lalonde"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f63a"}, "filepath": "data/2311.18448v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993337528697316, "arXiv_link": "https://arxiv.org/abs/2311.18448v1", "other_link": "https://github.com/zc-alexfan/hold", "title": "HOLD: Category-agnostic 3D Reconstruction of Interacting Hands and Objects from Video", "abstract": "Since humans interact with diverse objects every day, the holistic 3D capture\nof these interactions is important to understand and model human behaviour.\nHowever, most existing methods for hand-object reconstruction from RGB either\nassume pre-scanned object templates or heavily rely on limited 3D hand-object\ndata, restricting their ability to scale and generalize to more unconstrained\ninteraction settings. To this end, we introduce HOLD -- the first\ncategory-agnostic method that reconstructs an articulated hand and object\njointly from a monocular interaction video. We develop a compositional\narticulated implicit model that can reconstruct disentangled 3D hand and object\nfrom 2D images. We also further incorporate hand-object constraints to improve\nhand-object poses and consequently the reconstruction quality. 
Our method does\nnot rely on 3D hand-object annotations while outperforming fully-supervised\nbaselines in both in-the-lab and challenging in-the-wild settings. Moreover, we\nqualitatively show its robustness in reconstructing from in-the-wild videos.\nCode: https://github.com/zc-alexfan/hold", "keywords": ["Biometrics and human analysis", "Scene analysis and understanding"], "authors_list": ["Zicong Fan", "Maria Parelli", "Maria Kadoglou", "Xu Chen", "Muhammed Kocabas", "Michael J. Black", "Otmar Hilliges"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f63b"}, "filepath": "data/2312.01381.png", "tags": [], "_media_type": "image", "_rand": 0.9996197305982353, "arXiv_link": "https://arxiv.org/abs/2312.01381", "other_link": "", "title": "Language-driven All-in-one Adverse Weather Removal", "abstract": "All-in-one (AiO) frameworks restore various adverse weather degradations with\na single set of networks jointly. To handle various weather conditions, an AiO\nframework is expected to adaptively learn weather-specific knowledge for\ndifferent degradations and shared knowledge for common patterns. However,\nexisting methods: 1) rely on extra supervision signals, which are usually\nunknown in real-world applications; 2) employ fixed network structures, which\nrestrict the diversity of weather-specific knowledge. In this paper, we propose\na Language-driven Restoration framework (LDR) to alleviate the aforementioned\nissues. First, we leverage the power of pre-trained vision-language (PVL)\nmodels to enrich the diversity of weather-specific knowledge by reasoning about\nthe occurrence, type, and severity of degradation, generating description-based\ndegradation priors. Then, with the guidance of degradation prior, we sparsely\nselect restoration experts from a candidate list dynamically based on a\nMixture-of-Experts (MoE) structure. This enables us to adaptively learn the\nweather-specific and shared knowledge to handle various weather conditions\n(e.g., unknown or mixed weather). Experiments on extensive restoration\nscenarios show our superior performance (see Fig. 1). The source code will be\nmade available.", "keywords": ["Low-level vision", "Multimodal models and vision-language models"], "authors_list": ["Hao Yang", "Liyuan Pan", "Yan Yang", "Wei Liang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f63c"}, "filepath": "data/2312.06741.png", "tags": [], "_media_type": "image", "_rand": 0.9990072548671596, "arXiv_link": "https://arxiv.org/abs/2312.06741", "other_link": "", "title": "Gaussian Splatting SLAM", "abstract": "We present the first application of 3D Gaussian Splatting in monocular SLAM,\nthe most fundamental but the hardest setup for Visual SLAM. Our method, which\nruns live at 3fps, utilises Gaussians as the only 3D representation, unifying\nthe required representation for accurate, efficient tracking, mapping, and\nhigh-quality rendering. Designed for challenging monocular settings, our\napproach is seamlessly extendable to RGB-D SLAM when an external depth sensor\nis available. Several innovations are required to continuously reconstruct 3D\nscenes with high fidelity from a live camera. 
First, to move beyond the\noriginal 3DGS algorithm, which requires accurate poses from an offline\nStructure from Motion (SfM) system, we formulate camera tracking for 3DGS using\ndirect optimisation against the 3D Gaussians, and show that this enables fast\nand robust tracking with a wide basin of convergence. Second, by utilising the\nexplicit nature of the Gaussians, we introduce geometric verification and\nregularisation to handle the ambiguities occurring in incremental 3D dense\nreconstruction. Finally, we introduce a full SLAM system which not only\nachieves state-of-the-art results in novel view synthesis and trajectory\nestimation but also reconstruction of tiny and even transparent objects.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Hidenobu Matsuki", "Riku Murai", "Paul Kelly", "Andrew J. Davison"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f63d"}, "filepath": "data/2308.06107.png", "tags": [], "_media_type": "image", "_rand": 0.999600234293842, "arXiv_link": "https://arxiv.org/abs/2308.06107", "other_link": "", "title": "Backdoor Defense via Test-Time Detecting and Repairing", "abstract": "Deep neural networks have played a crucial part in many critical domains,\nsuch as autonomous driving, face recognition, and medical diagnosis. However,\ndeep neural networks are facing security threats from backdoor attacks and can\nbe manipulated into attacker-decided behaviors by the backdoor attacker. To\ndefend the backdoor, prior research has focused on using clean data to remove\nbackdoor attacks before model deployment. In this paper, we investigate the\npossibility of defending against backdoor attacks at test time by utilizing\npartially poisoned data to remove the backdoor from the model. To address the\nproblem, a two-stage method Test-Time Backdoor Defense (TTBD) is proposed. In\nthe first stage, we propose a backdoor sample detection method DDP to identify\npoisoned samples from a batch of mixed, partially poisoned samples. Once the\npoisoned samples are detected, we employ Shapley estimation to calculate the\ncontribution of each neuron's significance in the network, locate the poisoned\nneurons, and prune them to remove backdoor in the models. Our experiments\ndemonstrate that TTBD removes the backdoor successfully with only a batch of\npartially poisoned data across different model architectures and datasets\nagainst different types of backdoor attacks.", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Jiyang Guan", "Jian Liang", "Ran He"], "category_name": "Cryptography and Security", "all_categories": ["Cryptography and Security"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f63e"}, "filepath": "data/2312.03806.png", "tags": [], "_media_type": "image", "_rand": 0.999154961986748, "arXiv_link": "https://arxiv.org/abs/2312.03806", "other_link": "https://research.nvidia.com/labs/toronto-ai/xcube/.", "title": "XCube: Large-Scale 3D Generative Modeling using Sparse Voxel Hierarchies", "abstract": "We present $\\mathcal{X}^3$ (pronounced XCube), a novel generative model for\nhigh-resolution sparse 3D voxel grids with arbitrary attributes. 
Our model can\ngenerate millions of voxels with a finest effective resolution of up to\n$1024^3$ in a feed-forward fashion without time-consuming test-time\noptimization. To achieve this, we employ a hierarchical voxel latent diffusion\nmodel which generates progressively higher resolution grids in a coarse-to-fine\nmanner using a custom framework built on the highly efficient VDB data\nstructure. Apart from generating high-resolution objects, we demonstrate the\neffectiveness of XCube on large outdoor scenes at scales of 100m$\\times$100m\nwith a voxel size as small as 10cm. We observe clear qualitative and\nquantitative improvements over past approaches. In addition to unconditional\ngeneration, we show that our model can be used to solve a variety of tasks such\nas user-guided editing, scene completion from a single scan, and text-to-3D.\nMore results and details can be found at\nhttps://research.nvidia.com/labs/toronto-ai/xcube/.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Xuanchi Ren", "Jiahui Huang", "Xiaohui Zeng", "Ken Museth", "Sanja Fidler", "Francis Williams"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f63f"}, "filepath": "data/2404.11987.png", "tags": [], "_media_type": "image", "_rand": 0.9992665844363706, "arXiv_link": "https://arxiv.org/abs/2404.11987", "other_link": "http://www.iri.upc.edu/people/nugrinovic/multiphys/).", "title": "MultiPhys: Multi-Person Physics-aware 3D Motion Estimation", "abstract": "We introduce MultiPhys, a method designed for recovering multi-person motion\nfrom monocular videos. Our focus lies in capturing coherent spatial placement\nbetween pairs of individuals across varying degrees of engagement. MultiPhys,\nbeing physically aware, exhibits robustness to jittering and occlusions, and\neffectively eliminates penetration issues between the two individuals. We\ndevise a pipeline in which the motion estimated by a kinematic-based method is\nfed into a physics simulator in an autoregressive manner. We introduce distinct\ncomponents that enable our model to harness the simulator's properties without\ncompromising the accuracy of the kinematic estimates. 
This results in final\nmotion estimates that are both kinematically coherent and physically compliant.\nExtensive evaluations on three challenging datasets characterized by\nsubstantial inter-person interaction show that our method significantly reduces\nerrors associated with penetration and foot skating, while performing\ncompetitively with the state-of-the-art on motion accuracy and smoothness.\nResults and code can be found on our project page\n(http://www.iri.upc.edu/people/nugrinovic/multiphys/).", "keywords": ["Biometrics and human analysis", "Computational imaging and physics-based vision"], "authors_list": ["Nicol\u00e1s Ugrinovic", "Boxiao Pan", "Georgios Pavlakos", "Despoina Paschalidou", "Bokui Shen", "Jordi Sanchez-Riera", "Francesc Moreno-Noguer", "Leonidas Guibas"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f640"}, "filepath": "data/2311.16432.png", "tags": [], "_media_type": "image", "_rand": 0.9998200030679009, "arXiv_link": "https://arxiv.org/abs/2311.16432", "other_link": "https://yuanze-lin.me/LearnableRegions_page.", "title": "Text-Driven Image Editing via Learnable Regions", "abstract": "Language has emerged as a natural interface for image editing. In this paper,\nwe introduce a method for region-based image editing driven by textual prompts,\nwithout the need for user-provided masks or sketches. Specifically, our\napproach leverages an existing pre-trained text-to-image model and introduces a\nbounding box generator to identify the editing regions that are aligned with\nthe textual prompts. We show that this simple approach enables flexible editing\nthat is compatible with current image generation models, and is able to handle\ncomplex prompts featuring multiple objects, complex sentences, or lengthy\nparagraphs. We conduct an extensive user study to compare our method against\nstate-of-the-art methods. The experiments demonstrate the competitive\nperformance of our method in manipulating images with high fidelity and realism\nthat correspond to the provided language descriptions. Our project webpage can\nbe found at: https://yuanze-lin.me/LearnableRegions_page.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Yuanze Lin", "Yi-Wen Chen", "Yi-Hsuan Tsai", "Lu Jiang", "Ming-Hsuan Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f641"}, "filepath": "data/2402.18206.png", "tags": [], "_media_type": "image", "_rand": 0.9993047028092508, "arXiv_link": "https://arxiv.org/abs/2402.18206", "other_link": "", "title": "Balancing Act: Distribution-Guided Debiasing in Diffusion Models", "abstract": "Diffusion Models (DMs) have emerged as powerful generative models with\nunprecedented image generation capability. These models are widely used for\ndata augmentation and creative applications. However, DMs reflect the biases\npresent in the training datasets. This is especially concerning in the context\nof faces, where the DM prefers one demographic subgroup vs others (eg. female\nvs male). In this work, we present a method for debiasing DMs without relying\non additional data or model retraining. 
Specifically, we propose Distribution\nGuidance, which enforces the generated images to follow the prescribed\nattribute distribution. To realize this, we build on the key insight that the\nlatent features of denoising UNet hold rich demographic semantics, and the same\ncan be leveraged to guide debiased generation. We train Attribute Distribution\nPredictor (ADP) - a small mlp that maps the latent features to the distribution\nof attributes. ADP is trained with pseudo labels generated from existing\nattribute classifiers. The proposed Distribution Guidance with ADP enables us\nto do fair generation. Our method reduces bias across single/multiple\nattributes and outperforms the baseline by a significant margin for\nunconditional and text-conditional diffusion models. Further, we present a\ndownstream task of training a fair attribute classifier by rebalancing the\ntraining set with our generated data.", "keywords": ["Image and video generation and manipulation", "Vision applications for social good and ethics", "Deep learning architectures and techniques"], "authors_list": ["Rishubh Parihar", "Abhijnya Bhat", "Abhipsa Basu", "Saswat Mallick", "Jogendra Kundu Kundu", "R. Venkatesh Babu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f642"}, "filepath": "data/2309.07906.png", "tags": [], "_media_type": "image", "_rand": 0.9991659388438279, "arXiv_link": "https://arxiv.org/abs/2309.07906", "other_link": "", "title": "Generative Image Dynamics", "abstract": "We present an approach to modeling an image-space prior on scene motion. Our\nprior is learned from a collection of motion trajectories extracted from real\nvideo sequences depicting natural, oscillatory dynamics such as trees, flowers,\ncandles, and clothes swaying in the wind. We model this dense, long-term motion\nprior in the Fourier domain:given a single image, our trained model uses a\nfrequency-coordinated diffusion sampling process to predict a spectral volume,\nwhich can be converted into a motion texture that spans an entire video. Along\nwith an image-based rendering module, these trajectories can be used for a\nnumber of downstream applications, such as turning still images into seamlessly\nlooping videos, or allowing users to realistically interact with objects in\nreal pictures by interpreting the spectral volumes as image-space modal bases,\nwhich approximate object dynamics.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Zhengqi Li", "Richard Tucker", "Noah Snavely", "Aleksander Holynski"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f643"}, "filepath": "data/2303.10275v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993304201540979, "arXiv_link": "https://arxiv.org/html/2303.10275v2", "other_link": "", "title": "RAM-Avatar: Real-time Photo-Realistic Avatar from Monocular Videos with Full-body Control", "abstract": "We present a system to create Mobile Realistic Fullbody (MoRF) avatars. MoRF\navatars are rendered in real-time on mobile devices, learned from monocular\nvideos, and have high realism. We use SMPL-X as a proxy geometry and render it\nwith DNR (neural texture and image-2-image network). 
We improve on prior work,\nby overfitting per-frame warping fields in the neural texture space, allowing\nto better align the training signal between different frames. We also refine\nSMPL-X mesh fitting procedure to improve the overall avatar quality. In the\ncomparisons to other monocular video-based avatar systems, MoRF avatars achieve\nhigher image sharpness and temporal consistency. Participants of our user study\nalso preferred avatars generated by MoRF.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["xiang deng", "Zerong Zheng", "Yuxiang Zhang", "Jingxiang Sun", "Chao Xu", "Xiaodong Yang", "Lizhen Wang", "Yebin Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f644"}, "filepath": "data/2401.08053.png", "tags": [], "_media_type": "image", "_rand": 0.9990346358552451, "arXiv_link": "https://arxiv.org/abs/2401.08053", "other_link": "", "title": "SCoFT: Self-Contrastive Fine-Tuning for Equitable Image Generation", "abstract": "Accurate representation in media is known to improve the well-being of the\npeople who consume it. Generative image models trained on large web-crawled\ndatasets such as LAION are known to produce images with harmful stereotypes and\nmisrepresentations of cultures. We improve inclusive representation in\ngenerated images by (1) engaging with communities to collect a culturally\nrepresentative dataset that we call the Cross-Cultural Understanding Benchmark\n(CCUB) and (2) proposing a novel Self-Contrastive Fine-Tuning (SCoFT) method\nthat leverages the model's known biases to self-improve. SCoFT is designed to\nprevent overfitting on small datasets, encode only high-level information from\nthe data, and shift the generated distribution away from misrepresentations\nencoded in a pretrained model. Our user study conducted on 51 participants from\n5 different countries based on their self-selected national cultural\naffiliation shows that fine-tuning on CCUB consistently generates images with\nhigher cultural relevance and fewer stereotypes when compared to the Stable\nDiffusion baseline, which is further improved with our SCoFT technique.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Zhixuan Liu", "Peter Schaldenbrand", "Beverley-Claire Okogwu", "Wenxuan Peng", "Youngsik Yun", "Andrew Hundt", "Jihie Kim", "Jean Oh"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f645"}, "filepath": "data/2401.02411.png", "tags": [], "_media_type": "image", "_rand": 0.9995011465234295, "arXiv_link": "https://arxiv.org/abs/2401.02411", "other_link": "", "title": "Rendering Every Pixel for High-Fidelity Geometry in 3D GANs", "abstract": "3D-aware Generative Adversarial Networks (GANs) have shown remarkable\nprogress in learning to generate multi-view-consistent images and 3D geometries\nof scenes from collections of 2D images via neural volume rendering. 
Yet, the\nsignificant memory and computational costs of dense sampling in volume\nrendering have forced 3D GANs to adopt patch-based training or employ\nlow-resolution rendering with post-processing 2D super resolution, which\nsacrifices multiview consistency and the quality of resolved geometry.\nConsequently, 3D GANs have not yet been able to fully resolve the rich 3D\ngeometry present in 2D images. In this work, we propose techniques to scale\nneural volume rendering to the much higher resolution of native 2D images,\nthereby resolving fine-grained 3D geometry with unprecedented detail. Our\napproach employs learning-based samplers for accelerating neural rendering for\n3D GAN training using up to 5 times fewer depth samples. This enables us to\nexplicitly \"render every pixel\" of the full-resolution image during training\nand inference without post-processing superresolution in 2D. Together with our\nstrategy to learn high-quality surface geometry, our method synthesizes\nhigh-resolution 3D geometry and strictly view-consistent images while\nmaintaining image quality on par with baselines relying on post-processing\nsuper resolution. We demonstrate state-of-the-art 3D gemetric quality on FFHQ\nand AFHQ, setting a new standard for unsupervised learning of 3D shapes in 3D\nGANs.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Alex Trevithick", "Matthew Chan", "Towaki Takikawa", "Umar Iqbal", "Shalini De Mello", "Manmohan Chandraker", "Ravi Ramamoorthi", "Koki Nagano"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f646"}, "filepath": "data/2310.08873.png", "tags": [], "_media_type": "image", "_rand": 0.9990694369080736, "arXiv_link": "https://arxiv.org/abs/2310.08873", "other_link": "", "title": "An Interactive Navigation Method with Effect-oriented Affordance", "abstract": "This paper proposes an interactive navigation framework by using large\nlanguage and vision-language models, allowing robots to navigate in\nenvironments with traversable obstacles. We utilize the large language model\n(GPT-3.5) and the open-set Vision-language Model (Grounding DINO) to create an\naction-aware costmap to perform effective path planning without fine-tuning.\nWith the large models, we can achieve an end-to-end system from textual\ninstructions like \"Can you pass through the curtains to deliver medicines to\nme?\", to bounding boxes (e.g., curtains) with action-aware attributes. They can\nbe used to segment LiDAR point clouds into two parts: traversable and\nuntraversable parts, and then an action-aware costmap is constructed for\ngenerating a feasible path. The pre-trained large models have great\ngeneralization ability and do not require additional annotated data for\ntraining, allowing fast deployment in the interactive navigation tasks. We\nchoose to use multiple traversable objects such as curtains and grasses for\nverification by instructing the robot to traverse them. Besides, traversing\ncurtains in a medical scenario was tested. 
All experimental results\ndemonstrated the proposed framework's effectiveness and adaptability to diverse\nenvironments.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Xiaohan Wang", "Yuehu LIU", "Xinhang Song", "Yuyi Liu", "Sixian Zhang", "Shuqiang Jiang"], "category_name": "Robotics", "all_categories": ["Robotics", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f647"}, "filepath": "data/2404.02790.png", "tags": [], "_media_type": "image", "_rand": 0.9992871805486313, "arXiv_link": "https://arxiv.org/abs/2404.02790", "other_link": "https://MuLAn-dataset.github.io/.", "title": "MULAN: A Multi Layer Annotated Dataset for Controllable Text-to-Image Generation", "abstract": "Text-to-image generation has achieved astonishing results, yet precise\nspatial controllability and prompt fidelity remain highly challenging. This\nlimitation is typically addressed through cumbersome prompt engineering, scene\nlayout conditioning, or image editing techniques which often require hand drawn\nmasks. Nonetheless, pre-existing works struggle to take advantage of the\nnatural instance-level compositionality of scenes due to the typically flat\nnature of rasterized RGB output images. Towards adressing this challenge, we\nintroduce MuLAn: a novel dataset comprising over 44K MUlti-Layer ANnotations of\nRGB images as multilayer, instance-wise RGBA decompositions, and over 100K\ninstance images. To build MuLAn, we developed a training free pipeline which\ndecomposes a monocular RGB image into a stack of RGBA layers comprising of\nbackground and isolated instances. We achieve this through the use of\npretrained general-purpose models, and by developing three modules: image\ndecomposition for instance discovery and extraction, instance completion to\nreconstruct occluded areas, and image re-assembly. We use our pipeline to\ncreate MuLAn-COCO and MuLAn-LAION datasets, which contain a variety of image\ndecompositions in terms of style, composition and complexity. With MuLAn, we\nprovide the first photorealistic resource providing instance decomposition and\nocclusion information for high quality images, opening up new avenues for\ntext-to-image generative AI research. With this, we aim to encourage the\ndevelopment of novel generation and editing technology, in particular\nlayer-wise solutions. MuLAn data resources are available at\nhttps://MuLAn-dataset.github.io/.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Petru-Daniel Tudosiu", "Yongxin Yang", "Shifeng Zhang", "Fei Chen", "Steven McDonagh", "Gerasimos Lampouras", "Ignacio Iacobacci", "Sarah Parisot"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f648"}, "filepath": "data/2404.02883.png", "tags": [], "_media_type": "image", "_rand": 0.9994202663127332, "arXiv_link": "https://arxiv.org/abs/2404.02883", "other_link": "", "title": "On the Scalability of Diffusion-based Text-to-Image Generation", "abstract": "Scaling up model and data size has been quite successful for the evolution of\nLLMs. However, the scaling law for the diffusion based text-to-image (T2I)\nmodels is not fully explored. It is also unclear how to efficiently scale the\nmodel for better performance at reduced cost. 
The different training settings\nand expensive training cost make a fair model comparison extremely difficult.\nIn this work, we empirically study the scaling properties of diffusion based\nT2I models by performing extensive and rigours ablations on scaling both\ndenoising backbones and training set, including training scaled UNet and\nTransformer variants ranging from 0.4B to 4B parameters on datasets upto 600M\nimages. For model scaling, we find the location and amount of cross attention\ndistinguishes the performance of existing UNet designs. And increasing the\ntransformer blocks is more parameter-efficient for improving text-image\nalignment than increasing channel numbers. We then identify an efficient UNet\nvariant, which is 45% smaller and 28% faster than SDXL's UNet. On the data\nscaling side, we show the quality and diversity of the training set matters\nmore than simply dataset size. Increasing caption density and diversity\nimproves text-image alignment performance and the learning efficiency. Finally,\nwe provide scaling functions to predict the text-image alignment performance as\nfunctions of the scale of model size, compute and dataset size.", "keywords": ["Efficient and scalable vision", "Deep learning architectures and techniques"], "authors_list": ["Hao Li", "Yang Zou", "Ying Wang", "Orchid Majumder", "Yusheng Xie", "R. Manmatha", "Ashwin Swaminathan", "Zhuowen Tu", "Stefano Ermon", "Stefano Soatto"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f649"}, "filepath": "data/2405.06828.png", "tags": [], "_media_type": "image", "_rand": 0.9995087965641777, "arXiv_link": "https://arxiv.org/abs/2405.06828", "other_link": "https://github.com/J-F-Cheng/G-FARS-3DPartGrouping.", "title": "G-FARS: Gradient-Field-based Auto-Regressive Sampling for 3D Part Grouping", "abstract": "This paper proposes a novel task named \"3D part grouping\". Suppose there is a\nmixed set containing scattered parts from various shapes. This task requires\nalgorithms to find out every possible combination among all the parts. To\naddress this challenge, we propose the so called Gradient Field-based\nAuto-Regressive Sampling framework (G-FARS) tailored specifically for the 3D\npart grouping task. In our framework, we design a gradient-field-based\nselection graph neural network (GNN) to learn the gradients of a log\nconditional probability density in terms of part selection, where the condition\nis the given mixed part set. This innovative approach, implemented through the\ngradient-field-based selection GNN, effectively captures complex relationships\namong all the parts in the input. Upon completion of the training process, our\nframework becomes capable of autonomously grouping 3D parts by iteratively\nselecting them from the mixed part set, leveraging the knowledge acquired by\nthe trained gradient-field-based selection GNN. 
Our code is available at:\nhttps://github.com/J-F-Cheng/G-FARS-3DPartGrouping.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Junfeng Cheng", "Tania Stathaki"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f64a"}, "filepath": "data/2404.14759.png", "tags": [], "_media_type": "image", "_rand": 0.9997589297506447, "arXiv_link": "https://arxiv.org/abs/2404.14759", "other_link": "https://github.com/I2-Multimedia-Lab/A2S-v3.", "title": "Unsupervised Salient Instance Detection", "abstract": "Recently, unsupervised salient object detection (USOD) has gained increasing\nattention due to its annotation-free nature. However, current methods mainly\nfocus on specific tasks such as RGB and RGB-D, neglecting the potential for\ntask migration. In this paper, we propose a unified USOD framework for generic\nUSOD tasks. Firstly, we propose a Progressive Curriculum Learning-based\nSaliency Distilling (PCL-SD) mechanism to extract saliency cues from a\npre-trained deep network. This mechanism starts with easy samples and\nprogressively moves towards harder ones, to avoid initial interference caused\nby hard samples. Afterwards, the obtained saliency cues are utilized to train a\nsaliency detector, and we employ a Self-rectify Pseudo-label Refinement (SPR)\nmechanism to improve the quality of pseudo-labels. Finally, an adapter-tuning\nmethod is devised to transfer the acquired saliency knowledge, leveraging\nshared knowledge to attain superior transferring performance on the target\ntasks. Extensive experiments on five representative SOD tasks confirm the\neffectiveness and feasibility of our proposed method. Code and supplement\nmaterials are available at https://github.com/I2-Multimedia-Lab/A2S-v3.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xin Tian", "Ke Xu", "Rynson W.H. Lau"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f64b"}, "filepath": "data/2312.04524.png", "tags": [], "_media_type": "image", "_rand": 0.9994046535977975, "arXiv_link": "https://arxiv.org/abs/2312.04524", "other_link": "https://rave-video.github.io.", "title": "RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models", "abstract": "Recent advancements in diffusion-based models have demonstrated significant\nsuccess in generating images from text. However, video editing models have not\nyet reached the same level of visual quality and user control. To address this,\nwe introduce RAVE, a zero-shot video editing method that leverages pre-trained\ntext-to-image diffusion models without additional training. RAVE takes an input\nvideo and a text prompt to produce high-quality videos while preserving the\noriginal motion and semantic structure. It employs a novel noise shuffling\nstrategy, leveraging spatio-temporal interactions between frames, to produce\ntemporally consistent videos faster than existing methods. It is also efficient\nin terms of memory requirements, allowing it to handle longer videos. RAVE is\ncapable of a wide range of edits, from local attribute modifications to shape\ntransformations. 
In order to demonstrate the versatility of RAVE, we create a\ncomprehensive video evaluation dataset ranging from object-focused scenes to\ncomplex human activities like dancing and typing, and dynamic scenes featuring\nswimming fish and boats. Our qualitative and quantitative experiments highlight\nthe effectiveness of RAVE in diverse video editing scenarios compared to\nexisting methods. Our code, dataset and videos can be found in\nhttps://rave-video.github.io.", "keywords": ["Image and video generation and manipulation", "Efficient and scalable vision"], "authors_list": ["Ozgur Kara", "Bariscan Kurtkaya", "Hidir Yesiltepe", "James Rehg", "Pinar Yanardag"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f64c"}, "filepath": "data/2404.01294.png", "tags": [], "_media_type": "image", "_rand": 0.9993173509829539, "arXiv_link": "http://export.arxiv.org/abs/2404.01294", "other_link": "", "title": "CosmicMan: A Text-to-Image Foundation Model for Humans", "abstract": "We present CosmicMan, a text-to-image foundation model specialized for\ngenerating high-fidelity human images. Unlike current general-purpose\nfoundation models that are stuck in the dilemma of inferior quality and\ntext-image misalignment for humans, CosmicMan enables generating\nphoto-realistic human images with meticulous appearance, reasonable structure,\nand precise text-image alignment with detailed dense descriptions. At the heart\nof CosmicMan's success are the new reflections and perspectives on data and\nmodels: (1) We found that data quality and a scalable data production flow are\nessential for the final results from trained models. Hence, we propose a new\ndata production paradigm, Annotate Anyone, which serves as a perpetual data\nflywheel to produce high-quality data with accurate yet cost-effective\nannotations over time. Based on this, we constructed a large-scale dataset,\nCosmicMan-HQ 1.0, with 6 Million high-quality real-world human images in a mean\nresolution of 1488x1255, and attached with precise text annotations deriving\nfrom 115 Million attributes in diverse granularities. (2) We argue that a\ntext-to-image foundation model specialized for humans must be pragmatic -- easy\nto integrate into down-streaming tasks while effective in producing\nhigh-quality human images. Hence, we propose to model the relationship between\ndense text descriptions and image pixels in a decomposed manner, and present\nDecomposed-Attention-Refocusing (Daring) training framework. It seamlessly\ndecomposes the cross-attention features in existing text-to-image diffusion\nmodel, and enforces attention refocusing without adding extra modules. 
Through\nDaring, we show that explicitly discretizing continuous text space into several\nbasic groups that align with human body structure is the key to tackling the\nmisalignment problem in a breeze.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Shikai Li", "Jianglin Fu", "Kaiyuan Liu", "Wentao Wang", "Kwan-Yee Lin", "Wayne Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f64d"}, "filepath": "data/2312.02567.png", "tags": [], "_media_type": "image", "_rand": 0.9994041102993161, "arXiv_link": "https://arxiv.org/abs/2312.02567", "other_link": "https://github.com/JiayiChen815/FEAL.", "title": "Think Twice Before Selection: Federated Evidential Active Learning for Medical Image Analysis with Domain Shifts", "abstract": "Federated learning facilitates the collaborative learning of a global model\nacross multiple distributed medical institutions without centralizing data.\nNevertheless, the expensive cost of annotation on local clients remains an\nobstacle to effectively utilizing local data. To mitigate this issue, federated\nactive learning methods suggest leveraging local and global model predictions\nto select a relatively small amount of informative local data for annotation.\nHowever, existing methods mainly focus on all local data sampled from the same\ndomain, making them unreliable in realistic medical scenarios with domain\nshifts among different clients. In this paper, we make the first attempt to\nassess the informativeness of local data derived from diverse domains and\npropose a novel methodology termed Federated Evidential Active Learning (FEAL)\nto calibrate the data evaluation under domain shift. Specifically, we introduce\na Dirichlet prior distribution in both local and global models to treat the\nprediction as a distribution over the probability simplex and capture both\naleatoric and epistemic uncertainties by using the Dirichlet-based evidential\nmodel. Then we employ the epistemic uncertainty to calibrate the aleatoric\nuncertainty. Afterward, we design a diversity relaxation strategy to reduce\ndata redundancy and maintain data diversity. Extensive experiments and analysis\non five real multi-center medical image datasets demonstrate the superiority of\nFEAL over the state-of-the-art active learning methods in federated scenarios\nwith domain shifts. The code will be available at\nhttps://github.com/JiayiChen815/FEAL.", "keywords": [], "authors_list": ["Jiayi Chen", "Benteng Ma", "Hengfei Cui", "Kwang-Ting Cheng", "Yong Xia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f64e"}, "filepath": "data/2404.15516.png", "tags": [], "_media_type": "image", "_rand": 0.999825792097823, "arXiv_link": "https://arxiv.org/abs/2404.15516", "other_link": "", "title": "Visual Delta Generator with Large Multi-modal Models for Semi-supervised Composed Image Retrieval", "abstract": "Composed Image Retrieval (CIR) is a task that retrieves images similar to a\nquery, based on a provided textual modification. Current techniques rely on\nsupervised learning for CIR models using labeled triplets of the reference\nimage, text, target image. 
These specific triplets are not as commonly\navailable as simple image-text pairs, limiting the widespread use of CIR and\nits scalability. On the other hand, zero-shot CIR can be relatively easily\ntrained with image-caption pairs without considering the image-to-image\nrelation, but this approach tends to yield lower accuracy. We propose a new\nsemi-supervised CIR approach where we search for a reference and its related\ntarget images in auxiliary data and learn our large language model-based Visual\nDelta Generator (VDG) to generate text describing the visual difference (i.e.,\nvisual delta) between the two. VDG, equipped with fluent language knowledge and\nbeing model agnostic, can generate pseudo triplets to boost the performance of\nCIR models. Our approach significantly improves the existing supervised\nlearning approaches and achieves state-of-the-art results on the CIR\nbenchmarks.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Young Kyun Jang", "Donghyun Kim", "Zihang Meng", "Dat Huynh", "Ser-Nam Lim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f64f"}, "filepath": "data/2312.03033.png", "tags": [], "_media_type": "image", "_rand": 0.9997918672328862, "arXiv_link": "https://arxiv.org/abs/2312.03033", "other_link": "", "title": "LiDAR-based Person Re-identification", "abstract": "Camera-based person re-identification (ReID) systems have been widely applied\nin the field of public security. However, cameras often lack the perception of\n3D morphological information of human and are susceptible to various\nlimitations, such as inadequate illumination, complex background, and personal\nprivacy. In this paper, we propose a LiDAR-based ReID framework, ReID3D, that\nutilizes pre-training strategy to retrieve features of 3D body shape and\nintroduces Graph-based Complementary Enhancement Encoder for extracting\ncomprehensive features. Due to the lack of LiDAR datasets, we build LReID, the\nfirst LiDAR-based person ReID dataset, which is collected in several outdoor\nscenes with variations in natural conditions. Additionally, we introduce\nLReID-sync, a simulated pedestrian dataset designed for pre-training encoders\nwith tasks of point cloud completion and shape parameter learning. Extensive\nexperiments on LReID show that ReID3D achieves exceptional performance with a\nrank-1 accuracy of 94.0, highlighting the significant potential of LiDAR in\naddressing person ReID tasks. To the best of our knowledge, we are the first to\npropose a solution for LiDAR-based ReID. 
The code and datasets will be released\nsoon.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Wenxuan Guo", "Zhiyu Pan", "Yingping Liang", "Ziheng Xi", "Zhi Chen Zhong", "Jianjiang Feng", "Jie Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f650"}, "filepath": "data/2404.06443.png", "tags": [], "_media_type": "image", "_rand": 0.999499940889285, "arXiv_link": "https://arxiv.org/abs/2404.06443", "other_link": "https://github.com/CVI-SZU/MDHR.", "title": "Multi-scale Dynamic and Hierarchical Relationship Modeling for Facial Action Units Recognition", "abstract": "Human facial action units (AUs) are mutually related in a hierarchical\nmanner, as not only they are associated with each other in both spatial and\ntemporal domains but also AUs located in the same/close facial regions show\nstronger relationships than those of different facial regions. While none of\nexisting approach thoroughly model such hierarchical inter-dependencies among\nAUs, this paper proposes to comprehensively model multi-scale AU-related\ndynamic and hierarchical spatio-temporal relationship among AUs for their\noccurrences recognition. Specifically, we first propose a novel multi-scale\ntemporal differencing network with an adaptive weighting block to explicitly\ncapture facial dynamics across frames at different spatial scales, which\nspecifically considers the heterogeneity of range and magnitude in different\nAUs' activation. Then, a two-stage strategy is introduced to hierarchically\nmodel the relationship among AUs based on their spatial distribution (i.e.,\nlocal and cross-region AU relationship modelling). Experimental results\nachieved on BP4D and DISFA show that our approach is the new state-of-the-art\nin the field of AU occurrence recognition. Our code is publicly available at\nhttps://github.com/CVI-SZU/MDHR.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Zihan Wang", "Siyang Song", "Cheng Luo", "Songhe Deng", "Weicheng Xie", "Linlin Shen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f651"}, "filepath": "data/2312.06874.png", "tags": [], "_media_type": "image", "_rand": 0.9990955260118481, "arXiv_link": "https://arxiv.org/abs/2312.06874", "other_link": "", "title": "Adapt or Perish: Adaptive Sparse Transformer with Attentive Feature Refinement for Image Restoration", "abstract": "Transformers have achieved remarkable performance in multivariate time\nseries(MTS) forecasting due to their capability to capture long-term\ndependencies. However, the canonical attention mechanism has two key\nlimitations: (1) its quadratic time complexity limits the sequence length, and\n(2) it generates future values from the entire historical sequence. To address\nthis, we propose a Dozer Attention mechanism consisting of three sparse\ncomponents: (1) Local, each query exclusively attends to keys within a\nlocalized window of neighboring time steps. (2) Stride, enables each query to\nattend to keys at predefined intervals. (3) Vary, allows queries to selectively\nattend to keys from a subset of the historical sequence. Notably, the size of\nthis subset dynamically expands as forecasting horizons extend. 
Those three\ncomponents are designed to capture essential attributes of MTS data, including\nlocality, seasonality, and global temporal dependencies. Additionally, we\npresent the Dozerformer Framework, incorporating the Dozer Attention mechanism\nfor the MTS forecasting task. We evaluated the proposed Dozerformer framework\nwith recent state-of-the-art methods on nine benchmark datasets and confirmed\nits superior performance. The code will be released after the manuscript is\naccepted.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Shihao Zhou", "Duosheng Chen", "Jinshan Pan", "Jinglei Shi", "Jufeng Yang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f652"}, "filepath": "data/2310.03978.png", "tags": [], "_media_type": "image", "_rand": 0.999226687017277, "arXiv_link": "https://arxiv.org/abs/2310.03978", "other_link": "", "title": "Circuit Design and Efficient Simulation of Quantum Inner Product and Empirical Studies of Its Effect on Near-Term Hybrid Quantum-Classic Machine Learning", "abstract": "Efficient simulation of quantum circuits has become indispensable with the\nrapid development of quantum hardware. The primary simulation methods are based\non state vectors and tensor networks. As the number of qubits and quantum gates\ngrows larger in current quantum devices, traditional state-vector based quantum\ncircuit simulation methods prove inadequate due to the overwhelming size of the\nHilbert space and extensive entanglement. Consequently, brute-force tensor\nnetwork simulation algorithms become the only viable solution in such\nscenarios. The two main challenges faced in tensor network simulation\nalgorithms are optimal contraction path finding and efficient execution on\nmodern computing devices, with the latter determining the actual efficiency. In\nthis study, we investigate the optimization of such tensor network simulations\non modern GPUs and propose general optimization strategies from two aspects:\ncomputational efficiency and accuracy. Firstly, we propose to transform\ncritical Einstein summation operations into GEMM operations, leveraging the\nspecific features of tensor network simulations to amplify the efficiency of\nGPUs. Secondly, by analyzing the data characteristics of quantum circuits, we\nemploy extended precision to ensure the accuracy of simulation results and\nmixed precision to fully exploit the potential of GPUs, resulting in faster and\nmore precise simulations. Our numerical experiments demonstrate that our\napproach can achieve a 3.96x reduction in verification time for random quantum\ncircuit samples in the 18-cycle case of Sycamore, with sustained performance\nexceeding 21 TFLOPS on one A100.
This method can be easily extended to the\n20-cycle case, maintaining the same performance, accelerating by 12.5x compared\nto the state-of-the-art CPU-based results and 4.48-6.78x compared to the\nstate-of-the-art GPU-based results reported in the literature.", "keywords": [], "authors_list": ["Hao Xiong", "Yehui Tang", "Xinyu Ye", "Junchi Yan"], "category_name": "", "all_categories": ["Unknown", "Distributed, Parallel, and Cluster Computing", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f653"}, "filepath": "data/2401.01702.png", "tags": [], "_media_type": "image", "_rand": 0.9998134876818149, "arXiv_link": "https://arxiv.org/abs/2401.01702", "other_link": "", "title": "Image Sculpting: Precise Object Editing with 3D Geometry Control", "abstract": "We present Image Sculpting, a new framework for editing 2D images by\nincorporating tools from 3D geometry and graphics. This approach differs\nmarkedly from existing methods, which are confined to 2D spaces and typically\nrely on textual instructions, leading to ambiguity and limited control. Image\nSculpting converts 2D objects into 3D, enabling direct interaction with their\n3D geometry. Post-editing, these objects are re-rendered into 2D, merging into\nthe original image to produce high-fidelity results through a coarse-to-fine\nenhancement process. The framework supports precise, quantifiable, and\nphysically-plausible editing options such as pose editing, rotation,\ntranslation, 3D composition, carving, and serial addition. It marks an initial\nstep towards combining the creative freedom of generative models with the\nprecision of graphics pipelines.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Jiraphon Yenphraphai", "Xichen Pan", "Sainan Liu", "Daniele Panozzo", "Saining Xie"], "category_name": "Graphics", "all_categories": ["Graphics", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f654"}, "filepath": "data/2403.19334.png", "tags": [], "_media_type": "image", "_rand": 0.9996417292807329, "arXiv_link": "https://arxiv.org/abs/2403.19334", "other_link": "", "title": "Test-Time Domain Generalization for Face Anti-Spoofing", "abstract": "Face Anti-Spoofing (FAS) is pivotal in safeguarding facial recognition\nsystems against presentation attacks. While domain generalization (DG) methods\nhave been developed to enhance FAS performance, they predominantly focus on\nlearning domain-invariant features during training, which may not guarantee\ngeneralizability to unseen data that differs largely from the source\ndistributions. Our insight is that testing data can serve as a valuable\nresource to enhance the generalizability beyond mere evaluation for DG FAS. In\nthis paper, we introduce a novel Test-Time Domain Generalization (TTDG)\nframework for FAS, which leverages the testing data to boost the model's\ngeneralizability. Our method, consisting of Test-Time Style Projection (TTSP)\nand Diverse Style Shifts Simulation (DSSS), effectively projects the unseen\ndata to the seen domain space. In particular, we first introduce the innovative\nTTSP to project the styles of the arbitrarily unseen samples of the testing\ndistribution to the known source space of the training distributions. We then\ndesign the efficient DSSS to synthesize diverse style shifts via learnable\nstyle bases with two specifically designed losses in a hyperspherical feature\nspace. 
Our method eliminates the need for model updates at test time and\ncan be seamlessly integrated into not only the CNN but also ViT backbones.\nComprehensive experiments on widely used cross-domain FAS benchmarks\ndemonstrate our method's state-of-the-art performance and effectiveness.", "keywords": [], "authors_list": ["Qianyu Zhou", "Ke-Yue Zhang", "Taiping Yao", "Xuequan Lu", "Shouhong Ding", "Lizhuang Ma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f655"}, "filepath": "data/2312.02010.png", "tags": [], "_media_type": "image", "_rand": 0.9994123220622448, "arXiv_link": "https://arxiv.org/abs/2312.02010", "other_link": "", "title": "Towards Learning a Generalist Model for Embodied Navigation", "abstract": "Building a generalist agent that can interact with the world is the\nintriguing target of AI systems, thus spurring the research for embodied\nnavigation, where an agent is required to navigate according to instructions or\nrespond to queries. Despite the major progress attained, previous works\nprimarily focus on task-specific agents and lack generalizability to unseen\nscenarios. Recently, LLMs have presented remarkable capabilities across various\nfields, and provided a promising opportunity for embodied navigation. Drawing\non this, we propose the first generalist model for embodied navigation,\nNaviLLM. It adapts LLMs to embodied navigation by introducing schema-based\ninstruction. The schema-based instruction flexibly casts various tasks into\ngeneration problems, thereby unifying a wide range of tasks. This approach\nallows us to integrate diverse data sources from various datasets into the\ntraining, equipping NaviLLM with a wide range of capabilities required by\nembodied navigation. We conduct extensive experiments to evaluate the\nperformance and generalizability of our model. The experimental results\ndemonstrate that our unified model achieves state-of-the-art performance on\nCVDN, SOON, and ScanQA. Specifically, it surpasses the previous\nstate-of-the-art method by a significant margin of 29% in goal progress on\nCVDN. Moreover, our model also demonstrates strong generalizability and\npresents impressive results on unseen tasks, e.g., embodied question answering\nand 3D captioning.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models"], "authors_list": ["Duo Zheng", "Shijia Huang", "Lin Zhao", "Yiwu Zhong", "Liwei Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f656"}, "filepath": "data/2403.01781v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996089869102857, "arXiv_link": "https://arxiv.org/abs/2403.01781v1", "other_link": "", "title": "Integrating Efficient Optimal Transport and Functional Maps For Unsupervised Shape Correspondence Learning", "abstract": "In the realm of computer vision and graphics, accurately establishing\ncorrespondences between geometric 3D shapes is pivotal for applications like\nobject tracking, registration, texture transfer, and statistical shape\nanalysis.
Moving beyond traditional hand-crafted and data-driven feature\nlearning methods, we incorporate spectral methods with deep learning, focusing\non functional maps (FMs) and optimal transport (OT). Traditional OT-based\napproaches, often reliant on entropy regularization OT in learning-based\nframework, face computational challenges due to their quadratic cost. Our key\ncontribution is to employ the sliced Wasserstein distance (SWD) for OT, which\nis a valid fast optimal transport metric in an unsupervised shape matching\nframework. This unsupervised framework integrates functional map regularizers\nwith a novel OT-based loss derived from SWD, enhancing feature alignment\nbetween shapes treated as discrete probability measures. We also introduce an\nadaptive refinement process utilizing entropy regularized OT, further refining\nfeature alignments for accurate point-to-point correspondences. Our method\ndemonstrates superior performance in non-rigid shape matching, including\nnear-isometric and non-isometric scenarios, and excels in downstream tasks like\nsegmentation transfer. The empirical results on diverse datasets highlight our\nframework's effectiveness and generalization capabilities, setting new\nstandards in non-rigid shape matching with efficient OT metrics and an adaptive\nrefinement module.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Tung Le", "Khai Nguyen", "Shanlin Sun", "Nhat Ho", "Xiaohui Xie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f657"}, "filepath": "data/2312.09558.png", "tags": [], "_media_type": "image", "_rand": 0.999602400158814, "arXiv_link": "https://arxiv.org/abs/2312.09558", "other_link": "", "title": "Towards Transferable Targeted 3D Adversarial Attack in the Physical World", "abstract": "Compared with transferable untargeted attacks, transferable targeted\nadversarial attacks could specify the misclassification categories of\nadversarial samples, posing a greater threat to security-critical tasks. In the\nmeanwhile, 3D adversarial samples, due to their potential of multi-view\nrobustness, can more comprehensively identify weaknesses in existing deep\nlearning systems, possessing great application value. However, the field of\ntransferable targeted 3D adversarial attacks remains vacant. The goal of this\nwork is to develop a more effective technique that could generate transferable\ntargeted 3D adversarial examples, filling the gap in this field. To achieve\nthis goal, we design a novel framework named TT3D that could rapidly\nreconstruct from few multi-view images into Transferable Targeted 3D textured\nmeshes. While existing mesh-based texture optimization methods compute\ngradients in the high-dimensional mesh space and easily fall into local optima,\nleading to unsatisfactory transferability and distinct distortions, TT3D\ninnovatively performs dual optimization towards both feature grid and\nMulti-layer Perceptron (MLP) parameters in the grid-based NeRF space, which\nsignificantly enhances black-box transferability while enjoying naturalness.\nExperimental results show that TT3D not only exhibits superior cross-model\ntransferability but also maintains considerable adaptability across different\nrenders and vision tasks. 
More importantly, we produce 3D adversarial examples\nwith 3D printing techniques in the real world and verify their robust\nperformance under various scenarios.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yao Huang", "Yinpeng Dong", "Shouwei Ruan", "Xiao Yang", "Hang Su", "Xingxing Wei"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f658"}, "filepath": "data/2404.06337.png", "tags": [], "_media_type": "image", "_rand": 0.9994993287955716, "arXiv_link": "https://arxiv.org/abs/2404.06337", "other_link": "", "title": "Matching 2D Images in 3D: Metric Relative Pose from Metric Correspondences", "abstract": "Given two images, we can estimate the relative camera pose between them by\nestablishing image-to-image correspondences. Usually, correspondences are\n2D-to-2D and the pose we estimate is defined only up to scale. Some\napplications, aiming at instant augmented reality anywhere, require\nscale-metric pose estimates, and hence, they rely on external depth estimators\nto recover the scale. We present MicKey, a keypoint matching pipeline that is\nable to predict metric correspondences in 3D camera space. By learning to match\n3D coordinates across images, we are able to infer the metric relative pose\nwithout depth measurements. Depth measurements are also not required for\ntraining, nor are scene reconstructions or image overlap information. MicKey is\nsupervised only by pairs of images and their relative poses. MicKey achieves\nstate-of-the-art performance on the Map-Free Relocalisation benchmark while\nrequiring less supervision than competing approaches.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Axel Barroso-Laguna", "Sowmya Munukutla", "Victor Adrian Prisacariu", "Eric Brachmann"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f659"}, "filepath": "data/2405.03388.png", "tags": [], "_media_type": "image", "_rand": 0.9995190048035029, "arXiv_link": "http://export.arxiv.org/abs/2405.03388", "other_link": "https://github.com/PRBonn/4dNDF", "title": "3D LiDAR Mapping in Dynamic Environments using a 4D Implicit Neural Representation", "abstract": "Building accurate maps is a key building block to enable reliable\nlocalization, planning, and navigation of autonomous vehicles. We propose a\nnovel approach for building accurate maps of dynamic environments utilizing a\nsequence of LiDAR scans. To this end, we propose encoding the 4D scene into a\nnovel spatio-temporal implicit neural map representation by fitting a\ntime-dependent truncated signed distance function to each point. Using our\nrepresentation, we extract the static map by filtering the dynamic parts. Our\nneural representation is based on sparse feature grids, a globally shared\ndecoder, and time-dependent basis functions, which we jointly optimize in an\nunsupervised fashion. To learn this representation from a sequence of LiDAR\nscans, we design a simple yet efficient loss function to supervise the map\noptimization in a piecewise way. We evaluate our approach on various scenes\ncontaining moving objects in terms of the reconstruction quality of static maps\nand the segmentation of dynamic point clouds. 
The experimental results\ndemonstrate that our method is capable of removing the dynamic part of the\ninput point clouds while reconstructing accurate and complete 3D maps,\noutperforming several state-of-the-art methods. Codes are available at:\nhttps://github.com/PRBonn/4dNDF", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xingguang Zhong", "Yue Pan", "Cyrill Stachniss", "Jens Behley"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f65a"}, "filepath": "data/2403.05963.png", "tags": [], "_media_type": "image", "_rand": 0.9998282793656232, "arXiv_link": "https://arxiv.org/abs/2403.05963", "other_link": "", "title": "Robust Emotion Recognition in Context Debiasing", "abstract": "Context-aware emotion recognition (CAER) has recently boosted the practical\napplications of affective computing techniques in unconstrained environments.\nMainstream CAER methods invariably extract ensemble representations from\ndiverse contexts and subject-centred characteristics to perceive the target\nperson's emotional state. Despite advancements, the biggest challenge remains\ndue to context bias interference. The harmful bias forces the models to rely on\nspurious correlations between background contexts and emotion labels in\nlikelihood estimation, causing severe performance bottlenecks and confounding\nvaluable context priors. In this paper, we propose a counterfactual emotion\ninference (CLEF) framework to address the above issue. Specifically, we first\nformulate a generalized causal graph to decouple the causal relationships among\nthe variables in CAER. Following the causal graph, CLEF introduces a\nnon-invasive context branch to capture the adverse direct effect caused by the\ncontext bias. During the inference, we eliminate the direct context effect from\nthe total causal effect by comparing factual and counterfactual outcomes,\nresulting in bias mitigation and robust prediction. As a model-agnostic\nframework, CLEF can be readily integrated into existing methods, bringing\nconsistent performance gains.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Dingkang Yang", "Kun Yang", "Mingcheng Li", "Shunli Wang", "Shuaibing Wang", "Lihua Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f65b"}, "filepath": "data/2402.08359.png", "tags": [], "_media_type": "image", "_rand": 0.9999363051146345, "arXiv_link": "https://arxiv.org/abs/2402.08359", "other_link": "", "title": "Learning to Produce Semi-dense Correspondences for Visual Localization", "abstract": "This study addresses the challenge of performing visual localization in\ndemanding conditions such as night-time scenarios, adverse weather, and\nseasonal changes. While many prior studies have focused on improving\nimage-matching performance to facilitate reliable dense keypoint matching\nbetween images, existing methods often heavily rely on predefined feature\npoints on a reconstructed 3D model. Consequently, they tend to overlook\nunobserved keypoints during the matching process. Therefore, dense keypoint\nmatches are not fully exploited, leading to a notable reduction in accuracy,\nparticularly in noisy scenes. 
To tackle this issue, we propose a novel\nlocalization method that extracts reliable semi-dense 2D-3D matching points\nbased on dense keypoint matches. This approach involves regressing semi-dense\n2D keypoints into 3D scene coordinates using a point inference network. The\nnetwork utilizes both geometric and visual cues to effectively infer 3D\ncoordinates for unobserved keypoints from the observed ones. The abundance of\nmatching information significantly enhances the accuracy of camera pose\nestimation, even in scenarios involving noisy or sparse 3D models.\nComprehensive evaluations demonstrate that the proposed method outperforms\nother methods in challenging scenes and achieves competitive results in\nlarge-scale visual localization benchmarks. The code will be available.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Khang Truong Giang", "Soohwan Song", "Sungho Jo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f65c"}, "filepath": "data/2309.02165.png", "tags": [], "_media_type": "image", "_rand": 0.9999206498038666, "arXiv_link": "https://arxiv.org/abs/2309.02165", "other_link": "", "title": "From Feature to Gaze: A Generalizable Replacement of Linear Layer for Gaze Estimation", "abstract": "Although recent deep learning based gaze estimation approaches have achieved\nmuch improvement, we still know little about how gaze features are connected to\nthe physics of gaze. In this paper, we try to answer this question by analyzing\nthe gaze feature manifold. Our analysis revealed the insight that the geodesic\ndistance between gaze features is consistent with the gaze differences between\nsamples. According to this finding, we construct the Physics-Consistent\nFeature (PCF) in an analytical way, which connects gaze feature to the physical\ndefinition of gaze. We further propose the PCFGaze framework that directly\noptimizes gaze feature space by the guidance of PCF. Experimental results\ndemonstrate that the proposed framework alleviates the overfitting problem and\nsignificantly improves cross-domain gaze estimation accuracy without extra\ntraining data. The insight of gaze feature has the potential to benefit other\nregression tasks with physical meanings.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Yiwei Bao", "Feng Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f65d"}, "filepath": "data/2405.07201.png", "tags": [], "_media_type": "image", "_rand": 0.9990820101537675, "arXiv_link": "https://arxiv.org/abs/2405.07201", "other_link": "https://github.com/chenhaomingbob/CSC,", "title": "Building a Strong Pre-Training Baseline for Universal 3D Large-Scale Perception", "abstract": "An effective pre-training framework with universal 3D representations is\nextremely desired in perceiving large-scale dynamic scenes. However,\nestablishing such an ideal framework that is both task-generic and\nlabel-efficient poses a challenge in unifying the representation of the same\nprimitive across diverse scenes. The current contrastive 3D pre-training\nmethods typically follow a frame-level consistency, which focuses on the 2D-3D\nrelationships in each detached image.
Such inconsiderate consistency greatly\nhampers the promising path of reaching a universal pre-training framework: (1)\nThe cross-scene semantic self-conflict, i.e., the intense collision between\nprimitive segments of the same semantics from different scenes; (2) Lacking a\nglobally unified bond that pushes the cross-scene semantic consistency into 3D\nrepresentation learning. To address the above challenges, we propose a CSC\nframework that puts scene-level semantic consistency at its heart, bridging\nthe connection of the similar semantic segments across various scenes. To\nachieve this goal, we combine the coherent semantic cues provided by the vision\nfoundation model and the knowledge-rich cross-scene prototypes derived from the\ncomplementary multi-modality information. These allow us to train a universal\n3D pre-training model that facilitates various downstream tasks with less\nfine-tuning efforts. Empirically, we achieve consistent improvements over SOTA\npre-training approaches in semantic segmentation (+1.4% mIoU), object detection\n(+1.0% mAP), and panoptic segmentation (+3.0% PQ) using their task-specific 3D\nnetwork on nuScenes. Code is released at https://github.com/chenhaomingbob/CSC,\nhoping to inspire future research.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Haoming Chen", "Zhizhong Zhang", "Yanyun Qu", "Ruixin Zhang", "Xin Tan", "Yuan Xie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f65e"}, "filepath": "data/2402.18920.png", "tags": [], "_media_type": "image", "_rand": 0.9997490116508646, "arXiv_link": "https://web3.arxiv.org/abs/2402.18920", "other_link": "", "title": "Spectral Meets Spatial: Harmonising 3D Shape Matching and Interpolation", "abstract": "Although 3D shape matching and interpolation are highly interrelated, they\nare often studied separately and applied sequentially to relate different 3D\nshapes, thus resulting in sub-optimal performance. In this work we present a\nunified framework to predict both point-wise correspondences and shape\ninterpolation between 3D shapes. To this end, we combine the deep functional\nmap framework with classical surface deformation models to map shapes in both\nspectral and spatial domains. On the one hand, by incorporating spatial maps,\nour method obtains more accurate and smooth point-wise correspondences compared\nto previous functional map methods for shape matching. On the other hand, by\nintroducing spectral maps, our method gets rid of commonly used but\ncomputationally expensive geodesic distance constraints that are only valid for\nnear-isometric shape deformations. Furthermore, we propose a novel test-time\nadaptation scheme to capture both pose-dominant and shape-dominant\ndeformations.
Using different challenging datasets, we demonstrate that our\nmethod outperforms previous state-of-the-art methods for both shape matching\nand interpolation, even compared to supervised approaches.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Dongliang Cao", "Marvin Eisenberger", "Nafie El Amrani", "Daniel Cremers", "Florian Bernard"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computational Geometry"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f65f"}, "filepath": "data/2311.14760.png", "tags": [], "_media_type": "image", "_rand": 0.9998737689362178, "arXiv_link": "https://arxiv.org/abs/2311.14760", "other_link": "https://github.com/wyf0912/SinSR", "title": "SinSR: Diffusion-Based Image Super-Resolution in a Single Step", "abstract": "While super-resolution (SR) methods based on diffusion models exhibit\npromising results, their practical application is hindered by the substantial\nnumber of required inference steps. Recent methods utilize degraded images in\nthe initial state, thereby shortening the Markov chain. Nevertheless, these\nsolutions either rely on a precise formulation of the degradation process or\nstill necessitate a relatively lengthy generation path (e.g., 15 iterations).\nTo enhance inference speed, we propose a simple yet effective method for\nachieving single-step SR generation, named SinSR. Specifically, we first derive\na deterministic sampling process from the most recent state-of-the-art (SOTA)\nmethod for accelerating diffusion-based SR. This allows the mapping between the\ninput random noise and the generated high-resolution image to be obtained in a\nreduced and acceptable number of inference steps during training. We show that\nthis deterministic mapping can be distilled into a student model that performs\nSR within only one inference step. Additionally, we propose a novel\nconsistency-preserving loss to simultaneously leverage the ground-truth image\nduring the distillation process, ensuring that the performance of the student\nmodel is not solely bound by the feature manifold of the teacher model,\nresulting in further performance improvement. Extensive experiments conducted\non synthetic and real-world datasets demonstrate that the proposed method can\nachieve comparable or even superior performance compared to both previous SOTA\nmethods and the teacher model, in just one sampling step, resulting in a\nremarkable up to x10 speedup for inference. Our code will be released at\nhttps://github.com/wyf0912/SinSR", "keywords": ["Efficient and scalable vision", "Low-level vision"], "authors_list": ["Yufei Wang", "Wenhan Yang", "Xinyuan Chen", "Yaohui Wang", "Lanqing Guo", "Lap-Pui Chau", "Ziwei Liu", "Yu Qiao", "Alex C. 
Kot", "Bihan Wen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f660"}, "filepath": "data/2402.03161.png", "tags": [], "_media_type": "image", "_rand": 0.9998456363885264, "arXiv_link": "https://arxiv.org/abs/2402.03161", "other_link": "https://video-lavit.github.io.", "title": "SRTube: Video-Language Pre-Training with Action-Centric Video Tube Features and Semantic Role Labeling", "abstract": "In light of recent advances in multimodal Large Language Models (LLMs), there\nis increasing attention to scaling them from image-text data to more\ninformative real-world videos. Compared to static images, video poses unique\nchallenges for effective large-scale pre-training due to the modeling of its\nspatiotemporal dynamics. In this paper, we address such limitations in\nvideo-language pre-training with an efficient video decomposition that\nrepresents each video as keyframes and temporal motions. These are then adapted\nto an LLM using well-designed tokenizers that discretize visual and temporal\ninformation as a few tokens, thus enabling unified generative pre-training of\nvideos, images, and text. At inference, the generated tokens from the LLM are\ncarefully recovered to the original continuous pixel space to create various\nvideo content. Our proposed framework is both capable of comprehending and\ngenerating image and video content, as demonstrated by its competitive\nperformance across 13 multimodal benchmarks in image and video understanding\nand generation. Our code and models are available at\nhttps://video-lavit.github.io.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Juhee Lee", "Jewon Kang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f661"}, "filepath": "data/2403.16162.png", "tags": [], "_media_type": "image", "_rand": 0.9998634006549301, "arXiv_link": "https://arxiv.org/abs/2403.16162", "other_link": "", "title": "Quantifying Task Priority for Multi-Task Optimization", "abstract": "Multi-task learning solves multiple correlated tasks. However, conflicts may\nexist between them. In such circumstances, a single solution can rarely\noptimize all the tasks, leading to performance trade-offs. To arrive at a set\nof optimized yet well-distributed models that collectively embody different\ntrade-offs in one algorithmic pass, this paper proposes to view Pareto\nmulti-task learning through the lens of multi-task optimization. Multi-task\nlearning is first cast as a multi-objective optimization problem, which is then\ndecomposed into a diverse set of unconstrained scalar-valued subproblems. These\nsubproblems are solved jointly using a novel multi-task gradient descent\nmethod, whose uniqueness lies in the iterative transfer of model parameters\namong the subproblems during the course of optimization. A theorem proving\nfaster convergence through the inclusion of such transfers is presented. 
We\ninvestigate the proposed multi-task learning with multi-task optimization for\nsolving various problem settings including image classification, scene\nunderstanding, and multi-target regression. Comprehensive experiments confirm\nthat the proposed method significantly advances the state-of-the-art in\ndiscovering sets of Pareto-optimized models. Notably, on the large image\ndataset we tested on, namely NYUv2, the hypervolume convergence achieved by our\nmethod was found to be nearly two times faster than the next-best among the\nstate-of-the-art.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Wooseong Jeong", "Kuk-Jin Yoon"], "category_name": "Artificial Intelligence", "all_categories": ["Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f662"}, "filepath": "data/2306.08832.png", "tags": [], "_media_type": "image", "_rand": 0.9990993916305361, "arXiv_link": "https://arxiv.org/abs/2306.08832", "other_link": "https://github.com/lezhang7/Enhance-FineGrained.", "title": "Contrasting intra-modal and ranking cross-modal hard negatives to enhance visio-linguistic compositional understanding", "abstract": "Vision-Language Models (VLMs), such as CLIP, exhibit strong image-text\ncomprehension abilities, facilitating advances in several downstream tasks such\nas zero-shot image classification, image-text retrieval, and text-to-image\ngeneration. However, the compositional reasoning abilities of existing VLMs\nremains subpar. The root of this limitation lies in the inadequate alignment\nbetween the images and captions in the pretraining datasets. Additionally, the\ncurrent contrastive learning objective fails to focus on fine-grained grounding\ncomponents like relations, actions, and attributes, resulting in \"bag-of-words\"\nrepresentations. We introduce a simple and effective method to improve\ncompositional reasoning in VLMs. Our method better leverages available datasets\nby refining and expanding the standard image-text contrastive learning\nframework. Our approach does not require specific annotations and does not\nincur extra parameters. When integrated with CLIP, our technique yields notable\nimprovement over state-of-the-art baselines across five vision-language\ncompositional benchmarks. We open-source our code at\nhttps://github.com/lezhang7/Enhance-FineGrained.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Le Zhang", "Rabiul Awal", "Aishwarya Agrawal"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f663"}, "filepath": "data/2402.19231.png", "tags": [], "_media_type": "image", "_rand": 0.9993386653384628, "arXiv_link": "https://arxiv.org/abs/2402.19231", "other_link": "https://github.com/Lu-Feng/CricaVPR.", "title": "CricaVPR: Cross-image Correlation-aware Representation Learning for Visual Place Recognition", "abstract": "Over the past decade, most methods in visual place recognition (VPR) have\nused neural networks to produce feature representations. These networks\ntypically produce a global representation of a place image using only this\nimage itself and neglect the cross-image variations (e.g. viewpoint and\nillumination), which limits their robustness in challenging scenes. 
In this\npaper, we propose a robust global representation method with cross-image\ncorrelation awareness for VPR, named CricaVPR. Our method uses the attention\nmechanism to correlate multiple images within a batch. These images can be\ntaken in the same place with different conditions or viewpoints, or even\ncaptured from different places. Therefore, our method can utilize the\ncross-image variations as a cue to guide the representation learning, which\nensures more robust features are produced. To further facilitate the\nrobustness, we propose a multi-scale convolution-enhanced adaptation method to\nadapt pre-trained visual foundation models to the VPR task, which introduces\nthe multi-scale local information to further enhance the cross-image\ncorrelation-aware representation. Experimental results show that our method\noutperforms state-of-the-art methods by a large margin with significantly less\ntraining time. The code is released at https://github.com/Lu-Feng/CricaVPR.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Feng Lu", "Xiangyuan Lan", "Lijun Zhang", "Dongmei Jiang", "Yaowei Wang", "Chun Yuan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f664"}, "filepath": "data/2403.17173.png", "tags": [], "_media_type": "image", "_rand": 0.9992072734551986, "arXiv_link": "https://arxiv.org/abs/2403.17173", "other_link": "", "title": "Task2Box: Box Embeddings for Modeling Asymmetric Task Relationships", "abstract": "Modeling and visualizing relationships between tasks or datasets is an\nimportant step towards solving various meta-tasks such as dataset discovery,\nmulti-tasking, and transfer learning. However, many relationships, such as\ncontainment and transferability, are naturally asymmetric and current\napproaches for representation and visualization (e.g., t-SNE) do not readily\nsupport this. We propose Task2Box, an approach to represent tasks using box\nembeddings -- axis-aligned hyperrectangles in low dimensional spaces -- that\ncan capture asymmetric relationships between them through volumetric overlaps.\nWe show that Task2Box accurately predicts unseen hierarchical relationships\nbetween nodes in ImageNet and iNaturalist datasets, as well as transferability\nbetween tasks in the Taskonomy benchmark. We also show that box embeddings\nestimated from task representations (e.g., CLIP, Task2Vec, or attribute based)\ncan be used to predict relationships between unseen tasks more accurately than\nclassifiers trained on the same representations, as well as handcrafted\nasymmetric distances (e.g., KL divergence). This suggests that low-dimensional\nbox embeddings can effectively capture these task relationships and have the\nadded advantage of being interpretable. 
We use the approach to visualize\nrelationships among publicly available image classification datasets on popular\ndataset hosting platform called Hugging Face.", "keywords": [], "authors_list": ["Rangel Daroya", "Aaron Sun", "Subhransu Maji"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f665"}, "filepath": "data/2403.15234.png", "tags": [], "_media_type": "image", "_rand": 0.9993663842063532, "arXiv_link": "https://arxiv.org/abs/2403.15234", "other_link": "https://github.com/bcmi/Object-Shadow-Generation-Dataset-DESOBAv2.", "title": "Shadow Generation for Composite Image Using Diffusion Model", "abstract": "In the realm of image composition, generating realistic shadow for the\ninserted foreground remains a formidable challenge. Previous works have\ndeveloped image-to-image translation models which are trained on paired\ntraining data. However, they are struggling to generate shadows with accurate\nshapes and intensities, hindered by data scarcity and inherent task complexity.\nIn this paper, we resort to foundation model with rich prior knowledge of\nnatural shadow images. Specifically, we first adapt ControlNet to our task and\nthen propose intensity modulation modules to improve the shadow intensity.\nMoreover, we extend the small-scale DESOBA dataset to DESOBAv2 using a novel\ndata acquisition pipeline. Experimental results on both DESOBA and DESOBAv2\ndatasets as well as real composite images demonstrate the superior capability\nof our model for shadow generation task. The dataset, code, and model are\nreleased at https://github.com/bcmi/Object-Shadow-Generation-Dataset-DESOBAv2.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Qingyang Liu", "Junqi You", "Jian-Ting Wang", "Xinhao Tao", "Bo Zhang", "Li Niu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f666"}, "filepath": "data/2405.18715.png", "tags": [], "_media_type": "image", "_rand": 0.9997884901053073, "arXiv_link": "https://arxiv.org/abs/2405.18715", "other_link": "", "title": "NeRF On-the-go: Exploiting Uncertainty for Distractor-free NeRFs in the Wild", "abstract": "Neural Radiance Fields (NeRFs) have shown remarkable success in synthesizing\nphotorealistic views from multi-view images of static scenes, but face\nchallenges in dynamic, real-world environments with distractors like moving\nobjects, shadows, and lighting changes. Existing methods manage controlled\nenvironments and low occlusion ratios but fall short in render quality,\nespecially under high occlusion scenarios. In this paper, we introduce NeRF\nOn-the-go, a simple yet effective approach that enables the robust synthesis of\nnovel views in complex, in-the-wild scenes from only casually captured image\nsequences. Delving into uncertainty, our method not only efficiently eliminates\ndistractors, even when they are predominant in captures, but also achieves a\nnotably faster convergence speed. Through comprehensive experiments on various\nscenes, our method demonstrates a significant improvement over state-of-the-art\ntechniques. 
This advancement opens new avenues for NeRF in diverse and dynamic\nreal-world applications.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Weining Ren", "Zihan Zhu", "Boyang Sun", "Jiaqi Chen", "Marc Pollefeys", "Songyou Peng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f667"}, "filepath": "data/2310.03744.png", "tags": [], "_media_type": "image", "_rand": 0.9990087548501823, "arXiv_link": "https://arxiv.org/abs/2310.03744", "other_link": "", "title": "Improved Baselines with Visual Instruction Tuning", "abstract": "Large multimodal models (LMM) have recently shown encouraging progress with\nvisual instruction tuning. In this note, we show that the fully-connected\nvision-language cross-modal connector in LLaVA is surprisingly powerful and\ndata-efficient. With simple modifications to LLaVA, namely, using\nCLIP-ViT-L-336px with an MLP projection and adding academic-task-oriented VQA\ndata with simple response formatting prompts, we establish stronger baselines\nthat achieve state-of-the-art across 11 benchmarks. Our final 13B checkpoint\nuses merely 1.2M publicly available data, and finishes full training in ~1 day\non a single 8-A100 node. We hope this can make state-of-the-art LMM research\nmore accessible. Code and model will be publicly available.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Haotian Liu", "Chunyuan Li", "Yuheng Li", "Yong Jae Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f668"}, "filepath": "data/2312.13313.png", "tags": [], "_media_type": "image", "_rand": 0.9992278019274312, "arXiv_link": "https://arxiv.org/abs/2312.13313", "other_link": "", "title": "ParamISP: Learned Forward and Inverse ISPs using Camera Parameters", "abstract": "RAW images are rarely shared mainly due to its excessive data size compared\nto their sRGB counterparts obtained by camera ISPs. Learning the forward and\ninverse processes of camera ISPs has been recently demonstrated, enabling\nphysically-meaningful RAW-level image processing on input sRGB images. However,\nexisting learning-based ISP methods fail to handle the large variations in the\nISP processes with respect to camera parameters such as ISO and exposure time,\nand have limitations when used for various applications. In this paper, we\npropose ParamISP, a learning-based method for forward and inverse conversion\nbetween sRGB and RAW images, that adopts a novel neural-network module to\nutilize camera parameters, which is dubbed as ParamNet. Given the camera\nparameters provided in the EXIF data, ParamNet converts them into a feature\nvector to control the ISP networks. 
Extensive experiments demonstrate that\nParamISP achieves superior RAW and sRGB reconstruction results compared to\nprevious methods and it can be effectively used for a variety of applications\nsuch as deblurring dataset synthesis, raw deblurring, HDR reconstruction, and\ncamera-to-camera transfer.", "keywords": ["Low-level vision", "Computational imaging and physics-based vision"], "authors_list": ["Woohyeok Kim", "Geonu Kim", "Junyong Lee", "Seungyong Lee", "Seung-Hwan Baek", "Sunghyun Cho"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f669"}, "filepath": "data/2312.00784.png", "tags": [], "_media_type": "image", "_rand": 0.9995137625025478, "arXiv_link": "https://arxiv.org/abs/2312.00784", "other_link": "", "title": "ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts", "abstract": "While existing large vision-language multimodal models focus on whole image\nunderstanding, there is a prominent gap in achieving region-specific\ncomprehension. Current approaches that use textual coordinates or spatial\nencodings often fail to provide a user-friendly interface for visual prompting.\nTo address this challenge, we introduce a novel multimodal model capable of\ndecoding arbitrary visual prompts. This allows users to intuitively mark images\nand interact with the model using natural cues like a \"red bounding box\" or\n\"pointed arrow\". Our simple design directly overlays visual markers onto the\nRGB image, eliminating the need for complex region encodings, yet achieves\nstate-of-the-art performance on region-understanding tasks like Visual7W,\nPointQA, and Visual Commonsense Reasoning benchmark. Furthermore, we present\nViP-Bench, a comprehensive benchmark to assess the capability of models in\nunderstanding visual prompts across multiple dimensions, enabling future\nresearch in this domain. Code, data, and model are publicly available.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Mu Cai", "Haotian Liu", "Siva Mustikovela", "Gregory P. Meyer", "Yuning Chai", "Dennis Park", "Yong Jae Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f66a"}, "filepath": "data/2311.13681.png", "tags": [], "_media_type": "image", "_rand": 0.9990112722068109, "arXiv_link": "https://arxiv.org/abs/2311.13681", "other_link": "https://maincold2.github.io/c3dgs/.", "title": "Compact 3D Gaussian Representation for Radiance Field", "abstract": "Neural Radiance Fields (NeRFs) have demonstrated remarkable potential in\ncapturing complex 3D scenes with high fidelity. However, one persistent\nchallenge that hinders the widespread adoption of NeRFs is the computational\nbottleneck due to the volumetric rendering. On the other hand, 3D Gaussian\nsplatting (3DGS) has recently emerged as an alternative representation that\nleverages a 3D Gaussian-based representation and adopts the rasterization\npipeline to render the images rather than volumetric rendering, achieving very\nfast rendering speed and promising image quality.
However, a significant\ndrawback arises as 3DGS entails a substantial number of 3D Gaussians to\nmaintain the high fidelity of the rendered images, which requires a large\namount of memory and storage. To address this critical issue, we place a\nspecific emphasis on two key objectives: reducing the number of Gaussian points\nwithout sacrificing performance and compressing the Gaussian attributes, such\nas view-dependent color and covariance. To this end, we propose a learnable\nmask strategy that significantly reduces the number of Gaussians while\npreserving high performance. In addition, we propose a compact but effective\nrepresentation of view-dependent color by employing a grid-based neural field\nrather than relying on spherical harmonics. Finally, we learn codebooks to\ncompactly represent the geometric attributes of Gaussian by vector\nquantization. With model compression techniques such as quantization and\nentropy coding, we consistently show over 25$\\times$ reduced storage and\nenhanced rendering speed, while maintaining the quality of the scene\nrepresentation, compared to 3DGS. Our work provides a comprehensive framework\nfor 3D scene representation, achieving high performance, fast training,\ncompactness, and real-time rendering. Our project page is available at\nhttps://maincold2.github.io/c3dgs/.", "keywords": ["Efficient and scalable vision", "Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Joo Chan Lee", "Daniel Rho", "Xiangyu Sun", "Jong Hwan Ko", "Eunbyung Park"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f66b"}, "filepath": "data/2404.10242.png", "tags": [], "_media_type": "image", "_rand": 0.9997478908897857, "arXiv_link": "https://arxiv.org/abs/2404.10242", "other_link": "", "title": "Masked Autoencoders for Microscopy are Scalable Learners of Cellular Biology", "abstract": "Featurizing microscopy images for use in biological research remains a\nsignificant challenge, especially for large-scale experiments spanning millions\nof images. This work explores the scaling properties of weakly supervised\nclassifiers and self-supervised masked autoencoders (MAEs) when training with\nincreasingly larger model backbones and microscopy datasets. Our results show\nthat ViT-based MAEs outperform weakly supervised classifiers on a variety of\ntasks, achieving as much as a 11.5% relative improvement when recalling known\nbiological relationships curated from public databases. Additionally, we\ndevelop a new channel-agnostic MAE architecture (CA-MAE) that allows for\ninputting images of different numbers and orders of channels at inference time.\nWe demonstrate that CA-MAEs effectively generalize by inferring and evaluating\non a microscopy image dataset (JUMP-CP) generated under different experimental\nconditions with a different channel structure than our pretraining data\n(RPI-93M). 
Our findings motivate continued research into scaling\nself-supervised learning on microscopy data in order to create powerful\nfoundation models of cellular biology that have the potential to catalyze\nadvancements in drug discovery and beyond.", "keywords": ["Efficient and scalable vision", "Medical imaging and biological vision"], "authors_list": ["Oren Kraus", "Kian Kenyon-Dean", "Saber Saberian", "Maryam Fallah", "Peter McLean", "Jess Leung", "Vasudev Sharma", "Ayla Khan", "Jia Balakrishnan", "Safiye Celik", "Dominique Beaini", "Maciej Sypetkowski", "Chi Cheng", "Kristen Morse", "Maureen Makes", "Ben Mabey", "Berton Earnshaw"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f66c"}, "filepath": "data/2402.18078.png", "tags": [], "_media_type": "image", "_rand": 0.9996538349019238, "arXiv_link": "https://arxiv.org/abs/2402.18078", "other_link": "https://github.com/YanzuoLu/CFLD.", "title": "Coarse-to-Fine Latent Diffusion for Pose-Guided Person Image Synthesis", "abstract": "Diffusion model is a promising approach to image generation and has been\nemployed for Pose-Guided Person Image Synthesis (PGPIS) with competitive\nperformance. While existing methods simply align the person appearance to the\ntarget pose, they are prone to overfitting due to the lack of a high-level\nsemantic understanding on the source person image. In this paper, we propose a\nnovel Coarse-to-Fine Latent Diffusion (CFLD) method for PGPIS. In the absence\nof image-caption pairs and textual prompts, we develop a novel training\nparadigm purely based on images to control the generation process of a\npre-trained text-to-image diffusion model. A perception-refined decoder is\ndesigned to progressively refine a set of learnable queries and extract\nsemantic understanding of person images as a coarse-grained prompt. This allows\nfor the decoupling of fine-grained appearance and pose information controls at\ndifferent stages, and thus circumventing the potential overfitting problem. To\ngenerate more realistic texture details, a hybrid-granularity attention module\nis proposed to encode multi-scale fine-grained appearance features as bias\nterms to augment the coarse-grained prompt. Both quantitative and qualitative\nexperimental results on the DeepFashion benchmark demonstrate the superiority\nof our method over the state of the arts for PGPIS. Code is available at\nhttps://github.com/YanzuoLu/CFLD.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Yanzuo Lu", "Manlin Zhang", "Jinhua Ma", "Xiaohua Xie", "Jianhuang Lai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f66d"}, "filepath": "data/2312.13216.png", "tags": [], "_media_type": "image", "_rand": 0.999138196714825, "arXiv_link": "https://arxiv.org/abs/2312.13216", "other_link": "", "title": "Improving Semantic Correspondence with Viewpoint-Guided Spherical Maps", "abstract": "Recent progress in self-supervised representation learning has resulted in\nmodels that are capable of extracting image features that are not only\neffective at encoding image level, but also pixel-level, semantics. 
These\nfeatures have been shown to be effective for dense visual semantic\ncorrespondence estimation, even outperforming fully-supervised methods.\nNevertheless, current self-supervised approaches still fail in the presence of\nchallenging image characteristics such as symmetries and repeated parts. To\naddress these limitations, we propose a new approach for semantic\ncorrespondence estimation that supplements discriminative self-supervised\nfeatures with 3D understanding via a weak geometric spherical prior. Compared\nto more involved 3D pipelines, our model only requires weak viewpoint\ninformation, and the simplicity of our spherical representation enables us to\ninject informative geometric priors into the model during training. We propose\na new evaluation metric that better accounts for repeated-part and\nsymmetry-induced mistakes. We present results on the challenging SPair-71k\ndataset, where we show that our approach is capable of\ndistinguishing between symmetric views and repeated parts across many object\ncategories, and also demonstrate that we can generalize to unseen classes on\nthe AwA dataset.", "keywords": [], "authors_list": ["Octave Mariotti", "Oisin Mac Aodha", "Hakan Bilen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f66e"}, "filepath": "data/2404.19174.png", "tags": [], "_media_type": "image", "_rand": 0.999970071485449, "arXiv_link": "https://arxiv.org/abs/2404.19174", "other_link": "", "title": "XFeat: Accelerated Features for Lightweight Image Matching", "abstract": "We introduce a lightweight and accurate architecture for resource-efficient\nvisual correspondence. Our method, dubbed XFeat (Accelerated Features),\nrevisits fundamental design choices in convolutional neural networks for\ndetecting, extracting, and matching local features. Our new model satisfies a\ncritical need for fast and robust algorithms suitable to resource-limited\ndevices. In particular, accurate image matching requires sufficiently large\nimage resolutions - for this reason, we keep the resolution as large as\npossible while limiting the number of channels in the network. Besides, our\nmodel is designed to offer the choice of matching at the sparse or semi-dense\nlevels, each of which may be more suitable for different downstream\napplications, such as visual navigation and augmented reality. Our model is the\nfirst to offer semi-dense matching efficiently, leveraging a novel match\nrefinement module that relies on coarse local descriptors. XFeat is versatile\nand hardware-independent, surpassing current deep learning-based local features\nin speed (up to 5x faster) with comparable or better accuracy, proven in pose\nestimation and visual localization. We showcase it running in real-time on an\ninexpensive laptop CPU without specialized hardware optimizations. Code and\nweights are available at www.verlab.dcc.ufmg.br/descriptors/xfeat_cvpr24.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Guilherme Potje", "Felipe Cadar", "Andr\u00e9 Araujo", "Renato Martins", "Erickson R. 
Nascimento"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f66f"}, "filepath": "data/2404.00815.png", "tags": [], "_media_type": "image", "_rand": 0.9997560889531585, "arXiv_link": "https://arxiv.org/abs/2404.00815", "other_link": "", "title": "Towards Realistic Scene Generation with LiDAR Diffusion Models", "abstract": "Diffusion models (DMs) excel in photo-realistic image synthesis, but their\nadaptation to LiDAR scene generation poses a substantial hurdle. This is\nprimarily because DMs operating in the point space struggle to preserve the\ncurve-like patterns and 3D geometry of LiDAR scenes, which consumes much of\ntheir representation power. In this paper, we propose LiDAR Diffusion Models\n(LiDMs) to generate LiDAR-realistic scenes from a latent space tailored to\ncapture the realism of LiDAR scenes by incorporating geometric priors into the\nlearning pipeline. Our method targets three major desiderata: pattern realism,\ngeometry realism, and object realism. Specifically, we introduce curve-wise\ncompression to simulate real-world LiDAR patterns, point-wise coordinate\nsupervision to learn scene geometry, and patch-wise encoding for a full 3D\nobject context. With these three core designs, our method achieves competitive\nperformance on unconditional LiDAR generation in 64-beam scenario and state of\nthe art on conditional LiDAR generation, while maintaining high efficiency\ncompared to point-based DMs (up to 107$\\times$ faster). Furthermore, by\ncompressing LiDAR scenes into a latent space, we enable the controllability of\nDMs with various conditions such as semantic maps, camera views, and text\nprompts.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Haoxi Ran", "Vitor Guizilini", "Yue Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f670"}, "filepath": "data/2402.19481.png", "tags": [], "_media_type": "image", "_rand": 0.9994177517372406, "arXiv_link": "https://arxiv.org/abs/2402.19481", "other_link": "https://github.com/mit-han-lab/distrifuser.", "title": "DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models", "abstract": "Diffusion models have achieved great success in synthesizing high-quality\nimages. However, generating high-resolution images with diffusion models is\nstill challenging due to the enormous computational costs, resulting in a\nprohibitive latency for interactive applications. In this paper, we propose\nDistriFusion to tackle this problem by leveraging parallelism across multiple\nGPUs. Our method splits the model input into multiple patches and assigns each\npatch to a GPU. However, naively implementing such an algorithm breaks the\ninteraction between patches and loses fidelity, while incorporating such an\ninteraction will incur tremendous communication overhead. 
To overcome this\ndilemma, we observe the high similarity between the input from adjacent\ndiffusion steps and propose displaced patch parallelism, which takes advantage\nof the sequential nature of the diffusion process by reusing the pre-computed\nfeature maps from the previous timestep to provide context for the current\nstep. Therefore, our method supports asynchronous communication, which can be\npipelined by computation. Extensive experiments show that our method can be\napplied to recent Stable Diffusion XL with no quality degradation and achieve\nup to a 6.1$\\times$ speedup on eight NVIDIA A100s compared to one. Our code is\npublicly available at https://github.com/mit-han-lab/distrifuser.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Muyang Li", "Tianle Cai", "Jiaxin Cao", "Qinsheng Zhang", "Han Cai", "Junjie Bai", "Yangqing Jia", "Kai Li", "Song Han"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f671"}, "filepath": "data/2403.02626.png", "tags": [], "_media_type": "image", "_rand": 0.9993084748173013, "arXiv_link": "https://arxiv.org/abs/2403.02626", "other_link": "", "title": "Modeling Collaborator: Enabling Subjective Vision Classification With Minimal Human Effort via LLM Tool-Use", "abstract": "From content moderation to wildlife conservation, the number of applications\nthat require models to recognize nuanced or subjective visual concepts is\ngrowing. Traditionally, developing classifiers for such concepts requires\nsubstantial manual effort measured in hours, days, or even months to identify\nand annotate data needed for training. Even with recently proposed Agile\nModeling techniques, which enable rapid bootstrapping of image classifiers,\nusers are still required to spend 30 minutes or more of monotonous, repetitive\ndata labeling just to train a single classifier. Drawing on Fiske's Cognitive\nMiser theory, we propose a new framework that alleviates manual effort by\nreplacing human labeling with natural language interactions, reducing the total\neffort required to define a concept by an order of magnitude: from labeling\n2,000 images to only 100 plus some natural language interactions. Our framework\nleverages recent advances in foundation models, both large language models and\nvision-language models, to carve out the concept space through conversation and\nby automatically labeling training data points. Most importantly, our framework\neliminates the need for crowd-sourced annotations. Moreover, our framework\nultimately produces lightweight classification models that are deployable in\ncost-sensitive scenarios. 
Across 15 subjective concepts and across 2 public\nimage classification datasets, our trained models outperform traditional Agile\nModeling as well as state-of-the-art zero-shot classification models like\nALIGN, CLIP, CuPL, and large visual question-answering models like PaLI-X.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Imad Eddine Toubal", "Aditya Avinash", "Neil Alldrin", "Jan Dlabal", "Wenlei Zhou", "Enming Luo", "Otilia Stretcu", "Hao Xiong", "Chun-Ta Lu", "Howard Zhou", "Ranjay Krishna", "Ariel Fuxman", "Tom Duerig"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f672"}, "filepath": "data/2403.17749.png", "tags": [], "_media_type": "image", "_rand": 0.9992450127172562, "arXiv_link": "https://arxiv.org/abs/2403.17749", "other_link": "https://github.com/YuqiYang213/MLoRE.", "title": "Multi-Task Dense Prediction via Mixture of Low-Rank Experts", "abstract": "Previous multi-task dense prediction methods based on the Mixture of Experts\n(MoE) have received great performance but they neglect the importance of\nexplicitly modeling the global relations among all tasks. In this paper, we\npresent a novel decoder-focused method for multi-task dense prediction, called\nMixture-of-Low-Rank-Experts (MLoRE). To model the global task relationships,\nMLoRE adds a generic convolution path to the original MoE structure, where each\ntask feature can go through this path for explicit parameter sharing.\nFurthermore, to control the parameters and computational cost brought by the\nincrease in the number of experts, we take inspiration from LoRA and propose to\nleverage the low-rank format of a vanilla convolution in the expert network.\nSince the low-rank experts have fewer parameters and can be dynamically\nparameterized into the generic convolution, the parameters and computational\ncost do not change much with the increase of experts. Benefiting from this\ndesign, we increase the number of experts and its reception field to enlarge\nthe representation capacity, facilitating multiple dense tasks learning in a\nunified network. Extensive experiments on the PASCAL-Context and NYUD-v2\nbenchmarks show that our MLoRE achieves superior performance compared to\nprevious state-of-the-art methods on all metrics. Our code is available at\nhttps://github.com/YuqiYang213/MLoRE.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Yuqi Yang", "Peng-Tao Jiang", "Qibin Hou", "Hao Zhang", "Jinwei Chen", "Bo Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f673"}, "filepath": "data/2303.02835.png", "tags": [], "_media_type": "image", "_rand": 0.9999676245271943, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2303.02835", "other_link": "https://github.com/PengtaoJiang/TSP6K.", "title": "Traffic Scene Parsing through the TSP6K Dataset", "abstract": "Traffic scene perception in computer vision is a critically important task to\nachieve intelligent cities. To date, most existing datasets focus on autonomous\ndriving scenes. 
We observe that the models trained on those driving datasets\noften yield unsatisfactory results on traffic monitoring scenes. However,\nlittle effort has been put into improving the traffic monitoring scene\nunderstanding, mainly due to the lack of specific datasets. To fill this gap,\nwe introduce a specialized traffic monitoring dataset, termed TSP6K, containing\nimages from the traffic monitoring scenario, with high-quality pixel-level and\ninstance-level annotations. The TSP6K dataset captures more crowded traffic\nscenes with several times more traffic participants than the existing driving\nscenes. We perform a detailed analysis of the dataset and comprehensively\nevaluate previous popular scene parsing methods, instance segmentation methods\nand unsupervised domain adaption methods. Furthermore, considering the vast\ndifference in instance sizes, we propose a detail refining decoder for scene\nparsing, which recovers the details of different semantic regions in traffic\nscenes owing to the proposed TSP6K dataset. Experiments show its effectiveness\nin parsing the traffic monitoring scenes. Code and dataset are available at\nhttps://github.com/PengtaoJiang/TSP6K.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Peng-Tao Jiang", "Yuqi Yang", "Yang Cao", "Qibin Hou", "Ming-Ming Cheng", "Chunhua Shen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f674"}, "filepath": "data/2311.12386.png", "tags": [], "_media_type": "image", "_rand": 0.9997152165641722, "arXiv_link": "https://arxiv.org/abs/2311.12386", "other_link": "https://github.com/Hzzone/PseCo", "title": "Point, Segment and Count: A Generalized Framework for Object Counting", "abstract": "Class-agnostic object counting aims to count all objects in an image with\nrespect to example boxes or class names, \\emph{a.k.a} few-shot and zero-shot\ncounting. In this paper, we propose a generalized framework for both few-shot\nand zero-shot object counting based on detection. Our framework combines the\nsuperior advantages of two foundation models without compromising their\nzero-shot capability: (\\textbf{i}) SAM to segment all possible objects as mask\nproposals, and (\\textbf{ii}) CLIP to classify proposals to obtain accurate\nobject counts. However, this strategy meets the obstacles of efficiency\noverhead and the small crowded objects that cannot be localized and\ndistinguished. To address these issues, our framework, termed PseCo, follows\nthree steps: point, segment, and count. Specifically, we first propose a\nclass-agnostic object localization to provide accurate but least point prompts\nfor SAM, which consequently not only reduces computation costs but also avoids\nmissing small objects. Furthermore, we propose a generalized object\nclassification that leverages CLIP image/text embeddings as the classifier,\nfollowing a hierarchical knowledge distillation to obtain discriminative\nclassifications among hierarchical mask proposals. Extensive experimental\nresults on FSC-147, COCO, and LVIS demonstrate that PseCo achieves\nstate-of-the-art performance in both few-shot/zero-shot object\ncounting/detection. 
Code: https://github.com/Hzzone/PseCo", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Zhizhong Huang", "Mingliang Dai", "Yi Zhang", "Junping Zhang", "Hongming Shan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f675"}, "filepath": "data/2312.03526.png", "tags": [], "_media_type": "image", "_rand": 0.9991037731306973, "arXiv_link": "https://arxiv.org/abs/2312.03526", "other_link": "", "title": "On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm", "abstract": "Contemporary machine learning requires training large neural networks on\nmassive datasets and thus faces the challenges of high computational demands.\nDataset distillation, as a recent emerging strategy, aims to compress\nreal-world datasets for efficient training. However, this line of research\ncurrently struggles with large-scale and high-resolution datasets, hindering its\npracticality and feasibility. To this end, we re-examine the existing dataset\ndistillation methods and identify three properties required for large-scale\nreal-world applications, namely, realism, diversity, and efficiency. As a\nremedy, we propose RDED, a novel computationally-efficient yet effective data\ndistillation paradigm, to enable both diversity and realism of the distilled\ndata. Extensive empirical results over various neural architectures and\ndatasets demonstrate the advancement of RDED: we can distill the full\nImageNet-1K to a small dataset comprising 10 images per class within 7 minutes,\nachieving a notable 42% top-1 accuracy with ResNet-18 on a single RTX-4090 GPU\n(while the SOTA only achieves 21% but requires 6 hours).", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Peng Sun", "Bei Shi", "Daiwei Yu", "Tao Lin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f676"}, "filepath": "data/2403.08768.png", "tags": [], "_media_type": "image", "_rand": 0.9992589099349745, "arXiv_link": "https://arxiv.org/abs/2403.08768", "other_link": "", "title": "3DFIRES: Few Image 3D REconstruction for Scenes with Hidden Surfaces", "abstract": "This paper introduces 3DFIRES, a novel system for scene-level 3D\nreconstruction from posed images. Designed to work with as few as one view,\n3DFIRES reconstructs the complete geometry of unseen scenes, including hidden\nsurfaces. With multiple view inputs, our method produces full reconstruction\nwithin all camera frustums. A key feature of our approach is the fusion of\nmulti-view information at the feature level, enabling the production of\ncoherent and comprehensive 3D reconstruction. We train our system on\nnon-watertight scans from a large-scale real scene dataset. 
We show it matches\nthe efficacy of single-view reconstruction methods with only one input and\nsurpasses existing techniques in both quantitative and qualitative measures for\nsparse-view 3D reconstruction.", "keywords": ["Scene analysis and understanding", "Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Linyi Jin", "Nilesh Kulkarni", "David Fouhey"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f677"}, "filepath": "data/2402.17483.png", "tags": [], "_media_type": "image", "_rand": 0.9996141499112087, "arXiv_link": "https://arxiv.org/abs/2402.17483", "other_link": "", "title": "AlignMiF: Geometry-Aligned Multimodal Implicit Field for Enhanced LiDAR-Camera Joint Synthesis", "abstract": "Neural implicit fields have been a de facto standard in novel view synthesis.\nRecently, there exist some methods exploring fusing multiple modalities within\na single field, aiming to share implicit features from different modalities to\nenhance reconstruction performance. However, these modalities often exhibit\nmisaligned behaviors: optimizing for one modality, such as LiDAR, can adversely\naffect another, like camera performance, and vice versa. In this work, we\nconduct comprehensive analyses on the multimodal implicit field of LiDAR-camera\njoint synthesis, revealing the underlying issue lies in the misalignment of\ndifferent sensors. Furthermore, we introduce AlignMiF, a geometrically aligned\nmultimodal implicit field with two proposed modules: Geometry-Aware Alignment\n(GAA) and Shared Geometry Initialization (SGI). These modules effectively align\nthe coarse geometry across different modalities, significantly enhancing the\nfusion process between LiDAR and camera data. Through extensive experiments\nacross various datasets and scenes, we demonstrate the effectiveness of our\napproach in facilitating better interaction between LiDAR and camera modalities\nwithin a unified neural field. Specifically, our proposed AlignMiF, achieves\nremarkable improvement over recent implicit fusion methods (+2.01 and +3.11\nimage PSNR on the KITTI-360 and Waymo datasets) and consistently surpasses\nsingle modality performance (13.8% and 14.2% reduction in LiDAR Chamfer\nDistance on the respective datasets).", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models"], "authors_list": ["Tao Tang", "Guangrun Wang", "Yixing Lao", "Peng Chen", "Jie Liu", "Liang Lin", "Kaicheng Yu", "Xiaodan Liang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f678"}, "filepath": "data/2309.04228.png", "tags": [], "_media_type": "image", "_rand": 0.9994164023083375, "arXiv_link": "https://arxiv.org/abs/2309.04228", "other_link": "", "title": "Facial Identity Anonymization via Intrinsic and Extrinsic Attention Distraction", "abstract": "In this paper, we present a new approach for facial anonymization in images\nand videos, abbreviated as FIVA. Our proposed method is able to maintain the\nsame face anonymization consistently over frames with our suggested\nidentity-tracking and guarantees a strong difference from the original face.\nFIVA allows for 0 true positives for a false acceptance rate of 0.001. 
Our work\nconsiders the important security issue of reconstruction attacks and\ninvestigates adversarial noise, uniform noise, and parameter noise to disrupt\nreconstruction attacks. In this regard, we apply different defense and\nprotection methods against these privacy threats to demonstrate the scalability\nof FIVA. On top of this, we also show that reconstruction attack models can be\nused for detection of deep fakes. Last but not least, we provide experimental\nresults showing how FIVA can even enable face swapping, which is purely trained\non a single target image.", "keywords": ["Image and video generation and manipulation", "Efficient and scalable vision"], "authors_list": ["Zhenzhong Kuang", "Xiaochen Yang", "Yingjie Shen", "Chao Hu", "Jun Yu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f679"}, "filepath": "data/2405.06880.png", "tags": [], "_media_type": "image", "_rand": 0.9993638165587526, "arXiv_link": "https://arxiv.org/abs/2405.06880", "other_link": "https://github.com/SLDGroup/EMCAD.", "title": "EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation", "abstract": "An efficient and effective decoding mechanism is crucial in medical image\nsegmentation, especially in scenarios with limited computational resources.\nHowever, these decoding mechanisms usually come with high computational costs.\nTo address this concern, we introduce EMCAD, a new efficient multi-scale\nconvolutional attention decoder, designed to optimize both performance and\ncomputational efficiency. EMCAD leverages a unique multi-scale depth-wise\nconvolution block, significantly enhancing feature maps through multi-scale\nconvolutions. EMCAD also employs channel, spatial, and grouped (large-kernel)\ngated attention mechanisms, which are highly effective at capturing intricate\nspatial relationships while focusing on salient regions. By employing group and\ndepth-wise convolution, EMCAD is very efficient and scales well (e.g., only\n1.91M parameters and 0.381G FLOPs are needed when using a standard encoder).\nOur rigorous evaluations across 12 datasets that belong to six medical image\nsegmentation tasks reveal that EMCAD achieves state-of-the-art (SOTA)\nperformance with 79.4% and 80.3% reduction in #Params and #FLOPs, respectively.\nMoreover, EMCAD's adaptability to different encoders and versatility across\nsegmentation tasks further establish EMCAD as a promising tool, advancing the\nfield towards more efficient and accurate medical image analysis. Our\nimplementation is available at https://github.com/SLDGroup/EMCAD.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Md Mostafijur Rahman", "Mustafa Munir", "Radu Marculescu"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f67a"}, "filepath": "data/2403.18913.png", "tags": [], "_media_type": "image", "_rand": 0.9991100605571636, "arXiv_link": "https://arxiv.org/abs/2403.18913", "other_link": "https://github.com/lpiccinelli-eth/unidepth", "title": "UniDepth: Universal Monocular Metric Depth Estimation", "abstract": "Accurate monocular metric depth estimation (MMDE) is crucial to solving\ndownstream tasks in 3D perception and modeling. 
However, the remarkable\naccuracy of recent MMDE methods is confined to their training domains. These\nmethods fail to generalize to unseen domains even in the presence of moderate\ndomain gaps, which hinders their practical applicability. We propose a new\nmodel, UniDepth, capable of reconstructing metric 3D scenes from solely single\nimages across domains. Departing from the existing MMDE methods, UniDepth\ndirectly predicts metric 3D points from the input image at inference time\nwithout any additional information, striving for a universal and flexible MMDE\nsolution. In particular, UniDepth implements a self-promptable camera module\npredicting dense camera representation to condition depth features. Our model\nexploits a pseudo-spherical output representation, which disentangles camera\nand depth representations. In addition, we propose a geometric invariance loss\nthat promotes the invariance of camera-prompted depth features. Thorough\nevaluations on ten datasets in a zero-shot regime consistently demonstrate the\nsuperior performance of UniDepth, even when compared with methods directly\ntrained on the testing domains. Code and models are available at:\nhttps://github.com/lpiccinelli-eth/unidepth", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Luigi Piccinelli", "Yung-Hsu Yang", "Christos Sakaridis", "Mattia Segu", "Siyuan Li", "Luc Van Gool", "Fisher Yu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f67b"}, "filepath": "data/2306.16772.png", "tags": [], "_media_type": "image", "_rand": 0.9992677782982946, "arXiv_link": "https://arxiv.org/abs/2306.16772", "other_link": "http://cjerry1243.github.io/M3Act.", "title": "Learning from Synthetic Human Group Activities", "abstract": "The study of complex human interactions and group activities has become a\nfocal point in human-centric computer vision. However, progress in related\ntasks is often hindered by the challenges of obtaining large-scale labeled\ndatasets from real-world scenarios. To address the limitation, we introduce\nM3Act, a synthetic data generator for multi-view multi-group multi-person human\natomic actions and group activities. Powered by Unity Engine, M3Act features\nmultiple semantic groups, highly diverse and photorealistic images, and a\ncomprehensive set of annotations, which facilitates the learning of\nhuman-centered tasks across single-person, multi-person, and multi-group\nconditions. We demonstrate the advantages of M3Act across three core\nexperiments. The results suggest our synthetic dataset can significantly\nimprove the performance of several downstream methods and replace real-world\ndatasets to reduce cost. Notably, M3Act improves the state-of-the-art MOTRv2 on\nDanceTrack dataset, leading to a hop on the leaderboard from 10th to 2nd place.\nMoreover, M3Act opens new research for controllable 3D group activity\ngeneration. We define multiple metrics and propose a competitive baseline for\nthe novel task. 
Our code and data are available at our project page:\nhttp://cjerry1243.github.io/M3Act.", "keywords": ["Biometrics and human analysis", "Scene analysis and understanding"], "authors_list": ["Che-Jui Chang", "Danrui Li", "Deep Patel", "Parth Goel", "Seonghyeon Moon", "Samuel Sohn", "Honglu Zhou", "Sejong Yoon", "Vladimir Pavlovic", "Mubbasir Kapadia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f67c"}, "filepath": "data/2404.08958.png", "tags": [], "_media_type": "image", "_rand": 0.999588736614097, "arXiv_link": "https://arxiv.org/abs/2404.08958", "other_link": "", "title": "AMU-Tuning: Learning Effective Bias for CLIP-based Few-shot Classification", "abstract": "Recently, pre-trained vision-language models (e.g., CLIP) have shown great\npotential in few-shot learning and attracted a lot of research interest.\nAlthough efforts have been made to improve few-shot ability of CLIP, key\nfactors on the effectiveness of existing methods have not been well studied,\nlimiting further exploration of CLIP's potential in few-shot learning. In this\npaper, we first introduce a unified formulation to analyze CLIP-based few-shot\nlearning methods from a perspective of logit bias, which encourages us to learn\nan effective logit bias for further improving performance of CLIP-based\nfew-shot learning methods. To this end, we disassemble three key components\ninvolved in computation of logit bias (i.e., logit features, logit predictor,\nand logit fusion) and empirically analyze the effect on performance of few-shot\nclassification. Based on analysis of key components, this paper proposes a\nnovel AMU-Tuning method to learn effective logit bias for CLIP-based few-shot\nclassification. Specifically, our AMU-Tuning predicts logit bias by exploiting\nthe appropriate $\\underline{\\textbf{A}}$uxiliary features, which are fed into\nan efficient feature-initialized linear classifier with\n$\\underline{\\textbf{M}}$ulti-branch training. Finally, an\n$\\underline{\\textbf{U}}$ncertainty-based fusion is developed to incorporate\nlogit bias into CLIP for few-shot classification. The experiments are conducted\non several widely used benchmarks, and the results show AMU-Tuning clearly\noutperforms its counterparts while achieving state-of-the-art performance of\nCLIP-based few-shot learning without bells and whistles.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Yuwei Tang", "ZhenYi Lin", "Qilong Wang", "Pengfei Zhu", "Qinghua Hu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f67d"}, "filepath": "data/2312.05664.png", "tags": [], "_media_type": "image", "_rand": 0.9998162574836125, "arXiv_link": "https://arxiv.org/abs/2312.05664", "other_link": "", "title": "CoGS: Controllable Gaussian Splatting", "abstract": "Capturing and re-animating the 3D structure of articulated objects present\nsignificant barriers. On one hand, methods requiring extensively calibrated\nmulti-view setups are prohibitively complex and resource-intensive, limiting\ntheir practical applicability. 
On the other hand, while single-camera Neural\nRadiance Fields (NeRFs) offer a more streamlined approach, they have excessive\ntraining and rendering costs. 3D Gaussian Splatting would be a suitable\nalternative but for two reasons. Firstly, existing methods for 3D dynamic\nGaussians require synchronized multi-view cameras, and secondly, the lack of\ncontrollability in dynamic scenarios. We present CoGS, a method for\nControllable Gaussian Splatting, that enables the direct manipulation of scene\nelements, offering real-time control of dynamic scenes without the prerequisite\nof pre-computing control signals. We evaluated CoGS using both synthetic and\nreal-world datasets that include dynamic objects that differ in degree of\ndifficulty. In our evaluations, CoGS consistently outperformed existing dynamic\nand controllable neural representations in terms of visual fidelity.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Heng Yu", "Joel Julin", "Zolt\u00e1n \u00c1. Milacski", "Koichiro Niinuma", "L\u00e1szl\u00f3 A. Jeni"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f67e"}, "filepath": "data/2312.14235.png", "tags": [], "_media_type": "image", "_rand": 0.9994508208231151, "arXiv_link": "https://arxiv.org/abs/2312.14235", "other_link": "", "title": "Neural Spline Fields for Burst Image Fusion and Layer Separation", "abstract": "Each photo in an image burst can be considered a sample of a complex 3D\nscene: the product of parallax, diffuse and specular materials, scene motion,\nand illuminant variation. While decomposing all of these effects from a stack\nof misaligned images is a highly ill-conditioned task, the conventional\nalign-and-merge burst pipeline takes the other extreme: blending them into a\nsingle image. In this work, we propose a versatile intermediate representation:\na two-layer alpha-composited image plus flow model constructed with neural\nspline fields -- networks trained to map input coordinates to spline control\npoints. Our method is able to, during test-time optimization, jointly fuse a\nburst image capture into one high-resolution reconstruction and decompose it\ninto transmission and obstruction layers. Then, by discarding the obstruction\nlayer, we can perform a range of tasks including seeing through occlusions,\nreflection suppression, and shadow removal. 
Validated on complex synthetic and\nin-the-wild captures we find that, with no post-processing steps or learned\npriors, our generalizable model is able to outperform existing dedicated\nsingle-image and multi-view obstruction removal approaches.", "keywords": ["Low-level vision", "Image and video generation and manipulation"], "authors_list": ["Ilya Chugunov", "David Shustin", "Ruyu Yan", "Chenyang Lei", "Felix Heide"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f67f"}, "filepath": "data/2310.08529v3.png", "tags": [], "_media_type": "image", "_rand": 0.9995438721474043, "arXiv_link": "https://arxiv.org/abs/2310.08529v3", "other_link": "https://taoranyi.com/gaussiandreamer/.", "title": "GaussianDreamer: Fast Generation from Text to 3D Gaussians by Bridging 2D and 3D Diffusion Models", "abstract": "In recent times, the generation of 3D assets from text prompts has shown\nimpressive results. Both 2D and 3D diffusion models can help generate decent 3D\nobjects based on prompts. 3D diffusion models have good 3D consistency, but\ntheir quality and generalization are limited as trainable 3D data is expensive\nand hard to obtain. 2D diffusion models enjoy strong abilities of\ngeneralization and fine generation, but 3D consistency is hard to guarantee.\nThis paper attempts to bridge the power from the two types of diffusion models\nvia the recent explicit and efficient 3D Gaussian splatting representation. A\nfast 3D object generation framework, named as GaussianDreamer, is proposed,\nwhere the 3D diffusion model provides priors for initialization and the 2D\ndiffusion model enriches the geometry and appearance. Operations of noisy point\ngrowing and color perturbation are introduced to enhance the initialized\nGaussians. Our GaussianDreamer can generate a high-quality 3D instance or 3D\navatar within 15 minutes on one GPU, much faster than previous methods, while\nthe generated instances can be directly rendered in real time. Demos and code\nare available at https://taoranyi.com/gaussiandreamer/.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Taoran Yi", "Jiemin Fang", "Junjie Wang", "Guanjun Wu", "Lingxi Xie", "Xiaopeng Zhang", "Wenyu Liu", "Qi Tian", "Xinggang Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f680"}, "filepath": "data/2403.01598.png", "tags": [], "_media_type": "image", "_rand": 0.9993320420121081, "arXiv_link": "https://arxiv.org/abs/2403.01598", "other_link": "", "title": "APISR: Anime Production Inspired Real-World Anime Super-Resolution", "abstract": "While real-world anime super-resolution (SR) has gained increasing attention\nin the SR community, existing methods still adopt techniques from the\nphotorealistic domain. In this paper, we analyze the anime production workflow\nand rethink how to use characteristics of it for the sake of the real-world\nanime SR. First, we argue that video networks and datasets are not necessary\nfor anime SR due to the repetition use of hand-drawing frames. Instead, we\npropose an anime image collection pipeline by choosing the least compressed and\nthe most informative frames from the video sources. 
Based on this pipeline, we\nintroduce the Anime Production-oriented Image (API) dataset. In addition, we\nidentify two anime-specific challenges of distorted and faint hand-drawn lines\nand unwanted color artifacts. We address the first issue by introducing a\nprediction-oriented compression module in the image degradation model and a\npseudo-ground truth preparation with enhanced hand-drawn lines. In addition, we\nintroduce the balanced twin perceptual loss combining both anime and\nphotorealistic high-level features to mitigate unwanted color artifacts and\nincrease visual clarity. We evaluate our method through extensive experiments\non the public benchmark, showing our method outperforms state-of-the-art anime\ndataset-trained approaches.", "keywords": ["Low-level vision"], "authors_list": ["Boyang Wang", "Fengyu Yang", "Xihang Yu", "Chao Zhang", "Hanbin Zhao"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f681"}, "filepath": "data/2402.18467.png", "tags": [], "_media_type": "image", "_rand": 0.9994852285958934, "arXiv_link": "https://arxiv.org/abs/2402.18467", "other_link": "https://github.com/zwyang6/SeCo.git.", "title": "Separate and Conquer: Decoupling Co-occurrence via Decomposition and Representation for Weakly Supervised Semantic Segmentation", "abstract": "Weakly supervised semantic segmentation (WSSS) with image-level labels aims\nto achieve segmentation tasks without dense annotations. However, attributed to\nthe frequent coupling of co-occurring objects and the limited supervision from\nimage-level labels, the challenging co-occurrence problem is widely present and\nleads to false activation of objects in WSSS. In this work, we devise a\n'Separate and Conquer' scheme SeCo to tackle this issue from dimensions of\nimage space and feature space. In the image space, we propose to 'separate' the\nco-occurring objects with image decomposition by subdividing images into\npatches. Importantly, we assign each patch a category tag from Class Activation\nMaps (CAMs), which spatially helps remove the co-context bias and guide the\nsubsequent representation. In the feature space, we propose to 'conquer' the\nfalse activation by enhancing semantic representation with multi-granularity\nknowledge contrast. To this end, a dual-teacher-single-student architecture is\ndesigned and tag-guided contrast is conducted, which guarantee the correctness\nof knowledge and further facilitate the discrepancy among co-contexts. We\nstreamline the multi-staged WSSS pipeline end-to-end and tackle this issue\nwithout external supervision. Extensive experiments are conducted, validating\nthe efficiency of our method and the superiority over previous single-staged\nand even multi-staged competitors on PASCAL VOC and MS COCO. 
Code is available\nat https://github.com/zwyang6/SeCo.git.", "keywords": [], "authors_list": ["Zhiwei Yang", "Kexue Fu", "Minghong Duan", "Linhao Qu", "Shuo Wang", "Zhijian Song"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f682"}, "filepath": "data/2402.05408.png", "tags": [], "_media_type": "image", "_rand": 0.9994384343536861, "arXiv_link": "https://arxiv.org/abs/2402.05408", "other_link": "https://migcproject.github.io/.", "title": "MIGC: Multi-Instance Generation Controller for Text-to-Image Synthesis", "abstract": "We present a Multi-Instance Generation (MIG) task, simultaneously generating\nmultiple instances with diverse controls in one image. Given a set of\npredefined coordinates and their corresponding descriptions, the task is to\nensure that generated instances are accurately at the designated locations and\nthat all instances' attributes adhere to their corresponding description. This\nbroadens the scope of current research on Single-instance generation, elevating\nit to a more versatile and practical dimension. Inspired by the idea of divide\nand conquer, we introduce an innovative approach named Multi-Instance\nGeneration Controller (MIGC) to address the challenges of the MIG task.\nInitially, we break down the MIG task into several subtasks, each involving the\nshading of a single instance. To ensure precise shading for each instance, we\nintroduce an instance enhancement attention mechanism. Lastly, we aggregate all\nthe shaded instances to provide the necessary information for accurately\ngenerating multiple instances in stable diffusion (SD). To evaluate how well\ngeneration models perform on the MIG task, we provide a COCO-MIG benchmark\nalong with an evaluation pipeline. Extensive experiments were conducted on the\nproposed COCO-MIG benchmark, as well as on various commonly used benchmarks.\nThe evaluation results illustrate the exceptional control capabilities of our\nmodel in terms of quantity, position, attribute, and interaction. Code and\ndemos will be released at https://migcproject.github.io/.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Dewei Zhou", "You Li", "Fan Ma", "Xiaoting Zhang", "Yi Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f683"}, "filepath": "data/2311.17597.png", "tags": [], "_media_type": "image", "_rand": 0.9995219359795832, "arXiv_link": "https://arxiv.org/abs/2311.17597", "other_link": "https://github.com/yeerwen/MedCoSS.", "title": "Continual Self-supervised Learning: Towards Universal Multi-modal Medical Data Representation Learning", "abstract": "Self-supervised learning is an efficient pre-training method for medical\nimage analysis. However, current research is mostly confined to\nspecific-modality data pre-training, consuming considerable time and resources\nwithout achieving universality across different modalities. A straightforward\nsolution is combining all modality data for joint self-supervised pre-training,\nwhich poses practical challenges. Firstly, our experiments reveal conflicts in\nrepresentation learning as the number of modalities increases. Secondly,\nmulti-modal data collected in advance cannot cover all real-world scenarios. 
In\nthis paper, we reconsider versatile self-supervised learning from the\nperspective of continual learning and propose MedCoSS, a continuous\nself-supervised learning approach for multi-modal medical data. Unlike joint\nself-supervised learning, MedCoSS assigns different modality data to different\ntraining stages, forming a multi-stage pre-training process. To balance modal\nconflicts and prevent catastrophic forgetting, we propose a rehearsal-based\ncontinual learning method. We introduce the k-means sampling strategy to retain\ndata from previous modalities and rehearse it when learning new modalities.\nInstead of executing the pretext task on buffer data, a feature distillation\nstrategy and an intra-modal mixup strategy are applied to these data for\nknowledge retention. We conduct continuous self-supervised pre-training on a\nlarge-scale multi-modal unlabeled dataset, including clinical reports, X-rays,\nCT scans, MRI scans, and pathological images. Experimental results demonstrate\nMedCoSS's exceptional generalization ability across nine downstream datasets\nand its significant scalability in integrating new modality data. Code and\npre-trained weight are available at https://github.com/yeerwen/MedCoSS.", "keywords": ["Multimodal models and vision-language models", "Medical imaging and biological vision"], "authors_list": ["Yiwen Ye", "Yutong Xie", "Jianpeng Zhang", "Ziyang Chen", "Qi Wu", "Yong Xia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f684"}, "filepath": "data/2403.17935.png", "tags": [], "_media_type": "image", "_rand": 0.9999591211578089, "arXiv_link": "https://arxiv.org/abs/2403.17935", "other_link": "https://github.com/wangjk666/OmniVid.", "title": "OmniVid: A Generative Framework for Universal Video Understanding", "abstract": "The core of video understanding tasks, such as recognition, captioning, and\ntracking, is to automatically detect objects or actions in a video and analyze\ntheir temporal evolution. Despite sharing a common goal, different tasks often\nrely on distinct model architectures and annotation formats. In contrast,\nnatural language processing benefits from a unified output space, i.e., text\nsequences, which simplifies the training of powerful foundational language\nmodels, such as GPT-3, with extensive training corpora. Inspired by this, we\nseek to unify the output space of video understanding tasks by using languages\nas labels and additionally introducing time and box tokens. In this way, a\nvariety of video tasks could be formulated as video-grounded token generation.\nThis enables us to address various types of video tasks, including\nclassification (such as action recognition), captioning (covering clip\ncaptioning, video question answering, and dense video captioning), and\nlocalization tasks (such as visual object tracking) within a fully shared\nencoder-decoder architecture, following a generative framework. Through\ncomprehensive experiments, we demonstrate such a simple and straightforward\nidea is quite effective and can achieve state-of-the-art or competitive results\non seven video benchmarks, providing a novel perspective for more universal\nvideo understanding. 
Code is available at https://github.com/wangjk666/OmniVid.", "keywords": ["Scene analysis and understanding", "Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Junke Wang", "Dongdong Chen", "Chong Luo", "Bo He", "Lu Yuan", "Zuxuan Wu", "Yu-Gang Jiang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f685"}, "filepath": "data/2312.00598.png", "tags": [], "_media_type": "image", "_rand": 0.9994766421879491, "arXiv_link": "https://arxiv.org/abs/2312.00598", "other_link": "", "title": "Learning from One Continuous Video Stream", "abstract": "We introduce a framework for online learning from a single continuous video\nstream -- the way people and animals learn, without mini-batches, data\naugmentation or shuffling. This poses great challenges given the high\ncorrelation between consecutive video frames and there is very little prior\nwork on it. Our framework allows us to do a first deep dive into the topic and\nincludes a collection of streams and tasks composed from two existing video\ndatasets, plus methodology for performance evaluation that considers both\nadaptation and generalization. We employ pixel-to-pixel modelling as a\npractical and flexible way to switch between pre-training and single-stream\nevaluation as well as between arbitrary tasks, without ever requiring changes\nto models and always using the same pixel loss. Equipped with this framework we\nobtained large single-stream learning gains from pre-training with a novel\nfamily of future prediction tasks, found that momentum hurts, and that the pace\nof weight updates matters. The combination of these insights leads to matching\nthe performance of IID learning with batch size 1, when using the same\narchitecture and without costly replay buffers.", "keywords": [], "authors_list": ["Joao Carreira", "Michael King", "Viorica Patraucean", "Dilara Gokay", "Catalin Ionescu", "Yi Yang", "Daniel Zoran", "Joseph Heyward", "Carl Doersch", "Yusuf Aytar", "Dima Damen", "Andrew Zisserman"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f686"}, "filepath": "data/2403.16205.png", "tags": [], "_media_type": "image", "_rand": 0.9990731565893093, "arXiv_link": "https://arxiv.org/abs/2403.16205", "other_link": "https://zero1778.github.io/blur2blur/", "title": "Blur2Blur: Blur Conversion for Unsupervised Image Deblurring on Unknown Domains", "abstract": "This paper presents an innovative framework designed to train an image\ndeblurring algorithm tailored to a specific camera device. This algorithm works\nby transforming a blurry input image, which is challenging to deblur, into\nanother blurry image that is more amenable to deblurring. The transformation\nprocess, from one blurry state to another, leverages unpaired data consisting\nof sharp and blurry images captured by the target camera device. Learning this\nblur-to-blur transformation is inherently simpler than direct blur-to-sharp\nconversion, as it primarily involves modifying blur patterns rather than the\nintricate task of reconstructing fine image details. 
The efficacy of the\nproposed approach has been demonstrated through comprehensive experiments on\nvarious benchmarks, where it significantly outperforms state-of-the-art methods\nboth quantitatively and qualitatively. Our code and data are available at\nhttps://zero1778.github.io/blur2blur/", "keywords": ["Low-level vision"], "authors_list": ["Bang-Dang Pham", "Phong Tran", "Anh Tran", "Cuong Pham", "Rang Nguyen", "Minh Hoai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f687"}, "filepath": "data/2403.01431.png", "tags": [], "_media_type": "image", "_rand": 0.9996589719265114, "arXiv_link": "https://arxiv.org/abs/2403.01431", "other_link": "", "title": "D3still: Decoupled Differential Distillation for Asymmetric Image Retrieval", "abstract": "The task of composed image retrieval (CIR) aims to retrieve images based on\nthe query image and the text describing the users' intent. Existing methods\nhave made great progress with the advanced large vision-language (VL) model in\nCIR task, however, they generally suffer from two main issues: lack of labeled\ntriplets for model training and difficulty of deployment on resource-restricted\nenvironments when deploying the large vision-language model. To tackle the\nabove problems, we propose Image2Sentence based Asymmetric zero-shot composed\nimage retrieval (ISA), which takes advantage of the VL model and only relies on\nunlabeled images for composition learning. In the framework, we propose a new\nadaptive token learner that maps an image to a sentence in the word embedding\nspace of VL model. The sentence adaptively captures discriminative visual\ninformation and is further integrated with the text modifier. An asymmetric\nstructure is devised for flexible deployment, in which the lightweight model is\nadopted for the query side while the large VL model is deployed on the gallery\nside. The global contrastive distillation and the local alignment\nregularization are adopted for the alignment between the light model and the VL\nmodel for CIR task. Our experiments demonstrate that the proposed ISA could\nbetter cope with the real retrieval scenarios and further improve retrieval\naccuracy and efficiency.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Yi Xie", "Yihong Lin", "Wenjie Cai", "Xuemiao Xu", "Huaidong Zhang", "Yong Du", "Shengfeng He"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f688"}, "filepath": "data/2405.04408.png", "tags": [], "_media_type": "image", "_rand": 0.9999332769851765, "arXiv_link": "https://arxiv.org/abs/2405.04408", "other_link": "https://github.com/ZZZHANG-jx/DocRes", "title": "DocRes: A Generalist Model Toward Unifying Document Image Restoration Tasks", "abstract": "Document image restoration is a crucial aspect of Document AI systems, as the\nquality of document images significantly influences the overall performance.\nPrevailing methods address distinct restoration tasks independently, leading to\nintricate systems and the incapability to harness the potential synergies of\nmulti-task learning. 
To overcome this challenge, we propose DocRes, a\ngeneralist model that unifies five document image restoration tasks including\ndewarping, deshadowing, appearance enhancement, deblurring, and binarization.\nTo instruct DocRes to perform various restoration tasks, we propose a novel\nvisual prompt approach called Dynamic Task-Specific Prompt (DTSPrompt). The\nDTSPrompt for different tasks comprises distinct prior features, which are\nadditional characteristics extracted from the input image. Beyond its role as a\ncue for task-specific execution, DTSPrompt can also serve as supplementary\ninformation to enhance the model's performance. Moreover, DTSPrompt is more\nflexible than prior visual prompt approaches as it can be seamlessly applied\nand adapted to inputs with high and variable resolutions. Experimental results\ndemonstrate that DocRes achieves competitive or superior performance compared\nto existing state-of-the-art task-specific models. This underscores the\npotential of DocRes across a broader spectrum of document image restoration\ntasks. The source code is publicly available at\nhttps://github.com/ZZZHANG-jx/DocRes", "keywords": ["Document analysis and understanding", "Low-level vision"], "authors_list": ["Jiaxin Zhang", "Dezhi Peng", "Chongyu Liu", "Peirong Zhang", "Lianwen Jin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f689"}, "filepath": "data/2404.06065.png", "tags": [], "_media_type": "image", "_rand": 0.9998246134571677, "arXiv_link": "https://arxiv.org/abs/2404.06065", "other_link": "https://github.com/gaozhengqing/UniEnt", "title": "Unified Entropy Optimization for Open-Set Test-Time Adaptation", "abstract": "Test-time adaptation (TTA) aims at adapting a model pre-trained on the\nlabeled source domain to the unlabeled target domain. Existing methods usually\nfocus on improving TTA performance under covariate shifts, while neglecting\nsemantic shifts. In this paper, we delve into a realistic open-set TTA setting\nwhere the target domain may contain samples from unknown classes. Many\nstate-of-the-art closed-set TTA methods perform poorly when applied to open-set\nscenarios, which can be attributed to the inaccurate estimation of data\ndistribution and model confidence. To address these issues, we propose a simple\nbut effective framework called unified entropy optimization (UniEnt), which is\ncapable of simultaneously adapting to covariate-shifted in-distribution (csID)\ndata and detecting covariate-shifted out-of-distribution (csOOD) data.\nSpecifically, UniEnt first mines pseudo-csID and pseudo-csOOD samples from test\ndata, followed by entropy minimization on the pseudo-csID data and entropy\nmaximization on the pseudo-csOOD data. Furthermore, we introduce UniEnt+ to\nalleviate the noise caused by hard data partition leveraging sample-level\nconfidence. Extensive experiments on CIFAR benchmarks and Tiny-ImageNet-C show\nthe superiority of our framework. 
The code is available at\nhttps://github.com/gaozhengqing/UniEnt", "keywords": [], "authors_list": ["Zhengqing Gao", "Xu-Yao Zhang", "Cheng-Lin Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f68a"}, "filepath": "data/2402.17228.png", "tags": [], "_media_type": "image", "_rand": 0.9993138717764344, "arXiv_link": "https://arxiv.org/abs/2402.17228", "other_link": "https://github.com/DearCaat/RRT-MIL.", "title": "Feature Re-Embedding: Towards Foundation Model-Level Performance in Computational Pathology", "abstract": "Multiple instance learning (MIL) is the most widely used framework in\ncomputational pathology, encompassing sub-typing, diagnosis, prognosis, and\nmore. However, the existing MIL paradigm typically requires an offline instance\nfeature extractor, such as a pre-trained ResNet or a foundation model. This\napproach lacks the capability for feature fine-tuning within the specific\ndownstream tasks, limiting its adaptability and performance. To address this\nissue, we propose a Re-embedded Regional Transformer (R$^2$T) for re-embedding\nthe instance features online, which captures fine-grained local features and\nestablishes connections across different regions. Unlike existing works that\nfocus on pre-training powerful feature extractor or designing sophisticated\ninstance aggregator, R$^2$T is tailored to re-embed instance features online.\nIt serves as a portable module that can seamlessly integrate into mainstream\nMIL models. Extensive experimental results on common computational pathology\ntasks validate that: 1) feature re-embedding improves the performance of MIL\nmodels based on ResNet-50 features to the level of foundation model features,\nand further enhances the performance of foundation model features; 2) the\nR$^2$T can introduce more significant performance improvements to various MIL\nmodels; 3) R$^2$T-MIL, as an R$^2$T-enhanced AB-MIL, outperforms other latest\nmethods by a large margin.The code is available at:\nhttps://github.com/DearCaat/RRT-MIL.", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Wenhao Tang", "Fengtao ZHOU", "Sheng Huang", "Xiang Zhu", "Yi Zhang", "Bo Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f68b"}, "filepath": "data/2312.10136.png", "tags": [], "_media_type": "image", "_rand": 0.999319488921663, "arXiv_link": "https://arxiv.org/abs/2312.10136", "other_link": "", "title": "Gradient-based Parameter Selection for Efficient Fine-Tuning", "abstract": "With the growing size of pre-trained models, full fine-tuning and storing all\nthe parameters for various downstream tasks is costly and infeasible. In this\npaper, we propose a new parameter-efficient fine-tuning method, Gradient-based\nParameter Selection (GPS), demonstrating that only tuning a few selected\nparameters from the pre-trained model while keeping the remainder of the model\nfrozen can generate similar or better performance compared with the full model\nfine-tuning method. Different from the existing popular and state-of-the-art\nparameter-efficient fine-tuning approaches, our method does not introduce any\nadditional parameters and computational costs during both the training and\ninference stages. 
Another advantage is the model-agnostic and non-destructive\nproperty, which eliminates the need for any other design specific to a\nparticular model. Compared with the full fine-tuning, GPS achieves 3.33%\n(91.78% vs. 88.45%, FGVC) and 9.61% (73.1% vs. 65.57%, VTAB) improvement of the\naccuracy with tuning only 0.36% parameters of the pre-trained model on average\nover 24 image classification tasks; it also demonstrates a significant\nimprovement of 17% and 16.8% in mDice and mIoU, respectively, on medical image\nsegmentation task. Moreover, GPS achieves state-of-the-art performance compared\nwith existing PEFT methods.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Zhi Zhang", "Qizhe Zhang", "Zijun Gao", "Renrui Zhang", "Ekaterina Shutova", "Shiji Zhou", "Shanghang Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f68c"}, "filepath": "data/2405.02608.png", "tags": [], "_media_type": "image", "_rand": 0.9990530812720196, "arXiv_link": "https://arxiv.org/abs/2405.02608", "other_link": "", "title": "UnSAMFlow: Unsupervised Optical Flow Guided by Segment Anything Model", "abstract": "Traditional unsupervised optical flow methods are vulnerable to occlusions\nand motion boundaries due to lack of object-level information. Therefore, we\npropose UnSAMFlow, an unsupervised flow network that also leverages object\ninformation from the latest foundation model Segment Anything Model (SAM). We\nfirst include a self-supervised semantic augmentation module tailored to SAM\nmasks. We also analyze the poor gradient landscapes of traditional smoothness\nlosses and propose a new smoothness definition based on homography instead. A\nsimple yet effective mask feature module has also been added to further\naggregate features on the object level. With all these adaptations, our method\nproduces clear optical flow estimation with sharp boundaries around objects,\nwhich outperforms state-of-the-art methods on both KITTI and Sintel datasets.\nOur method also generalizes well across domains and runs very efficiently.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Shuai Yuan", "Lei Luo", "Zhuo Hui", "Can Pu", "Xiaoyu Xiang", "Rakesh Ranjan", "Denis Demandolx"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f68d"}, "filepath": "data/2403.13548.png", "tags": [], "_media_type": "image", "_rand": 0.9993711340609545, "arXiv_link": "https://arxiv.org/abs/2403.13548", "other_link": "", "title": "Diversity-aware Channel Pruning for StyleGAN Compression", "abstract": "StyleGAN has shown remarkable performance in unconditional image generation.\nHowever, its high computational cost poses a significant challenge for\npractical applications. Although recent efforts have been made to compress\nStyleGAN while preserving its performance, existing compressed models still lag\nbehind the original model, particularly in terms of sample diversity. To\novercome this, we propose a novel channel pruning method that leverages varying\nsensitivities of channels to latent vectors, which is a key factor in sample\ndiversity. 
Specifically, by assessing channel importance based on their\nsensitivities to latent vector perturbations, our method enhances the diversity\nof samples in the compressed model. Since our method solely focuses on the\nchannel pruning stage, it has complementary benefits with prior training\nschemes without additional training cost. Extensive experiments demonstrate\nthat our method significantly enhances sample diversity across various\ndatasets. Moreover, in terms of FID scores, our method not only surpasses\nstate-of-the-art by a large margin but also achieves comparable scores with\nonly half training iterations.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Jiwoo Chung", "Sangeek Hyun", "Sang-Heon Shim", "Jae-Pil Heo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f68e"}, "filepath": "data/2403.04492.png", "tags": [], "_media_type": "image", "_rand": 0.999026040167941, "arXiv_link": "https://arxiv.org/abs/2403.04492", "other_link": "https://github.com/rashindrie/DIPA.", "title": "Discriminative Sample-Guided and Parameter-Efficient Feature Space Adaptation for Cross-Domain Few-Shot Learning", "abstract": "In this paper, we look at cross-domain few-shot classification which presents\nthe challenging task of learning new classes in previously unseen domains with\nfew labelled examples. Existing methods, though somewhat effective, encounter\nseveral limitations, which we alleviate through two significant improvements.\nFirst, we introduce a lightweight parameter-efficient adaptation strategy to\naddress overfitting associated with fine-tuning a large number of parameters on\nsmall datasets. This strategy employs a linear transformation of pre-trained\nfeatures, significantly reducing the trainable parameter count. Second, we\nreplace the traditional nearest centroid classifier with a discriminative\nsample-aware loss function, enhancing the model's sensitivity to the inter- and\nintra-class variances within the training set for improved clustering in\nfeature space. Empirical evaluations on the Meta-Dataset benchmark showcase\nthat our approach not only improves accuracy up to 7.7\\% and 5.3\\% on\npreviously seen and unseen datasets, respectively, but also achieves the above\nperformance while being at least $\\sim3\\times$ more parameter-efficient than\nexisting methods, establishing a new state-of-the-art in cross-domain few-shot\nlearning. Our code is available at https://github.com/rashindrie/DIPA.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Rashindrie Perera", "Saman Halgamuge"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f68f"}, "filepath": "data/2405.19646.png", "tags": [], "_media_type": "image", "_rand": 0.999274317204541, "arXiv_link": "https://arxiv.org/abs/2405.19646", "other_link": "https://davidcferman.github.io/FaceLift", "title": "FaceLift: Semi-supervised 3D Facial Landmark Localization", "abstract": "3D facial landmark localization has proven to be of particular use for\napplications, such as face tracking, 3D face modeling, and image-based 3D face\nreconstruction. 
In the supervised learning case, such methods usually rely on\n3D landmark datasets derived from 3DMM-based registration that often lack\nspatial definition alignment, as compared with that chosen by hand-labeled\nhuman consensus, e.g., how are eyebrow landmarks defined? This creates a gap\nbetween landmark datasets generated via high-quality 2D human labels and 3DMMs,\nand it ultimately limits their effectiveness. To address this issue, we\nintroduce a novel semi-supervised learning approach that learns 3D landmarks by\ndirectly lifting (visible) hand-labeled 2D landmarks and ensures better\ndefinition alignment, without the need for 3D landmark datasets. To lift 2D\nlandmarks to 3D, we leverage 3D-aware GANs for better multi-view consistency\nlearning and in-the-wild multi-frame videos for robust cross-generalization.\nEmpirical experiments demonstrate that our method not only achieves better\ndefinition alignment between 2D-3D landmarks but also outperforms other\nsupervised learning 3D landmark localization methods on both 3DMM labeled and\nphotogrammetric ground truth evaluation datasets. Project Page:\nhttps://davidcferman.github.io/FaceLift", "keywords": ["Biometrics and human analysis"], "authors_list": ["David Ferman", "Pablo Garrido", "Gaurav Bharaj"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f690"}, "filepath": "data/2310.13772.png", "tags": [], "_media_type": "image", "_rand": 0.9995448257904288, "arXiv_link": "https://arxiv.org/abs/2310.13772", "other_link": "", "title": "MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models", "abstract": "We present TexFusion (Texture Diffusion), a new method to synthesize textures\nfor given 3D geometries, using large-scale text-guided image diffusion models.\nIn contrast to recent works that leverage 2D text-to-image diffusion models to\ndistill 3D objects using a slow and fragile optimization process, TexFusion\nintroduces a new 3D-consistent generation technique specifically designed for\ntexture synthesis that employs regular diffusion model sampling on different 2D\nrendered views. Specifically, we leverage latent diffusion models, apply the\ndiffusion model's denoiser on a set of 2D renders of the 3D object, and\naggregate the different denoising predictions on a shared latent texture map.\nFinal output RGB textures are produced by optimizing an intermediate neural\ncolor field on the decodings of 2D renders of the latent texture. We thoroughly\nvalidate TexFusion and show that we can efficiently generate diverse, high\nquality and globally coherent textures. We achieve state-of-the-art text-guided\ntexture synthesis performance using only image diffusion models, while avoiding\nthe pitfalls of previous distillation-based methods. The text-conditioning\noffers detailed control and we also do not rely on any ground truth 3D textures\nfor training. This makes our method versatile and applicable to a broad range\nof geometry and texture types. 
We hope that TexFusion will advance AI-based\ntexturing of 3D assets for applications in virtual reality, game design,\nsimulation, and more.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Sanjoy Chowdhury", "Sayan Nag", "Joseph K J", "Balaji Vasan Srinivasan", "Dinesh Manocha"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f691"}, "filepath": "data/2405.16790.png", "tags": [], "_media_type": "image", "_rand": 0.9995597161841676, "arXiv_link": "https://arxiv.org/abs/2405.16790", "other_link": "https://github.com/Acnext/SCSim.", "title": "Intensity-Robust Autofocus for Spike Camera", "abstract": "Spike cameras, with their exceptional temporal resolution, are\nrevolutionizing high-speed visual applications. Large-scale synthetic datasets\nhave significantly accelerated the development of these cameras, particularly\nin reconstruction and optical flow. However, current synthetic datasets for\nspike cameras lack sophistication. Addressing this gap, we introduce SCSim, a\nnovel and more realistic spike camera simulator with a comprehensive noise\nmodel. SCSim is adept at autonomously generating driving scenarios and\nsynthesizing corresponding spike streams. To enhance the fidelity of these\nstreams, we've developed a comprehensive noise model tailored to the unique\ncircuitry of spike cameras. Our evaluations demonstrate that SCSim outperforms\nexisting simulation methods in generating authentic spike streams. Crucially,\nSCSim simplifies the creation of datasets, thereby greatly advancing\nspike-based visual tasks like reconstruction. Our project refers to\nhttps://github.com/Acnext/SCSim.", "keywords": [], "authors_list": ["Changqing Su", "Zhiyuan Ye", "Yongsheng Xiao", "You Zhou", "Zhen Cheng", "Bo Xiong", "Zhaofei Yu", "Tiejun Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f692"}, "filepath": "data/2311.15803.png", "tags": [], "_media_type": "image", "_rand": 0.9998063132679069, "arXiv_link": "https://arxiv.org/abs/2311.15803", "other_link": "", "title": "SOAC: Spatio-Temporal Overlap-Aware Multi-Sensor Calibration using Neural Radiance Fields", "abstract": "In rapidly-evolving domains such as autonomous driving, the use of multiple\nsensors with different modalities is crucial to ensure high operational\nprecision and stability. To correctly exploit the provided information by each\nsensor in a single common frame, it is essential for these sensors to be\naccurately calibrated. In this paper, we leverage the ability of Neural\nRadiance Fields (NeRF) to represent different sensors modalities in a common\nvolumetric representation to achieve robust and accurate spatio-temporal sensor\ncalibration. By designing a partitioning approach based on the visible part of\nthe scene for each sensor, we formulate the calibration problem using only the\noverlapping areas. This strategy results in a more robust and accurate\ncalibration that is less prone to failure. We demonstrate that our approach\nworks on outdoor urban scenes by validating it on multiple established driving\ndatasets. 
Results show that our method is able to get better accuracy and\nrobustness compared to existing methods.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Quentin HERAU", "Nathan Piasco", "Moussab Bennehar", "Luis Guillermo Roldao Jimenez", "Dzmitry Tsishkou", "Cyrille Migniot", "Mod\u00e9lisation Information Syst\u00e8mes", "Cedric Demonceaux"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f693"}, "filepath": "data/2311.17938.png", "tags": [], "_media_type": "image", "_rand": 0.9991651297445832, "arXiv_link": "https://arxiv.org/abs/2311.17938", "other_link": "", "title": "Active Open-Vocabulary Recognition: Let Intelligent Moving Mitigate CLIP Limitations", "abstract": "Active recognition, which allows intelligent agents to explore observations\nfor better recognition performance, serves as a prerequisite for various\nembodied AI tasks, such as grasping, navigation and room arrangements. Given\nthe evolving environment and the multitude of object classes, it is impractical\nto include all possible classes during the training stage. In this paper, we\naim at advancing active open-vocabulary recognition, empowering embodied agents\nto actively perceive and classify arbitrary objects. However, directly adopting\nrecent open-vocabulary classification models, like Contrastive Language Image\nPretraining (CLIP), poses its unique challenges. Specifically, we observe that\nCLIP's performance is heavily affected by the viewpoint and occlusions,\ncompromising its reliability in unconstrained embodied perception scenarios.\nFurther, the sequential nature of observations in agent-environment\ninteractions necessitates an effective method for integrating features that\nmaintains discriminative strength for open-vocabulary classification. To\naddress these issues, we introduce a novel agent for active open-vocabulary\nrecognition. The proposed method leverages inter-frame and inter-concept\nsimilarities to navigate agent movements and to fuse features, without relying\non class-specific knowledge. Compared to baseline CLIP model with 29.6%\naccuracy on ShapeNet dataset, the proposed agent could achieve 53.3% accuracy\nfor open-vocabulary recognition, without any fine-tuning to the equipped CLIP\nmodel. Additional experiments conducted with the Habitat simulator further\naffirm the efficacy of our method.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Lei Fan", "Jianxiong Zhou", "Xiaoying Xing", "Ying Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f694"}, "filepath": "data/2308.09302.png", "tags": [], "_media_type": "image", "_rand": 0.9990785536003957, "arXiv_link": "https://arxiv.org/abs/2308.09302", "other_link": "", "title": "2S-UDF: A Novel Two-stage UDF Learning Method for Robust Non-watertight Model Reconstruction from Multi-view Images", "abstract": "Robust audio anti-spoofing has been increasingly challenging due to the\nrecent advancements on deepfake techniques. 
While spectrograms have\ndemonstrated their capability for anti-spoofing, complementary information\npresented in multi-order spectral patterns have not been well explored, which\nlimits their effectiveness for varying spoofing attacks. Therefore, we propose\na novel deep learning method with a spectral fusion-reconstruction strategy,\nnamely S2pecNet, to utilise multi-order spectral patterns for robust audio\nanti-spoofing representations. Specifically, spectral patterns up to\nsecond-order are fused in a coarse-to-fine manner and two branches are designed\nfor the fine-level fusion from the spectral and temporal contexts. A\nreconstruction from the fused representation to the input spectrograms further\nreduces the potential fused information loss. Our method achieved the\nstate-of-the-art performance with an EER of 0.77% on a widely used dataset:\nASVspoof2019 LA Challenge.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Junkai Deng", "Fei Hou", "Xuhui Chen", "Wencheng Wang", "Ying He"], "category_name": "Sound", "all_categories": ["Sound", "Artificial Intelligence", "Multimedia", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f695"}, "filepath": "data/2402.15017.png", "tags": [], "_media_type": "image", "_rand": 0.9991559458985457, "arXiv_link": "https://arxiv.org/abs/2402.15017", "other_link": "https://github.com/OliverXUZY/Foudation-Model_Multitask.", "title": "Sheared Backpropagation for Finetuning Foundation Models", "abstract": "Foundation models have emerged as a powerful tool for many AI problems.\nDespite the tremendous success of foundation models, effective adaptation to\nnew tasks, particularly those with limited labels, remains an open question and\nlacks theoretical understanding. An emerging solution with recent success in\nvision and NLP involves finetuning a foundation model on a selection of\nrelevant tasks, before its adaptation to a target task with limited labeled\nsamples. In this paper, we study the theoretical justification of this\nmultitask finetuning approach. Our theoretical analysis reveals that with a\ndiverse set of related tasks, this multitask finetuning leads to reduced error\nin the target task, in comparison to directly adapting the same pretrained\nmodel. We quantify the relationship between finetuning tasks and target tasks\nby diversity and consistency metrics, and further propose a practical task\nselection algorithm. We substantiate our theoretical claims with extensive\nempirical evidence. Further, we present results affirming our task selection\nalgorithm adeptly chooses related finetuning tasks, providing advantages to the\nmodel performance on target tasks. We believe our study shed new light on the\neffective adaptation of foundation models to new tasks that lack abundant\nlabels. 
Our code is available at\nhttps://github.com/OliverXUZY/Foudation-Model_Multitask.", "keywords": [], "authors_list": ["Zhiyuan Yu", "Li Shen", "Liang Ding", "Xinmei Tian", "Yixin Chen", "Dacheng Tao"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f696"}, "filepath": "data/2403.20254.png", "tags": [], "_media_type": "image", "_rand": 0.9991598789376827, "arXiv_link": "https://arxiv.org/abs/2403.20254", "other_link": "https://github.com/Alvin-Zeng/temporal-robustness-benchmark.", "title": "Benchmarking the Robustness of Temporal Action Detection Models Against Temporal Corruptions", "abstract": "Temporal action detection (TAD) aims to locate action positions and recognize\naction categories in long-term untrimmed videos. Although many methods have\nachieved promising results, their robustness has not been thoroughly studied.\nIn practice, we observe that temporal information in videos can be occasionally\ncorrupted, such as missing or blurred frames. Interestingly, existing methods\noften incur a significant performance drop even if only one frame is affected.\nTo formally evaluate the robustness, we establish two temporal corruption\nrobustness benchmarks, namely THUMOS14-C and ActivityNet-v1.3-C. In this paper,\nwe extensively analyze the robustness of seven leading TAD methods and obtain\nsome interesting findings: 1) Existing methods are particularly vulnerable to\ntemporal corruptions, and end-to-end methods are often more susceptible than\nthose with a pre-trained feature extractor; 2) Vulnerability mainly comes from\nlocalization error rather than classification error; 3) When corruptions occur\nin the middle of an action instance, TAD models tend to yield the largest\nperformance drop. Besides building a benchmark, we further develop a simple but\neffective robust training method to defend against temporal corruptions,\nthrough the FrameDrop augmentation and Temporal-Robust Consistency loss.\nRemarkably, our approach not only improves robustness but also yields promising\nimprovements on clean data. We believe that this study will serve as a\nbenchmark for future research in robust video analysis. Source code and models\nare available at https://github.com/Alvin-Zeng/temporal-robustness-benchmark.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Runhao Zeng", "Xiaoyong Chen", "Jiaming Liang", "Huisi Wu", "Guang-Zhong Cao", "Yong Guo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f697"}, "filepath": "data/2401.00847.png", "tags": [], "_media_type": "image", "_rand": 0.9993221067828609, "arXiv_link": "https://arxiv.org/abs/2401.00847", "other_link": "", "title": "Mocap Everyone Everywhere: Lightweight Motion Capture With Smartwatches and a Head-Mounted Camera", "abstract": "We present a lightweight and affordable motion capture method based on two\nsmartwatches and a head-mounted camera. In contrast to the existing approaches\nthat use six or more expert-level IMU devices, our approach is much more\ncost-effective and convenient. Our method can make wearable motion capture\naccessible to everyone everywhere, enabling 3D full-body motion capture in\ndiverse environments. 
As a key idea to overcome the extreme sparsity and\nambiguities of sensor inputs with different modalities, we integrate 6D head\nposes obtained from the head-mounted cameras for motion estimation. To enable\ncapture in expansive indoor and outdoor scenes, we propose an algorithm to\ntrack and update floor level changes to define head poses, coupled with a\nmulti-stage Transformer-based regression module. We also introduce novel\nstrategies leveraging visual cues of egocentric images to further enhance the\nmotion capture quality while reducing ambiguities. We demonstrate the\nperformance of our method on various challenging scenarios, including complex\noutdoor environments and everyday motions including object interactions and\nsocial interactions among multiple individuals.", "keywords": ["Efficient and scalable vision", "Biometrics and human analysis"], "authors_list": ["Jiye Lee", "Hanbyul Joo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f698"}, "filepath": "data/2401.16456.png", "tags": [], "_media_type": "image", "_rand": 0.9990312543255444, "arXiv_link": "https://arxiv.org/abs/2401.16456", "other_link": "", "title": "SHViT: Single-Head Vision Transformer with Memory Efficient Macro Design", "abstract": "Recently, efficient Vision Transformers have shown great performance with low\nlatency on resource-constrained devices. Conventionally, they use 4x4 patch\nembeddings and a 4-stage structure at the macro level, while utilizing\nsophisticated attention with multi-head configuration at the micro level. This\npaper aims to address computational redundancy at all design levels in a\nmemory-efficient manner. We discover that using larger-stride patchify stem not\nonly reduces memory access costs but also achieves competitive performance by\nleveraging token representations with reduced spatial redundancy from the early\nstages. Furthermore, our preliminary analyses suggest that attention layers in\nthe early stages can be substituted with convolutions, and several attention\nheads in the latter stages are computationally redundant. To handle this, we\nintroduce a single-head attention module that inherently prevents head\nredundancy and simultaneously boosts accuracy by parallelly combining global\nand local information. Building upon our solutions, we introduce SHViT, a\nSingle-Head Vision Transformer that obtains the state-of-the-art speed-accuracy\ntradeoff. For example, on ImageNet-1k, our SHViT-S4 is 3.3x, 8.1x, and 2.4x\nfaster than MobileViTv2 x1.0 on GPU, CPU, and iPhone12 mobile device,\nrespectively, while being 1.3% more accurate. 
For object detection and instance\nsegmentation on MS COCO using Mask-RCNN head, our model achieves performance\ncomparable to FastViT-SA12 while exhibiting 3.8x and 2.0x lower backbone\nlatency on GPU and mobile device, respectively.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Seokju Yun", "Youngmin Ro"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f699"}, "filepath": "data/2309.14949v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990544228803401, "arXiv_link": "https://arxiv.org/abs/2309.14949v1", "other_link": "https://github.com/Gorilla-Lab-SCUT/TRIBE}.", "title": "Improved Self-Training for Test-Time Adaptation", "abstract": "Test-Time Adaptation aims to adapt source domain model to testing data at\ninference stage with success demonstrated in adapting to unseen corruptions.\nHowever, these attempts may fail under more challenging real-world scenarios.\nExisting works mainly consider real-world test-time adaptation under non-i.i.d.\ndata stream and continual domain shift. In this work, we first complement the\nexisting real-world TTA protocol with a globally class imbalanced testing set.\nWe demonstrate that combining all settings together poses new challenges to\nexisting methods. We argue the failure of state-of-the-art methods is first\ncaused by indiscriminately adapting normalization layers to imbalanced testing\ndata. To remedy this shortcoming, we propose a balanced batchnorm layer to swap\nout the regular batchnorm at inference stage. The new batchnorm layer is\ncapable of adapting without biasing towards majority classes. We are further\ninspired by the success of self-training~(ST) in learning from unlabeled data\nand adapt ST for test-time adaptation. However, ST alone is prone to over\nadaption which is responsible for the poor performance under continual domain\nshift. Hence, we propose to improve self-training under continual domain shift\nby regularizing model updates with an anchored loss. The final TTA model,\ntermed as TRIBE, is built upon a tri-net architecture with balanced batchnorm\nlayers. We evaluate TRIBE on four datasets representing real-world TTA\nsettings. TRIBE consistently achieves the state-of-the-art performance across\nmultiple evaluation protocols. The code is available at\n\\url{https://github.com/Gorilla-Lab-SCUT/TRIBE}.", "keywords": [], "authors_list": ["Jing Ma"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f69a"}, "filepath": "data/2405.15265.png", "tags": [], "_media_type": "image", "_rand": 0.9996687229818512, "arXiv_link": "https://arxiv.org/abs/2405.15265", "other_link": "https://github.com/ChenJiayi68/DMTNet.", "title": "APSeg: Auto-Prompt Network for Cross-Domain Few-Shot Semantic Segmentation", "abstract": "Cross-Domain Few-shot Semantic Segmentation (CD-FSS) aims to train\ngeneralized models that can segment classes from different domains with a few\nlabeled images. Previous works have proven the effectiveness of feature\ntransformation in addressing CD-FSS. 
However, they completely rely on support\nimages for feature transformation, and repeatedly utilizing a few support\nimages for each class may easily lead to overfitting and overlooking\nintra-class appearance differences. In this paper, we propose a Doubly Matching\nTransformation-based Network (DMTNet) to solve the above issue. Instead of\ncompletely relying on support images, we propose Self-Matching Transformation\n(SMT) to construct query-specific transformation matrices based on query images\nthemselves to transform domain-specific query features into domain-agnostic\nones. Calculating query-specific transformation matrices can prevent\noverfitting, especially for the meta-testing stage where only one or several\nimages are used as support images to segment hundreds or thousands of images.\nAfter obtaining domain-agnostic features, we exploit a Dual Hypercorrelation\nConstruction (DHC) module to explore the hypercorrelations between the query\nimage with the foreground and background of the support image, based on which\nforeground and background prediction maps are generated and supervised,\nrespectively, to enhance the segmentation result. In addition, we propose a\nTest-time Self-Finetuning (TSF) strategy to more accurately self-tune the query\nprediction in unseen domains. Extensive experiments on four popular datasets\nshow that DMTNet achieves superior performance over state-of-the-art\napproaches. Code is available at https://github.com/ChenJiayi68/DMTNet.", "keywords": [], "authors_list": ["Weizhao He", "Yang Zhang", "Wei Zhuo", "Linlin Shen", "Jiaqi Yang", "Songhe Deng", "Liang Sun"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f69b"}, "filepath": "data/2306.04300v3.png", "tags": [], "_media_type": "image", "_rand": 0.9992245680872166, "arXiv_link": "https://arxiv.org/abs/2306.04300v3", "other_link": "https://github.com/BBBBchan/CorrMatch.", "title": "CorrMatch: Label Propagation via Correlation Matching for Semi-Supervised Semantic Segmentation", "abstract": "This paper presents a simple but performant semi-supervised semantic\nsegmentation approach, called CorrMatch. Previous approaches mostly employ\ncomplicated training strategies to leverage unlabeled data but overlook the\nrole of correlation maps in modeling the relationships between pairs of\nlocations. We observe that the correlation maps not only enable clustering\npixels of the same category easily but also contain good shape information,\nwhich previous works have omitted. Motivated by these, we aim to improve the\nuse efficiency of unlabeled data by designing two novel label propagation\nstrategies. First, we propose to conduct pixel propagation by modeling the\npairwise similarities of pixels to spread the high-confidence pixels and dig\nout more. Then, we perform region propagation to enhance the pseudo labels with\naccurate class-agnostic masks extracted from the correlation maps. CorrMatch\nachieves great performance on popular segmentation benchmarks. 
Taking the\nDeepLabV3+ with ResNet-101 backbone as our segmentation model, we receive a\n76%+ mIoU score on the Pascal VOC 2012 dataset with only 92 annotated images.\nCode is available at https://github.com/BBBBchan/CorrMatch.", "keywords": [], "authors_list": ["Bo-Yuan Sun", "Yuqi Yang", "Le Zhang", "Ming-Ming Cheng", "Qibin Hou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f69c"}, "filepath": "data/2312.02228.png", "tags": [], "_media_type": "image", "_rand": 0.9996861859188011, "arXiv_link": "https://arxiv.org/abs/2312.02228", "other_link": "", "title": "PixelLM: Pixel Reasoning with Large Multimodal Model", "abstract": "While large multimodal models (LMMs) have achieved remarkable progress,\ngenerating pixel-level masks for image reasoning tasks involving multiple\nopen-world targets remains a challenge. To bridge this gap, we introduce\nPixelLM, an effective and efficient LMM for pixel-level reasoning and\nunderstanding. Central to PixelLM is a novel, lightweight pixel decoder and a\ncomprehensive segmentation codebook. The decoder efficiently produces masks\nfrom the hidden embeddings of the codebook tokens, which encode detailed\ntarget-relevant information. With this design, PixelLM harmonizes with the\nstructure of popular LMMs and avoids the need for additional costly\nsegmentation models. Furthermore, we propose a target refinement loss to\nenhance the model's ability to differentiate between multiple targets, leading\nto substantially improved mask quality. To advance research in this area, we\nconstruct MUSE, a high-quality multi-target reasoning segmentation benchmark.\nPixelLM excels across various pixel-level image reasoning and understanding\ntasks, outperforming well-established methods in multiple benchmarks, including\nMUSE, single- and multi-referring segmentation. Comprehensive ablations confirm\nthe efficacy of each proposed component. All code, models, and datasets will be\npublicly available.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Zhongwei Ren", "Zhicheng Huang", "Yunchao Wei", "Yao Zhao", "Dongmei Fu", "Jiashi Feng", "Xiaojie Jin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f69d"}, "filepath": "data/2404.02072.png", "tags": [], "_media_type": "image", "_rand": 0.9995565868844155, "arXiv_link": "https://arxiv.org/abs/2404.02072", "other_link": "https://github.com/naver-ai/egtr.", "title": "EGTR: Extracting Graph from Transformer for Scene Graph Generation", "abstract": "Scene Graph Generation (SGG) is a challenging task of detecting objects and\npredicting relationships between objects. After DETR was developed, one-stage\nSGG models based on a one-stage object detector have been actively studied.\nHowever, complex modeling is used to predict the relationship between objects,\nand the inherent relationship between object queries learned in the multi-head\nself-attention of the object detector has been neglected. We propose a\nlightweight one-stage SGG model that extracts the relation graph from the\nvarious relationships learned in the multi-head self-attention layers of the\nDETR decoder. 
By fully utilizing the self-attention by-products, the relation\ngraph can be extracted effectively with a shallow relation extraction head.\nConsidering the dependency of the relation extraction task on the object\ndetection task, we propose a novel relation smoothing technique that adjusts\nthe relation label adaptively according to the quality of the detected objects.\nBy the relation smoothing, the model is trained according to the continuous\ncurriculum that focuses on object detection task at the beginning of training\nand performs multi-task learning as the object detection performance gradually\nimproves. Furthermore, we propose a connectivity prediction task that predicts\nwhether a relation exists between object pairs as an auxiliary task of the\nrelation extraction. We demonstrate the effectiveness and efficiency of our\nmethod for the Visual Genome and Open Image V6 datasets. Our code is publicly\navailable at https://github.com/naver-ai/egtr.", "keywords": ["Scene analysis and understanding", "Efficient and scalable vision"], "authors_list": ["Jinbae Im", "JeongYeon Nam", "Nokyung Park", "Hyungmin Lee", "Seunghyun Park"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f69e"}, "filepath": "data/2404.00909v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998705724288683, "arXiv_link": "https://arxiv.org/abs/2404.00909v1", "other_link": "", "title": "Learning by Correction: Efficient Tuning Task for Zero-Shot Generative Vision-Language Reasoning", "abstract": "Generative vision-language models (VLMs) have shown impressive performance in\nzero-shot vision-language tasks like image captioning and visual question\nanswering. However, improving their zero-shot reasoning typically requires\nsecond-stage instruction tuning, which relies heavily on human-labeled or large\nlanguage model-generated annotation, incurring high labeling costs. To tackle\nthis challenge, we introduce Image-Conditioned Caption Correction (ICCC), a\nnovel pre-training task designed to enhance VLMs' zero-shot performance without\nthe need for labeled task-aware data. The ICCC task compels VLMs to rectify\nmismatches between visual and language concepts, thereby enhancing instruction\nfollowing and text generation conditioned on visual inputs. 
Leveraging language\nstructure and a lightweight dependency parser, we construct data samples of\nICCC task from image-text datasets with low labeling and computation costs.\nExperimental results on BLIP-2 and InstructBLIP demonstrate significant\nimprovements in zero-shot image-text generation-based VL tasks through ICCC\ninstruction tuning.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Efficient and scalable vision"], "authors_list": ["Rongjie Li", "Yu Wu", "Xuming He"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f69f"}, "filepath": "data/2405.06849.png", "tags": [], "_media_type": "image", "_rand": 0.9997495902395503, "arXiv_link": "https://arxiv.org/abs/2405.06849", "other_link": "", "title": "GreedyViG: Dynamic Axial Graph Construction for Efficient Vision GNNs", "abstract": "Vision graph neural networks (ViG) offer a new avenue for exploration in\ncomputer vision. A major bottleneck in ViGs is the inefficient k-nearest\nneighbor (KNN) operation used for graph construction. To solve this issue, we\npropose a new method for designing ViGs, Dynamic Axial Graph Construction\n(DAGC), which is more efficient than KNN as it limits the number of considered\ngraph connections made within an image. Additionally, we propose a novel\nCNN-GNN architecture, GreedyViG, which uses DAGC. Extensive experiments show\nthat GreedyViG beats existing ViG, CNN, and ViT architectures in terms of\naccuracy, GMACs, and parameters on image classification, object detection,\ninstance segmentation, and semantic segmentation tasks. Our smallest model,\nGreedyViG-S, achieves 81.1% top-1 accuracy on ImageNet-1K, 2.9% higher than\nVision GNN and 2.2% higher than Vision HyperGraph Neural Network (ViHGNN), with\nless GMACs and a similar number of parameters. Our largest model, GreedyViG-B\nobtains 83.9% top-1 accuracy, 0.2% higher than Vision GNN, with a 66.6%\ndecrease in parameters and a 69% decrease in GMACs. GreedyViG-B also obtains\nthe same accuracy as ViHGNN with a 67.3% decrease in parameters and a 71.3%\ndecrease in GMACs. Our work shows that hybrid CNN-GNN architectures not only\nprovide a new avenue for designing efficient models, but that they can also\nexceed the performance of current state-of-the-art models.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Mustafa Munir", "William Avery", "Md Mostafijur Rahman", "Radu Marculescu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6a0"}, "filepath": "data/2310.10404.png", "tags": [], "_media_type": "image", "_rand": 0.9998280177434765, "arXiv_link": "https://arxiv.org/abs/2310.10404", "other_link": "", "title": "LLM4SGG: Large Language Models for Weakly Supervised Scene Graph Generation", "abstract": "Weakly-Supervised Scene Graph Generation (WSSGG) research has recently\nemerged as an alternative to the fully-supervised approach that heavily relies\non costly annotations. In this regard, studies on WSSGG have utilized image\ncaptions to obtain unlocalized triplets while primarily focusing on grounding\nthe unlocalized triplets over image regions. 
However, they have overlooked the\ntwo issues involved in the triplet formation process from the captions: 1)\nSemantic over-simplification issue arises when extracting triplets from\ncaptions, where fine-grained predicates in captions are undesirably converted\ninto coarse-grained predicates, resulting in a long-tailed predicate\ndistribution, and 2) Low-density scene graph issue arises when aligning the\ntriplets in the caption with entity/predicate classes of interest, where many\ntriplets are discarded and not used in training, leading to insufficient\nsupervision. To tackle the two issues, we propose a new approach, i.e., Large\nLanguage Model for weakly-supervised SGG (LLM4SGG), where we mitigate the two\nissues by leveraging the LLM's in-depth understanding of language and reasoning\nability during the extraction of triplets from captions and alignment of\nentity/predicate classes with target data. To further engage the LLM in these\nprocesses, we adopt the idea of Chain-of-Thought and the in-context few-shot\nlearning strategy. To validate the effectiveness of LLM4SGG, we conduct\nextensive experiments on Visual Genome and GQA datasets, showing significant\nimprovements in both Recall@K and mean Recall@K compared to the\nstate-of-the-art WSSGG methods. A further appeal is that LLM4SGG is\ndata-efficient, enabling effective model training with a small amount of\ntraining images.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Kibum Kim", "Kanghoon Yoon", "Jaehyeong Jeon", "Yeonjun In", "Jinyoung Moon", "Donghyun Kim", "Chanyoung Park"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6a1"}, "filepath": "data/2312.14132v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996753540124722, "arXiv_link": "https://arxiv.org/abs/2312.14132v1", "other_link": "", "title": "DUSt3R: Geometric 3D Vision Made Easy", "abstract": "Multi-view stereo reconstruction (MVS) in the wild requires to first estimate\nthe camera parameters e.g. intrinsic and extrinsic parameters. These are\nusually tedious and cumbersome to obtain, yet they are mandatory to triangulate\ncorresponding pixels in 3D space, which is the core of all best performing MVS\nalgorithms. In this work, we take an opposite stance and introduce DUSt3R, a\nradically novel paradigm for Dense and Unconstrained Stereo 3D Reconstruction\nof arbitrary image collections, i.e. operating without prior information about\ncamera calibration nor viewpoint poses. We cast the pairwise reconstruction\nproblem as a regression of pointmaps, relaxing the hard constraints of usual\nprojective camera models. We show that this formulation smoothly unifies the\nmonocular and binocular reconstruction cases. In the case where more than two\nimages are provided, we further propose a simple yet effective global alignment\nstrategy that expresses all pairwise pointmaps in a common reference frame. We\nbase our network architecture on standard Transformer encoders and decoders,\nallowing us to leverage powerful pretrained models. Our formulation directly\nprovides a 3D model of the scene as well as depth information, but\ninterestingly, we can seamlessly recover from it, pixel matches, relative and\nabsolute camera. 
Exhaustive experiments on all these tasks showcase that the\nproposed DUSt3R can unify various 3D vision tasks and set new SoTAs on\nmonocular/multi-view depth estimation as well as relative pose estimation. In\nsummary, DUSt3R makes many geometric 3D vision tasks easy.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Shuzhe Wang", "Vincent Leroy", "Yohann Cabon", "Boris Chidlovskii", "Jerome Revaud"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6a2"}, "filepath": "data/2404.16451.png", "tags": [], "_media_type": "image", "_rand": 0.9994268259230955, "arXiv_link": "https://arxiv.org/abs/2404.16451", "other_link": "https://github.com/HeZongyao/LMF.", "title": "Latent Modulated Function for Computational Optimal Continuous Image Representation", "abstract": "The recent work Local Implicit Image Function (LIIF) and subsequent Implicit\nNeural Representation (INR) based works have achieved remarkable success in\nArbitrary-Scale Super-Resolution (ASSR) by using MLP to decode Low-Resolution\n(LR) features. However, these continuous image representations typically\nimplement decoding in High-Resolution (HR) High-Dimensional (HD) space, leading\nto a quadratic increase in computational cost and seriously hindering the\npractical applications of ASSR. To tackle this problem, we propose a novel\nLatent Modulated Function (LMF), which decouples the HR-HD decoding process\ninto shared latent decoding in LR-HD space and independent rendering in HR\nLow-Dimensional (LD) space, thereby realizing the first computational optimal\nparadigm of continuous image representation. Specifically, LMF utilizes an HD\nMLP in latent space to generate latent modulations of each LR feature vector.\nThis enables a modulated LD MLP in render space to quickly adapt to any input\nfeature vector and perform rendering at arbitrary resolution. Furthermore, we\nleverage the positive correlation between modulation intensity and input image\ncomplexity to design a Controllable Multi-Scale Rendering (CMSR) algorithm,\noffering the flexibility to adjust the decoding efficiency based on the\nrendering precision. Extensive experiments demonstrate that converting existing\nINR-based ASSR methods to LMF can reduce the computational cost by up to 99.9%,\naccelerate inference by up to 57 times, and save up to 76% of parameters, while\nmaintaining competitive performance. The code is available at\nhttps://github.com/HeZongyao/LMF.", "keywords": ["Deep learning architectures and techniques", "Low-level vision"], "authors_list": ["Zongyao He", "Zhi Jin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6a3"}, "filepath": "data/2310.03898.png", "tags": [], "_media_type": "image", "_rand": 0.9996849425860677, "arXiv_link": "https://arxiv.org/abs/2310.03898", "other_link": "", "title": "NICE: Neurogenesis Inspired Contextual Encoding for Replay-free Class Incremental Learning", "abstract": "Learning new tasks accumulatively without forgetting remains a critical\nchallenge in continual learning. 
Generative experience replay addresses this\nchallenge by synthesizing pseudo-data points for past learned tasks and later\nreplaying them for concurrent training along with the new tasks' data.\nGenerative replay is the best strategy for continual learning under a strict\nclass-incremental setting when certain constraints need to be met: (i) constant\nmodel size, (ii) no pre-training dataset, and (iii) no memory buffer for\nstoring past tasks' data. Inspired by the biological nervous system mechanisms,\nwe introduce a time-aware regularization method to dynamically fine-tune the\nthree training objective terms used for generative replay: supervised learning,\nlatent regularization, and data reconstruction. Experimental results on major\nbenchmarks indicate that our method pushes the limit of brain-inspired\ncontinual learners under such strict settings, improves memory retention, and\nincreases the average performance over continually arriving tasks.", "keywords": [], "authors_list": ["Mustafa B Gurbuz", "Jean Moorman", "Constantine Dovrolis"], "category_name": "Machine Learning", "all_categories": ["Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6a4"}, "filepath": "data/2311.17922.png", "tags": [], "_media_type": "image", "_rand": 0.9991978053339229, "arXiv_link": "https://arxiv.org/abs/2311.17922", "other_link": "https://github.com/astra-vision/FAMix", "title": "A Simple Recipe for Language-guided Domain Generalized Segmentation", "abstract": "Generalization to new domains not seen during training is one of the\nlong-standing challenges in deploying neural networks in real-world\napplications. Existing generalization techniques either necessitate external\nimages for augmentation, and/or aim at learning invariant representations by\nimposing various alignment constraints. Large-scale pretraining has recently\nshown promising generalization capabilities, along with the potential of\nbinding different modalities. For instance, the advent of vision-language\nmodels like CLIP has opened the doorway for vision models to exploit the\ntextual modality. In this paper, we introduce a simple framework for\ngeneralizing semantic segmentation networks by employing language as the source\nof randomization. Our recipe comprises three key ingredients: (i) the\npreservation of the intrinsic CLIP robustness through minimal fine-tuning, (ii)\nlanguage-driven local style augmentation, and (iii) randomization by locally\nmixing the source and augmented styles during training. Extensive experiments\nreport state-of-the-art results on various generalization benchmarks. 
Code is\naccessible at https://github.com/astra-vision/FAMix .", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Mohammad Fahes", "TUAN-HUNG VU", "Andrei Bursuc", "Patrick P\u00e9rez", "Raoul de Charette"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6a5"}, "filepath": "data/2307.13539.png", "tags": [], "_media_type": "image", "_rand": 0.9994441860663049, "arXiv_link": "https://arxiv.org/abs/2307.13539", "other_link": "https://github.com/Carlisle-Liu/ASLP.", "title": "Self-Calibrating Vicinal Risk Minimisation for Model Calibration", "abstract": "For safety-related applications, it is crucial to produce trustworthy deep\nneural networks whose prediction is associated with confidence that can\nrepresent the likelihood of correctness for subsequent decision-making.\nExisting dense binary classification models are prone to being over-confident.\nTo improve model calibration, we propose Adaptive Stochastic Label Perturbation\n(ASLP) which learns a unique label perturbation level for each training image.\nASLP employs our proposed Self-Calibrating Binary Cross Entropy (SC-BCE) loss,\nwhich unifies label perturbation processes including stochastic approaches\n(like DisturbLabel), and label smoothing, to correct calibration while\nmaintaining classification rates. ASLP follows Maximum Entropy Inference of\nclassic statistical mechanics to maximise prediction entropy with respect to\nmissing information. It performs this while: (1) preserving classification\naccuracy on known data as a conservative solution, or (2) specifically improves\nmodel calibration degree by minimising the gap between the prediction accuracy\nand expected confidence of the target training label. Extensive results\ndemonstrate that ASLP can significantly improve calibration degrees of dense\nbinary classification models on both in-distribution and out-of-distribution\ndata. The code is available on https://github.com/Carlisle-Liu/ASLP.", "keywords": [], "authors_list": ["Jiawei Liu", "Changkun Ye", "Ruikai Cui", "Nick Barnes"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6a6"}, "filepath": "data/2403.05239.png", "tags": [], "_media_type": "image", "_rand": 0.9994229669319847, "arXiv_link": "https://arxiv.org/abs/2403.05239", "other_link": "https://hcplayercvpr2024.github.io}.", "title": "Towards Effective Usage of Human-Centric Priors in Diffusion Models for Text-based Human Image Generation", "abstract": "Vanilla text-to-image diffusion models struggle with generating accurate\nhuman images, commonly resulting in imperfect anatomies such as unnatural\npostures or disproportionate limbs.Existing methods address this issue mostly\nby fine-tuning the model with extra images or adding additional controls --\nhuman-centric priors such as pose or depth maps -- during the image generation\nphase. This paper explores the integration of these human-centric priors\ndirectly into the model fine-tuning stage, essentially eliminating the need for\nextra conditions at the inference stage. 
We realize this idea by proposing a\nhuman-centric alignment loss to strengthen human-related information from the\ntextual prompts within the cross-attention maps. To ensure semantic detail\nrichness and human structural accuracy during fine-tuning, we introduce\nscale-aware and step-wise constraints within the diffusion process, according\nto an in-depth analysis of the cross-attention layer. Extensive experiments\nshow that our method largely improves over state-of-the-art text-to-image\nmodels to synthesize high-quality human images based on user-written prompts.\nProject page: \\url{https://hcplayercvpr2024.github.io}.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models", "Biometrics and human analysis"], "authors_list": ["Junyan Wang", "Zhenhong Sun", "Stewart Tan", "Xuanbai Chen", "Weihua Chen", "li", "Cheng Zhang", "Yang Song"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6a7"}, "filepath": "data/2312.10634.png", "tags": [], "_media_type": "image", "_rand": 0.9994578565265195, "arXiv_link": "https://arxiv.org/abs/2312.10634", "other_link": "", "title": "Anomaly Score: Evaluating Generative Models and Individual Generated Images based on Complexity and Vulnerability", "abstract": "With the advancement of generative models, the assessment of generated images\nbecomes more and more important. Previous methods measure distances between\nfeatures of reference and generated images from trained vision models. In this\npaper, we conduct an extensive investigation into the relationship between the\nrepresentation space and input space around generated images. We first propose\ntwo measures related to the presence of unnatural elements within images:\ncomplexity, which indicates how non-linear the representation space is, and\nvulnerability, which is related to how easily the extracted feature changes by\nadversarial input changes. Based on these, we introduce a new metric to\nevaluating image-generative models called anomaly score (AS). Moreover, we\npropose AS-i (anomaly score for individual images) that can effectively\nevaluate generated images individually. Experimental results demonstrate the\nvalidity of the proposed approach.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Jaehui Hwang", "Junghyuk Lee", "Jong-Seok Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6a8"}, "filepath": "data/2310.06282.png", "tags": [], "_media_type": "image", "_rand": 0.9997050596509807, "arXiv_link": "https://arxiv.org/abs/2310.06282", "other_link": "", "title": "MuseChat: A Conversational Music Recommendation System for Videos", "abstract": "Music recommendation for videos attracts growing interest in multi-modal\nresearch. However, existing systems focus primarily on content compatibility,\noften ignoring the users' preferences. Their inability to interact with users\nfor further refinements or to provide explanations leads to a less satisfying\nexperience. We address these issues with MuseChat, a first-of-its-kind\ndialogue-based recommendation system that personalizes music suggestions for\nvideos. 
Our system consists of two key functionalities with associated modules:\nrecommendation and reasoning. The recommendation module takes a video along\nwith optional information including previous suggested music and user's\npreference as inputs and retrieves an appropriate music matching the context.\nThe reasoning module, equipped with the power of Large Language Model\n(Vicuna-7B) and extended to multi-modal inputs, is able to provide reasonable\nexplanation for the recommended music. To evaluate the effectiveness of\nMuseChat, we build a large-scale dataset, conversational music recommendation\nfor videos, that simulates a two-turn interaction between a user and a\nrecommender based on accurate music track information. Experiment results show\nthat MuseChat achieves significant improvements over existing video-based music\nretrieval methods as well as offers strong interpretability and\ninteractability.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Zhikang Dong", "Bin Chen", "Xiulong Liu", "Pawel Polak", "Peng Zhang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition", "Information Retrieval"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6a9"}, "filepath": "data/2402.06106.png", "tags": [], "_media_type": "image", "_rand": 0.9993925052388878, "arXiv_link": "https://arxiv.org/abs/2402.06106", "other_link": "", "title": "Learning Degradation-unaware Representation with Prior-based Latent Transformations for Blind Face Restoration", "abstract": "Recent generative-prior-based methods have shown promising blind face\nrestoration performance. They usually project the degraded images to the latent\nspace and then decode high-quality faces either by single-stage latent\noptimization or directly from the encoding. Generating fine-grained facial\ndetails faithful to inputs remains a challenging problem. Most existing methods\nproduce either overly smooth outputs or alter the identity as they attempt to\nbalance between generation and reconstruction. This may be attributed to the\ntypical trade-off between quality and resolution in the latent space. If the\nlatent space is highly compressed, the decoded output is more robust to\ndegradations but shows worse fidelity. On the other hand, a more flexible\nlatent space can capture intricate facial details better, but is extremely\ndifficult to optimize for highly degraded faces using existing techniques. To\naddress these issues, we introduce a diffusion-based-prior inside a VQGAN\narchitecture that focuses on learning the distribution over uncorrupted latent\nembeddings. With such knowledge, we iteratively recover the clean embedding\nconditioning on the degraded counterpart. Furthermore, to ensure the reverse\ndiffusion trajectory does not deviate from the underlying identity, we train a\nseparate Identity Recovery Network and use its output to constrain the reverse\ndiffusion process. Specifically, using a learnable latent mask, we add\ngradients from a face-recognition network to a subset of latent features that\ncorrelates with the finer identity-related details in the pixel space, leaving\nthe other features untouched. Disentanglement between perception and fidelity\nin the latent space allows us to achieve the best of both worlds. 
We perform\nextensive evaluations on multiple real and synthetic datasets to validate the\nsuperiority of our approach.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Lianxin Xie", "csbingbing zheng", "Wen Xue", "Le Jiang", "Cheng Liu", "Si Wu", "Hau San Wong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6aa"}, "filepath": "data/2405.10272.png", "tags": [], "_media_type": "image", "_rand": 0.9994516992797867, "arXiv_link": "https://arxiv.org/abs/2405.10272", "other_link": "", "title": "Faces that Speak: Jointly Synthesising Talking Face and Speech from Text", "abstract": "The goal of this work is to simultaneously generate natural talking faces and\nspeech outputs from text. We achieve this by integrating Talking Face\nGeneration (TFG) and Text-to-Speech (TTS) systems into a unified framework. We\naddress the main challenges of each task: (1) generating a range of head poses\nrepresentative of real-world scenarios, and (2) ensuring voice consistency\ndespite variations in facial motion for the same identity. To tackle these\nissues, we introduce a motion sampler based on conditional flow matching, which\nis capable of high-quality motion code generation in an efficient way.\nMoreover, we introduce a novel conditioning method for the TTS system, which\nutilises motion-removed features from the TFG model to yield uniform speech\noutputs. Our extensive experiments demonstrate that our method effectively\ncreates natural-looking talking faces and speech that accurately match the\ninput text. To our knowledge, this is the first effort to build a multimodal\nsynthesis system that can generalise to unseen identities.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Youngjoon Jang", "Jihoon Kim", "Junseok Ahn", "Doyeop Kwak", "Hongsun Yang", "Yooncheol Ju", "ILHWAN KIM", "Byeong-Yeol Kim", "Joon Chung"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Sound", "Audio and Speech Processing", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6ab"}, "filepath": "data/2312.04529.png", "tags": [], "_media_type": "image", "_rand": 0.999954721775549, "arXiv_link": "https://arxiv.org/abs/2312.04529", "other_link": "", "title": "Diffusion Reflectance Map: Single-Image Stochastic Inverse Rendering of Illumination and Reflectance", "abstract": "Reflectance bounds the frequency spectrum of illumination in the object\nappearance. In this paper, we introduce the first stochastic inverse rendering\nmethod, which recovers the attenuated frequency spectrum of an illumination\njointly with the reflectance of an object of known geometry from a single\nimage. Our key idea is to solve this blind inverse problem in the reflectance\nmap, an appearance representation invariant to the underlying geometry, by\nlearning to reverse the image formation with a novel diffusion model which we\nrefer to as the Diffusion Reflectance Map Network (DRMNet). 
Given an observed\nreflectance map converted and completed from the single input image, DRMNet\ngenerates a reflectance map corresponding to a perfect mirror sphere while\njointly estimating the reflectance. The forward process can be understood as\ngradually filtering a natural illumination with lower and lower frequency\nreflectance and additive Gaussian noise. DRMNet learns to invert this process\nwith two subnetworks, IllNet and RefNet, which work in concert towards this\njoint estimation. The network is trained on an extensive synthetic dataset and\nis demonstrated to generalize to real images, showing state-of-the-art accuracy\non established datasets.", "keywords": ["Computational imaging and physics-based vision"], "authors_list": ["Yuto Enyo", "Ko Nishino"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6ac"}, "filepath": "data/2405.17765.png", "tags": [], "_media_type": "image", "_rand": 0.9996745327832235, "arXiv_link": "https://arxiv.org/abs/2405.17765", "other_link": "", "title": "PTM-VQA: Efficient Video Quality Assessment Leveraging Diverse PreTrained Models from the Wild", "abstract": "Video quality assessment (VQA) is a challenging problem due to the numerous\nfactors that can affect the perceptual quality of a video, \\eg, content\nattractiveness, distortion type, motion pattern, and level. However, annotating\nthe Mean opinion score (MOS) for videos is expensive and time-consuming, which\nlimits the scale of VQA datasets, and poses a significant obstacle for deep\nlearning-based methods. In this paper, we propose a VQA method named PTM-VQA,\nwhich leverages PreTrained Models to transfer knowledge from models pretrained\non various pre-tasks, enabling benefits for VQA from different aspects.\n Specifically, we extract features of videos from different pretrained models\nwith frozen weights and integrate them to generate representation. Since these\nmodels possess various fields of knowledge and are often trained with labels\nirrelevant to quality, we propose an Intra-Consistency and Inter-Divisibility\n(ICID) loss to impose constraints on features extracted by multiple pretrained\nmodels. The intra-consistency constraint ensures that features extracted by\ndifferent pretrained models are in the same unified quality-aware latent space,\nwhile the inter-divisibility introduces pseudo clusters based on the annotation\nof samples and tries to separate features of samples from different clusters.\nFurthermore, with a constantly growing number of pretrained models, it is\ncrucial to determine which models to use and how to use them. To address this\nproblem, we propose an efficient scheme to select suitable candidates. Models\nwith better clustering performance on VQA datasets are chosen to be our\ncandidates. 
Extensive experiments demonstrate the effectiveness of the proposed\nmethod.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Kun Yuan", "Hongbo Liu", "Mading Li", "Muyi Sun", "Ming Sun", "Jiachao Gong", "Jinhua Hao", "Chao Zhou", "Yansong Tang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6ad"}, "filepath": "data/2403.12015.png", "tags": [], "_media_type": "image", "_rand": 0.999229715444744, "arXiv_link": "https://arxiv.org/abs/2403.12015", "other_link": "", "title": "Plug-and-Play Diffusion Distillation", "abstract": "Diffusion models are the main driver of progress in image and video\nsynthesis, but suffer from slow inference speed. Distillation methods, like the\nrecently introduced adversarial diffusion distillation (ADD) aim to shift the\nmodel from many-shot to single-step inference, albeit at the cost of expensive\nand difficult optimization due to its reliance on a fixed pretrained DINOv2\ndiscriminator. We introduce Latent Adversarial Diffusion Distillation (LADD), a\nnovel distillation approach overcoming the limitations of ADD. In contrast to\npixel-based ADD, LADD utilizes generative features from pretrained latent\ndiffusion models. This approach simplifies training and enhances performance,\nenabling high-resolution multi-aspect ratio image synthesis. We apply LADD to\nStable Diffusion 3 (8B) to obtain SD3-Turbo, a fast model that matches the\nperformance of state-of-the-art text-to-image generators using only four\nunguided sampling steps. Moreover, we systematically investigate its scaling\nbehavior and demonstrate LADD's effectiveness in various applications such as\nimage editing and inpainting.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yi-Ting Hsiao", "Siavash Khodadadeh", "Kevin Duarte", "Wei-An Lin", "Hui Qu", "Mingi Kwon", "Ratheesh Kalarot"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6ae"}, "filepath": "data/2404.09389.png", "tags": [], "_media_type": "image", "_rand": 0.9998369847223165, "arXiv_link": "https://arxiv.org/abs/2404.09389", "other_link": "", "title": "Masked and Shuffled Blind Spot Denoising for Real-World Images", "abstract": "We introduce a novel approach to single image denoising based on the Blind\nSpot Denoising principle, which we call MAsked and SHuffled Blind Spot\nDenoising (MASH). We focus on the case of correlated noise, which often plagues\nreal images. MASH is the result of a careful analysis to determine the\nrelationships between the level of blindness (masking) of the input and the\n(unknown) noise correlation. Moreover, we introduce a shuffling technique to\nweaken the local correlation of noise, which in turn yields an additional\ndenoising performance improvement. We evaluate MASH via extensive experiments\non real-world noisy image datasets. 
We demonstrate on par or better results\ncompared to existing self-supervised denoising methods.", "keywords": ["Low-level vision"], "authors_list": ["Hamadi Chihaoui", "Paolo Favaro"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6af"}, "filepath": "data/2311.17117.png", "tags": [], "_media_type": "image", "_rand": 0.9992189315508193, "arXiv_link": "https://arxiv.org/abs/2311.17117", "other_link": "", "title": "Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation", "abstract": "Character Animation aims to generating character videos from still images\nthrough driving signals. Currently, diffusion models have become the mainstream\nin visual generation research, owing to their robust generative capabilities.\nHowever, challenges persist in the realm of image-to-video, especially in\ncharacter animation, where temporally maintaining consistency with detailed\ninformation from character remains a formidable problem. In this paper, we\nleverage the power of diffusion models and propose a novel framework tailored\nfor character animation. To preserve consistency of intricate appearance\nfeatures from reference image, we design ReferenceNet to merge detail features\nvia spatial attention. To ensure controllability and continuity, we introduce\nan efficient pose guider to direct character's movements and employ an\neffective temporal modeling approach to ensure smooth inter-frame transitions\nbetween video frames. By expanding the training data, our approach can animate\narbitrary characters, yielding superior results in character animation compared\nto other image-to-video methods. Furthermore, we evaluate our method on\nbenchmarks for fashion video and human dance synthesis, achieving\nstate-of-the-art results.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Li Hu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6b0"}, "filepath": "data/2312.01987.png", "tags": [], "_media_type": "image", "_rand": 0.9994690720639027, "arXiv_link": "https://arxiv.org/abs/2312.01987", "other_link": "https://github.com/showlab/sparseformer", "title": "Bootstrapping SparseFormers from Vision Foundation Models", "abstract": "The recently proposed SparseFormer architecture provides an alternative\napproach to visual understanding by utilizing a significantly lower number of\nvisual tokens via adjusting RoIs, greatly reducing computational costs while\nstill achieving promising performance. However, training SparseFormers from\nscratch is still expensive, and scaling up the number of parameters can be\nchallenging. In this paper, we propose to bootstrap SparseFormers from\nViT-based vision foundation models in a simple and efficient way. Since the\nmajority of SparseFormer blocks are the standard transformer ones, we can\ninherit weights from large-scale pre-trained vision transformers and freeze\nthem as much as possible. Therefore, we only need to train the\nSparseFormer-specific lightweight focusing transformer to adjust token RoIs and\nfine-tune a few early pre-trained blocks to align the final token\nrepresentation. 
In such a way, we can bootstrap SparseFormer architectures from\nvarious large-scale pre-trained models (e.g., IN-21K pre-trained AugRegs or\nCLIPs) using a rather smaller amount of training samples (e.g., IN-1K) and\nwithout labels or captions within just a few hours. As a result, the\nbootstrapped unimodal SparseFormer (from AugReg-ViT-L/16-384) can reach 84.9%\naccuracy on IN-1K with only 49 tokens, and the multimodal SparseFormer from\nCLIPs also demonstrates notable zero-shot performance with highly reduced\ncomputational cost without seeing any caption during the bootstrapping\nprocedure. In addition, CLIP-bootstrapped SparseFormers, which align the output\nspace with language without seeing a word, can serve as efficient vision\nencoders in multimodal large language models. Code and models are available at\nhttps://github.com/showlab/sparseformer", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Ziteng Gao", "Zhan Tong", "Kevin Qinghong Lin", "Joya Chen", "Mike Zheng Shou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6b1"}, "filepath": "data/2405.18131.png", "tags": [], "_media_type": "image", "_rand": 0.999076530824443, "arXiv_link": "https://arxiv.org/abs/2405.18131", "other_link": "https://github.com/Sentient07/SDC", "title": "Self-Supervised Dual Contouring", "abstract": "Learning-based isosurface extraction methods have recently emerged as a\nrobust and efficient alternative to axiomatic techniques. However, the vast\nmajority of such approaches rely on supervised training with axiomatically\ncomputed ground truths, thus potentially inheriting biases and data artifacts\nof the corresponding axiomatic methods. Steering away from such dependencies,\nwe propose a self-supervised training scheme for the Neural Dual Contouring\nmeshing framework, resulting in our method: Self-Supervised Dual Contouring\n(SDC). Instead of optimizing predicted mesh vertices with supervised training,\nwe use two novel self-supervised loss functions that encourage the consistency\nbetween distances to the generated mesh up to the first order. Meshes\nreconstructed by SDC surpass existing data-driven methods in capturing\nintricate details while being more robust to possible irregularities in the\ninput. Furthermore, we use the same self-supervised training objective linking\ninferred mesh and input SDF, to regularize the training process of Deep\nImplicit Networks (DINs). We demonstrate that the resulting DINs produce\nhigher-quality implicit functions, ultimately leading to more accurate and\ndetail-preserving surfaces compared to prior baselines for different input\nmodalities. Finally, we demonstrate that our self-supervised losses improve\nmeshing performance in the single-view reconstruction task by enabling joint\ntraining of predicted SDF and resulting output mesh. 
We open-source our code at\nhttps://github.com/Sentient07/SDC", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Ramana Sundararaman", "Roman Klokov", "Maks Ovsjanikov"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6b2"}, "filepath": "data/2312.16272.png", "tags": [], "_media_type": "image", "_rand": 0.9999108357388167, "arXiv_link": "https://arxiv.org/abs/2312.16272", "other_link": "https://ssr-encoder.github.io", "title": "SSR-Encoder: Encoding Selective Subject Representation for Subject-Driven Generation", "abstract": "Recent advancements in subject-driven image generation have led to zero-shot\ngeneration, yet precise selection and focus on crucial subject representations\nremain challenging. Addressing this, we introduce the SSR-Encoder, a novel\narchitecture designed for selectively capturing any subject from single or\nmultiple reference images. It responds to various query modalities including\ntext and masks, without necessitating test-time fine-tuning. The SSR-Encoder\ncombines a Token-to-Patch Aligner that aligns query inputs with image patches\nand a Detail-Preserving Subject Encoder for extracting and preserving fine\nfeatures of the subjects, thereby generating subject embeddings. These\nembeddings, used in conjunction with original text embeddings, condition the\ngeneration process. Characterized by its model generalizability and efficiency,\nthe SSR-Encoder adapts to a range of custom models and control modules.\nEnhanced by the Embedding Consistency Regularization Loss for improved\ntraining, our extensive experiments demonstrate its effectiveness in versatile\nand high-quality image generation, indicating its broad applicability. Project\npage: https://ssr-encoder.github.io", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Yuxuan Zhang", "Yiren Song", "Jiaming Liu", "Rui Wang", "Jinpeng Yu", "Hao Tang", "Huaxia Li", "Xu Tang", "Yao Hu", "Han Pan", "Zhongliang Jing"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6b3"}, "filepath": "data/2403.00272.png", "tags": [], "_media_type": "image", "_rand": 0.9994950618337607, "arXiv_link": "https://arxiv.org/abs/2403.00272", "other_link": "", "title": "Dual Pose-invariant Embeddings: Learning Category and Object-specific Discriminative Representations for Recognition and Retrieval", "abstract": "In the context of pose-invariant object recognition and retrieval, we\ndemonstrate that it is possible to achieve significant improvements in\nperformance if both the category-based and the object-identity-based embeddings\nare learned simultaneously during training. In hindsight, that sounds intuitive\nbecause learning about the categories is more fundamental than learning about\nthe individual objects that correspond to those categories. However, to the\nbest of what we know, no prior work in pose-invariant learning has demonstrated\nthis effect. 
This paper presents an attention-based dual-encoder architecture\nwith specially designed loss functions that optimize the inter- and intra-class\ndistances simultaneously in two different embedding spaces, one for the\ncategory embeddings and the other for the object-level embeddings. The loss\nfunctions we have proposed are pose-invariant ranking losses that are designed\nto minimize the intra-class distances and maximize the inter-class distances in\nthe dual representation spaces. We demonstrate the power of our approach with\nthree challenging multi-view datasets, ModelNet-40, ObjectPI, and FG3D. With\nour dual approach, for single-view object recognition, we outperform the\nprevious best by 20.0% on ModelNet40, 2.0% on ObjectPI, and 46.5% on FG3D. On\nthe other hand, for single-view object retrieval, we outperform the previous\nbest by 33.7% on ModelNet40, 18.8% on ObjectPI, and 56.9% on FG3D.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Rohan Sarkar", "Avinash Kak"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Information Retrieval", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6b4"}, "filepath": "data/2306.15670v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996869781336546, "arXiv_link": "https://arxiv.org/abs/2306.15670v2", "other_link": "https://github.com/hustvl/Symphonies.", "title": "Symphonize 3D Semantic Scene Completion with Contextual Instance Queries", "abstract": "`3D Semantic Scene Completion (SSC) has emerged as a nascent and pivotal\nundertaking in autonomous driving, aiming to predict voxel occupancy within\nvolumetric scenes. However, prevailing methodologies primarily focus on\nvoxel-wise feature aggregation, while neglecting instance semantics and scene\ncontext. In this paper, we present a novel paradigm termed Symphonies\n(Scene-from-Insts), that delves into the integration of instance queries to\norchestrate 2D-to-3D reconstruction and 3D scene modeling. Leveraging our\nproposed Serial Instance-Propagated Attentions, Symphonies dynamically encodes\ninstance-centric semantics, facilitating intricate interactions between\nimage-based and volumetric domains. Simultaneously, Symphonies enables holistic\nscene comprehension by capturing context through the efficient fusion of\ninstance queries, alleviating geometric ambiguity such as occlusion and\nperspective errors through contextual scene reasoning. Experimental results\ndemonstrate that Symphonies achieves state-of-the-art performance on\nchallenging benchmarks SemanticKITTI and SSCBench-KITTI-360, yielding\nremarkable mIoU scores of 15.04 and 18.58, respectively. These results showcase\nthe paradigm's promising advancements. 
The code is available at\nhttps://github.com/hustvl/Symphonies.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Haoyi Jiang", "Tianheng Cheng", "Naiyu Gao", "Haoyang Zhang", "Tianwei Lin", "Wenyu Liu", "Xinggang Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6b5"}, "filepath": "data/2403.14852.png", "tags": [], "_media_type": "image", "_rand": 0.9997456677414083, "arXiv_link": "https://arxiv.org/abs/2403.14852", "other_link": "", "title": "KeyPoint Relative Position Encoding for Face Recognition", "abstract": "In this paper, we address the challenge of making ViT models more robust to\nunseen affine transformations. Such robustness becomes useful in various\nrecognition tasks such as face recognition when image alignment failures occur.\nWe propose a novel method called KP-RPE, which leverages key points\n(e.g.~facial landmarks) to make ViT more resilient to scale, translation, and\npose variations. We begin with the observation that Relative Position Encoding\n(RPE) is a good way to bring affine transform generalization to ViTs. RPE,\nhowever, can only inject the model with prior knowledge that nearby pixels are\nmore important than far pixels. Keypoint RPE (KP-RPE) is an extension of this\nprinciple, where the significance of pixels is not solely dictated by their\nproximity but also by their relative positions to specific keypoints within the\nimage. By anchoring the significance of pixels around keypoints, the model can\nmore effectively retain spatial relationships, even when those relationships\nare disrupted by affine transformations. We show the merit of KP-RPE in face\nand gait recognition. The experimental results demonstrate the effectiveness in\nimproving face recognition performance from low-quality images, particularly\nwhere alignment is prone to failure. Code and pre-trained models are available.", "keywords": [], "authors_list": ["Minchul Kim", "Feng Liu", "Yiyang Su", "Anil Jain", "Xiaoming Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6b6"}, "filepath": "data/2306.10014.png", "tags": [], "_media_type": "image", "_rand": 0.9993923103650995, "arXiv_link": "https://arxiv.org/abs/2306.10014", "other_link": "", "title": "Feedback-Guided Autonomous Driving", "abstract": "We propose a novel knowledge distillation framework for effectively teaching\na sensorimotor student agent to drive from the supervision of a privileged\nteacher agent. Current distillation for sensorimotor agents methods tend to\nresult in suboptimal learned driving behavior by the student, which we\nhypothesize is due to inherent differences between the input, modeling\ncapacity, and optimization processes of the two agents. We develop a novel\ndistillation scheme that can address these limitations and close the gap\nbetween the sensorimotor agent and its privileged teacher. Our key insight is\nto design a student which learns to align their input features with the\nteacher's privileged Bird's Eye View (BEV) space. The student then can benefit\nfrom direct supervision by the teacher over the internal representation\nlearning. 
To scaffold the difficult sensorimotor learning task, the student\nmodel is optimized via a student-paced coaching mechanism with various\nauxiliary supervision. We further propose a high-capacity imitation learned\nprivileged agent that surpasses prior privileged agents in CARLA and ensures\nthe student learns safe driving behavior. Our proposed sensorimotor agent\nresults in a robust image-based behavior cloning agent in CARLA, improving over\ncurrent models by over 20.6% in driving score without requiring LiDAR,\nhistorical observations, ensemble of models, on-policy data aggregation or\nreinforcement learning.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Jimuyang Zhang", "Zanming Huang", "Arijit Ray", "Eshed Ohn-Bar"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6b7"}, "filepath": "data/2307.08544.png", "tags": [], "_media_type": "image", "_rand": 0.9999242366703803, "arXiv_link": "https://arxiv.org/abs/2307.08544", "other_link": "https://github.com/liuguandu/RC-LUT.", "title": "Look-Up Table Compression for Efficient Image Restoration", "abstract": "Look-up table(LUT)-based methods have shown the great efficacy in single\nimage super-resolution (SR) task. However, previous methods ignore the\nessential reason of restricted receptive field (RF) size in LUT, which is\ncaused by the interaction of space and channel features in vanilla convolution.\nThey can only increase the RF at the cost of linearly increasing LUT size. To\nenlarge RF with contained LUT sizes, we propose a novel Reconstructed\nConvolution(RC) module, which decouples channel-wise and spatial calculation.\nIt can be formulated as $n^2$ 1D LUTs to maintain $n\\times n$ receptive field,\nwhich is obviously smaller than $n\\times n$D LUT formulated before. The LUT\ngenerated by our RC module reaches less than 1/10000 storage compared with\nSR-LUT baseline. The proposed Reconstructed Convolution module based LUT\nmethod, termed as RCLUT, can enlarge the RF size by 9 times than the\nstate-of-the-art LUT-based SR method and achieve superior performance on five\npopular benchmark dataset. Moreover, the efficient and robust RC module can be\nused as a plugin to improve other LUT-based SR methods. The code is available\nat https://github.com/liuguandu/RC-LUT.", "keywords": ["Low-level vision"], "authors_list": ["Yinglong Li", "Jiacheng Li", "Zhiwei Xiong"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6b8"}, "filepath": "data/2404.07985v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994451554928437, "arXiv_link": "https://arxiv.org/abs/2404.07985v1", "other_link": "https://wavemo-2024.github.io/.", "title": "WaveMo: Learning Wavefront Modulations to See Through Scattering", "abstract": "Imaging through scattering media is a fundamental and pervasive challenge in\nfields ranging from medical diagnostics to astronomy. A promising strategy to\novercome this challenge is wavefront modulation, which induces measurement\ndiversity during image acquisition. 
Despite its importance, designing optimal\nwavefront modulations to image through scattering remains under-explored. This\npaper introduces a novel learning-based framework to address the gap. Our\napproach jointly optimizes wavefront modulations and a computationally\nlightweight feedforward \"proxy\" reconstruction network. This network is trained\nto recover scenes obscured by scattering, using measurements that are modified\nby these modulations. The learned modulations produced by our framework\ngeneralize effectively to unseen scattering scenarios and exhibit remarkable\nversatility. During deployment, the learned modulations can be decoupled from\nthe proxy network to augment other more computationally expensive restoration\nalgorithms. Through extensive experiments, we demonstrate our approach\nsignificantly advances the state of the art in imaging through scattering\nmedia. Our project webpage is at https://wavemo-2024.github.io/.", "keywords": ["Deep learning architectures and techniques", "Medical imaging and biological vision"], "authors_list": ["Mingyang Xie", "Haiyun Guo", "Brandon Y. Feng", "Lingbo Jin", "Ashok Veeraraghavan", "Christopher Metzler"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6b9"}, "filepath": "data/2404.00385.png", "tags": [], "_media_type": "image", "_rand": 0.9998675690233918, "arXiv_link": "https://arxiv.org/abs/2404.00385", "other_link": "", "title": "Constrained Layout Generation with Factor Graphs", "abstract": "This paper addresses the challenge of object-centric layout generation under\nspatial constraints, seen in multiple domains including floorplan design\nprocess. The design process typically involves specifying a set of spatial\nconstraints that include object attributes like size and inter-object relations\nsuch as relative positioning. Existing works, which typically represent objects\nas single nodes, lack the granularity to accurately model complex interactions\nbetween objects. For instance, often only certain parts of an object, like a\nroom's right wall, interact with adjacent objects. To address this gap, we\nintroduce a factor graph based approach with four latent variable nodes for\neach room, and a factor node for each constraint. The factor nodes represent\ndependencies among the variables to which they are connected, effectively\ncapturing constraints that are potentially of a higher order. We then develop\nmessage-passing on the bipartite graph, forming a factor graph neural network\nthat is trained to produce a floorplan that aligns with the desired\nrequirements. Our approach is simple and generates layouts faithful to the user\nrequirements, demonstrated by a large improvement in IOU scores over existing\nmethods. 
Additionally, our approach, being inferential and accurate, is\nwell-suited to the practical human-in-the-loop design process where\nspecifications evolve iteratively, offering a practical and powerful tool for\nAI-guided design.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Mohammed Haroon Dupty", "Yanfei Dong", "Sicong Leng", "Guoji Fu", "Yong Liang Goh", "Wei Lu", "Wee Sun Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6ba"}, "filepath": "data/2404.00851.png", "tags": [], "_media_type": "image", "_rand": 0.9995677287174038, "arXiv_link": "https://arxiv.org/abs/2404.00851", "other_link": "https://github.com/mlvlab/ProMetaR.", "title": "Prompt Learning via Meta-Regularization", "abstract": "Pre-trained vision-language models have shown impressive success on various\ncomputer vision tasks with their zero-shot generalizability. Recently, prompt\nlearning approaches have been explored to efficiently and effectively adapt the\nvision-language models to a variety of downstream tasks. However, most existing\nprompt learning methods suffer from task overfitting since the general\nknowledge of the pre-trained vision language models is forgotten while the\nprompts are finetuned on a small data set from a specific target task. To\naddress this issue, we propose a Prompt Meta-Regularization (ProMetaR) to\nimprove the generalizability of prompt learning for vision-language models.\nSpecifically, ProMetaR meta-learns both the regularizer and the soft prompts to\nharness the task-specific knowledge from the downstream tasks and task-agnostic\ngeneral knowledge from the vision-language models. Further, ProMetaR augments\nthe task to generate multiple virtual tasks to alleviate the meta-overfitting.\nIn addition, we provide the analysis to comprehend how ProMetaR improves the\ngeneralizability of prompt tuning in the perspective of the gradient alignment.\nOur extensive experiments demonstrate that our ProMetaR improves the\ngeneralizability of conventional prompt learning methods under\nbase-to-base/base-to-new and domain generalization settings. The code of\nProMetaR is available at https://github.com/mlvlab/ProMetaR.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Jinyoung Park", "Juyeon Ko", "Hyunwoo J. 
Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6bb"}, "filepath": "data/2312.13328.png", "tags": [], "_media_type": "image", "_rand": 0.9991465787093617, "arXiv_link": "https://arxiv.org/abs/2312.13328", "other_link": "https://sinoyou.github.io/nelf-pro/.", "title": "NeLF-Pro: Neural Light Field Probes for Multi-Scale Novel View Synthesis", "abstract": "We present NeLF-Pro, a novel representation to model and reconstruct light\nfields in diverse natural scenes that vary in extent and spatial granularity.\nIn contrast to previous fast reconstruction methods that represent the 3D scene\nglobally, we model the light field of a scene as a set of local light field\nfeature probes, parameterized with position and multi-channel 2D feature maps.\nOur central idea is to bake the scene's light field into spatially varying\nlearnable representations and to query point features by weighted blending of\nprobes close to the camera - allowing for mipmap representation and rendering.\nWe introduce a novel vector-matrix-matrix (VMM) factorization technique that\neffectively represents the light field feature probes as products of core\nfactors (i.e., VM) shared among local feature probes, and a basis factor (i.e.,\nM) - efficiently encoding internal relationships and patterns within the scene.\nExperimentally, we demonstrate that NeLF-Pro significantly boosts the\nperformance of feature grid-based representations, and achieves fast\nreconstruction with better rendering quality while maintaining compact\nmodeling. Project webpage https://sinoyou.github.io/nelf-pro/.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Zinuo You", "Andreas Geiger", "Anpei Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6bc"}, "filepath": "data/2405.14467.png", "tags": [], "_media_type": "image", "_rand": 0.9999012683541209, "arXiv_link": "https://arxiv.org/abs/2405.14467", "other_link": "", "title": "ALGM: Adaptive Local-then-Global Token Merging for Efficient Semantic Segmentation with Plain Vision Transformers", "abstract": "Utilizing transformer architectures for semantic segmentation of\nhigh-resolution images is hindered by the attention's quadratic computational\ncomplexity in the number of tokens. A solution to this challenge involves\ndecreasing the number of tokens through token merging, which has exhibited\nremarkable enhancements in inference speed, training efficiency, and memory\nutilization for image classification tasks. In this paper, we explore various\ntoken merging strategies within the framework of the Segformer architecture and\nperform experiments on multiple semantic segmentation and human pose estimation\ndatasets. Notably, without model re-training, we, for example, achieve an\ninference acceleration of 61% on the Cityscapes dataset while maintaining the\nmIoU performance. 
Consequently, this paper facilitates the deployment of\ntransformer-based architectures on resource-constrained devices and in\nreal-time applications.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Narges Norouzi", "Svetlana Orlova", "Daan de Geus", "Gijs Dubbelman"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6bd"}, "filepath": "data/2405.09996.png", "tags": [], "_media_type": "image", "_rand": 0.9997724403085381, "arXiv_link": "https://arxiv.org/abs/2405.09996", "other_link": "", "title": "Driving-Video Dehazing with Non-Aligned Regularization for Safety Assistance", "abstract": "Real driving-video dehazing poses a significant challenge due to the inherent\ndifficulty in acquiring precisely aligned hazy/clear video pairs for effective\nmodel training, especially in dynamic driving scenarios with unpredictable\nweather conditions. In this paper, we propose a pioneering approach that\naddresses this challenge through a nonaligned regularization strategy. Our core\nconcept involves identifying clear frames that closely match hazy frames,\nserving as references to supervise a video dehazing network. Our approach\ncomprises two key components: reference matching and video dehazing. Firstly,\nwe introduce a non-aligned reference frame matching module, leveraging an\nadaptive sliding window to match high-quality reference frames from clear\nvideos. Video dehazing incorporates flow-guided cosine attention sampler and\ndeformable cosine attention fusion modules to enhance spatial multiframe\nalignment and fuse their improved information. To validate our approach, we\ncollect a GoProHazy dataset captured effortlessly with GoPro cameras in diverse\nrural and urban road environments. Extensive experiments demonstrate the\nsuperiority of the proposed method over current state-of-the-art methods in the\nchallenging task of real driving-video dehazing. Project page.", "keywords": ["Deep learning architectures and techniques", "Low-level vision"], "authors_list": ["Junkai Fan", "Jiangwei Weng", "Kun Wang", "Yijun Yang", "Jianjun Qian", "Jun Li", "Jian Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6be"}, "filepath": "data/2404.04346.png", "tags": [], "_media_type": "image", "_rand": 0.9995264997130201, "arXiv_link": "https://arxiv.org/abs/2404.04346", "other_link": "", "title": "Koala: Key frame-conditioned long video-LLM", "abstract": "Long video question answering is a challenging task that involves recognizing\nshort-term activities and reasoning about their fine-grained relationships.\nState-of-the-art video Large Language Models (vLLMs) hold promise as a viable\nsolution due to their demonstrated emergent capabilities on new tasks. However,\ndespite being trained on millions of short seconds-long videos, vLLMs are\nunable to understand minutes-long videos and accurately answer questions about\nthem. To address this limitation, we propose a lightweight and self-supervised\napproach, Key frame-conditioned long video-LLM (Koala), that introduces\nlearnable spatiotemporal queries to adapt pretrained vLLMs for generalizing to\nlonger videos. 
Our approach introduces two new tokenizers that condition on\nvisual tokens computed from sparse video key frames for understanding short and\nlong video moments. We train our proposed approach on HowTo100M and demonstrate\nits effectiveness on zero-shot long video understanding benchmarks, where it\noutperforms state-of-the-art large models by 3 - 6% in absolute accuracy across\nall tasks. Surprisingly, we also empirically show that our approach not only\nhelps a pretrained vLLM to understand long videos but also improves its\naccuracy on short-term action recognition.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Reuben Tan", "Ximeng Sun", "Ping Hu", "Jui-Hsien Wang", "Hanieh Deilamsalehy", "Bryan A. Plummer", "Bryan Russell", "Kate Saenko"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6bf"}, "filepath": "data/2403.16937.png", "tags": [], "_media_type": "image", "_rand": 0.9996528323410029, "arXiv_link": "https://arxiv.org/abs/2403.16937", "other_link": "https://github.com/msed-Ebrahimi/DL2PA_CVPR24", "title": "Hyperspherical Classification with Dynamic Label-to-Prototype Assignment", "abstract": "Aiming to enhance the utilization of metric space by the parametric softmax\nclassifier, recent studies suggest replacing it with a non-parametric\nalternative. Although a non-parametric classifier may provide better metric\nspace utilization, it introduces the challenge of capturing inter-class\nrelationships. A shared characteristic among prior non-parametric classifiers\nis the static assignment of labels to prototypes during the training, ie, each\nprototype consistently represents a class throughout the training course.\nOrthogonal to previous works, we present a simple yet effective method to\noptimize the category assigned to each prototype (label-to-prototype\nassignment) during the training. To this aim, we formalize the problem as a\ntwo-step optimization objective over network parameters and label-to-prototype\nassignment mapping. We solve this optimization using a sequential combination\nof gradient descent and Bipartide matching. We demonstrate the benefits of the\nproposed approach by conducting experiments on balanced and long-tail\nclassification problems using different backbone network architectures. In\nparticular, our method outperforms its competitors by 1.22\\% accuracy on\nCIFAR-100, and 2.15\\% on ImageNet-200 using a metric space dimension half of\nthe size of its competitors. 
Code:\nhttps://github.com/msed-Ebrahimi/DL2PA_CVPR24", "keywords": ["Efficient and scalable vision"], "authors_list": ["Mohammad Saadabadi Saadabadi", "Ali Dabouei", "Sahar Rahimi Malakshan", "Nasser Nasrabadi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6c0"}, "filepath": "data/2403.19205.png", "tags": [], "_media_type": "image", "_rand": 0.9991831073107504, "arXiv_link": "https://arxiv.org/abs/2403.19205", "other_link": "", "title": "From Activation to Initialization: Scaling Insights for Optimizing Neural Fields", "abstract": "In the realm of computer vision, Neural Fields have gained prominence as a\ncontemporary tool harnessing neural networks for signal representation. Despite\nthe remarkable progress in adapting these networks to solve a variety of\nproblems, the field still lacks a comprehensive theoretical framework. This\narticle aims to address this gap by delving into the intricate interplay\nbetween initialization and activation, providing a foundational basis for the\nrobust optimization of Neural Fields. Our theoretical insights reveal a\ndeep-seated connection among network initialization, architectural choices, and\nthe optimization process, emphasizing the need for a holistic approach when\ndesigning cutting-edge Neural Fields.", "keywords": [], "authors_list": ["Hemanth Saratchandran", "Sameera Ramasinghe", "Simon Lucey"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6c1"}, "filepath": "data/2312.02470v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996541207459252, "arXiv_link": "https://arxiv.org/abs/2312.02470v1", "other_link": "", "title": "Neural Lineage", "abstract": "In this paper, we make a bold attempt toward an ambitious task: given a\npre-trained classifier, we aim to reconstruct an image generator, without\nrelying on any data samples. From a black-box perspective, this challenge seems\nintractable, since it inevitably involves identifying the inverse function for\na classifier, which is, by nature, an information extraction process. As such,\nwe resort to leveraging the knowledge encapsulated within the parameters of the\nneural network. Grounded on the theory of Maximum-Margin Bias of gradient\ndescent, we propose a novel learning paradigm, in which the generator is\ntrained to ensure that the convergence conditions of the network parameters are\nsatisfied over the generated distribution of the samples. 
Empirical validation\nfrom various image generation tasks substantiates the efficacy of our strategy.", "keywords": [], "authors_list": ["Runpeng Yu", "Xinchao Wang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6c2"}, "filepath": "data/2403.14430.png", "tags": [], "_media_type": "image", "_rand": 0.9993784056747383, "arXiv_link": "https://arxiv.org/abs/2403.14430", "other_link": "", "title": "Ranking Distillation for Open-Ended Video Question Answering with Insufficient Labels", "abstract": "This paper focuses on open-ended video question answering, which aims to find\nthe correct answers from a large answer set in response to a video-related\nquestion. This is essentially a multi-label classification task, since a\nquestion may have multiple answers. However, due to annotation costs, the\nlabels in existing benchmarks are always extremely insufficient, typically one\nanswer per question. As a result, existing works tend to directly treat all the\nunlabeled answers as negative labels, leading to limited ability for\ngeneralization. In this work, we introduce a simple yet effective ranking\ndistillation framework (RADI) to mitigate this problem without additional\nmanual annotation. RADI employs a teacher model trained with incomplete labels\nto generate rankings for potential answers, which contain rich knowledge about\nlabel priority as well as label-associated visual cues, thereby enriching the\ninsufficient labeling information. To avoid overconfidence in the imperfect\nteacher model, we further present two robust and parameter-free ranking\ndistillation approaches: a pairwise approach which introduces adaptive soft\nmargins to dynamically refine the optimization constraints on various pairwise\nrankings, and a listwise approach which adopts sampling-based partial listwise\nlearning to resist the bias in teacher ranking. Extensive experiments on five\npopular benchmarks consistently show that both our pairwise and listwise RADIs\noutperform state-of-the-art methods. Further analysis demonstrates the\neffectiveness of our methods on the insufficient labeling problem.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Tianming Liang", "Chaolei Tan", "Beihao Xia", "Wei-Shi Zheng", "Jian-Fang Hu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6c3"}, "filepath": "data/2403.04547.png", "tags": [], "_media_type": "image", "_rand": 0.9994422914265756, "arXiv_link": "https://arxiv.org/abs/2403.04547", "other_link": "", "title": "Learn to Rectify the Bias of CLIP for Unsupervised Semantic Segmentation", "abstract": "We study the effectiveness of data-balancing for mitigating biases in\ncontrastive language-image pretraining (CLIP), identifying areas of strength\nand limitation. First, we reaffirm prior conclusions that CLIP models can\ninadvertently absorb societal stereotypes. To counter this, we present a novel\nalgorithm, called Multi-Modal Moment Matching (M4), designed to reduce both\nrepresentation and association biases (i.e. in first- and second-order\nstatistics) in multimodal data. We use M4 to conduct an in-depth analysis\ntaking into account various factors, such as the model, representation, and\ndata size. 
Our study also explores the dynamic nature of how CLIP learns and\nunlearns biases. In particular, we find that fine-tuning is effective in\ncountering representation biases, though its impact diminishes for association\nbiases. Also, data balancing has a mixed impact on quality: it tends to improve\nclassification but can hurt retrieval. Interestingly, data and architectural\nimprovements seem to mitigate the negative impact of data balancing on\nperformance; e.g. applying M4 to SigLIP-B/16 with data quality filters improves\nCOCO image-to-text retrieval @5 from 86% (without data balancing) to 87% and\nImageNet 0-shot classification from 77% to 77.5%! Finally, we conclude with\nrecommendations for improving the efficacy of data balancing in multimodal\nsystems.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Jingyun Wang", "Guoliang Kang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6c4"}, "filepath": "data/2312.02434.png", "tags": [], "_media_type": "image", "_rand": 0.9991575073984856, "arXiv_link": "https://arxiv.org/abs/2312.02434", "other_link": "", "title": "FINER: Flexible spectral-bias tuning in Implicit NEural Representation by Variable-periodic Activation Functions", "abstract": "Implicit Neural Representation (INR), which utilizes a neural network to map\ncoordinate inputs to corresponding attributes, is causing a revolution in the\nfield of signal processing. However, current INR techniques suffer from a\nrestricted capability to tune their supported frequency set, resulting in\nimperfect performance when representing complex signals with multiple\nfrequencies. We have identified that this frequency-related problem can be\ngreatly alleviated by introducing variable-periodic activation functions, for\nwhich we propose FINER. By initializing the bias of the neural network within\ndifferent ranges, sub-functions with various frequencies in the\nvariable-periodic function are selected for activation. Consequently, the\nsupported frequency set of FINER can be flexibly tuned, leading to improved\nperformance in signal representation. We demonstrate the capabilities of FINER\nin the contexts of 2D image fitting, 3D signed distance field representation,\nand 5D neural radiance fields optimization, and we show that it outperforms\nexisting INRs.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Zhen Liu", "Hao Zhu", "Qi Zhang", "Jingde Fu", "Weibing Deng", "Zhan Ma", "Yanwen Guo", "Xun Cao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6c5"}, "filepath": "data/2404.00228.png", "tags": [], "_media_type": "image", "_rand": 0.999758706251247, "arXiv_link": "https://arxiv.org/abs/2404.00228", "other_link": "", "title": "InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning", "abstract": "Continual learning requires the model to learn multiple tasks sequentially.\nIn continual learning, the model should possess the ability to maintain its\nperformance on old tasks (stability) and the ability to adapt to new tasks\ncontinuously (plasticity). 
Recently, parameter-efficient fine-tuning (PEFT),\nwhich involves freezing a pre-trained model and injecting a small number of\nlearnable parameters to adapt to downstream tasks, has gained increasing\npopularity in continual learning. Although existing continual learning methods\nbased on PEFT have demonstrated superior performance compared to those not\nbased on PEFT, most of them do not consider how to eliminate the interference\nof the new task on the old tasks, which inhibits the model from making a good\ntrade-off between stability and plasticity. In this work, we propose a new PEFT\nmethod, called interference-free low-rank adaptation (InfLoRA), for continual\nlearning. InfLoRA injects a small number of parameters to reparameterize the\npre-trained weights and shows that fine-tuning these injected parameters is\nequivalent to fine-tuning the pre-trained weights within a subspace.\nFurthermore, InfLoRA designs this subspace to eliminate the interference of the\nnew task on the old tasks, making a good trade-off between stability and\nplasticity. Experimental results show that InfLoRA outperforms existing\nstate-of-the-art continual learning methods on multiple datasets.", "keywords": [], "authors_list": ["Yan-Shuo Liang", "Wu-Jun Li"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6c6"}, "filepath": "data/2403.11184.png", "tags": [], "_media_type": "image", "_rand": 0.9992281108733027, "arXiv_link": "https://arxiv.org/abs/2403.11184", "other_link": "https://github.com/Wu0409/DuPL.", "title": "DuPL: Dual Student with Trustworthy Progressive Learning for Robust Weakly Supervised Semantic Segmentation", "abstract": "Recently, One-stage Weakly Supervised Semantic Segmentation (WSSS) with\nimage-level labels has gained increasing interest due to simplification over\nits cumbersome multi-stage counterpart. Limited by the inherent ambiguity of\nClass Activation Map (CAM), we observe that one-stage pipelines often encounter\nconfirmation bias caused by incorrect CAM pseudo-labels, impairing their final\nsegmentation performance. Although recent works discard many unreliable\npseudo-labels to implicitly alleviate this issue, they fail to exploit\nsufficient supervision for their models. To this end, we propose a dual student\nframework with trustworthy progressive learning (DuPL). Specifically, we\npropose a dual student network with a discrepancy loss to yield diverse CAMs\nfor each sub-net. The two sub-nets generate supervision for each other,\nmitigating the confirmation bias caused by learning their own incorrect\npseudo-labels. In this process, we progressively introduce more trustworthy\npseudo-labels to be involved in the supervision through dynamic threshold\nadjustment with an adaptive noise filtering strategy. Moreover, we believe that\nevery pixel, even discarded from supervision due to its unreliability, is\nimportant for WSSS. Thus, we develop consistency regularization on these\ndiscarded regions, providing supervision of every pixel. Experiment results\ndemonstrate the superiority of the proposed DuPL over the recent\nstate-of-the-art alternatives on PASCAL VOC 2012 and MS COCO datasets. 
Code is\navailable at https://github.com/Wu0409/DuPL.", "keywords": [], "authors_list": ["Yuanchen Wu", "Xichen Ye", "Kequan Yang", "Jide Li", "Xiaoqiang Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6c7"}, "filepath": "data/2403.11463v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991629432235858, "arXiv_link": "https://arxiv.org/abs/2403.11463v2", "other_link": "", "title": "Siamese Learning with Joint Alignment and Regression for Weakly-Supervised Video Paragraph Grounding", "abstract": "Video Paragraph Grounding (VPG) is an emerging task in video-language\nunderstanding, which aims at localizing multiple sentences with semantic\nrelations and temporal order from an untrimmed video. However, existing VPG\napproaches are heavily reliant on a considerable number of temporal labels that\nare laborious and time-consuming to acquire. In this work, we introduce and\nexplore Weakly-Supervised Video Paragraph Grounding (WSVPG) to eliminate the\nneed of temporal annotations. Different from previous weakly-supervised\ngrounding frameworks based on multiple instance learning or reconstruction\nlearning for two-stage candidate ranking, we propose a novel siamese learning\nframework that jointly learns the cross-modal feature alignment and temporal\ncoordinate regression without timestamp labels to achieve concise one-stage\nlocalization for WSVPG. Specifically, we devise a Siamese Grounding TRansformer\n(SiamGTR) consisting of two weight-sharing branches for learning complementary\nsupervision. An Augmentation Branch is utilized for directly regressing the\ntemporal boundaries of a complete paragraph within a pseudo video, and an\nInference Branch is designed to capture the order-guided feature correspondence\nfor localizing multiple sentences in a normal video. We demonstrate by\nextensive experiments that our paradigm has superior practicability and\nflexibility to achieve efficient weakly-supervised or semi-supervised learning,\noutperforming state-of-the-art methods trained with the same or stronger\nsupervision.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Chaolei Tan", "Jianhuang Lai", "Wei-Shi Zheng", "Jian-Fang Hu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6c8"}, "filepath": "data/2403.10255v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994485124892609, "arXiv_link": "https://arxiv.org/html/2403.10255v1", "other_link": "", "title": "Prompt Augmentation for Self-supervised Text-guided Image Manipulation", "abstract": "Super-resolution (SR) and image generation are important tasks in computer\nvision and are widely adopted in real-world applications. Most existing\nmethods, however, generate images only at fixed-scale magnification and suffer\nfrom over-smoothing and artifacts. Additionally, they do not offer enough\ndiversity of output images nor image consistency at different scales. Most\nrelevant work applied Implicit Neural Representation (INR) to the denoising\ndiffusion model to obtain continuous-resolution yet diverse and high-quality SR\nresults. 
Since this model operates in the image space, the larger the\nresolution of image is produced, the more memory and inference time is\nrequired, and it also does not maintain scale-specific consistency. We propose\na novel pipeline that can super-resolve an input image or generate from a\nrandom noise a novel image at arbitrary scales. The method consists of a\npretrained auto-encoder, a latent diffusion model, and an implicit neural\ndecoder, and their learning strategies. The proposed method adopts diffusion\nprocesses in a latent space, thus efficient, yet aligned with output image\nspace decoded by MLPs at arbitrary scales. More specifically, our\narbitrary-scale decoder is designed by the symmetric decoder w/o up-scaling\nfrom the pretrained auto-encoder, and Local Implicit Image Function (LIIF) in\nseries. The latent diffusion process is learnt by the denoising and the\nalignment losses jointly. Errors in output images are backpropagated via the\nfixed decoder, improving the quality of output images. In the extensive\nexperiments using multiple public benchmarks on the two tasks i.e. image\nsuper-resolution and novel image generation at arbitrary scales, the proposed\nmethod outperforms relevant methods in metrics of image quality, diversity and\nscale consistency. It is significantly better than the relevant prior-art in\nthe inference speed and memory usage.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Rumeysa Bodur", "Binod Bhattarai", "Tae-Kyun Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6c9"}, "filepath": "data/2312.04265.png", "tags": [], "_media_type": "image", "_rand": 0.9990017920751632, "arXiv_link": "https://arxiv.org/abs/2312.04265", "other_link": "https://github.com/w1oves/Rein.git.", "title": "Stronger, Fewer, & Superior: Harnessing Vision Foundation Models for Domain Generalized Semantic Segmentation", "abstract": "In this paper, we first assess and harness various Vision Foundation Models\n(VFMs) in the context of Domain Generalized Semantic Segmentation (DGSS).\nDriven by the motivation that Leveraging Stronger pre-trained models and Fewer\ntrainable parameters for Superior generalizability, we introduce a robust\nfine-tuning approach, namely Rein, to parameter-efficiently harness VFMs for\nDGSS. Built upon a set of trainable tokens, each linked to distinct instances,\nRein precisely refines and forwards the feature maps from each layer to the\nnext layer within the backbone. This process produces diverse refinements for\ndifferent categories within a single image. With fewer trainable parameters,\nRein efficiently fine-tunes VFMs for DGSS tasks, surprisingly surpassing full\nparameter fine-tuning. 
Extensive experiments across various settings\ndemonstrate that Rein significantly outperforms state-of-the-art methods.\nRemarkably, with just an extra 1% of trainable parameters within the frozen\nbackbone, Rein achieves a mIoU of 78.4% on the Cityscapes, without accessing\nany real urban-scene datasets. Code is available at\nhttps://github.com/w1oves/Rein.git.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["ZHIXIANG WEI", "Lin Chen", "Xiaoxiao Ma", "Huaian Chen", "Tianle Liu", "Pengyang Ling", "Jinjin Zheng", "Ben Wang", "Yi Jin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6ca"}, "filepath": "data/2311.18836.png", "tags": [], "_media_type": "image", "_rand": 0.9998178763046249, "arXiv_link": "https://arxiv.org/abs/2311.18836", "other_link": "", "title": "PoseGPT: Chatting about 3D Human Pose", "abstract": "We introduce ChatPose, a framework employing Large Language Models (LLMs) to\nunderstand and reason about 3D human poses from images or textual descriptions.\nOur work is motivated by the human ability to intuitively understand postures\nfrom a single image or a brief description, a process that intertwines image\ninterpretation, world knowledge, and an understanding of body language.\nTraditional human pose estimation and generation methods often operate in\nisolation, lacking semantic understanding and reasoning abilities. ChatPose\naddresses these limitations by embedding SMPL poses as distinct signal tokens\nwithin a multimodal LLM, enabling the direct generation of 3D body poses from\nboth textual and visual inputs. Leveraging the powerful capabilities of\nmultimodal LLMs, ChatPose unifies classical 3D human pose and generation tasks\nwhile offering user interactions. Additionally, ChatPose empowers LLMs to apply\ntheir extensive world knowledge in reasoning about human poses, leading to two\nadvanced tasks: speculative pose generation and reasoning about pose\nestimation. These tasks involve reasoning about humans to generate 3D poses\nfrom subtle text queries, possibly accompanied by images. We establish\nbenchmarks for these tasks, moving beyond traditional 3D pose generation and\nestimation methods. Our results show that ChatPose outperforms existing\nmultimodal LLMs and task-specific methods on these newly proposed tasks.\nFurthermore, ChatPose's ability to understand and generate 3D human poses based\non complex reasoning opens new directions in human pose analysis.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Biometrics and human analysis"], "authors_list": ["Yao Feng", "Jing Lin", "Sai Kumar Dwivedi", "Yu Sun", "Priyanka Patel", "Michael J. Black"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6cb"}, "filepath": "data/2403.11380.png", "tags": [], "_media_type": "image", "_rand": 0.999679849434443, "arXiv_link": "https://arxiv.org/abs/2403.11380", "other_link": "", "title": "Boosting Order-Preserving and Transferability for Neural Architecture Search: a Joint Architecture Refined Search and Fine-tuning Approach", "abstract": "Supernet is a core component in many recent Neural Architecture Search (NAS)\nmethods. 
It not only helps embody the search space but also provides a\n(relative) estimation of the final performance of candidate architectures.\nThus, it is critical that the top architectures ranked by a supernet should be\nconsistent with those ranked by true performance, which is known as the\norder-preserving ability. In this work, we analyze the order-preserving ability\non the whole search space (global) and a sub-space of top architectures\n(local), and empirically show that the local order-preserving for current\ntwo-stage NAS methods still need to be improved. To rectify this, we propose a\nnovel concept of Supernet Shifting, a refined search strategy combining\narchitecture searching with supernet fine-tuning. Specifically, apart from\nevaluating, the training loss is also accumulated in searching and the supernet\nis updated every iteration. Since superior architectures are sampled more\nfrequently in evolutionary searching, the supernet is encouraged to focus on\ntop architectures, thus improving local order-preserving. Besides, a\npre-trained supernet is often un-reusable for one-shot methods. We show that\nSupernet Shifting can fulfill transferring supernet to a new dataset.\nSpecifically, the last classifier layer will be unset and trained through\nevolutionary searching. Comprehensive experiments show that our method has\nbetter order-preserving ability and can find a dominating architecture.\nMoreover, the pre-trained supernet can be easily transferred into a new dataset\nwith no loss of performance.", "keywords": [], "authors_list": ["Beichen Zhang", "Xiaoxing Wang", "Xiaohan Qin", "Junchi Yan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6cc"}, "filepath": "data/2403.03662v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996949969469794, "arXiv_link": "https://arxiv.org/abs/2403.03662v1", "other_link": "", "title": "Harnessing Meta-Learning for Improving Full-Frame Video Stabilization", "abstract": "Video stabilization is a longstanding computer vision problem, particularly\npixel-level synthesis solutions for video stabilization which synthesize full\nframes add to the complexity of this task. These techniques aim to stabilize\nvideos by synthesizing full frames while enhancing the stability of the\nconsidered video. This intensifies the complexity of the task due to the\ndistinct mix of unique motion profiles and visual content present in each video\nsequence, making robust generalization with fixed parameters difficult. In our\nstudy, we introduce a novel approach to enhance the performance of pixel-level\nsynthesis solutions for video stabilization by adapting these models to\nindividual input video sequences. The proposed adaptation exploits low-level\nvisual cues accessible during test-time to improve both the stability and\nquality of resulting videos. We highlight the efficacy of our methodology of\n\"test-time adaptation\" through simple fine-tuning of one of these models,\nfollowed by significant stability gain via the integration of meta-learning\ntechniques. Notably, significant improvement is achieved with only a single\nadaptation step. 
The versatility of the proposed algorithm is demonstrated by\nconsistently improving the performance of various pixel-level synthesis models\nfor video stabilization in real-world scenarios.", "keywords": ["Image and video generation and manipulation", "Low-level vision", "Deep learning architectures and techniques"], "authors_list": ["Muhammad Kashif Ali", "Eun Woo Im", "Dongjin Kim", "Tae Hyun Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6cd"}, "filepath": "data/2307.16125.png", "tags": [], "_media_type": "image", "_rand": 0.9991092326742183, "arXiv_link": "https://arxiv.org/abs/2307.16125", "other_link": "", "title": "SEED-Bench: Benchmarking Multimodal Large Language Models", "abstract": "Based on powerful Large Language Models (LLMs), recent generative Multimodal\nLarge Language Models (MLLMs) have gained prominence as a pivotal research\narea, exhibiting remarkable capability for both comprehension and generation.\nIn this work, we address the evaluation of generative comprehension in MLLMs as\na preliminary step towards a comprehensive assessment of generative models, by\nintroducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple\nchoice questions with accurate human annotations (x 6 larger than existing\nbenchmarks), which spans 12 evaluation dimensions including the comprehension\nof both the image and video modality. We develop an advanced pipeline for\ngenerating multiple-choice questions that target specific evaluation\ndimensions, integrating both automatic filtering and manual verification\nprocesses. Multiple-choice questions with groundtruth options derived from\nhuman annotation enables an objective and efficient assessment of model\nperformance, eliminating the need for human or GPT intervention during\nevaluation. We further evaluate the performance of 18 models across all 12\ndimensions, covering both the spatial and temporal understanding. By revealing\nthe limitations of existing MLLMs through evaluation results, we aim for\nSEED-Bench to provide insights for motivating future research. 
We will launch\nand consistently maintain a leaderboard to provide a platform for the community\nto assess and investigate model capability.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Bohao Li", "Yuying Ge", "Yixiao Ge", "Guangzhi Wang", "Rui Wang", "Ruimao Zhang", "Ying Shan"], "category_name": "Computation and Language", "all_categories": ["Computation and Language", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6ce"}, "filepath": "data/2311.06607.png", "tags": [], "_media_type": "image", "_rand": 0.9992781158449897, "arXiv_link": "https://arxiv.org/abs/2311.06607", "other_link": "https://github.com/Yuliang-Liu/Monkey.", "title": "Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models", "abstract": "Large Multimodal Models (LMMs) have shown promise in vision-language tasks\nbut struggle with high-resolution input and detailed scene understanding.\nAddressing these challenges, we introduce Monkey to enhance LMM capabilities.\nFirstly, Monkey processes input images by dividing them into uniform patches,\neach matching the size (e.g., 448x448) used in the original training of the\nwell-trained vision encoder. Equipped with individual adapter for each patch,\nMonkey can handle higher resolutions up to 1344x896 pixels, enabling the\ndetailed capture of complex visual information. Secondly, it employs a\nmulti-level description generation method, enriching the context for\nscene-object associations. This two-part strategy ensures more effective\nlearning from generated data: the higher resolution allows for a more detailed\ncapture of visuals, which in turn enhances the effectiveness of comprehensive\ndescriptions. Extensive ablative results validate the effectiveness of our\ndesigns. Additionally, experiments on 18 datasets further demonstrate that\nMonkey surpasses existing LMMs in many tasks like Image Captioning and various\nVisual Question Answering formats. Specially, in qualitative tests focused on\ndense text question answering, Monkey has exhibited encouraging results\ncompared with GPT4V. Code is available at\nhttps://github.com/Yuliang-Liu/Monkey.", "keywords": ["Multimodal models and vision-language models", "Scene analysis and understanding", "Deep learning architectures and techniques"], "authors_list": ["Zhang Li", "Biao Yang", "Qiang Liu", "Zhiyin Ma", "Shuo Zhang", "Jingxu Yang", "Yabo Sun", "Yuliang Liu", "Xiang Bai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6cf"}, "filepath": "data/2402.06117.png", "tags": [], "_media_type": "image", "_rand": 0.9991963404498312, "arXiv_link": "https://arxiv.org/abs/2402.06117", "other_link": "", "title": "AdaRevD: Adaptive Patch Exiting Reversible Decoder Pushes the Limit of Image Deblurring", "abstract": "This paper tackles the problem of motion deblurring of dynamic scenes.\nAlthough end-to-end fully convolutional designs have recently advanced the\nstate-of-the-art in non-uniform motion deblurring, their performance-complexity\ntrade-off is still sub-optimal. Most existing approaches achieve a large\nreceptive field by increasing the number of generic convolution layers and\nkernel size. 
In this work, we propose a pixel adaptive and feature attentive\ndesign for handling large blur variations across different spatial locations\nand process each test image adaptively. We design a content-aware global-local\nfiltering module that significantly improves performance by considering not\nonly global dependencies but also by dynamically exploiting neighboring pixel\ninformation. We further introduce a pixel-adaptive non-uniform sampling\nstrategy that implicitly discovers the difficult-to-restore regions present in\nthe image and, in turn, performs fine-grained refinement in a progressive\nmanner. Extensive qualitative and quantitative comparisons with prior art on\ndeblurring benchmarks demonstrate that our approach performs favorably against\nthe state-of-the-art deblurring algorithms.", "keywords": ["Efficient and scalable vision", "Low-level vision"], "authors_list": ["Xintian Mao", "Xiwen Gao", "Yan Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6d0"}, "filepath": "data/2401.16287.png", "tags": [], "_media_type": "image", "_rand": 0.999127513454478, "arXiv_link": "https://arxiv.org/abs/2401.16287", "other_link": "", "title": "E-GPS: Explainable Geometry Problem Solving via Top-Down Solver and Bottom-Up Generator", "abstract": "Geometry problem solving presents a formidable challenge within the NLP\ncommunity. Existing approaches often rely on models designed for solving math\nword problems, neglecting the unique characteristics of geometry math problems.\nAdditionally, the current research predominantly focuses on geometry\ncalculation problems, while overlooking other essential aspects like proving.\nIn this study, we address these limitations by proposing the Geometry-Aware\nProblem Solver (GAPS) model. GAPS is specifically designed to generate solution\nprograms for geometry math problems of various types with the help of its\nunique problem-type classifier. To achieve this, GAPS treats the solution\nprogram as a composition of operators and operands, segregating their\ngeneration processes. Furthermore, we introduce the geometry elements\nenhancement method, which enhances the ability of GAPS to recognize geometry\nelements accurately. By leveraging these improvements, GAPS showcases\nremarkable performance in resolving geometry math problems. Our experiments\nconducted on the UniGeo dataset demonstrate the superiority of GAPS over the\nstate-of-the-art model, Geoformer. Specifically, GAPS achieves an accuracy\nimprovement of more than 5.3% for calculation tasks and an impressive 41.1% for\nproving tasks. 
Notably, GAPS achieves an impressive accuracy of 97.5% on\nproving problems, representing a significant advancement in solving geometry\nproving tasks.", "keywords": ["Document analysis and understanding"], "authors_list": ["Wenjun Wu", "Lingling Zhang", "Jun Liu", "Xi Tang", "Yaxian Wang", "Shaowei Wang", "QianYing Wang"], "category_name": "Artificial Intelligence", "all_categories": ["Artificial Intelligence", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6d1"}, "filepath": "data/2403.15241.png", "tags": [], "_media_type": "image", "_rand": 0.9999789215577725, "arXiv_link": "https://arxiv.org/abs/2403.15241", "other_link": "https://github.com/yinjunbo/IS-Fusion.", "title": "IS-Fusion: Instance-Scene Collaborative Fusion for Multimodal 3D Object Detection", "abstract": "Bird's eye view (BEV) representation has emerged as a dominant solution for\ndescribing 3D space in autonomous driving scenarios. However, objects in the\nBEV representation typically exhibit small sizes, and the associated point\ncloud context is inherently sparse, which leads to great challenges for\nreliable 3D perception. In this paper, we propose IS-Fusion, an innovative\nmultimodal fusion framework that jointly captures the Instance- and Scene-level\ncontextual information. IS-Fusion essentially differs from existing approaches\nthat only focus on the BEV scene-level fusion by explicitly incorporating\ninstance-level multimodal information, thus facilitating the instance-centric\ntasks like 3D object detection. It comprises a Hierarchical Scene Fusion (HSF)\nmodule and an Instance-Guided Fusion (IGF) module. HSF applies Point-to-Grid\nand Grid-to-Region transformers to capture the multimodal scene context at\ndifferent granularities. IGF mines instance candidates, explores their\nrelationships, and aggregates the local multimodal context for each instance.\nThese instances then serve as guidance to enhance the scene feature and yield\nan instance-aware BEV representation. On the challenging nuScenes benchmark,\nIS-Fusion outperforms all the published multimodal works to date. Code is\navailable at: https://github.com/yinjunbo/IS-Fusion.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Junbo Yin", "Wenguan Wang", "Runnan Chen", "Wei Li", "Ruigang Yang", "Pascal Frossard", "Jianbing Shen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6d2"}, "filepath": "data/2312.04089.png", "tags": [], "_media_type": "image", "_rand": 0.9992070712440652, "arXiv_link": "https://arxiv.org/abs/2312.04089", "other_link": "", "title": "Open-Vocabulary Semantic Segmentation with Image Embedding Balancing", "abstract": "This paper studies open-vocabulary segmentation (OVS) through calibrating\nin-vocabulary and domain-biased embedding space with generalized contextual\nprior of CLIP. As the core of open-vocabulary understanding, alignment of\nvisual content with the semantics of unbounded text has become the bottleneck\nof this field. To address this challenge, recent works propose to utilize CLIP\nas an additional classifier and aggregate model predictions with CLIP\nclassification results. 
Despite their remarkable progress, performance of OVS\nmethods in relevant scenarios is still unsatisfactory compared with supervised\ncounterparts. We attribute this to the in-vocabulary embedding and\ndomain-biased CLIP prediction. To this end, we present a Semantic-assisted\nCAlibration Network (SCAN). In SCAN, we incorporate generalized semantic prior\nof CLIP into proposal embedding to avoid collapsing on known categories.\nBesides, a contextual shift strategy is applied to mitigate the lack of global\ncontext and unnatural background noise. With above designs, SCAN achieves\nstate-of-the-art performance on all popular open-vocabulary segmentation\nbenchmarks. Furthermore, we also focus on the problem of existing evaluation\nsystem that ignores semantic duplication across categories, and propose a new\nmetric called Semantic-Guided IoU (SG-IoU).", "keywords": [], "authors_list": ["Xiangheng Shan", "Dongyue Wu", "Guilin Zhu", "Yuanjie Shao", "Nong Sang", "Changxin Gao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6d3"}, "filepath": "data/2403.02981.png", "tags": [], "_media_type": "image", "_rand": 0.9999979131781779, "arXiv_link": "https://arxiv.org/abs/2403.02981", "other_link": "https://github.com/xuesong39/DAC.", "title": "Doubly Abductive Counterfactual Inference for Text-based Image Editing", "abstract": "We study text-based image editing (TBIE) of a single image by counterfactual\ninference because it is an elegant formulation to precisely address the\nrequirement: the edited image should retain the fidelity of the original one.\nThrough the lens of the formulation, we find that the crux of TBIE is that\nexisting techniques hardly achieve a good trade-off between editability and\nfidelity, mainly due to the overfitting of the single-image fine-tuning. To\nthis end, we propose a Doubly Abductive Counterfactual inference framework\n(DAC). We first parameterize an exogenous variable as a UNet LoRA, whose\nabduction can encode all the image details. Second, we abduct another exogenous\nvariable parameterized by a text encoder LoRA, which recovers the lost\neditability caused by the overfitted first abduction. Thanks to the second\nabduction, which exclusively encodes the visual transition from post-edit to\npre-edit, its inversion -- subtracting the LoRA -- effectively reverts pre-edit\nback to post-edit, thereby accomplishing the edit. Through extensive\nexperiments, our DAC achieves a good trade-off between editability and\nfidelity. Thus, we can support a wide spectrum of user editing intents,\nincluding addition, removal, manipulation, replacement, style transfer, and\nfacial change, which are extensively validated in both qualitative and\nquantitative evaluations. 
Codes are in https://github.com/xuesong39/DAC.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Xue Song", "Jiequan Cui", "Hanwang Zhang", "Jingjing Chen", "Richang Hong", "Yu-Gang Jiang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6d4"}, "filepath": "data/2303.10613.png", "tags": [], "_media_type": "image", "_rand": 0.9993390066145347, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2303.10613", "other_link": "https://github.com/BunnySoCrazy/SECAD-Net.", "title": "SfmCAD: Unsupervised CAD Reconstruction by Learning Sketch-based Feature Modeling Operations", "abstract": "Reverse engineering CAD models from raw geometry is a classic but strenuous\nresearch problem. Previous learning-based methods rely heavily on labels due to\nthe supervised design patterns or reconstruct CAD shapes that are not easily\neditable. In this work, we introduce SECAD-Net, an end-to-end neural network\naimed at reconstructing compact and easy-to-edit CAD models in a\nself-supervised manner. Drawing inspiration from the modeling language that is\nmost commonly used in modern CAD software, we propose to learn 2D sketches and\n3D extrusion parameters from raw shapes, from which a set of extrusion\ncylinders can be generated by extruding each sketch from a 2D plane into a 3D\nbody. By incorporating the Boolean operation (i.e., union), these cylinders can\nbe combined to closely approximate the target geometry. We advocate the use of\nimplicit fields for sketch representation, which allows for creating CAD\nvariations by interpolating latent codes in the sketch latent space. Extensive\nexperiments on both ABC and Fusion 360 datasets demonstrate the effectiveness\nof our method, and show superiority over state-of-the-art alternatives\nincluding the closely related method for supervised CAD reconstruction. We\nfurther apply our approach to CAD editing and single-view CAD reconstruction.\nThe code is released at https://github.com/BunnySoCrazy/SECAD-Net.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Pu Li", "Jianwei Guo", "HUIBIN LI", "Bedrich Benes", "Dong-Ming Yan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6d5"}, "filepath": "data/2403.20002.png", "tags": [], "_media_type": "image", "_rand": 0.9998723380239167, "arXiv_link": "https://arxiv.org/abs/2403.20002", "other_link": "https://sites.google.com/view/cvpr24-2034-submission/home.", "title": "Grounding and Enhancing Grid-based Models for Neural Fields", "abstract": "Many contemporary studies utilize grid-based models for neural field\nrepresentation, but a systematic analysis of grid-based models is still\nmissing, hindering the improvement of those models. Therefore, this paper\nintroduces a theoretical framework for grid-based models. This framework points\nout that these models' approximation and generalization behaviors are\ndetermined by grid tangent kernels (GTK), which are intrinsic properties of\ngrid-based models. The proposed framework facilitates a consistent and\nsystematic analysis of diverse grid-based models. 
Furthermore, the introduced\nframework motivates the development of a novel grid-based model named the\nMultiplicative Fourier Adaptive Grid (MulFAGrid). The numerical analysis\ndemonstrates that MulFAGrid exhibits a lower generalization bound than its\npredecessors, indicating its robust generalization performance. Empirical\nstudies reveal that MulFAGrid achieves state-of-the-art performance in various\ntasks, including 2D image fitting, 3D signed distance field (SDF)\nreconstruction, and novel view synthesis, demonstrating superior representation\nability. The project website is available at\nhttps://sites.google.com/view/cvpr24-2034-submission/home.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Zelin Zhao", "FENGLEI FAN", "Wenlong Liao", "Junchi Yan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6d6"}, "filepath": "data/2404.01591.png", "tags": [], "_media_type": "image", "_rand": 0.9998668867048653, "arXiv_link": "https://arxiv.org/abs/2404.01591", "other_link": "https://github.com/NingWang2049/LaIAR.", "title": "Language Model Guided Interpretable Video Action Reasoning", "abstract": "While neural networks have excelled in video action recognition tasks, their\nblack-box nature often obscures the understanding of their decision-making\nprocesses. Recent approaches used inherently interpretable models to analyze\nvideo actions in a manner akin to human reasoning. These models, however,\nusually fall short in performance compared to their black-box counterparts. In\nthis work, we present a new framework named Language-guided Interpretable\nAction Recognition framework (LaIAR). LaIAR leverages knowledge from language\nmodels to enhance both the recognition capabilities and the interpretability of\nvideo models. In essence, we redefine the problem of understanding video model\ndecisions as a task of aligning video and language models. Using the logical\nreasoning captured by the language model, we steer the training of the video\nmodel. This integrated approach not only improves the video model's\nadaptability to different domains but also boosts its overall performance.\nExtensive experiments on two complex video action datasets, Charades & CAD-120,\nvalidates the improved performance and interpretability of our LaIAR framework.\nThe code of LaIAR is available at https://github.com/NingWang2049/LaIAR.", "keywords": ["Multimodal models and vision-language models", "Scene analysis and understanding"], "authors_list": ["Ning Wang", "Guangming Zhu", "Hongsheng Li", "Liang Zhang", "Syed Afaq Ali Shah", "Mohammed Bennamoun"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6d7"}, "filepath": "data/2311.17984.png", "tags": [], "_media_type": "image", "_rand": 0.9999922572327297, "arXiv_link": "https://arxiv.org/abs/2311.17984", "other_link": "", "title": "4D-fy: Text-to-4D Generation Using Hybrid Score Distillation Sampling", "abstract": "Recent breakthroughs in text-to-4D generation rely on pre-trained\ntext-to-image and text-to-video models to generate dynamic 3D scenes. However,\ncurrent text-to-4D methods face a three-way tradeoff between the quality of\nscene appearance, 3D structure, and motion. 
For example, text-to-image models\nand their 3D-aware variants are trained on internet-scale image datasets and\ncan be used to produce scenes with realistic appearance and 3D structure -- but\nno motion. Text-to-video models are trained on relatively smaller video\ndatasets and can produce scenes with motion, but poorer appearance and 3D\nstructure. While these models have complementary strengths, they also have\nopposing weaknesses, making it difficult to combine them in a way that\nalleviates this three-way tradeoff. Here, we introduce hybrid score\ndistillation sampling, an alternating optimization procedure that blends\nsupervision signals from multiple pre-trained diffusion models and incorporates\nbenefits of each for high-fidelity text-to-4D generation. Using hybrid SDS, we\ndemonstrate synthesis of 4D scenes with compelling appearance, 3D structure,\nand motion.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Sherwin Bahmani", "Ivan Skorokhodov", "Victor Rong", "Gordon Wetzstein", "Leonidas Guibas", "Peter Wonka", "Sergey Tulyakov", "Jeong Joon Park", "Andrea Tagliasacchi", "David B. Lindell"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6d8"}, "filepath": "data/2311.15851.png", "tags": [], "_media_type": "image", "_rand": 0.9995479790647743, "arXiv_link": "https://arxiv.org/abs/2311.15851", "other_link": "https://github.com/Zongwei97/UnTrack.", "title": "Single-Model and Any-Modality for Video Object Tracking", "abstract": "In the realm of video object tracking, auxiliary modalities such as depth,\nthermal, or event data have emerged as valuable assets to complement the RGB\ntrackers. In practice, most existing RGB trackers learn a single set of\nparameters to use them across datasets and applications. However, a similar\nsingle-model unification for multi-modality tracking presents several\nchallenges. These challenges stem from the inherent heterogeneity of inputs --\neach with modality-specific representations, the scarcity of multi-modal\ndatasets, and the absence of all the modalities at all times. In this work, we\nintroduce Un-Track, a Unified Tracker of a single set of parameters for any\nmodality. To handle any modality, our method learns their common latent space\nthrough low-rank factorization and reconstruction techniques. More importantly,\nwe use only the RGB-X pairs to learn the common latent space. This unique\nshared representation seamlessly binds all modalities together, enabling\neffective unification and accommodating any missing modality, all within a\nsingle transformer-based architecture. Our Un-Track achieves +8.1 absolute\nF-score gain, on the DepthTrack dataset, by introducing only +2.14 (over 21.50)\nGFLOPs with +6.6M (over 93M) parameters, through a simple yet efficient\nprompting strategy. Extensive comparisons on five benchmark datasets with\ndifferent modalities show that Un-Track surpasses both SOTA unified trackers\nand modality-specific counterparts, validating our effectiveness and\npracticality. 
The source code is publicly available at\nhttps://github.com/Zongwei97/UnTrack.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Zongwei Wu", "Jilai Zheng", "Xiangxuan Ren", "Florin-Alexandru Vasluianu", "Chao Ma", "Danda Paudel", "Luc Van Gool", "Radu Timofte"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6d9"}, "filepath": "data/2403.16387.png", "tags": [], "_media_type": "image", "_rand": 0.9995042912524842, "arXiv_link": "https://arxiv.org/abs/2403.16387", "other_link": "https://github.com/XunpengYi/Text-IF.", "title": "Text-IF: Leveraging Semantic Text Guidance for Degradation-Aware and Interactive Image Fusion", "abstract": "Image fusion aims to combine information from different source images to\ncreate a comprehensively representative image. Existing fusion methods are\ntypically helpless in dealing with degradations in low-quality source images\nand non-interactive to multiple subjective and objective needs. To solve them,\nwe introduce a novel approach that leverages semantic text guidance image\nfusion model for degradation-aware and interactive image fusion task, termed as\nText-IF. It innovatively extends the classical image fusion to the text guided\nimage fusion along with the ability to harmoniously address the degradation and\ninteraction issues during fusion. Through the text semantic encoder and\nsemantic interaction fusion decoder, Text-IF is accessible to the all-in-one\ninfrared and visible image degradation-aware processing and the interactive\nflexible fusion outcomes. In this way, Text-IF achieves not only multi-modal\nimage fusion, but also multi-modal information fusion. Extensive experiments\nprove that our proposed text guided image fusion strategy has obvious\nadvantages over SOTA methods in the image fusion performance and degradation\ntreatment. The code is available at https://github.com/XunpengYi/Text-IF.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Xunpeng Yi", "Han Xu", "HAO ZHANG", "Linfeng Tang", "Jiayi Ma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6da"}, "filepath": "data/2404.16752.png", "tags": [], "_media_type": "image", "_rand": 0.999276747256016, "arXiv_link": "https://arxiv.org/abs/2404.16752", "other_link": "https://tokenhmr.is.tue.mpg.de.", "title": "TokenHMR: Advancing Human Mesh Recovery with a Tokenized Pose Representation", "abstract": "We address the problem of regressing 3D human pose and shape from a single\nimage, with a focus on 3D accuracy. The current best methods leverage large\ndatasets of 3D pseudo-ground-truth (p-GT) and 2D keypoints, leading to robust\nperformance. With such methods, we observe a paradoxical decline in 3D pose\naccuracy with increasing 2D accuracy. This is caused by biases in the p-GT and\nthe use of an approximate camera projection model. We quantify the error\ninduced by current camera models and show that fitting 2D keypoints and p-GT\naccurately causes incorrect 3D poses. Our analysis defines the invalid\ndistances within which minimizing 2D and p-GT losses is detrimental. 
We use\nthis to formulate a new loss Threshold-Adaptive Loss Scaling (TALS) that\npenalizes gross 2D and p-GT losses but not smaller ones. With such a loss,\nthere are many 3D poses that could equally explain the 2D evidence. To reduce\nthis ambiguity we need a prior over valid human poses but such priors can\nintroduce unwanted bias. To address this, we exploit a tokenized representation\nof human pose and reformulate the problem as token prediction. This restricts\nthe estimated poses to the space of valid poses, effectively providing a\nuniform prior. Extensive experiments on the EMDB and 3DPW datasets show that\nour reformulated keypoint loss and tokenization allows us to train on\nin-the-wild data while improving 3D accuracy over the state-of-the-art. Our\nmodels and code are available for research at https://tokenhmr.is.tue.mpg.de.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Sai Kumar Dwivedi", "Yu Sun", "Priyanka Patel", "Yao Feng", "Michael J. Black"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6db"}, "filepath": "data/2303.09383v3.png", "tags": [], "_media_type": "image", "_rand": 0.9996773953746152, "arXiv_link": "https://arxiv.org/html/2303.09383v3", "other_link": "https://github.com/cvlab-stonybrook/HAT.", "title": "Unifying Top-down and Bottom-up Scanpath Prediction using Transformers", "abstract": "Most models of visual attention aim at predicting either top-down or\nbottom-up control, as studied using different visual search and free-viewing\ntasks. In this paper we propose the Human Attention Transformer (HAT), a single\nmodel that predicts both forms of attention control. HAT uses a novel\ntransformer-based architecture and a simplified foveated retina that\ncollectively create a spatio-temporal awareness akin to the dynamic visual\nworking memory of humans. HAT not only establishes a new state-of-the-art in\npredicting the scanpath of fixations made during target-present and\ntarget-absent visual search and ``taskless'' free viewing, but also makes human\ngaze behavior interpretable. Unlike previous methods that rely on a coarse grid\nof fixation cells and experience information loss due to fixation\ndiscretization, HAT features a sequential dense prediction architecture and\noutputs a dense heatmap for each fixation, thus avoiding discretizing\nfixations. HAT sets a new standard in computational attention, which emphasizes\neffectiveness, generality, and interpretability. 
HAT's demonstrated scope and\napplicability will likely inspire the development of new attention models that\ncan better predict human behavior in various attention-demanding scenarios.\nCode is available at https://github.com/cvlab-stonybrook/HAT.", "keywords": ["Biometrics and human analysis", "Scene analysis and understanding"], "authors_list": ["Zhibo Yang", "Sounak Mondal", "Seoyoung Ahn", "Ruoyu Xue", "Gregory Zelinsky", "Minh Hoai", "Dimitris Samaras"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6dc"}, "filepath": "data/2401.04105.png", "tags": [], "_media_type": "image", "_rand": 0.9995688250021498, "arXiv_link": "https://arxiv.org/abs/2401.04105", "other_link": "", "title": "Dr2Net: Dynamic Reversible Dual-Residual Networks for Memory-Efficient Finetuning", "abstract": "Large pretrained models are increasingly crucial in modern computer vision\ntasks. These models are typically used in downstream tasks by end-to-end\nfinetuning, which is highly memory-intensive for tasks with high-resolution\ndata, e.g., video understanding, small object detection, and point cloud\nanalysis. In this paper, we propose Dynamic Reversible Dual-Residual Networks,\nor Dr$^2$Net, a novel family of network architectures that acts as a surrogate\nnetwork to finetune a pretrained model with substantially reduced memory\nconsumption. Dr$^2$Net contains two types of residual connections, one\nmaintaining the residual structure in the pretrained models, and the other\nmaking the network reversible. Due to its reversibility, intermediate\nactivations, which can be reconstructed from output, are cleared from memory\nduring training. We use two coefficients on either type of residual connections\nrespectively, and introduce a dynamic training strategy that seamlessly\ntransitions the pretrained model to a reversible network with much higher\nnumerical precision. We evaluate Dr$^2$Net on various pretrained models and\nvarious tasks, and show that it can reach comparable performance to\nconventional finetuning but with significantly less memory usage.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Chen Zhao", "Shuming Liu", "Karttikeya Mangalam", "Guocheng Qian", "Fatimah Zohra", "Abdulmohsen Alghannam", "Jitendra Malik", "Bernard Ghanem"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6dd"}, "filepath": "data/2404.09502.png", "tags": [], "_media_type": "image", "_rand": 0.999766552844961, "arXiv_link": "https://arxiv.org/abs/2404.09502", "other_link": "", "title": "SparseOcc: Rethinking Sparse Latent Representation for Vision-Based Semantic Occupancy Prediction", "abstract": "Vision-based perception for autonomous driving requires an explicit modeling\nof a 3D space, where 2D latent representations are mapped and subsequent 3D\noperators are applied. However, operating on dense latent spaces introduces a\ncubic time and space complexity, which limits scalability in terms of\nperception range or spatial resolution. Existing approaches compress the dense\nrepresentation using projections like Bird's Eye View (BEV) or Tri-Perspective\nView (TPV). 
Although efficient, these projections result in information loss,\nespecially for tasks like semantic occupancy prediction. To address this, we\npropose SparseOcc, an efficient occupancy network inspired by sparse point\ncloud processing. It utilizes a lossless sparse latent representation with\nthree key innovations. Firstly, a 3D sparse diffuser performs latent completion\nusing spatially decomposed 3D sparse convolutional kernels. Secondly, a feature\npyramid and sparse interpolation enhance scales with information from others.\nFinally, the transformer head is redesigned as a sparse variant. SparseOcc\nachieves a remarkable 74.9% reduction on FLOPs over the dense baseline.\nInterestingly, it also improves accuracy, from 12.8% to 14.1% mIOU, which in\npart can be attributed to the sparse representation's ability to avoid\nhallucinations on empty voxels.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Pin Tang", "Zhongdao Wang", "Guoqing Wang", "Jilai Zheng", "Xiangxuan Ren", "Bailan Feng", "Chao Ma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6de"}, "filepath": "data/2310.08528.png", "tags": [], "_media_type": "image", "_rand": 0.9992927089458855, "arXiv_link": "https://arxiv.org/abs/2310.08528", "other_link": "https://guanjunwu.github.io/4dgs/.", "title": "4D Gaussian Splatting for Real-Time Dynamic Scene Rendering", "abstract": "Representing and rendering dynamic scenes has been an important but\nchallenging task. Especially, to accurately model complex motions, high\nefficiency is usually hard to guarantee. To achieve real-time dynamic scene\nrendering while also enjoying high training and storage efficiency, we propose\n4D Gaussian Splatting (4D-GS) as a holistic representation for dynamic scenes\nrather than applying 3D-GS for each individual frame. In 4D-GS, a novel\nexplicit representation containing both 3D Gaussians and 4D neural voxels is\nproposed. A decomposed neural voxel encoding algorithm inspired by HexPlane is\nproposed to efficiently build Gaussian features from 4D neural voxels and then\na lightweight MLP is applied to predict Gaussian deformations at novel\ntimestamps. Our 4D-GS method achieves real-time rendering under high\nresolutions, 82 FPS at an 800$\\times$800 resolution on an RTX 3090 GPU while\nmaintaining comparable or better quality than previous state-of-the-art\nmethods. 
More demos and code are available at\nhttps://guanjunwu.github.io/4dgs/.", "keywords": ["Scene analysis and understanding", "Image and video generation and manipulation"], "authors_list": ["Guanjun Wu", "Taoran Yi", "Jiemin Fang", "Lingxi Xie", "Xiaopeng Zhang", "Wei Wei", "Wenyu Liu", "Qi Tian", "Xinggang Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6df"}, "filepath": "data/2405.19899.png", "tags": [], "_media_type": "image", "_rand": 0.999881720790685, "arXiv_link": "https://arxiv.org/abs/2405.19899", "other_link": "https://github.com/KHU-AGI/BUS.", "title": "Open-Set Domain Adaptation for Semantic Segmentation", "abstract": "Unsupervised domain adaptation (UDA) for semantic segmentation aims to\ntransfer the pixel-wise knowledge from the labeled source domain to the\nunlabeled target domain. However, current UDA methods typically assume a shared\nlabel space between source and target, limiting their applicability in\nreal-world scenarios where novel categories may emerge in the target domain. In\nthis paper, we introduce Open-Set Domain Adaptation for Semantic Segmentation\n(OSDA-SS) for the first time, where the target domain includes unknown classes.\nWe identify two major problems in the OSDA-SS scenario as follows: 1) the\nexisting UDA methods struggle to predict the exact boundary of the unknown\nclasses, and 2) they fail to accurately predict the shape of the unknown\nclasses. To address these issues, we propose Boundary and Unknown Shape-Aware\nopen-set domain adaptation, coined BUS. Our BUS can accurately discern the\nboundaries between known and unknown classes in a contrastive manner using a\nnovel dilation-erosion-based contrastive loss. In addition, we propose\nOpenReMix, a new domain mixing augmentation method that guides our model to\neffectively learn domain and size-invariant features for improving the shape\ndetection of the known and unknown classes. Through extensive experiments, we\ndemonstrate that our proposed BUS effectively detects unknown classes in the\nchallenging OSDA-SS scenario compared to the previous methods by a large\nmargin. The code is available at https://github.com/KHU-AGI/BUS.", "keywords": [], "authors_list": ["Seun-An Choe", "Ah-Hyung Shin", "Keon Hee Park", "Jinwoo Choi", "Gyeong-Moon Park"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6e0"}, "filepath": "data/2404.07543.png", "tags": [], "_media_type": "image", "_rand": 0.9996497589411537, "arXiv_link": "https://arxiv.org/abs/2404.07543", "other_link": "https://github.com/duanyll/CANConv.", "title": "Content-Adaptive Non-Local Convolution for Remote Sensing Pansharpening", "abstract": "Currently, machine learning-based methods for remote sensing pansharpening\nhave progressed rapidly. However, existing pansharpening methods often do not\nfully exploit differentiating regional information in non-local spaces, thereby\nlimiting the effectiveness of the methods and resulting in redundant learning\nparameters. In this paper, we introduce a so-called content-adaptive non-local\nconvolution (CANConv), a novel method tailored for remote sensing image\npansharpening. 
Specifically, CANConv employs adaptive convolution, ensuring\nspatial adaptability, and incorporates non-local self-similarity through the\nsimilarity relationship partition (SRP) and the partition-wise adaptive\nconvolution (PWAC) sub-modules. Furthermore, we also propose a corresponding\nnetwork architecture, called CANNet, which mainly utilizes the multi-scale\nself-similarity. Extensive experiments demonstrate the superior performance of\nCANConv, compared with recent promising fusion methods. Besides, we\nsubstantiate the method's effectiveness through visualization, ablation\nexperiments, and comparison with existing methods on multiple test sets. The\nsource code is publicly available at https://github.com/duanyll/CANConv.", "keywords": ["Remote sensing and photogrammetry"], "authors_list": ["Yule Duan", "Xiao Wu", "Haoyu Deng", "Liang-Jian Deng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6e1"}, "filepath": "data/2312.10103.png", "tags": [], "_media_type": "image", "_rand": 0.9990870003526383, "arXiv_link": "https://arxiv.org/abs/2312.10103", "other_link": "", "title": "GSVA: Generalized Segmentation via Multimodal Large Language Models", "abstract": "Generalized Referring Expression Segmentation (GRES) extends the scope of\nclassic RES to refer to multiple objects in one expression or identify the\nempty targets absent in the image. GRES poses challenges in modeling the\ncomplex spatial relationships of the instances in the image and identifying\nnon-existing referents. Multimodal Large Language Models (MLLMs) have recently\nshown tremendous progress in these complicated vision-language tasks.\nConnecting Large Language Models (LLMs) and vision models, MLLMs are proficient\nin understanding contexts with visual inputs. Among them, LISA, as a\nrepresentative, adopts a special [SEG] token to prompt a segmentation mask\ndecoder, e.g., SAM, to enable MLLMs in the RES task. However, existing\nsolutions to GRES remain unsatisfactory since current segmentation MLLMs cannot\ncorrectly handle the cases where users might reference multiple subjects in a\nsingular prompt or provide descriptions incongruent with any image target. In\nthis paper, we propose Generalized Segmentation Vision Assistant (GSVA) to\naddress this gap. Specifically, GSVA reuses the [SEG] token to prompt the\nsegmentation model towards supporting multiple mask references simultaneously\nand innovatively learns to generate a [REJ] token to reject the null targets\nexplicitly. Experiments validate GSVA's efficacy in resolving the GRES issue,\nmarking a notable enhancement and setting a new record on the GRES benchmark\ngRefCOCO dataset. 
GSVA also proves effective across various classic referring\nsegmentation and comprehension tasks.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Zhuofan Xia", "Dongchen Han", "Yizeng Han", "Xuran Pan", "Shiji Song", "Gao Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6e2"}, "filepath": "data/2311.07113.png", "tags": [], "_media_type": "image", "_rand": 0.9990948743207939, "arXiv_link": "https://arxiv.org/abs/2311.07113", "other_link": "", "title": "S2MAE: A Spatial-Spectral Pretraining Foundation Model for Spectral Remote Sensing Data", "abstract": "The foundation model has recently garnered significant attention due to its\npotential to revolutionize the field of visual representation learning in a\nself-supervised manner. While most foundation models are tailored to\neffectively process RGB images for various visual tasks, there is a noticeable\ngap in research focused on spectral data, which offers valuable information for\nscene understanding, especially in remote sensing (RS) applications. To fill\nthis gap, we created for the first time a universal RS foundation model, named\nSpectralGPT, which is purpose-built to handle spectral RS images using a novel\n3D generative pretrained transformer (GPT). Compared to existing foundation\nmodels, SpectralGPT 1) accommodates input images with varying sizes,\nresolutions, time series, and regions in a progressive training fashion,\nenabling full utilization of extensive RS big data; 2) leverages 3D token\ngeneration for spatial-spectral coupling; 3) captures spectrally sequential\npatterns via multi-target reconstruction; 4) trains on one million spectral RS\nimages, yielding models with over 600 million parameters. Our evaluation\nhighlights significant performance improvements with pretrained SpectralGPT\nmodels, signifying substantial potential in advancing spectral RS big data\napplications within the field of geoscience across four downstream tasks:\nsingle/multi-label scene classification, semantic segmentation, and change\ndetection.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding", "Remote sensing and photogrammetry"], "authors_list": ["Xuyang Li", "Danfeng Hong", "Jocelyn Chanussot"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6e3"}, "filepath": "data/2311.14757.png", "tags": [], "_media_type": "image", "_rand": 0.9993417039663198, "arXiv_link": "https://arxiv.org/abs/2311.14757", "other_link": "", "title": "PointOBB: Learning Oriented Object Detection via Single Point Supervision", "abstract": "Single point-supervised object detection is gaining attention due to its\ncost-effectiveness. However, existing approaches focus on generating horizontal\nbounding boxes (HBBs) while ignoring oriented bounding boxes (OBBs) commonly\nused for objects in aerial images. 
This paper proposes PointOBB, the first\nsingle Point-based OBB generation method, for oriented object detection.\nPointOBB operates through the collaborative utilization of three distinctive\nviews: an original view, a resized view, and a rotated/flipped (rot/flp) view.\nUpon the original view, we leverage the resized and rot/flp views to build a\nscale augmentation module and an angle acquisition module, respectively. In the\nformer module, a Scale-Sensitive Consistency (SSC) loss is designed to enhance\nthe deep network's ability to perceive the object scale. For accurate object\nangle predictions, the latter module incorporates self-supervised learning to\npredict angles, which is associated with a scale-guided Dense-to-Sparse (DS)\nmatching strategy for aggregating dense angles corresponding to sparse objects.\nThe resized and rot/flp views are switched using a progressive multi-view\nswitching strategy during training to achieve coupled optimization of scale and\nangle. Experimental results on the DIOR-R and DOTA-v1.0 datasets demonstrate\nthat PointOBB achieves promising performance, and significantly outperforms\npotential point-supervised baselines.", "keywords": ["Remote sensing and photogrammetry"], "authors_list": ["Junwei Luo", "Xue Yang", "Yi Yu", "Qingyun Li", "Junchi Yan", "Yansheng Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6e4"}, "filepath": "data/2210.00266.png", "tags": [], "_media_type": "image", "_rand": 0.9993954999458702, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2210.00266", "other_link": "https://github.com/xialeiliu/Long-Tailed-CIL", "title": "Long-Tail Class Incremental Learning via Independent Sub-prototype Construction", "abstract": "In class incremental learning (CIL) a model must learn new classes in a\nsequential manner without forgetting old ones. However, conventional CIL\nmethods consider a balanced distribution for each new task, which ignores the\nprevalence of long-tailed distributions in the real world. In this work we\npropose two long-tailed CIL scenarios, which we term ordered and shuffled\nLT-CIL. Ordered LT-CIL considers the scenario where we learn from head classes\ncollected with more samples than tail classes which have few. Shuffled LT-CIL,\non the other hand, assumes a completely random long-tailed distribution for\neach task. We systematically evaluate existing methods in both LT-CIL scenarios\nand demonstrate very different behaviors compared to conventional CIL\nscenarios. Additionally, we propose a two-stage learning baseline with a\nlearnable weight scaling layer for reducing the bias caused by long-tailed\ndistribution in LT-CIL and which in turn also improves the performance of\nconventional CIL due to the limited exemplars. Our results demonstrate the\nsuperior performance (up to 6.44 points in average incremental accuracy) of our\napproach on CIFAR-100 and ImageNet-Subset. 
The code is available at\nhttps://github.com/xialeiliu/Long-Tailed-CIL", "keywords": [], "authors_list": ["Xi Wang", "Xu Yang", "Jie Yin", "Kun Wei", "Cheng Deng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6e5"}, "filepath": "data/2403.12821.png", "tags": [], "_media_type": "image", "_rand": 0.9995492461452734, "arXiv_link": "https://arxiv.org/abs/2403.12821", "other_link": "http://github.com/y0ngjaenius/CVPR2024_FLOWERFormer.", "title": "FlowerFormer: Empowering Neural Architecture Encoding using a Flow-aware Graph Transformer", "abstract": "The success of a specific neural network architecture is closely tied to the\ndataset and task it tackles; there is no one-size-fits-all solution. Thus,\nconsiderable efforts have been made to quickly and accurately estimate the\nperformances of neural architectures, without full training or evaluation, for\ngiven tasks and datasets. Neural architecture encoding has played a crucial\nrole in the estimation, and graphbased methods, which treat an architecture as\na graph, have shown prominent performance. For enhanced representation learning\nof neural architectures, we introduce FlowerFormer, a powerful graph\ntransformer that incorporates the information flows within a neural\narchitecture. FlowerFormer consists of two key components: (a) bidirectional\nasynchronous message passing, inspired by the flows; (b) global attention built\non flow-based masking. Our extensive experiments demonstrate the superiority of\nFlowerFormer over existing neural encoding methods, and its effectiveness\nextends beyond computer vision models to include graph neural networks and auto\nspeech recognition models. Our code is available at\nhttp://github.com/y0ngjaenius/CVPR2024_FLOWERFormer.", "keywords": [], "authors_list": ["Dongyeong Hwang", "Hyunju Kim", "Sunwoo Kim", "Kijung Shin"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6e6"}, "filepath": "data/2403.20317.png", "tags": [], "_media_type": "image", "_rand": 0.9990586378604218, "arXiv_link": "https://arxiv.org/abs/2403.20317", "other_link": "", "title": "Convolutional Prompting meets Language Models for Continual Learning", "abstract": "Continual Learning (CL) enables machine learning models to learn from\ncontinuously shifting new training data in absence of data from old tasks.\nRecently, pretrained vision transformers combined with prompt tuning have shown\npromise for overcoming catastrophic forgetting in CL. These approaches rely on\na pool of learnable prompts which can be inefficient in sharing knowledge\nacross tasks leading to inferior performance. In addition, the lack of\nfine-grained layer specific prompts does not allow these to fully express the\nstrength of the prompts for CL. We address these limitations by proposing\nConvPrompt, a novel convolutional prompt creation mechanism that maintains\nlayer-wise shared embeddings, enabling both layer-specific learning and better\nconcept transfer across tasks. The intelligent use of convolution enables us to\nmaintain a low parameter overhead without compromising performance. 
We further\nleverage Large Language Models to generate fine-grained text descriptions of\neach category which are used to get task similarity and dynamically decide the\nnumber of prompts to be learned. Extensive experiments demonstrate the\nsuperiority of ConvPrompt and improves SOTA by ~3% with significantly less\nparameter overhead. We also perform strong ablation over various modules to\ndisentangle the importance of different components.", "keywords": ["Efficient and scalable vision", "Large multimodal models and prompting techniques"], "authors_list": ["Anurag Roy", "Riddhiman Moulick", "Vinay Verma", "Saptarshi Ghosh", "Abir Das"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6e7"}, "filepath": "data/2311.16739.png", "tags": [], "_media_type": "image", "_rand": 0.9999543104194317, "arXiv_link": "https://arxiv.org/abs/2311.16739", "other_link": "https://as-plausible-aspossible.github.io/", "title": "As-Plausible-As-Possible: Plausibility-Aware Mesh Deformation Using 2D Diffusion Priors", "abstract": "We present As-Plausible-as-Possible (APAP) mesh deformation technique that\nleverages 2D diffusion priors to preserve the plausibility of a mesh under\nuser-controlled deformation. Our framework uses per-face Jacobians to represent\nmesh deformations, where mesh vertex coordinates are computed via a\ndifferentiable Poisson Solve. The deformed mesh is rendered, and the resulting\n2D image is used in the Score Distillation Sampling (SDS) process, which\nenables extracting meaningful plausibility priors from a pretrained 2D\ndiffusion model. To better preserve the identity of the edited mesh, we\nfine-tune our 2D diffusion model with LoRA. Gradients extracted by SDS and a\nuser-prescribed handle displacement are then backpropagated to the per-face\nJacobians, and we use iterative gradient descent to compute the final\ndeformation that balances between the user edit and the output plausibility. We\nevaluate our method with 2D and 3D meshes and demonstrate qualitative and\nquantitative improvements when using plausibility priors over\ngeometry-preservation or distortion-minimization priors used by previous\ntechniques. Our project page is at: https://as-plausible-aspossible.github.io/", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Seungwoo Yoo", "Kunho Kim", "Vladimir G. Kim", "Minhyuk Sung"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6e8"}, "filepath": "data/2403.17879v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995130035328299, "arXiv_link": "https://arxiv.org/html/2403.17879v1", "other_link": "", "title": "Low-Latency Neural Stereo Streaming", "abstract": "The rise of new video modalities like virtual reality or autonomous driving\nhas increased the demand for efficient multi-view video compression methods,\nboth in terms of rate-distortion (R-D) performance and in terms of delay and\nruntime. While most recent stereo video compression approaches have shown\npromising performance, they compress left and right views sequentially, leading\nto poor parallelization and runtime performance. 
This work presents Low-Latency\nneural codec for Stereo video Streaming (LLSS), a novel parallel stereo video\ncoding method designed for fast and efficient low-latency stereo video\nstreaming. Instead of using a sequential cross-view motion compensation like\nexisting methods, LLSS introduces a bidirectional feature shifting module to\ndirectly exploit mutual information among views and encode them effectively\nwith a joint cross-view prior model for entropy coding. Thanks to this design,\nLLSS processes left and right views in parallel, minimizing latency; all while\nsubstantially improving R-D performance compared to both existing neural and\nconventional codecs.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Qiqi Hou", "Farzad Farhadzadeh", "Amir Said", "Guillaume Sautiere", "Hoang Le"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6e9"}, "filepath": "data/2311.18331v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992926970320964, "arXiv_link": "https://arxiv.org/abs/2311.18331v1", "other_link": "", "title": "MRFP: Learning Generalizable Semantic Segmentation from Sim-2-Real with Multi-Resolution Feature Perturbation", "abstract": "Deep neural networks have shown exemplary performance on semantic scene\nunderstanding tasks on source domains, but due to the absence of style\ndiversity during training, enhancing performance on unseen target domains using\nonly single source domain data remains a challenging task. Generation of\nsimulated data is a feasible alternative to retrieving large style-diverse\nreal-world datasets as it is a cumbersome and budget-intensive process.\nHowever, the large domain-specific inconsistencies between simulated and\nreal-world data pose a significant generalization challenge in semantic\nsegmentation. In this work, to alleviate this problem, we propose a novel\nMultiResolution Feature Perturbation (MRFP) technique to randomize\ndomain-specific fine-grained features and perturb style of coarse features. Our\nexperimental results on various urban-scene segmentation datasets clearly\nindicate that, along with the perturbation of style-information, perturbation\nof fine-feature components is paramount to learn domain invariant robust\nfeature maps for semantic segmentation models. 
MRFP is a simple and\ncomputationally efficient, transferable module with no additional learnable\nparameters or objective functions, that helps state-of-the-art deep neural\nnetworks to learn robust domain invariant features for simulation-to-real\nsemantic segmentation.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Sumanth Udupa", "Prajwal Gurunath", "Aniruddh Sikdar", "Suresh Sundaram"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6ea"}, "filepath": "data/2402.18975v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998677913086689, "arXiv_link": "https://arxiv.org/abs/2402.18975v1", "other_link": "", "title": "Theoretically Achieving Continuous Representation of Oriented Bounding Boxes", "abstract": "Considerable efforts have been devoted to Oriented Object Detection (OOD).\nHowever, one lasting issue regarding the discontinuity in Oriented Bounding Box\n(OBB) representation remains unresolved, which is an inherent bottleneck for\nextant OOD methods. This paper endeavors to completely solve this issue in a\ntheoretically guaranteed manner and puts an end to the ad-hoc efforts in this\ndirection. Prior studies typically can only address one of the two cases of\ndiscontinuity: rotation and aspect ratio, and often inadvertently introduce\ndecoding discontinuity, e.g. Decoding Incompleteness (DI) and Decoding\nAmbiguity (DA) as discussed in literature. Specifically, we propose a novel\nrepresentation method called Continuous OBB (COBB), which can be readily\nintegrated into existing detectors e.g. Faster-RCNN as a plugin. It can\ntheoretically ensure continuity in bounding box regression which to our best\nknowledge, has not been achieved in literature for rectangle-based object\nrepresentation. For fairness and transparency of experiments, we have developed\na modularized benchmark based on the open-source deep learning framework\nJittor's detection toolbox JDet for OOD evaluation. On the popular DOTA\ndataset, by integrating Faster-RCNN as the same baseline model, our new method\noutperforms the peer method Gliding Vertex by 1.13% mAP50 (relative improvement\n1.54%), and 2.46% mAP75 (relative improvement 5.91%), without any tricks.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Zikai Xiao", "Guo-Ye Yang", "Xue Yang", "Tai-Jiang Mu", "Junchi Yan", "Shi-Min Hu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6eb"}, "filepath": "data/2403.17610.png", "tags": [], "_media_type": "image", "_rand": 0.9991948516958497, "arXiv_link": "https://arxiv.org/abs/2403.17610", "other_link": "https://metaverse-ai-lab-thu.github.io/MMVP-Dataset/.", "title": "MMVP: A Multimodal MoCap Dataset with Vision and Pressure Sensors", "abstract": "Foot contact is an important cue for human motion capture, understanding, and\ngeneration. Existing datasets tend to annotate dense foot contact using visual\nmatching with thresholding or incorporating pressure signals. However, these\napproaches either suffer from low accuracy or are only designed for small-range\nand slow motion. 
There is still a lack of a vision-pressure multimodal dataset\nwith large-range and fast human motion, as well as accurate and dense\nfoot-contact annotation. To fill this gap, we propose a Multimodal MoCap\nDataset with Vision and Pressure sensors, named MMVP. MMVP provides accurate\nand dense plantar pressure signals synchronized with RGBD observations, which\nis especially useful for both plausible shape estimation, robust pose fitting\nwithout foot drifting, and accurate global translation tracking. To validate\nthe dataset, we propose an RGBD-P SMPL fitting method and also a\nmonocular-video-based baseline framework, VP-MoCap, for human motion capture.\nExperiments demonstrate that our RGBD-P SMPL Fitting results significantly\noutperform pure visual motion capture. Moreover, VP-MoCap outperforms SOTA\nmethods in foot-contact and global translation estimation accuracy. We believe\nthe configuration of the dataset and the baseline frameworks will stimulate the\nresearch in this direction and also provide a good reference for MoCap\napplications in various domains. Project page:\nhttps://metaverse-ai-lab-thu.github.io/MMVP-Dataset/.", "keywords": ["Biometrics and human analysis"], "authors_list": ["He Zhang", "Shenghao Ren", "Haolei Yuan", "Jianhui Zhao", "Fan Li", "Shuangpeng Sun", "Zhenghao Liang", "Tao Yu", "Qiu Shen", "Xun Cao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6ec"}, "filepath": "data/2404.03924.png", "tags": [], "_media_type": "image", "_rand": 0.9997230929541866, "arXiv_link": "https://arxiv.org/abs/2404.03924", "other_link": "", "title": "Learning Correlation Structures for Vision Transformers", "abstract": "We introduce a new attention mechanism, dubbed structural self-attention\n(StructSA), that leverages rich correlation patterns naturally emerging in\nkey-query interactions of attention. StructSA generates attention maps by\nrecognizing space-time structures of key-query correlations via convolution and\nuses them to dynamically aggregate local contexts of value features. This\neffectively leverages rich structural patterns in images and videos such as\nscene layouts, object motion, and inter-object relations. Using StructSA as a\nmain building block, we develop the structural vision transformer (StructViT)\nand evaluate its effectiveness on both image and video classification tasks,\nachieving state-of-the-art results on ImageNet-1K, Kinetics-400,\nSomething-Something V1 & V2, Diving-48, and FineGym.", "keywords": ["Scene analysis and understanding", "Image and video generation and manipulation"], "authors_list": ["Manjin Kim", "Paul Hongsuck Seo", "Cordelia Schmid", "Minsu Cho"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6ed"}, "filepath": "data/2312.16519.png", "tags": [], "_media_type": "image", "_rand": 0.9995544694068552, "arXiv_link": "https://arxiv.org/abs/2312.16519", "other_link": "", "title": "Image Restoration by Denoising Diffusion Models With Iteratively Preconditioned Guidance", "abstract": "Training deep neural networks has become a common approach for addressing\nimage restoration problems. 
An alternative for training a \"task-specific\"\nnetwork for each observation model is to use pretrained deep denoisers for\nimposing only the signal's prior within iterative algorithms, without\nadditional training. Recently, a sampling-based variant of this approach has\nbecome popular with the rise of diffusion/score-based generative models. Using\ndenoisers for general purpose restoration requires guiding the iterations to\nensure agreement of the signal with the observations. In low-noise settings,\nguidance that is based on back-projection (BP) has been shown to be a promising\nstrategy (used recently also under the names \"pseudoinverse\" or\n\"range/null-space\" guidance). However, the presence of noise in the\nobservations hinders the gains from this approach. In this paper, we propose a\nnovel guidance technique, based on preconditioning that allows traversing from\nBP-based guidance to least squares based guidance along the restoration scheme.\nThe proposed approach is robust to noise while still having much simpler\nimplementation than alternative methods (e.g., it does not require SVD or a\nlarge number of iterations). We use it within both an optimization scheme and a\nsampling-based scheme, and demonstrate its advantages over existing methods for\nimage deblurring and super-resolution.", "keywords": ["Low-level vision"], "authors_list": ["Tomer Garber", "Tom Tirer"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6ee"}, "filepath": "data/2403.14608.png", "tags": [], "_media_type": "image", "_rand": 0.9995278455588185, "arXiv_link": "https://arxiv.org/abs/2403.14608", "other_link": "", "title": "Resource-Efficient Transformer Pruning for Finetuning of Large Models", "abstract": "Large models represent a groundbreaking advancement in multiple application\nfields, enabling remarkable achievements across various tasks. However, their\nunprecedented scale comes with significant computational costs. These models,\noften consisting of billions of parameters, require vast amounts of\ncomputational resources for execution. Especially, the expansive scale and\ncomputational demands pose considerable challenges when customizing them for\nparticular downstream tasks, particularly over the hardware platforms\nconstrained by computational capabilities. Parameter Efficient Fine-Tuning\n(PEFT) provides a practical solution by efficiently adapt the large models over\nthe various downstream tasks. In particular, PEFT refers to the process of\nadjusting the parameters of a pre-trained large models to adapt it to a\nspecific task while minimizing the number of additional parameters introduced\nor computational resources required. This approach is particularly important\nwhen dealing with large language models with high parameter counts, as\nfine-tuning these models from scratch can be computationally expensive and\nresource-intensive, posing considerable challenges in the supporting system\nplatform design. In this survey, we present comprehensive studies of various\nPEFT algorithms, examining their performance and computational overhead.\nMoreover, we provide an overview of applications developed using different PEFT\nalgorithms and discuss common techniques employed to mitigate computation costs\nfor PEFT. 
In addition to the algorithmic perspective, we overview various\nreal-world system designs to investigate the implementation costs associated\nwith different PEFT algorithms. This survey serves as an indispensable resource\nfor researchers aiming to understand both the PEFT algorithm and its system\nimplementation, offering detailed insights into recent advancements and\npractical applications.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Fatih Ilhan", "Gong Su", "Selim Tekin", "Tiansheng Huang", "Sihao Hu", "Ling Liu"], "category_name": "Machine Learning", "all_categories": ["Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6ef"}, "filepath": "data/2403.00372.png", "tags": [], "_media_type": "image", "_rand": 0.9992303273790893, "arXiv_link": "https://arxiv.org/abs/2403.00372", "other_link": "", "title": "HyperSDFusion: Bridging Hierarchical Structures in Language and Geometry for Enhanced 3D Text2Shape Generation", "abstract": "3D shape generation from text is a fundamental task in 3D representation\nlearning. The text-shape pairs exhibit a hierarchical structure, where a\ngeneral text like ``chair\" covers all 3D shapes of the chair, while more\ndetailed prompts refer to more specific shapes. Furthermore, both text and 3D\nshapes are inherently hierarchical structures. However, existing Text2Shape\nmethods, such as SDFusion, do not exploit that. In this work, we propose\nHyperSDFusion, a dual-branch diffusion model that generates 3D shapes from a\ngiven text. Since hyperbolic space is suitable for handling hierarchical data,\nwe propose to learn the hierarchical representations of text and 3D shapes in\nhyperbolic space. First, we introduce a hyperbolic text-image encoder to learn\nthe sequential and multi-modal hierarchical features of text in hyperbolic\nspace. In addition, we design a hyperbolic text-graph convolution module to\nlearn the hierarchical features of text in hyperbolic space. In order to fully\nutilize these text features, we introduce a dual-branch structure to embed text\nfeatures in 3D feature space. At last, to endow the generated 3D shapes with a\nhierarchical structure, we devise a hyperbolic hierarchical loss. Our method is\nthe first to explore the hyperbolic hierarchical representation for\ntext-to-shape generation. Experimental results on the existing text-to-shape\npaired dataset, Text2Shape, achieved state-of-the-art results. We release our\nimplementation under HyperSDFusion.github.io.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Zhiying Leng", "Tolga Birdal", "Xiaohui Liang", "Federico Tombari"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6f0"}, "filepath": "data/2404.01143.png", "tags": [], "_media_type": "image", "_rand": 0.9997939122840946, "arXiv_link": "https://arxiv.org/abs/2404.01143", "other_link": "", "title": "Condition-Aware Neural Network for Controlled Image Generation", "abstract": "We present Condition-Aware Neural Network (CAN), a new method for adding\ncontrol to image generative models. In parallel to prior conditional control\nmethods, CAN controls the image generation process by dynamically manipulating\nthe weight of the neural network. 
This is achieved by introducing a\ncondition-aware weight generation module that generates conditional weight for\nconvolution/linear layers based on the input condition. We test CAN on\nclass-conditional image generation on ImageNet and text-to-image generation on\nCOCO. CAN consistently delivers significant improvements for diffusion\ntransformer models, including DiT and UViT. In particular, CAN combined with\nEfficientViT (CaT) achieves 2.78 FID on ImageNet 512x512, surpassing DiT-XL/2\nwhile requiring 52x fewer MACs per sampling step.", "keywords": ["Image and video generation and manipulation", "Efficient and scalable vision"], "authors_list": ["Han Cai", "Muyang Li", "Qinsheng Zhang", "Ming-Yu Liu", "Song Han"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6f1"}, "filepath": "data/2312.06733.png", "tags": [], "_media_type": "image", "_rand": 0.9994086791612774, "arXiv_link": "https://arxiv.org/abs/2312.06733", "other_link": "", "title": "TULIP: Transformer for Upsampling of LiDAR Point Cloud", "abstract": "LiDAR Upsampling is a challenging task for the perception systems of robots\nand autonomous vehicles, due to the sparse and irregular structure of\nlarge-scale scene contexts. Recent works propose to solve this problem by\nconverting LiDAR data from 3D Euclidean space into an image super-resolution\nproblem in 2D image space. Although their methods can generate high-resolution\nrange images with fine-grained details, the resulting 3D point clouds often\nblur out details and predict invalid points. In this paper, we propose TULIP, a\nnew method to reconstruct high-resolution LiDAR point clouds from\nlow-resolution LiDAR input. We also follow a range image-based approach but\nspecifically modify the patch and window geometries of a Swin-Transformer-based\nnetwork to better fit the characteristics of range images. We conducted several\nexperiments on three public real-world and simulated datasets. TULIP\noutperforms state-of-the-art methods in all relevant metrics and generates\nrobust and more realistic point clouds than prior works.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Bin Yang", "Patrick Pfreundschuh", "Roland Siegwart", "Marco Hutter", "Peyman Moghadam", "Vaishakh Patil"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6f2"}, "filepath": "data/2311.02077.png", "tags": [], "_media_type": "image", "_rand": 0.9995274830779162, "arXiv_link": "https://arxiv.org/abs/2311.02077", "other_link": "", "title": "PARA-Drive: Parallelized Architecture for Real-time Autonomous Driving", "abstract": "We present EmerNeRF, a simple yet powerful approach for learning\nspatial-temporal representations of dynamic driving scenes. Grounded in neural\nfields, EmerNeRF simultaneously captures scene geometry, appearance, motion,\nand semantics via self-bootstrapping. EmerNeRF hinges upon two core components:\nFirst, it stratifies scenes into static and dynamic fields. This decomposition\nemerges purely from self-supervision, enabling our model to learn from general,\nin-the-wild data sources. 
Second, EmerNeRF parameterizes an induced flow field\nfrom the dynamic field and uses this flow field to further aggregate\nmulti-frame features, amplifying the rendering precision of dynamic objects.\nCoupling these three fields (static, dynamic, and flow) enables EmerNeRF to\nrepresent highly-dynamic scenes self-sufficiently, without relying on ground\ntruth object annotations or pre-trained models for dynamic object segmentation\nor optical flow estimation. Our method achieves state-of-the-art performance in\nsensor simulation, significantly outperforming previous methods when\nreconstructing static (+2.93 PSNR) and dynamic (+3.70 PSNR) scenes. In\naddition, to bolster EmerNeRF's semantic generalization, we lift 2D visual\nfoundation model features into 4D space-time and address a general positional\nbias in modern Transformers, significantly boosting 3D perception performance\n(e.g., 37.50% relative improvement in occupancy prediction accuracy on\naverage). Finally, we construct a diverse and challenging 120-sequence dataset\nto benchmark neural fields under extreme and highly-dynamic settings.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xinshuo Weng", "Boris Ivanovic", "Yan Wang", "Yue Wang", "Marco Pavone"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6f3"}, "filepath": "data/2402.05932.png", "tags": [], "_media_type": "image", "_rand": 0.9995538062805066, "arXiv_link": "https://arxiv.org/abs/2402.05932", "other_link": "https://boyiliee.github.io/llada.", "title": "Driving Everywhere with Large Language Model Policy Adaptation", "abstract": "Adapting driving behavior to new environments, customs, and laws is a\nlong-standing problem in autonomous driving, precluding the widespread\ndeployment of autonomous vehicles (AVs). In this paper, we present LLaDA, a\nsimple yet powerful tool that enables human drivers and autonomous vehicles\nalike to drive everywhere by adapting their tasks and motion plans to traffic\nrules in new locations. LLaDA achieves this by leveraging the impressive\nzero-shot generalizability of large language models (LLMs) in interpreting the\ntraffic rules in the local driver handbook. Through an extensive user study, we\nshow that LLaDA's instructions are useful in disambiguating in-the-wild\nunexpected situations. We also demonstrate LLaDA's ability to adapt AV motion\nplanning policies in real-world datasets; LLaDA outperforms baseline planning\napproaches on all our metrics. Please check our website for more details:\nhttps://boyiliee.github.io/llada.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Boyi Li", "Yue Wang", "Jiageng Mao", "Boris Ivanovic", "Sushant Veer", "Karen Leung", "Marco Pavone"], "category_name": "Robotics", "all_categories": ["Robotics", "Artificial Intelligence", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6f4"}, "filepath": "data/2403.19979.png", "tags": [], "_media_type": "image", "_rand": 0.9995401961570431, "arXiv_link": "https://arxiv.org/abs/2403.19979", "other_link": "", "title": "Semantically-Shifted Incremental Adapter-Tuning is A Continual ViTransformer", "abstract": "Class-incremental learning (CIL) aims to enable models to continuously learn\nnew classes while overcoming catastrophic forgetting. 
The introduction of\npre-trained models has brought new tuning paradigms to CIL. In this paper, we\nrevisit different parameter-efficient tuning (PET) methods within the context\nof continual learning. We observe that adapter tuning demonstrates superiority\nover prompt-based methods, even without parameter expansion in each learning\nsession. Motivated by this, we propose incrementally tuning the shared adapter\nwithout imposing parameter update constraints, enhancing the learning capacity\nof the backbone. Additionally, we employ feature sampling from stored\nprototypes to retrain a unified classifier, further improving its performance.\nWe estimate the semantic shift of old prototypes without access to past samples\nand update stored prototypes session by session. Our proposed method eliminates\nmodel expansion and avoids retaining any image samples. It surpasses previous\npre-trained model-based CIL methods and demonstrates remarkable continual\nlearning capabilities. Experimental results on five CIL benchmarks validate the\neffectiveness of our approach, achieving state-of-the-art (SOTA) performance.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yuwen Tan", "Qinhao Zhou", "Xiang Xiang", "Ke Wang", "Yuchuan Wu", "Yongbin Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6f5"}, "filepath": "data/2402.17862v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993307758498322, "arXiv_link": "https://arxiv.org/abs/2402.17862v1", "other_link": "", "title": "BilevelPruning: Unified Dynamic and Static Channel Pruning for Convolutional Neural Networks", "abstract": "Channel pruning is widely accepted to accelerate modern convolutional neural\nnetworks (CNNs). The resulting pruned model benefits from its immediate\ndeployment on general-purpose software and hardware resources. However, its\nlarge pruning granularity, specifically at the unit of a convolution filter,\noften leads to undesirable accuracy drops due to the inflexibility of deciding\nhow and where to introduce sparsity to the CNNs. In this paper, we propose\nREPrune, a novel channel pruning technique that emulates kernel pruning, fully\nexploiting the finer but structured granularity. REPrune identifies similar\nkernels within each channel using agglomerative clustering. Then, it selects\nfilters that maximize the incorporation of kernel representatives while\noptimizing the maximum cluster coverage problem. By integrating with a\nsimultaneous training-pruning paradigm, REPrune promotes efficient, progressive\npruning throughout training CNNs, avoiding the conventional\ntrain-prune-finetune sequence. 
Experimental results highlight that REPrune\nperforms better in computer vision tasks than existing methods, effectively\nachieving a balance between acceleration ratio and performance retention.", "keywords": [], "authors_list": ["Shangqian Gao", "Yanfu Zhang", "Feihu Huang", "Heng Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6f6"}, "filepath": "data/2310.10624.png", "tags": [], "_media_type": "image", "_rand": 0.9996375407801715, "arXiv_link": "https://arxiv.org/abs/2310.10624", "other_link": "https://showlab.github.io/DynVideo-E/.", "title": "DynVideo-E: Harnessing Dynamic NeRF for Large-Scale Motion- and View-Change Human-Centric Video Editing", "abstract": "Despite recent progress in diffusion-based video editing, existing methods\nare limited to short-length videos due to the contradiction between long-range\nconsistency and frame-wise editing. Prior attempts to address this challenge by\nintroducing video-2D representations encounter significant difficulties with\nlarge-scale motion- and view-change videos, especially in human-centric\nscenarios. To overcome this, we propose to introduce the dynamic Neural\nRadiance Fields (NeRF) as the innovative video representation, where the\nediting can be performed in the 3D spaces and propagated to the entire video\nvia the deformation field. To provide consistent and controllable editing, we\npropose the image-based video-NeRF editing pipeline with a set of innovative\ndesigns, including multi-view multi-pose Score Distillation Sampling (SDS) from\nboth the 2D personalized diffusion prior and 3D diffusion prior, reconstruction\nlosses, text-guided local parts super-resolution, and style transfer. Extensive\nexperiments demonstrate that our method, dubbed as DynVideo-E, significantly\noutperforms SOTA approaches on two challenging datasets by a large margin of\n50% ~ 95% for human preference. Code will be released at\nhttps://showlab.github.io/DynVideo-E/.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Jia-Wei Liu", "Yan-Pei Cao", "Jay Zhangjie Wu", "Weijia Mao", "Yuchao Gu", "Rui Zhao", "Jussi Keppo", "Ying Shan", "Mike Zheng Shou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6f7"}, "filepath": "data/2312.02238.png", "tags": [], "_media_type": "image", "_rand": 0.9999728326149214, "arXiv_link": "https://arxiv.org/abs/2312.02238", "other_link": "", "title": "X-Adapter: Adding Universal Compatibility of Plugins for Upgraded Diffusion Model", "abstract": "We introduce X-Adapter, a universal upgrader to enable the pretrained\nplug-and-play modules (e.g., ControlNet, LoRA) to work directly with the\nupgraded text-to-image diffusion model (e.g., SDXL) without further retraining.\nWe achieve this goal by training an additional network to control the frozen\nupgraded model with the new text-image data pairs. In detail, X-Adapter keeps a\nfrozen copy of the old model to preserve the connectors of different plugins.\nAdditionally, X-Adapter adds trainable mapping layers that bridge the decoders\nfrom models of different versions for feature remapping. 
The remapped features\nwill be used as guidance for the upgraded model. To enhance the guidance\nability of X-Adapter, we employ a null-text training strategy for the upgraded\nmodel. After training, we also introduce a two-stage denoising strategy to\nalign the initial latents of X-Adapter and the upgraded model. Thanks to our\nstrategies, X-Adapter demonstrates universal compatibility with various plugins\nand also enables plugins of different versions to work together, thereby\nexpanding the functionalities of diffusion community. To verify the\neffectiveness of the proposed method, we conduct extensive experiments and the\nresults show that X-Adapter may facilitate wider application in the upgraded\nfoundational diffusion model.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Lingmin Ran", "Xiaodong Cun", "Jia-Wei Liu", "Rui Zhao", "Song Zijie", "Xintao Wang", "Jussi Keppo", "Mike Zheng Shou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6f8"}, "filepath": "data/2311.16498.png", "tags": [], "_media_type": "image", "_rand": 0.9992838715508782, "arXiv_link": "https://arxiv.org/abs/2311.16498", "other_link": "", "title": "MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model", "abstract": "This paper studies the human image animation task, which aims to generate a\nvideo of a certain reference identity following a particular motion sequence.\nExisting animation works typically employ the frame-warping technique to\nanimate the reference image towards the target motion. Despite achieving\nreasonable results, these approaches face challenges in maintaining temporal\nconsistency throughout the animation due to the lack of temporal modeling and\npoor preservation of reference identity. In this work, we introduce\nMagicAnimate, a diffusion-based framework that aims at enhancing temporal\nconsistency, preserving reference image faithfully, and improving animation\nfidelity. To achieve this, we first develop a video diffusion model to encode\ntemporal information. Second, to maintain the appearance coherence across\nframes, we introduce a novel appearance encoder to retain the intricate details\nof the reference image. Leveraging these two innovations, we further employ a\nsimple video fusion technique to encourage smooth transitions for long video\nanimation. Empirical results demonstrate the superiority of our method over\nbaseline approaches on two benchmarks. Notably, our approach outperforms the\nstrongest baseline by over 38% in terms of video fidelity on the challenging\nTikTok dancing dataset. 
Code and model will be made available.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Zhongcong Xu", "Jianfeng Zhang", "Jun Hao Liew", "Hanshu Yan", "Jia-Wei Liu", "Chenxu Zhang", "Jiashi Feng", "Mike Zheng Shou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6f9"}, "filepath": "data/2312.02087.png", "tags": [], "_media_type": "image", "_rand": 0.9990946371858762, "arXiv_link": "https://arxiv.org/abs/2312.02087", "other_link": "", "title": "VideoSwap: Customized Video Subject Swapping with Interactive Semantic Point Correspondence", "abstract": "Current diffusion-based video editing primarily focuses on\nstructure-preserved editing by utilizing various dense correspondences to\nensure temporal consistency and motion alignment. However, these approaches are\noften ineffective when the target edit involves a shape change. To embark on\nvideo editing with shape change, we explore customized video subject swapping\nin this work, where we aim to replace the main subject in a source video with a\ntarget subject having a distinct identity and potentially different shape. In\ncontrast to previous methods that rely on dense correspondences, we introduce\nthe VideoSwap framework that exploits semantic point correspondences, inspired\nby our observation that only a small number of semantic points are necessary to\nalign the subject's motion trajectory and modify its shape. We also introduce\nvarious user-point interactions (\\eg, removing points and dragging points) to\naddress various semantic point correspondence. Extensive experiments\ndemonstrate state-of-the-art video subject swapping results across a variety of\nreal-world videos.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Yuchao Gu", "Yipin Zhou", "Bichen Wu", "Licheng Yu", "Jia-Wei Liu", "Rui Zhao", "Jay Zhangjie Wu", "David Junhao Zhang", "Mike Zheng Shou", "Kevin Tang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6fa"}, "filepath": "data/2405.16009.png", "tags": [], "_media_type": "image", "_rand": 0.9995453808814708, "arXiv_link": "https://arxiv.org/abs/2405.16009", "other_link": "", "title": "LIVE: Online Large Video-Language Model for Streaming Video", "abstract": "This paper presents VideoStreaming, an advanced vision-language large model\n(VLLM) for video understanding, that capably understands arbitrary-length video\nwith a constant number of video tokens streamingly encoded and adaptively\nselected. The challenge of video understanding in the vision language area\nmainly lies in the significant computational burden caused by the great number\nof tokens extracted from long videos. Previous works rely on sparse sampling or\nframe compression to reduce tokens. However, such approaches either disregard\ntemporal information in a long time span or sacrifice spatial details,\nresulting in flawed compression. To address these limitations, our\nVideoStreaming has two core designs: Memory-Propagated Streaming Encoding and\nAdaptive Memory Selection. 
The Memory-Propagated Streaming Encoding\narchitecture segments long videos into short clips and sequentially encodes\neach clip with a propagated memory. In each iteration, we utilize the encoded\nresults of the preceding clip as historical memory, which is integrated with\nthe current clip to distill a condensed representation that encapsulates the\nvideo content up to the current timestamp. After the encoding process, the\nAdaptive Memory Selection strategy selects a constant number of\nquestion-related memories from all the historical memories and feeds them into\nthe LLM to generate informative responses. The question-related selection\nreduces redundancy within the memories, enabling efficient and precise video\nunderstanding. Meanwhile, the disentangled video extraction and reasoning\ndesign allows the LLM to answer different questions about a video by directly\nselecting corresponding memories, without the need to encode the whole video\nfor each question. Our model achieves superior performance and higher\nefficiency on long video benchmarks, showcasing precise temporal comprehension\nfor detailed question answering.", "keywords": ["Large multimodal models and prompting techniques", "Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Joya Chen", "Zhaoyang Lv", "Shiwei Wu", "Kevin Qinghong Lin", "Chenan Song", "Difei Gao", "Jia-Wei Liu", "Ziteng Gao", "Dongxing Mao", "Mike Zheng Shou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6fb"}, "filepath": "data/2312.17161.png", "tags": [], "_media_type": "image", "_rand": 0.999437287617294, "arXiv_link": "https://arxiv.org/abs/2312.17161", "other_link": "https://gen2res.github.io.", "title": "Restoration by Generation with Constrained Priors", "abstract": "The inherent generative power of denoising diffusion models makes them\nwell-suited for image restoration tasks where the objective is to find the\noptimal high-quality image within the generative space that closely resembles\nthe input image. We propose a method to adapt a pretrained diffusion model for\nimage restoration by simply adding noise to the input image to be restored and\nthen denoise. Our method is based on the observation that the space of a\ngenerative model needs to be constrained. We impose this constraint by\nfinetuning the generative model with a set of anchor images that capture the\ncharacteristics of the input image. With the constrained space, we can then\nleverage the sampling strategy used for generation to do image restoration. We\nevaluate against previous methods and show superior performances on multiple\nreal-world restoration datasets in preserving identity and image quality. We\nalso demonstrate an important and practical application on personalized\nrestoration, where we use a personal album as the anchor images to constrain\nthe generative space. This approach allows us to produce results that\naccurately preserve high-frequency details, which previous works are unable to\ndo. 
Project webpage: https://gen2res.github.io.", "keywords": ["Image and video generation and manipulation", "Low-level vision"], "authors_list": ["Zheng Ding", "Xuaner Zhang", "Zhuowen Tu", "Zhihao Xia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6fc"}, "filepath": "data/2404.12887.png", "tags": [], "_media_type": "image", "_rand": 0.9997704335003557, "arXiv_link": "https://arxiv.org/abs/2404.12887", "other_link": "", "title": "3D Multi-frame Fusion for Video Stabilization", "abstract": "In this paper, we present RStab, a novel framework for video stabilization\nthat integrates 3D multi-frame fusion through volume rendering. Departing from\nconventional methods, we introduce a 3D multi-frame perspective to generate\nstabilized images, addressing the challenge of full-frame generation while\npreserving structure. The core of our approach lies in Stabilized Rendering\n(SR), a volume rendering module, which extends beyond the image fusion by\nincorporating feature fusion. The core of our RStab framework lies in\nStabilized Rendering (SR), a volume rendering module, fusing multi-frame\ninformation in 3D space. Specifically, SR involves warping features and colors\nfrom multiple frames by projection, fusing them into descriptors to render the\nstabilized image. However, the precision of warped information depends on the\nprojection accuracy, a factor significantly influenced by dynamic regions. In\nresponse, we introduce the Adaptive Ray Range (ARR) module to integrate depth\npriors, adaptively defining the sampling range for the projection process.\nAdditionally, we propose Color Correction (CC) assisting geometric constraints\nwith optical flow for accurate color aggregation. Thanks to the three modules,\nour RStab demonstrates superior performance compared with previous stabilizers\nin the field of view (FOV), image quality, and video stability across various\ndatasets.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Zhan Peng", "Xinyi Ye", "Weiyue Zhao", "TIANQI LIU", "Huiqiang Sun", "Baopu Li", "Zhiguo Cao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6fd"}, "filepath": "data/2404.04104.png", "tags": [], "_media_type": "image", "_rand": 0.9999725873991367, "arXiv_link": "https://arxiv.org/abs/2404.04104", "other_link": "https://georgeretsi.github.io/smirk/.", "title": "3D Facial Expressions through Analysis-by-Neural-Synthesis", "abstract": "While existing methods for 3D face reconstruction from in-the-wild images\nexcel at recovering the overall face shape, they commonly miss subtle, extreme,\nasymmetric, or rarely observed expressions. We improve upon these methods with\nSMIRK (Spatial Modeling for Image-based Reconstruction of Kinesics), which\nfaithfully reconstructs expressive 3D faces from images. We identify two key\nlimitations in existing methods: shortcomings in their self-supervised training\nformulation, and a lack of expression diversity in the training images. For\ntraining, most methods employ differentiable rendering to compare a predicted\nface mesh with the input image, along with a plethora of additional loss\nfunctions. 
This differentiable rendering loss not only has to provide\nsupervision to optimize for 3D face geometry, camera, albedo, and lighting,\nwhich is an ill-posed optimization problem, but the domain gap between\nrendering and input image further hinders the learning process. Instead, SMIRK\nreplaces the differentiable rendering with a neural rendering module that,\ngiven the rendered predicted mesh geometry, and sparsely sampled pixels of the\ninput image, generates a face image. As the neural rendering gets color\ninformation from sampled image pixels, supervising with neural rendering-based\nreconstruction loss can focus solely on the geometry. Further, it enables us to\ngenerate images of the input identity with varying expressions while training.\nThese are then utilized as input to the reconstruction model and used as\nsupervision with ground truth geometry. This effectively augments the training\ndata and enhances the generalization for diverse expressions. Our qualitative,\nquantitative and particularly our perceptual evaluations demonstrate that SMIRK\nachieves the new state-of-the art performance on accurate expression\nreconstruction. Project webpage: https://georgeretsi.github.io/smirk/.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["George Retsinas", "Panagiotis Filntisis", "Radek Danecek", "Victoria Abrevaya", "Anastasios Roussos", "Timo Bolkart", "Petros Maragos"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6fe"}, "filepath": "data/2312.16476.png", "tags": [], "_media_type": "image", "_rand": 0.9991046820528923, "arXiv_link": "https://arxiv.org/abs/2312.16476", "other_link": "https://ximinng.github.io/SVGDreamer-project/}{https://ximinng.github.io/SVGDreamer-project/}", "title": "SVGDreamer: Text Guided SVG Generation with Diffusion Model", "abstract": "Recently, text-guided scalable vector graphics (SVGs) synthesis has shown\npromise in domains such as iconography and sketch. However, existing\ntext-to-SVG generation methods lack editability and struggle with visual\nquality and result diversity. To address these limitations, we propose a novel\ntext-guided vector graphics synthesis method called SVGDreamer. SVGDreamer\nincorporates a semantic-driven image vectorization (SIVE) process that enables\nthe decomposition of synthesis into foreground objects and background, thereby\nenhancing editability. Specifically, the SIVE process introduces\nattention-based primitive control and an attention-mask loss function for\neffective control and manipulation of individual elements. Additionally, we\npropose a Vectorized Particle-based Score Distillation (VPSD) approach to\naddress issues of shape over-smoothing, color over-saturation, limited\ndiversity, and slow convergence of the existing text-to-SVG generation methods\nby modeling SVGs as distributions of control points and colors. Furthermore,\nVPSD leverages a reward model to re-weight vector particles, which improves\naesthetic appeal and accelerates convergence. Extensive experiments are\nconducted to validate the effectiveness of SVGDreamer, demonstrating its\nsuperiority over baseline methods in terms of editability, visual quality, and\ndiversity. 
Project page:\n\\href{https://ximinng.github.io/SVGDreamer-project/}{https://ximinng.github.io/SVGDreamer-project/}", "keywords": ["Image and video generation and manipulation", "Vision systems and graphics integration"], "authors_list": ["XiMing Xing", "Chuang Wang", "Haitao Zhou", "Jing Zhang", "Dong Xu", "Qian Yu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f6ff"}, "filepath": "data/2403.13870.png", "tags": [], "_media_type": "image", "_rand": 0.9991280871922377, "arXiv_link": "https://arxiv.org/abs/2403.13870", "other_link": "https://github.com/rwchakra/exmap}}.", "title": "ExMap: Leveraging Explainability Heatmaps for Unsupervised Group Robustness to Spurious Correlations", "abstract": "Group robustness strategies aim to mitigate learned biases in deep learning\nmodels that arise from spurious correlations present in their training\ndatasets. However, most existing methods rely on the access to the label\ndistribution of the groups, which is time-consuming and expensive to obtain. As\na result, unsupervised group robustness strategies are sought. Based on the\ninsight that a trained model's classification strategies can be inferred\naccurately based on explainability heatmaps, we introduce ExMap, an\nunsupervised two stage mechanism designed to enhance group robustness in\ntraditional classifiers. ExMap utilizes a clustering module to infer\npseudo-labels based on a model's explainability heatmaps, which are then used\nduring training in lieu of actual labels. Our empirical studies validate the\nefficacy of ExMap - We demonstrate that it bridges the performance gap with its\nsupervised counterparts and outperforms existing partially supervised and\nunsupervised methods. Additionally, ExMap can be seamlessly integrated with\nexisting group robustness learning strategies. Finally, we demonstrate its\npotential in tackling the emerging issue of multiple shortcut\nmitigation\\footnote{Code available at \\url{https://github.com/rwchakra/exmap}}.", "keywords": [], "authors_list": ["Rwiddhi Chakraborty", "Adrian de Sena Sletten", "Michael C. Kampffmeyer"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f700"}, "filepath": "data/2311.18605.png", "tags": [], "_media_type": "image", "_rand": 0.9993679187568241, "arXiv_link": "https://arxiv.org/abs/2311.18605", "other_link": "", "title": "Learning Triangular Distribution in Visual World", "abstract": "Convolution neural network is successful in pervasive vision tasks, including\nlabel distribution learning, which usually takes the form of learning an\ninjection from the non-linear visual features to the well-defined labels.\nHowever, how the discrepancy between features is mapped to the label\ndiscrepancy is ambient, and its correctness is not guaranteed.To address these\nproblems, we study the mathematical connection between feature and its label,\npresenting a general and simple framework for label distribution learning. We\npropose a so-called Triangular Distribution Transform (TDT) to build an\ninjective function between feature and label, guaranteeing that any symmetric\nfeature discrepancy linearly reflects the difference between labels. 
The\nproposed TDT can be used as a plug-in in mainstream backbone networks to\naddress different label distribution learning tasks. Experiments on Facial Age\nRecognition, Illumination Chromaticity Estimation, and Aesthetics assessment\nshow that TDT achieves on-par or better results than the prior arts.", "keywords": ["Biometrics and human analysis", "Efficient and scalable vision"], "authors_list": ["Ping Chen", "Xingpeng Zhang", "Chengtao Zhou", "dichao Fan", "Peng Tu", "Le Zhang", "Yanlin Qian"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f701"}, "filepath": "data/2403.17520.png", "tags": [], "_media_type": "image", "_rand": 0.9992616111713873, "arXiv_link": "https://arxiv.org/abs/2403.17520", "other_link": "https://github.com/TrustAI/LOAT.", "title": "Boosting Adversarial Training via Fisher-Rao Norm-based Regularization", "abstract": "Adversarial training is extensively utilized to improve the adversarial\nrobustness of deep neural networks. Yet, mitigating the degradation of standard\ngeneralization performance in adversarial-trained models remains an open\nproblem. This paper attempts to resolve this issue through the lens of model\ncomplexity. First, We leverage the Fisher-Rao norm, a geometrically invariant\nmetric for model complexity, to establish the non-trivial bounds of the\nCross-Entropy Loss-based Rademacher complexity for a ReLU-activated Multi-Layer\nPerceptron. Then we generalize a complexity-related variable, which is\nsensitive to the changes in model width and the trade-off factors in\nadversarial training. Moreover, intensive empirical evidence validates that\nthis variable highly correlates with the generalization gap of Cross-Entropy\nloss between adversarial-trained and standard-trained models, especially during\nthe initial and final phases of the training process. Building upon this\nobservation, we propose a novel regularization framework, called Logit-Oriented\nAdversarial Training (LOAT), which can mitigate the trade-off between\nrobustness and accuracy while imposing only a negligible increase in\ncomputational overhead. Our extensive experiments demonstrate that the proposed\nregularization strategy can boost the performance of the prevalent adversarial\ntraining algorithms, including PGD-AT, TRADES, TRADES (LSE), MART, and DM-AT,\nacross various network architectures. Our code will be available at\nhttps://github.com/TrustAI/LOAT.", "keywords": [], "authors_list": ["Xiangyu Yin", "Wenjie Ruan"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f702"}, "filepath": "data/2405.01662.png", "tags": [], "_media_type": "image", "_rand": 0.9993413990150735, "arXiv_link": "https://arxiv.org/abs/2405.01662", "other_link": "https://github.com/Hewell0/ProjOOD.", "title": "CORES: Convolutional Response-based Score for Out-of-distribution Detection", "abstract": "Out-of-distribution (OOD) detection, crucial for reliable pattern\nclassification, discerns whether a sample originates outside the training\ndistribution. This paper concentrates on the high-dimensional features output\nby the final convolutional layer, which contain rich image features. 
Our key\nidea is to project these high-dimensional features into two specific feature\nsubspaces, leveraging the dimensionality reduction capacity of the network's\nlinear layers, trained with Predefined Evenly-Distribution Class Centroids\n(PEDCC)-Loss. This involves calculating the cosines of three projection angles\nand the norm values of features, thereby identifying distinctive information\nfor in-distribution (ID) and OOD data, which assists in OOD detection. Building\nupon this, we have modified the batch normalization (BN) and ReLU layer\npreceding the fully connected layer, diminishing their impact on the output\nfeature distributions and thereby widening the distribution gap between ID and\nOOD data features. Our method requires only the training of the classification\nnetwork model, eschewing any need for input pre-processing or specific OOD data\npre-tuning. Extensive experiments on several benchmark datasets demonstrates\nthat our approach delivers state-of-the-art performance. Our code is available\nat https://github.com/Hewell0/ProjOOD.", "keywords": [], "authors_list": ["Keke Tang", "Chao Hou", "Weilong Peng", "Runnan Chen", "Peican Zhu", "Wenping Wang", "Zhihong Tian"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f703"}, "filepath": "data/2403.08032.png", "tags": [], "_media_type": "image", "_rand": 0.9995788970108899, "arXiv_link": "https://arxiv.org/abs/2403.08032", "other_link": "", "title": "Higher-order Relational Reasoning for Pedestrian Trajectory Prediction", "abstract": "Accurate pedestrian trajectory prediction is crucial for various\napplications, and it requires a deep understanding of pedestrian motion\npatterns in dynamic environments. However, existing pedestrian trajectory\nprediction methods still need more exploration to fully leverage these motion\npatterns. This paper investigates the possibilities of using Large Language\nModels (LLMs) to improve pedestrian trajectory prediction tasks by inducing\nmotion cues. We introduce LG-Traj, a novel approach incorporating LLMs to\ngenerate motion cues present in pedestrian past/observed trajectories. Our\napproach also incorporates motion cues present in pedestrian future\ntrajectories by clustering future trajectories of training data using a mixture\nof Gaussians. These motion cues, along with pedestrian coordinates, facilitate\na better understanding of the underlying representation. Furthermore, we\nutilize singular value decomposition to augment the observed trajectories,\nincorporating them into the model learning process to further enhance\nrepresentation learning. Our method employs a transformer-based architecture\ncomprising a motion encoder to model motion patterns and a social decoder to\ncapture social interactions among pedestrians. 
We demonstrate the effectiveness\nof our approach on popular pedestrian trajectory prediction benchmarks, namely\nETH-UCY and SDD, and present various ablation experiments to validate our\napproach.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Sungjune Kim", "Hyung-gun Chi", "Hyerin Lim", "Karthik Ramani", "Jinkyu Kim", "Sangpil Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f704"}, "filepath": "data/2405.19718.png", "tags": [], "_media_type": "image", "_rand": 0.9993364571445915, "arXiv_link": "https://arxiv.org/abs/2405.19718", "other_link": "https://github.com/Yee-Sing/led.", "title": "LED: A Large-scale Real-world Paired Dataset for Event Camera Denoising", "abstract": "Event camera has significant advantages in capturing dynamic scene\ninformation while being prone to noise interference, particularly in\nchallenging conditions like low threshold and low illumination. However, most\nexisting research focuses on gentle situations, hindering event camera\napplications in realistic complex scenarios. To tackle this limitation and\nadvance the field, we construct a new paired real-world event denoising dataset\n(LED), including 3K sequences with 18K seconds of high-resolution (1200*680)\nevent streams and showing three notable distinctions compared to others:\ndiverse noise levels and scenes, larger-scale with high-resolution, and\nhigh-quality GT. Specifically, it contains stepped parameters and varying\nillumination with diverse scenarios. Moreover, based on the property of noise\nevents inconsistency and signal events consistency, we propose a novel\neffective denoising framework(DED) using homogeneous dual events to generate\nthe GT with better separating noise from the raw. Furthermore, we design a\nbio-inspired baseline leveraging Leaky-Integrate-and-Fire (LIF) neurons with\ndynamic thresholds to realize accurate denoising. The experimental results\ndemonstrate that the remarkable performance of the proposed approach on\ndifferent datasets.The dataset and code are at https://github.com/Yee-Sing/led.", "keywords": ["Low-level vision", "Efficient and scalable vision"], "authors_list": ["Yuxing Duan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f705"}, "filepath": "data/2312.04962.png", "tags": [], "_media_type": "image", "_rand": 0.9995948371940802, "arXiv_link": "https://arxiv.org/abs/2312.04962", "other_link": "https://www.obukhov.ai/point2cad}{https://www.obukhov.ai/point2cad.", "title": "Point2CAD: Reverse Engineering CAD Models from 3D Point Clouds", "abstract": "Computer-Aided Design (CAD) model reconstruction from point clouds is an\nimportant problem at the intersection of computer vision, graphics, and machine\nlearning; it saves the designer significant time when iterating on in-the-wild\nobjects. Recent advancements in this direction achieve relatively reliable\nsemantic segmentation but still struggle to produce an adequate topology of the\nCAD model. In this work, we analyze the current state of the art for that\nill-posed task and identify shortcomings of existing methods. 
We propose a\nhybrid analytic-neural reconstruction scheme that bridges the gap between\nsegmented point clouds and structured CAD models and can be readily combined\nwith different segmentation backbones. Moreover, to power the surface fitting\nstage, we propose a novel implicit neural representation of freeform surfaces,\ndriving up the performance of our overall CAD reconstruction scheme. We\nextensively evaluate our method on the popular ABC benchmark of CAD models and\nset a new state-of-the-art for that dataset. Project page:\nhttps://www.obukhov.ai/point2cad}{https://www.obukhov.ai/point2cad.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yujia Liu", "Anton Obukhov", "Jan D. Wegner", "Konrad Schindler"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f706"}, "filepath": "data/2404.14908.png", "tags": [], "_media_type": "image", "_rand": 0.9998413644397358, "arXiv_link": "https://arxiv.org/abs/2404.14908", "other_link": "", "title": "Mining Supervision for Dynamic Regions in Self-Supervised Monocular Depth Estimation", "abstract": "This paper focuses on self-supervised monocular depth estimation in dynamic\nscenes trained on monocular videos. Existing methods jointly estimate\npixel-wise depth and motion, relying mainly on an image reconstruction loss.\nDynamic regions1 remain a critical challenge for these methods due to the\ninherent ambiguity in depth and motion estimation, resulting in inaccurate\ndepth estimation. This paper proposes a self-supervised training framework\nexploiting pseudo depth labels for dynamic regions from training data. The key\ncontribution of our framework is to decouple depth estimation for static and\ndynamic regions of images in the training data. We start with an unsupervised\ndepth estimation approach, which provides reliable depth estimates for static\nregions and motion cues for dynamic regions and allows us to extract moving\nobject information at the instance level. In the next stage, we use an object\nnetwork to estimate the depth of those moving objects assuming rigid motions.\nThen, we propose a new scale alignment module to address the scale ambiguity\nbetween estimated depths for static and dynamic regions. We can then use the\ndepth labels generated to train an end-to-end depth estimation network and\nimprove its performance. Extensive experiments on the Cityscapes and KITTI\ndatasets show that our self-training strategy consistently outperforms existing\nself/unsupervised depth estimation methods.", "keywords": [], "authors_list": ["Hoang Chuong Nguyen", "Tianyu Wang", "Jose M. Alvarez", "Miaomiao Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f707"}, "filepath": "data/2404.00847.png", "tags": [], "_media_type": "image", "_rand": 0.999483739665175, "arXiv_link": "https://arxiv.org/abs/2404.00847", "other_link": "https://github.com/AnasEmad11/CLAP", "title": "Collaborative Learning of Anomalies with Privacy (CLAP) for Unsupervised Video Anomaly Detection: A New Baseline", "abstract": "Unsupervised (US) video anomaly detection (VAD) in surveillance applications\nis gaining more popularity recently due to its practical real-world\napplications. 
As surveillance videos are privacy sensitive and the availability\nof large-scale video data may enable better US-VAD systems, collaborative\nlearning can be highly rewarding in this setting. However, due to the extremely\nchallenging nature of the US-VAD task, where learning is carried out without\nany annotations, privacy-preserving collaborative learning of US-VAD systems\nhas not been studied yet. In this paper, we propose a new baseline for anomaly\ndetection capable of localizing anomalous events in complex surveillance videos\nin a fully unsupervised fashion without any labels on a privacy-preserving\nparticipant-based distributed training configuration. Additionally, we propose\nthree new evaluation protocols to benchmark anomaly detection approaches on\nvarious scenarios of collaborations and data availability. Based on these\nprotocols, we modify existing VAD datasets to extensively evaluate our approach\nas well as existing US SOTA methods on two large-scale datasets including\nUCF-Crime and XD-Violence. All proposed evaluation protocols, dataset splits,\nand codes are available here: https://github.com/AnasEmad11/CLAP", "keywords": [], "authors_list": ["Anas Al-lahham", "Muhammad Zaigham Zaheer", "Nurbek Tastan", "Karthik Nandakumar"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f708"}, "filepath": "data/2312.13314.png", "tags": [], "_media_type": "image", "_rand": 0.9998360038825356, "arXiv_link": "https://arxiv.org/abs/2312.13314", "other_link": "", "title": "Unlocking Pretrained Image Backbones for Semantic Image Synthesis", "abstract": "Semantic image synthesis, i.e., generating images from user-provided semantic\nlabel maps, is an important conditional image generation task as it allows to\ncontrol both the content as well as the spatial layout of generated images.\nAlthough diffusion models have pushed the state of the art in generative image\nmodeling, the iterative nature of their inference process makes them\ncomputationally demanding. Other approaches such as GANs are more efficient as\nthey only need a single feed-forward pass for generation, but the image quality\ntends to suffer on large and diverse datasets. In this work, we propose a new\nclass of GAN discriminators for semantic image synthesis that generates highly\nrealistic images by exploiting feature backbone networks pre-trained for tasks\nsuch as image classification. We also introduce a new generator architecture\nwith better context modeling and using cross-attention to inject noise into\nlatent variables, leading to more diverse generated images. 
Our model, which we\ndub DP-SIMS, achieves state-of-the-art results in terms of image quality and\nconsistency with the input label maps on ADE-20K, COCO-Stuff, and Cityscapes,\nsurpassing recent diffusion models while requiring two orders of magnitude less\ncompute for inference.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Tariq Berrada", "Jakob Verbeek", "camille couprie", "Karteek Alahari"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f709"}, "filepath": "data/2311.14758.png", "tags": [], "_media_type": "image", "_rand": 0.9998698687550988, "arXiv_link": "https://arxiv.org/abs/2311.14758", "other_link": "", "title": "Point2RBox: Combine Knowledge from Synthetic Visual Patterns for End-to-end Oriented Object Detection with Single Point Supervision", "abstract": "With the rapidly increasing demand for oriented object detection (OOD),\nrecent research involving weakly-supervised detectors for learning rotated box\n(RBox) from the horizontal box (HBox) has attracted more and more attention. In\nthis paper, we explore a more challenging yet label-efficient setting, namely\nsingle point-supervised OOD, and present our approach called Point2RBox.\nSpecifically, we propose to leverage two principles: 1) Synthetic pattern\nknowledge combination: By sampling around each labeled point on the image, we\nspread the object feature to synthetic visual patterns with known boxes to\nprovide the knowledge for box regression. 2) Transform self-supervision: With a\ntransformed input image (e.g. scaled/rotated), the output RBoxes are trained to\nfollow the same transformation so that the network can perceive the relative\nsize/rotation between objects. The detector is further enhanced by a few\ndevised techniques to cope with peripheral issues, e.g. the anchor/layer\nassignment as the size of the object is not available in our point supervision\nsetting. To our best knowledge, Point2RBox is the first end-to-end solution for\npoint-supervised OOD. In particular, our method uses a lightweight paradigm,\nyet it achieves a competitive performance among point-supervised alternatives,\n41.05%/27.62%/80.01% on DOTA/DIOR/HRSC datasets.", "keywords": [], "authors_list": ["Yi Yu", "Xue Yang", "Qingyun Li", "Feipeng Da", "Jifeng Dai", "Yu Qiao", "Junchi Yan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f70a"}, "filepath": "data/2405.14602.png", "tags": [], "_media_type": "image", "_rand": 0.9996435444450567, "arXiv_link": "https://arxiv.org/abs/2405.14602", "other_link": "", "title": "A Versatile Framework for Continual Test-Time Domain Adaptation: Balancing Discriminability and Generalizability", "abstract": "Continual Test-Time Adaptation (CTTA) is an emerging and challenging task\nwhere a model trained in a source domain must adapt to continuously changing\nconditions during testing, without access to the original source data. CTTA is\nprone to error accumulation due to uncontrollable domain shifts, leading to\nblurred decision boundaries between categories. 
Existing CTTA methods primarily\nfocus on suppressing domain shifts, which proves inadequate during the\nunsupervised test phase. In contrast, we introduce a novel approach that guides\nrather than suppresses these shifts. Specifically, we propose\n$\\textbf{C}$ontrollable $\\textbf{Co}$ntinual $\\textbf{T}$est-$\\textbf{T}$ime\n$\\textbf{A}$daptation (C-CoTTA), which explicitly prevents any single category\nfrom encroaching on others, thereby mitigating the mutual influence between\ncategories caused by uncontrollable shifts. Moreover, our method reduces the\nsensitivity of model to domain transformations, thereby minimizing the\nmagnitude of category shifts. Extensive quantitative experiments demonstrate\nthe effectiveness of our method, while qualitative analyses, such as t-SNE\nplots, confirm the theoretical validity of our approach.", "keywords": [], "authors_list": ["Xu Yang", "Xuan chen", "Moqi Li", "Kun Wei", "Cheng Deng"], "category_name": "Machine Learning", "all_categories": ["Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f70b"}, "filepath": "data/2311.13613.png", "tags": [], "_media_type": "image", "_rand": 0.9992597965090395, "arXiv_link": "https://arxiv.org/abs/2311.13613", "other_link": "", "title": "Spanning Training Progress: Temporal Dual-Depth Scoring (TDDS) for Enhanced Dataset Pruning", "abstract": "Dataset pruning aims to construct a coreset capable of achieving performance\ncomparable to the original, full dataset. Most existing dataset pruning methods\nrely on snapshot-based criteria to identify representative samples, often\nresulting in poor generalization across various pruning and cross-architecture\nscenarios. Recent studies have addressed this issue by expanding the scope of\ntraining dynamics considered, including factors such as forgetting event and\nprobability change, typically using an averaging approach. However, these works\nstruggle to integrate a broader range of training dynamics without overlooking\nwell-generalized samples, which may not be sufficiently highlighted in an\naveraging manner. In this study, we propose a novel dataset pruning method\ntermed as Temporal Dual-Depth Scoring (TDDS), to tackle this problem. TDDS\nutilizes a dual-depth strategy to achieve a balance between incorporating\nextensive training dynamics and identifying representative samples for dataset\npruning. In the first depth, we estimate the series of each sample's individual\ncontributions spanning the training progress, ensuring comprehensive\nintegration of training dynamics. In the second depth, we focus on the\nvariability of the sample-wise contributions identified in the first depth to\nhighlight well-generalized samples. Extensive experiments conducted on CIFAR\nand ImageNet datasets verify the superiority of TDDS over previous SOTA\nmethods. 
Specifically on CIFAR-100, our method achieves 54.51% accuracy with\nonly 10% training data, surpassing random selection by 7.83% and other\ncomparison methods by at least 12.69%.", "keywords": ["Efficient and scalable vision"], "authors_list": ["xin zhang", "Jiawei Du", "Weiying Xie", "Yunsong Li", "Joey Tianyi Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f70c"}, "filepath": "data/2312.01746v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990705043405504, "arXiv_link": "https://arxiv.org/html/2312.01746v1", "other_link": "https://github.com/DQiaole/FlowDiffusion_pytorch.", "title": "FlowDiffuser: Advancing Optical Flow Estimation with Diffusion Models", "abstract": "Recently, Google proposes DDVM which for the first time demonstrates that a\ngeneral diffusion model for image-to-image translation task works impressively\nwell on optical flow estimation task without any specific designs like RAFT.\nHowever, DDVM is still a closed-source model with the expensive and private\nPalette-style pretraining. In this technical report, we present the first\nopen-source DDVM by reproducing it. We study several design choices and find\nthose important ones. By training on 40k public data with 4 GPUs, our\nreproduction achieves comparable performance to the closed-source DDVM. The\ncode and model have been released in\nhttps://github.com/DQiaole/FlowDiffusion_pytorch.", "keywords": [], "authors_list": ["Ao Luo", "XIN LI", "Fan Yang", "Jiangyu Liu", "Haoqiang Fan", "Shuaicheng Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f70d"}, "filepath": "data/2310.12153.png", "tags": [], "_media_type": "image", "_rand": 0.9993774035177269, "arXiv_link": "https://arxiv.org/abs/2310.12153", "other_link": "", "title": "Probabilistic Sampling of Balanced K-Means using Adiabatic Quantum Computing", "abstract": "Adiabatic quantum computing (AQC) is a promising approach for discrete and\noften NP-hard optimization problems. Current AQCs allow to implement problems\nof research interest, which has sparked the development of quantum\nrepresentations for many computer vision tasks. Despite requiring multiple\nmeasurements from the noisy AQC, current approaches only utilize the best\nmeasurement, discarding information contained in the remaining ones. In this\nwork, we explore the potential of using this information for probabilistic\nbalanced k-means clustering. Instead of discarding non-optimal solutions, we\npropose to use them to compute calibrated posterior probabilities with little\nadditional compute cost. 
This allows us to identify ambiguous solutions and\ndata points, which we demonstrate on a D-Wave AQC on synthetic tasks and real\nvisual data.", "keywords": [], "authors_list": ["Jan-Nico Zaech", "Martin Danelljan", "Tolga Birdal", "Luc Van Gool"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f70e"}, "filepath": "data/2401.10229.png", "tags": [], "_media_type": "image", "_rand": 0.9995912900544738, "arXiv_link": "https://arxiv.org/abs/2401.10229", "other_link": "https://github.com/lxtGH/OMG-Seg.", "title": "OMG-Seg: Is One Model Good Enough For All Segmentation?", "abstract": "In this work, we address various segmentation tasks, each traditionally\ntackled by distinct or partially unified models. We propose OMG-Seg, One Model\nthat is Good enough to efficiently and effectively handle all the segmentation\ntasks, including image semantic, instance, and panoptic segmentation, as well\nas their video counterparts, open vocabulary settings, prompt-driven,\ninteractive segmentation like SAM, and video object segmentation. To our\nknowledge, this is the first model to handle all these tasks in one model and\nachieve satisfactory performance. We show that OMG-Seg, a transformer-based\nencoder-decoder architecture with task-specific queries and outputs, can\nsupport over ten distinct segmentation tasks and yet significantly reduce\ncomputational and parameter overhead across various tasks and datasets. We\nrigorously evaluate the inter-task influences and correlations during\nco-training. Code and models are available at https://github.com/lxtGH/OMG-Seg.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation", "Large multimodal models and prompting techniques"], "authors_list": ["Xiangtai Li", "Haobo Yuan", "Wei Li", "Henghui Ding", "Size Wu", "Wenwei Zhang", "Yining Li", "Kai Chen", "Chen Change Loy"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f70f"}, "filepath": "data/2402.17729.png", "tags": [], "_media_type": "image", "_rand": 0.9999139688902944, "arXiv_link": "https://arxiv.org/abs/2402.17729", "other_link": "", "title": "Towards Fairness-Aware Adversarial Learning", "abstract": "Although adversarial training (AT) has proven effective in enhancing the\nmodel's robustness, the recently revealed issue of fairness in robustness has\nnot been well addressed, i.e. the robust accuracy varies significantly among\ndifferent categories. In this paper, instead of uniformly evaluating the\nmodel's average class performance, we delve into the issue of robust fairness,\nby considering the worst-case distribution across various classes. We propose a\nnovel learning paradigm, named Fairness-Aware Adversarial Learning (FAAL). As a\ngeneralization of conventional AT, we re-define the problem of adversarial\ntraining as a min-max-max framework, to ensure both robustness and fairness of\nthe trained model. Specifically, by taking advantage of distributional robust\noptimization, our method aims to find the worst distribution among different\ncategories, and the solution is guaranteed to obtain the upper bound\nperformance with high probability. 
In particular, FAAL can fine-tune an unfair\nrobust model to be fair within only two epochs, without compromising the\noverall clean and robust accuracies. Extensive experiments on various image\ndatasets validate the superior performance and efficiency of the proposed FAAL\ncompared to other state-of-the-art methods.", "keywords": ["Vision applications for social good and ethics"], "authors_list": ["Yanghao Zhang", "Tianle Zhang", "Ronghui Mu", "Xiaowei Huang", "Wenjie Ruan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f710"}, "filepath": "data/2312.16051.png", "tags": [], "_media_type": "image", "_rand": 0.9999923996715943, "arXiv_link": "https://arxiv.org/abs/2312.16051", "other_link": "", "title": "Inter-X: Towards Versatile Human-Human Interaction Analysis", "abstract": "The analysis of the ubiquitous human-human interactions is pivotal for\nunderstanding humans as social beings. Existing human-human interaction\ndatasets typically suffer from inaccurate body motions, lack of hand gestures\nand fine-grained textual descriptions. To better perceive and generate\nhuman-human interactions, we propose Inter-X, a currently largest human-human\ninteraction dataset with accurate body movements and diverse interaction\npatterns, together with detailed hand gestures. The dataset includes ~11K\ninteraction sequences and more than 8.1M frames. We also equip Inter-X with\nversatile annotations of more than 34K fine-grained human part-level textual\ndescriptions, semantic interaction categories, interaction order, and the\nrelationship and personality of the subjects. Based on the elaborate\nannotations, we propose a unified benchmark composed of 4 categories of\ndownstream tasks from both the perceptual and generative directions. Extensive\nexperiments and comprehensive analysis show that Inter-X serves as a testbed\nfor promoting the development of versatile human-human interaction analysis.\nOur dataset and benchmark will be publicly available for research purposes.", "keywords": ["Biometrics and human analysis", "Scene analysis and understanding"], "authors_list": ["Liang Xu", "Xintao Lv", "Yichao Yan", "Xin Jin", "Wu Shuwen", "Congsheng Xu", "Yifan Liu", "Yizhou Zhou", "Fengyun Rao", "Xingdong Sheng", "Yunhui LIU", "Wenjun Zeng", "Xiaokang Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f711"}, "filepath": "data/2403.11882.png", "tags": [], "_media_type": "image", "_rand": 0.9993769250978914, "arXiv_link": "https://arxiv.org/abs/2403.11882", "other_link": "", "title": "ReGenNet: Towards Human Action-Reaction Synthesis", "abstract": "Humans constantly interact with their surrounding environments. Current\nhuman-centric generative models mainly focus on synthesizing humans plausibly\ninteracting with static scenes and objects, while the dynamic human\naction-reaction synthesis for ubiquitous causal human-human interactions is\nless explored. Human-human interactions can be regarded as asymmetric with\nactors and reactors in atomic interaction periods. 
In this paper, we\ncomprehensively analyze the asymmetric, dynamic, synchronous, and detailed\nnature of human-human interactions and propose the first multi-setting human\naction-reaction synthesis benchmark to generate human reactions conditioned on\ngiven human actions. To begin with, we propose to annotate the actor-reactor\norder of the interaction sequences for the NTU120, InterHuman, and Chi3D\ndatasets. Based on them, a diffusion-based generative model with a Transformer\ndecoder architecture called ReGenNet together with an explicit distance-based\ninteraction loss is proposed to predict human reactions in an online manner,\nwhere the future states of actors are unavailable to reactors. Quantitative and\nqualitative results show that our method can generate instant and plausible\nhuman reactions compared to the baselines, and can generalize to unseen actor\nmotions and viewpoint changes.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Liang Xu", "Yizhou Zhou", "Yichao Yan", "Xin Jin", "Wenhan Zhu", "Fengyun Rao", "Xiaokang Yang", "Wenjun Zeng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f712"}, "filepath": "data/2401.03043v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993299634225481, "arXiv_link": "https://arxiv.org/html/2401.03043v1", "other_link": "https://github.com/Levishery/Flywire-Neuron-Tracing.", "title": "Cross-dimension Affinity Distillation for 3D EM Neuron Segmentation", "abstract": "The current neuron reconstruction pipeline for electron microscopy (EM) data\nusually includes automatic image segmentation followed by extensive human\nexpert proofreading. In this work, we aim to reduce human workload by\npredicting connectivity between over-segmented neuron pieces, taking both\nmicroscopy image and 3D morphology features into account, similar to human\nproofreading workflow. To this end, we first construct a dataset, named\nFlyTracing, that contains millions of pairwise connections of segments\nexpanding the whole fly brain, which is three orders of magnitude larger than\nexisting datasets for neuron segment connection. To learn sophisticated\nbiological imaging features from the connectivity annotations, we propose a\nnovel connectivity-aware contrastive learning method to generate dense\nvolumetric EM image embedding. The learned embeddings can be easily\nincorporated with any point or voxel-based morphological representations for\nautomatic neuron tracing. Extensive comparisons of different combination\nschemes of image and morphological representation in identifying split errors\nacross the whole fly brain demonstrate the superiority of the proposed\napproach, especially for the locations that contain severe imaging artifacts,\nsuch as section missing and misalignment. 
The dataset and code are available at\nhttps://github.com/Levishery/Flywire-Neuron-Tracing.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xiaoyu Liu", "Miaomiao Cai", "Yinda Chen", "Yueyi Zhang", "Te Shi", "Ruobing Zhang", "Xuejin Chen", "Zhiwei Xiong"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f713"}, "filepath": "data/2312.04016.png", "tags": [], "_media_type": "image", "_rand": 0.9991857017830054, "arXiv_link": "https://arxiv.org/abs/2312.04016", "other_link": "https://github.com/ardianumam/PartDistill.", "title": "PartDistill: 3D Shape Part Segmentation by Vision-Language Model Distillation", "abstract": "This paper proposes a cross-modal distillation framework, PartDistill, which\ntransfers 2D knowledge from vision-language models (VLMs) to facilitate 3D\nshape part segmentation. PartDistill addresses three major challenges in this\ntask: the lack of 3D segmentation in invisible or undetected regions in the 2D\nprojections, inconsistent 2D predictions by VLMs, and the lack of knowledge\naccumulation across different 3D shapes. PartDistill consists of a teacher\nnetwork that uses a VLM to make 2D predictions and a student network that\nlearns from the 2D predictions while extracting geometrical features from\nmultiple 3D shapes to carry out 3D part segmentation. A bi-directional\ndistillation, including forward and backward distillations, is carried out\nwithin the framework, where the former forward distills the 2D predictions to\nthe student network, and the latter improves the quality of the 2D predictions,\nwhich subsequently enhances the final 3D segmentation. Moreover, PartDistill\ncan exploit generative models that facilitate effortless 3D shape creation for\ngenerating knowledge sources to be distilled. Through extensive experiments,\nPartDistill boosts the existing methods with substantial margins on widely used\nShapeNetPart and PartNetE datasets, by more than 15% and 12% higher mIoU\nscores, respectively. The code for this work is available at\nhttps://github.com/ardianumam/PartDistill.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Ardian Umam", "Cheng-Kun Yang", "Min-Hung Chen", "Jen-Hui Chuang", "Yen-Yu Lin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f714"}, "filepath": "data/2403.16368.png", "tags": [], "_media_type": "image", "_rand": 0.9999312505402195, "arXiv_link": "https://arxiv.org/abs/2403.16368", "other_link": "", "title": "Distilling Semantic Priors from SAM to Efficient Image Restoration Models", "abstract": "In image restoration (IR), leveraging semantic priors from segmentation\nmodels has been a common approach to improve performance. The recent segment\nanything model (SAM) has emerged as a powerful tool for extracting advanced\nsemantic priors to enhance IR tasks. However, the computational cost of SAM is\nprohibitive for IR, compared to existing smaller IR models. The incorporation\nof SAM for extracting semantic priors considerably hampers the model inference\nefficiency. To address this issue, we propose a general framework to distill\nSAM's semantic knowledge to boost exiting IR models without interfering with\ntheir inference process. 
Specifically, our proposed framework consists of the\nsemantic priors fusion (SPF) scheme and the semantic priors distillation (SPD)\nscheme. SPF fuses two kinds of information between the restored image predicted\nby the original IR model and the semantic mask predicted by SAM for the refined\nrestored image. SPD leverages a self-distillation manner to distill the fused\nsemantic priors to boost the performance of original IR models. Additionally,\nwe design a semantic-guided relation (SGR) module for SPD, which ensures\nsemantic feature representation space consistency to fully distill the priors.\nWe demonstrate the effectiveness of our framework across multiple IR models and\ntasks, including deraining, deblurring, and denoising.", "keywords": ["Efficient and scalable vision", "Low-level vision"], "authors_list": ["Quan Zhang", "Xiaoyu Liu", "Wei Li", "Hanting Chen", "Junchao Liu", "Jie Hu", "Zhiwei Xiong", "Chun Yuan", "Yunhe Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f715"}, "filepath": "data/2311.17024.png", "tags": [], "_media_type": "image", "_rand": 0.9996880390261995, "arXiv_link": "https://arxiv.org/abs/2311.17024", "other_link": "https://diff3f.github.io/", "title": "Diffusion 3D Features (Diff3F): Decorating Untextured Shapes with Distilled Semantic Features", "abstract": "We present Diff3F as a simple, robust, and class-agnostic feature descriptor\nthat can be computed for untextured input shapes (meshes or point clouds). Our\nmethod distills diffusion features from image foundational models onto input\nshapes. Specifically, we use the input shapes to produce depth and normal maps\nas guidance for conditional image synthesis. In the process, we produce\n(diffusion) features in 2D that we subsequently lift and aggregate on the\noriginal surface. Our key observation is that even if the conditional image\ngenerations obtained from multi-view rendering of the input shapes are\ninconsistent, the associated image features are robust and, hence, can be\ndirectly aggregated across views. This produces semantic features on the input\nshapes, without requiring additional data or training. We perform extensive\nexperiments on multiple benchmarks (SHREC'19, SHREC'20, FAUST, and TOSCA) and\ndemonstrate that our features, being semantic instead of geometric, produce\nreliable correspondence across both isometric and non-isometrically related\nshape families. Code is available via the project page at\nhttps://diff3f.github.io/", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Niladri Shekhar Dutt", "Sanjeev Muralikrishnan", "Niloy J. Mitra"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f716"}, "filepath": "data/2307.09892.png", "tags": [], "_media_type": "image", "_rand": 0.9999013443811641, "arXiv_link": "https://arxiv.org/abs/2307.09892", "other_link": "", "title": "TutteNet: Injective 3D Deformations by Composition of 2D Mesh Deformations", "abstract": "We propose 3Deformer, a general-purpose framework for interactive 3D shape\nediting. 
Given a source 3D mesh with semantic materials, and a user-specified\nsemantic image, 3Deformer can accurately edit the source mesh following the\nshape guidance of the semantic image, while preserving the source topology as\nrigid as possible. Recent studies of 3D shape editing mostly focus on learning\nneural networks to predict 3D shapes, which requires high-cost 3D training\ndatasets and is limited to handling objects involved in the datasets. Unlike\nthese studies, our 3Deformer is a non-training and common framework, which only\nrequires supervision of readily-available semantic images, and is compatible\nwith editing various objects unlimited by datasets. In 3Deformer, the source\nmesh is deformed utilizing the differentiable renderer technique, according to\nthe correspondences between semantic images and mesh materials. However,\nguiding complex 3D shapes with a simple 2D image incurs extra challenges, that\nis, the deform accuracy, surface smoothness, geometric rigidity, and global\nsynchronization of the edited mesh should be guaranteed. To address these\nchallenges, we propose a hierarchical optimization architecture to balance the\nglobal and local shape features, and propose further various strategies and\nlosses to improve properties of accuracy, smoothness, rigidity, and so on.\nExtensive experiments show that our 3Deformer is able to produce impressive\nresults and reaches the state-of-the-art level.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Bo Sun", "Thibault Groueix", "Chen Song", "Qixing Huang", "Noam Aigerman"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f717"}, "filepath": "data/2307.10350.png", "tags": [], "_media_type": "image", "_rand": 0.999376498325443, "arXiv_link": "https://arxiv.org/abs/2307.10350", "other_link": "", "title": "MAGICK: A Large-scale Captioned Dataset from Matting Generated Images using Chroma Keying", "abstract": "Massive web datasets play a key role in the success of large vision-language\nmodels like CLIP and Flamingo. However, the raw web data is noisy, and existing\nfiltering methods to reduce noise often come at the expense of data diversity.\nOur work focuses on caption quality as one major source of noise, and studies\nhow generated captions can increase the utility of web-scraped datapoints with\nnondescript text. Through exploring different mixing strategies for raw and\ngenerated captions, we outperform the best filtering method proposed by the\nDataComp benchmark by 2% on ImageNet and 4% on average across 38 tasks, given a\ncandidate pool of 128M image-text pairs. Our best approach is also 2x better at\nFlickr and MS-COCO retrieval. We then analyze what makes synthetic captions an\neffective source of text supervision. In experimenting with different image\ncaptioning models, we also demonstrate that the performance of a model on\nstandard image captioning benchmarks (e.g., NoCaps CIDEr) is not a reliable\nindicator of the utility of the captions it generates for multimodal training.\nFinally, our experiments with using generated captions at DataComp's large\nscale (1.28B image-text pairs) offer insights into the limitations of synthetic\ntext, as well as the importance of image curation with increasing training data\nquantity. 
The synthetic captions used in our experiments are now available on\nHuggingFace.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Ryan Burgert", "Brian Price", "Jason Kuen", "Yijun Li", "Michael Ryoo"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f718"}, "filepath": "data/2403.03736.png", "tags": [], "_media_type": "image", "_rand": 0.9994408633016878, "arXiv_link": "https://arxiv.org/abs/2403.03736", "other_link": "", "title": "Generative Latent Coding for Ultra-Low Bitrate Image Compression", "abstract": "Recent progress in generative compression technology has significantly\nimproved the perceptual quality of compressed data. However, these advancements\nprimarily focus on producing high-frequency details, often overlooking the\nability of generative models to capture the prior distribution of image\ncontent, thus impeding further bitrate reduction in extreme compression\nscenarios (<0.05 bpp). Motivated by the capabilities of predictive language\nmodels for lossless compression, this paper introduces a novel Unified Image\nGeneration-Compression (UIGC) paradigm, merging the processes of generation and\ncompression. A key feature of the UIGC framework is the adoption of\nvector-quantized (VQ) image models for tokenization, alongside a multi-stage\ntransformer designed to exploit spatial contextual information for modeling the\nprior distribution. As such, the dual-purpose framework effectively utilizes\nthe learned prior for entropy estimation and assists in the regeneration of\nlost tokens. Extensive experiments demonstrate the superiority of the proposed\nUIGC framework over existing codecs in perceptual quality and human perception,\nparticularly in ultra-low bitrate scenarios (<=0.03 bpp), pioneering a new\ndirection in generative compression.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Zhaoyang Jia", "Jiahao Li", "Bin Li", "Houqiang Li", "Yan Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f719"}, "filepath": "data/2403.17761.png", "tags": [], "_media_type": "image", "_rand": 0.9991257154583886, "arXiv_link": "https://arxiv.org/abs/2403.17761", "other_link": "", "title": "Makeup Prior Models for 3D Facial Makeup Estimation and Applications", "abstract": "In this work, we introduce two types of makeup prior models to extend\nexisting 3D face prior models: PCA-based and StyleGAN2-based priors. The\nPCA-based prior model is a linear model that is easy to construct and is\ncomputationally efficient. However, it retains only low-frequency information.\nConversely, the StyleGAN2-based model can represent high-frequency information\nwith relatively higher computational cost than the PCA-based model. Although\nthere is a trade-off between the two models, both are applicable to 3D facial\nmakeup estimation and related applications. 
By leveraging makeup prior models\nand designing a makeup consistency module, we effectively address the\nchallenges that previous methods faced in robustly estimating makeup,\nparticularly in the context of handling self-occluded faces. In experiments, we\ndemonstrate that our approach reduces computational costs by several orders of\nmagnitude, achieving speeds up to 180 times faster. In addition, by improving\nthe accuracy of the estimated makeup, we confirm that our methods are highly\nadvantageous for various 3D facial makeup applications such as 3D makeup face\nreconstruction, user-friendly makeup editing, makeup transfer, and\ninterpolation.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Biometrics and human analysis"], "authors_list": ["Xingchao Yang", "Takafumi Taketomi", "Yuki Endo", "Yoshihiro Kanamori"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f71a"}, "filepath": "data/2404.11273.png", "tags": [], "_media_type": "image", "_rand": 0.9993073295571397, "arXiv_link": "https://arxiv.org/abs/2404.11273", "other_link": "", "title": "Uncertainty-Aware Source-Free Adaptive Image Super-Resolution with Wavelet Augmentation Transformer", "abstract": "Transformer-based models have achieved remarkable results in low-level vision\ntasks including image super-resolution (SR). However, early Transformer-based\napproaches that rely on self-attention within non-overlapping windows encounter\nchallenges in acquiring global information. To activate more input pixels\nglobally, hybrid attention models have been proposed. Moreover, training by\nsolely minimizing pixel-wise RGB losses, such as L1, have been found inadequate\nfor capturing essential high-frequency details. This paper presents two\ncontributions: i) We introduce convolutional non-local sparse attention (NLSA)\nblocks to extend the hybrid transformer architecture in order to further\nenhance its receptive field. ii) We employ wavelet losses to train Transformer\nmodels to improve quantitative and subjective performance. While wavelet losses\nhave been explored previously, showing their power in training\nTransformer-based SR models is novel. Our experimental results demonstrate that\nthe proposed model provides state-of-the-art PSNR results as well as superior\nvisual performance across various benchmark datasets.", "keywords": ["Low-level vision", "Image and video generation and manipulation"], "authors_list": ["Yuang Ai", "Xiaoqiang Zhou", "Huaibo Huang", "Lei Zhang", "Ran He"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f71b"}, "filepath": "data/2312.03767.png", "tags": [], "_media_type": "image", "_rand": 0.9991641039980029, "arXiv_link": "https://arxiv.org/abs/2312.03767", "other_link": "", "title": "Unveiling the Unknown: Unleashing the Power of Unknown to Known in Open-Set Source-Free Domain Adaptation", "abstract": "Open Set Domain Adaptation (OSDA) aims to adapt a model trained on a source\ndomain to a target domain that undergoes distribution shift and contains\nsamples from novel classes outside the source domain. 
Source-free OSDA\n(SF-OSDA) techniques eliminate the need to access source domain samples, but\ncurrent SF-OSDA methods utilize only the known classes in the target domain for\nadaptation, and require access to the entire target domain even during\ninference after adaptation, to make the distinction between known and unknown\nsamples. In this paper, we introduce Unknown Sample Discovery (USD) as an\nSF-OSDA method that utilizes a temporally ensembled teacher model to conduct\nknown-unknown target sample separation and adapts the student model to the\ntarget domain over all classes using co-training and temporal consistency\nbetween the teacher and the student. USD promotes Jensen-Shannon distance (JSD)\nas an effective measure for known-unknown sample separation. Our\nteacher-student framework significantly reduces error accumulation resulting\nfrom imperfect known-unknown sample separation, while curriculum guidance helps\nto reliably learn the distinction between target known and target unknown\nsubspaces. USD appends the target model with an unknown class node, thus\nreadily classifying a target sample into any of the known or unknown classes in\nsubsequent post-adaptation inference stages. Empirical results show that USD is\nsuperior to existing SF-OSDA methods and is competitive with current OSDA\nmodels that utilize both source and target domains during adaptation.", "keywords": [], "authors_list": ["Fuli Wan", "Han Zhao", "Xu Yang", "Cheng Deng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f71c"}, "filepath": "data/2402.18133.png", "tags": [], "_media_type": "image", "_rand": 0.9990704202434944, "arXiv_link": "https://arxiv.org/abs/2402.18133", "other_link": "https://github.com/dvlab-research/Parametric-Contrastive-Learning.", "title": "Classes Are Not Equal: An Empirical Study on Image Recognition Fairness", "abstract": "In this paper, we present an empirical study on image recognition fairness,\ni.e., extreme class accuracy disparity on balanced data like ImageNet. We\nexperimentally demonstrate that classes are not equal and the fairness issue is\nprevalent for image classification models across various datasets, network\narchitectures, and model capacities. Moreover, several intriguing properties of\nfairness are identified. First, the unfairness lies in problematic\nrepresentation rather than classifier bias. Second, with the proposed concept\nof Model Prediction Bias, we investigate the origins of problematic\nrepresentation during optimization. Our findings reveal that models tend to\nexhibit greater prediction biases for classes that are more challenging to\nrecognize. It means that more other classes will be confused with harder\nclasses. Then the False Positives (FPs) will dominate the learning in\noptimization, thus leading to their poor accuracy. Further, we conclude that\ndata augmentation and representation learning algorithms improve overall\nperformance by promoting fairness to some degree in image classification. 
The\nCode is available at\nhttps://github.com/dvlab-research/Parametric-Contrastive-Learning.", "keywords": [], "authors_list": ["Jiequan Cui", "Beier Zhu", "Xin Wen", "Xiaojuan Qi", "Bei Yu", "Hanwang Zhang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f71d"}, "filepath": "data/2311.17518v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996247363694443, "arXiv_link": "https://arxiv.org/abs/2311.17518v2", "other_link": "https://lorebianchi98.github.io/FG-OVD/.", "title": "The devil is in the fine-grained details: Evaluating open-vocabulary object detectors for fine-grained understanding", "abstract": "Recent advancements in large vision-language models enabled visual object\ndetection in open-vocabulary scenarios, where object classes are defined in\nfree-text formats during inference. In this paper, we aim to probe the\nstate-of-the-art methods for open-vocabulary object detection to determine to\nwhat extent they understand fine-grained properties of objects and their parts.\nTo this end, we introduce an evaluation protocol based on dynamic vocabulary\ngeneration to test whether models detect, discern, and assign the correct\nfine-grained description to objects in the presence of hard-negative classes.\nWe contribute with a benchmark suite of increasing difficulty and probing\ndifferent properties like color, pattern, and material. We further enhance our\ninvestigation by evaluating several state-of-the-art open-vocabulary object\ndetectors using the proposed protocol and find that most existing solutions,\nwhich shine in standard open-vocabulary benchmarks, struggle to accurately\ncapture and distinguish finer object details. We conclude the paper by\nhighlighting the limitations of current methodologies and exploring promising\nresearch directions to overcome the discovered drawbacks. Data and code are\navailable at https://lorebianchi98.github.io/FG-OVD/.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Lorenzo Bianchi", "Fabio Carrara", "Nicola Messina", "Claudio Gennaro", "Fabrizio Falchi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f71e"}, "filepath": "data/2312.02133v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994706698598567, "arXiv_link": "https://arxiv.org/abs/2312.02133v1", "other_link": "", "title": "Style Aligned Image Generation via Shared Attention", "abstract": "Large-scale Text-to-Image (T2I) models have rapidly gained prominence across\ncreative fields, generating visually compelling outputs from textual prompts.\nHowever, controlling these models to ensure consistent style remains\nchallenging, with existing methods necessitating fine-tuning and manual\nintervention to disentangle content and style. In this paper, we introduce\nStyleAligned, a novel technique designed to establish style alignment among a\nseries of generated images. By employing minimal `attention sharing' during the\ndiffusion process, our method maintains style consistency across images within\nT2I models. This approach allows for the creation of style-consistent images\nusing a reference style through a straightforward inversion operation. 
Our\nmethod's evaluation across diverse styles and text prompts demonstrates\nhigh-quality synthesis and fidelity, underscoring its efficacy in achieving\nconsistent style across various inputs.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Amir Hertz", "Andrey Voynov", "Shlomi Fruchter", "Daniel Cohen-Or"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f71f"}, "filepath": "data/2312.02918v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999728853805505, "arXiv_link": "https://arxiv.org/abs/2312.02918v2", "other_link": "", "title": "Multimodal Prompt Perceiver: Empower Adaptiveness, Generalizability and Fidelity for All-in-One Image Restoration", "abstract": "Despite substantial progress, all-in-one image restoration (IR) grapples with\npersistent challenges in handling intricate real-world degradations. This paper\nintroduces MPerceiver: a novel multimodal prompt learning approach that\nharnesses Stable Diffusion (SD) priors to enhance adaptiveness,\ngeneralizability and fidelity for all-in-one image restoration. Specifically,\nwe develop a dual-branch module to master two types of SD prompts: textual for\nholistic representation and visual for multiscale detail representation. Both\nprompts are dynamically adjusted by degradation predictions from the CLIP image\nencoder, enabling adaptive responses to diverse unknown degradations. Moreover,\na plug-in detail refinement module improves restoration fidelity via direct\nencoder-to-decoder information transformation. To assess our method, MPerceiver\nis trained on 9 tasks for all-in-one IR and outperforms state-of-the-art\ntask-specific methods across most tasks. Post multitask pre-training,\nMPerceiver attains a generalized representation in low-level vision, exhibiting\nremarkable zero-shot and few-shot capabilities in unseen tasks. Extensive\nexperiments on 16 IR tasks underscore the superiority of MPerceiver in terms of\nadaptiveness, generalizability and fidelity.", "keywords": ["Low-level vision", "Multimodal models and vision-language models"], "authors_list": ["Yuang Ai", "Huaibo Huang", "Xiaoqiang Zhou", "Jiexiang Wang", "Ran He"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f720"}, "filepath": "data/2401.03707.png", "tags": [], "_media_type": "image", "_rand": 0.9998564615773695, "arXiv_link": "https://arxiv.org/abs/2401.03707", "other_link": "https://kaist-viclab.github.io/fmanet-site", "title": "FMA-Net: Flow Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring", "abstract": "We present a joint learning scheme of video super-resolution and deblurring,\ncalled VSRDB, to restore clean high-resolution (HR) videos from blurry\nlow-resolution (LR) ones. This joint restoration problem has drawn much less\nattention compared to single restoration problems. In this paper, we propose a\nnovel flow-guided dynamic filtering (FGDF) and iterative feature refinement\nwith multi-attention (FRMA), which constitutes our VSRDB framework, denoted as\nFMA-Net. 
Specifically, our proposed FGDF enables precise estimation of both\nspatio-temporally-variant degradation and restoration kernels that are aware of\nmotion trajectories through sophisticated motion representation learning.\nCompared to conventional dynamic filtering, the FGDF enables the FMA-Net to\neffectively handle large motions into the VSRDB. Additionally, the stacked FRMA\nblocks trained with our novel temporal anchor (TA) loss, which temporally\nanchors and sharpens features, refine features in a coarse-to-fine manner\nthrough iterative updates. Extensive experiments demonstrate the superiority of\nthe proposed FMA-Net over state-of-the-art methods in terms of both\nquantitative and qualitative quality. Codes and pre-trained models are\navailable at: https://kaist-viclab.github.io/fmanet-site", "keywords": ["Low-level vision", "Image and video generation and manipulation"], "authors_list": ["Geunhyuk Youk", "Jihyong Oh", "Munchurl Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f721"}, "filepath": "data/2306.13325.png", "tags": [], "_media_type": "image", "_rand": 0.9991802700055117, "arXiv_link": "https://arxiv.org/abs/2306.13325", "other_link": "", "title": "Differentiable Display Photometric Stereo", "abstract": "Photometric stereo leverages variations in illumination conditions to\nreconstruct surface normals. Display photometric stereo, which employs a\nconventional monitor as an illumination source, has the potential to overcome\nlimitations often encountered in bulky and difficult-to-use conventional\nsetups. In this paper, we present differentiable display photometric stereo\n(DDPS), addressing an often overlooked challenge in display photometric stereo:\nthe design of display patterns. Departing from using heuristic display\npatterns, DDPS learns the display patterns that yield accurate normal\nreconstruction for a target system in an end-to-end manner. To this end, we\npropose a differentiable framework that couples basis-illumination image\nformation with analytic photometric-stereo reconstruction. The differentiable\nframework facilitates the effective learning of display patterns via\nauto-differentiation. Also, for training supervision, we propose to use 3D\nprinting for creating a real-world training dataset, enabling accurate\nreconstruction on the target real-world setup. Finally, we exploit that\nconventional LCD monitors emit polarized light, which allows for the optical\nseparation of diffuse and specular reflections when combined with a\npolarization camera, leading to accurate normal reconstruction. 
Extensive\nevaluation of DDPS shows improved normal-reconstruction accuracy compared to\nheuristic patterns and demonstrates compelling properties such as robustness to\npattern initialization, calibration errors, and simplifications in image\nformation and reconstruction.", "keywords": ["Computational imaging and physics-based vision"], "authors_list": ["Seokjun Choi", "Seungwoo Yoon", "Giljoo Nam", "Seungyong Lee", "Seung-Hwan Baek"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f722"}, "filepath": "data/2312.02221.png", "tags": [], "_media_type": "image", "_rand": 0.9993167268837136, "arXiv_link": "https://arxiv.org/abs/2312.02221", "other_link": "", "title": "Slice3D: Multi-Slice, Occlusion-Revealing, Single View 3D Reconstruction", "abstract": "We introduce multi-slice reasoning, a new notion for single-view 3D\nreconstruction which challenges the current and prevailing belief that\nmulti-view synthesis is the most natural conduit between single-view and 3D.\nOur key observation is that object slicing is more advantageous than altering\nviews to reveal occluded structures. Specifically, slicing is more\nocclusion-revealing since it can peel through any occluders without\nobstruction. In the limit, i.e., with infinitely many slices, it is guaranteed\nto unveil all hidden object parts. We realize our idea by developing Slice3D, a\nnovel method for single-view 3D reconstruction which first predicts multi-slice\nimages from a single RGB image and then integrates the slices into a 3D model\nusing a coordinate-based transformer network for signed distance prediction.\nThe slice images can be regressed or generated, both through a U-Net based\nnetwork. For the former, we inject a learnable slice indicator code to\ndesignate each decoded image into a spatial slice location, while the slice\ngenerator is a denoising diffusion model operating on the entirety of slice\nimages stacked on the input channels. We conduct extensive evaluation against\nstate-of-the-art alternatives to demonstrate superiority of our method,\nespecially in recovering complex and severely occluded shape structures, amid\nambiguities. All Slice3D results were produced by networks trained on a single\nNvidia A40 GPU, with an inference time less than 20 seconds.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yizhi Wang", "Wallace Lira", "Wenqi Wang", "Ali Mahdavi Amiri", "Hao Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f723"}, "filepath": "data/2311.07630.png", "tags": [], "_media_type": "image", "_rand": 0.9995354842470197, "arXiv_link": "https://arxiv.org/abs/2311.07630", "other_link": "", "title": "Cyclic Learning for Binaural Audio Generation and Localization", "abstract": "Binaural stereo audio is recorded by imitating the way the human ear receives\nsound, which provides people with an immersive listening experience. Existing\napproaches leverage autoencoders and directly exploit visual spatial\ninformation to synthesize binaural stereo, resulting in a limited\nrepresentation of visual guidance. For the first time, we propose a visually\nguided generative adversarial approach for generating binaural stereo audio\nfrom mono audio. 
Specifically, we develop a Stereo Audio Generation Model\n(SAGM), which utilizes shared spatio-temporal visual information to guide the\ngenerator and the discriminator to work separately. The shared visual\ninformation is updated alternately in the generative adversarial stage,\nallowing the generator and discriminator to deliver their respective guided\nknowledge while visually sharing. The proposed method learns bidirectional\ncomplementary visual information, which facilitates the expression of visual\nguidance in generation. In addition, spatial perception is a crucial attribute\nof binaural stereo audio, and thus the evaluation of stereo spatial perception\nis essential. However, previous metrics failed to measure the spatial\nperception of audio. To this end, a metric to measure the spatial perception of\naudio is proposed for the first time. The proposed metric is capable of\nmeasuring the magnitude and direction of spatial perception in the temporal\ndimension. Further, considering its function, it is feasible to utilize it\ninstead of demanding user studies to some extent. The proposed method achieves\nstate-of-the-art performance on 2 datasets and 5 evaluation metrics.\nQualitative experiments and user studies demonstrate that the method generates\nspace-realistic stereo audio.", "keywords": ["Multimodal models and vision-language models", "Deep learning architectures and techniques"], "authors_list": ["Zhaojian Li", "Bin Zhao", "Yuan Yuan"], "category_name": "Sound", "all_categories": ["Sound", "Computer Vision and Pattern Recognition", "Machine Learning", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f724"}, "filepath": "data/2312.00096.png", "tags": [], "_media_type": "image", "_rand": 0.9992666535315661, "arXiv_link": "https://arxiv.org/abs/2312.00096", "other_link": "", "title": "OST: Refining Text Knowledge with Optimal Spatio-Temporal Descriptor for General Video Recognition", "abstract": "Due to the resource-intensive nature of training vision-language models on\nexpansive video data, a majority of studies have centered on adapting\npre-trained image-language models to the video domain. Dominant pipelines\npropose to tackle the visual discrepancies with additional temporal learners\nwhile overlooking the substantial discrepancy for web-scaled descriptive\nnarratives and concise action category names, leading to less distinct semantic\nspace and potential performance limitations. In this work, we prioritize the\nrefinement of text knowledge to facilitate generalizable video recognition. To\naddress the limitations of the less distinct semantic space of category names,\nwe prompt a large language model (LLM) to augment action class names into\nSpatio-Temporal Descriptors thus bridging the textual discrepancy and serving\nas a knowledge base for general recognition. Moreover, to assign the best\ndescriptors with different video instances, we propose Optimal Descriptor\nSolver, forming the video recognition problem as solving the optimal matching\nflow across frame-level representations and descriptors. Comprehensive\nevaluations in zero-shot, few-shot, and fully supervised video recognition\nhighlight the effectiveness of our approach. 
Our best model achieves a\nstate-of-the-art zero-shot accuracy of 75.1% on Kinetics-600.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Tongjia Chen", "Hongshan Yu", "Zhengeng Yang", "Zechuan Li", "Wei Sun", "Chen Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f725"}, "filepath": "data/2401.13296.png", "tags": [], "_media_type": "image", "_rand": 0.9998822244474889, "arXiv_link": "https://arxiv.org/abs/2401.13296", "other_link": "", "title": "Visual Objectification in Films: Towards a New AI Task for Video Interpretation", "abstract": "In film gender studies, the concept of 'male gaze' refers to the way the\ncharacters are portrayed on-screen as objects of desire rather than subjects.\nIn this article, we introduce a novel video-interpretation task, to detect\ncharacter objectification in films. The purpose is to reveal and quantify the\nusage of complex temporal patterns operated in cinema to produce the cognitive\nperception of objectification. We introduce the ObyGaze12 dataset, made of 1914\nmovie clips densely annotated by experts for objectification concepts\nidentified in film studies and psychology. We evaluate recent vision models,\nshow the feasibility of the task and where the challenges remain with concept\nbottleneck models. Our new dataset and code are made available to the\ncommunity.", "keywords": ["Scene analysis and understanding", "Vision applications for social good and ethics"], "authors_list": ["Julie Tores", "Lucile Sassatelli", "Hui-Yin Wu", "Clement Bergman", "L\u00e9a Andolfi", "Victor Ecrement", "Frederic Precioso", "Thierry Devars", "Magali GUARESI", "Virginie Julliard", "Sarah L\u00e9cossais"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f726"}, "filepath": "data/2405.10037v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995616142359438, "arXiv_link": "https://arxiv.org/abs/2405.10037v1", "other_link": "https://github.com/Lqm26/BMCNet-ESR.", "title": "Bilateral Event Mining and Complementary for Event Stream Super-Resolution", "abstract": "Event Stream Super-Resolution (ESR) aims to address the challenge of\ninsufficient spatial resolution in event streams, which holds great\nsignificance for the application of event cameras in complex scenarios.\nPrevious works for ESR often process positive and negative events in a mixed\nparadigm. This paradigm limits their ability to effectively model the unique\ncharacteristics of each event and mutually refine each other by considering\ntheir correlations. In this paper, we propose a bilateral event mining and\ncomplementary network (BMCNet) to fully leverage the potential of each event\nand capture the shared information to complement each other simultaneously.\nSpecifically, we resort to a two-stream network to accomplish comprehensive\nmining of each type of events individually. To facilitate the exchange of\ninformation between two streams, we propose a bilateral information exchange\n(BIE) module. 
This module is layer-wisely embedded between two streams,\nenabling the effective propagation of hierarchical global information while\nalleviating the impact of invalid information brought by inherent\ncharacteristics of events. The experimental results demonstrate that our\napproach outperforms the previous state-of-the-art methods in ESR, achieving\nperformance improvements of over 11\\% on both real and synthetic datasets.\nMoreover, our method significantly enhances the performance of event-based\ndownstream tasks such as object recognition and video reconstruction. Our code\nis available at https://github.com/Lqm26/BMCNet-ESR.", "keywords": ["Low-level vision"], "authors_list": ["Zhilin Huang", "Quanmin Liang", "Yijie Yu", "Chujun Qin", "Xiawu Zheng", "Kai Huang", "Zikun Zhou", "Wenming Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f727"}, "filepath": "data/2404.00928.png", "tags": [], "_media_type": "image", "_rand": 0.9998727626295275, "arXiv_link": "https://arxiv.org/abs/2404.00928", "other_link": "", "title": "Instance-Aware Group Quantization for Vision Transformers", "abstract": "Post-training quantization (PTQ) is an efficient model compression technique\nthat quantizes a pretrained full-precision model using only a small calibration\nset of unlabeled samples without retraining. PTQ methods for convolutional\nneural networks (CNNs) provide quantization results comparable to\nfull-precision counterparts. Directly applying them to vision transformers\n(ViTs), however, incurs severe performance degradation, mainly due to the\ndifferences in architectures between CNNs and ViTs. In particular, the\ndistribution of activations for each channel vary drastically according to\ninput instances, making PTQ methods for CNNs inappropriate for ViTs. To address\nthis, we introduce instance-aware group quantization for ViTs (IGQ-ViT). To\nthis end, we propose to split the channels of activation maps into multiple\ngroups dynamically for each input instance, such that activations within each\ngroup share similar statistical properties. We also extend our scheme to\nquantize softmax attentions across tokens. In addition, the number of groups\nfor each layer is adjusted to minimize the discrepancies between predictions\nfrom quantized and full-precision models, under a bit-operation (BOP)\nconstraint. We show extensive experimental results on image classification,\nobject detection, and instance segmentation, with various transformer\narchitectures, demonstrating the effectiveness of our approach.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Jaehyeon Moon", "Dohyung Kim", "Jun Yong Cheon", "Bumsub Ham"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f728"}, "filepath": "data/2404.05657.png", "tags": [], "_media_type": "image", "_rand": 0.9994144202359477, "arXiv_link": "https://arxiv.org/abs/2404.05657", "other_link": "https://github.com/sihaoevery/lambda_vit.", "title": "MLP Can Be A Good Transformer Learner", "abstract": "Self-attention mechanism is the key of the Transformer but often criticized\nfor its computation demands. 
Previous token pruning works motivate their\nmethods from the view of computation redundancy but still need to load the full\nnetwork and require same memory costs. This paper introduces a novel strategy\nthat simplifies vision transformers and reduces computational load through the\nselective removal of non-essential attention layers, guided by entropy\nconsiderations. We identify that regarding the attention layer in bottom\nblocks, their subsequent MLP layers, i.e. two feed-forward layers, can elicit\nthe same entropy quantity. Meanwhile, the accompanied MLPs are under-exploited\nsince they exhibit smaller feature entropy compared to those MLPs in the top\nblocks. Therefore, we propose to integrate the uninformative attention layers\ninto their subsequent counterparts by degenerating them into identical mapping,\nyielding only MLP in certain transformer blocks. Experimental results on\nImageNet-1k show that the proposed method can remove 40% attention layer of\nDeiT-B, improving throughput and memory bound without performance compromise.\nCode is available at https://github.com/sihaoevery/lambda_vit.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Sihao Lin", "Pumeng Lyu", "Dongrui Liu", "Tao Tang", "Xiaodan Liang", "Andy Song", "Xiaojun Chang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f729"}, "filepath": "data/2404.01278.png", "tags": [], "_media_type": "image", "_rand": 0.9994651476055713, "arXiv_link": "https://arxiv.org/abs/2404.01278", "other_link": "https://github.com/edmav4/BiPer.", "title": "BiPer: Binary Neural Networks using a Periodic Function", "abstract": "Quantized neural networks employ reduced precision representations for both\nweights and activations. This quantization process significantly reduces the\nmemory requirements and computational complexity of the network. Binary Neural\nNetworks (BNNs) are the extreme quantization case, representing values with\njust one bit. Since the sign function is typically used to map real values to\nbinary values, smooth approximations are introduced to mimic the gradients\nduring error backpropagation. Thus, the mismatch between the forward and\nbackward models corrupts the direction of the gradient, causing training\ninconsistency problems and performance degradation. In contrast to current BNN\napproaches, we propose to employ a binary periodic (BiPer) function during\nbinarization. Specifically, we use a square wave for the forward pass to obtain\nthe binary values and employ the trigonometric sine function with the same\nperiod of the square wave as a differentiable surrogate during the backward\npass. We demonstrate that this approach can control the quantization error by\nusing the frequency of the periodic function and improves network performance.\nExtensive experiments validate the effectiveness of BiPer in benchmark datasets\nand network architectures, with improvements of up to 1% and 0.69% with respect\nto state-of-the-art methods in the classification task over CIFAR-10 and\nImageNet, respectively. 
Our code is publicly available at\nhttps://github.com/edmav4/BiPer.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Edwin Vargas", "Claudia Correa", "Carlos Hinojosa", "Henry Arguello"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f72a"}, "filepath": "data/2311.18618.png", "tags": [], "_media_type": "image", "_rand": 0.9997014727769152, "arXiv_link": "https://arxiv.org/abs/2311.18618", "other_link": "", "title": "Task-aligned Part-aware Panoptic Segmentation through Joint Object-Part Representations", "abstract": "Part-aware panoptic segmentation is a problem of computer vision that aims to\nprovide a semantic understanding of the scene at multiple levels of\ngranularity. More precisely, semantic areas, object instances, and semantic\nparts are predicted simultaneously. In this paper, we present our Joint\nPanoptic Part Fusion (JPPF) that combines the three individual segmentations\neffectively to obtain a panoptic-part segmentation. Two aspects are of utmost\nimportance for this: First, a unified model for the three problems is desired\nthat allows for mutually improved and consistent representation learning.\nSecond, balancing the combination so that it gives equal importance to all\nindividual results during fusion. Our proposed JPPF is parameter-free and\ndynamically balances its input. The method is evaluated and compared on the\nCityscapes Panoptic Parts (CPP) and Pascal Panoptic Parts (PPP) datasets in\nterms of PartPQ and Part-Whole Quality (PWQ). In extensive experiments, we\nverify the importance of our fair fusion, highlight its most significant impact\nfor areas that can be further segmented into parts, and demonstrate the\ngeneralization capabilities of our design without fine-tuning on 5 additional\ndatasets.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Daan de Geus", "Gijs Dubbelman"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f72b"}, "filepath": "data/2403.11270.png", "tags": [], "_media_type": "image", "_rand": 0.999461758453733, "arXiv_link": "https://arxiv.org/abs/2403.11270", "other_link": "", "title": "Bilateral Propagation Network for Depth Completion", "abstract": "Depth completion aims to derive a dense depth map from sparse depth\nmeasurements with a synchronized color image. Current state-of-the-art (SOTA)\nmethods are predominantly propagation-based, which work as an iterative\nrefinement on the initial estimated dense depth. However, the initial depth\nestimations mostly result from direct applications of convolutional layers on\nthe sparse depth map. In this paper, we present a Bilateral Propagation Network\n(BP-Net), that propagates depth at the earliest stage to avoid directly\nconvolving on sparse data. Specifically, our approach propagates the target\ndepth from nearby depth measurements via a non-linear model, whose coefficients\nare generated through a multi-layer perceptron conditioned on both\n\\emph{radiometric difference} and \\emph{spatial distance}. By integrating\nbilateral propagation with multi-modal fusion and depth refinement in a\nmulti-scale framework, our BP-Net demonstrates outstanding performance on both\nindoor and outdoor scenes. 
It achieves SOTA on the NYUv2 dataset and ranks 1st\non the KITTI depth completion benchmark at the time of submission. Experimental\nresults not only show the effectiveness of bilateral propagation but also\nemphasize the significance of early-stage propagation in contrast to the\nrefinement stage. Our code and trained models will be available on the project\npage.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Jie Tang", "Fei-Peng Tian", "Boshi An", "Jian Li", "Ping Tan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f72c"}, "filepath": "data/2403.03170.png", "tags": [], "_media_type": "image", "_rand": 0.999569954634578, "arXiv_link": "https://arxiv.org/abs/2403.03170", "other_link": "", "title": "SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context Misinformation Detection", "abstract": "Misinformation is a prevalent societal issue due to its potential high risks.\nOut-of-context (OOC) misinformation, where authentic images are repurposed with\nfalse text, is one of the easiest and most effective ways to mislead audiences.\nCurrent methods focus on assessing image-text consistency but lack convincing\nexplanations for their judgments, which is essential for debunking\nmisinformation. While Multimodal Large Language Models (MLLMs) have rich\nknowledge and innate capability for visual reasoning and explanation\ngeneration, they still lack sophistication in understanding and discovering the\nsubtle crossmodal differences. In this paper, we introduce SNIFFER, a novel\nmultimodal large language model specifically engineered for OOC misinformation\ndetection and explanation. SNIFFER employs two-stage instruction tuning on\nInstructBLIP. The first stage refines the model's concept alignment of generic\nobjects with news-domain entities and the second stage leverages language-only\nGPT-4 generated OOC-specific instruction data to fine-tune the model's\ndiscriminatory powers. Enhanced by external tools and retrieval, SNIFFER not\nonly detects inconsistencies between text and image but also utilizes external\nknowledge for contextual verification. Our experiments show that SNIFFER\nsurpasses the original MLLM by over 40% and outperforms state-of-the-art\nmethods in detection accuracy. SNIFFER also provides accurate and persuasive\nexplanations as validated by quantitative and human evaluations.", "keywords": ["Large multimodal models and prompting techniques", "Vision applications for social good and ethics"], "authors_list": ["Peng Qi", "Zehong Yan", "Wynne Hsu", "Mong Li Lee"], "category_name": "Multimedia", "all_categories": ["Multimedia", "Artificial Intelligence", "Computation and Language", "Computer Vision and Pattern Recognition", "Computers and Society"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f72d"}, "filepath": "data/2312.15895.png", "tags": [], "_media_type": "image", "_rand": 0.9991359514102076, "arXiv_link": "https://arxiv.org/abs/2312.15895", "other_link": "", "title": "Semantic-aware SAM for Point-Prompted Instance Segmentation", "abstract": "Single-point annotation in visual tasks, with the goal of minimizing\nlabelling costs, is becoming increasingly prominent in research. 
Recently,\nvisual foundation models, such as Segment Anything (SAM), have gained\nwidespread usage due to their robust zero-shot capabilities and exceptional\nannotation performance. However, SAM's class-agnostic output and high\nconfidence in local segmentation introduce 'semantic ambiguity', posing a\nchallenge for precise category-specific segmentation. In this paper, we\nintroduce a cost-effective category-specific segmenter using SAM. To tackle\nthis challenge, we have devised a Semantic-Aware Instance Segmentation Network\n(SAPNet) that integrates Multiple Instance Learning (MIL) with matching\ncapability and SAM with point prompts. SAPNet strategically selects the most\nrepresentative mask proposals generated by SAM to supervise segmentation, with\na specific focus on object category information. Moreover, we introduce the\nPoint Distance Guidance and Box Mining Strategy to mitigate inherent\nchallenges: 'group' and 'local' issues in weakly supervised segmentation. These\nstrategies serve to further enhance the overall segmentation performance. The\nexperimental results on Pascal VOC and COCO demonstrate the promising\nperformance of our proposed SAPNet, emphasizing its semantic matching\ncapabilities and its potential to advance point-prompted instance segmentation.\nThe code will be made publicly available.", "keywords": [], "authors_list": ["Zhaoyang Wei", "Pengfei Chen", "Xuehui Yu", "Guorong Li", "Jianbin Jiao", "Zhenjun Han"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f72e"}, "filepath": "data/2402.09944.png", "tags": [], "_media_type": "image", "_rand": 0.9993945953647131, "arXiv_link": "https://arxiv.org/abs/2402.09944", "other_link": "", "title": "Loopy-SLAM: Dense Neural SLAM with Loop Closures", "abstract": "Neural RGBD SLAM techniques have shown promise in dense Simultaneous\nLocalization And Mapping (SLAM), yet face challenges such as error accumulation\nduring camera tracking resulting in distorted maps. In response, we introduce\nLoopy-SLAM that globally optimizes poses and the dense 3D model. We use\nframe-to-model tracking using a data-driven point-based submap generation\nmethod and trigger loop closures online by performing global place recognition.\nRobust pose graph optimization is used to rigidly align the local submaps. As\nour representation is point based, map corrections can be performed efficiently\nwithout the need to store the entire history of input frames used for mapping\nas typically required by methods employing a grid based mapping structure.\nEvaluation on the synthetic Replica and real-world TUM-RGBD and ScanNet\ndatasets demonstrate competitive or superior performance in tracking, mapping,\nand rendering accuracy when compared to existing dense neural RGBD SLAM\nmethods. Project page: notchla.github.io/Loopy-SLAM.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Lorenzo Liso", "Erik Sandstr\u00f6m", "Vladimir Yugay", "Luc Van Gool", "Martin R. 
Oswald"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f72f"}, "filepath": "data/2404.00234v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995252238688991, "arXiv_link": "https://arxiv.org/abs/2404.00234v1", "other_link": "", "title": "Grid Diffusion Models for Text-to-Video Generation", "abstract": "Recent advances in the diffusion models have significantly improved\ntext-to-image generation. However, generating videos from text is a more\nchallenging task than generating images from text, due to the much larger\ndataset and higher computational cost required. Most existing video generation\nmethods use either a 3D U-Net architecture that considers the temporal\ndimension or autoregressive generation. These methods require large datasets\nand are limited in terms of computational costs compared to text-to-image\ngeneration. To tackle these challenges, we propose a simple but effective novel\ngrid diffusion for text-to-video generation without temporal dimension in\narchitecture and a large text-video paired dataset. We can generate a\nhigh-quality video using a fixed amount of GPU memory regardless of the number\nof frames by representing the video as a grid image. Additionally, since our\nmethod reduces the dimensions of the video to the dimensions of the image,\nvarious image-based methods can be applied to videos, such as text-guided video\nmanipulation from image manipulation. Our proposed method outperforms the\nexisting methods in both quantitative and qualitative evaluations,\ndemonstrating the suitability of our model for real-world video generation.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Taegyeong Lee", "Soyeong Kwon", "Taehwan Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f730"}, "filepath": "data/2310.15008.png", "tags": [], "_media_type": "image", "_rand": 0.9999198784435901, "arXiv_link": "https://arxiv.org/abs/2310.15008", "other_link": "", "title": "Wonder3D: Single Image to 3D using Cross-Domain Diffusion", "abstract": "In this work, we introduce Wonder3D, a novel method for efficiently\ngenerating high-fidelity textured meshes from single-view images.Recent methods\nbased on Score Distillation Sampling (SDS) have shown the potential to recover\n3D geometry from 2D diffusion priors, but they typically suffer from\ntime-consuming per-shape optimization and inconsistent geometry. In contrast,\ncertain works directly produce 3D information via fast network inferences, but\ntheir results are often of low quality and lack geometric details. To\nholistically improve the quality, consistency, and efficiency of image-to-3D\ntasks, we propose a cross-domain diffusion model that generates multi-view\nnormal maps and the corresponding color images. To ensure consistency, we\nemploy a multi-view cross-domain attention mechanism that facilitates\ninformation exchange across views and modalities. Lastly, we introduce a\ngeometry-aware normal fusion algorithm that extracts high-quality surfaces from\nthe multi-view 2D representations. 
Our extensive evaluations demonstrate that\nour method achieves high-quality reconstruction results, robust generalization,\nand reasonably good efficiency compared to prior works.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Xiaoxiao Long", "Yuan-Chen Guo", "Cheng Lin", "Yuan Liu", "Zhiyang Dou", "Lingjie Liu", "Yuexin Ma", "Song-Hai Zhang", "Marc Habermann", "Christian Theobalt", "Wenping Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f731"}, "filepath": "data/2308.13628.png", "tags": [], "_media_type": "image", "_rand": 0.9990402281285568, "arXiv_link": "https://arxiv.org/abs/2308.13628", "other_link": "https://github.com/viridityzhu/HiFiHR.", "title": "Towards High-fidelity Artistic Image Vectorization via Texture-Encapsulated Shape Parameterization", "abstract": "We present HiFiHR, a high-fidelity hand reconstruction approach that utilizes\nrender-and-compare in the learning-based framework from a single image, capable\nof generating visually plausible and accurate 3D hand meshes while recovering\nrealistic textures. Our method achieves superior texture reconstruction by\nemploying a parametric hand model with predefined texture assets, and by\nestablishing a texture reconstruction consistency between the rendered and\ninput images during training. Moreover, based on pretraining the network on an\nannotated dataset, we apply varying degrees of supervision using our pipeline,\ni.e., self-supervision, weak supervision, and full supervision, and discuss the\nvarious levels of contributions of the learned high-fidelity textures in\nenhancing hand pose and shape estimation. Experimental results on public\nbenchmarks including FreiHAND and HO-3D demonstrate that our method outperforms\nthe state-of-the-art hand reconstruction methods in texture reconstruction\nquality while maintaining comparable accuracy in pose and shape estimation. Our\ncode is available at https://github.com/viridityzhu/HiFiHR.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Ye Chen", "Bingbing Ni", "Jinfan Liu", "Xiaoyang Huang", "Xuanhong Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f732"}, "filepath": "data/2403.00644.png", "tags": [], "_media_type": "image", "_rand": 0.9991069541582773, "arXiv_link": "https://arxiv.org/abs/2403.00644", "other_link": "", "title": "Diff-Plugin: Revitalizing Details for Diffusion-based Low-level Tasks", "abstract": "Diffusion models trained on large-scale datasets have achieved remarkable\nprogress in image synthesis. However, due to the randomness in the diffusion\nprocess, they often struggle with handling diverse low-level tasks that require\ndetails preservation. To overcome this limitation, we present a new Diff-Plugin\nframework to enable a single pre-trained diffusion model to generate\nhigh-fidelity results across a variety of low-level tasks. Specifically, we\nfirst propose a lightweight Task-Plugin module with a dual branch design to\nprovide task-specific priors, guiding the diffusion process in preserving image\ncontent. 
We then propose a Plugin-Selector that can automatically select\ndifferent Task-Plugins based on the text instruction, allowing users to edit\nimages by indicating multiple low-level tasks with natural language. We conduct\nextensive experiments on 8 low-level vision tasks. The results demonstrate the\nsuperiority of Diff-Plugin over existing methods, particularly in real-world\nscenarios. Our ablations further validate that Diff-Plugin is stable,\nschedulable, and supports robust training across different dataset sizes.", "keywords": ["Low-level vision"], "authors_list": ["Yuhao Liu", "Zhanghan Ke", "Fang Liu", "Nanxuan Zhao", "Rynson W.H. Lau"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f733"}, "filepath": "data/2403.07630.png", "tags": [], "_media_type": "image", "_rand": 0.9997343152664869, "arXiv_link": "https://arxiv.org/abs/2403.07630", "other_link": "https://github.com/Barrett-python/CPAL.", "title": "Hunting Attributes: Context Prototype-Aware Learning for Weakly Supervised Semantic Segmentation", "abstract": "Recent weakly supervised semantic segmentation (WSSS) methods strive to\nincorporate contextual knowledge to improve the completeness of class\nactivation maps (CAM). In this work, we argue that the knowledge bias between\ninstances and contexts affects the capability of the prototype to sufficiently\nunderstand instance semantics. Inspired by prototype learning theory, we\npropose leveraging prototype awareness to capture diverse and fine-grained\nfeature attributes of instances. The hypothesis is that contextual prototypes\nmight erroneously activate similar and frequently co-occurring object\ncategories due to this knowledge bias. Therefore, we propose to enhance the\nprototype representation ability by mitigating the bias to better capture\nspatial coverage in semantic object regions. With this goal, we present a\nContext Prototype-Aware Learning (CPAL) strategy, which leverages semantic\ncontext to enrich instance comprehension. The core of this method is to\naccurately capture intra-class variations in object features through\ncontext-aware prototypes, facilitating the adaptation to the semantic\nattributes of various instances. We design feature distribution alignment to\noptimize prototype awareness, aligning instance feature distributions with\ndense features. In addition, a unified training framework is proposed to\ncombine label-guided classification supervision and prototypes-guided\nself-supervision. Experimental results on PASCAL VOC 2012 and MS COCO 2014 show\nthat CPAL significantly improves off-the-shelf methods and achieves\nstate-of-the-art performance. 
The project is available at\nhttps://github.com/Barrett-python/CPAL.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Feilong Tang", "Zhongxing Xu", "Zhaojun QU", "Wei Feng", "xingjian jiang", "Zongyuan Ge"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f734"}, "filepath": "data/2404.06542.png", "tags": [], "_media_type": "image", "_rand": 0.999396024520279, "arXiv_link": "https://arxiv.org/abs/2404.06542", "other_link": "", "title": "Training-Free Open-Vocabulary Segmentation with Offline Diffusion-Augmented Prototype Generation", "abstract": "Open-vocabulary semantic segmentation aims at segmenting arbitrary categories\nexpressed in textual form. Previous works have trained over large amounts of\nimage-caption pairs to enforce pixel-level multimodal alignments. However,\ncaptions provide global information about the semantics of a given image but\nlack direct localization of individual concepts. Further, training on\nlarge-scale datasets inevitably brings significant computational costs. In this\npaper, we propose FreeDA, a training-free diffusion-augmented method for\nopen-vocabulary semantic segmentation, which leverages the ability of diffusion\nmodels to visually localize generated concepts and local-global similarities to\nmatch class-agnostic regions with semantic classes. Our approach involves an\noffline stage in which textual-visual reference embeddings are collected,\nstarting from a large set of captions and leveraging visual and semantic\ncontexts. At test time, these are queried to support the visual matching\nprocess, which is carried out by jointly considering class-agnostic regions and\nglobal semantic similarities. Extensive analyses demonstrate that FreeDA\nachieves state-of-the-art performance on five datasets, surpassing previous\nmethods by more than 7.0 average points in terms of mIoU and without requiring\nany training.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Luca Barsellotti", "Roberto Amoroso", "Marcella Cornia", "Lorenzo Baraldi", "Rita Cucchiara"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f735"}, "filepath": "data/2312.05208.png", "tags": [], "_media_type": "image", "_rand": 0.9995752140180604, "arXiv_link": "https://arxiv.org/abs/2312.05208", "other_link": "", "title": "ControlRoom3D: Room Generation using Semantic Controls", "abstract": "Manually creating 3D environments for AR/VR applications is a complex process\nrequiring expert knowledge in 3D modeling software. Pioneering works facilitate\nthis process by generating room meshes conditioned on textual style\ndescriptions. Yet, many of these automatically generated 3D meshes do not\nadhere to typical room layouts, compromising their plausibility, e.g., by\nplacing several beds in one bedroom. To address these challenges, we present\nControlRoom3D, a novel method to generate high-quality room meshes. Central to\nour approach is a user-defined 3D semantic proxy room that outlines a rough\nroom layout based on semantic bounding boxes and a textual description of the\noverall room style. 
Our key insight is that when rendered to 2D, this 3D\nrepresentation provides valuable geometric and semantic information to control\npowerful 2D models to generate 3D consistent textures and geometry that aligns\nwell with the proxy room. Backed up by an extensive study including\nquantitative metrics and qualitative user evaluations, our method generates\ndiverse and globally plausible 3D room meshes, thus empowering users to design\n3D rooms effortlessly without specialized knowledge.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Jonas Schult", "Sam Tsai", "Lukas H\u00f6llein", "Bichen Wu", "Jialiang Wang", "Chih-Yao Ma", "Kunpeng Li", "Xiaofang Wang", "Felix Wimbauer", "Zijian He", "Peizhao Zhang", "Bastian Leibe", "Peter Vajda", "Ji Hou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f736"}, "filepath": "data/2402.18102.png", "tags": [], "_media_type": "image", "_rand": 0.9992738883469319, "arXiv_link": "https://arxiv.org/abs/2402.18102", "other_link": "", "title": "Passive Snapshot Coded Aperture Dual-Pixel RGB-D Imaging", "abstract": "Passive, compact, single-shot 3D sensing is useful in many application areas\nsuch as microscopy, medical imaging, surgical navigation, and autonomous\ndriving where form factor, time, and power constraints can exist. Obtaining\nRGB-D scene information over a short imaging distance, in an ultra-compact form\nfactor, and in a passive, snapshot manner is challenging. Dual-pixel (DP)\nsensors are a potential solution to achieve the same. DP sensors collect light\nrays from two different halves of the lens in two interleaved pixel arrays,\nthus capturing two slightly different views of the scene, like a stereo camera\nsystem. However, imaging with a DP sensor implies that the defocus blur size is\ndirectly proportional to the disparity seen between the views. This creates a\ntrade-off between disparity estimation vs. deblurring accuracy. To improve this\ntrade-off effect, we propose CADS (Coded Aperture Dual-Pixel Sensing), in which\nwe use a coded aperture in the imaging lens along with a DP sensor. In our\napproach, we jointly learn an optimal coded pattern and the reconstruction\nalgorithm in an end-to-end optimization setting. Our resulting CADS imaging\nsystem demonstrates improvement of >1.5dB PSNR in all-in-focus (AIF) estimates\nand 5-6% in depth estimation quality over naive DP sensing for a wide range of\naperture settings. Furthermore, we build the proposed CADS prototypes for DSLR\nphotography settings and in an endoscope and a dermoscope form factor. 
Our\nnovel coded dual-pixel sensing approach demonstrates accurate RGB-D\nreconstruction results in simulations and real-world experiments in a passive,\nsnapshot, and compact manner.", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Bhargav Ghanekar", "Salman Siddique Khan", "Pranav Sharma", "Shreyas Singh", "Vivek Boominathan", "Kaushik Mitra", "Ashok Veeraraghavan"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f737"}, "filepath": "data/2402.14000v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994759404701133, "arXiv_link": "https://arxiv.org/html/2402.14000v1", "other_link": "", "title": "Real-time 3D-aware Portrait Video Relighting", "abstract": "This work presents 3DPE, a practical tool that can efficiently edit a face\nimage following given prompts, like reference images or text descriptions, in\nthe 3D-aware manner. To this end, a lightweight module is distilled from a 3D\nportrait generator and a text-to-image model, which provide prior knowledge of\nface geometry and open-vocabulary editing capability, respectively. Such a\ndesign brings two compelling advantages over existing approaches. First, our\nsystem achieves real-time editing with a feedforward network (i.e., ~0.04s per\nimage), over 100x faster than the second competitor. Second, thanks to the\npowerful priors, our module could focus on the learning of editing-related\nvariations, such that it manages to handle various types of editing\nsimultaneously in the training phase and further supports fast adaptation to\nuser-specified novel types of editing during inference (e.g., with ~5min\nfine-tuning per case). The code, the model, and the interface will be made\npublicly available to facilitate future research.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Ziqi Cai", "Kaiwen Jiang", "Shu-Yu Chen", "Yu-Kun Lai", "Hongbo Fu", "Boxin Shi", "Lin Gao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f738"}, "filepath": "data/2312.07920.png", "tags": [], "_media_type": "image", "_rand": 0.9990656704456551, "arXiv_link": "https://arxiv.org/abs/2312.07920", "other_link": "https://github.com/VDIGPKU/DrivingGaussian.", "title": "DrivingGaussian: Composite Gaussian Splatting for Surrounding Dynamic Autonomous Driving Scenes", "abstract": "We present DrivingGaussian, an efficient and effective framework for\nsurrounding dynamic autonomous driving scenes. For complex scenes with moving\nobjects, we first sequentially and progressively model the static background of\nthe entire scene with incremental static 3D Gaussians. We then leverage a\ncomposite dynamic Gaussian graph to handle multiple moving objects,\nindividually reconstructing each object and restoring their accurate positions\nand occlusion relationships within the scene. We further use a LiDAR prior for\nGaussian Splatting to reconstruct scenes with greater details and maintain\npanoramic consistency. DrivingGaussian outperforms existing methods in dynamic\ndriving scene reconstruction and enables photorealistic surround-view synthesis\nwith high-fidelity and multi-camera consistency. 
Our project page is at:\nhttps://github.com/VDIGPKU/DrivingGaussian.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Xiaoyu Zhou", "Zhiwei Lin", "Xiaojun Shan", "Yongtao Wang", "Deqing Sun", "Ming-Hsuan Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f739"}, "filepath": "data/2403.04245.png", "tags": [], "_media_type": "image", "_rand": 0.99976227559029, "arXiv_link": "https://arxiv.org/abs/2403.04245", "other_link": "https://github.com/dalision/ModalBiasAVSR", "title": "A Study of Dropout-Induced Modality Bias on Robustness to Missing Video Frames for Audio-Visual Speech Recognition", "abstract": "Advanced Audio-Visual Speech Recognition (AVSR) systems have been observed to\nbe sensitive to missing video frames, performing even worse than\nsingle-modality models. While applying the dropout technique to the video\nmodality enhances robustness to missing frames, it simultaneously results in a\nperformance loss when dealing with complete data input. In this paper, we\ninvestigate this contrasting phenomenon from the perspective of modality bias\nand reveal that an excessive modality bias on the audio caused by dropout is\nthe underlying reason. Moreover, we present the Modality Bias Hypothesis (MBH)\nto systematically describe the relationship between modality bias and\nrobustness against missing modality in multimodal systems. Building on these\nfindings, we propose a novel Multimodal Distribution Approximation with\nKnowledge Distillation (MDA-KD) framework to reduce over-reliance on the audio\nmodality and to maintain performance and robustness simultaneously. Finally, to\naddress an entirely missing modality, we adopt adapters to dynamically switch\ndecision strategies. The effectiveness of our proposed approach is evaluated\nand validated through a series of comprehensive experiments using the MISP2021\nand MISP2022 datasets. Our code is available at\nhttps://github.com/dalision/ModalBiasAVSR", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Yusheng Dai", "HangChen", "Jun Du", "Ruoyu Wang", "shihao chen", "Haotian Wang", "Chin-Hui Lee"], "category_name": "Sound", "all_categories": ["Sound", "Computer Vision and Pattern Recognition", "Machine Learning", "Multimedia", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f73a"}, "filepath": "data/2401.17603.png", "tags": [], "_media_type": "image", "_rand": 0.9995013860544726, "arXiv_link": "https://arxiv.org/abs/2401.17603", "other_link": "", "title": "GPLD3D: Latent Diffusion of 3D Shape Generative Models by Enforcing Geometric and Physical Priors", "abstract": "We introduce a new generative model that combines latent diffusion with\npersistent homology to create 3D shapes with high diversity, with a special\nemphasis on their topological characteristics. Our method involves representing\n3D shapes as implicit fields, then employing persistent homology to extract\ntopological features, including Betti numbers and persistence diagrams. The\nshape generation process consists of two steps. Initially, we employ a\ntransformer-based autoencoding module to embed the implicit representation of\neach 3D shape into a set of latent vectors. Subsequently, we navigate through\nthe learned latent space via a diffusion model. 
By strategically incorporating\ntopological features into the diffusion process, our generative module is able\nto produce a richer variety of 3D shapes with different topological structures.\nFurthermore, our framework is flexible, supporting generation tasks constrained\nby a variety of inputs, including sparse and partial point clouds, as well as\nsketches. By modifying the persistence diagrams, we can alter the topology of\nthe shapes generated from these input modalities.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yuan Dong", "Qi Zuo", "Xiaodong Gu", "Weihao Yuan", "zhengyi zhao", "Zilong Dong", "Liefeng Bo", "Qixing Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f73b"}, "filepath": "data/2403.19473.png", "tags": [], "_media_type": "image", "_rand": 0.999450038440779, "arXiv_link": "https://arxiv.org/abs/2403.19473", "other_link": "", "title": "Benchmarking Implicit Neural Representation and Geometric Rendering in Real-Time RGB-D SLAM", "abstract": "Implicit neural representation (INR), in combination with geometric\nrendering, has recently been employed in real-time dense RGB-D SLAM. Despite\nactive research endeavors being made, there lacks a unified protocol for fair\nevaluation, impeding the evolution of this area. In this work, we establish, to\nour knowledge, the first open-source benchmark framework to evaluate the\nperformance of a wide spectrum of commonly used INRs and rendering functions\nfor mapping and localization. The goal of our benchmark is to 1) gain an\nintuition of how different INRs and rendering functions impact mapping and\nlocalization and 2) establish a unified evaluation protocol w.r.t. the design\nchoices that may impact the mapping and localization. With the framework, we\nconduct a large suite of experiments, offering various insights in choosing the\nINRs and geometric rendering functions: for example, the dense feature grid\noutperforms other INRs (e.g. tri-plane and hash grid), even when geometric and\ncolor features are jointly encoded for memory efficiency. To extend the\nfindings into the practical scenario, a hybrid encoding strategy is proposed to\nbring the best of the accuracy and completion from the grid-based and\ndecomposition-based INRs. We further propose explicit hybrid encoding for\nhigh-fidelity dense grid mapping to comply with the RGB-D SLAM system that puts\nthe premise on robustness and computation efficiency.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Tongyan Hua", "Addison, Lin Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f73c"}, "filepath": "data/2403.16376.png", "tags": [], "_media_type": "image", "_rand": 0.9993389557466001, "arXiv_link": "http://export.arxiv.org/abs/2403.16376", "other_link": "", "title": "Elite360D: Towards Efficient 360 Depth Estimation via Semantic- and Distance-Aware Bi-Projection Fusion", "abstract": "360 depth estimation has recently received great attention for 3D\nreconstruction owing to its omnidirectional field of view (FoV). 
Recent\napproaches are predominantly focused on cross-projection fusion with\ngeometry-based re-projection: they fuse 360 images with equirectangular\nprojection (ERP) and another projection type, e.g., cubemap projection to\nestimate depth with the ERP format. However, these methods suffer from 1)\nlimited local receptive fields, making it hardly possible to capture large FoV\nscenes, and 2) prohibitive computational cost, caused by the complex\ncross-projection fusion module design. In this paper, we propose Elite360D, a\nnovel framework that inputs the ERP image and icosahedron projection (ICOSAP)\npoint set, which is undistorted and spatially continuous. Elite360D is superior\nin its capacity in learning a representation from a local-with-global\nperspective. With a flexible ERP image encoder, it includes an ICOSAP point\nencoder, and a Bi-projection Bi-attention Fusion (B2F) module (totally ~1M\nparameters). Specifically, the ERP image encoder can take various perspective\nimage-trained backbones (e.g., ResNet, Transformer) to extract local features.\nThe point encoder extracts the global features from the ICOSAP. Then, the B2F\nmodule captures the semantic- and distance-aware dependencies between each\npixel of the ERP feature and the entire ICOSAP feature set. Without specific\nbackbone design and obvious computational cost increase, Elite360D outperforms\nthe prior arts on several benchmark datasets.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Hao Ai", "Addison, Lin Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f73d"}, "filepath": "data/2403.14082.png", "tags": [], "_media_type": "image", "_rand": 0.9999612445876602, "arXiv_link": "https://arxiv.org/abs/2403.14082", "other_link": "", "title": "EventDance: Unsupervised Cross-modal Source-free Adaptation for Event-based Object Recognition", "abstract": "In this paper, we make the first attempt at achieving the cross-modal (i.e.,\nimage-to-events) adaptation for event-based object recognition without\naccessing any labeled source image data owning to privacy and commercial\nissues. Tackling this novel problem is non-trivial due to the novelty of event\ncameras and the distinct modality gap between images and events. In particular,\nas only the source model is available, a hurdle is how to extract the knowledge\nfrom the source model by only using the unlabeled target event data while\nachieving knowledge transfer. To this end, we propose a novel framework, dubbed\nEventDance for this unsupervised source-free cross-modal adaptation problem.\nImportantly, inspired by event-to-video reconstruction methods, we propose a\nreconstruction-based modality bridging (RMB) module, which reconstructs\nintensity frames from events in a self-supervised manner. This makes it\npossible to build up the surrogate images to extract the knowledge (i.e.,\nlabels) from the source model. We then propose a multi-representation knowledge\nadaptation (MKA) module that transfers the knowledge to target models learning\nevents with multiple representation types for fully exploring the\nspatiotemporal information of events. 
The two modules connecting the source and\ntarget models are mutually updated so as to achieve the best performance.\nExperiments on three benchmark datasets with two adaption settings show that\nEventDance is on par with prior methods utilizing the source data.", "keywords": [], "authors_list": ["Xu Zheng", "Addison, Lin Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f73e"}, "filepath": "data/2403.16370.png", "tags": [], "_media_type": "image", "_rand": 0.999944469240285, "arXiv_link": "https://arxiv.org/abs/2403.16370", "other_link": "", "title": "GoodSAM: Bridging Domain and Capacity Gaps via Segment Anything Model for Distortion-aware Panoramic Semantic Segmentation", "abstract": "This paper tackles a novel yet challenging problem: how to transfer knowledge\nfrom the emerging Segment Anything Model (SAM) -- which reveals impressive\nzero-shot instance segmentation capacity -- to learn a compact panoramic\nsemantic segmentation model, i.e., student, without requiring any labeled data.\nThis poses considerable challenges due to SAM's inability to provide semantic\nlabels and the large capacity gap between SAM and the student. To this end, we\npropose a novel framework, called GoodSAM, that introduces a teacher assistant\n(TA) to provide semantic information, integrated with SAM to generate ensemble\nlogits to achieve knowledge transfer. Specifically, we propose a\nDistortion-Aware Rectification (DAR) module that first addresses the distortion\nproblem of panoramic images by imposing prediction-level consistency and\nboundary enhancement. This subtly enhances TA's prediction capacity on\npanoramic images. DAR then incorporates a cross-task complementary fusion block\nto adaptively merge the predictions of SAM and TA to obtain more reliable\nensemble logits. Moreover, we introduce a Multi-level Knowledge Adaptation\n(MKA) module to efficiently transfer the multi-level feature knowledge from TA\nand ensemble logits to learn a compact student model. Extensive experiments on\ntwo benchmarks show that our GoodSAM achieves a remarkable +3.75\\% mIoU\nimprovement over the state-of-the-art (SOTA) domain adaptation methods. Also,\nour most lightweight model achieves comparable performance to the SOTA methods\nwith only 3.7M parameters.", "keywords": ["Efficient and scalable vision"], "authors_list": ["WEIMING ZHANG", "Yexin Liu", "Xu Zheng", "Addison, Lin Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f73f"}, "filepath": "data/2403.12534.png", "tags": [], "_media_type": "image", "_rand": 0.999479970325625, "arXiv_link": "https://arxiv.org/abs/2403.12534", "other_link": "", "title": "ExACT: Language-guided Conceptual Reasoning and Uncertainty Estimation for Event-based Action Recognition and More", "abstract": "Event cameras have recently been shown beneficial for practical vision tasks,\nsuch as action recognition, thanks to their high temporal resolution, power\nefficiency, and reduced privacy concerns. 
However, current research is hindered\nby 1) the difficulty in processing events because of their prolonged duration\nand dynamic actions with complex and ambiguous semantics and 2) the redundant\naction depiction of the event frame representation with fixed stacks. We find\nlanguage naturally conveys abundant semantic information, rendering it\nstunningly superior in reducing semantic uncertainty. In light of this, we\npropose ExACT, a novel approach that, for the first time, tackles event-based\naction recognition from a cross-modal conceptualizing perspective. Our ExACT\nbrings two technical contributions. Firstly, we propose an adaptive\nfine-grained event (AFE) representation to adaptively filter out the repeated\nevents for the stationary objects while preserving dynamic ones. This subtly\nenhances the performance of ExACT without extra computational cost. Then, we\npropose a conceptual reasoning-based uncertainty estimation module, which\nsimulates the recognition process to enrich the semantic representation. In\nparticular, conceptual reasoning builds the temporal relation based on the\naction semantics, and uncertainty estimation tackles the semantic uncertainty\nof actions based on the distributional representation. Experiments show that\nour ExACT achieves superior recognition accuracy of 94.83%(+2.23%),\n90.10%(+37.47%) and 67.24% on PAF, HARDVS and our SeAct datasets respectively.", "keywords": ["Efficient and scalable vision", "Deep learning architectures and techniques"], "authors_list": ["Jiazhou Zhou", "Xu Zheng", "Yuanhuiyi Lyu", "Addison, Lin Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f740"}, "filepath": "data/2403.12505.png", "tags": [], "_media_type": "image", "_rand": 0.9992183067623817, "arXiv_link": "https://arxiv.org/abs/2403.12505", "other_link": "", "title": "Semantics, Distortion, and Style Matter: Towards Source-free UDA for Panoramic Segmentation", "abstract": "This paper addresses an interesting yet challenging problem -- source-free\nunsupervised domain adaptation (SFUDA) for pinhole-to-panoramic semantic\nsegmentation -- given only a pinhole image-trained model (i.e., source) and\nunlabeled panoramic images (i.e., target). Tackling this problem is nontrivial\ndue to the semantic mismatches, style discrepancies, and inevitable distortion\nof panoramic images. To this end, we propose a novel method that utilizes\nTangent Projection (TP) as it has less distortion and meanwhile slits the\nequirectangular projection (ERP) with a fixed FoV to mimic the pinhole images.\nBoth projections are shown effective in extracting knowledge from the source\nmodel. However, the distinct projection discrepancies between source and target\ndomains impede the direct knowledge transfer; thus, we propose a panoramic\nprototype adaptation module (PPAM) to integrate panoramic prototypes from the\nextracted knowledge for adaptation. We then impose the loss constraints on both\npredictions and prototypes and propose a cross-dual attention module (CDAM) at\nthe feature level to better align the spatial and channel characteristics\nacross the domains and projections. Both knowledge extraction and transfer\nprocesses are synchronously updated to reach the best performance. 
Extensive\nexperiments on the synthetic and real-world benchmarks, including outdoor and\nindoor scenarios, demonstrate that our method achieves significantly better\nperformance than prior SFUDA methods for pinhole-to-panoramic adaptation.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Xu Zheng", "Pengyuan Zhou", "ATHANASIOS", "Addison, Lin Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f741"}, "filepath": "data/2405.16108.png", "tags": [], "_media_type": "image", "_rand": 0.9998891500860384, "arXiv_link": "https://arxiv.org/abs/2405.16108", "other_link": "", "title": "UniBind: LLM-Augmented Unified and Balanced Representation Space to Bind Them All", "abstract": "Research on multi-modal learning dominantly aligns the modalities in a\nunified space at training, and only a single one is taken for prediction at\ninference. However, for a real machine, e.g., a robot, sensors could be added\nor removed at any time. Thus, it is crucial to enable the machine to tackle the\nmismatch and unequal-scale problems of modality combinations between training\nand inference. In this paper, we tackle these problems from a new perspective:\n\"Modalities Help Modalities\". Intuitively, we present OmniBind, a novel\ntwo-stage learning framework that can achieve any modality combinations and\ninteraction. It involves teaching data-constrained, a.k.a, student, modalities\nto be aligned with the well-trained data-abundant, a.k.a, teacher, modalities.\nThis subtly enables the adaptive fusion of any modalities to build a unified\nrepresentation space for any combinations. Specifically, we propose Cross-modal\nAlignment Distillation (CAD) to address the unequal-scale problem between\nstudent and teacher modalities and effectively align student modalities into\nthe teacher modalities' representation space in stage one. We then propose an\nAdaptive Fusion (AF) module to fuse any modality combinations and learn a\nunified representation space in stage two. To address the mismatch problem, we\naggregate existing datasets and combine samples from different modalities by\nthe same semantics. This way, we build the first dataset for training and\nevaluation that consists of teacher (image, text) and student (touch, thermal,\nevent, point cloud, audio) modalities and enables omni-bind for any of them.\nExtensive experiments on the recognition task show performance gains over prior\narts by an average of 4.05 % on the arbitrary modality combination setting. 
It\nalso achieves state-of-the-art performance for a single modality, e.g., touch,\nwith a 4.34 % gain.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yuanhuiyi Lyu", "Xu Zheng", "Jiazhou Zhou", "Addison, Lin Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f742"}, "filepath": "data/2311.17951.png", "tags": [], "_media_type": "image", "_rand": 0.9991819614660792, "arXiv_link": "https://arxiv.org/abs/2311.17951", "other_link": "", "title": "C3Net: Compound Conditioned ControlNet for Multimodal Content Generation", "abstract": "We present Compound Conditioned ControlNet, C3Net, a novel generative neural\narchitecture taking conditions from multiple modalities and synthesizing\nmultimodal contents simultaneously (e.g., image, text, audio). C3Net adapts the\nControlNet architecture to jointly train and make inferences on a\nproduction-ready diffusion model and its trainable copies. Specifically, C3Net\nfirst aligns the conditions from multi-modalities to the same semantic latent\nspace using modality-specific encoders based on contrastive training. Then, it\ngenerates multimodal outputs based on the aligned latent space, whose semantic\ninformation is combined using a ControlNet-like architecture called Control\nC3-UNet. Correspondingly, with this system design, our model offers an improved\nsolution for joint-modality generation through learning and explaining\nmultimodal conditions instead of simply taking linear interpolations on the\nlatent space. Meanwhile, as we align conditions to a unified latent space,\nC3Net only requires one trainable Control C3-UNet to work on multimodal\nsemantic information. Furthermore, our model employs unimodal pretraining on\nthe condition alignment stage, outperforming the non-pretrained alignment even\non relatively scarce training data and thus demonstrating high-quality compound\ncondition generation. We contribute the first high-quality tri-modal validation\nset to validate quantitatively that C3Net outperforms or is on par with first\nand contemporary state-of-the-art multimodal generation. Our codes and\ntri-modal dataset will be released.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Juntao Zhang", "Yuehuai LIU", "Yu-Wing Tai", "Chi-Keung Tang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f743"}, "filepath": "data/2404.00834v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995124588178457, "arXiv_link": "https://arxiv.org/abs/2404.00834v1", "other_link": "https://vlislab22.github.io/eg-lowlight/.", "title": "Towards Robust Event-guided Low-Light Image Enhancement: A Large-Scale Real-World Event-Image Dataset and Novel Approach", "abstract": "Event camera has recently received much attention for low-light image\nenhancement (LIE) thanks to their distinct advantages, such as high dynamic\nrange. However, current research is prohibitively restricted by the lack of\nlarge-scale, real-world, and spatial-temporally aligned event-image datasets.\nTo this end, we propose a real-world (indoor and outdoor) dataset comprising\nover 30K pairs of images and events under both low and normal illumination\nconditions. 
To achieve this, we utilize a robotic arm that traces a consistent\nnon-linear trajectory to curate the dataset with spatial alignment precision\nunder 0.03mm. We then introduce a matching alignment strategy, rendering 90% of\nour dataset with errors less than 0.01s. Based on the dataset, we propose a\nnovel event-guided LIE approach, called EvLight, towards robust performance in\nreal-world low-light scenes. Specifically, we first design the multi-scale\nholistic fusion branch to extract holistic structural and textural information\nfrom both events and images. To ensure robustness against variations in the\nregional illumination and noise, we then introduce a Signal-to-Noise-Ratio\n(SNR)-guided regional feature selection to selectively fuse features of images\nfrom regions with high SNR and enhance those with low SNR by extracting\nregional structure information from events. Extensive experiments on our\ndataset and the synthetic SDSD dataset demonstrate our EvLight significantly\nsurpasses the frame-based methods. Code and datasets are available at\nhttps://vlislab22.github.io/eg-lowlight/.", "keywords": ["Low-level vision"], "authors_list": ["Guoqiang Liang", "Kanghao Chen", "Hangyu Li", "Yunfan Lu", "Addison, Lin Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f744"}, "filepath": "data/2403.00567.png", "tags": [], "_media_type": "image", "_rand": 0.9990628889152257, "arXiv_link": "https://arxiv.org/abs/2403.00567", "other_link": "", "title": "Flatten Long-Range Loss Landscapes for Cross-Domain Few-Shot Learning", "abstract": "Cross-domain few-shot learning (CDFSL) aims to acquire knowledge from limited\ntraining data in the target domain by leveraging prior knowledge transferred\nfrom source domains with abundant training samples. CDFSL faces challenges in\ntransferring knowledge across dissimilar domains and fine-tuning models with\nlimited training data. To address these challenges, we initially extend the\nanalysis of loss landscapes from the parameter space to the representation\nspace, which allows us to simultaneously interpret the transferring and\nfine-tuning difficulties of CDFSL models. We observe that sharp minima in the\nloss landscapes of the representation space result in representations that are\nhard to transfer and fine-tune. Moreover, existing flatness-based methods have\nlimited generalization ability due to their short-range flatness. To enhance\nthe transferability and facilitate fine-tuning, we introduce a simple yet\neffective approach to achieve long-range flattening of the minima in the loss\nlandscape. This approach considers representations that are differently\nnormalized as minima in the loss landscape and flattens the high-loss region in\nthe middle by randomly sampling interpolated representations. We implement this\nmethod as a new normalization layer that replaces the original one in both CNNs\nand ViTs. This layer is simple and lightweight, introducing only a minimal\nnumber of additional parameters. Experimental results on 8 datasets demonstrate\nthat our approach outperforms state-of-the-art methods in terms of average\naccuracy. Moreover, our method achieves performance improvements of up to 9\\%\ncompared to the current best approaches on individual datasets. 
Our code will\nbe released.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yixiong Zou", "Yicong Liu", "Yiman Hu", "Yuhua Li", "Ruixuan Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f745"}, "filepath": "data/2312.06354.png", "tags": [], "_media_type": "image", "_rand": 0.9996520255709209, "arXiv_link": "https://arxiv.org/abs/2312.06354", "other_link": "", "title": "PortraitBooth: A Versatile Portrait Model for Fast Identity-preserved Personalization", "abstract": "Recent advancements in personalized image generation using diffusion models\nhave been noteworthy. However, existing methods suffer from inefficiencies due\nto the requirement for subject-specific fine-tuning. This computationally\nintensive process hinders efficient deployment, limiting practical usability.\nMoreover, these methods often grapple with identity distortion and limited\nexpression diversity. In light of these challenges, we propose PortraitBooth,\nan innovative approach designed for high efficiency, robust identity\npreservation, and expression-editable text-to-image generation, without the\nneed for fine-tuning. PortraitBooth leverages subject embeddings from a face\nrecognition model for personalized image generation without fine-tuning. It\neliminates computational overhead and mitigates identity distortion. The\nintroduced dynamic identity preservation strategy further ensures close\nresemblance to the original image identity. Moreover, PortraitBooth\nincorporates emotion-aware cross-attention control for diverse facial\nexpressions in generated images, supporting text-driven expression editing. Its\nscalability enables efficient and high-quality image creation, including\nmulti-subject generation. Extensive results demonstrate superior performance\nover other state-of-the-art methods in both single and multiple image\ngeneration scenarios.", "keywords": ["Image and video generation and manipulation", "Efficient and scalable vision"], "authors_list": ["Xu Peng", "Junwei Zhu", "Boyuan Jiang", "Ying Tai", "Donghao Luo", "Jiangning Zhang", "Wei Lin", "Taisong Jin", "Chengjie Wang", "Rongrong Ji"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f746"}, "filepath": "data/2312.16649.png", "tags": [], "_media_type": "image", "_rand": 0.9992991837946111, "arXiv_link": "https://arxiv.org/abs/2312.16649", "other_link": "", "title": "Forgery-aware Adaptive Transformer for Generalizable Synthetic Image Detection", "abstract": "In this paper, we study the problem of generalizable synthetic image\ndetection, aiming to detect forgery images from diverse generative methods,\ne.g., GANs and diffusion models. Cutting-edge solutions start to explore the\nbenefits of pre-trained models, and mainly follow the fixed paradigm of solely\ntraining an attached classifier, e.g., combining frozen CLIP-ViT with a\nlearnable linear layer in UniFD. However, our analysis shows that such a fixed\nparadigm is prone to yield detectors with insufficient learning regarding\nforgery representations. We attribute the key challenge to the lack of forgery\nadaptation, and present a novel forgery-aware adaptive transformer approach,\nnamely FatFormer. 
Based on the pre-trained vision-language spaces of CLIP,\nFatFormer introduces two core designs for the adaption to build generalized\nforgery representations. First, motivated by the fact that both image and\nfrequency analysis are essential for synthetic image detection, we develop a\nforgery-aware adapter to adapt image features to discern and integrate local\nforgery traces within image and frequency domains. Second, we find that\nconsidering the contrastive objectives between adapted image features and text\nprompt embeddings, a previously overlooked aspect, results in a nontrivial\ngeneralization improvement. Accordingly, we introduce language-guided alignment\nto supervise the forgery adaptation with image and text prompts in FatFormer.\nExperiments show that, by coupling these two designs, our approach tuned on\n4-class ProGAN data attains a remarkable detection performance, achieving an\naverage of 98% accuracy to unseen GANs, and surprisingly generalizes to unseen\ndiffusion models with 95% accuracy.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Huan Liu", "Zichang Tan", "Chuangchuang Tan", "Yunchao Wei", "Jingdong Wang", "Yao Zhao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f747"}, "filepath": "data/2404.00521.png", "tags": [], "_media_type": "image", "_rand": 0.9999162072463714, "arXiv_link": "https://arxiv.org/abs/2404.00521", "other_link": "https://github.com/MaxwellYaoNi/CHAIN", "title": "CHAIN: Enhancing Generalization in Data-Efficient GANs via lipsCHitz continuity constrAIned Normalization", "abstract": "Generative Adversarial Networks (GANs) significantly advanced image\ngeneration but their performance heavily depends on abundant training data. In\nscenarios with limited data, GANs often struggle with discriminator overfitting\nand unstable training. Batch Normalization (BN), despite being known for\nenhancing generalization and training stability, has rarely been used in the\ndiscriminator of Data-Efficient GANs. Our work addresses this gap by\nidentifying a critical flaw in BN: the tendency for gradient explosion during\nthe centering and scaling steps. To tackle this issue, we present CHAIN\n(lipsCHitz continuity constrAIned Normalization), which replaces the\nconventional centering step with zero-mean regularization and integrates a\nLipschitz continuity constraint in the scaling step. CHAIN further enhances GAN\ntraining by adaptively interpolating the normalized and unnormalized features,\neffectively avoiding discriminator overfitting. Our theoretical analyses firmly\nestablishes CHAIN's effectiveness in reducing gradients in latent features and\nweights, improving stability and generalization in GAN training. Empirical\nevidence supports our theory. CHAIN achieves state-of-the-art results in\ndata-limited scenarios on CIFAR-10/100, ImageNet, five low-shot and seven\nhigh-resolution few-shot image datasets. 
Code:\nhttps://github.com/MaxwellYaoNi/CHAIN", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yao Ni", "Piotr Koniusz"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f748"}, "filepath": "data/2312.00600.png", "tags": [], "_media_type": "image", "_rand": 0.9997689364754107, "arXiv_link": "https://arxiv.org/abs/2312.00600", "other_link": "https://github.com/maorong-wang/CCL-DC.", "title": "Improving Plasticity in Online Continual Learning via Collaborative Learning", "abstract": "Online Continual Learning (CL) solves the problem of learning the\never-emerging new classification tasks from a continuous data stream. Unlike\nits offline counterpart, in online CL, the training data can only be seen once.\nMost existing online CL research regards catastrophic forgetting (i.e., model\nstability) as almost the only challenge. In this paper, we argue that the\nmodel's capability to acquire new knowledge (i.e., model plasticity) is another\nchallenge in online CL. While replay-based strategies have been shown to be\neffective in alleviating catastrophic forgetting, there is a notable gap in\nresearch attention toward improving model plasticity. To this end, we propose\nCollaborative Continual Learning (CCL), a collaborative learning based strategy\nto improve the model's capability in acquiring new concepts. Additionally, we\nintroduce Distillation Chain (DC), a collaborative learning scheme to boost the\ntraining of the models. We adapt CCL-DC to existing representative online CL\nworks. Extensive experiments demonstrate that even if the learners are\nwell-trained with state-of-the-art online CL methods, our strategy can still\nimprove model plasticity dramatically, and thereby improve the overall\nperformance by a large margin. The source code of our work is available at\nhttps://github.com/maorong-wang/CCL-DC.", "keywords": [], "authors_list": ["Maorong Wang", "Nicolas Michel", "Ling Xiao", "Toshihiko Yamasaki"], "category_name": "Machine Learning", "all_categories": ["Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f749"}, "filepath": "data/2312.05752.png", "tags": [], "_media_type": "image", "_rand": 0.9997467702841486, "arXiv_link": "https://arxiv.org/abs/2312.05752", "other_link": "", "title": "Bi-SSC: Geometric-Semantic Bidirectional Fusion for Camera-based 3D Semantic Scene Completion", "abstract": "Semantic scene completion (SSC) aims to predict the semantic occupancy of\neach voxel in the entire 3D scene from limited observations, which is an\nemerging and critical task for autonomous driving. Recently, many studies have\nturned to camera-based SSC solutions due to the richer visual cues and\ncost-effectiveness of cameras. However, existing methods usually rely on\nsophisticated and heavy 3D models to directly process the lifted 3D features\nthat are not discriminative enough for clear segmentation boundaries. In this\npaper, we adopt the dense-sparse-dense design and propose an end-to-end\ncamera-based SSC framework, termed SGN, to diffuse semantics from the semantic-\nand occupancy-aware seed voxels to the whole scene based on geometry prior and\noccupancy information. 
By designing hybrid guidance (sparse semantic and\ngeometry guidance) and effective voxel aggregation for spatial occupancy and\ngeometry priors, we enhance the feature separation between different categories\nand expedite the convergence of semantic diffusion. Extensive experimental\nresults on the SemanticKITTI dataset demonstrate the superiority of our SGN\nover existing state-of-the-art methods.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Yujie Xue", "Ruihui Li", "F anWu", "Zhuo Tang", "Kenli Li", "Duan Mingxing"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f74a"}, "filepath": "data/2403.18775.png", "tags": [], "_media_type": "image", "_rand": 0.9992611282566882, "arXiv_link": "https://arxiv.org/abs/2403.18775", "other_link": "https://github.com/chenshuang-zhang/imagenet_d.", "title": "ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object", "abstract": "We establish rigorous benchmarks for visual perception robustness. Synthetic\nimages such as ImageNet-C, ImageNet-9, and Stylized ImageNet provide specific\ntype of evaluation over synthetic corruptions, backgrounds, and textures, yet\nthose robustness benchmarks are restricted in specified variations and have low\nsynthetic quality. In this work, we introduce generative model as a data source\nfor synthesizing hard images that benchmark deep models' robustness. Leveraging\ndiffusion models, we are able to generate images with more diversified\nbackgrounds, textures, and materials than any prior work, where we term this\nbenchmark as ImageNet-D. Experimental results show that ImageNet-D results in a\nsignificant accuracy drop to a range of vision models, from the standard ResNet\nvisual classifier to the latest foundation models like CLIP and MiniGPT-4,\nsignificantly reducing their accuracy by up to 60\\%. Our work suggests that\ndiffusion models can be an effective source to test vision models. The code and\ndataset are available at https://github.com/chenshuang-zhang/imagenet_d.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Chenshuang Zhang", "Fei Pan", "Junmo Kim", "In So Kweon", "Chengzhi Mao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f74b"}, "filepath": "data/2403.18791.png", "tags": [], "_media_type": "image", "_rand": 0.9992660072507598, "arXiv_link": "https://arxiv.org/abs/2403.18791", "other_link": "https://github.com/Tianfu18/diff-feats-pose.", "title": "Object Pose Estimation via the Aggregation of Diffusion Features", "abstract": "Estimating the pose of objects from images is a crucial task of 3D scene\nunderstanding, and recent approaches have shown promising results on very large\nbenchmarks. However, these methods experience a significant performance drop\nwhen dealing with unseen objects. We believe that it results from the limited\ngeneralizability of image features. To address this problem, we have an\nin-depth analysis on the features of diffusion models, e.g. Stable Diffusion,\nwhich hold substantial potential for modeling unseen objects. 
Based on this\nanalysis, we then innovatively introduce these diffusion features for object\npose estimation. To achieve this, we propose three distinct architectures that\ncan effectively capture and aggregate diffusion features of different\ngranularity, greatly improving the generalizability of object pose estimation.\nOur approach outperforms the state-of-the-art methods by a considerable margin\non three popular benchmark datasets, LM, O-LM, and T-LESS. In particular, our\nmethod achieves higher accuracy than the previous best arts on unseen objects:\n98.2% vs. 93.5% on Unseen LM, 85.9% vs. 76.3% on Unseen O-LM, showing the\nstrong generalizability of our method. Our code is released at\nhttps://github.com/Tianfu18/diff-feats-pose.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Tianfu Wang", "Guosheng Hu", "Hongguang Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f74c"}, "filepath": "data/2307.05033.png", "tags": [], "_media_type": "image", "_rand": 0.9993515004527063, "arXiv_link": "https://arxiv.org/abs/2307.05033", "other_link": "https://github.com/Yaozhuwa/EVA-Flow.", "title": "Efficient Meshflow and Optical Flow Estimation from Event Cameras", "abstract": "Optical flow estimation is a fundamental task in the field of autonomous\ndriving. Event cameras are capable of responding to log-brightness changes in\nmicroseconds. Its characteristic of producing responses only to the changing\nregion is particularly suitable for optical flow estimation. In contrast to the\nsuper low-latency response speed of event cameras, existing datasets collected\nvia event cameras, however, only provide limited frame rate optical flow ground\ntruth, (e.g., at 10Hz), greatly restricting the potential of event-driven\noptical flow. To address this challenge, we put forward a high-frame-rate,\nlow-latency event representation Unified Voxel Grid, sequentially fed into the\nnetwork bin by bin. We then propose EVA-Flow, an EVent-based Anytime Flow\nestimation network to produce high-frame-rate event optical flow with only\nlow-frame-rate optical flow ground truth for supervision. The key component of\nour EVA-Flow is the stacked Spatiotemporal Motion Refinement (SMR) module,\nwhich predicts temporally dense optical flow and enhances the accuracy via\nspatial-temporal motion refinement. The time-dense feature warping utilized in\nthe SMR module provides implicit supervision for the intermediate optical flow.\nAdditionally, we introduce the Rectified Flow Warp Loss (RFWL) for the\nunsupervised evaluation of intermediate optical flow in the absence of ground\ntruth. This is, to the best of our knowledge, the first work focusing on\nanytime optical flow estimation via event cameras. A comprehensive variety of\nexperiments on MVSEC, DESC, and our EVA-FlowSet demonstrates that EVA-Flow\nachieves competitive performance, super-low-latency (5ms), fastest inference\n(9.2ms), time-dense motion estimation (200Hz), and strong generalization. 
Our\ncode will be available at https://github.com/Yaozhuwa/EVA-Flow.", "keywords": ["Efficient and scalable vision", "Low-level vision"], "authors_list": ["Xinglong Luo", "Ao Luo", "Zhengning Wang", "Chunyu Lin", "Bing Zeng", "Shuaicheng Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f74d"}, "filepath": "data/2405.02859.png", "tags": [], "_media_type": "image", "_rand": 0.9992665328419752, "arXiv_link": "https://arxiv.org/abs/2405.02859", "other_link": "", "title": "MVIP-NeRF: Multi-view 3D Inpainting on NeRF Scenes via Diffusion Prior", "abstract": "Despite the emergence of successful NeRF inpainting methods built upon\nexplicit RGB and depth 2D inpainting supervisions, these methods are inherently\nconstrained by the capabilities of their underlying 2D inpainters. This is due\nto two key reasons: (i) independently inpainting constituent images results in\nview-inconsistent imagery, and (ii) 2D inpainters struggle to ensure\nhigh-quality geometry completion and alignment with inpainted RGB images.\n To overcome these limitations, we propose a novel approach called MVIP-NeRF\nthat harnesses the potential of diffusion priors for NeRF inpainting,\naddressing both appearance and geometry aspects. MVIP-NeRF performs joint\ninpainting across multiple views to reach a consistent solution, which is\nachieved via an iterative optimization process based on Score Distillation\nSampling (SDS). Apart from recovering the rendered RGB images, we also extract\nnormal maps as a geometric representation and define a normal SDS loss that\nmotivates accurate geometry inpainting and alignment with the appearance.\nAdditionally, we formulate a multi-view SDS score function to distill\ngenerative priors simultaneously from different view images, ensuring\nconsistent visual completion when dealing with large view variations. Our\nexperimental results show better appearance and geometry recovery than previous\nNeRF inpainting methods.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Honghua Chen", "Chen Change Loy", "Xingang Pan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f74e"}, "filepath": "data/2311.15435.png", "tags": [], "_media_type": "image", "_rand": 0.9994812939253096, "arXiv_link": "https://arxiv.org/abs/2311.15435", "other_link": "", "title": "Functional Diffusion", "abstract": "We propose a new class of generative diffusion models, called functional\ndiffusion. In contrast to previous work, functional diffusion works on samples\nthat are represented by functions with a continuous domain. Functional\ndiffusion can be seen as an extension of classical diffusion models to an\ninfinite-dimensional domain. Functional diffusion is very versatile as images,\nvideos, audio, 3D shapes, deformations, \\etc, can be handled by the same\nframework with minimal changes. In addition, functional diffusion is especially\nsuited for irregular data or data defined in non-standard domains. In our work,\nwe derive the necessary foundations for functional diffusion and propose a\nfirst implementation based on the transformer architecture. 
We show generative\nresults on complicated signed distance functions and deformation functions\ndefined on 3D surfaces.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Biao Zhang", "Peter Wonka"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f74f"}, "filepath": "data/2404.00974v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997632059972114, "arXiv_link": "https://arxiv.org/abs/2404.00974v1", "other_link": "", "title": "Improving Visual Recognition with Hyperbolical Visual Hierarchy Mapping", "abstract": "Visual scenes are naturally organized in a hierarchy, where a coarse semantic\nis recursively comprised of several fine details. Exploring such a visual\nhierarchy is crucial to recognize the complex relations of visual elements,\nleading to a comprehensive scene understanding. In this paper, we propose a\nVisual Hierarchy Mapper (Hi-Mapper), a novel approach for enhancing the\nstructured understanding of the pre-trained Deep Neural Networks (DNNs).\nHi-Mapper investigates the hierarchical organization of the visual scene by 1)\npre-defining a hierarchy tree through the encapsulation of probability\ndensities; and 2) learning the hierarchical relations in hyperbolic space with\na novel hierarchical contrastive loss. The pre-defined hierarchy tree\nrecursively interacts with the visual features of the pre-trained DNNs through\nhierarchy decomposition and encoding procedures, thereby effectively\nidentifying the visual hierarchy and enhancing the recognition of an entire\nscene. Extensive experiments demonstrate that Hi-Mapper significantly enhances\nthe representation capability of DNNs, leading to an improved performance on\nvarious tasks, including image classification and dense prediction tasks.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Hyeongjun Kwon", "Jinhyun Jang", "Jin Kim", "Kwonyoung Kim", "Kwanghoon Sohn"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f750"}, "filepath": "data/2402.18842.png", "tags": [], "_media_type": "image", "_rand": 0.999798243079725, "arXiv_link": "https://arxiv.org/abs/2402.18842", "other_link": "", "title": "ViewFusion: Towards Multi-View Consistency via Interpolated Denoising", "abstract": "Novel-view synthesis through diffusion models has demonstrated remarkable\npotential for generating diverse and high-quality images. Yet, the independent\nprocess of image generation in these prevailing methods leads to challenges in\nmaintaining multiple-view consistency. To address this, we introduce\nViewFusion, a novel, training-free algorithm that can be seamlessly integrated\ninto existing pre-trained diffusion models. Our approach adopts an\nauto-regressive method that implicitly leverages previously generated views as\ncontext for the next view generation, ensuring robust multi-view consistency\nduring the novel-view generation process. Through a diffusion process that\nfuses known-view information via interpolated denoising, our framework\nsuccessfully extends single-view conditioned models to work in multiple-view\nconditional settings without any additional fine-tuning. 
Extensive experimental\nresults demonstrate the effectiveness of ViewFusion in generating consistent\nand detailed novel views.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xianghui Yang", "Gil Avraham", "Yan Zuo", "Sameera Ramasinghe", "Loris Bazzani", "Anton van den Hengel"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f751"}, "filepath": "data/2403.04381.png", "tags": [], "_media_type": "image", "_rand": 0.9995789181606178, "arXiv_link": "https://arxiv.org/abs/2403.04381", "other_link": "https://github.com/MickeyLLG/S2DHand.", "title": "Single-to-Dual-View Adaptation for Egocentric 3D Hand Pose Estimation", "abstract": "The pursuit of accurate 3D hand pose estimation stands as a keystone for\nunderstanding human activity in the realm of egocentric vision. The majority of\nexisting estimation methods still rely on single-view images as input, leading\nto potential limitations, e.g., limited field-of-view and ambiguity in depth.\nTo address these problems, adding another camera to better capture the shape of\nhands is a practical direction. However, existing multi-view hand pose\nestimation methods suffer from two main drawbacks: 1) Requiring multi-view\nannotations for training, which are expensive. 2) During testing, the model\nbecomes inapplicable if camera parameters/layout are not the same as those used\nin training. In this paper, we propose a novel Single-to-Dual-view adaptation\n(S2DHand) solution that adapts a pre-trained single-view estimator to dual\nviews. Compared with existing multi-view training methods, 1) our adaptation\nprocess is unsupervised, eliminating the need for multi-view annotation. 2)\nMoreover, our method can handle arbitrary dual-view pairs with unknown camera\nparameters, making the model applicable to diverse camera settings.\nSpecifically, S2DHand is built on certain stereo constraints, including\npair-wise cross-view consensus and invariance of transformation between both\nviews. These two stereo constraints are used in a complementary manner to\ngenerate pseudo-labels, allowing reliable adaptation. Evaluation results reveal\nthat S2DHand achieves significant improvements on arbitrary camera pairs under\nboth in-dataset and cross-dataset settings, and outperforms existing adaptation\nmethods with leading performance. Project page:\nhttps://github.com/MickeyLLG/S2DHand.", "keywords": [], "authors_list": ["Ruicong Liu", "Takehiko Ohkawa", "Mingfang Zhang", "Yoichi Sato"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f752"}, "filepath": "data/2403.18442.png", "tags": [], "_media_type": "image", "_rand": 0.9995690761241167, "arXiv_link": "https://arxiv.org/abs/2403.18442", "other_link": "https://github.com/abie-e/BFTT3D}.", "title": "Backpropagation-free Network for 3D Test-time Adaptation", "abstract": "Real-world systems often encounter new data over time, which leads to\nexperiencing target domain shifts. Existing Test-Time Adaptation (TTA) methods\ntend to apply computationally heavy and memory-intensive backpropagation-based\napproaches to handle this. Here, we propose a novel method that uses a\nbackpropagation-free approach for TTA for the specific case of 3D data. 
Our\nmodel uses a two-stream architecture to maintain knowledge about the source\ndomain as well as complementary target-domain-specific information. The\nbackpropagation-free property of our model helps address the well-known\nforgetting problem and mitigates the error accumulation issue. The proposed\nmethod also eliminates the need for the usually noisy process of\npseudo-labeling and reliance on costly self-supervised training. Moreover, our\nmethod leverages subspace learning, effectively reducing the distribution\nvariance between the two domains. Furthermore, the source-domain-specific and\nthe target-domain-specific streams are aligned using a novel entropy-based\nadaptive fusion strategy. Extensive experiments on popular benchmarks\ndemonstrate the effectiveness of our method. The code will be available at\n\\url{https://github.com/abie-e/BFTT3D}.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["YANSHUO WANG", "Ali Cheraghian", "Zeeshan Hayder", "JIE HONG", "Sameera Ramasinghe", "Shafin Rahman", "David Ahmedt-Aristizabal", "Xuesong Li", "Lars Petersson", "Mehrtash Harandi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f753"}, "filepath": "data/2405.14497.png", "tags": [], "_media_type": "image", "_rand": 0.9997622319782947, "arXiv_link": "https://arxiv.org/abs/2405.14497", "other_link": "https://github.com/msohaildanish/DivAlign", "title": "Improving Single Domain-Generalized Object Detection: A Focus on Diversification and Alignment", "abstract": "In this work, we tackle the problem of domain generalization for object\ndetection, specifically focusing on the scenario where only a single source\ndomain is available. We propose an effective approach that involves two key\nsteps: diversifying the source domain and aligning detections based on class\nprediction confidence and localization. Firstly, we demonstrate that by\ncarefully selecting a set of augmentations, a base detector can outperform\nexisting methods for single domain generalization by a good margin. This\nhighlights the importance of domain diversification in improving the\nperformance of object detectors. Secondly, we introduce a method to align\ndetections from multiple views, considering both classification and\nlocalization outputs. This alignment procedure leads to better generalized and\nwell-calibrated object detector models, which are crucial for accurate\ndecision-making in safety-critical applications. Our approach is\ndetector-agnostic and can be seamlessly applied to both single-stage and\ntwo-stage detectors. To validate the effectiveness of our proposed methods, we\nconduct extensive experiments and ablations on challenging domain-shift\nscenarios. The results consistently demonstrate the superiority of our approach\ncompared to existing methods. Our code and models are available at:\nhttps://github.com/msohaildanish/DivAlign", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Muhammad Sohail Danish", "Muhammad Haris Khan", "Muhammad Akhtar Munir", "M. 
Sarfraz", "Mohsen Ali"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f754"}, "filepath": "data/2312.01623.png", "tags": [], "_media_type": "image", "_rand": 0.9997829898005196, "arXiv_link": "https://arxiv.org/abs/2312.01623", "other_link": "", "title": "Universal Segmentation at Arbitrary Granularity with Language Instruction", "abstract": "This paper aims to achieve universal segmentation of arbitrary semantic\nlevel. Despite significant progress in recent years, specialist segmentation\napproaches are limited to specific tasks and data distribution. Retraining a\nnew model for adaptation to new scenarios or settings takes expensive\ncomputation and time cost, which raises the demand for versatile and universal\nsegmentation model that can cater to various granularity. Although some\nattempts have been made for unifying different segmentation tasks or\ngeneralization to various scenarios, limitations in the definition of paradigms\nand input-output spaces make it difficult for them to achieve accurate\nunderstanding of content at arbitrary granularity. To this end, we present\nUniLSeg, a universal segmentation model that can perform segmentation at any\nsemantic level with the guidance of language instructions. For training\nUniLSeg, we reorganize a group of tasks from original diverse distributions\ninto a unified data format, where images with texts describing segmentation\ntargets as input and corresponding masks are output. Combined with a automatic\nannotation engine for utilizing numerous unlabeled data, UniLSeg achieves\nexcellent performance on various tasks and settings, surpassing both specialist\nand unified segmentation models.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Deep learning architectures and techniques"], "authors_list": ["Yong Liu", "Cairong Zhang", "Yitong Wang", "Jiahao Wang", "Yujiu Yang", "Yansong Tang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f755"}, "filepath": "data/2306.04451.png", "tags": [], "_media_type": "image", "_rand": 0.9994808063528913, "arXiv_link": "http://export.arxiv.org/abs/2306.04451", "other_link": "", "title": "ScanFormer: Referring Expression Comprehension by Iteratively Scanning", "abstract": "Different from universal object detection, referring expression comprehension\n(REC) aims to locate specific objects referred to by natural language\nexpressions. The expression provides high-level concepts of relevant visual and\ncontextual patterns, which vary significantly with different expressions and\naccount for only a few of those encoded in the REC model. This leads us to a\nquestion: do we really need the entire network with a fixed structure for\nvarious referring expressions? Ideally, given an expression, only\nexpression-relevant components of the REC model are required. These components\nshould be small in number as each expression only contains very few visual and\ncontextual clues. This paper explores the adaptation between expressions and\nREC models for dynamic inference. 
Concretely, we propose a neat yet efficient\nframework named Language Adaptive Dynamic Subnets (LADS), which can extract\nlanguage-adaptive subnets from the REC model conditioned on the referring\nexpressions. By using the compact subnet, the inference can be more economical\nand efficient. Extensive experiments on RefCOCO, RefCOCO+, RefCOCOg, and\nReferit show that the proposed method achieves faster inference speed and\nhigher accuracy against state-of-the-art approaches.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Wei Su", "Peihan Miao", "Huanzhang Dou", "Xi Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f756"}, "filepath": "data/2312.04553.png", "tags": [], "_media_type": "image", "_rand": 0.9997992659526155, "arXiv_link": "https://arxiv.org/abs/2312.04553", "other_link": "", "title": "SPIDeRS: Structured Polarization for Invisible Depth and Reflectance Sensing", "abstract": "Can we capture shape and reflectance in stealth? Such capability would be\nvaluable for many application domains in vision, xR, robotics, and HCI. We\nintroduce structured polarization for invisible depth and reflectance sensing\n(SPIDeRS), the first depth and reflectance sensing method using patterns of\npolarized light. The key idea is to modulate the angle of linear polarization\n(AoLP) of projected light at each pixel. The use of polarization makes it\ninvisible and lets us recover not only depth but also directly surface normals\nand even reflectance. We implement SPIDeRS with a liquid crystal spatial light\nmodulator (SLM) and a polarimetric camera. We derive a novel method for\nrobustly extracting the projected structured polarization pattern from the\npolarimetric object appearance. We evaluate the effectiveness of SPIDeRS by\napplying it to a number of real-world objects. The results show that our method\nsuccessfully reconstructs object shapes of various materials and is robust to\ndiffuse reflection and ambient light. We also demonstrate relighting using\nrecovered surface normals and reflectance. We believe SPIDeRS opens a new\navenue of polarization use in visual sensing.", "keywords": ["Computational imaging and physics-based vision"], "authors_list": ["Tomoki Ichikawa", "Shohei Nobuhara", "Ko Nishino"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f757"}, "filepath": "data/2311.17435.png", "tags": [], "_media_type": "image", "_rand": 0.9998952482743388, "arXiv_link": "https://arxiv.org/abs/2311.17435", "other_link": "", "title": "MM-Narrator: Narrating Long-form Videos with Multimodal In-Context Learning", "abstract": "We present MM-Narrator, a novel system leveraging GPT-4 with multimodal\nin-context learning for the generation of audio descriptions (AD). 
Unlike\nprevious methods that primarily focused on downstream fine-tuning with short\nvideo clips, MM-Narrator excels in generating precise audio descriptions for\nvideos of extensive lengths, even beyond hours, in an autoregressive manner.\nThis capability is made possible by the proposed memory-augmented generation\nprocess, which effectively utilizes both the short-term textual context and\nlong-term visual memory through an efficient register-and-recall mechanism.\nThese contextual memories compile pertinent past information, including\nstorylines and character identities, ensuring an accurate tracking and\ndepicting of story-coherent and character-centric audio descriptions.\nMaintaining the training-free design of MM-Narrator, we further propose a\ncomplexity-based demonstration selection strategy to largely enhance its\nmulti-step reasoning capability via few-shot multimodal in-context learning\n(MM-ICL). Experimental results on MAD-eval dataset demonstrate that MM-Narrator\nconsistently outperforms both the existing fine-tuning-based approaches and\nLLM-based approaches in most scenarios, as measured by standard evaluation\nmetrics. Additionally, we introduce the first segment-based evaluator for\nrecurrent text generation. Empowered by GPT-4, this evaluator comprehensively\nreasons and marks AD generation performance in various extendable dimensions.", "keywords": ["Large multimodal models and prompting techniques", "Image and video generation and manipulation", "Efficient and scalable vision"], "authors_list": ["Chaoyi Zhang", "Kevin Lin", "Zhengyuan Yang", "Jianfeng Wang", "Linjie Li", "Chung-Ching Lin", "Zicheng Liu", "Lijuan Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f758"}, "filepath": "data/2307.00040.png", "tags": [], "_media_type": "image", "_rand": 0.999241065959767, "arXiv_link": "https://arxiv.org/abs/2307.00040", "other_link": "https://disco-dance.github.io/.", "title": "DisCo: Disentangled Control for Realistic Human Dance Generation", "abstract": "Generative AI has made significant strides in computer vision, particularly\nin text-driven image/video synthesis (T2I/T2V). Despite the notable\nadvancements, it remains challenging in human-centric content synthesis such as\nrealistic dance generation. Current methodologies, primarily tailored for human\nmotion transfer, encounter difficulties when confronted with real-world dance\nscenarios (e.g., social media dance), which require to generalize across a wide\nspectrum of poses and intricate human details. In this paper, we depart from\nthe traditional paradigm of human motion transfer and emphasize two additional\ncritical attributes for the synthesis of human dance content in social media\ncontexts: (i) Generalizability: the model should be able to generalize beyond\ngeneric human viewpoints as well as unseen human subjects, backgrounds, and\nposes; (ii) Compositionality: it should allow for the seamless composition of\nseen/unseen subjects, backgrounds, and poses from different sources. To address\nthese challenges, we introduce DISCO, which includes a novel model architecture\nwith disentangled control to improve the compositionality of dance synthesis,\nand an effective human attribute pre-training for better generalizability to\nunseen humans. 
Extensive qualitative and quantitative results demonstrate that\nDisCo can generate high-quality human dance images and videos with diverse\nappearances and flexible motions. Code is available at\nhttps://disco-dance.github.io/.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Tan Wang", "Linjie Li", "Kevin Lin", "Yuanhao Zhai", "Chung-Ching Lin", "Zhengyuan Yang", "Hanwang Zhang", "Zicheng Liu", "Lijuan Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f759"}, "filepath": "data/2404.16670.png", "tags": [], "_media_type": "image", "_rand": 0.9997163061735623, "arXiv_link": "https://arxiv.org/abs/2404.16670", "other_link": "https://github.com/aimmemotion/EmoVIT", "title": "EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning", "abstract": "Visual Instruction Tuning represents a novel learning paradigm involving the\nfine-tuning of pre-trained language models using task-specific instructions.\nThis paradigm shows promising zero-shot results in various natural language\nprocessing tasks but is still unexplored in vision emotion understanding. In\nthis work, we focus on enhancing the model's proficiency in understanding and\nadhering to instructions related to emotional contexts. Initially, we identify\nkey visual clues critical to visual emotion recognition. Subsequently, we\nintroduce a novel GPT-assisted pipeline for generating emotion visual\ninstruction data, effectively addressing the scarcity of annotated instruction\ndata in this domain. Expanding on the groundwork established by InstructBLIP,\nour proposed EmoVIT architecture incorporates emotion-specific instruction\ndata, leveraging the powerful capabilities of Large Language Models to enhance\nperformance. Through extensive experiments, our model showcases its proficiency\nin emotion classification, adeptness in affective reasoning, and competence in\ncomprehending humor. The comparative analysis provides a robust benchmark for\nEmotion Visual Instruction Tuning in the era of LLMs, providing valuable\ninsights and opening avenues for future exploration in this domain. Our code is\navailable at \\url{https://github.com/aimmemotion/EmoVIT}.", "keywords": [], "authors_list": ["Hongxia Xie", "Chu-Jun Peng", "Yu-Wen Tseng", "Hung-Jen Chen", "Chan-Feng Hsu", "Hong-Han Shuai", "Wen-Huang Cheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f75a"}, "filepath": "data/2312.09069.png", "tags": [], "_media_type": "image", "_rand": 0.9992892411403859, "arXiv_link": "https://arxiv.org/abs/2312.09069", "other_link": "", "title": "PI3D: Efficient Text-to-3D Generation with Pseudo-Image Diffusion", "abstract": "Diffusion models trained on large-scale text-image datasets have demonstrated\na strong capability of controllable high-quality image generation from\narbitrary text prompts. However, the generation quality and generalization\nability of 3D diffusion models are hindered by the scarcity of high-quality and\nlarge-scale 3D datasets. 
In this paper, we present PI3D, a framework that fully\nleverages the pre-trained text-to-image diffusion models' ability to generate\nhigh-quality 3D shapes from text prompts in minutes. The core idea is to\nconnect the 2D and 3D domains by representing a 3D shape as a set of Pseudo RGB\nImages. We fine-tune an existing text-to-image diffusion model to produce such\npseudo-images using a small number of text-3D pairs. Surprisingly, we find that\nit can already generate meaningful and consistent 3D shapes given complex text\ndescriptions. We further take the generated shapes as the starting point for a\nlightweight iterative refinement using score distillation sampling to achieve\nhigh-quality generation under a low budget. PI3D generates a single 3D shape\nfrom text in only 3 minutes and the quality is validated to outperform existing\n3D generative models by a large margin.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Ying-Tian Liu", "Yuan-Chen Guo", "Guan Luo", "Heyi Sun", "Wei Yin", "Song-Hai Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f75b"}, "filepath": "data/2402.17427.png", "tags": [], "_media_type": "image", "_rand": 0.9998673465024192, "arXiv_link": "https://arxiv.org/abs/2402.17427", "other_link": "", "title": "VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction", "abstract": "Existing NeRF-based methods for large scene reconstruction often have\nlimitations in visual quality and rendering speed. While the recent 3D Gaussian\nSplatting works well on small-scale and object-centric scenes, scaling it up to\nlarge scenes poses challenges due to limited video memory, long optimization\ntime, and noticeable appearance variations. To address these challenges, we\npresent VastGaussian, the first method for high-quality reconstruction and\nreal-time rendering on large scenes based on 3D Gaussian Splatting. We propose\na progressive partitioning strategy to divide a large scene into multiple\ncells, where the training cameras and point cloud are properly distributed with\nan airspace-aware visibility criterion. These cells are merged into a complete\nscene after parallel optimization. We also introduce decoupled appearance\nmodeling into the optimization process to reduce appearance variations in the\nrendered images. 
Our approach outperforms existing NeRF-based methods and\nachieves state-of-the-art results on multiple large scene datasets, enabling\nfast optimization and high-fidelity real-time rendering.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Jiaqi Lin", "Zhihao Li", "Xiao Tang", "Jianzhuang Liu", "Shiyong Liu", "Jiayue Liu", "Yangdi Lu", "Xiaofei Wu", "Songcen Xu", "Youliang Yan", "Wenming Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f75c"}, "filepath": "data/2312.02980.png", "tags": [], "_media_type": "image", "_rand": 0.9996754597882569, "arXiv_link": "https://arxiv.org/abs/2312.02980", "other_link": "", "title": "GPT4Point: A Unified Framework for Point-Language Understanding and Generation", "abstract": "Multimodal Large Language Models (MLLMs) have excelled in 2D image-text\ncomprehension and image generation, but their understanding of the 3D world is\nnotably deficient, limiting progress in 3D language understanding and\ngeneration. To solve this problem, we introduce GPT4Point, an innovative\ngroundbreaking point-language multimodal model designed specifically for\nunified 3D object understanding and generation within the MLLM framework.\nGPT4Point as a powerful 3D MLLM seamlessly can execute a variety of point-text\nreference tasks such as point-cloud captioning and Q&A. Additionally, GPT4Point\nis equipped with advanced capabilities for controllable 3D generation, it can\nget high-quality results through a low-quality point-text feature maintaining\nthe geometric shapes and colors. To support the expansive needs of 3D\nobject-text pairs, we develop Pyramid-XL, a point-language dataset annotation\nengine. It constructs a large-scale database over 1M objects of varied text\ngranularity levels from the Objaverse-XL dataset, essential for training\nGPT4Point. A comprehensive benchmark has been proposed to evaluate 3D\npoint-language understanding capabilities. In extensive evaluations, GPT4Point\nhas demonstrated superior performance in understanding and generation.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Zhangyang Qi", "Ye Fang", "Zeyi Sun", "Xiaoyang Wu", "Tong Wu", "Jiaqi Wang", "Dahua Lin", "Hengshuang Zhao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f75d"}, "filepath": "data/2401.08399.png", "tags": [], "_media_type": "image", "_rand": 0.9995751510439065, "arXiv_link": "https://arxiv.org/abs/2401.08399", "other_link": "https://taco2024.github.io.", "title": "TACO: Benchmarking Generalizable Bimanual Tool-ACtion-Object Understanding", "abstract": "Humans commonly work with multiple objects in daily life and can intuitively\ntransfer manipulation skills to novel objects by understanding object\nfunctional regularities. However, existing technical approaches for analyzing\nand synthesizing hand-object manipulation are mostly limited to handling a\nsingle hand and object due to the lack of data support. 
To address this, we\nconstruct TACO, an extensive bimanual hand-object-interaction dataset spanning\na large variety of tool-action-object compositions for daily human activities.\nTACO contains 2.5K motion sequences paired with third-person and egocentric\nviews, precise hand-object 3D meshes, and action labels. To rapidly expand the\ndata scale, we present a fully automatic data acquisition pipeline combining\nmulti-view sensing with an optical motion capture system. With the vast\nresearch fields provided by TACO, we benchmark three generalizable\nhand-object-interaction tasks: compositional action recognition, generalizable\nhand-object motion forecasting, and cooperative grasp synthesis. Extensive\nexperiments reveal new insights, challenges, and opportunities for advancing\nthe studies of generalizable hand-object motion analysis and synthesis. Our\ndata and code are available at https://taco2024.github.io.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Yun Liu", "Haolin Yang", "Xu Si", "Ling Liu", "Zipeng Li", "Yuxiang Zhang", "Yebin Liu", "Li Yi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f75e"}, "filepath": "data/2312.03818.png", "tags": [], "_media_type": "image", "_rand": 0.9998815080556571, "arXiv_link": "https://arxiv.org/abs/2312.03818", "other_link": "", "title": "Alpha-CLIP: A CLIP Model Focusing on Wherever You Want", "abstract": "Contrastive Language-Image Pre-training (CLIP) plays an essential role in\nextracting valuable content information from images across diverse tasks. It\naligns textual and visual modalities to comprehend the entire image, including\nall the details, even those irrelevant to specific tasks. However, for a finer\nunderstanding and controlled editing of images, it becomes crucial to focus on\nspecific regions of interest, which can be indicated as points, masks, or boxes\nby humans or perception models. To fulfill the requirements, we introduce\nAlpha-CLIP, an enhanced version of CLIP with an auxiliary alpha channel to\nsuggest attentive regions and fine-tuned with constructed millions of RGBA\nregion-text pairs. Alpha-CLIP not only preserves the visual recognition ability\nof CLIP but also enables precise control over the emphasis of image contents.\nIt demonstrates effectiveness in various tasks, including but not limited to\nopen-world recognition, multimodal large language models, and conditional 2D /\n3D generation. 
It has a strong potential to serve as a versatile tool for\nimage-related tasks.", "keywords": [], "authors_list": ["Zeyi Sun", "Ye Fang", "Tong Wu", "Pan Zhang", "Yuhang Zang", "Shu Kong", "Yuanjun Xiong", "Dahua Lin", "Jiaqi Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f75f"}, "filepath": "data/2312.14233.png", "tags": [], "_media_type": "image", "_rand": 0.9999170617725578, "arXiv_link": "https://arxiv.org/abs/2312.14233", "other_link": "https://github.com/SHI-Labs/VCoder", "title": "VCoder: Versatile Vision Encoders for Multimodal Large Language Models", "abstract": "Humans possess the remarkable skill of Visual Perception, the ability to see\nand understand the seen, helping them make sense of the visual world and, in\nturn, reason. Multimodal Large Language Models (MLLM) have recently achieved\nimpressive performance on vision-language tasks ranging from visual\nquestion-answering and image captioning to visual reasoning and image\ngeneration. However, when prompted to identify or count (perceive) the entities\nin a given image, existing MLLM systems fail. Working towards developing an\naccurate MLLM system for perception and reasoning, we propose using Versatile\nvision enCoders (VCoder) as perception eyes for Multimodal LLMs. We feed the\nVCoder with perception modalities such as segmentation or depth maps, improving\nthe MLLM's perception abilities. Secondly, we leverage the images from COCO and\noutputs from off-the-shelf vision perception models to create our COCO\nSegmentation Text (COST) dataset for training and evaluating MLLMs on the\nobject perception task. Thirdly, we introduce metrics to assess the object\nperception abilities in MLLMs on our COST dataset. Lastly, we provide extensive\nexperimental evidence proving the VCoder's improved object-level perception\nskills over existing Multimodal LLMs, including GPT-4V. We open-source our\ndataset, code, and models to promote research. We open-source our code at\nhttps://github.com/SHI-Labs/VCoder", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Jitesh Jain", "Jianwei Yang", "Humphrey Shi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f760"}, "filepath": "data/2312.04466.png", "tags": [], "_media_type": "image", "_rand": 0.9998037680859194, "arXiv_link": "https://arxiv.org/abs/2312.04466", "other_link": "", "title": "Emotional Speech-Driven 3D Body Animation via Disentangled Latent Diffusion", "abstract": "Existing methods for synthesizing 3D human gestures from speech have shown\npromising results, but they do not explicitly model the impact of emotions on\nthe generated gestures. Instead, these methods directly output animations from\nspeech without control over the expressed emotion. To address this limitation,\nwe present AMUSE, an emotional speech-driven body animation model based on\nlatent diffusion. 
Our observation is that content (i.e., gestures related to\nspeech rhythm and word utterances), emotion, and personal style are separable.\nTo account for this, AMUSE maps the driving audio to three disentangled latent\nvectors: one for content, one for emotion, and one for personal style. A latent\ndiffusion model, trained to generate gesture motion sequences, is then\nconditioned on these latent vectors. Once trained, AMUSE synthesizes 3D human\ngestures directly from speech with control over the expressed emotions and\nstyle by combining the content from the driving speech with the emotion and\nstyle of another speech sequence. Randomly sampling the noise of the diffusion\nmodel further generates variations of the gesture with the same emotional\nexpressivity. Qualitative, quantitative, and perceptual evaluations demonstrate\nthat AMUSE outputs realistic gesture sequences. Compared to the state of the\nart, the generated gestures are better synchronized with the speech content,\nand better represent the emotion expressed by the input speech. Our code is\navailable at amuse.is.tue.mpg.de.", "keywords": ["Biometrics and human analysis", "Multimodal models and vision-language models"], "authors_list": ["Kiran Chhatre", "Radek Danecek", "Nikos Athanasiou", "Giorgio Becherini", "Christopher Peters", "Michael J. Black", "Timo Bolkart"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f761"}, "filepath": "data/2312.03611.png", "tags": [], "_media_type": "image", "_rand": 0.9998141293739292, "arXiv_link": "https://arxiv.org/abs/2312.03611", "other_link": "", "title": "DreamComposer: Controllable 3D Object Generation via Multi-View Conditions", "abstract": "Utilizing pre-trained 2D large-scale generative models, recent works are\ncapable of generating high-quality novel views from a single in-the-wild image.\nHowever, due to the lack of information from multiple views, these works\nencounter difficulties in generating controllable novel views. In this paper,\nwe present DreamComposer, a flexible and scalable framework that can enhance\nexisting view-aware diffusion models by injecting multi-view conditions.\nSpecifically, DreamComposer first uses a view-aware 3D lifting module to obtain\n3D representations of an object from multiple views. Then, it renders the\nlatent features of the target view from 3D representations with the multi-view\nfeature fusion module. Finally the target view features extracted from\nmulti-view inputs are injected into a pre-trained diffusion model. 
Experiments\nshow that DreamComposer is compatible with state-of-the-art diffusion models\nfor zero-shot novel view synthesis, further enhancing them to generate\nhigh-fidelity novel view images with multi-view conditions, ready for\ncontrollable 3D object reconstruction and various other applications.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Yunhan Yang", "Yukun Huang", "Xiaoyang Wu", "Yuan-Chen Guo", "Song-Hai Zhang", "Hengshuang Zhao", "Tong He", "Xihui Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f762"}, "filepath": "data/2207.12955.png", "tags": [], "_media_type": "image", "_rand": 0.9996802212326629, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2207.12955", "other_link": "https://sg-vilab.github.io/publication/xue2022contextual/.", "title": "LayoutFormer: Hierarchical Text Detection Towards Scene Text Understanding", "abstract": "Most existing scene text detectors focus on detecting characters or words\nthat only capture partial text messages due to missing contextual information.\nFor a better understanding of text in scenes, it is more desired to detect\ncontextual text blocks (CTBs) which consist of one or multiple integral text\nunits (e.g., characters, words, or phrases) in natural reading order and\ntransmit certain complete text messages. This paper presents contextual text\ndetection, a new setup that detects CTBs for better understanding of texts in\nscenes. We formulate the new setup by a dual detection task which first detects\nintegral text units and then groups them into a CTB. To this end, we design a\nnovel scene text clustering technique that treats integral text units as tokens\nand groups them (belonging to the same CTB) into an ordered token sequence. In\naddition, we create two datasets SCUT-CTW-Context and ReCTS-Context to\nfacilitate future research, where each CTB is well annotated by an ordered\nsequence of integral text units. Further, we introduce three metrics that\nmeasure contextual text detection in local accuracy, continuity, and global\naccuracy. Extensive experiments show that our method accurately detects CTBs\nwhich effectively facilitates downstream tasks such as text classification and\ntranslation. The project is available at\nhttps://sg-vilab.github.io/publication/xue2022contextual/.", "keywords": [], "authors_list": ["Min Liang", "Jia-Wei Ma", "Xiaobin Zhu", "Jingyan Qin", "Xu-Cheng Yin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f763"}, "filepath": "data/2404.08921.png", "tags": [], "_media_type": "image", "_rand": 0.9998050530408824, "arXiv_link": "https://arxiv.org/abs/2404.08921", "other_link": "", "title": "PNeRV: Enhancing Spatial Consistency via Pyramidal Neural Representation for Videos", "abstract": "The primary focus of Neural Representation for Videos (NeRV) is to\neffectively model its spatiotemporal consistency. However, current NeRV systems\noften face a significant issue of spatial inconsistency, leading to decreased\nperceptual quality. 
To address this issue, we introduce the Pyramidal Neural\nRepresentation for Videos (PNeRV), which is built on a multi-scale information\nconnection and comprises a lightweight rescaling operator, Kronecker\nFully-connected layer (KFc), and a Benign Selective Memory (BSM) mechanism. The\nKFc, inspired by the tensor decomposition of the vanilla Fully-connected layer,\nfacilitates low-cost rescaling and global correlation modeling. BSM merges\nhigh-level features with granular ones adaptively. Furthermore, we provide an\nanalysis based on the Universal Approximation Theory of the NeRV system and\nvalidate the effectiveness of the proposed PNeRV. We conducted comprehensive\nexperiments to demonstrate that PNeRV surpasses the performance of contemporary\nNeRV models, achieving the best results in video regression on UVG and DAVIS\nunder various metrics (PSNR, SSIM, LPIPS, and FVD). Compared to vanilla NeRV,\nPNeRV achieves a +4.49 dB gain in PSNR and a 231% increase in FVD on UVG, along\nwith a +3.28 dB PSNR and 634% FVD increase on DAVIS.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Qi Zhao", "M. Salman Asif", "Zhan Ma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f764"}, "filepath": "data/2312.09008.png", "tags": [], "_media_type": "image", "_rand": 0.999411369387266, "arXiv_link": "https://arxiv.org/abs/2312.09008", "other_link": "", "title": "Style Injection in Diffusion: A Training-free Approach for Adapting Large-scale Diffusion Models for Style Transfer", "abstract": "Despite the impressive generative capabilities of diffusion models, existing\ndiffusion model-based style transfer methods require inference-stage\noptimization (e.g. fine-tuning or textual inversion of style) which is\ntime-consuming, or fails to leverage the generative ability of large-scale\ndiffusion models. To address these issues, we introduce a novel artistic style\ntransfer method based on a pre-trained large-scale diffusion model without any\noptimization. Specifically, we manipulate the features of self-attention layers\nas the way the cross-attention mechanism works; in the generation process,\nsubstituting the key and value of content with those of style image. This\napproach provides several desirable characteristics for style transfer\nincluding 1) preservation of content by transferring similar styles into\nsimilar image patches and 2) transfer of style based on similarity of local\ntexture (e.g. edge) between content and style images. Furthermore, we introduce\nquery preservation and attention temperature scaling to mitigate the issue of\ndisruption of original content, and initial latent Adaptive Instance\nNormalization (AdaIN) to deal with the disharmonious color (failure to transfer\nthe colors of style). 
Our experimental results demonstrate that our proposed\nmethod surpasses state-of-the-art methods in both conventional and\ndiffusion-based style transfer baselines.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Jiwoo Chung", "Sangeek Hyun", "Jae-Pil Heo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f765"}, "filepath": "data/2401.15261.png", "tags": [], "_media_type": "image", "_rand": 0.9999163192497356, "arXiv_link": "https://arxiv.org/abs/2401.15261", "other_link": "", "title": "Vanishing-Point-Guided Video Semantic Segmentation of Driving Scenes", "abstract": "The estimation of implicit cross-frame correspondences and the high\ncomputational cost have long been major challenges in video semantic\nsegmentation (VSS) for driving scenes. Prior works utilize keyframes, feature\npropagation, or cross-frame attention to address these issues. By contrast, we\nare the first to harness vanishing point (VP) priors for more effective\nsegmentation. Intuitively, objects near VPs (i.e., away from the vehicle) are\nless discernible. Moreover, they tend to move radially away from the VP over\ntime in the usual case of a forward-facing camera, a straight road, and linear\nforward motion of the vehicle. Our novel, efficient network for VSS, named\nVPSeg, incorporates two modules that utilize exactly this pair of static and\ndynamic VP priors: sparse-to-dense feature mining (DenseVP) and VP-guided\nmotion fusion (MotionVP). MotionVP employs VP-guided motion estimation to\nestablish explicit correspondences across frames and help attend to the most\nrelevant features from neighboring frames, while DenseVP enhances weak dynamic\nfeatures in distant regions around VPs. These modules operate within a\ncontext-detail framework, which separates contextual features from\nhigh-resolution local features at different input resolutions to reduce\ncomputational costs. Contextual and local features are integrated through\ncontextualized motion attention (CMA) for the final prediction. Extensive\nexperiments on two popular driving segmentation benchmarks, Cityscapes and\nACDC, demonstrate that VPSeg outperforms previous SOTA methods, with only\nmodest computational overhead.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Diandian Guo", "Deng-Ping Fan", "Tongyu Lu", "Christos Sakaridis", "Luc Van Gool"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f766"}, "filepath": "data/2311.17132.png", "tags": [], "_media_type": "image", "_rand": 0.9995016423325715, "arXiv_link": "https://arxiv.org/abs/2311.17132", "other_link": "", "title": "TransNeXt: Robust Foveal Visual Perception for Vision Transformers", "abstract": "Due to the depth degradation effect in residual connections, many efficient\nVision Transformers models that rely on stacking layers for information\nexchange often fail to form sufficient information mixing, leading to unnatural\nvisual perception. 
To address this issue, in this paper, we propose Aggregated\nAttention, a biomimetic design-based token mixer that simulates biological\nfoveal vision and continuous eye movement while enabling each token on the\nfeature map to have a global perception. Furthermore, we incorporate learnable\ntokens that interact with conventional queries and keys, which further\ndiversifies the generation of affinity matrices beyond merely relying on the\nsimilarity between queries and keys. Our approach does not rely on stacking for\ninformation exchange, thus effectively avoiding depth degradation and achieving\nnatural visual perception. Additionally, we propose Convolutional GLU, a\nchannel mixer that bridges the gap between GLU and SE mechanism, which empowers\neach token to have channel attention based on its nearest neighbor image\nfeatures, enhancing local modeling capability and model robustness. We combine\naggregated attention and convolutional GLU to create a new visual backbone\ncalled TransNeXt. Extensive experiments demonstrate that our TransNeXt achieves\nstate-of-the-art performance across multiple model sizes. At a resolution of\n$224^2$, TransNeXt-Tiny attains an ImageNet accuracy of 84.0%, surpassing\nConvNeXt-B with 69% fewer parameters. Our TransNeXt-Base achieves an ImageNet\naccuracy of 86.2% and an ImageNet-A accuracy of 61.6% at a resolution of\n$384^2$, a COCO object detection mAP of 57.1, and an ADE20K semantic\nsegmentation mIoU of 54.7.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Dai Shi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f767"}, "filepath": "data/2404.01440.png", "tags": [], "_media_type": "image", "_rand": 0.9990240926915167, "arXiv_link": "https://arxiv.org/abs/2404.01440", "other_link": "https://github.com/NVlabs/DigitalTwinArt", "title": "Neural Implicit Representation for Building Digital Twins of Unknown Articulated Objects", "abstract": "We address the problem of building digital twins of unknown articulated\nobjects from two RGBD scans of the object at different articulation states. We\ndecompose the problem into two stages, each addressing distinct aspects. Our\nmethod first reconstructs object-level shape at each state, then recovers the\nunderlying articulation model including part segmentation and joint\narticulations that associate the two states. By explicitly modeling point-level\ncorrespondences and exploiting cues from images, 3D reconstructions, and\nkinematics, our method yields more accurate and stable results compared to\nprior work. It also handles more than one movable part and does not rely on any\nobject shape or structure priors. 
Project page:\nhttps://github.com/NVlabs/DigitalTwinArt", "keywords": ["Scene analysis and understanding"], "authors_list": ["Yijia Weng", "Bowen Wen", "Jonathan Tremblay", "Valts Blukis", "Dieter Fox", "Leonidas Guibas", "Stan Birchfield"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Graphics", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f768"}, "filepath": "data/2311.16502.png", "tags": [], "_media_type": "image", "_rand": 0.9990072406763407, "arXiv_link": "https://arxiv.org/abs/2311.16502", "other_link": "", "title": "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI", "abstract": "We introduce MMMU: a new benchmark designed to evaluate multimodal models on\nmassive multi-discipline tasks demanding college-level subject knowledge and\ndeliberate reasoning. MMMU includes 11.5K meticulously collected multimodal\nquestions from college exams, quizzes, and textbooks, covering six core\ndisciplines: Art & Design, Business, Science, Health & Medicine, Humanities &\nSocial Science, and Tech & Engineering. These questions span 30 subjects and\n183 subfields, comprising 30 highly heterogeneous image types, such as charts,\ndiagrams, maps, tables, music sheets, and chemical structures. Unlike existing\nbenchmarks, MMMU focuses on advanced perception and reasoning with\ndomain-specific knowledge, challenging models to perform tasks akin to those\nfaced by experts. The evaluation of 14 open-source LMMs as well as the\nproprietary GPT-4V(ision) and Gemini highlights the substantial challenges\nposed by MMMU. Even the advanced GPT-4V and Gemini Ultra only achieve\naccuracies of 56% and 59% respectively, indicating significant room for\nimprovement. We believe MMMU will stimulate the community to build\nnext-generation multimodal foundation models towards expert artificial general\nintelligence.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Xiang Yue", "Yuansheng Ni", "Kai Zhang", "Tianyu Zheng", "Ruoqi Liu", "Ge Zhang", "Samuel Stevens", "Dongfu Jiang", "Weiming Ren", "Yuxuan Sun", "Cong Wei", "Botao Yu", "Ruibin Yuan", "Renliang Sun", "Ming Yin", "Boyuan Zheng", "Zhenzhu Yang", "Yibo Liu", "Wenhao Huang", "Huan Sun", "Yu Su", "Wenhu Chen"], "category_name": "Computation and Language", "all_categories": ["Computation and Language", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f769"}, "filepath": "data/2308.15692.png", "tags": [], "_media_type": "image", "_rand": 0.9997147716839031, "arXiv_link": "https://arxiv.org/abs/2308.15692", "other_link": "", "title": "Intriguing Properties of Diffusion Models: An Empirical Study of the Natural Attack Capability in Text-to-Image Generative Models", "abstract": "Denoising probabilistic diffusion models have shown breakthrough performance\nto generate more photo-realistic images or human-level illustrations than the\nprior models such as GANs. This high image-generation capability has stimulated\nthe creation of many downstream applications in various areas. 
However, we find\nthat this technology is actually a double-edged sword: We identify a new type\nof attack, called the Natural Denoising Diffusion (NDD) attack based on the\nfinding that state-of-the-art deep neural network (DNN) models still hold their\nprediction even if we intentionally remove their robust features, which are\nessential to the human visual system (HVS), through text prompts. The NDD\nattack shows a significantly high capability to generate low-cost,\nmodel-agnostic, and transferable adversarial attacks by exploiting the natural\nattack capability in diffusion models. To systematically evaluate the risk of\nthe NDD attack, we perform a large-scale empirical study with our newly created\ndataset, the Natural Denoising Diffusion Attack (NDDA) dataset. We evaluate the\nnatural attack capability by answering 6 research questions. Through a user\nstudy, we find that it can achieve an 88% detection rate while being stealthy\nto 93% of human subjects; we also find that the non-robust features embedded by\ndiffusion models contribute to the natural attack capability. To confirm the\nmodel-agnostic and transferable attack capability, we perform the NDD attack\nagainst the Tesla Model 3 and find that 73% of the physically printed attacks\ncan be detected as stop signs. Our hope is that the study and dataset can help\nour community be aware of the risks in diffusion models and facilitate further\nresearch toward robust DNN models.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Takami Sato", "Justin Yue", "Nanze Chen", "Ningfei Wang", "Alfred Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Cryptography and Security"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f76a"}, "filepath": "data/2402.18848.png", "tags": [], "_media_type": "image", "_rand": 0.9995574140561407, "arXiv_link": "https://arxiv.org/abs/2402.18848", "other_link": "", "title": "SwitchLight: Co-design of Physics-driven Architecture and Pre-training Framework for Human Portrait Relighting", "abstract": "We introduce a co-designed approach for human portrait relighting that\ncombines a physics-guided architecture with a pre-training framework. Drawing\non the Cook-Torrance reflectance model, we have meticulously configured the\narchitecture design to precisely simulate light-surface interactions.\nFurthermore, to overcome the limitation of scarce high-quality lightstage data,\nwe have developed a self-supervised pre-training strategy. 
This novel\ncombination of accurate physical modeling and expanded training dataset\nestablishes a new benchmark in relighting realism.", "keywords": ["Computational imaging and physics-based vision", "Image and video generation and manipulation"], "authors_list": ["Hoon Kim", "Minje Jang", "Wonjun Yoon", "Jisoo Lee", "Donghyun Na", "Sanghyun Woo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f76b"}, "filepath": "data/2403.19975.png", "tags": [], "_media_type": "image", "_rand": 0.9993303789449193, "arXiv_link": "https://arxiv.org/abs/2403.19975", "other_link": "", "title": "Context-Aware Integration of Language and Visual References for Natural Language Tracking", "abstract": "Tracking by natural language specification (TNL) aims to consistently\nlocalize a target in a video sequence given a linguistic description in the\ninitial frame. Existing methodologies perform language-based and template-based\nmatching for target reasoning separately and merge the matching results from\ntwo sources, which suffer from tracking drift when language and visual\ntemplates miss-align with the dynamic target state and ambiguity in the later\nmerging stage. To tackle the issues, we propose a joint multi-modal tracking\nframework with 1) a prompt modulation module to leverage the complementarity\nbetween temporal visual templates and language expressions, enabling precise\nand context-aware appearance and linguistic cues, and 2) a unified target\ndecoding module to integrate the multi-modal reference cues and executes the\nintegrated queries on the search image to predict the target location in an\nend-to-end manner directly. This design ensures spatio-temporal consistency by\nleveraging historical visual information and introduces an integrated solution,\ngenerating predictions in a single step. Extensive experiments conducted on\nTNL2K, OTB-Lang, LaSOT, and RefCOCOg validate the efficacy of our proposed\napproach. The results demonstrate competitive performance against\nstate-of-the-art methods for both tracking and grounding.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Yanyan Shao", "Shuting He", "Qi Ye", "Yuchao Feng", "Wenhan Luo", "Jiming Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f76c"}, "filepath": "data/2404.16456.png", "tags": [], "_media_type": "image", "_rand": 0.9995685171808122, "arXiv_link": "https://arxiv.org/abs/2404.16456", "other_link": "", "title": "Correlation-Decoupled Knowledge Distillation for Multimodal Sentiment Analysis with Incomplete Modalities", "abstract": "Multimodal sentiment analysis (MSA) aims to understand human sentiment\nthrough multimodal data. Most MSA efforts are based on the assumption of\nmodality completeness. However, in real-world applications, some practical\nfactors cause uncertain modality missingness, which drastically degrades the\nmodel's performance. To this end, we propose a Correlation-decoupled Knowledge\nDistillation (CorrKD) framework for the MSA task under uncertain missing\nmodalities. 
Specifically, we present a sample-level contrastive distillation\nmechanism that transfers comprehensive knowledge containing cross-sample\ncorrelations to reconstruct missing semantics. Moreover, a category-guided\nprototype distillation mechanism is introduced to capture cross-category\ncorrelations using category prototypes to align feature distributions and\ngenerate favorable joint representations. Eventually, we design a\nresponse-disentangled consistency distillation strategy to optimize the\nsentiment decision boundaries of the student network through response\ndisentanglement and mutual information maximization. Comprehensive experiments\non three datasets indicate that our framework can achieve favorable\nimprovements compared with several baselines.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Mingcheng Li", "Dingkang Yang", "Xiao Zhao", "Shuaibing Wang", "Yan Wang", "Kun Yang", "Mingyang Sun", "Dongliang Kou", "Qian", "Lihua Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f76d"}, "filepath": "data/2308.06564.png", "tags": [], "_media_type": "image", "_rand": 0.9998343493555251, "arXiv_link": "https://arxiv.org/abs/2308.06564", "other_link": "", "title": "Pose-Transformed Equivariant Network for 3D Point Trajectory Prediction", "abstract": "Accurate trajectory prediction is crucial for the safe and efficient\noperation of autonomous vehicles. The growing popularity of deep learning has\nled to the development of numerous methods for trajectory prediction. While\ndeterministic deep learning models have been widely used, deep generative\nmodels have gained popularity as they learn data distributions from training\ndata and account for trajectory uncertainties. In this study, we propose\nEquiDiff, a deep generative model for predicting future vehicle trajectories.\nEquiDiff is based on the conditional diffusion model, which generates future\ntrajectories by incorporating historical information and random Gaussian noise.\nThe backbone model of EquiDiff is an SO(2)-equivariant transformer that fully\nutilizes the geometric properties of location coordinates. In addition, we\nemploy Recurrent Neural Networks and Graph Attention Networks to extract social\ninteractions from historical trajectories. To evaluate the performance of\nEquiDiff, we conduct extensive experiments on the NGSIM dataset. Our results\ndemonstrate that EquiDiff outperforms other baseline models in short-term\nprediction, but has slightly higher errors for long-term prediction.\nFurthermore, we conduct an ablation study to investigate the contribution of\neach component of EquiDiff to the prediction accuracy. 
Additionally, we present\na visualization of the generation process of our diffusion model, providing\ninsights into the uncertainty of the prediction.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Ruixuan Yu", "Jian Sun"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f76e"}, "filepath": "data/2403.13263.png", "tags": [], "_media_type": "image", "_rand": 0.9990956327988781, "arXiv_link": "https://arxiv.org/abs/2403.13263", "other_link": "https://github.com/ivattyue/SC-Tune.", "title": "SC-Tune: Unleashing Self-Consistent Referential Comprehension in Large Vision Language Models", "abstract": "Recent trends in Large Vision Language Models (LVLMs) research have been\nincreasingly focusing on advancing beyond general image understanding towards\nmore nuanced, object-level referential comprehension. In this paper, we present\nand delve into the self-consistency capability of LVLMs, a crucial aspect that\nreflects the models' ability to both generate informative captions for specific\nobjects and subsequently utilize these captions to accurately re-identify the\nobjects in a closed-loop process. This capability significantly mirrors the\nprecision and reliability of fine-grained visual-language understanding. Our\nfindings reveal that the self-consistency level of existing LVLMs falls short\nof expectations, posing limitations on their practical applicability and\npotential. To address this gap, we introduce a novel fine-tuning paradigm named\nSelf-Consistency Tuning (SC-Tune). It features the synergistic learning of a\ncyclic describer-locator system. This paradigm is not only data-efficient but\nalso exhibits generalizability across multiple LVLMs. 
Through extensive\nexperiments, we demonstrate that SC-Tune significantly elevates performance\nacross a spectrum of object-level vision-language benchmarks and maintains\ncompetitive or improved performance on image-level vision-language benchmarks.\nBoth our model and code will be publicly available at\nhttps://github.com/ivattyue/SC-Tune.", "keywords": ["Large multimodal models and prompting techniques", "Efficient and scalable vision"], "authors_list": ["Tongtian Yue", "Jie Cheng", "Longteng Guo", "Xingyuan Dai", "Zijia Zhao", "Xingjian He", "Gang Xiong", "Yisheng Lv", "Jing Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f76f"}, "filepath": "data/2403.16646.png", "tags": [], "_media_type": "image", "_rand": 0.999611213823916, "arXiv_link": "https://arxiv.org/abs/2403.16646", "other_link": "", "title": "Clustering Propagation for Universal Medical Image Segmentation", "abstract": "Prominent solutions for medical image segmentation are typically tailored for\nautomatic or interactive setups, posing challenges in facilitating progress\nachieved in one task to another. This also\nnecessitates separate models for each task, duplicating both\ntraining time and parameters. To address above\nissues, we introduce S2VNet, a\nuniversal framework that leverages\nSlice-to-Volume propagation to unify automatic/interactive\nsegmentation within a single model and one training session. Inspired by\nclustering-based segmentation techniques, S2VNet makes full use of the\nslice-wise structure of volumetric data by initializing cluster centers from\nthe cluster results of previous slice. This\nenables knowledge acquired from prior slices to assist in the segmentation of\nthe current slice, further efficiently bridging the communication between\nremote slices using mere 2D networks. Moreover, such a framework readily\naccommodates interactive segmentation with no architectural change, simply by\ninitializing centroids from user inputs. S2VNet distinguishes itself by swift\ninference speeds and reduced memory consumption compared to prevailing 3D\nsolutions. It can also handle multi-class interactions with each of them\nserving to initialize different centroids. Experiments on three benchmarks\ndemonstrate S2VNet surpasses task-specified solutions on both\nautomatic/interactive setups.", "keywords": ["Efficient and scalable vision", "Deep learning architectures and techniques"], "authors_list": ["Yuhang Ding", "Liulei Li", "Wenguan Wang", "Yi Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f770"}, "filepath": "data/2403.00691.png", "tags": [], "_media_type": "image", "_rand": 0.9990469965214983, "arXiv_link": "https://arxiv.org/abs/2403.00691", "other_link": "", "title": "Tri-Modal Motion Retrieval by Learning a Joint Embedding Space", "abstract": "Information retrieval is an ever-evolving and crucial research domain. The\nsubstantial demand for high-quality human motion data especially in online\nacquirement has led to a surge in human motion research works. 
Prior works have\nmainly concentrated on dual-modality learning, such as text and motion tasks,\nbut three-modality learning has been rarely explored. Intuitively, an extra\nintroduced modality can enrich a model's application scenario, and more\nimportantly, an adequate choice of the extra modality can also act as an\nintermediary and enhance the alignment between the other two disparate\nmodalities. In this work, we introduce LAVIMO (LAnguage-VIdeo-MOtion\nalignment), a novel framework for three-modality learning integrating\nhuman-centric videos as an additional modality, thereby effectively bridging\nthe gap between text and motion. Moreover, our approach leverages a specially\ndesigned attention mechanism to foster enhanced alignment and synergistic\neffects among text, video, and motion modalities. Empirically, our results on\nthe HumanML3D and KIT-ML datasets show that LAVIMO achieves state-of-the-art\nperformance in various motion-related cross-modal retrieval tasks, including\ntext-to-motion, motion-to-text, video-to-motion and motion-to-video.", "keywords": ["Multimodal models and vision-language models", "Biometrics and human analysis"], "authors_list": ["Kangning Yin", "Shihao Zou", "Yuxuan Ge", "Zheng Tian"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f771"}, "filepath": "data/2312.06184.png", "tags": [], "_media_type": "image", "_rand": 0.9990105948754767, "arXiv_link": "https://arxiv.org/abs/2312.06184", "other_link": "", "title": "Rethinking Human Motion Prediction with Symplectic Integral", "abstract": "In recent years, with the continuous advancement of deep learning and the\nemergence of large-scale human motion datasets, human motion prediction\ntechnology has gradually gained prominence in various fields such as\nhuman-computer interaction, autonomous driving, sports analysis, and personnel\ntracking. This article introduces common model architectures in this domain\nalong with their respective advantages and disadvantages. It also\nsystematically summarizes recent research innovations, focusing on in-depth\ndiscussions of relevant papers in these areas, thereby highlighting\nforward-looking insights into the field's development. Furthermore, this paper\nprovides a comprehensive overview of existing methods, commonly used datasets,\nand evaluation metrics in this field. Finally, it discusses some of the current\nlimitations in the field and proposes potential future research directions to\naddress these challenges and promote further advancements in human motion\nprediction.", "keywords": [], "authors_list": ["Haipeng Chen", "Kedi L\u2006yu", "Zhenguang Liu", "Yifang Yin", "Xun Yang", "Yingda Lyu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f772"}, "filepath": "data/2310.08370.png", "tags": [], "_media_type": "image", "_rand": 0.9990926616883137, "arXiv_link": "https://arxiv.org/abs/2310.08370", "other_link": "https://github.com/Nightmare-n/UniPAD.", "title": "UniPAD: A Universal Pre-training Paradigm for Autonomous Driving", "abstract": "In the context of autonomous driving, the significance of effective feature\nlearning is widely acknowledged. 
While conventional 3D self-supervised\npre-training methods have shown widespread success, most methods follow the\nideas originally designed for 2D images. In this paper, we present UniPAD, a\nnovel self-supervised learning paradigm applying 3D volumetric differentiable\nrendering. UniPAD implicitly encodes 3D space, facilitating the reconstruction\nof continuous 3D shape structures and the intricate appearance characteristics\nof their 2D projections. The flexibility of our method enables seamless\nintegration into both 2D and 3D frameworks, enabling a more holistic\ncomprehension of the scenes. We manifest the feasibility and effectiveness of\nUniPAD by conducting extensive experiments on various downstream 3D tasks. Our\nmethod significantly improves lidar-, camera-, and lidar-camera-based baseline\nby 9.1, 7.7, and 6.9 NDS, respectively. Notably, our pre-training pipeline\nachieves 73.2 NDS for 3D object detection and 79.4 mIoU for 3D semantic\nsegmentation on the nuScenes validation set, achieving state-of-the-art results\nin comparison with previous methods. The code will be available at\nhttps://github.com/Nightmare-n/UniPAD.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Honghui Yang", "Sha Zhang", "Di Huang", "Xiaoyang Wu", "Haoyi Zhu", "Tong He", "SHIXIANG TANG", "Hengshuang Zhao", "Qibo Qiu", "Binbin Lin", "Xiaofei He", "Wanli Ouyang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f773"}, "filepath": "data/2308.09303.png", "tags": [], "_media_type": "image", "_rand": 0.9994902601884817, "arXiv_link": "https://arxiv.org/abs/2308.09303", "other_link": "", "title": "Dual-Enhanced Coreset Selection with Class-wise Collaboration for Online Blurry Class Incremental Learning", "abstract": "Continual learning aims to learn a model from a continuous stream of data,\nbut it mainly assumes a fixed number of data and tasks with clear task\nboundaries. However, in real-world scenarios, the number of input data and\ntasks is constantly changing in a statistical way, not a static way. Although\nrecently introduced incremental learning scenarios having blurry task\nboundaries somewhat address the above issues, they still do not fully reflect\nthe statistical properties of real-world situations because of the fixed ratio\nof disjoint and blurry samples. In this paper, we propose a new Stochastic\nincremental Blurry task boundary scenario, called Si-Blurry, which reflects the\nstochastic properties of the real-world. We find that there are two major\nchallenges in the Si-Blurry scenario: (1) inter- and intra-task forgettings and\n(2) class imbalance problem. To alleviate them, we introduce Mask and Visual\nPrompt tuning (MVP). In MVP, to address the inter- and intra-task forgetting\nissues, we propose a novel instance-wise logit masking and contrastive visual\nprompt tuning loss. Both of them help our model discern the classes to be\nlearned in the current batch. It results in consolidating the previous\nknowledge. 
In addition, to alleviate the class imbalance problem, we introduce\na new gradient similarity-based focal loss and adaptive feature scaling to ease\noverfitting to the major classes and underfitting to the minor classes.\nExtensive experiments show that our proposed MVP significantly outperforms the\nexisting state-of-the-art methods in our challenging Si-Blurry scenario.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yutian Luo", "Shiqi Zhao", "Haoran Wu", "Zhiwu Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f774"}, "filepath": "data/2404.08531.png", "tags": [], "_media_type": "image", "_rand": 0.9999026064375773, "arXiv_link": "https://arxiv.org/abs/2404.08531", "other_link": "", "title": "Text Prompt with Normality Guidance for Weakly Supervised Video Anomaly Detection", "abstract": "Weakly supervised video anomaly detection (WSVAD) is a challenging task.\nGenerating fine-grained pseudo-labels based on weak-label and then\nself-training a classifier is currently a promising solution. However, since\nthe existing methods use only RGB visual modality and the utilization of\ncategory text information is neglected, thus limiting the generation of more\naccurate pseudo-labels and affecting the performance of self-training. Inspired\nby the manual labeling process based on the event description, in this paper,\nwe propose a novel pseudo-label generation and self-training framework based on\nText Prompt with Normality Guidance (TPWNG) for WSVAD. Our idea is to transfer\nthe rich language-visual knowledge of the contrastive language-image\npre-training (CLIP) model for aligning the video event description text and\ncorresponding video frames to generate pseudo-labels. Specifically, We first\nfine-tune the CLIP for domain adaptation by designing two ranking losses and a\ndistributional inconsistency loss. Further, we propose a learnable text prompt\nmechanism with the assist of a normality visual prompt to further improve the\nmatching accuracy of video event description text and video frames. Then, we\ndesign a pseudo-label generation module based on the normality guidance to\ninfer reliable frame-level pseudo-labels. Finally, we introduce a temporal\ncontext self-adaptive learning module to learn the temporal dependencies of\ndifferent video events more flexibly and accurately. Extensive experiments show\nthat our method achieves state-of-the-art performance on two benchmark\ndatasets, UCF-Crime and XD-Viole", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Zhiwei Yang", "Jing Liu", "Peng Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f775"}, "filepath": "data/2404.12209.png", "tags": [], "_media_type": "image", "_rand": 0.9990394639468578, "arXiv_link": "https://arxiv.org/abs/2404.12209", "other_link": "", "title": "Partial-to-Partial Shape Matching with Geometric Consistency", "abstract": "Finding correspondences between 3D shapes is an important and long-standing\nproblem in computer vision, graphics and beyond. 
A prominent challenge are\npartial-to-partial shape matching settings, which occur when the shapes to\nmatch are only observed incompletely (e.g. from 3D scanning). Although\npartial-to-partial matching is a highly relevant setting in practice, it is\nrarely explored. Our work bridges the gap between existing (rather artificial)\n3D full shape matching and partial-to-partial real-world settings by exploiting\ngeometric consistency as a strong constraint. We demonstrate that it is indeed\npossible to solve this challenging problem in a variety of settings. For the\nfirst time, we achieve geometric consistency for partial-to-partial matching,\nwhich is realized by a novel integer non-linear program formalism building on\ntriangle product spaces, along with a new pruning algorithm based on linear\ninteger programming. Further, we generate a new inter-class dataset for\npartial-to-partial shape-matching. We show that our method outperforms current\nSOTA methods on both an established intra-class dataset and our novel\ninter-class dataset.", "keywords": [], "authors_list": ["Viktoria Ehm", "Maolin Gao", "Paul Roetzer", "Marvin Eisenberger", "Daniel Cremers", "Florian Bernard"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f776"}, "filepath": "data/2404.01692.png", "tags": [], "_media_type": "image", "_rand": 0.9995353807340731, "arXiv_link": "https://arxiv.org/abs/2404.01692", "other_link": "https://github.com/JaehaKim97/SR4IR.", "title": "Beyond Image Super-Resolution for Image Recognition with Task-Driven Perceptual Loss", "abstract": "In real-world scenarios, image recognition tasks, such as semantic\nsegmentation and object detection, often pose greater challenges due to the\nlack of information available within low-resolution (LR) content. Image\nsuper-resolution (SR) is one of the promising solutions for addressing the\nchallenges. However, due to the ill-posed property of SR, it is challenging for\ntypical SR methods to restore task-relevant high-frequency contents, which may\ndilute the advantage of utilizing the SR method. Therefore, in this paper, we\npropose Super-Resolution for Image Recognition (SR4IR) that effectively guides\nthe generation of SR images beneficial to achieving satisfactory image\nrecognition performance when processing LR images. The critical component of\nour SR4IR is the task-driven perceptual (TDP) loss that enables the SR network\nto acquire task-specific knowledge from a network tailored for a specific task.\nMoreover, we propose a cross-quality patch mix and an alternate training\nframework that significantly enhances the efficacy of the TDP loss by\naddressing potential problems when employing the TDP loss. Through extensive\nexperiments, we demonstrate that our SR4IR achieves outstanding task\nperformance by generating SR images useful for a specific image recognition\ntask, including semantic segmentation, object detection, and image\nclassification. 
The implementation code is available at\nhttps://github.com/JaehaKim97/SR4IR.", "keywords": ["Low-level vision"], "authors_list": ["Jaeha Kim", "Junghun Oh", "Kyoung Mu Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f777"}, "filepath": "data/2308.16758.png", "tags": [], "_media_type": "image", "_rand": 0.9993601729463597, "arXiv_link": "https://arxiv.org/abs/2308.16758", "other_link": "", "title": "FaceCom: Towards High-fidelity 3D Facial Shape Completion via Optimization and Inpainting Guidance", "abstract": "Generating 3D faces from textual descriptions has a multitude of\napplications, such as gaming, movie, and robotics. Recent progresses have\ndemonstrated the success of unconditional 3D face generation and text-to-3D\nshape generation. However, due to the limited text-3D face data pairs,\ntext-driven 3D face generation remains an open problem. In this paper, we\npropose a text-guided 3D faces generation method, refer as TG-3DFace, for\ngenerating realistic 3D faces using text guidance. Specifically, we adopt an\nunconditional 3D face generation framework and equip it with text conditions,\nwhich learns the text-guided 3D face generation with only text-2D face data. On\ntop of that, we propose two text-to-face cross-modal alignment techniques,\nincluding the global contrastive learning and the fine-grained alignment\nmodule, to facilitate high semantic consistency between generated 3D faces and\ninput texts. Besides, we present directional classifier guidance during the\ninference process, which encourages creativity for out-of-domain generations.\nCompared to the existing methods, TG-3DFace creates more realistic and\naesthetically pleasing 3D faces, boosting 9% multi-view consistency (MVIC) over\nLatent3D. The rendered face images generated by TG-3DFace achieve higher FID\nand CLIP score than text-to-2D face/image generation models, demonstrating our\nsuperiority in generating realistic and semantic-consistent textures.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models", "Biometrics and human analysis"], "authors_list": ["Yinglong Li", "Hongyu Wu", "Wang", "Qingzhao Qin", "yijiao zhao", "Yong Wang", "Aimin Hao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f778"}, "filepath": "data/2310.11448.png", "tags": [], "_media_type": "image", "_rand": 0.9990723583631882, "arXiv_link": "https://arxiv.org/abs/2310.11448", "other_link": "https://zju3dv.github.io/4k4d/.", "title": "4K4D: Real-Time 4D View Synthesis at 4K Resolution", "abstract": "This paper targets high-fidelity and real-time view synthesis of dynamic 3D\nscenes at 4K resolution. Recently, some methods on dynamic view synthesis have\nshown impressive rendering quality. However, their speed is still limited when\nrendering high-resolution images. To overcome this problem, we propose 4K4D, a\n4D point cloud representation that supports hardware rasterization and enables\nunprecedented rendering speed. Our representation is built on a 4D feature grid\nso that the points are naturally regularized and can be robustly optimized. 
In\naddition, we design a novel hybrid appearance model that significantly boosts\nthe rendering quality while preserving efficiency. Moreover, we develop a\ndifferentiable depth peeling algorithm to effectively learn the proposed model\nfrom RGB videos. Experiments show that our representation can be rendered at\nover 400 FPS on the DNA-Rendering dataset at 1080p resolution and 80 FPS on the\nENeRF-Outdoor dataset at 4K resolution using an RTX 4090 GPU, which is 30x\nfaster than previous methods and achieves the state-of-the-art rendering\nquality. Our project page is available at https://zju3dv.github.io/4k4d/.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Zhen Xu", "Sida Peng", "Haotong Lin", "Guangzhao He", "Jiaming Sun", "Yujun Shen", "Hujun Bao", "Xiaowei Zhou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f779"}, "filepath": "data/2311.11700.png", "tags": [], "_media_type": "image", "_rand": 0.9993871652508005, "arXiv_link": "https://arxiv.org/abs/2311.11700", "other_link": "https://gs-slam.github.io/.", "title": "GS-SLAM: Dense Visual SLAM with 3D Gaussian Splatting", "abstract": "In this paper, we introduce \\textbf{GS-SLAM} that first utilizes 3D Gaussian\nrepresentation in the Simultaneous Localization and Mapping (SLAM) system. It\nfacilitates a better balance between efficiency and accuracy. Compared to\nrecent SLAM methods employing neural implicit representations, our method\nutilizes a real-time differentiable splatting rendering pipeline that offers\nsignificant speedup to map optimization and RGB-D rendering. Specifically, we\npropose an adaptive expansion strategy that adds new or deletes noisy 3D\nGaussians in order to efficiently reconstruct new observed scene geometry and\nimprove the mapping of previously observed areas. This strategy is essential to\nextend 3D Gaussian representation to reconstruct the whole scene rather than\nsynthesize a static object in existing methods. Moreover, in the pose tracking\nprocess, an effective coarse-to-fine technique is designed to select reliable\n3D Gaussian representations to optimize camera pose, resulting in runtime\nreduction and robust estimation. Our method achieves competitive performance\ncompared with existing state-of-the-art real-time methods on the Replica,\nTUM-RGBD datasets. Project page: https://gs-slam.github.io/.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Chi Yan", "Delin Qu", "Dong Wang", "Dan Xu", "Zhigang Wang", "Bin Zhao", "Xuelong Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f77a"}, "filepath": "data/2404.00849.png", "tags": [], "_media_type": "image", "_rand": 0.9991457059986673, "arXiv_link": "https://arxiv.org/abs/2404.00849", "other_link": "", "title": "Generating Content for HDR Deghosting from Frequency View", "abstract": "Recovering ghost-free High Dynamic Range (HDR) images from multiple Low\nDynamic Range (LDR) images becomes challenging when the LDR images exhibit\nsaturation and significant motion. 
Recent Diffusion Models (DMs) have been\nintroduced in HDR imaging field, demonstrating promising performance,\nparticularly in achieving visually perceptible results compared to previous\nDNN-based methods. However, DMs require extensive iterations with large models\nto estimate entire images, resulting in inefficiency that hinders their\npractical application. To address this challenge, we propose the Low-Frequency\naware Diffusion (LF-Diff) model for ghost-free HDR imaging. The key idea of\nLF-Diff is implementing the DMs in a highly compacted latent space and\nintegrating it into a regression-based model to enhance the details of\nreconstructed images. Specifically, as low-frequency information is closely\nrelated to human visual perception we propose to utilize DMs to create compact\nlow-frequency priors for the reconstruction process. In addition, to take full\nadvantage of the above low-frequency priors, the Dynamic HDR Reconstruction\nNetwork (DHRNet) is carried out in a regression-based manner to obtain final\nHDR images. Extensive experiments conducted on synthetic and real-world\nbenchmark datasets demonstrate that our LF-Diff performs favorably against\nseveral state-of-the-art methods and is 10$\\times$ faster than previous\nDM-based methods.", "keywords": ["Efficient and scalable vision", "Low-level vision"], "authors_list": ["Tao Hu", "Qingsen Yan", "Yuankai Qi", "Yanning Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f77b"}, "filepath": "data/2312.02702.png", "tags": [], "_media_type": "image", "_rand": 0.9991502751674048, "arXiv_link": "https://arxiv.org/abs/2312.02702", "other_link": "", "title": "Neural Sign Actors: A diffusion model for 3D sign language production from text", "abstract": "Sign Languages (SL) serve as the primary mode of communication for the Deaf\nand Hard of Hearing communities. Deep learning methods for SL recognition and\ntranslation have achieved promising results. However, Sign Language Production\n(SLP) poses a challenge as the generated motions must be realistic and have\nprecise semantic meaning. Most SLP methods rely on 2D data, which hinders their\nrealism. In this work, a diffusion-based SLP model is trained on a curated\nlarge-scale dataset of 4D signing avatars and their corresponding text\ntranscripts. The proposed method can generate dynamic sequences of 3D avatars\nfrom an unconstrained domain of discourse using a diffusion process formed on a\nnovel and anatomically informed graph neural network defined on the SMPL-X body\nskeleton. Through quantitative and qualitative experiments, we show that the\nproposed method considerably outperforms previous methods of SLP. 
This work\nmakes an important step towards realistic neural sign avatars, bridging the\ncommunication gap between Deaf and hearing communities.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Vasileios Baltatzis", "Rolandos Alexandros Potamias", "Evangelos Ververas", "Guanxiong Sun", "Jiankang Deng", "Stefanos Zafeiriou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f77c"}, "filepath": "data/2312.02152.png", "tags": [], "_media_type": "image", "_rand": 0.9995544348885038, "arXiv_link": "https://arxiv.org/abs/2312.02152", "other_link": "https://github.com/georg-bn/rotation-steerers.", "title": "Steerers: A framework for rotation equivariant keypoint descriptors", "abstract": "Image keypoint descriptions that are discriminative and matchable over large\nchanges in viewpoint are vital for 3D reconstruction. However, descriptions\noutput by learned descriptors are typically not robust to camera rotation.\nWhile they can be made more robust by, e.g., data augmentation, this degrades\nperformance on upright images. Another approach is test-time augmentation,\nwhich incurs a significant increase in runtime. Instead, we learn a linear\ntransform in description space that encodes rotations of the input image. We\ncall this linear transform a steerer since it allows us to transform the\ndescriptions as if the image was rotated. From representation theory, we know\nall possible steerers for the rotation group. Steerers can be optimized (A)\ngiven a fixed descriptor, (B) jointly with a descriptor or (C) we can optimize\na descriptor given a fixed steerer. We perform experiments in these three\nsettings and obtain state-of-the-art results on the rotation invariant image\nmatching benchmarks AIMS and Roto-360. We publish code and model weights at\nhttps://github.com/georg-bn/rotation-steerers.", "keywords": [], "authors_list": ["Georg B\u00f6kman", "Johan Edstedt", "Michael Felsberg", "Fredrik Kahl"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f77d"}, "filepath": "data/2403.17188.png", "tags": [], "_media_type": "image", "_rand": 0.9990731076255721, "arXiv_link": "https://arxiv.org/abs/2403.17188", "other_link": "https://github.com/Megum1/LOTUS.", "title": "LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning", "abstract": "Backdoor attack poses a significant security threat to Deep Learning\napplications. Existing attacks are often not evasive to established backdoor\ndetection techniques. This susceptibility primarily stems from the fact that\nthese attacks typically leverage a universal trigger pattern or transformation\nfunction, such that the trigger can cause misclassification for any input. In\nresponse to this, recent papers have introduced attacks using sample-specific\ninvisible triggers crafted through special transformation functions. While\nthese approaches manage to evade detection to some extent, they reveal\nvulnerability to existing backdoor mitigation techniques. To address and\nenhance both evasiveness and resilience, we introduce a novel backdoor attack\nLOTUS. 
Specifically, it leverages a secret function to separate samples in the\nvictim class into a set of partitions and applies unique triggers to different\npartitions. Furthermore, LOTUS incorporates an effective trigger focusing\nmechanism, ensuring only the trigger corresponding to the partition can induce\nthe backdoor behavior. Extensive experimental results show that LOTUS can\nachieve high attack success rate across 4 datasets and 7 model structures, and\neffectively evading 13 backdoor detection and mitigation techniques. The code\nis available at https://github.com/Megum1/LOTUS.", "keywords": [], "authors_list": ["Siyuan Cheng", "Guanhong Tao", "Yingqi Liu", "Guangyu Shen", "Shengwei An", "Shiwei Feng", "Xiangzhe Xu", "Kaiyuan Zhang", "Shiqing Ma", "Xiangyu Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Cryptography and Security"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f77e"}, "filepath": "data/2312.01998.png", "tags": [], "_media_type": "image", "_rand": 0.9995024629849585, "arXiv_link": "https://arxiv.org/abs/2312.01998", "other_link": "https://github.com/navervision/lincir", "title": "Language-only Training of Zero-shot Composed Image Retrieval", "abstract": "Composed image retrieval (CIR) task takes a composed query of image and text,\naiming to search relative images for both conditions. Conventional CIR\napproaches need a training dataset composed of triplets of query image, query\ntext, and target image, which is very expensive to collect. Several recent\nworks have worked on the zero-shot (ZS) CIR paradigm to tackle the issue\nwithout using pre-collected triplets. However, the existing ZS-CIR methods show\nlimited backbone scalability and generalizability due to the lack of diversity\nof the input texts during training. We propose a novel CIR framework, only\nusing language for its training. Our LinCIR (Language-only training for CIR)\ncan be trained only with text datasets by a novel self-supervision named\nself-masking projection (SMP). We project the text latent embedding to the\ntoken embedding space and construct a new text by replacing the keyword tokens\nof the original text. Then, we let the new and original texts have the same\nlatent embedding vector. With this simple strategy, LinCIR is surprisingly\nefficient and highly effective; LinCIR with CLIP ViT-G backbone is trained in\n48 minutes and shows the best ZS-CIR performances on four different CIR\nbenchmarks, CIRCO, GeneCIS, FashionIQ, and CIRR, even outperforming supervised\nmethod on FashionIQ. Code is available at https://github.com/navervision/lincir", "keywords": ["Efficient and scalable vision"], "authors_list": ["Geonmo Gu", "Sanghyuk Chun", "Wonjae Kim", "Yoohoon Kang", "Sangdoo Yun"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Information Retrieval"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f77f"}, "filepath": "data/2405.11487.png", "tags": [], "_media_type": "image", "_rand": 0.9993527464431596, "arXiv_link": "https://arxiv.org/abs/2405.11487", "other_link": "", "title": "\"Previously on ...\" From Recaps to Story Summarization", "abstract": "We introduce multimodal story summarization by leveraging TV episode recaps -\nshort video sequences interweaving key story moments from previous episodes to\nbring viewers up to speed. 
We propose PlotSnap, a dataset featuring two crime\nthriller TV shows with rich recaps and long episodes of 40 minutes. Story\nsummarization labels are unlocked by matching recap shots to corresponding\nsub-stories in the episode. We propose a hierarchical model TaleSumm that\nprocesses entire episodes by creating compact shot and dialog representations,\nand predicts importance scores for each video shot and dialog utterance by\nenabling interactions between local story groups. Unlike traditional\nsummarization, our method extracts multiple plot points from long videos. We\npresent a thorough evaluation on story summarization, including promising\ncross-series generalization. TaleSumm also shows good results on classic video\nsummarization benchmarks.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Aditya Kumar Singh", "Dhruv Srivastava", "Makarand Tapaswi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f780"}, "filepath": "data/2404.01628.png", "tags": [], "_media_type": "image", "_rand": 0.9994105679959424, "arXiv_link": "https://arxiv.org/abs/2404.01628", "other_link": "", "title": "Learning Equi-angular Representations for Online Continual Learning", "abstract": "Online continual learning suffers from an underfitted solution due to\ninsufficient training for prompt model update (e.g., single-epoch training). To\naddress the challenge, we propose an efficient online continual learning method\nusing the neural collapse phenomenon. In particular, we induce neural collapse\nto form a simplex equiangular tight frame (ETF) structure in the representation\nspace so that the continuously learned model with a single epoch can better fit\nto the streamed data by proposing preparatory data training and residual\ncorrection in the representation space. With an extensive set of empirical\nvalidations using CIFAR-10/100, TinyImageNet, ImageNet-200, and ImageNet-1K, we\nshow that our proposed method outperforms state-of-the-art methods by a\nnoticeable margin in various online continual learning scenarios such as\ndisjoint and Gaussian scheduled continuous (i.e., boundary-free) data setups.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Minhyuk Seo", "Hyunseo Koh", "Wonje Jeung", "Minjae Lee", "San Kim", "Hankook Lee", "Sungjun Cho", "Sungik Choi", "Hyunwoo Kim", "Jonghyun Choi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f781"}, "filepath": "data/2312.09067.png", "tags": [], "_media_type": "image", "_rand": 0.9990668836092517, "arXiv_link": "https://arxiv.org/abs/2312.09067", "other_link": "", "title": "Holodeck: Language Guided Generation of 3D Embodied AI Environments", "abstract": "3D simulated environments play a critical role in Embodied AI, but their\ncreation requires expertise and extensive manual effort, restricting their\ndiversity and scope. To mitigate this limitation, we present Holodeck, a system\nthat generates 3D environments to match a user-supplied prompt fully\nautomatedly. 
Holodeck can generate diverse scenes, e.g., arcades, spas, and\nmuseums, adjust the designs for styles, and can capture the semantics of\ncomplex queries such as \"apartment for a researcher with a cat\" and \"office of\na professor who is a fan of Star Wars\". Holodeck leverages a large language\nmodel (i.e., GPT-4) for common sense knowledge about what the scene might look\nlike and uses a large collection of 3D assets from Objaverse to populate the\nscene with diverse objects. To address the challenge of positioning objects\ncorrectly, we prompt GPT-4 to generate spatial relational constraints between\nobjects and then optimize the layout to satisfy those constraints. Our\nlarge-scale human evaluation shows that annotators prefer Holodeck over\nmanually designed procedural baselines in residential scenes and that Holodeck\ncan produce high-quality outputs for diverse scene types. We also demonstrate\nan exciting application of Holodeck in Embodied AI, training agents to navigate\nin novel scenes like music rooms and daycares without human-constructed data,\nwhich is a significant step forward in developing general-purpose embodied\nagents.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Yue Yang", "Fan-Yun Sun", "Luca Weihs", "Eli VanderBilt", "Alvaro Herrasti", "Winson Han", "Jiajun Wu", "Nick Haber", "Ranjay Krishna", "Lingjie Liu", "Chris Callison-Burch", "Mark Yatskar", "Aniruddha Kembhavi", "Christopher Clark"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f782"}, "filepath": "data/2403.17537.png", "tags": [], "_media_type": "image", "_rand": 0.9998982739855542, "arXiv_link": "https://arxiv.org/abs/2403.17537", "other_link": "https://cnhaox.github.io/NeRF-HuGS/.", "title": "NeRF-HuGS: Improved Neural Radiance Fields in Non-static Scenes Using Heuristics-Guided Segmentation", "abstract": "Neural Radiance Field (NeRF) has been widely recognized for its excellence in\nnovel view synthesis and 3D scene reconstruction. However, their effectiveness\nis inherently tied to the assumption of static scenes, rendering them\nsusceptible to undesirable artifacts when confronted with transient distractors\nsuch as moving objects or shadows. In this work, we propose a novel paradigm,\nnamely \"Heuristics-Guided Segmentation\" (HuGS), which significantly enhances\nthe separation of static scenes from transient distractors by harmoniously\ncombining the strengths of hand-crafted heuristics and state-of-the-art\nsegmentation models, thus significantly transcending the limitations of\nprevious solutions. Furthermore, we delve into the meticulous design of\nheuristics, introducing a seamless fusion of Structure-from-Motion (SfM)-based\nheuristics and color residual heuristics, catering to a diverse range of\ntexture profiles. Extensive experiments demonstrate the superiority and\nrobustness of our method in mitigating transient distractors for NeRFs trained\nin non-static scenes. 
Project page: https://cnhaox.github.io/NeRF-HuGS/.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Jiahao Chen", "Yipeng Qin", "Lingjie Liu", "Jiangbo Lu", "Guanbin Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f783"}, "filepath": "data/2306.05172.png", "tags": [], "_media_type": "image", "_rand": 0.9996388065669176, "arXiv_link": "https://arxiv.org/abs/2306.05172", "other_link": "", "title": "FLHetBench: Benchmarking Device and State Heterogeneity in Federated Learning", "abstract": "Federated Machine Learning (FL) has received considerable attention in recent\nyears. FL benchmarks are predominantly explored in either simulated systems or\ndata center environments, neglecting the setups of real-world systems, which\nare often closely linked to edge computing. We close this research gap by\nintroducing FLEdge, a benchmark targeting FL workloads in edge computing\nsystems. We systematically study hardware heterogeneity, energy efficiency\nduring training, and the effect of various differential privacy levels on\ntraining in FL systems. To make this benchmark applicable to real-world\nscenarios, we evaluate the impact of client dropouts on state-of-the-art FL\nstrategies with failure rates as high as 50%. FLEdge provides new insights,\nsuch as that training state-of-the-art FL workloads on older GPU-accelerated\nembedded devices is up to 3x more energy efficient than on modern server-grade\nGPUs.", "keywords": [], "authors_list": ["Junyuan Zhang", "Shuang Zeng", "Miao Zhang", "Runxi Wang", "Feifei Wang", "Yuyin Zhou", "Paul Pu Liang", "Liangqiong Qu"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Distributed, Parallel, and Cluster Computing", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f784"}, "filepath": "data/2405.06536.png", "tags": [], "_media_type": "image", "_rand": 0.9997618275996306, "arXiv_link": "https://arxiv.org/abs/2405.06536", "other_link": "", "title": "Hyper-MD: Mesh Denoising with Customized Parameters Aware of Noise Intensity and Geometric Characteristics", "abstract": "Mesh denoising, aimed at removing noise from input meshes while preserving\ntheir feature structures, is a practical yet challenging task. Despite the\nremarkable progress in learning-based mesh denoising methodologies in recent\nyears, their network designs often encounter two principal drawbacks: a\ndependence on single-modal geometric representations, which fall short in\ncapturing the multifaceted attributes of meshes, and a lack of effective global\nfeature aggregation, hindering their ability to fully understand the mesh's\ncomprehensive structure. To tackle these issues, we propose SurfaceFormer, a\npioneering Transformer-based mesh denoising framework. Our first contribution\nis the development of a new representation known as Local Surface Descriptor,\nwhich is crafted by establishing polar systems on each mesh face, followed by\nsampling points from adjacent surfaces using geodesics. The normals of these\npoints are organized into 2D patches, mimicking images to capture local\ngeometric intricacies, whereas the poles and vertex coordinates are\nconsolidated into a point cloud to embody spatial information. 
This advancement\nsurmounts the hurdles posed by the irregular and non-Euclidean characteristics\nof mesh data, facilitating a smooth integration with Transformer architecture.\nNext, we propose a dual-stream structure consisting of a Geometric Encoder\nbranch and a Spatial Encoder branch, which jointly encode local geometry\ndetails and spatial information to fully explore multimodal information for\nmesh denoising. A subsequent Denoising Transformer module receives the\nmultimodal information and achieves efficient global feature aggregation\nthrough self-attention operators. Our experimental evaluations demonstrate that\nthis novel approach outperforms existing state-of-the-art methods in both\nobjective and subjective assessments, marking a significant leap forward in\nmesh denoising.", "keywords": ["Deep learning architectures and techniques", "Low-level vision"], "authors_list": ["Xingtao Wang", "Hongliang Wei", "Xiaopeng Fan", "Debin Zhao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f785"}, "filepath": "data/2403.17870.png", "tags": [], "_media_type": "image", "_rand": 0.999906515841781, "arXiv_link": "https://arxiv.org/abs/2403.17870", "other_link": "", "title": "Boosting Diffusion Models with Moving Average Sampling in Frequency Domain", "abstract": "Diffusion models have recently brought a powerful revolution in image\ngeneration. Despite showing impressive generative capabilities, most of these\nmodels rely on the current sample to denoise the next one, possibly resulting\nin denoising instability. In this paper, we reinterpret the iterative denoising\nprocess as model optimization and leverage a moving average mechanism to\nensemble all the prior samples. Instead of simply applying moving average to\nthe denoised samples at different timesteps, we first map the denoised samples\nto data space and then perform moving average to avoid distribution shift\nacross timesteps. In view that diffusion models evolve the recovery from\nlow-frequency components to high-frequency details, we further decompose the\nsamples into different frequency components and execute moving average\nseparately on each component. We name the complete approach \"Moving Average\nSampling in Frequency domain (MASF)\". MASF could be seamlessly integrated into\nmainstream pre-trained diffusion models and sampling schedules. 
Extensive\nexperiments on both unconditional and conditional diffusion models demonstrate\nthat our MASF leads to superior performances compared to the baselines, with\nalmost negligible additional complexity cost.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Yurui Qian", "Qi Cai", "Yingwei Pan", "Yehao Li", "Ting Yao", "Qibin Sun", "Tao Mei"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f786"}, "filepath": "data/2404.04848.png", "tags": [], "_media_type": "image", "_rand": 0.999526155766849, "arXiv_link": "https://arxiv.org/abs/2404.04848", "other_link": "", "title": "Task-Aware Encoder Control for Deep Video Compression", "abstract": "Prior research on deep video compression (DVC) for machine tasks typically\nnecessitates training a unique codec for each specific task, mandating a\ndedicated decoder per task. In contrast, traditional video codecs employ a\nflexible encoder controller, enabling the adaptation of a single codec to\ndifferent tasks through mechanisms like mode prediction. Drawing inspiration\nfrom this, we introduce an innovative encoder controller for deep video\ncompression for machines. This controller features a mode prediction and a\nGroup of Pictures (GoP) selection module. Our approach centralizes control at\nthe encoding stage, allowing for adaptable encoder adjustments across different\ntasks, such as detection and tracking, while maintaining compatibility with a\nstandard pre-trained DVC decoder. Empirical evidence demonstrates that our\nmethod is applicable across multiple tasks with various existing pre-trained\nDVCs. Moreover, extensive experiments demonstrate that our method outperforms\nprevious DVC by about 25% bitrate for different tasks, with only one\npre-trained decoder.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Xingtong Ge", "Jixiang Luo", "XINJIE ZHANG", "Tongda Xu", "Guo Lu", "Dailan He", "Jing Geng", "Yan Wang", "Jun Zhang", "Hongwei Qin"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f787"}, "filepath": "data/2307.10206.png", "tags": [], "_media_type": "image", "_rand": 0.9998132438977132, "arXiv_link": "https://arxiv.org/abs/2307.10206", "other_link": "https://xuenan.net/neat}.", "title": "NEAT: Distilling 3D Wireframes from Neural Attraction Fields", "abstract": "This paper studies the problem of structured 3D reconstruction using\nwireframes that consist of line segments and junctions, focusing on the\ncomputation of structured boundary geometries of scenes. Instead of leveraging\nmatching-based solutions from 2D wireframes (or line segments) for 3D wireframe\nreconstruction as done in prior arts, we present NEAT, a rendering-distilling\nformulation using neural fields to represent 3D line segments with 2D\nobservations, and bipartite matching for perceiving and distilling of a sparse\nset of 3D global junctions. 
The proposed {NEAT} enjoys the joint optimization\nof the neural fields and the global junctions from scratch, using\nview-dependent 2D observations without precomputed cross-view feature matching.\nComprehensive experiments on the DTU and BlendedMVS datasets demonstrate our\nNEAT's superiority over state-of-the-art alternatives for 3D wireframe\nreconstruction. Moreover, the distilled 3D global junctions by NEAT, are a\nbetter initialization than SfM points, for the recently-emerged 3D Gaussian\nSplatting for high-fidelity novel view synthesis using about 20 times fewer\ninitial 3D points. Project page: \\url{https://xuenan.net/neat}.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Nan Xue", "Bin Tan", "Yuxi Xiao", "Liang Dong", "Gui-Song Xia", "Tianfu Wu", "Yujun Shen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f788"}, "filepath": "data/2403.17998.png", "tags": [], "_media_type": "image", "_rand": 0.9998346116051385, "arXiv_link": "https://arxiv.org/abs/2403.17998", "other_link": "", "title": "Text Is MASS: Modeling as Stochastic Embedding for Text-Video Retrieval", "abstract": "The increasing prevalence of video clips has sparked growing interest in\ntext-video retrieval. Recent advances focus on establishing a joint embedding\nspace for text and video, relying on consistent embedding representations to\ncompute similarity. However, the text content in existing datasets is generally\nshort and concise, making it hard to fully describe the redundant semantics of\na video. Correspondingly, a single text embedding may be less expressive to\ncapture the video embedding and empower the retrieval. In this study, we\npropose a new stochastic text modeling method T-MASS, i.e., text is modeled as\na stochastic embedding, to enrich text embedding with a flexible and resilient\nsemantic range, yielding a text mass. To be specific, we introduce a\nsimilarity-aware radius module to adapt the scale of the text mass upon the\ngiven text-video pairs. Plus, we design and develop a support text\nregularization to further control the text mass during the training. The\ninference pipeline is also tailored to fully exploit the text mass for accurate\nretrieval. Empirical evidence suggests that T-MASS not only effectively\nattracts relevant text-video pairs while distancing irrelevant ones, but also\nenables the determination of precise text embeddings for relevant pairs. Our\nexperimental results show a substantial improvement of T-MASS over baseline (3%\nto 6.3% by R@1). 
Also, T-MASS achieves state-of-the-art performance on five\nbenchmark datasets, including MSRVTT, LSMDC, DiDeMo, VATEX, and Charades.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Jiamian Wang", "Guohao Sun", "Pichao Wang", "Dongfang Liu", "Sohail Dianat", "MAJID RABBANI", "Raghuveer Rao", "ZHIQIANG TAO"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f789"}, "filepath": "data/2312.11994v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992476388980617, "arXiv_link": "https://arxiv.org/abs/2312.11994v1", "other_link": "", "title": "Optimizing Diffusion Noise Can Serve As Universal Motion Priors", "abstract": "We propose Diffusion Noise Optimization (DNO), a new method that effectively\nleverages existing motion diffusion models as motion priors for a wide range of\nmotion-related tasks. Instead of training a task-specific diffusion model for\neach new task, DNO operates by optimizing the diffusion latent noise of an\nexisting pre-trained text-to-motion model. Given the corresponding latent noise\nof a human motion, it propagates the gradient from the target criteria defined\non the motion space through the whole denoising process to update the diffusion\nlatent noise. As a result, DNO supports any use cases where criteria can be\ndefined as a function of motion. In particular, we show that, for motion\nediting and control, DNO outperforms existing methods in both achieving the\nobjective and preserving the motion content. DNO accommodates a diverse range\nof editing modes, including changing trajectory, pose, joint locations, or\navoiding newly added obstacles. In addition, DNO is effective in motion\ndenoising and completion, producing smooth and realistic motion from noisy and\npartial inputs. DNO achieves these results at inference time without the need\nfor model retraining, offering great versatility for any defined reward or loss\nfunction on the motion representation.", "keywords": [], "authors_list": ["Korrawe Karunratanakul", "Konpat Preechakul", "Emre Aksan", "Thabo Beeler", "Supasorn Suwajanakorn", "Siyu Tang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f78a"}, "filepath": "data/2311.18363.png", "tags": [], "_media_type": "image", "_rand": 0.9990165130974905, "arXiv_link": "https://arxiv.org/abs/2311.18363", "other_link": "https://github.com/Chen-Ziyang/VPTTA.", "title": "Each Test Image Deserves A Specific Prompt: Continual Test-Time Adaptation for 2D Medical Image Segmentation", "abstract": "Distribution shift widely exists in medical images acquired from different\nmedical centres and poses a significant obstacle to deploying the pre-trained\nsemantic segmentation model in real-world applications. Test-time adaptation\nhas proven its effectiveness in tackling the cross-domain distribution shift\nduring inference. However, most existing methods achieve adaptation by updating\nthe pre-trained models, rendering them susceptible to error accumulation and\ncatastrophic forgetting when encountering a series of distribution shifts\n(i.e., under the continual test-time adaptation setup). 
To overcome these\nchallenges caused by updating the models, in this paper, we freeze the\npre-trained model and propose the Visual Prompt-based Test-Time Adaptation\n(VPTTA) method to train a specific prompt for each test image to align the\nstatistics in the batch normalization layers. Specifically, we present the\nlow-frequency prompt, which is lightweight with only a few parameters and can\nbe effectively trained in a single iteration. To enhance prompt initialization,\nwe equip VPTTA with a memory bank to benefit the current prompt from previous\nones. Additionally, we design a warm-up mechanism, which mixes source and\ntarget statistics to construct warm-up statistics, thereby facilitating the\ntraining process. Extensive experiments demonstrate the superiority of our\nVPTTA over other state-of-the-art methods on two medical image segmentation\nbenchmark tasks. The code and weights of pre-trained source models are\navailable at https://github.com/Chen-Ziyang/VPTTA.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Ziyang Chen", "Yongsheng Pan", "Yiwen Ye", "Mengkang Lu", "Yong Xia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f78b"}, "filepath": "data/2405.04115.png", "tags": [], "_media_type": "image", "_rand": 0.9994132641960107, "arXiv_link": "https://arxiv.org/abs/2405.04115", "other_link": "", "title": "A Stealthy Wrongdoer: Feature-Oriented Reconstruction Attack against Split Learning", "abstract": "Split Learning (SL) is a distributed learning framework renowned for its\nprivacy-preserving features and minimal computational requirements. Previous\nresearch consistently highlights the potential privacy breaches in SL systems\nby server adversaries reconstructing training data. However, these studies\noften rely on strong assumptions or compromise system utility to enhance attack\nperformance. This paper introduces a new semi-honest Data Reconstruction Attack\non SL, named Feature-Oriented Reconstruction Attack (FORA). In contrast to\nprior works, FORA relies on limited prior knowledge, specifically that the\nserver utilizes auxiliary samples from the public without knowing any client's\nprivate information. This allows FORA to conduct the attack stealthily and\nachieve robust performance. The key vulnerability exploited by FORA is the\nrevelation of the model representation preference in the smashed data output by\nvictim client. FORA constructs a substitute client through feature-level\ntransfer learning, aiming to closely mimic the victim client's representation\npreference. Leveraging this substitute client, the server trains the attack\nmodel to effectively reconstruct private data. Extensive experiments showcase\nFORA's superior performance compared to state-of-the-art methods. 
Furthermore,\nthe paper systematically evaluates the proposed method's applicability across\ndiverse settings and advanced defense strategies.", "keywords": [], "authors_list": ["Xiaoyang Xu", "Mengda Yang", "Wenzhe Yi", "Ziang Li", "Juan Wang", "Hongxin Hu", "Yong ZHUANG", "Yaxin Liu"], "category_name": "Cryptography and Security", "all_categories": ["Cryptography and Security"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f78c"}, "filepath": "data/2311.12796.png", "tags": [], "_media_type": "image", "_rand": 0.9995086904887266, "arXiv_link": "https://arxiv.org/abs/2311.12796", "other_link": "", "title": "Physics-guided Shape-from-Template: Monocular Video Perception through Neural Surrogate Models", "abstract": "3D reconstruction of dynamic scenes is a long-standing problem in computer\ngraphics and increasingly difficult the less information is available.\nShape-from-Template (SfT) methods aim to reconstruct a template-based geometry\nfrom RGB images or video sequences, often leveraging just a single monocular\ncamera without depth information, such as regular smartphone recordings.\nUnfortunately, existing reconstruction methods are either unphysical and noisy\nor slow in optimization. To solve this problem, we propose a novel SfT\nreconstruction algorithm for cloth using a pre-trained neural surrogate model\nthat is fast to evaluate, stable, and produces smooth reconstructions due to a\nregularizing physics simulation. Differentiable rendering of the simulated mesh\nenables pixel-wise comparisons between the reconstruction and a target video\nsequence that can be used for a gradient-based optimization procedure to\nextract not only shape information but also physical parameters such as\nstretching, shearing, or bending stiffness of the cloth. This allows to retain\na precise, stable, and smooth reconstructed geometry while reducing the runtime\nby a factor of 400-500 compared to $\\phi$-SfT, a state-of-the-art physics-based\nSfT approach.", "keywords": ["Computational imaging and physics-based vision", "Efficient and scalable vision"], "authors_list": ["David Stotko", "Nils Wandel", "Reinhard Klein"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f78d"}, "filepath": "data/2404.07487.png", "tags": [], "_media_type": "image", "_rand": 0.9994962385537233, "arXiv_link": "https://arxiv.org/abs/2404.07487", "other_link": "", "title": "Part-aware Unified Representation of Language and Skeleton for Zero-shot Action Recognition", "abstract": "Skeleton-based zero-shot action recognition aims to recognize unknown human\nactions based on the learned priors of the known skeleton-based actions and a\nsemantic descriptor space shared by both known and unknown categories. However,\nprevious works focus on establishing the bridges between the known skeleton\nrepresentation space and semantic descriptions space at the coarse-grained\nlevel for recognizing unknown action categories, ignoring the fine-grained\nalignment of these two spaces, resulting in suboptimal performance in\ndistinguishing high-similarity action categories. 
To address these challenges,\nwe propose a novel method via Side information and dual-prompts learning for\nskeleton-based zero-shot action recognition (STAR) at the fine-grained level.\nSpecifically, 1) we decompose the skeleton into several parts based on its\ntopology structure and introduce the side information concerning multi-part\ndescriptions of human body movements for alignment between the skeleton and the\nsemantic space at the fine-grained level; 2) we design the visual-attribute and\nsemantic-part prompts to improve the intra-class compactness within the\nskeleton space and inter-class separability within the semantic space,\nrespectively, to distinguish the high-similarity actions. Extensive experiments\nshow that our method achieves state-of-the-art performance in ZSL and GZSL\nsettings on NTU RGB+D, NTU RGB+D 120, and PKU-MMD datasets.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Anqi Zhu", "Qiuhong Ke", "Mingming Gong", "James Bailey"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f78e"}, "filepath": "data/2405.11483.png", "tags": [], "_media_type": "image", "_rand": 0.9990645171636666, "arXiv_link": "https://arxiv.org/abs/2405.11483", "other_link": "", "title": "MICap: A Unified Model for Identity-aware Movie Descriptions", "abstract": "Characters are an important aspect of any storyline and identifying and\nincluding them in descriptions is necessary for story understanding. While\nprevious work has largely ignored identity and generated captions with someone\n(anonymized names), recent work formulates id-aware captioning as a\nfill-in-the-blanks (FITB) task, where, given a caption with blanks, the goal is\nto predict person id labels. However, to predict captions with ids, a two-stage\napproach is required: first predict captions with someone, then fill in\nidentities. In this work, we present a new single stage approach that can\nseamlessly switch between id-aware caption generation or FITB when given a\ncaption with blanks. Our model, Movie-Identity Captioner (MICap), uses a shared\nauto-regressive decoder that benefits from training with FITB and full-caption\ngeneration objectives, while the encoder can benefit from or disregard captions\nwith blanks as input. Another challenge with id-aware captioning is the lack of\na metric to capture subtle differences between person ids. To this end, we\nintroduce iSPICE, a caption evaluation metric that focuses on identity tuples\ncreated through intermediate scene graphs. 
We evaluate MICap on Large-Scale\nMovie Description Challenge (LSMDC), where we show a 4.2% improvement in FITB\naccuracy, and a 1-2% bump in classic captioning metrics.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Haran Raajesh", "Naveen Reddy Desanur", "Zeeshan Khan", "Makarand Tapaswi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f78f"}, "filepath": "data/2404.04072.png", "tags": [], "_media_type": "image", "_rand": 0.9996389999670858, "arXiv_link": "https://arxiv.org/abs/2404.04072", "other_link": "https://github.com/vladan-stojnic/ZLaP", "title": "Label Propagation for Zero-shot Classification with Vision-Language Models", "abstract": "Vision-Language Models (VLMs) have demonstrated impressive performance on\nzero-shot classification, i.e. classification when provided merely with a list\nof class names. In this paper, we tackle the case of zero-shot classification\nin the presence of unlabeled data. We leverage the graph structure of the\nunlabeled data and introduce ZLaP, a method based on label propagation (LP)\nthat utilizes geodesic distances for classification. We tailor LP to graphs\ncontaining both text and image features and further propose an efficient method\nfor performing inductive inference based on a dual solution and a\nsparsification step. We perform extensive experiments to evaluate the\neffectiveness of our method on 14 common datasets and show that ZLaP\noutperforms the latest related works. Code:\nhttps://github.com/vladan-stojnic/ZLaP", "keywords": ["Efficient and scalable vision"], "authors_list": ["Vladan Stojni\u0107", "Yannis Kalantidis", "Giorgos Tolias"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f790"}, "filepath": "data/2402.07220.png", "tags": [], "_media_type": "image", "_rand": 0.9993798843113244, "arXiv_link": "https://arxiv.org/abs/2402.07220", "other_link": "", "title": "KVQ: Kwai Video Quality Assessment for Short-form Videos", "abstract": "Short-form UGC video platforms, like Kwai and TikTok, have been an emerging\nand irreplaceable mainstream media form, thriving on user-friendly engagement,\nand kaleidoscope creation, etc. However, the advancing content-generation\nmodes, e.g., special effects, and sophisticated processing workflows, e.g.,\nde-artifacts, have introduced significant challenges to recent UGC video\nquality assessment: (i) the ambiguous contents hinder the identification of\nquality-determined regions. (ii) the diverse and complicated hybrid distortions\nare hard to distinguish. To tackle the above challenges and assist in the\ndevelopment of short-form videos, we establish the first large-scale\nKaleidoscope short Video database for Quality assessment, termed KVQ, which\ncomprises 600 user-uploaded short videos and 3600 processed videos through the\ndiverse practical processing workflows, including pre-processing, transcoding,\nand enhancement. Among them, the absolute quality score of each video and\npartial ranking score among indistinguishable samples are provided by a team of\nprofessional researchers specializing in image processing. 
Based on this\ndatabase, we propose the first short-form video quality evaluator, i.e., KSVQE,\nwhich enables the quality evaluator to identify the quality-determined\nsemantics with the content understanding of large vision language models (i.e.,\nCLIP) and distinguish the distortions with the distortion understanding module.\nExperimental results have shown the effectiveness of KSVQE on our KVQ database\nand popular VQA databases.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Yiting Lu", "Xin Li", "Yajing Pei", "Kun Yuan", "Qizhi Xie", "Yunpeng Qu", "Ming Sun", "Chao Zhou", "Zhibo Chen"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f791"}, "filepath": "data/2311.16961v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990173371042681, "arXiv_link": "https://arxiv.org/abs/2311.16961v1", "other_link": "", "title": "HumanRef: Single Image to 3D Human Generation via Reference-Guided Diffusion", "abstract": "Generating a 3D human model from a single reference image is challenging\nbecause it requires inferring textures and geometries in invisible views while\nmaintaining consistency with the reference image. Previous methods utilizing 3D\ngenerative models are limited by the availability of 3D training data.\nOptimization-based methods that lift text-to-image diffusion models to 3D\ngeneration often fail to preserve the texture details of the reference image,\nresulting in inconsistent appearances in different views. In this paper, we\npropose HumanRef, a 3D human generation framework from a single-view input. To\nensure the generated 3D model is photorealistic and consistent with the input\nimage, HumanRef introduces a novel method called reference-guided score\ndistillation sampling (Ref-SDS), which effectively incorporates image guidance\ninto the generation process. Furthermore, we introduce region-aware attention\nto Ref-SDS, ensuring accurate correspondence between different body regions.\nExperimental results demonstrate that HumanRef outperforms state-of-the-art\nmethods in generating 3D clothed humans with fine geometry, photorealistic\ntextures, and view-consistent appearances.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Jingbo Zhang", "Xiaoyu Li", "Qi Zhang", "Yan-Pei Cao", "Ying Shan", "Jing Liao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f792"}, "filepath": "data/2312.10115.png", "tags": [], "_media_type": "image", "_rand": 0.9997722772694031, "arXiv_link": "https://arxiv.org/abs/2312.10115", "other_link": "", "title": "SkySense: A Multi-Modal Remote Sensing Foundation Model Towards Universal Interpretation for Earth Observation Imagery", "abstract": "Prior studies on Remote Sensing Foundation Model (RSFM) reveal immense\npotential towards a generic model for Earth Observation. Nevertheless, these\nworks primarily focus on a single modality without temporal and geo-context\nmodeling, hampering their capabilities for diverse tasks. 
In this study, we\npresent SkySense, a generic billion-scale model, pre-trained on a curated\nmulti-modal Remote Sensing Imagery (RSI) dataset with 21.5 million temporal\nsequences. SkySense incorporates a factorized multi-modal spatiotemporal\nencoder taking temporal sequences of optical and Synthetic Aperture Radar (SAR)\ndata as input. This encoder is pre-trained by our proposed Multi-Granularity\nContrastive Learning to learn representations across different modal and\nspatial granularities. To further enhance the RSI representations by the\ngeo-context clue, we introduce Geo-Context Prototype Learning to learn\nregion-aware prototypes upon RSI's multi-modal spatiotemporal features. To our\nbest knowledge, SkySense is the largest Multi-Modal RSFM to date, whose modules\ncan be flexibly combined or used individually to accommodate various tasks. It\ndemonstrates remarkable generalization capabilities on a thorough evaluation\nencompassing 16 datasets over 7 tasks, from single- to multi-modal, static to\ntemporal, and classification to localization. SkySense surpasses 18 recent\nRSFMs in all test scenarios. Specifically, it outperforms the latest models\nsuch as GFM, SatLas and Scale-MAE by a large margin, i.e., 2.76%, 3.67% and\n3.61% on average respectively. We will release the pre-trained weights to\nfacilitate future research and Earth Observation applications.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Xin Guo", "Jiangwei Lao", "Bo Dang", "Yingying Zhang", "Lei Yu", "Lixiang Ru", "Liheng Zhong", "Ziyuan Huang", "Kang Wu", "Dingxiang Hu", "HUIMEI HE", "Jian Wang", "Jingdong Chen", "Ming Yang", "Yongjun Zhang", "Yansheng Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f793"}, "filepath": "data/2311.18803.png", "tags": [], "_media_type": "image", "_rand": 0.9997735723066218, "arXiv_link": "https://arxiv.org/abs/2311.18803", "other_link": "https://imageomics.github.io/bioclip", "title": "BioCLIP: A Vision Foundation Model for the Tree of Life", "abstract": "Images of the natural world, collected by a variety of cameras, from drones\nto individual phones, are increasingly abundant sources of biological\ninformation. There is an explosion of computational methods and tools,\nparticularly computer vision, for extracting biologically relevant information\nfrom images for science and conservation. Yet most of these are bespoke\napproaches designed for a specific task and are not easily adaptable or\nextendable to new questions, contexts, and datasets. A vision model for general\norganismal biology questions on images is of timely need. To approach this, we\ncurate and release TreeOfLife-10M, the largest and most diverse ML-ready\ndataset of biology images. We then develop BioCLIP, a foundation model for the\ntree of life, leveraging the unique properties of biology captured by\nTreeOfLife-10M, namely the abundance and variety of images of plants, animals,\nand fungi, together with the availability of rich structured biological\nknowledge. We rigorously benchmark our approach on diverse fine-grained biology\nclassification tasks and find that BioCLIP consistently and substantially\noutperforms existing baselines (by 16% to 17% absolute). 
Intrinsic evaluation\nreveals that BioCLIP has learned a hierarchical representation conforming to\nthe tree of life, shedding light on its strong generalizability.\nhttps://imageomics.github.io/bioclip has models, data and code.", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Samuel Stevens", "Jiaman Wu", "Matthew Thompson", "Elizabeth Campolongo", "Chan Hee Song", "David Carlyn", "Li Dong", "Wasila Dahdul", "Charles Stewart", "Tanya Berger-Wolf", "Wei-Lun Chao", "Yu Su"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f794"}, "filepath": "data/2312.06112.png", "tags": [], "_media_type": "image", "_rand": 0.9990159664575906, "arXiv_link": "https://arxiv.org/abs/2312.06112", "other_link": "", "title": "MAFA: Managing False Negatives for Vision-Language Pre-training", "abstract": "We consider the critical issue of false negatives in Vision-Language\nPre-training (VLP), a challenge that arises from the inherent many-to-many\ncorrespondence of image-text pairs in large-scale web-crawled datasets. The\npresence of false negatives can impede achieving optimal performance and even\nlead to learning failures. To address this challenge, we propose a method\ncalled COSMO (COnverting and SMOoothing false negatives) that manages the false\nnegative issues, especially powerful in hard negative sampling. Building upon\nthe recently developed GRouped mIni-baTch sampling (GRIT) strategy, our\napproach consists of two pivotal components: 1) an efficient connection mining\nprocess that identifies and converts false negatives into positives, and 2)\nlabel smoothing for the image-text contrastive loss (ITC). Our comprehensive\nexperiments verify the effectiveness of COSMO across multiple downstream tasks,\nemphasizing the crucial role of addressing false negatives in VLP, potentially\neven surpassing the importance of addressing false positives. In addition, the\ncompatibility of COSMO with the recent BLIP-family model is also demonstrated.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Jaeseok Byun", "Dohoon Kim", "Taesup Moon"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f795"}, "filepath": "data/2312.09158.png", "tags": [], "_media_type": "image", "_rand": 0.999576173010373, "arXiv_link": "https://arxiv.org/abs/2312.09158", "other_link": "https://glee-vision.github.io", "title": "General Object Foundation Model for Images and Videos at Scale", "abstract": "We present GLEE in this work, an object-level foundation model for locating\nand identifying objects in images and videos. Through a unified framework, GLEE\naccomplishes detection, segmentation, tracking, grounding, and identification\nof arbitrary objects in the open world scenario for various object perception\ntasks. 
Adopting a cohesive learning strategy, GLEE acquires knowledge from\ndiverse data sources with varying supervision levels to formulate general\nobject representations, excelling in zero-shot transfer to new data and tasks.\nSpecifically, we employ an image encoder, text encoder, and visual prompter to\nhandle multi-modal inputs, enabling to simultaneously solve various\nobject-centric downstream tasks while maintaining state-of-the-art performance.\nDemonstrated through extensive training on over five million images from\ndiverse benchmarks, GLEE exhibits remarkable versatility and improved\ngeneralization performance, efficiently tackling downstream tasks without the\nneed for task-specific adaptation. By integrating large volumes of\nautomatically labeled data, we further enhance its zero-shot generalization\ncapabilities. Additionally, GLEE is capable of being integrated into Large\nLanguage Models, serving as a foundational model to provide universal\nobject-level information for multi-modal tasks. We hope that the versatility\nand universality of our method will mark a significant step in the development\nof efficient visual foundation models for AGI systems. The model and code will\nbe released at https://glee-vision.github.io .", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Efficient and scalable vision"], "authors_list": ["Junfeng Wu", "Yi Jiang", "Qihao Liu", "Zehuan Yuan", "Xiang Bai", "Song Bai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f796"}, "filepath": "data/2311.14749.png", "tags": [], "_media_type": "image", "_rand": 0.999055064059861, "arXiv_link": "https://arxiv.org/abs/2311.14749", "other_link": "", "title": "Troika: Multi-Path Cross-Modal Traction for Compositional Zero-Shot Learning", "abstract": "Compositional zero-shot learning aims to recognize unseen state-object\ncompositions by leveraging known primitives (state and object) during training.\nHowever, effectively modeling interactions between primitives and generalizing\nknowledge to novel compositions remains a perennial challenge. There are two\nkey factors: object-conditioned and state-conditioned variance, i.e., the\nappearance of states (or objects) can vary significantly when combined with\ndifferent objects (or states). For instance, the state \"old\" can signify a\nvintage design for a \"car\" or an advanced age for a \"cat\". In this paper, we\nargue that these variances can be mitigated by predicting composition\ncategories based on pre-observed primitive. To this end, we propose Progressive\nLanguage-based Observations (PLO), which can dynamically determine a better\nobservation order of primitives. These observations comprise a series of\nconcepts or languages that allow the model to understand image content in a\nstep-by-step manner. Specifically, PLO adopts pre-trained vision-language\nmodels (VLMs) to empower the model with observation capabilities. We further\ndevise two variants: 1) PLO-VLM: a two-step method, where a pre-observing\nclassifier dynamically determines the observation order of two primitives. 2)\nPLO-LLM: a multi-step scheme, which utilizes large language models (LLMs) to\ncraft composition-specific prompts for step-by-step observing. 
Extensive\nablations on three challenging datasets demonstrate the superiority of PLO\ncompared with state-of-the-art methods, affirming its abilities in\ncompositional recognition.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Siteng Huang", "Biao Gong", "Yutong Feng", "Zhang Min", "Yiliang Lv", "Donglin Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f797"}, "filepath": "data/2311.15596.png", "tags": [], "_media_type": "image", "_rand": 0.9996755873429134, "arXiv_link": "https://arxiv.org/abs/2311.15596", "other_link": "", "title": "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Language Models", "abstract": "Vision-language models (VLMs) have recently shown promising results in\ntraditional downstream tasks. Evaluation studies have emerged to assess their\nabilities, with the majority focusing on the third-person perspective, and only\na few addressing specific tasks from the first-person perspective. However, the\ncapability of VLMs to \"think\" from a first-person perspective, a crucial\nattribute for advancing autonomous agents and robotics, remains largely\nunexplored. To bridge this research gap, we introduce EgoThink, a novel visual\nquestion-answering benchmark that encompasses six core capabilities with twelve\ndetailed dimensions. The benchmark is constructed using selected clips from\negocentric videos, with manually annotated question-answer pairs containing\nfirst-person information. To comprehensively assess VLMs, we evaluate eighteen\npopular VLMs on EgoThink. Moreover, given the open-ended format of the answers,\nwe use GPT-4 as the automatic judge to compute single-answer grading.\nExperimental results indicate that although GPT-4V leads in numerous\ndimensions, all evaluated VLMs still possess considerable potential for\nimprovement in first-person perspective tasks. Meanwhile, enlarging the number\nof trainable parameters has the most significant impact on model performance on\nEgoThink. In conclusion, EgoThink serves as a valuable addition to existing\nevaluation benchmarks for VLMs, providing an indispensable resource for future\nresearch in the realm of embodied artificial intelligence and robotics.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Sijie Cheng", "Zhicheng Guo", "Jingwen Wu", "Kechen Fang", "Peng Li", "Huaping Liu", "Yang Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f798"}, "filepath": "data/2403.16224.png", "tags": [], "_media_type": "image", "_rand": 0.9997635929755216, "arXiv_link": "https://arxiv.org/abs/2403.16224", "other_link": "https://whyy.site/paper/nep", "title": "Inverse Rendering of Glossy Objects via the Neural Plenoptic Function and Radiance Fields", "abstract": "Inverse rendering aims at recovering both geometry and materials of objects.\nIt provides a more compatible reconstruction for conventional rendering\nengines, compared with the neural radiance fields (NeRFs). 
On the other hand,\nexisting NeRF-based inverse rendering methods cannot handle glossy objects with\nlocal light interactions well, as they typically oversimplify the illumination\nas a 2D environmental map, which assumes infinite lights only. Observing the\nsuperiority of NeRFs in recovering radiance fields, we propose a novel 5D\nNeural Plenoptic Function (NeP) based on NeRFs and ray tracing, such that more\naccurate lighting-object interactions can be formulated via the rendering\nequation. We also design a material-aware cone sampling strategy to efficiently\nintegrate lights inside the BRDF lobes with the help of pre-filtered radiance\nfields. Our method has two stages: the geometry of the target object and the\npre-filtered environmental radiance fields are reconstructed in the first\nstage, and materials of the target object are estimated in the second stage\nwith the proposed NeP and material-aware cone sampling strategy. Extensive\nexperiments on the proposed real-world and synthetic datasets demonstrate that\nour method can reconstruct high-fidelity geometry/materials of challenging\nglossy objects with complex lighting interactions from nearby objects. Project\nwebpage: https://whyy.site/paper/nep", "keywords": ["Deep learning architectures and techniques", "Computational imaging and physics-based vision"], "authors_list": ["Haoyuan Wang", "Wenbo Hu", "Lei Zhu", "Rynson W.H. Lau"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f799"}, "filepath": "data/2312.09788.png", "tags": [], "_media_type": "image", "_rand": 0.9998975747593968, "arXiv_link": "https://arxiv.org/abs/2312.09788", "other_link": "https://github.com/yasserben/CLOUDS", "title": "Collaborating Foundation models for Domain Generalized Semantic Segmentation", "abstract": "Domain Generalized Semantic Segmentation (DGSS) deals with training a model\non a labeled source domain with the aim of generalizing to unseen domains\nduring inference. Existing DGSS methods typically effectuate robust features by\nmeans of Domain Randomization (DR). Such an approach is often limited as it can\nonly account for style diversification and not content. In this work, we take\nan orthogonal approach to DGSS and propose to use an assembly of CoLlaborative\nFOUndation models for Domain Generalized Semantic Segmentation (CLOUDS). In\ndetail, CLOUDS is a framework that integrates FMs of various kinds: (i) CLIP\nbackbone for its robust feature representation, (ii) generative models to\ndiversify the content, thereby covering various modes of the possible target\ndistribution, and (iii) Segment Anything Model (SAM) for iteratively refining\nthe predictions of the segmentation model. Extensive experiments show that our\nCLOUDS excels in adapting from synthetic to real DGSS benchmarks and under\nvarying weather conditions, notably outperforming prior methods by 5.6% and\n6.7% on averaged miou, respectively. 
The code is available at :\nhttps://github.com/yasserben/CLOUDS", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Yasser Benigmim", "Subhankar Roy", "Slim Essid", "Vicky Kalogeiton", "St\u00e9phane Lathuili\u00e8re"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f79a"}, "filepath": "data/2306.10239.png", "tags": [], "_media_type": "image", "_rand": 0.9998275198580989, "arXiv_link": "https://arxiv.org/abs/2306.10239", "other_link": "", "title": "Multi-Scale Video Anomaly Detection by Multi-Grained Spatio-Temporal Representation Learning", "abstract": "Video Anomaly Detection (VAD) is an essential yet challenging task in signal\nprocessing. Since certain anomalies cannot be detected by isolated analysis of\neither temporal or spatial information, the interaction between these two types\nof data is considered crucial for VAD. However, current dual-stream\narchitectures either confine this integral interaction to the bottleneck of the\nautoencoder or introduce anomaly-irrelevant background pixels into the\ninteractive process, hindering the accuracy of VAD. To address these\ndeficiencies, we propose a Multi-scale Spatial-Temporal Interaction Network\n(MSTI-Net) for VAD. First, to prioritize the detection of moving objects in the\nscene and harmonize the substantial semantic discrepancies between the two\ntypes of data, we propose an Attention-based Spatial-Temporal Fusion Module\n(ASTFM) as a substitute for the conventional direct fusion. Furthermore, we\ninject multi-ASTFM-based connections that bridge the appearance and motion\nstreams of the dual-stream network, thus fostering multi-scale spatial-temporal\ninteraction. Finally, to bolster the delineation between normal and abnormal\nactivities, our system records the regular information in a memory module.\nExperimental results on three benchmark datasets validate the effectiveness of\nour approach, which achieves AUCs of 96.8%, 87.6%, and 73.9% on the UCSD Ped2,\nCUHK Avenue, and ShanghaiTech datasets, respectively.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Menghao Zhang", "Jingyu Wang", "Qi Qi", "Haifeng Sun", "Zirui Zhuang", "Pengfei Ren", "Ruilong Ma", "Jianxin Liao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f79b"}, "filepath": "data/2403.11256.png", "tags": [], "_media_type": "image", "_rand": 0.9999943919232248, "arXiv_link": "http://export.arxiv.org/abs/2403.11256", "other_link": "https://github.com/chenxi52/UPA.", "title": "Data-Free Quantization via Pseudo-label Filtering", "abstract": "Source-free unsupervised domain adaptation (SFUDA) aims to enable the\nutilization of a pre-trained source model in an unlabeled target domain without\naccess to source data. Self-training is a way to solve SFUDA, where confident\ntarget samples are iteratively selected as pseudo-labeled samples to guide\ntarget model learning. However, prior heuristic noisy pseudo-label filtering\nmethods all involve introducing extra models, which are sensitive to model\nassumptions and may introduce additional errors or mislabeling. 
In this work,\nwe propose a method called Uncertainty-aware Pseudo-label-filtering Adaptation\n(UPA) to efficiently address this issue in a coarse-to-fine manner. Specially,\nwe first introduce a sample selection module named Adaptive Pseudo-label\nSelection (APS), which is responsible for filtering noisy pseudo labels. The\nAPS utilizes a simple sample uncertainty estimation method by aggregating\nknowledge from neighboring samples and confident samples are selected as clean\npseudo-labeled. Additionally, we incorporate Class-Aware Contrastive Learning\n(CACL) to mitigate the memorization of pseudo-label noise by learning robust\npair-wise representation supervised by pseudo labels. Through extensive\nexperiments conducted on three widely used benchmarks, we demonstrate that our\nproposed method achieves competitive performance on par with state-of-the-art\nSFUDA methods. Code is available at https://github.com/chenxi52/UPA.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Chunxiao Fan", "Ziqi Wang", "Dan Guo", "Meng Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f79c"}, "filepath": "data/2309.13855.png", "tags": [], "_media_type": "image", "_rand": 0.9991864703579973, "arXiv_link": "https://arxiv.org/abs/2309.13855", "other_link": "", "title": "Adaptive Softassign via Hadamard-Equipped Sinkhorn", "abstract": "Softassign is a pivotal method in graph matching and other learning tasks.\nMany softassign-based algorithms exhibit performance sensitivity to a parameter\nin the softassign. However, tuning the parameter is challenging and almost done\nempirically. This paper proposes an adaptive softassign method for graph\nmatching by analyzing the relationship between the objective score and the\nparameter. This method can automatically tune the parameter based on a given\nerror bound to guarantee accuracy. The Hadamard-Equipped Sinkhorn formulas\nintroduced in this study significantly enhance the efficiency and stability of\nthe adaptive softassign. Moreover, these formulas can also be used in optimal\ntransport problems. The resulting adaptive softassign graph matching algorithm\nenjoys significantly higher accuracy than previous state-of-the-art large graph\nmatching algorithms while maintaining comparable efficiency.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Binrui Shen", "Qiang Niu", "Shengxin Zhu"], "category_name": "Optimization and Control", "all_categories": ["Optimization and Control", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f79d"}, "filepath": "data/2401.01647.png", "tags": [], "_media_type": "image", "_rand": 0.9994847609400913, "arXiv_link": "https://arxiv.org/abs/2401.01647", "other_link": "", "title": "SIGNeRF: Scene Integrated Generation for Neural Radiance Fields", "abstract": "Advances in image diffusion models have recently led to notable improvements\nin the generation of high-quality images. In combination with Neural Radiance\nFields (NeRFs), they enabled new opportunities in 3D generation. However, most\ngenerative 3D approaches are object-centric and applying them to editing\nexisting photorealistic scenes is not trivial. We propose SIGNeRF, a novel\napproach for fast and controllable NeRF scene editing and scene-integrated\nobject generation. 
A new generative update strategy ensures 3D consistency\nacross the edited images, without requiring iterative optimization. We find\nthat depth-conditioned diffusion models inherently possess the capability to\ngenerate 3D consistent views by requesting a grid of images instead of single\nviews. Based on these insights, we introduce a multi-view reference sheet of\nmodified images. Our method updates an image collection consistently based on\nthe reference sheet and refines the original NeRF with the newly generated\nimage set in one go. By exploiting the depth conditioning mechanism of the\nimage diffusion model, we gain fine control over the spatial location of the\nedit and enforce shape guidance by a selected region or an external mesh.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Jan-Niklas Dihlmann", "Andreas Engelhardt", "Hendrik Lensch"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f79e"}, "filepath": "data/2310.12982.png", "tags": [], "_media_type": "image", "_rand": 0.9994630884786748, "arXiv_link": "https://arxiv.org/abs/2310.12982", "other_link": "https://hkchengrex.github.io/Cutie", "title": "Putting the Object Back into Video Object Segmentation", "abstract": "We present Cutie, a video object segmentation (VOS) network with object-level\nmemory reading, which puts the object representation from memory back into the\nvideo object segmentation result. Recent works on VOS employ bottom-up\npixel-level memory reading which struggles due to matching noise, especially in\nthe presence of distractors, resulting in lower performance in more challenging\ndata. In contrast, Cutie performs top-down object-level memory reading by\nadapting a small set of object queries. Via those, it interacts with the\nbottom-up pixel features iteratively with a query-based object transformer (qt,\nhence Cutie). The object queries act as a high-level summary of the target\nobject, while high-resolution feature maps are retained for accurate\nsegmentation. Together with foreground-background masked attention, Cutie\ncleanly separates the semantics of the foreground object from the background.\nOn the challenging MOSE dataset, Cutie improves by 8.7 J&F over XMem with a\nsimilar running time and improves by 4.2 J&F over DeAOT while being three times\nfaster. Code is available at: https://hkchengrex.github.io/Cutie", "keywords": ["Efficient and scalable vision"], "authors_list": ["Ho Kei Cheng", "Seoung Wug Oh", "Brian Price", "Joon-Young Lee", "Alexander G. Schwing"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f79f"}, "filepath": "data/2403.09630.png", "tags": [], "_media_type": "image", "_rand": 0.9994859105791934, "arXiv_link": "https://arxiv.org/abs/2403.09630", "other_link": "", "title": "Generalized Predictive Model for Autonomous Driving", "abstract": "In this paper, we introduce the first large-scale video prediction model in\nthe autonomous driving discipline. To eliminate the restriction of high-cost\ndata collection and empower the generalization ability of our model, we acquire\nmassive data from the web and pair it with diverse and high-quality text\ndescriptions. 
The resultant dataset accumulates over 2000 hours of driving\nvideos, spanning areas all over the world with diverse weather conditions and\ntraffic scenarios. Inheriting the merits from recent latent diffusion models,\nour model, dubbed GenAD, handles the challenging dynamics in driving scenes\nwith novel temporal reasoning blocks. We showcase that it can generalize to\nvarious unseen driving datasets in a zero-shot manner, surpassing general or\ndriving-specific video prediction counterparts. Furthermore, GenAD can be\nadapted into an action-conditioned prediction model or a motion planner,\nholding great potential for real-world driving applications.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Jiazhi Yang", "Shenyuan Gao", "Yihang Qiu", "Li Chen", "Tianyu Li", "Bo Dai", "Kashyap Chitta", "Penghao Wu", "Jia Zeng", "Ping Luo", "Jun Zhang", "Andreas Geiger", "Yu Qiao", "Hongyang Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7a0"}, "filepath": "data/2305.11468v3.png", "tags": [], "_media_type": "image", "_rand": 0.9992843499906073, "arXiv_link": "https://arxiv.org/html/2305.11468v3", "other_link": "", "title": "BlockGCN: Redefine Topology Awareness for Skeleton-Based Action Recognition", "abstract": "Graph Convolutional Networks (GCNs) have long defined the state-of-the-art in\nskeleton-based action recognition, leveraging their ability to unravel the\ncomplex dynamics of human joint topology through the graph's adjacency matrix.\nHowever, an inherent flaw has come to light in these cutting-edge models: they\ntend to optimize the adjacency matrix jointly with the model weights. This\nprocess, while seemingly efficient, causes a gradual decay of bone connectivity\ndata, culminating in a model indifferent to the very topology it sought to map.\nAs a remedy, we propose a threefold strategy: (1) We forge an innovative\npathway that encodes bone connectivity by harnessing the power of graph\ndistances. This approach preserves the vital topological nuances often lost in\nconventional GCNs. (2) We highlight an oft-overlooked feature - the temporal\nmean of a skeletal sequence, which, despite its modest guise, carries highly\naction-specific information. (3) Our investigation revealed strong variations\nin joint-to-joint relationships across different actions. This finding exposes\nthe limitations of a single adjacency matrix in capturing the variations of\nrelational configurations emblematic of human movement, which we remedy by\nproposing an efficient refinement to Graph Convolutions (GC) - the BlockGC.\nThis evolution slashes parameters by a substantial margin (above 40%), while\nelevating performance beyond original GCNs. Our full model, the BlockGCN,\nestablishes new standards in skeleton-based action recognition for small model\nsizes. 
Its high accuracy, notably on the large-scale NTU RGB+D 120 dataset,\nstand as compelling proof of the efficacy of BlockGCN.", "keywords": ["Efficient and scalable vision", "Biometrics and human analysis"], "authors_list": ["Yuxuan Zhou", "Xudong Yan", "Zhi-Qi Cheng", "Yan Yan", "Qi Dai", "Xian-Sheng Hua"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7a1"}, "filepath": "data/2311.18830.png", "tags": [], "_media_type": "image", "_rand": 0.9993921986839336, "arXiv_link": "https://arxiv.org/abs/2311.18830", "other_link": "", "title": "MotionEditor: Editing Video Motion via Content-Aware Diffusion", "abstract": "Existing diffusion-based video editing models have made gorgeous advances for\nediting attributes of a source video over time but struggle to manipulate the\nmotion information while preserving the original protagonist's appearance and\nbackground. To address this, we propose MotionEditor, a diffusion model for\nvideo motion editing. MotionEditor incorporates a novel content-aware motion\nadapter into ControlNet to capture temporal motion correspondence. While\nControlNet enables direct generation based on skeleton poses, it encounters\nchallenges when modifying the source motion in the inverted noise due to\ncontradictory signals between the noise (source) and the condition (reference).\nOur adapter complements ControlNet by involving source content to transfer\nadapted control signals seamlessly. Further, we build up a two-branch\narchitecture (a reconstruction branch and an editing branch) with a\nhigh-fidelity attention injection mechanism facilitating branch interaction.\nThis mechanism enables the editing branch to query the key and value from the\nreconstruction branch in a decoupled manner, making the editing branch retain\nthe original background and protagonist appearance. We also propose a skeleton\nalignment algorithm to address the discrepancies in pose size and position.\nExperiments demonstrate the promising motion editing ability of MotionEditor,\nboth qualitatively and quantitatively.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Shuyuan Tu", "Qi Dai", "Zhi-Qi Cheng", "Han Hu", "Xintong Han", "Zuxuan Wu", "Yu-Gang Jiang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7a2"}, "filepath": "data/2312.02981v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991961766067396, "arXiv_link": "https://arxiv.org/abs/2312.02981v1", "other_link": "", "title": "ReconFusion: 3D Reconstruction with Diffusion Priors", "abstract": "3D reconstruction methods such as Neural Radiance Fields (NeRFs) excel at\nrendering photorealistic novel views of complex scenes. However, recovering a\nhigh-quality NeRF typically requires tens to hundreds of input images,\nresulting in a time-consuming capture process. We present ReconFusion to\nreconstruct real-world scenes using only a few photos. Our approach leverages a\ndiffusion prior for novel view synthesis, trained on synthetic and multiview\ndatasets, which regularizes a NeRF-based 3D reconstruction pipeline at novel\ncamera poses beyond those captured by the set of input images. 
Our method\nsynthesizes realistic geometry and texture in underconstrained regions while\npreserving the appearance of observed regions. We perform an extensive\nevaluation across various real-world datasets, including forward-facing and\n360-degree scenes, demonstrating significant performance improvements over\nprevious few-view NeRF reconstruction approaches.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Rundi Wu", "Ben Mildenhall", "Philipp Henzler", "Ruiqi Gao", "Keunhong Park", "Daniel Watson", "Pratul P. Srinivasan", "Dor Verbin", "Jonathan T. Barron", "Ben Poole", "Aleksander Holynski"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7a3"}, "filepath": "data/2312.17742.png", "tags": [], "_media_type": "image", "_rand": 0.9991449422345654, "arXiv_link": "https://arxiv.org/abs/2312.17742", "other_link": "", "title": "Learning Vision from Models Rivals Learning Vision from Data", "abstract": "We introduce SynCLR, a novel approach for learning visual representations\nexclusively from synthetic images and synthetic captions, without any real\ndata. We synthesize a large dataset of image captions using LLMs, then use an\noff-the-shelf text-to-image model to generate multiple images corresponding to\neach synthetic caption. We perform visual representation learning on these\nsynthetic images via contrastive learning, treating images sharing the same\ncaption as positive pairs. The resulting representations transfer well to many\ndownstream tasks, competing favorably with other general-purpose visual\nrepresentation learners such as CLIP and DINO v2 in image classification tasks.\nFurthermore, in dense prediction tasks such as semantic segmentation, SynCLR\noutperforms previous self-supervised methods by a significant margin, e.g.,\nimproving over MAE and iBOT by 6.2 and 4.3 mIoU on ADE20k for ViT-B/16.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Yonglong Tian", "Lijie Fan", "Kaifeng Chen", "Dina Katabi", "Dilip Krishnan", "Phillip Isola"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7a4"}, "filepath": "data/2404.00710.png", "tags": [], "_media_type": "image", "_rand": 0.9990551438478882, "arXiv_link": "https://arxiv.org/abs/2404.00710", "other_link": "https://github.com/mainaksingha01/ODG-CLIP.", "title": "Unknown Prompt, the only Lacuna: Unveiling CLIP's Potential for Open Domain Generalization", "abstract": "We delve into Open Domain Generalization (ODG), marked by domain and category\nshifts between training's labeled source and testing's unlabeled target\ndomains. Existing solutions to ODG face limitations due to constrained\ngeneralizations of traditional CNN backbones and errors in detecting target\nopen samples in the absence of prior knowledge. Addressing these pitfalls, we\nintroduce ODG-CLIP, harnessing the semantic prowess of the vision-language\nmodel, CLIP. Our framework brings forth three primary innovations: Firstly,\ndistinct from prevailing paradigms, we conceptualize ODG as a multi-class\nclassification challenge encompassing both known and novel categories. 
Central\nto our approach is modeling a unique prompt tailored for detecting unknown\nclass samples, and to train this, we employ a readily accessible stable\ndiffusion model, elegantly generating proxy images for the open class.\nSecondly, aiming for domain-tailored classification (prompt) weights while\nensuring a balance of precision and simplicity, we devise a novel visual\nstylecentric prompt learning mechanism. Finally, we infuse images with\nclass-discriminative knowledge derived from the prompt space to augment the\nfidelity of CLIP's visual embeddings. We introduce a novel objective to\nsafeguard the continuity of this infused semantic intel across domains,\nespecially for the shared classes. Through rigorous testing on diverse\ndatasets, covering closed and open-set DG contexts, ODG-CLIP demonstrates clear\nsupremacy, consistently outpacing peers with performance boosts between 8%-16%.\nCode will be available at https://github.com/mainaksingha01/ODG-CLIP.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Mainak Singha", "Ankit Jha", "Shirsha Bose", "Ashwin Nair", "Moloud Abdar", "Biplab Banerjee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7a5"}, "filepath": "data/2403.18469.png", "tags": [], "_media_type": "image", "_rand": 0.9997404954058565, "arXiv_link": "https://arxiv.org/abs/2403.18469", "other_link": "https://github.com/yuan-zm/DGT-ST}.", "title": "Density-guided Translator Boosts Synthetic-to-Real Unsupervised Domain Adaptive Segmentation of 3D Point Clouds", "abstract": "3D synthetic-to-real unsupervised domain adaptive segmentation is crucial to\nannotating new domains. Self-training is a competitive approach for this task,\nbut its performance is limited by different sensor sampling patterns (i.e.,\nvariations in point density) and incomplete training strategies. In this work,\nwe propose a density-guided translator (DGT), which translates point density\nbetween domains, and integrates it into a two-stage self-training pipeline\nnamed DGT-ST. First, in contrast to existing works that simultaneously conduct\ndata generation and feature/output alignment within unstable adversarial\ntraining, we employ the non-learnable DGT to bridge the domain gap at the input\nlevel. Second, to provide a well-initialized model for self-training, we\npropose a category-level adversarial network in stage one that utilizes the\nprototype to prevent negative transfer. Finally, by leveraging the designs\nabove, a domain-mixed self-training method with source-aware consistency loss\nis proposed in stage two to narrow the domain gap further. Experiments on two\nsynthetic-to-real segmentation tasks (SynLiDAR $\\rightarrow$ semanticKITTI and\nSynLiDAR $\\rightarrow$ semanticPOSS) demonstrate that DGT-ST outperforms\nstate-of-the-art methods, achieving 9.4$\\%$ and 4.3$\\%$ mIoU improvements,\nrespectively. 
Code is available at \\url{https://github.com/yuan-zm/DGT-ST}.", "keywords": [], "authors_list": ["Zhimin Yuan", "Wankang Zeng", "Yanfei Su", "Weiquan Liu", "Ming Cheng", "Yulan Guo", "Cheng Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7a6"}, "filepath": "data/2404.00269.png", "tags": [], "_media_type": "image", "_rand": 0.9994744422080857, "arXiv_link": "https://arxiv.org/abs/2404.00269", "other_link": "https://yushuang-wu.github.io/IPoD.", "title": "IPoD: Implicit Field Learning with Point Diffusion for Generalizable 3D Object Reconstruction from Single RGB-D Images", "abstract": "Generalizable 3D object reconstruction from single-view RGB-D images remains\na challenging task, particularly with real-world data. Current state-of-the-art\nmethods develop Transformer-based implicit field learning, necessitating an\nintensive learning paradigm that requires dense query-supervision uniformly\nsampled throughout the entire space. We propose a novel approach, IPoD, which\nharmonizes implicit field learning with point diffusion. This approach treats\nthe query points for implicit field learning as a noisy point cloud for\niterative denoising, allowing for their dynamic adaptation to the target object\nshape. Such adaptive query points harness diffusion learning's capability for\ncoarse shape recovery and also enhances the implicit representation's ability\nto delineate finer details. Besides, an additional self-conditioning mechanism\nis designed to use implicit predictions as the guidance of diffusion learning,\nleading to a cooperative system. Experiments conducted on the CO3D-v2 dataset\naffirm the superiority of IPoD, achieving 7.8% improvement in F-score and 28.6%\nin Chamfer distance over existing methods. The generalizability of IPoD is also\ndemonstrated on the MVImgNet dataset. Our project page is at\nhttps://yushuang-wu.github.io/IPoD.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yushuang Wu", "Luyue Shi", "Junhao Cai", "Weihao Yuan", "Lingteng Qiu", "Zilong Dong", "Liefeng Bo", "Shuguang Cui", "Xiaoguang Han"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7a7"}, "filepath": "data/2312.02963.png", "tags": [], "_media_type": "image", "_rand": 0.9998651728173008, "arXiv_link": "https://arxiv.org/abs/2312.02963", "other_link": "", "title": "MVHumanNet: A Large-scale Dataset of Multi-view Daily Dressing Human Captures", "abstract": "In this era, the success of large language models and text-to-image models\ncan be attributed to the driving force of large-scale datasets. However, in the\nrealm of 3D vision, while remarkable progress has been made with models trained\non large-scale synthetic and real-captured object data like Objaverse and\nMVImgNet, a similar level of progress has not been observed in the domain of\nhuman-centric tasks partially due to the lack of a large-scale human dataset.\nExisting datasets of high-fidelity 3D human capture continue to be mid-sized\ndue to the significant challenges in acquiring large-scale high-quality 3D\nhuman data. To bridge this gap, we present MVHumanNet, a dataset that comprises\nmulti-view human action sequences of 4,500 human identities. 
The primary focus\nof our work is on collecting human data that features a large number of diverse\nidentities and everyday clothing using a multi-view human capture system, which\nfacilitates easily scalable data collection. Our dataset contains 9,000 daily\noutfits, 60,000 motion sequences and 645 million frames with extensive\nannotations, including human masks, camera parameters, 2D and 3D keypoints,\nSMPL/SMPLX parameters, and corresponding textual descriptions. To explore the\npotential of MVHumanNet in various 2D and 3D visual tasks, we conducted pilot\nstudies on view-consistent action recognition, human NeRF reconstruction,\ntext-driven view-unconstrained human image generation, as well as 2D\nview-unconstrained human image and 3D avatar generation. Extensive experiments\ndemonstrate the performance improvements and effective applications enabled by\nthe scale provided by MVHumanNet. As the current largest-scale 3D human\ndataset, we hope that the release of MVHumanNet data with annotations will\nfoster further innovations in the domain of 3D human-centric tasks at scale.", "keywords": ["Biometrics and human analysis", "Image and video generation and manipulation"], "authors_list": ["Zhangyang Xiong", "Chenghong Li", "Kenkun Liu", "Hongjie Liao", "Jianqiao HU", "Junyi Zhu", "Shuliang Ning", "Lingteng Qiu", "Chongjie Wang", "Shijie Wang", "Shuguang Cui", "Xiaoguang Han"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7a8"}, "filepath": "data/2312.11392.png", "tags": [], "_media_type": "image", "_rand": 0.9997702783419001, "arXiv_link": "https://arxiv.org/abs/2312.11392", "other_link": "https://scedit.github.io/}", "title": "SCEdit: Efficient and Controllable Image Diffusion Generation via Skip Connection Editing", "abstract": "Image diffusion models have been utilized in various tasks, such as\ntext-to-image generation and controllable image synthesis. Recent research has\nintroduced tuning methods that make subtle adjustments to the original models,\nyielding promising results in specific adaptations of foundational generative\ndiffusion models. Rather than modifying the main backbone of the diffusion\nmodel, we delve into the role of skip connection in U-Net and reveal that\nhierarchical features aggregating long-distance information across encoder and\ndecoder make a significant impact on the content and quality of image\ngeneration. Based on the observation, we propose an efficient generative tuning\nframework, dubbed SCEdit, which integrates and edits Skip Connection using a\nlightweight tuning module named SC-Tuner. Furthermore, the proposed framework\nallows for straightforward extension to controllable image synthesis by\ninjecting different conditions with Controllable SC-Tuner, simplifying and\nunifying the network design for multi-condition inputs. Our SCEdit\nsubstantially reduces training parameters, memory usage, and computational\nexpense due to its lightweight tuners, with backward propagation only passing\nto the decoder blocks. Extensive experiments conducted on text-to-image\ngeneration and controllable image synthesis tasks demonstrate the superiority\nof our method in terms of efficiency and performance. 
Project page:\n\\url{https://scedit.github.io/}", "keywords": ["Efficient and scalable vision"], "authors_list": ["Zeyinzi Jiang", "Chaojie Mao", "Yulin Pan", "Zhen Han", "Jingfeng Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7a9"}, "filepath": "data/2312.14988v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999441783700922, "arXiv_link": "https://arxiv.org/html/2312.14988v1", "other_link": "", "title": "Revisiting Non-Autoregressive Transformers for Efficient Image Synthesis", "abstract": "Autoregressive and diffusion models drive the recent breakthroughs on\ntext-to-image generation. Despite their huge success of generating\nhigh-realistic images, a common shortcoming of these models is their high\ninference latency - autoregressive models run more than a thousand times\nsuccessively to produce image tokens and diffusion models convert Gaussian\nnoise into images with many hundreds of denoising steps. In this work, we\nexplore non-autoregressive text-to-image models that efficiently generate\nhundreds of image tokens in parallel. We develop many model variations with\ndifferent learning and inference strategies, initialized text encoders, etc.\nCompared with autoregressive baselines that needs to run one thousand times,\nour model only runs 16 times to generate images of competitive quality with an\norder of magnitude lower inference latency. Our non-autoregressive model with\n346M parameters generates an image of 256$\\times$256 with about one second on\none V100 GPU.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Zanlin Ni", "Yulin Wang", "Renping Zhou", "Jiayi Guo", "Jinyi Hu", "Zhiyuan Liu", "Shiji Song", "Yuan Yao", "Gao Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7aa"}, "filepath": "data/2401.06578.png", "tags": [], "_media_type": "image", "_rand": 0.9999904520124325, "arXiv_link": "https://arxiv.org/abs/2401.06578", "other_link": "https://akaneqwq.github.io/360DVD/.", "title": "360DVD: Controllable Panorama Video Generation with 360-Degree Video Diffusion Model", "abstract": "Panorama video recently attracts more interest in both study and application,\ncourtesy of its immersive experience. Due to the expensive cost of capturing\n360-degree panoramic videos, generating desirable panorama videos by prompts is\nurgently required. Lately, the emerging text-to-video (T2V) diffusion methods\ndemonstrate notable effectiveness in standard video generation. However, due to\nthe significant gap in content and motion patterns between panoramic and\nstandard videos, these methods encounter challenges in yielding satisfactory\n360-degree panoramic videos. In this paper, we propose a pipeline named\n360-Degree Video Diffusion model (360DVD) for generating 360-degree panoramic\nvideos based on the given prompts and motion conditions. Specifically, we\nintroduce a lightweight 360-Adapter accompanied by 360 Enhancement Techniques\nto transform pre-trained T2V models for panorama video generation. 
We further\npropose a new panorama dataset named WEB360 consisting of panoramic video-text\npairs for training 360DVD, addressing the absence of captioned panoramic video\ndatasets. Extensive experiments demonstrate the superiority and effectiveness\nof 360DVD for panorama video generation. Our project page is at\nhttps://akaneqwq.github.io/360DVD/.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Qian Wang", "Weiqi Li", "Chong Mou", "Xinhua Cheng", "Jian Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7ab"}, "filepath": "data/2405.04741.png", "tags": [], "_media_type": "image", "_rand": 0.999236315453364, "arXiv_link": "https://arxiv.org/abs/2405.04741", "other_link": "", "title": "All in One Framework for Multimodal Re-identification in the Wild", "abstract": "In Re-identification (ReID), recent advancements yield noteworthy progress in\nboth unimodal and cross-modal retrieval tasks. However, the challenge persists\nin developing a unified framework that could effectively handle varying\nmultimodal data, including RGB, infrared, sketches, and textual information.\nAdditionally, the emergence of large-scale models shows promising performance\nin various vision tasks but the foundation model in ReID is still blank. In\nresponse to these challenges, a novel multimodal learning paradigm for ReID is\nintroduced, referred to as All-in-One (AIO), which harnesses a frozen\npre-trained big model as an encoder, enabling effective multimodal retrieval\nwithout additional fine-tuning. The diverse multimodal data in AIO are\nseamlessly tokenized into a unified space, allowing the modality-shared frozen\nencoder to extract identity-consistent features comprehensively across all\nmodalities. Furthermore, a meticulously crafted ensemble of cross-modality\nheads is designed to guide the learning trajectory. AIO is the \\textbf{first}\nframework to perform all-in-one ReID, encompassing four commonly used\nmodalities. Experiments on cross-modal and multimodal ReID reveal that AIO not\nonly adeptly handles various modal data but also excels in challenging\ncontexts, showcasing exceptional performance in zero-shot and domain\ngeneralization scenarios.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["He Li", "Mang Ye", "Ming Zhang", "Bo Du"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7ac"}, "filepath": "data/2403.17005v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992956242343691, "arXiv_link": "https://arxiv.org/abs/2403.17005v1", "other_link": "https://trip-i2v.github.io/TRIP/.", "title": "TRIP: Temporal Residual Learning with Image Noise Prior for Image-to-Video Diffusion Models", "abstract": "Recent advances in text-to-video generation have demonstrated the utility of\npowerful diffusion models. Nevertheless, the problem is not trivial when\nshaping diffusion models to animate static image (i.e., image-to-video\ngeneration). 
The difficulty originates from the aspect that the diffusion\nprocess of subsequent animated frames should not only preserve the faithful\nalignment with the given image but also pursue temporal coherence among\nadjacent frames. To alleviate this, we present TRIP, a new recipe of\nimage-to-video diffusion paradigm that pivots on image noise prior derived from\nstatic image to jointly trigger inter-frame relational reasoning and ease the\ncoherent temporal modeling via temporal residual learning. Technically, the\nimage noise prior is first attained through one-step backward diffusion process\nbased on both static image and noised video latent codes. Next, TRIP executes a\nresidual-like dual-path scheme for noise prediction: 1) a shortcut path that\ndirectly takes image noise prior as the reference noise of each frame to\namplify the alignment between the first frame and subsequent frames; 2) a\nresidual path that employs 3D-UNet over noised video and static image latent\ncodes to enable inter-frame relational reasoning, thereby easing the learning\nof the residual noise for each frame. Furthermore, both reference and residual\nnoise of each frame are dynamically merged via attention mechanism for final\nvideo generation. Extensive experiments on WebVid-10M, DTDB and MSR-VTT\ndatasets demonstrate the effectiveness of our TRIP for image-to-video\ngeneration. Please see our project page at https://trip-i2v.github.io/TRIP/.", "keywords": [], "authors_list": ["Zhongwei Zhang", "Fuchen Long", "Yingwei Pan", "Zhaofan Qiu", "Ting Yao", "Yang Cao", "Tao Mei"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7ad"}, "filepath": "data/2401.14405.png", "tags": [], "_media_type": "image", "_rand": 0.9997453488317388, "arXiv_link": "https://arxiv.org/abs/2401.14405", "other_link": "https://github.com/AILab-CVC/M2PT.", "title": "Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities", "abstract": "We propose to improve transformers of a specific modality with irrelevant\ndata from other modalities, e.g., improve an ImageNet model with audio or point\ncloud datasets. We would like to highlight that the data samples of the target\nmodality are irrelevant to the other modalities, which distinguishes our method\nfrom other works utilizing paired (e.g., CLIP) or interleaved data of different\nmodalities. We propose a methodology named Multimodal Pathway - given a target\nmodality and a transformer designed for it, we use an auxiliary transformer\ntrained with data of another modality and construct pathways to connect\ncomponents of the two models so that data of the target modality can be\nprocessed by both models. In this way, we utilize the universal\nsequence-to-sequence modeling abilities of transformers obtained from two\nmodalities. As a concrete implementation, we use a modality-specific tokenizer\nand task-specific head as usual but utilize the transformer blocks of the\nauxiliary model via a proposed method named Cross-Modal Re-parameterization,\nwhich exploits the auxiliary weights without any inference costs. 
On the image,\npoint cloud, video, and audio recognition tasks, we observe significant and\nconsistent performance improvements with irrelevant data from other modalities.\nThe code and models are available at https://github.com/AILab-CVC/M2PT.", "keywords": ["Deep learning architectures and techniques", "Large multimodal models and prompting techniques"], "authors_list": ["Yiyuan Zhang", "Xiaohan Ding", "Kaixiong Gong", "Yixiao Ge", "Ying Shan", "Xiangyu Yue"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7ae"}, "filepath": "data/2402.00863.png", "tags": [], "_media_type": "image", "_rand": 0.9997537968262425, "arXiv_link": "https://arxiv.org/abs/2402.00863", "other_link": "", "title": "Geometry Transfer for Stylizing Radiance Fields", "abstract": "Shape and geometric patterns are essential in defining stylistic identity.\nHowever, current 3D style transfer methods predominantly focus on transferring\ncolors and textures, often overlooking geometric aspects. In this paper, we\nintroduce Geometry Transfer, a novel method that leverages geometric\ndeformation for 3D style transfer. This technique employs depth maps to extract\na style guide, subsequently applied to stylize the geometry of radiance fields.\nMoreover, we propose new techniques that utilize geometric cues from the 3D\nscene, thereby enhancing aesthetic expressiveness and more accurately\nreflecting intended styles. Our extensive experiments show that Geometry\nTransfer enables a broader and more expressive range of stylizations, thereby\nsignificantly expanding the scope of 3D style transfer.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Hyunyoung Jung", "Seonghyeon Nam", "Nikolaos Sarafianos", "Sungjoo Yoo", "Alexander Sorkine-Hornung", "Rakesh Ranjan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7af"}, "filepath": "data/2404.06918.png", "tags": [], "_media_type": "image", "_rand": 0.9992427026247852, "arXiv_link": "https://arxiv.org/abs/2404.06918", "other_link": "", "title": "HRVDA: High-Resolution Visual Document Assistant", "abstract": "Leveraging vast training data, multimodal large language models (MLLMs) have\ndemonstrated formidable general visual comprehension capabilities and achieved\nremarkable performance across various tasks. However, their performance in\nvisual document understanding still leaves much room for improvement. This\ndiscrepancy is primarily attributed to the fact that visual document\nunderstanding is a fine-grained prediction task. In natural scenes, MLLMs\ntypically use low-resolution images, leading to a substantial loss of visual\ninformation. Furthermore, general-purpose MLLMs do not excel in handling\ndocument-oriented instructions. In this paper, we propose a High-Resolution\nVisual Document Assistant (HRVDA), which bridges the gap between MLLMs and\nvisual document understanding. This model employs a content filtering mechanism\nand an instruction filtering module to separately filter out the\ncontent-agnostic visual tokens and instruction-agnostic visual tokens, thereby\nachieving efficient model training and inference for high-resolution images. 
In\naddition, we construct a document-oriented visual instruction tuning dataset\nand apply a multi-stage training strategy to enhance the model's document\nmodeling capabilities. Extensive experiments demonstrate that our model\nachieves state-of-the-art performance across multiple document understanding\ndatasets, while maintaining training efficiency and inference speed comparable\nto low-resolution models.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Efficient and scalable vision"], "authors_list": ["Chaohu Liu", "Kun Yin", "Haoyu Cao", "Xinghua Jiang", "Xin Li", "Yinsong Liu", "Deqiang Jiang", "Xing Sun", "Linli Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7b0"}, "filepath": "data/2312.04248.png", "tags": [], "_media_type": "image", "_rand": 0.9993895957601818, "arXiv_link": "https://arxiv.org/abs/2312.04248", "other_link": "", "title": "TeMO: Towards Text-Driven 3D Stylization for Multi-Object Meshes", "abstract": "Recent progress in the text-driven 3D stylization of a single object has been\nconsiderably promoted by CLIP-based methods. However, the stylization of\nmulti-object 3D scenes is still impeded in that the image-text pairs used for\npre-training CLIP mostly consist of an object. Meanwhile, the local details of\nmultiple objects may be susceptible to omission due to the existing supervision\nmanner primarily relying on coarse-grained contrast of image-text pairs. To\novercome these challenges, we present a novel framework, dubbed TeMO, to parse\nmulti-object 3D scenes and edit their styles under the contrast supervision at\nmultiple levels. We first propose a Decoupled Graph Attention (DGA) module to\ndistinguishably reinforce the features of 3D surface points. Particularly, a\ncross-modal graph is constructed to align the object points accurately and noun\nphrases decoupled from the 3D mesh and textual description. Then, we develop a\nCross-Grained Contrast (CGC) supervision system, where a fine-grained loss\nbetween the words in the textual description and the randomly rendered images\nare constructed to complement the coarse-grained loss. Extensive experiments\nshow that our method can synthesize high-quality stylized content and\noutperform the existing methods over a wide range of multi-object 3D meshes.\nOur code and results will be made publicly available", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models"], "authors_list": ["Xuying Zhang", "Bo-Wen Yin", "yuming chen", "Zheng Lin", "Yunheng Li", "Qibin Hou", "Ming-Ming Cheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7b1"}, "filepath": "data/2311.17320.png", "tags": [], "_media_type": "image", "_rand": 0.9991016035758206, "arXiv_link": "https://arxiv.org/abs/2311.17320", "other_link": "", "title": "Revisiting Single Image Reflection Removal In the Wild", "abstract": "This research focuses on the issue of single-image reflection removal (SIRR)\nin real-world conditions, examining it from two angles: the collection pipeline\nof real reflection pairs and the perception of real reflection locations. 
We\ndevise an advanced reflection collection pipeline that is highly adaptable to a\nwide range of real-world reflection scenarios and incurs reduced costs in\ncollecting large-scale aligned reflection pairs. In the process, we develop a\nlarge-scale, high-quality reflection dataset named Reflection Removal in the\nWild (RRW). RRW contains over 14,950 high-resolution real-world reflection\npairs, a dataset forty-five times larger than its predecessors. Regarding\nperception of reflection locations, we identify that numerous virtual\nreflection objects visible in reflection images are not present in the\ncorresponding ground-truth images. This observation, drawn from the aligned\npairs, leads us to conceive the Maximum Reflection Filter (MaxRF). The MaxRF\ncould accurately and explicitly characterize reflection locations from pairs of\nimages. Building upon this, we design a reflection location-aware cascaded\nframework, specifically tailored for SIRR. Powered by these innovative\ntechniques, our solution achieves superior performance than current leading\nmethods across multiple real-world benchmarks. Codes and datasets will be\npublicly available.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yurui Zhu", "Bo Li", "Xueyang Fu", "Peng-Tao Jiang", "Hao Zhang", "Qibin Sun", "Zheng-Jun Zha", "Jinwei Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7b2"}, "filepath": "data/2307.14019.png", "tags": [], "_media_type": "image", "_rand": 0.9994020546372484, "arXiv_link": "https://arxiv.org/abs/2307.14019", "other_link": "", "title": "Inlier Confidence Calibration for Point Cloud Registration", "abstract": "The precision of unsupervised point cloud registration methods is typically\nlimited by the lack of reliable inlier estimation and self-supervised signal,\nespecially in partially overlapping scenarios. In this paper, we propose an\neffective inlier estimation method for unsupervised point cloud registration by\ncapturing geometric structure consistency between the source point cloud and\nits corresponding reference point cloud copy. Specifically, to obtain a high\nquality reference point cloud copy, an One-Nearest Neighborhood (1-NN) point\ncloud is generated by input point cloud. This facilitates matching map\nconstruction and allows for integrating dual neighborhood matching scores of\n1-NN point cloud and input point cloud to improve matching confidence.\nBenefiting from the high quality reference copy, we argue that the neighborhood\ngraph formed by inlier and its neighborhood should have consistency between\nsource point cloud and its corresponding reference copy. Based on this\nobservation, we construct transformation-invariant geometric structure\nrepresentations and capture geometric structure consistency to score the inlier\nconfidence for estimated correspondences between source point cloud and its\nreference copy. This strategy can simultaneously provide the reliable\nself-supervised signal for model optimization. Finally, we further calculate\ntransformation estimation by the weighted SVD algorithm with the estimated\ncorrespondences and corresponding inlier confidence. 
We train the proposed\nmodel in an unsupervised manner, and extensive experiments on synthetic and\nreal-world datasets illustrate the effectiveness of the proposed method.", "keywords": [], "authors_list": ["Yongzhe Yuan", "Yue Wu", "Xiaolong Fan", "Maoguo Gong", "Qiguang Miao", "Wenping Ma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7b3"}, "filepath": "data/2404.10966v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995692006502268, "arXiv_link": "https://arxiv.org/abs/2404.10966v2", "other_link": "https://github.com/gist-ailab/domain-specific-block-selection-and-paired-view-pseudo-labeling-for-online-TTA.", "title": "Domain-Specific Block Selection and Paired-View Pseudo-Labeling for Online Test-Time Adaptation", "abstract": "Test-time adaptation (TTA) aims to adapt a pre-trained model to a new test\ndomain without access to source data after deployment. Existing approaches\ntypically rely on self-training with pseudo-labels since ground-truth cannot be\nobtained from test data. Although the quality of pseudo labels is important for\nstable and accurate long-term adaptation, it has not been previously addressed.\nIn this work, we propose DPLOT, a simple yet effective TTA framework that\nconsists of two components: (1) domain-specific block selection and (2)\npseudo-label generation using paired-view images. Specifically, we select\nblocks that involve domain-specific feature extraction and train these blocks\nby entropy minimization. After blocks are adjusted for current test domain, we\ngenerate pseudo-labels by averaging given test images and corresponding flipped\ncounterparts. By simply using flip augmentation, we prevent a decrease in the\nquality of the pseudo-labels, which can be caused by the domain gap resulting\nfrom strong augmentation. Our experimental results demonstrate that DPLOT\noutperforms previous TTA methods in CIFAR10-C, CIFAR100-C, and ImageNet-C\nbenchmarks, reducing error by up to 5.4%, 9.1%, and 2.9%, respectively. Also,\nwe provide an extensive analysis to demonstrate effectiveness of our framework.\nCode is available at\nhttps://github.com/gist-ailab/domain-specific-block-selection-and-paired-view-pseudo-labeling-for-online-TTA.", "keywords": [], "authors_list": ["Yeonguk Yu", "Sungho Shin", "Seunghyeok Back", "Minhwan Ko", "Sangjun Noh", "Kyoobin Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7b4"}, "filepath": "data/2403.08262.png", "tags": [], "_media_type": "image", "_rand": 0.9992721561607557, "arXiv_link": "https://arxiv.org/abs/2403.08262", "other_link": "https://github.com/yunminjin2/BiTT", "title": "BiTT: Bi-directional Texture Reconstruction of Interacting Two Hands from a Single Image", "abstract": "Creating personalized hand avatars is important to offer a realistic\nexperience to users on AR / VR platforms. While most prior studies focused on\nreconstructing 3D hand shapes, some recent work has tackled the reconstruction\nof hand textures on top of shapes. However, these methods are often limited to\ncapturing pixels on the visible side of a hand, requiring diverse views of the\nhand in a video or multiple images as input. 
In this paper, we propose a novel\nmethod, BiTT(Bi-directional Texture reconstruction of Two hands), which is the\nfirst end-to-end trainable method for relightable, pose-free texture\nreconstruction of two interacting hands taking only a single RGB image, by\nthree novel components: 1) bi-directional (left $\\leftrightarrow$ right)\ntexture reconstruction using the texture symmetry of left / right hands, 2)\nutilizing a texture parametric model for hand texture recovery, and 3) the\noverall coarse-to-fine stage pipeline for reconstructing personalized texture\nof two interacting hands. BiTT first estimates the scene light condition and\nalbedo image from an input image, then reconstructs the texture of both hands\nthrough the texture parametric model and bi-directional texture reconstructor.\nIn experiments using InterHand2.6M and RGB2Hands datasets, our method\nsignificantly outperforms state-of-the-art hand texture reconstruction methods\nquantitatively and qualitatively. The code is available at\nhttps://github.com/yunminjin2/BiTT", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Minje Kim", "Tae-Kyun Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7b5"}, "filepath": "data/2311.17948.png", "tags": [], "_media_type": "image", "_rand": 0.9995050493145117, "arXiv_link": "https://arxiv.org/abs/2311.17948", "other_link": "https://hcis-lab.github.io/Action-slot/", "title": "Action-slot: Visual Action-centric Representations for Multi-label Atomic Activity Recognition in Traffic Scenes", "abstract": "In this paper, we study multi-label atomic activity recognition. Despite the\nnotable progress in action recognition, it is still challenging to recognize\natomic activities due to a deficiency in a holistic understanding of both\nmultiple road users' motions and their contextual information. In this paper,\nwe introduce Action-slot, a slot attention-based approach that learns visual\naction-centric representations, capturing both motion and contextual\ninformation. Our key idea is to design action slots that are capable of paying\nattention to regions where atomic activities occur, without the need for\nexplicit perception guidance. To further enhance slot attention, we introduce a\nbackground slot that competes with action slots, aiding the training process in\navoiding unnecessary focus on background regions devoid of activities. Yet, the\nimbalanced class distribution in the existing dataset hampers the assessment of\nrare activities. To address the limitation, we collect a synthetic dataset\ncalled TACO, which is four times larger than OATS and features a balanced\ndistribution of atomic activities. To validate the effectiveness of our method,\nwe conduct comprehensive experiments and ablation studies against various\naction recognition baselines. We also show that the performance of multi-label\natomic activity recognition on real-world datasets can be improved by\npretraining representations on TACO. We will release our source code and\ndataset. 
See the videos of visualization on the project page:\nhttps://hcis-lab.github.io/Action-slot/", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Chi-Hsi Kung", "\u66f8\u7def \u5442", "Yi-Hsuan Tsai", "Yi-Ting Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7b6"}, "filepath": "data/2311.17095v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996993478506491, "arXiv_link": "https://arxiv.org/abs/2311.17095v1", "other_link": "", "title": "Plug-and-Play, Dense-Label-Free Extraction of Open-Vocabulary Semantic Segmentation from Vision-Language Models", "abstract": "From an enormous amount of image-text pairs, large-scale vision-language\nmodels (VLMs) learn to implicitly associate image regions with words, which is\nvital for tasks such as image captioning and visual question answering.\nHowever, leveraging such pre-trained models for open-vocabulary semantic\nsegmentation remains a challenge. In this paper, we propose a simple, yet\nextremely effective, training-free technique, Plug-and-Play Open-Vocabulary\nSemantic Segmentation (PnP-OVSS) for this task. PnP-OVSS leverages a VLM with\ndirect text-to-image cross-attention and an image-text matching loss to produce\nsemantic segmentation. However, cross-attention alone tends to over-segment,\nwhereas cross-attention plus GradCAM tend to under-segment. To alleviate this\nissue, we introduce Salience Dropout; by iteratively dropping patches that the\nmodel is most attentive to, we are able to better resolve the entire extent of\nthe segmentation mask. Compared to existing techniques, the proposed method\ndoes not require any neural network training and performs hyperparameter tuning\nwithout the need for any segmentation annotations, even for a validation set.\nPnP-OVSS demonstrates substantial improvements over a comparable baseline\n(+29.4% mIoU on Pascal VOC, +13.2% mIoU on Pascal Context, +14.0% mIoU on MS\nCOCO, +2.4% mIoU on COCO Stuff) and even outperforms most baselines that\nconduct additional network training on top of pretrained VLMs.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Luo Jiayun", "Siddhesh Khandelwal", "Leonid Sigal", "Boyang Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7b7"}, "filepath": "data/2404.08968.png", "tags": [], "_media_type": "image", "_rand": 0.9996816095511648, "arXiv_link": "https://arxiv.org/abs/2404.08968", "other_link": "", "title": "MCPNet: An Interpretable Classifier via Multi-Level Concept Prototypes", "abstract": "Recent advancements in post-hoc and inherently interpretable methods have\nmarkedly enhanced the explanations of black box classifier models. These\nmethods operate either through post-analysis or by integrating concept learning\nduring model training. Although being effective in bridging the semantic gap\nbetween a model's latent space and human interpretation, these explanation\nmethods only partially reveal the model's decision-making process. 
The outcome\nis typically limited to high-level semantics derived from the last feature map.\nWe argue that the explanations lacking insights into the decision processes at\nlow and mid-level features are neither fully faithful nor useful. Addressing\nthis gap, we introduce the Multi-Level Concept Prototypes Classifier (MCPNet),\nan inherently interpretable model. MCPNet autonomously learns meaningful\nconcept prototypes across multiple feature map levels using Centered Kernel\nAlignment (CKA) loss and an energy-based weighted PCA mechanism, and it does so\nwithout reliance on predefined concept labels. Further, we propose a novel\nclassifier paradigm that learns and aligns multi-level concept prototype\ndistributions for classification purposes via Class-aware Concept Distribution\n(CCD) loss. Our experiments reveal that our proposed MCPNet while being\nadaptable to various model architectures, offers comprehensive multi-level\nexplanations while maintaining classification accuracy. Additionally, its\nconcept distribution-based classification approach shows improved\ngeneralization capabilities in few-shot classification scenarios.", "keywords": [], "authors_list": ["Bor Shiun Wang", "Chien-Yi Wang", "Wei-Chen Chiu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7b8"}, "filepath": "data/2311.18303.png", "tags": [], "_media_type": "image", "_rand": 0.9995830197511426, "arXiv_link": "https://arxiv.org/abs/2311.18303", "other_link": "", "title": "OmniMotionGPT: Animal Motion Generation with Limited Data", "abstract": "Our paper aims to generate diverse and realistic animal motion sequences from\ntextual descriptions, without a large-scale animal text-motion dataset. While\nthe task of text-driven human motion synthesis is already extensively studied\nand benchmarked, it remains challenging to transfer this success to other\nskeleton structures with limited data. In this work, we design a model\narchitecture that imitates Generative Pretraining Transformer (GPT), utilizing\nprior knowledge learned from human data to the animal domain. We jointly train\nmotion autoencoders for both animal and human motions and at the same time\noptimize through the similarity scores among human motion encoding, animal\nmotion encoding, and text CLIP embedding. Presenting the first solution to this\nproblem, we are able to generate animal motions with high diversity and\nfidelity, quantitatively and qualitatively outperforming the results of\ntraining human motion generation baselines on animal data. Additionally, we\nintroduce AnimalML3D, the first text-animal motion dataset with 1240 animation\nsequences spanning 36 different animal identities. 
We hope this dataset would\nmediate the data scarcity problem in text-driven animal motion generation,\nproviding a new playground for the research community.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models"], "authors_list": ["Zhangsihao Yang", "Mingyuan Zhou", "Mengyi Shan", "Bingbing Wen", "Ziwei Xuan", "Mitch Hill", "Junjie Bai", "Guo-Jun Qi", "Yalin Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7b9"}, "filepath": "data/2308.09911.png", "tags": [], "_media_type": "image", "_rand": 0.9990790360431534, "arXiv_link": "https://arxiv.org/abs/2308.09911", "other_link": "https://github.com/QinYang79/RDE.", "title": "Noisy-Correspondence Learning for Text-to-Image Person Re-identification", "abstract": "Text-to-image person re-identification (TIReID) is a compelling topic in the\ncross-modal community, which aims to retrieve the target person based on a\ntextual query. Although numerous TIReID methods have been proposed and achieved\npromising performance, they implicitly assume the training image-text pairs are\ncorrectly aligned, which is not always the case in real-world scenarios. In\npractice, the image-text pairs inevitably exist under-correlated or even\nfalse-correlated, a.k.a noisy correspondence (NC), due to the low quality of\nthe images and annotation errors. To address this problem, we propose a novel\nRobust Dual Embedding method (RDE) that can learn robust visual-semantic\nassociations even with NC. Specifically, RDE consists of two main components:\n1) A Confident Consensus Division (CCD) module that leverages the dual-grained\ndecisions of dual embedding modules to obtain a consensus set of clean training\ndata, which enables the model to learn correct and reliable visual-semantic\nassociations. 2) A Triplet Alignment Loss (TAL) relaxes the conventional\nTriplet Ranking loss with the hardest negative samples to a log-exponential\nupper bound over all negative ones, thus preventing the model collapse under NC\nand can also focus on hard-negative samples for promising performance. We\nconduct extensive experiments on three public benchmarks, namely CUHK-PEDES,\nICFG-PEDES, and RSTPReID, to evaluate the performance and robustness of our\nRDE. Our method achieves state-of-the-art results both with and without\nsynthetic noisy correspondences on all three datasets. Code is available at\nhttps://github.com/QinYang79/RDE.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Yang Qin", "Yingke Chen", "Dezhong Peng", "Xi Peng", "Joey Tianyi Zhou", "Peng Hu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7ba"}, "filepath": "data/2403.17004.png", "tags": [], "_media_type": "image", "_rand": 0.999808181602494, "arXiv_link": "https://arxiv.org/abs/2403.17004", "other_link": "", "title": "SD-DiT: Unleashing the Power of Self-supervised Discrimination in Diffusion Transformer", "abstract": "Diffusion Transformer (DiT) has emerged as the new trend of generative\ndiffusion models on image generation. 
In view of extremely slow convergence in\ntypical DiT, recent breakthroughs have been driven by mask strategy that\nsignificantly improves the training efficiency of DiT with additional\nintra-image contextual learning. Despite this progress, mask strategy still\nsuffers from two inherent limitations: (a) training-inference discrepancy and\n(b) fuzzy relations between mask reconstruction & generative diffusion process,\nresulting in sub-optimal training of DiT. In this work, we address these\nlimitations by novelly unleashing the self-supervised discrimination knowledge\nto boost DiT training. Technically, we frame our DiT in a teacher-student\nmanner. The teacher-student discriminative pairs are built on the diffusion\nnoises along the same Probability Flow Ordinary Differential Equation (PF-ODE).\nInstead of applying mask reconstruction loss over both DiT encoder and decoder,\nwe decouple DiT encoder and decoder to separately tackle discriminative and\ngenerative objectives. In particular, by encoding discriminative pairs with\nstudent and teacher DiT encoders, a new discriminative loss is designed to\nencourage the inter-image alignment in the self-supervised embedding space.\nAfter that, student samples are fed into student DiT decoder to perform the\ntypical generative diffusion task. Extensive experiments are conducted on\nImageNet dataset, and our method achieves a competitive balance between\ntraining cost and generative capacity.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Rui Zhu", "Yingwei Pan", "Yehao Li", "Ting Yao", "Zhenglong Sun", "Tao Mei", "Chang-Wen Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7bb"}, "filepath": "data/2308.07926.png", "tags": [], "_media_type": "image", "_rand": 0.9999608960965265, "arXiv_link": "https://arxiv.org/abs/2308.07926", "other_link": "https://qiuyu96.github.io/CoDeF/.", "title": "CoDeF: Content Deformation Fields for Temporally Consistent Video Processing", "abstract": "We present the content deformation field CoDeF as a new type of video\nrepresentation, which consists of a canonical content field aggregating the\nstatic contents in the entire video and a temporal deformation field recording\nthe transformations from the canonical image (i.e., rendered from the canonical\ncontent field) to each individual frame along the time axis.Given a target\nvideo, these two fields are jointly optimized to reconstruct it through a\ncarefully tailored rendering pipeline.We advisedly introduce some\nregularizations into the optimization process, urging the canonical content\nfield to inherit semantics (e.g., the object shape) from the video.With such a\ndesign, CoDeF naturally supports lifting image algorithms for video processing,\nin the sense that one can apply an image algorithm to the canonical image and\neffortlessly propagate the outcomes to the entire video with the aid of the\ntemporal deformation field.We experimentally show that CoDeF is able to lift\nimage-to-image translation to video-to-video translation and lift keypoint\ndetection to keypoint tracking without any training.More importantly, thanks to\nour lifting strategy that deploys the algorithms on only one image, we achieve\nsuperior cross-frame consistency in processed videos compared to existing\nvideo-to-video translation approaches, and even manage to track 
non-rigid\nobjects like water and smog.Project page can be found at\nhttps://qiuyu96.github.io/CoDeF/.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Hao Ouyang", "Qiuyu Wang", "Yuxi Xiao", "Qingyan Bai", "Juntao Zhang", "Kecheng Zheng", "Xiaowei Zhou", "Qifeng Chen", "Yujun Shen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7bc"}, "filepath": "data/2404.01051.png", "tags": [], "_media_type": "image", "_rand": 0.999582654964306, "arXiv_link": "https://arxiv.org/abs/2404.01051", "other_link": "", "title": "Action Detection via an Image Diffusion Process", "abstract": "Action detection aims to localize the starting and ending points of action\ninstances in untrimmed videos, and predict the classes of those instances. In\nthis paper, we make the observation that the outputs of the action detection\ntask can be formulated as images. Thus, from a novel perspective, we tackle\naction detection via a three-image generation process to generate starting\npoint, ending point and action-class predictions as images via our proposed\nAction Detection Image Diffusion (ADI-Diff) framework. Furthermore, since our\nimages differ from natural images and exhibit special properties, we further\nexplore a Discrete Action-Detection Diffusion Process and a Row-Column\nTransformer design to better handle their processing. Our ADI-Diff framework\nachieves state-of-the-art results on two widely-used datasets.", "keywords": [], "authors_list": ["Lin Geng Foo", "Tianjiao Li", "Hossein Rahmani", "Jun Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7bd"}, "filepath": "data/2403.10052.png", "tags": [], "_media_type": "image", "_rand": 0.9996220728365814, "arXiv_link": "https://arxiv.org/abs/2403.10052", "other_link": "https://github.com/daeheepark/T4P.", "title": "T4P: Test-Time Training of Trajectory Prediction via Masked Autoencoder and Actor-specific Token Memory", "abstract": "Trajectory prediction is a challenging problem that requires considering\ninteractions among multiple actors and the surrounding environment. While\ndata-driven approaches have been used to address this complex problem, they\nsuffer from unreliable predictions under distribution shifts during test time.\nAccordingly, several online learning methods have been proposed using\nregression loss from the ground truth of observed data leveraging the\nauto-labeling nature of trajectory prediction task. We mainly tackle the\nfollowing two issues. First, previous works underfit and overfit as they only\noptimize the last layer of the motion decoder. To this end, we employ the\nmasked autoencoder (MAE) for representation learning to encourage complex\ninteraction modeling in shifted test distribution for updating deeper layers.\nSecond, utilizing the sequential nature of driving data, we propose an\nactor-specific token memory that enables the test-time learning of actor-wise\nmotion characteristics. Our proposed method has been validated across various\nchallenging cross-dataset distribution shift scenarios including nuScenes,\nLyft, Waymo, and Interaction. 
Our method surpasses the performance of existing\nstate-of-the-art online learning methods in terms of both prediction accuracy\nand computational efficiency. The code is available at\nhttps://github.com/daeheepark/T4P.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Daehee Park", "Jaeseok Jeong", "Sung-Hoon Yoon", "Jaewoo Jeong", "Kuk-Jin Yoon"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7be"}, "filepath": "data/2404.11958.png", "tags": [], "_media_type": "image", "_rand": 0.9992052909283307, "arXiv_link": "https://arxiv.org/abs/2404.11958", "other_link": "https://github.com/songw-zju/HASSC.", "title": "Not All Voxels Are Equal: Hardness-Aware Semantic Scene Completion with Self-Distillation", "abstract": "Semantic scene completion, also known as semantic occupancy prediction, can\nprovide dense geometric and semantic information for autonomous vehicles, which\nattracts the increasing attention of both academia and industry. Unfortunately,\nexisting methods usually formulate this task as a voxel-wise classification\nproblem and treat each voxel equally in 3D space during training. As the hard\nvoxels have not been paid enough attention, the performance in some challenging\nregions is limited. The 3D dense space typically contains a large number of\nempty voxels, which are easy to learn but require amounts of computation due to\nhandling all the voxels uniformly for the existing models. Furthermore, the\nvoxels in the boundary region are more challenging to differentiate than those\nin the interior. In this paper, we propose HASSC approach to train the semantic\nscene completion model with hardness-aware design. The global hardness from the\nnetwork optimization process is defined for dynamical hard voxel selection.\nThen, the local hardness with geometric anisotropy is adopted for voxel-wise\nrefinement. Besides, self-distillation strategy is introduced to make training\nprocess stable and consistent. Extensive experiments show that our HASSC scheme\ncan effectively promote the accuracy of the baseline model without incurring\nthe extra inference cost. Source code is available at:\nhttps://github.com/songw-zju/HASSC.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Song Wang", "Jiawei Yu", "Wentong Li", "Wenyu Liu", "Xiaolu Liu", "Junbo Chen", "Jianke Zhu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7bf"}, "filepath": "data/2312.08914.png", "tags": [], "_media_type": "image", "_rand": 0.9990652474730872, "arXiv_link": "https://arxiv.org/abs/2312.08914", "other_link": "https://github.com/THUDM/CogVLM", "title": "CogAgent: A Visual Language Model for GUI Agents", "abstract": "People are spending an enormous amount of time on digital devices through\ngraphical user interfaces (GUIs), e.g., computer or smartphone screens. Large\nlanguage models (LLMs) such as ChatGPT can assist people in tasks like writing\nemails, but struggle to understand and interact with GUIs, thus limiting their\npotential to increase automation levels. 
In this paper, we introduce CogAgent,\nan 18-billion-parameter visual language model (VLM) specializing in GUI\nunderstanding and navigation. By utilizing both low-resolution and\nhigh-resolution image encoders, CogAgent supports input at a resolution of\n1120*1120, enabling it to recognize tiny page elements and text. As a\ngeneralist visual language model, CogAgent achieves the state of the art on\nfive text-rich and four general VQA benchmarks, including VQAv2, OK-VQA,\nText-VQA, ST-VQA, ChartQA, infoVQA, DocVQA, MM-Vet, and POPE. CogAgent, using\nonly screenshots as input, outperforms LLM-based methods that consume extracted\nHTML text on both PC and Android GUI navigation tasks -- Mind2Web and AITW,\nadvancing the state of the art. The model and codes are available at\nhttps://github.com/THUDM/CogVLM .", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Document analysis and understanding"], "authors_list": ["Wenyi Hong", "Weihan Wang", "Qingsong Lv", "Jiazheng Xu", "Wenmeng Yu", "Junhui Ji", "Yan Wang", "Zihan Wang", "Yuxiao Dong", "Ming Ding", "Jie Tang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7c0"}, "filepath": "data/2404.00925.png", "tags": [], "_media_type": "image", "_rand": 0.9995702049083033, "arXiv_link": "https://arxiv.org/abs/2404.00925", "other_link": "", "title": "Representing Signs as Language: A New Method for Sign Language Translation from Videos", "abstract": "Sign Language Translation (SLT) is a challenging task that aims to translate\nsign videos into spoken language. Inspired by the strong translation\ncapabilities of large language models (LLMs) that are trained on extensive\nmultilingual text corpora, we aim to harness off-the-shelf LLMs to handle SLT.\nIn this paper, we regularize the sign videos to embody linguistic\ncharacteristics of spoken language, and propose a novel SignLLM framework to\ntransform sign videos into a language-like representation for improved\nreadability by off-the-shelf LLMs. SignLLM comprises two key modules: (1) The\nVector-Quantized Visual Sign module converts sign videos into a sequence of\ndiscrete character-level sign tokens, and (2) the Codebook Reconstruction and\nAlignment module converts these character-level tokens into word-level sign\nrepresentations using an optimal transport formulation. A sign-text alignment\nloss further bridges the gap between sign and text tokens, enhancing semantic\ncompatibility. 
We achieve state-of-the-art gloss-free results on two\nwidely-used SLT benchmarks.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Jia Gong", "Lin Geng Foo", "Yixuan He", "Hossein Rahmani", "Jun Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7c1"}, "filepath": "data/2403.17000.png", "tags": [], "_media_type": "image", "_rand": 0.9993943842491314, "arXiv_link": "https://arxiv.org/abs/2403.17000", "other_link": "", "title": "Learning Spatial Adaptation and Temporal Coherence in Diffusion Models for Video Super-Resolution", "abstract": "Diffusion models are just at a tipping point for image super-resolution task.\nNevertheless, it is not trivial to capitalize on diffusion models for video\nsuper-resolution which necessitates not only the preservation of visual\nappearance from low-resolution to high-resolution videos, but also the temporal\nconsistency across video frames. In this paper, we propose a novel approach,\npursuing Spatial Adaptation and Temporal Coherence (SATeCo), for video\nsuper-resolution. SATeCo pivots on learning spatial-temporal guidance from\nlow-resolution videos to calibrate both latent-space high-resolution video\ndenoising and pixel-space video reconstruction. Technically, SATeCo freezes all\nthe parameters of the pre-trained UNet and VAE, and only optimizes two\ndeliberately-designed spatial feature adaptation (SFA) and temporal feature\nalignment (TFA) modules, in the decoder of UNet and VAE. SFA modulates frame\nfeatures via adaptively estimating affine parameters for each pixel,\nguaranteeing pixel-wise guidance for high-resolution frame synthesis. TFA\ndelves into feature interaction within a 3D local window (tubelet) through\nself-attention, and executes cross-attention between tubelet and its\nlow-resolution counterpart to guide temporal feature alignment. Extensive\nexperiments conducted on the REDS4 and Vid4 datasets demonstrate the\neffectiveness of our approach.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Zhikai Chen", "Fuchen Long", "Zhaofan Qiu", "Ting Yao", "Wengang Zhou", "Jiebo Luo", "Tao Mei"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7c2"}, "filepath": "data/2404.04804.png", "tags": [], "_media_type": "image", "_rand": 0.9993120222994478, "arXiv_link": "https://arxiv.org/abs/2404.04804", "other_link": "", "title": "Light the Night: A Multi-Condition Diffusion Framework for Unpaired Low-Light Enhancement in Autonomous Driving", "abstract": "Vision-centric perception systems for autonomous driving have gained\nconsiderable attention recently due to their cost-effectiveness and\nscalability, especially compared to LiDAR-based systems. However, these systems\noften struggle in low-light conditions, potentially compromising their\nperformance and safety. To address this, our paper introduces LightDiff, a\ndomain-tailored framework designed to enhance the low-light image quality for\nautonomous driving applications. Specifically, we employ a multi-condition\ncontrolled diffusion model. 
LightDiff works without any human-collected paired\ndata, leveraging a dynamic data degradation process instead. It incorporates a\nnovel multi-condition adapter that adaptively controls the input weights from\ndifferent modalities, including depth maps, RGB images, and text captions, to\neffectively illuminate dark scenes while maintaining context consistency.\nFurthermore, to align the enhanced images with the detection model's knowledge,\nLightDiff employs perception-specific scores as rewards to guide the diffusion\ntraining process through reinforcement learning. Extensive experiments on the\nnuScenes datasets demonstrate that LightDiff can significantly improve the\nperformance of several state-of-the-art 3D detectors in night-time conditions\nwhile achieving high visual quality scores, highlighting its potential to\nsafeguard autonomous driving.", "keywords": ["Deep learning architectures and techniques", "Low-level vision"], "authors_list": ["JINLONG LI", "Baolu Li", "Zhengzhong Tu", "XINYU LIU", "Qing Guo", "Felix Juefei Xu", "Runsheng Xu", "Hongkai Yu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7c3"}, "filepath": "data/2311.16845.png", "tags": [], "_media_type": "image", "_rand": 0.9997638113378678, "arXiv_link": "https://arxiv.org/abs/2311.16845", "other_link": "", "title": "Wavelet-based Fourier Information Interaction with Frequency Diffusion Adjustment for Underwater Image Restoration", "abstract": "Underwater images are subject to intricate and diverse degradation,\ninevitably affecting the effectiveness of underwater visual tasks. However,\nmost approaches primarily operate in the raw pixel space of images, which\nlimits the exploration of the frequency characteristics of underwater images,\nleading to an inadequate utilization of deep models' representational\ncapabilities in producing high-quality images. In this paper, we introduce a\nnovel Underwater Image Enhancement (UIE) framework, named WF-Diff, designed to\nfully leverage the characteristics of frequency domain information and\ndiffusion models. WF-Diff consists of two detachable networks: Wavelet-based\nFourier information interaction network (WFI2-net) and Frequency Residual\nDiffusion Adjustment Module (FRDAM). With our full exploration of the frequency\ndomain information, WFI2-net aims to achieve preliminary enhancement of\nfrequency information in the wavelet space. Our proposed FRDAM can further\nrefine the high- and low-frequency information of the initial enhanced images,\nwhich can be viewed as a plug-and-play universal module to adjust the detail of\nthe underwater images. 
With the above techniques, our algorithm can show SOTA\nperformance on real-world underwater image datasets, and achieves competitive\nperformance in visual quality.", "keywords": ["Low-level vision", "Image and video generation and manipulation"], "authors_list": ["Chen Zhao", "Weiling Cai", "Chenyu Dong", "Chengwei Hu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7c4"}, "filepath": "data/2404.05218v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992863610837767, "arXiv_link": "https://arxiv.org/abs/2404.05218v1", "other_link": "https://github.com/Jaewoo97/T2P.", "title": "Multi-agent Long-term 3D Human Pose Forecasting via Interaction-aware Trajectory Conditioning", "abstract": "Human pose forecasting garners attention for its diverse applications.\nHowever, challenges in modeling the multi-modal nature of human motion and\nintricate interactions among agents persist, particularly with longer\ntimescales and more agents. In this paper, we propose an interaction-aware\ntrajectory-conditioned long-term multi-agent human pose forecasting model,\nutilizing a coarse-to-fine prediction approach: multi-modal global trajectories\nare initially forecasted, followed by respective local pose forecasts\nconditioned on each mode. In doing so, our Trajectory2Pose model introduces a\ngraph-based agent-wise interaction module for a reciprocal forecast of local\nmotion-conditioned global trajectory and trajectory-conditioned local pose. Our\nmodel effectively handles the multi-modality of human motion and the complexity\nof long-term multi-agent interactions, improving performance in complex\nenvironments. Furthermore, we address the lack of long-term (6s+) multi-agent\n(5+) datasets by constructing a new dataset from real-world images and 2D\nannotations, enabling a comprehensive evaluation of our proposed model.\nState-of-the-art prediction performance on both complex and simpler datasets\nconfirms the generalized effectiveness of our method. The code is available at\nhttps://github.com/Jaewoo97/T2P.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Jaewoo Jeong", "Daehee Park", "Kuk-Jin Yoon"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7c5"}, "filepath": "data/2404.15882.png", "tags": [], "_media_type": "image", "_rand": 0.9990830727930706, "arXiv_link": "https://arxiv.org/abs/2404.15882", "other_link": "https://github.com/Edw2n/ImageNet-ES.", "title": "Unexplored Faces of Robustness and Out-of-Distribution: Covariate Shifts in Environment and Sensor Domains", "abstract": "Computer vision applications predict on digital images acquired by a camera\nfrom physical scenes through light. However, conventional robustness benchmarks\nrely on perturbations in digitized images, diverging from distribution shifts\noccurring in the image acquisition process. To bridge this gap, we introduce a\nnew distribution shift dataset, ImageNet-ES, comprising variations in\nenvironmental and camera sensor factors by directly capturing 202k images with\na real camera in a controllable testbed. With the new dataset, we evaluate\nout-of-distribution (OOD) detection and model robustness. 
We find that existing\nOOD detection methods do not cope with the covariate shifts in ImageNet-ES,\nimplying that the definition and detection of OOD should be revisited to\nembrace real-world distribution shifts. We also observe that the model becomes\nmore robust in both ImageNet-C and -ES by learning environment and sensor\nvariations in addition to existing digital augmentations. Lastly, our results\nsuggest that effective shift mitigation via camera sensor control can\nsignificantly improve performance without increasing model size. With these\nfindings, our benchmark may aid future research on robustness, OOD, and camera\nsensor control for computer vision. Our code and dataset are available at\nhttps://github.com/Edw2n/ImageNet-ES.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Eunsu Baek", "Keondo Park", "Ji-yoon Kim", "Hyung-Sin Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7c6"}, "filepath": "data/2308.09718.png", "tags": [], "_media_type": "image", "_rand": 0.9997395454244942, "arXiv_link": "https://arxiv.org/abs/2308.09718", "other_link": "", "title": "Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training", "abstract": "The rapid advancement of deep learning models often attributes to their\nability to leverage massive training data. In contrast, such privilege has not\nyet fully benefited 3D deep learning, mainly due to the limited availability of\nlarge-scale 3D datasets. Merging multiple available data sources and letting\nthem collaboratively train a single model is a potential solution. However, due\nto the large domain gap between 3D point cloud datasets, such mixed supervision\ncould adversely affect the model's performance and lead to degenerated\nperformance (i.e., negative transfer) compared to single-dataset training. In\nview of this challenge, we introduce Point Prompt Training (PPT), a novel\nframework for multi-dataset synergistic learning in the context of 3D\nrepresentation learning that supports multiple pre-training paradigms. Based on\nthis framework, we propose Prompt-driven Normalization, which adapts the model\nto different datasets with domain-specific prompts and Language-guided\nCategorical Alignment that decently unifies the multiple-dataset label spaces\nby leveraging the relationship between label text. Extensive experiments verify\nthat PPT can overcome the negative transfer associated with synergistic\nlearning and produce generalizable representations. Notably, it achieves\nstate-of-the-art performance on each dataset using a single weight-shared model\nwith supervised multi-dataset training. 
Moreover, when served as a pre-training\nframework, it outperforms other pre-training approaches regarding\nrepresentation quality and attains remarkable state-of-the-art performance\nacross over ten diverse downstream tasks spanning both indoor and outdoor 3D\nscenarios.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xiaoyang Wu", "Zhuotao Tian", "Xin Wen", "Bohao Peng", "Xihui Liu", "Kaicheng Yu", "Hengshuang Zhao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7c7"}, "filepath": "data/2312.10531.png", "tags": [], "_media_type": "image", "_rand": 0.9999650552398552, "arXiv_link": "https://arxiv.org/abs/2312.10531", "other_link": "", "title": "How to Train Neural Field Representations: A Comprehensive Study and Benchmark", "abstract": "Neural fields (NeFs) have recently emerged as a versatile method for modeling\nsignals of various modalities, including images, shapes, and scenes.\nSubsequently, a number of works have explored the use of NeFs as\nrepresentations for downstream tasks, e.g. classifying an image based on the\nparameters of a NeF that has been fit to it. However, the impact of the NeF\nhyperparameters on their quality as downstream representation is scarcely\nunderstood and remains largely unexplored. This is in part caused by the large\namount of time required to fit datasets of neural fields.\n In this work, we propose $\\verb|fit-a-nef|$, a JAX-based library that\nleverages parallelization to enable fast optimization of large-scale NeF\ndatasets, resulting in a significant speed-up. With this library, we perform a\ncomprehensive study that investigates the effects of different hyperparameters\n-- including initialization, network architecture, and optimization strategies\n-- on fitting NeFs for downstream tasks. Our study provides valuable insights\non how to train NeFs and offers guidance for optimizing their effectiveness in\ndownstream applications. Finally, based on the proposed library and our\nanalysis, we propose Neural Field Arena, a benchmark consisting of neural field\nvariants of popular vision datasets, including MNIST, CIFAR, variants of\nImageNet, and ShapeNetv2. Our library and the Neural Field Arena will be\nopen-sourced to introduce standardized benchmarking and promote further\nresearch on neural fields.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Samuele Papa", "Riccardo Valperga", "David Knigge", "Miltiadis Kofinas", "Phillip Lippe", "Jan-Jakob Sonke", "Efstratios Gavves"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7c8"}, "filepath": "data/2404.00922.png", "tags": [], "_media_type": "image", "_rand": 0.9997604216915096, "arXiv_link": "https://arxiv.org/abs/2404.00922", "other_link": "", "title": "Towards Memorization-Free Diffusion Models", "abstract": "Pretrained diffusion models and their outputs are widely accessible due to\ntheir exceptional capacity for synthesizing high-quality images and their\nopen-source nature. The users, however, may face litigation risks owing to the\nmodels' tendency to memorize and regurgitate training data during inference. 
To\naddress this, we introduce Anti-Memorization Guidance (AMG), a novel framework\nemploying three targeted guidance strategies for the main causes of\nmemorization: image and caption duplication, and highly specific user prompts.\nConsequently, AMG ensures memorization-free outputs while maintaining high\nimage quality and text alignment, leveraging the synergy of its guidance\nmethods, each indispensable in its own right. AMG also features an innovative\nautomatic detection system for potential memorization during each step of\ninference process, allows selective application of guidance strategies,\nminimally interfering with the original sampling process to preserve output\nutility. We applied AMG to pretrained Denoising Diffusion Probabilistic Models\n(DDPM) and Stable Diffusion across various generation tasks. The results\ndemonstrate that AMG is the first approach to successfully eradicates all\ninstances of memorization with no or marginal impacts on image quality and\ntext-alignment, as evidenced by FID and CLIP scores.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Chen Chen", "Daochang Liu", "Chang Xu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7c9"}, "filepath": "data/2402.18817.png", "tags": [], "_media_type": "image", "_rand": 0.9992074059735206, "arXiv_link": "https://arxiv.org/abs/2402.18817", "other_link": "https://github.com/leminhbinh0209/CVPR24-FAS.", "title": "Gradient Alignment for Cross-domain Face Anti-Spoofing", "abstract": "Recent advancements in domain generalization (DG) for face anti-spoofing\n(FAS) have garnered considerable attention. Traditional methods have focused on\ndesigning learning objectives and additional modules to isolate domain-specific\nfeatures while retaining domain-invariant characteristics in their\nrepresentations. However, such approaches often lack guarantees of consistent\nmaintenance of domain-invariant features or the complete removal of\ndomain-specific features. Furthermore, most prior works of DG for FAS do not\nensure convergence to a local flat minimum, which has been shown to be\nadvantageous for DG. In this paper, we introduce GAC-FAS, a novel learning\nobjective that encourages the model to converge towards an optimal flat minimum\nwithout necessitating additional learning modules. Unlike conventional\nsharpness-aware minimizers, GAC-FAS identifies ascending points for each domain\nand regulates the generalization gradient updates at these points to align\ncoherently with empirical risk minimization (ERM) gradient updates. This unique\napproach specifically guides the model to be robust against domain shifts. 
We\ndemonstrate the efficacy of GAC-FAS through rigorous testing on challenging\ncross-domain FAS datasets, where it establishes state-of-the-art performance.\nThe code is available at https://github.com/leminhbinh0209/CVPR24-FAS.", "keywords": [], "authors_list": ["MINH BINH LE", "Simon Woo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7ca"}, "filepath": "data/2312.02137.png", "tags": [], "_media_type": "image", "_rand": 0.9998620888534269, "arXiv_link": "https://arxiv.org/abs/2312.02137", "other_link": "", "title": "MANUS: Markerless Grasp Capture using Articulated 3D Gaussians", "abstract": "Understanding how we grasp objects with our hands has important applications\nin areas like robotics and mixed reality. However, this challenging problem\nrequires accurate modeling of the contact between hands and objects. To capture\ngrasps, existing methods use skeletons, meshes, or parametric models that does\nnot represent hand shape accurately resulting in inaccurate contacts. We\npresent MANUS, a method for Markerless Hand-Object Grasp Capture using\nArticulated 3D Gaussians. We build a novel articulated 3D Gaussians\nrepresentation that extends 3D Gaussian splatting for high-fidelity\nrepresentation of articulating hands. Since our representation uses Gaussian\nprimitives, it enables us to efficiently and accurately estimate contacts\nbetween the hand and the object. For the most accurate results, our method\nrequires tens of camera views that current datasets do not provide. We\ntherefore build MANUS-Grasps, a new dataset that contains hand-object grasps\nviewed from 50+ cameras across 30+ scenes, 3 subjects, and comprising over 7M\nframes. In addition to extensive qualitative results, we also show that our\nmethod outperforms others on a quantitative contact evaluation method that uses\npaint transfer from the object to the hand.", "keywords": [], "authors_list": ["Chandradeep Pokhariya", "Ishaan Shah", "Angela Xing", "Zekun Li", "Kefan Chen", "Avinash Sharma", "Srinath Sridhar"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7cb"}, "filepath": "data/2402.11874.png", "tags": [], "_media_type": "image", "_rand": 0.9994424707403414, "arXiv_link": "https://arxiv.org/abs/2402.11874", "other_link": "", "title": "Language-guided Image Reflection Separation", "abstract": "This paper studies the problem of language-guided reflection separation,\nwhich aims at addressing the ill-posed reflection separation problem by\nintroducing language descriptions to provide layer content. We propose a\nunified framework to solve this problem, which leverages the cross-attention\nmechanism with contrastive learning strategies to construct the correspondence\nbetween language descriptions and image layers. A gated network design and a\nrandomized training strategy are employed to tackle the recognizable layer\nambiguity. 
The effectiveness of the proposed method is validated by the\nsignificant performance advantage over existing reflection separation methods\non both quantitative and qualitative comparisons.", "keywords": ["Low-level vision"], "authors_list": ["Haofeng Zhong", "Yuchen Hong", "Shuchen Weng", "Jinxiu Liang", "Boxin Shi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7cc"}, "filepath": "data/2404.11590.png", "tags": [], "_media_type": "image", "_rand": 0.9991664775343074, "arXiv_link": "https://arxiv.org/abs/2404.11590", "other_link": "", "title": "A Subspace-Constrained Tyler's Estimator and its Applications to Structure from Motion", "abstract": "We present the subspace-constrained Tyler's estimator (STE) designed for\nrecovering a low-dimensional subspace within a dataset that may be highly\ncorrupted with outliers. STE is a fusion of the Tyler's M-estimator (TME) and a\nvariant of the fast median subspace. Our theoretical analysis suggests that,\nunder a common inlier-outlier model, STE can effectively recover the underlying\nsubspace, even when it contains a smaller fraction of inliers relative to other\nmethods in the field of robust subspace recovery. We apply STE in the context\nof Structure from Motion (SfM) in two ways: for robust estimation of the\nfundamental matrix and for the removal of outlying cameras, enhancing the\nrobustness of the SfM pipeline. Numerical experiments confirm the\nstate-of-the-art performance of our method in these applications. This research\nmakes significant contributions to the field of robust subspace recovery,\nparticularly in the context of computer vision and 3D reconstruction.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Feng Yu", "Teng Zhang", "Gilad Lerman"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7cd"}, "filepath": "data/2312.09925.png", "tags": [], "_media_type": "image", "_rand": 0.9998787413961454, "arXiv_link": "https://arxiv.org/abs/2312.09925", "other_link": "", "title": "CNC-Net: Self-Supervised Learning for CNC Machining Operations", "abstract": "CNC manufacturing is a process that employs computer numerical control (CNC)\nmachines to govern the movements of various industrial tools and machinery,\nencompassing equipment ranging from grinders and lathes to mills and CNC\nrouters. However, the reliance on manual CNC programming has become a\nbottleneck, and the requirement for expert knowledge can result in significant\ncosts. Therefore, we introduce a pioneering approach named CNC-Net,\nrepresenting the use of deep neural networks (DNNs) to simulate CNC machines\nand grasp intricate operations when supplied with raw materials. CNC-Net\nconstitutes a self-supervised framework that exclusively takes an input 3D\nmodel and subsequently generates the essential operation parameters required by\nthe CNC machine to construct the object. Our method has the potential to\ntransformative automation in manufacturing by offering a cost-effective\nalternative to the high costs of manual CNC programming while maintaining\nexceptional precision in 3D object production. 
Our experiments underscore the\neffectiveness of our CNC-Net in constructing the desired 3D objects through the\nutilization of CNC operations. Notably, it excels in preserving finer local\ndetails, exhibiting a marked enhancement in precision compared to the\nstate-of-the-art 3D CAD reconstruction approaches.", "keywords": [], "authors_list": ["Mohsen Yavartanoo", "Sangmin Hong", "Reyhaneh Neshatavar", "Kyoung Mu Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7ce"}, "filepath": "data/2401.04728.png", "tags": [], "_media_type": "image", "_rand": 0.9996638256465552, "arXiv_link": "https://arxiv.org/abs/2401.04728", "other_link": "", "title": "Morphable Diffusion: 3D-Consistent Diffusion for Single-image Avatar Creation", "abstract": "Recent advances in generative diffusion models have enabled the previously\nunfeasible capability of generating 3D assets from a single input image or a\ntext prompt. In this work, we aim to enhance the quality and functionality of\nthese models for the task of creating controllable, photorealistic human\navatars. We achieve this by integrating a 3D morphable model into the\nstate-of-the-art multi-view-consistent diffusion approach. We demonstrate that\naccurate conditioning of a generative pipeline on the articulated 3D model\nenhances the baseline model performance on the task of novel view synthesis\nfrom a single image. More importantly, this integration facilitates a seamless\nand accurate incorporation of facial expression and body pose control into the\ngeneration process. To the best of our knowledge, our proposed framework is the\nfirst diffusion model to enable the creation of fully 3D-consistent,\nanimatable, and photorealistic human avatars from a single image of an unseen\nsubject; extensive quantitative and qualitative evaluations demonstrate the\nadvantages of our approach over existing state-of-the-art avatar creation\nmodels on both novel view and novel expression synthesis tasks. The code for\nour project is publicly available.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Xiyi Chen", "Marko Mihajlovic", "Shaofei Wang", "Sergey Prokudin", "Siyu Tang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7cf"}, "filepath": "data/2312.09228.png", "tags": [], "_media_type": "image", "_rand": 0.9991133124265457, "arXiv_link": "https://arxiv.org/abs/2312.09228", "other_link": "", "title": "3DGS-Avatar: Animatable Avatars via Deformable 3D Gaussian Splatting", "abstract": "We introduce an approach that creates animatable human avatars from monocular\nvideos using 3D Gaussian Splatting (3DGS). Existing methods based on neural\nradiance fields (NeRFs) achieve high-quality novel-view/novel-pose image\nsynthesis but often require days of training, and are extremely slow at\ninference time. Recently, the community has explored fast grid structures for\nefficient training of clothed avatars. Albeit being extremely fast at training,\nthese methods can barely achieve an interactive rendering frame rate with\naround 15 FPS. 
In this paper, we use 3D Gaussian Splatting and learn a\nnon-rigid deformation network to reconstruct animatable clothed human avatars\nthat can be trained within 30 minutes and rendered at real-time frame rates\n(50+ FPS). Given the explicit nature of our representation, we further\nintroduce as-isometric-as-possible regularizations on both the Gaussian mean\nvectors and the covariance matrices, enhancing the generalization of our model\non highly articulated unseen poses. Experimental results show that our method\nachieves comparable and even better performance compared to state-of-the-art\napproaches on animatable avatar creation from a monocular input, while being\n400x and 250x faster in training and inference, respectively.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Zhiyin Qian", "Shaofei Wang", "Marko Mihajlovic", "Andreas Geiger", "Siyu Tang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7d0"}, "filepath": "data/2311.16503.png", "tags": [], "_media_type": "image", "_rand": 0.999781102872467, "arXiv_link": "https://arxiv.org/abs/2311.16503", "other_link": "https://github.com/ModelTC/TFMQ-DM.", "title": "TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models", "abstract": "The Diffusion model, a prevalent framework for image generation, encounters\nsignificant challenges in terms of broad applicability due to its extended\ninference times and substantial memory requirements. Efficient Post-training\nQuantization (PTQ) is pivotal for addressing these issues in traditional\nmodels. Different from traditional models, diffusion models heavily depend on\nthe time-step $t$ to achieve satisfactory multi-round denoising. Usually, $t$\nfrom the finite set $\\{1, \\ldots, T\\}$ is encoded to a temporal feature by a\nfew modules totally irrespective of the sampling data. However, existing PTQ\nmethods do not optimize these modules separately. They adopt inappropriate\nreconstruction targets and complex calibration methods, resulting in a severe\ndisturbance of the temporal feature and denoising trajectory, as well as a low\ncompression efficiency. To solve these, we propose a Temporal Feature\nMaintenance Quantization (TFMQ) framework building upon a Temporal Information\nBlock which is just related to the time-step $t$ and unrelated to the sampling\ndata. Powered by the pioneering block design, we devise temporal information\naware reconstruction (TIAR) and finite set calibration (FSC) to align the\nfull-precision temporal features in a limited time. Equipped with the\nframework, we can maintain the most temporal information and ensure the\nend-to-end generation quality. Extensive experiments on various datasets and\ndiffusion models prove our state-of-the-art results. Remarkably, our\nquantization approach, for the first time, achieves model performance nearly on\npar with the full-precision model under 4-bit weight quantization.\nAdditionally, our method incurs almost no extra computational cost and\naccelerates quantization time by $2.0 \\times$ on LSUN-Bedrooms $256 \\times 256$\ncompared to previous works. 
Our code is publicly available at\nhttps://github.com/ModelTC/TFMQ-DM.", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Yushi Huang", "Ruihao Gong", "Jing Liu", "Tianlong Chen", "Xianglong Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7d1"}, "filepath": "data/2312.10035.png", "tags": [], "_media_type": "image", "_rand": 0.9996247551412545, "arXiv_link": "https://arxiv.org/abs/2312.10035", "other_link": "", "title": "Point Transformer V3: Simpler, Faster, Stronger", "abstract": "This paper is not motivated to seek innovation within the attention\nmechanism. Instead, it focuses on overcoming the existing trade-offs between\naccuracy and efficiency within the context of point cloud processing,\nleveraging the power of scale. Drawing inspiration from recent advances in 3D\nlarge-scale representation learning, we recognize that model performance is\nmore influenced by scale than by intricate design. Therefore, we present Point\nTransformer V3 (PTv3), which prioritizes simplicity and efficiency over the\naccuracy of certain mechanisms that are minor to the overall performance after\nscaling, such as replacing the precise neighbor search by KNN with an efficient\nserialized neighbor mapping of point clouds organized with specific patterns.\nThis principle enables significant scaling, expanding the receptive field from\n16 to 1024 points while remaining efficient (a 3x increase in processing speed\nand a 10x improvement in memory efficiency compared with its predecessor,\nPTv2). PTv3 attains state-of-the-art results on over 20 downstream tasks that\nspan both indoor and outdoor scenarios. Further enhanced with multi-dataset\njoint training, PTv3 pushes these results to a higher level.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Xiaoyang Wu", "Li Jiang", "Peng-Shuai Wang", "Zhijian Liu", "Xihui Liu", "Yu Qiao", "Wanli Ouyang", "Tong He", "Hengshuang Zhao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7d2"}, "filepath": "data/2311.17352.png", "tags": [], "_media_type": "image", "_rand": 0.9996462358299949, "arXiv_link": "https://arxiv.org/abs/2311.17352", "other_link": "", "title": "Efficient Stitchable Task Adaptation", "abstract": "The paradigm of pre-training and fine-tuning has laid the foundation for\ndeploying deep learning models. However, most fine-tuning methods are designed\nto meet a specific resource budget. Recently, considering diverse deployment\nscenarios with various resource budgets, stitchable neural network (SN-Net) is\nintroduced to quickly obtain numerous new networks (stitches) from the\npre-trained models (anchors) in a model family via model stitching. Although\npromising, SN-Net confronts new challenges when adapting it to new target\ndomains, including huge memory and storage requirements and a long and\nsub-optimal multistage adaptation process. 
In this work, we present a novel\nframework, Efficient Stitchable Task Adaptation (ESTA), to efficiently produce\na palette of fine-tuned models that adhere to diverse resource constraints.\nSpecifically, we first tailor parameter-efficient fine-tuning to share low-rank\nupdates among the stitches while maintaining independent bias terms. In this\nway, we largely reduce fine-tuning memory burdens and mitigate the interference\namong stitches that arises in task adaptation. Furthermore, we streamline a\nsimple yet effective one-stage deployment pipeline, which estimates the\nimportant stitches to deploy with training-time gradient statistics. By\nassigning higher sampling probabilities to important stitches, we also get a\nboosted Pareto frontier. Extensive experiments on 25 downstream visual\nrecognition tasks demonstrate that our ESTA is capable of generating stitches\nwith smooth accuracy-efficiency trade-offs and surpasses the direct SN-Net\nadaptation by remarkable margins with significantly lower training time and\nfewer trainable parameters. Furthermore, we demonstrate the flexibility and\nscalability of our ESTA framework by stitching LLMs from LLaMA family,\nobtaining chatbot stitches of assorted sizes.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Haoyu He", "Zizheng Pan", "Jing Liu", "Jianfei Cai", "Bohan Zhuang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computation and Language", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7d3"}, "filepath": "data/2403.10362.png", "tags": [], "_media_type": "image", "_rand": 0.999987658767101, "arXiv_link": "https://arxiv.org/abs/2403.10362", "other_link": "https://github.com/CPGA/CPGA.git.", "title": "CPGA: Coding Priors-Guided Aggregation Network for Compressed Video Quality Enhancement", "abstract": "Recently, numerous approaches have achieved notable success in compressed\nvideo quality enhancement (VQE). However, these methods usually ignore the\nutilization of valuable coding priors inherently embedded in compressed videos,\nsuch as motion vectors and residual frames, which carry abundant temporal and\nspatial information. To remedy this problem, we propose the Coding\nPriors-Guided Aggregation (CPGA) network to utilize temporal and spatial\ninformation from coding priors. The CPGA mainly consists of an inter-frame\ntemporal aggregation (ITA) module and a multi-scale non-local aggregation (MNA)\nmodule. Specifically, the ITA module aggregates temporal information from\nconsecutive frames and coding priors, while the MNA module globally captures\nspatial information guided by residual frames. In addition, to facilitate\nresearch in VQE task, we newly construct the Video Coding Priors (VCP) dataset,\ncomprising 300 videos with various coding priors extracted from corresponding\nbitstreams. It remedies the shortage of previous datasets on the lack of coding\ninformation. Experimental results demonstrate the superiority of our method\ncompared to existing state-of-the-art methods. 
The code and dataset will be\nreleased at https://github.com/CPGA/CPGA.git.", "keywords": ["Low-level vision"], "authors_list": ["Qiang Zhu", "Jinhua Hao", "Yukang Ding", "Yu Liu", "Qiao Mo", "Ming Sun", "Chao Zhou", "Shuyuan Zhu"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7d4"}, "filepath": "data/2403.17420.png", "tags": [], "_media_type": "image", "_rand": 0.9998784547623469, "arXiv_link": "https://arxiv.org/abs/2403.17420", "other_link": "https://github.com/VisualAIKHU/NoPrior_MultiSSL", "title": "Learning to Visually Localize Sound Sources from Mixtures without Prior Source Knowledge", "abstract": "The goal of the multi-sound source localization task is to localize sound\nsources from the mixture individually. While recent multi-sound source\nlocalization methods have shown improved performance, they face challenges due\nto their reliance on prior information about the number of objects to be\nseparated. In this paper, to overcome this limitation, we present a novel\nmulti-sound source localization method that can perform localization without\nprior knowledge of the number of sound sources. To achieve this goal, we\npropose an iterative object identification (IOI) module, which can recognize\nsound-making objects in an iterative manner. After finding the regions of\nsound-making objects, we devise object similarity-aware clustering (OSC) loss\nto guide the IOI module to effectively combine regions of the same object but\nalso distinguish between different objects and backgrounds. It enables our\nmethod to perform accurate localization of sound-making objects without any\nprior knowledge. Extensive experimental results on the MUSIC and VGGSound\nbenchmarks show the significant performance improvements of the proposed method\nover the existing methods for both single and multi-source. Our code is\navailable at: https://github.com/VisualAIKHU/NoPrior_MultiSSL", "keywords": [], "authors_list": ["Dongjin Kim", "Sung Jin Um", "Sangmin Lee", "Jung Uk Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Multimedia", "Sound", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7d5"}, "filepath": "data/2312.10032.png", "tags": [], "_media_type": "image", "_rand": 0.999226129004693, "arXiv_link": "https://arxiv.org/abs/2312.10032", "other_link": "https://github.com/CircleRadon/Osprey.", "title": "Osprey: Pixel Understanding with Visual Instruction Tuning", "abstract": "Multimodal large language models (MLLMs) have recently achieved impressive\ngeneral-purpose vision-language capabilities through visual instruction tuning.\nHowever, current MLLMs primarily focus on image-level or box-level\nunderstanding, falling short in achieving fine-grained vision-language\nalignment at pixel level. Besides, the lack of mask-based instruction data\nlimits their advancements. In this paper, we propose Osprey, a mask-text\ninstruction tuning approach, to extend MLLMs by incorporating fine-grained mask\nregions into language instruction, aiming at achieving pixel-wise visual\nunderstanding. 
To achieve this goal, we first meticulously curate a mask-based\nregion-text dataset with 724K samples, and then design a vision-language model\nby injecting pixel-level representation into LLM. Specifically, Osprey adopts a\nconvolutional CLIP backbone as the vision encoder and employs a mask-aware\nvisual extractor to extract precise visual mask features from high resolution\ninput. Experimental results demonstrate Osprey's superiority in various region\nunderstanding tasks, showcasing its new capability for pixel-level instruction\ntuning. In particular, Osprey can be integrated with Segment Anything Model\n(SAM) seamlessly to obtain multi-granularity semantics. The source code,\ndataset and demo can be found at https://github.com/CircleRadon/Osprey.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Yuqian Yuan", "Wentong Li", "Jian liu", "Dongqi Tang", "Xinjie Luo", "Chi Qin", "Lei Zhang", "Jianke Zhu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7d6"}, "filepath": "data/2308.13812.png", "tags": [], "_media_type": "image", "_rand": 0.9993536191861212, "arXiv_link": "https://arxiv.org/abs/2308.13812", "other_link": "https://haofei.vip/Dysen-VDM", "title": "Dysen-VDM: Empowering Dynamics-aware Text-to-Video Diffusion with LLMs", "abstract": "Text-to-video (T2V) synthesis has gained increasing attention in the\ncommunity, in which the recently emerged diffusion models (DMs) have\npromisingly shown stronger performance than the past approaches. While existing\nstate-of-the-art DMs are competent to achieve high-resolution video generation,\nthey may largely suffer from key limitations (e.g., action occurrence\ndisorders, crude video motions) with respect to the intricate temporal dynamics\nmodeling, one of the crux of video synthesis. In this work, we investigate\nstrengthening the awareness of video dynamics for DMs, for high-quality T2V\ngeneration. Inspired by human intuition, we design an innovative dynamic scene\nmanager (dubbed as Dysen) module, which includes (step-1) extracting from input\ntext the key actions with proper time-order arrangement, (step-2) transforming\nthe action schedules into the dynamic scene graph (DSG) representations, and\n(step-3) enriching the scenes in the DSG with sufficient and reasonable\ndetails. Taking advantage of the existing powerful LLMs (e.g., ChatGPT) via\nin-context learning, Dysen realizes (nearly) human-level temporal dynamics\nunderstanding. Finally, the resulting video DSG with rich action scene details\nis encoded as fine-grained spatio-temporal features, integrated into the\nbackbone T2V DM for video generating. Experiments on popular T2V datasets\nsuggest that our Dysen-VDM consistently outperforms prior arts with significant\nmargins, especially in scenarios with complex actions. 
Codes at\nhttps://haofei.vip/Dysen-VDM", "keywords": ["Image and video generation and manipulation", "Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Hao Fei", "Shengqiong Wu", "Wei Ji", "Hanwang Zhang", "Tat-seng Chua"], "category_name": "Artificial Intelligence", "all_categories": ["Artificial Intelligence", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7d7"}, "filepath": "data/2311.16918v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990488689487192, "arXiv_link": "https://arxiv.org/abs/2311.16918v1", "other_link": "https://lingtengqiu.github.io/RichDreamer/.", "title": "RichDreamer: A Generalizable Normal-Depth Diffusion Model for Detail Richness in Text-to-3D", "abstract": "Lifting 2D diffusion for 3D generation is a challenging problem due to the\nlack of geometric prior and the complex entanglement of materials and lighting\nin natural images. Existing methods have shown promise by first creating the\ngeometry through score-distillation sampling (SDS) applied to rendered surface\nnormals, followed by appearance modeling. However, relying on a 2D RGB\ndiffusion model to optimize surface normals is suboptimal due to the\ndistribution discrepancy between natural images and normals maps, leading to\ninstability in optimization. In this paper, recognizing that the normal and\ndepth information effectively describe scene geometry and be automatically\nestimated from images, we propose to learn a generalizable Normal-Depth\ndiffusion model for 3D generation. We achieve this by training on the\nlarge-scale LAION dataset together with the generalizable image-to-depth and\nnormal prior models. In an attempt to alleviate the mixed illumination effects\nin the generated materials, we introduce an albedo diffusion model to impose\ndata-driven constraints on the albedo component. Our experiments show that when\nintegrated into existing text-to-3D pipelines, our models significantly enhance\nthe detail richness, achieving state-of-the-art results. Our project page is\nhttps://lingtengqiu.github.io/RichDreamer/.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Lingteng Qiu", "Guanying Chen", "Xiaodong Gu", "Qi Zuo", "Mutian Xu", "Yushuang Wu", "Weihao Yuan", "Zilong Dong", "Liefeng Bo", "Xiaoguang Han"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7d8"}, "filepath": "data/2403.11674.png", "tags": [], "_media_type": "image", "_rand": 0.9995946933591452, "arXiv_link": "https://arxiv.org/abs/2403.11674", "other_link": "", "title": "Towards Generalizing to Unseen Domains with Few Labels", "abstract": "We approach the challenge of addressing semi-supervised domain generalization\n(SSDG). Specifically, our aim is to obtain a model that learns\ndomain-generalizable features by leveraging a limited subset of labelled data\nalongside a substantially larger pool of unlabeled data. Existing domain\ngeneralization (DG) methods which are unable to exploit unlabeled data perform\npoorly compared to semi-supervised learning (SSL) methods under SSDG setting.\nNevertheless, SSL methods have considerable room for performance improvement\nwhen compared to fully-supervised DG training. 
To tackle this underexplored,\nyet highly practical problem of SSDG, we make the following core contributions.\nFirst, we propose a feature-based conformity technique that matches the\nposterior distributions from the feature space with the pseudo-label from the\nmodel's output space. Second, we develop a semantics alignment loss to learn\nsemantically-compatible representations by regularizing the semantic structure\nin the feature space. Our method is plug-and-play and can be readily integrated\nwith different SSL-based SSDG baselines without introducing any additional\nparameters. Extensive experimental results across five challenging DG\nbenchmarks with four strong SSL baselines suggest that our method provides\nconsistent and notable gains in two different SSDG settings.", "keywords": ["Low-level vision", "Deep learning architectures and techniques"], "authors_list": ["Chamuditha Jayanga Galappaththige", "Sanoojan Baliah", "Malitha Gunawardhana", "Muhammad Haris Khan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7d9"}, "filepath": "data/2403.17094.png", "tags": [], "_media_type": "image", "_rand": 0.999359122410084, "arXiv_link": "https://arxiv.org/abs/2403.17094", "other_link": "", "title": "SynFog: A Photo-realistic Synthetic Fog Dataset based on End-to-end Imaging Simulation for Advancing Real-World Defogging in Autonomous Driving", "abstract": "To advance research in learning-based defogging algorithms, various synthetic\nfog datasets have been developed. However, existing datasets created using the\nAtmospheric Scattering Model (ASM) or real-time rendering engines often\nstruggle to produce photo-realistic foggy images that accurately mimic the\nactual imaging process. This limitation hinders the effective generalization of\nmodels from synthetic to real data. In this paper, we introduce an end-to-end\nsimulation pipeline designed to generate photo-realistic foggy images. This\npipeline comprehensively considers the entire physically-based foggy scene\nimaging process, closely aligning with real-world image capture methods. Based\non this pipeline, we present a new synthetic fog dataset named SynFog, which\nfeatures both sky light and active lighting conditions, as well as three levels\nof fog density. 
Experimental results demonstrate that models trained on SynFog\nexhibit superior performance in visual perception and detection accuracy\ncompared to others when applied to real-world foggy images.", "keywords": ["Computational imaging and physics-based vision"], "authors_list": ["Yiming Xie", "Henglu Wei", "Zhenyi Liu", "Xiaoyu Wang", "Xiangyang Ji"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7da"}, "filepath": "data/2404.16123.png", "tags": [], "_media_type": "image", "_rand": 0.9990774731627236, "arXiv_link": "https://arxiv.org/abs/2404.16123", "other_link": "", "title": "FairDeDup: Detecting and Mitigating Vision-Language Fairness Disparities in Semantic Dataset Deduplication", "abstract": "Recent dataset deduplication techniques have demonstrated that content-aware\ndataset pruning can dramatically reduce the cost of training Vision-Language\nPretrained (VLP) models without significant performance losses compared to\ntraining on the original dataset. These results have been based on pruning\ncommonly used image-caption datasets collected from the web -- datasets that\nare known to harbor harmful social biases that may then be codified in trained\nmodels. In this work, we evaluate how deduplication affects the prevalence of\nthese biases in the resulting trained models and introduce an easy-to-implement\nmodification to the recent SemDeDup algorithm that can reduce the negative\neffects that we observe. When examining CLIP-style models trained on\ndeduplicated variants of LAION-400M, we find our proposed FairDeDup algorithm\nconsistently leads to improved fairness metrics over SemDeDup on the FairFace\nand FACET datasets while maintaining zero-shot performance on CLIP benchmarks.", "keywords": ["Vision applications for social good and ethics"], "authors_list": ["Eric Slyman", "Stefan Lee", "Scott Cohen", "Kushal Kafle"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7db"}, "filepath": "data/2311.15707.png", "tags": [], "_media_type": "image", "_rand": 0.9997910740646627, "arXiv_link": "https://arxiv.org/abs/2311.15707", "other_link": "", "title": "SAM-6D: Segment Anything Model Meets Zero-Shot 6D Object Pose Estimation", "abstract": "Zero-shot 6D object pose estimation involves the detection of novel objects\nwith their 6D poses in cluttered scenes, presenting significant challenges for\nmodel generalizability. Fortunately, the recent Segment Anything Model (SAM)\nhas showcased remarkable zero-shot transfer performance, which provides a\npromising solution to tackle this task. Motivated by this, we introduce SAM-6D,\na novel framework designed to realize the task through two steps, including\ninstance segmentation and pose estimation. Given the target objects, SAM-6D\nemploys two dedicated sub-networks, namely Instance Segmentation Model (ISM)\nand Pose Estimation Model (PEM), to perform these steps on cluttered RGB-D\nimages. 
ISM takes SAM as an advanced starting point to generate all possible\nobject proposals and selectively preserves valid ones through meticulously\ncrafted object matching scores in terms of semantics, appearance and geometry.\nBy treating pose estimation as a partial-to-partial point matching problem, PEM\nperforms a two-stage point matching process featuring a novel design of\nbackground tokens to construct dense 3D-3D correspondence, ultimately yielding\nthe pose estimates. Without bells and whistles, SAM-6D outperforms the existing\nmethods on the seven core datasets of the BOP Benchmark for both instance\nsegmentation and pose estimation of novel objects.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Jiehong Lin", "lihua liu", "Dekun Lu", "Kui Jia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7dc"}, "filepath": "data/2404.05426.png", "tags": [], "_media_type": "image", "_rand": 0.9994584826346363, "arXiv_link": "https://arxiv.org/abs/2404.05426", "other_link": "", "title": "Test-Time Zero-Shot Temporal Action Localization", "abstract": "Zero-Shot Temporal Action Localization (ZS-TAL) seeks to identify and locate\nactions in untrimmed videos unseen during training. Existing ZS-TAL methods\ninvolve fine-tuning a model on a large amount of annotated training data. While\neffective, training-based ZS-TAL approaches assume the availability of labeled\ndata for supervised learning, which can be impractical in some applications.\nFurthermore, the training process naturally induces a domain bias into the\nlearned model, which may adversely affect the model's generalization ability to\narbitrary videos. These considerations prompt us to approach the ZS-TAL problem\nfrom a radically novel perspective, relaxing the requirement for training data.\nTo this aim, we introduce a novel method that performs Test-Time adaptation for\nTemporal Action Localization (T3AL). In a nutshell, T3AL adapts a pre-trained\nVision and Language Model (VLM). T3AL operates in three steps. First, a\nvideo-level pseudo-label of the action category is computed by aggregating\ninformation from the entire video. Then, action localization is performed\nadopting a novel procedure inspired by self-supervised learning. Finally,\nframe-level textual descriptions extracted with a state-of-the-art captioning\nmodel are employed for refining the action region proposals. We validate the\neffectiveness of T3AL by conducting experiments on the THUMOS14 and the\nActivityNet-v1.3 datasets. 
Our results demonstrate that T3AL significantly\noutperforms zero-shot baselines based on state-of-the-art VLMs, confirming the\nbenefit of a test-time adaptation approach.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Benedetta Liberatori", "Alessandro Conti", "Paolo Rota", "Yiming Wang", "Elisa Ricci"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7dd"}, "filepath": "data/2307.16897.png", "tags": [], "_media_type": "image", "_rand": 0.9991317979159454, "arXiv_link": "https://arxiv.org/abs/2307.16897", "other_link": "", "title": "DiVa-360: The Dynamic Visual Dataset for Immersive Neural Fields", "abstract": "Advances in neural fields are enabling high-fidelity capture of the shape and\nappearance of dynamic 3D scenes. However, their capabilities lag behind those\noffered by conventional representations such as 2D videos because of\nalgorithmic challenges and the lack of large-scale multi-view real-world\ndatasets. We address the dataset limitation with DiVa-360, a real-world 360\ndynamic visual dataset that contains synchronized high-resolution and\nlong-duration multi-view video sequences of table-scale scenes captured using a\ncustomized low-cost system with 53 cameras. It contains 21 object-centric\nsequences categorized by different motion types, 25 intricate hand-object\ninteraction sequences, and 8 long-duration sequences for a total of 17.4 M\nimage frames. In addition, we provide foreground-background segmentation masks,\nsynchronized audio, and text descriptions. We benchmark the state-of-the-art\ndynamic neural field methods on DiVa-360 and provide insights about existing\nmethods and future challenges on long-duration neural field capture.", "keywords": ["Scene analysis and understanding", "Efficient and scalable vision"], "authors_list": ["Cheng-You Lu", "Peisen Zhou", "Angela Xing", "Chandradeep Pokhariya", "Arnab Dey", "Ishaan Shah", "Rugved Mavidipalli", "Dylan Hu", "Andrew Comport", "Kefan Chen", "Srinath Sridhar"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7de"}, "filepath": "data/2311.17461v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997505180349843, "arXiv_link": "https://arxiv.org/abs/2311.17461v1", "other_link": "https://github.com/csxmli2016/w-plus-adapter}.", "title": "When StyleGAN Meets Stable Diffusion: a ${\\mathcal{W}_+}$ Adapter for Personalized Image Generation", "abstract": "Text-to-image diffusion models have remarkably excelled in producing diverse,\nhigh-quality, and photo-realistic images. This advancement has spurred a\ngrowing interest in incorporating specific identities into generated content.\nMost current methods employ an inversion approach to embed a target visual\nconcept into the text embedding space using a single reference image. However,\nthe newly synthesized faces either closely resemble the reference image in\nterms of facial attributes, such as expression, or exhibit a reduced capacity\nfor identity preservation. 
Text descriptions intended to guide the facial\nattributes of the synthesized face may fall short, owing to the intricate\nentanglement of identity information with identity-irrelevant facial attributes\nderived from the reference image. To address these issues, we present the novel\nuse of the extended StyleGAN embedding space $\\mathcal{W}_+$, to achieve\nenhanced identity preservation and disentanglement for diffusion models. By\naligning this semantically meaningful human face latent space with\ntext-to-image diffusion models, we succeed in maintaining high fidelity in\nidentity preservation, coupled with the capacity for semantic editing.\nAdditionally, we propose new training objectives to balance the influences of\nboth prompt and identity conditions, ensuring that the identity-irrelevant\nbackground remains unaffected during facial attribute modifications. Extensive\nexperiments reveal that our method adeptly generates personalized text-to-image\noutputs that are not only compatible with prompt descriptions but also amenable\nto common StyleGAN editing directions in diverse settings. Our source code will\nbe available at \\url{https://github.com/csxmli2016/w-plus-adapter}.", "keywords": [], "authors_list": ["Xiaoming Li", "Xinyu Hou", "Chen Change Loy"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7df"}, "filepath": "data/2311.17396.png", "tags": [], "_media_type": "image", "_rand": 0.9992083754851919, "arXiv_link": "https://arxiv.org/abs/2311.17396", "other_link": "", "title": "Spectral and Polarization Vision: Spectro-polarimetric Real-world Dataset", "abstract": "Image datasets are essential not only in validating existing methods in\ncomputer vision but also in developing new methods. Most existing image\ndatasets focus on trichromatic intensity images to mimic human vision. However,\npolarization and spectrum, the wave properties of light that animals in harsh\nenvironments and with limited brain capacity often rely on, remain\nunderrepresented in existing datasets. Although spectro-polarimetric datasets\nexist, these datasets have insufficient object diversity, limited illumination\nconditions, linear-only polarization data, and inadequate image count. Here, we\nintroduce two spectro-polarimetric datasets: trichromatic Stokes images and\nhyperspectral Stokes images. These novel datasets encompass both linear and\ncircular polarization; they introduce multiple spectral channels; and they\nfeature a broad selection of real-world scenes. With our dataset in hand, we\nanalyze the spectro-polarimetric image statistics, develop efficient\nrepresentations of such high-dimensional data, and evaluate spectral dependency\nof shape-from-polarization methods. 
As such, the proposed dataset promises a\nfoundation for data-driven spectro-polarimetric imaging and vision research.\nDataset and code will be publicly available.", "keywords": ["Computational imaging and physics-based vision"], "authors_list": ["Yujin Jeon", "Eunsue Choi", "Youngchan Kim", "Yunseong Moon", "Khalid Omer", "Felix Heide", "Seung-Hwan Baek"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7e0"}, "filepath": "data/2306.08736.png", "tags": [], "_media_type": "image", "_rand": 0.9990310509678757, "arXiv_link": "https://arxiv.org/abs/2306.08736", "other_link": "https://github.com/LinfengYuan1997/Losh.", "title": "LoSh: Long-Short Text Joint Prediction Network for Referring Video Object Segmentation", "abstract": "Referring video object segmentation (RVOS) aims to segment the target\ninstance referred by a given text expression in a video clip. The text\nexpression normally contains sophisticated description of the instance's\nappearance, action, and relation with others. It is therefore rather difficult\nfor a RVOS model to capture all these attributes correspondingly in the video;\nin fact, the model often favours more on the action- and relation-related\nvisual attributes of the instance. This can end up with partial or even\nincorrect mask prediction of the target instance. We tackle this problem by\ntaking a subject-centric short text expression from the original long text\nexpression. The short one retains only the appearance-related information of\nthe target instance so that we can use it to focus the model's attention on the\ninstance's appearance. We let the model make joint predictions using both long\nand short text expressions; and insert a long-short cross-attention module to\ninteract the joint features and a long-short predictions intersection loss to\nregulate the joint predictions. Besides the improvement on the linguistic part,\nwe also introduce a forward-backward visual consistency loss, which utilizes\noptical flows to warp visual features between the annotated frames and their\ntemporal neighbors for consistency. We build our method on top of two state of\nthe art pipelines. Extensive experiments on A2D-Sentences, Refer-YouTube-VOS,\nJHMDB-Sentences and Refer-DAVIS17 show impressive improvements of our\nmethod.Code is available at https://github.com/LinfengYuan1997/Losh.", "keywords": ["Multimodal models and vision-language models", "Image and video generation and manipulation"], "authors_list": ["Linfeng Yuan", "Miaojing Shi", "Zijie Yue", "Qijun Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7e1"}, "filepath": "data/2404.00130.png", "tags": [], "_media_type": "image", "_rand": 0.9993561903385239, "arXiv_link": "https://arxiv.org/abs/2404.00130", "other_link": "", "title": "FISBe: A real-world benchmark dataset for instance segmentation of long-range thin filamentous structures", "abstract": "Instance segmentation of neurons in volumetric light microscopy images of\nnervous systems enables groundbreaking research in neuroscience by facilitating\njoint functional and morphological analyses of neural circuits at cellular\nresolution. 
Yet said multi-neuron light microscopy data exhibits extremely\nchallenging properties for the task of instance segmentation: Individual\nneurons have long-ranging, thin filamentous and widely branching morphologies,\nmultiple neurons are tightly inter-weaved, and partial volume effects, uneven\nillumination and noise inherent to light microscopy severely impede local\ndisentangling as well as long-range tracing of individual neurons. These\nproperties reflect a current key challenge in machine learning research, namely\nto effectively capture long-range dependencies in the data. While respective\nmethodological research is buzzing, to date methods are typically benchmarked\non synthetic datasets. To address this gap, we release the FlyLight Instance\nSegmentation Benchmark (FISBe) dataset, the first publicly available\nmulti-neuron light microscopy dataset with pixel-wise annotations. In addition,\nwe define a set of instance segmentation metrics for benchmarking that we\ndesigned to be meaningful with regard to downstream analyses. Lastly, we\nprovide three baselines to kick off a competition that we envision to both\nadvance the field of machine learning regarding methodology for capturing\nlong-range data dependencies, and facilitate scientific discovery in basic\nneuroscience.", "keywords": ["Medical imaging and biological vision"], "authors_list": ["Lisa Mais", "Peter Hirsch", "Claire Managan", "Ramya Kandarpa", "Josef Rumberger", "Annika Reinke", "Lena Maier-Hein", "Gudrun Ihrke", "Dagmar Kainmueller"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7e2"}, "filepath": "data/2307.04760.png", "tags": [], "_media_type": "image", "_rand": 0.9992465575719859, "arXiv_link": "https://arxiv.org/abs/2307.04760", "other_link": "http://vision.cs.utexas.edu/projects/ego_av_corr.", "title": "Learning Spatial Features from Audio-Visual Correspondence in Egocentric Videos", "abstract": "We propose a self-supervised method for learning representations based on\nspatial audio-visual correspondences in egocentric videos. Our method uses a\nmasked auto-encoding framework to synthesize masked binaural (multi-channel)\naudio through the synergy of audio and vision, thereby learning useful spatial\nrelationships between the two modalities. We use our pretrained features to\ntackle two downstream video tasks requiring spatial understanding in social\nscenarios: active speaker detection and spatial audio denoising. 
Through\nextensive experiments, we show that our features are generic enough to improve\nover multiple state-of-the-art baselines on both tasks on two challenging\negocentric video datasets that offer binaural audio, EgoCom and EasyCom.\nProject: http://vision.cs.utexas.edu/projects/ego_av_corr.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models", "Scene analysis and understanding"], "authors_list": ["Sagnik Majumder", "Ziad Al-Halah", "Kristen Grauman"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Sound", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7e3"}, "filepath": "data/2404.01120.png", "tags": [], "_media_type": "image", "_rand": 0.9999897618759411, "arXiv_link": "https://arxiv.org/abs/2404.01120", "other_link": "", "title": "Motion Blur Decomposition with Cross-shutter Guidance", "abstract": "Motion blur is a frequently observed image artifact, especially under\ninsufficient illumination where exposure time has to be prolonged so as to\ncollect more photons for a bright enough image. Rather than simply removing\nsuch blurring effects, recent researches have aimed at decomposing a blurry\nimage into multiple sharp images with spatial and temporal coherence. Since\nmotion blur decomposition itself is highly ambiguous, priors from neighbouring\nframes or human annotation are usually needed for motion disambiguation. In\nthis paper, inspired by the complementary exposure characteristics of a global\nshutter (GS) camera and a rolling shutter (RS) camera, we propose to utilize\nthe ordered scanline-wise delay in a rolling shutter image to robustify motion\ndecomposition of a single blurry image. To evaluate this novel dual imaging\nsetting, we construct a triaxial system to collect realistic data, as well as a\ndeep network architecture that explicitly addresses temporal and contextual\ninformation through reciprocal branches for cross-shutter motion blur\ndecomposition. Experiment results have verified the effectiveness of our\nproposed algorithm, as well as the validity of our dual imaging setting.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xiang Ji", "Haiyang Jiang", "Yinqiang Zheng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7e4"}, "filepath": "data/2403.08161.png", "tags": [], "_media_type": "image", "_rand": 0.999595825521092, "arXiv_link": "https://arxiv.org/abs/2403.08161", "other_link": "", "title": "LAFS: Landmark-based Facial Self-supervised Learning for Face Recognition", "abstract": "In this work we focus on learning facial representations that can be adapted\nto train effective face recognition models, particularly in the absence of\nlabels. Firstly, compared with existing labelled face datasets, a vastly larger\nmagnitude of unlabeled faces exists in the real world. We explore the learning\nstrategy of these unlabeled facial images through self-supervised pretraining\nto transfer generalized face recognition performance. 
Moreover, motivated by\none recent finding, that is, the face saliency area is critical for face\nrecognition, in contrast to utilizing random cropped blocks of images for\nconstructing augmentations in pretraining, we utilize patches localized by\nextracted facial landmarks. This enables our method - namely LAndmark-based\nFacial Self-supervised learning (LAFS), to learn key representation that is more\ncritical for face recognition. We also incorporate two landmark-specific\naugmentations which introduce more diversity of landmark information to further\nregularize the learning. With learned landmark-based facial representations, we\nfurther adapt the representation for face recognition with regularization\nmitigating variations in landmark positions. Our method achieves significant\nimprovement over the state-of-the-art on multiple face recognition benchmarks,\nespecially on more challenging few-shot scenarios.", "keywords": ["Biometrics and human analysis", "Deep learning architectures and techniques"], "authors_list": ["Zhonglin Sun", "Chen Feng", "Ioannis Patras", "Georgios Tzimiropoulos"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7e5"}, "filepath": "data/2312.08007.png", "tags": [], "_media_type": "image", "_rand": 0.9994895010447686, "arXiv_link": "https://arxiv.org/abs/2312.08007", "other_link": "https://github.com/Rubics-Xuan/MRES", "title": "Unveiling Parts Beyond Objects: Towards Finer-Granularity Referring Expression Segmentation", "abstract": "Referring expression segmentation (RES) aims at segmenting the foreground\nmasks of the entities that match the descriptive natural language expression.\nPrevious datasets and methods for classic RES task heavily rely on the prior\nassumption that one expression must refer to object-level targets. In this\npaper, we take a step further to finer-grained part-level RES task. To promote\nthe object-level RES task towards finer-grained vision-language understanding,\nwe put forward a new multi-granularity referring expression segmentation (MRES)\ntask and construct an evaluation benchmark called RefCOCOm by manual\nannotations. By employing our automatic model-assisted data engine, we build\nthe largest visual grounding dataset namely MRES-32M, which comprises over\n32.2M high-quality masks and captions on the provided 1M images. Besides, a\nsimple yet strong model named UniRES is designed to accomplish the unified\nobject-level and part-level grounding task. Extensive experiments on our\nRefCOCOm for MRES and three datasets (i.e., RefCOCO(+/g)) for classic RES task\ndemonstrate the superiority of our method over previous state-of-the-art\nmethods. 
To foster future research into fine-grained visual grounding, our\nbenchmark RefCOCOm, the MRES-32M dataset and model UniRES will be publicly\navailable at https://github.com/Rubics-Xuan/MRES", "keywords": [], "authors_list": ["Wenxuan Wang", "Tongtian Yue", "Yisi Zhang", "Longteng Guo", "Xingjian He", "Xinlong Wang", "Jing Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7e6"}, "filepath": "data/2312.04328.png", "tags": [], "_media_type": "image", "_rand": 0.9993797979342137, "arXiv_link": "https://arxiv.org/abs/2312.04328", "other_link": "", "title": "Event-based Visible and Infrared Fusion via Multi-task Collaboration", "abstract": "Infrared and visible image fusion aims at generating a fused image containing\nthe intensity and detail information of source images, and the key issue is\neffectively measuring and integrating the complementary information of\nmulti-modality images from the same scene. Existing methods mostly adopt a\nsimple weight in the loss function to decide the information retention of each\nmodality rather than adaptively measuring complementary information for\ndifferent image pairs. In this study, we propose a multi-scale dual attention\n(MDA) framework for infrared and visible image fusion, which is designed to\nmeasure and integrate complementary information in both structure and loss\nfunction at the image and patch level. In our method, the residual downsample\nblock decomposes source images into three scales first. Then, dual attention\nfusion block integrates complementary information and generates a spatial and\nchannel attention map at each scale for feature fusion. Finally, the output\nimage is reconstructed by the residual reconstruction block. Loss function\nconsists of image-level, feature-level and patch-level three parts, of which\nthe calculation of the image-level and patch-level two parts are based on the\nweights generated by the complementary information measurement. Indeed, to\nconstrain the pixel intensity distribution between the output and infrared\nimage, a style loss is added. Our fusion results perform robust and informative\nacross different scenarios. Qualitative and quantitative results on two\ndatasets illustrate that our method is able to preserve both thermal radiation\nand detailed information from two modalities and achieve comparable results\ncompared with the other state-of-the-art methods. Ablation experiments show the\neffectiveness of our information integration architecture and adaptively\nmeasure complementary information retention in the loss function.", "keywords": ["Low-level vision", "Multimodal models and vision-language models"], "authors_list": ["Mengyue Geng", "Lin Zhu", "Lizhi Wang", "Wei Zhang", "Ruiqin Xiong", "Yonghong Tian"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7e7"}, "filepath": "data/2403.08436.png", "tags": [], "_media_type": "image", "_rand": 0.999731558889318, "arXiv_link": "https://arxiv.org/abs/2403.08436", "other_link": "", "title": "PFStorer: Personalized Face Restoration and Super-Resolution", "abstract": "Recent developments in face restoration have achieved remarkable results in\nproducing high-quality and lifelike outputs. 
The stunning results however often\nfail to be faithful with respect to the identity of the person as the models\nlack necessary context. In this paper, we explore the potential of personalized\nface restoration with diffusion models. In our approach a restoration model is\npersonalized using a few images of the identity, leading to tailored\nrestoration with respect to the identity while retaining fine-grained details.\nBy using independent trainable blocks for personalization, the rich prior of a\nbase restoration model can be exploited to its fullest. To avoid the model\nrelying on parts of identity left in the conditioning low-quality images, a\ngenerative regularizer is employed. With a learnable parameter, the model\nlearns to balance between the details generated based on the input image and\nthe degree of personalization. Moreover, we improve the training pipeline of\nface restoration models to enable an alignment-free approach. We showcase the\nrobust capabilities of our approach in several real-world scenarios with\nmultiple identities, demonstrating our method's ability to generate\nfine-grained details with faithful restoration. In the user study we evaluate\nthe perceptual quality and faithfulness of the generated details, with our\nmethod being voted best 61% of the time compared to the second best with 25% of\nthe votes.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Tuomas Varanka", "Tapani Toivonen", "Soumya Tripathy", "Guoying Zhao", "Erman Acar"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7e8"}, "filepath": "data/2403.05086.png", "tags": [], "_media_type": "image", "_rand": 0.9996857239487136, "arXiv_link": "https://arxiv.org/abs/2403.05086", "other_link": "https://github.com/Youngju-Na/UFORecon.", "title": "UFORecon: Generalizable Sparse-View Surface Reconstruction from Arbitrary and Unfavorable Sets", "abstract": "Generalizable neural implicit surface reconstruction aims to obtain an\naccurate underlying geometry given a limited number of multi-view images from\nunseen scenes. However, existing methods select only informative and relevant\nviews using predefined scores for training and testing phases. This constraint\nrenders the model impractical in real-world scenarios, where the availability\nof favorable combinations cannot always be ensured. We introduce and validate a\nview-combination score to indicate the effectiveness of the input view\ncombination. We observe that previous methods output degenerate solutions under\narbitrary and unfavorable sets. Building upon this finding, we propose\nUFORecon, a robust view-combination generalizable surface reconstruction\nframework. To achieve this, we apply cross-view matching transformers to model\ninteractions between source images and build correlation frustums to capture\nglobal correlations. Additionally, we explicitly encode pairwise feature\nsimilarities as view-consistent priors. Our proposed framework significantly\noutperforms previous methods in terms of view-combination generalizability and\nalso in the conventional generalizable protocol trained with favorable\nview-combinations. 
The code is available at\nhttps://github.com/Youngju-Na/UFORecon.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Youngju Na", "Woo Jae Kim", "Kyu Han", "Suhyeon Ha", "Sung-Eui Yoon"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7e9"}, "filepath": "data/2404.12322.png", "tags": [], "_media_type": "image", "_rand": 0.9993080842317766, "arXiv_link": "https://arxiv.org/abs/2404.12322", "other_link": "https://plustwo0.github.io/project-face-landmarker.", "title": "Generalizable Face Landmarking Guided by Conditional Face Warping", "abstract": "As a significant step for human face modeling, editing, and generation, face\nlandmarking aims at extracting facial keypoints from images. A generalizable\nface landmarker is required in practice because real-world facial images, e.g.,\nthe avatars in animations and games, are often stylized in various ways.\nHowever, achieving generalizable face landmarking is challenging due to the\ndiversity of facial styles and the scarcity of labeled stylized faces. In this\nstudy, we propose a simple but effective paradigm to learn a generalizable face\nlandmarker based on labeled real human faces and unlabeled stylized faces. Our\nmethod learns the face landmarker as the key module of a conditional face\nwarper. Given a pair of real and stylized facial images, the conditional face\nwarper predicts a warping field from the real face to the stylized one, in\nwhich the face landmarker predicts the ending points of the warping field and\nprovides us with high-quality pseudo landmarks for the corresponding stylized\nfacial images. Applying an alternating optimization strategy, we learn the face\nlandmarker to minimize $i)$ the discrepancy between the stylized faces and the\nwarped real ones and $ii)$ the prediction errors of both real and pseudo\nlandmarks. Experiments on various datasets show that our method outperforms\nexisting state-of-the-art domain adaptation methods in face landmarking tasks,\nleading to a face landmarker with better generalizability. Code is available at\nhttps://plustwo0.github.io/project-face-landmarker.", "keywords": [], "authors_list": ["Jiayi Liang", "Haotian Liu", "Hongteng Xu", "Dixin Luo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7ea"}, "filepath": "data/2402.08657.png", "tags": [], "_media_type": "image", "_rand": 0.9999005840718281, "arXiv_link": "https://arxiv.org/abs/2402.08657", "other_link": "", "title": "PIN: Positional Insert Unlocks Object Localisation Abilities in VLMs", "abstract": "Vision-Language Models (VLMs), such as Flamingo and GPT-4V, have shown\nimmense potential by integrating large language models with vision systems.\nNevertheless, these models face challenges in the fundamental computer vision\ntask of object localisation, due to their training on multimodal data\ncontaining mostly captions without explicit spatial grounding. While it is\npossible to construct custom, supervised training pipelines with bounding box\nannotations that integrate with VLMs, these result in specialized and\nhard-to-scale models. 
In this paper, we aim to explore the limits of\ncaption-based VLMs and instead propose to tackle the challenge in a simpler\nmanner by i) keeping the weights of a caption-based VLM frozen and ii) not\nusing any supervised detection data. To this end, we introduce an\ninput-agnostic Positional Insert (PIN), a learnable spatial prompt, containing\na minimal set of parameters that are slid inside the frozen VLM, unlocking\nobject localisation capabilities. Our PIN module is trained with a simple\nnext-token prediction task on synthetic data without requiring the introduction\nof new output heads. Our experiments demonstrate strong zero-shot localisation\nperformances on a variety of images, including Pascal VOC, COCO, LVIS, and\ndiverse images like paintings or cartoons.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Michael Dorkenwald", "Nimrod Barazani", "Cees G. M. Snoek", "Yuki Asano"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7eb"}, "filepath": "data/2402.19422.png", "tags": [], "_media_type": "image", "_rand": 0.9997255096994729, "arXiv_link": "https://arxiv.org/abs/2402.19422", "other_link": "", "title": "PEM: Prototype-based Efficient MaskFormer for Image Segmentation", "abstract": "Recent transformer-based architectures have shown impressive results in the\nfield of image segmentation. Thanks to their flexibility, they obtain\noutstanding performance in multiple segmentation tasks, such as semantic and\npanoptic, under a single unified framework. To achieve such impressive\nperformance, these architectures employ intensive operations and require\nsubstantial computational resources, which are often not available, especially\non edge devices. To fill this gap, we propose Prototype-based Efficient\nMaskFormer (PEM), an efficient transformer-based architecture that can operate\nin multiple segmentation tasks. PEM proposes a novel prototype-based\ncross-attention which leverages the redundancy of visual features to restrict\nthe computation and improve the efficiency without harming the performance. In\naddition, PEM introduces an efficient multi-scale feature pyramid network,\ncapable of extracting features that have high semantic content in an efficient\nway, thanks to the combination of deformable convolutions and context-based\nself-modulation. We benchmark the proposed PEM architecture on two tasks,\nsemantic and panoptic segmentation, evaluated on two different datasets,\nCityscapes and ADE20K. 
PEM demonstrates outstanding performance on every task\nand dataset, outperforming task-specific architectures while being comparable\nand even better than computationally-expensive baselines.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Niccol\u00f2 Cavagnero", "Gabriele Rosi", "Claudia Cuttano", "Francesca Pistilli", "Marco Ciccone", "Giuseppe Averta", "Fabio Cermelli"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7ec"}, "filepath": "data/2403.09230.png", "tags": [], "_media_type": "image", "_rand": 0.9992727627437766, "arXiv_link": "https://arxiv.org/abs/2403.09230", "other_link": "", "title": "Improving Distant 3D Object Detection Using 2D Box Supervision", "abstract": "Improving the detection of distant 3d objects is an important yet challenging\ntask. For camera-based 3D perception, the annotation of 3d bounding boxes relies\nheavily on LiDAR for accurate depth information. As such, the distance of\nannotation is often limited due to the sparsity of LiDAR points on distant\nobjects, which hampers the capability of existing detectors for long-range\nscenarios. We address this challenge by considering only 2D box supervision for\ndistant objects since they are easy to annotate. We propose LR3D, a framework\nthat learns to recover the missing depth of distant objects. LR3D adopts an\nimplicit projection head to learn the generation of mapping between 2D boxes\nand depth using the 3D supervision on close objects. This mapping allows the\ndepth estimation of distant objects conditioned on their 2D boxes, making\nlong-range 3D detection with 2D supervision feasible. Experiments show that\nwithout distant 3D annotations, LR3D allows camera-based methods to detect\ndistant objects (over 200m) with comparable accuracy to full 3D supervision.\nOur framework is general, and could widely benefit 3D detection methods to a\nlarge extent.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Zetong Yang", "Zhiding Yu", "Christopher Choy", "Renhao Wang", "Anima Anandkumar", "Jose M. Alvarez"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7ed"}, "filepath": "data/2312.17655.png", "tags": [], "_media_type": "image", "_rand": 0.9994302372082404, "arXiv_link": "https://arxiv.org/abs/2312.17655", "other_link": "", "title": "Visual Point Cloud Forecasting enables Scalable Autonomous Driving", "abstract": "In contrast to extensive studies on general vision, pre-training for scalable\nvisual autonomous driving remains seldom explored. Visual autonomous driving\napplications require features encompassing semantics, 3D geometry, and temporal\ninformation simultaneously for joint perception, prediction, and planning,\nposing dramatic challenges for pre-training. To resolve this, we bring up a new\npre-training task termed as visual point cloud forecasting - predicting future\npoint clouds from historical visual input. The key merit of this task captures\nthe synergic learning of semantics, 3D structures, and temporal dynamics. Hence\nit shows superiority in various downstream tasks. To cope with this new\nproblem, we present ViDAR, a general model to pre-train downstream visual\nencoders. 
It first extracts historical embeddings by the encoder. These\nrepresentations are then transformed to 3D geometric space via a novel Latent\nRendering operator for future point cloud prediction. Experiments show\nsignificant gain in downstream tasks, e.g., 3.1% NDS on 3D detection, ~10%\nerror reduction on motion forecasting, and ~15% less collision rate on\nplanning.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Zetong Yang", "Li Chen", "Yanan Sun", "Hongyang Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7ee"}, "filepath": "data/2403.19898.png", "tags": [], "_media_type": "image", "_rand": 0.9992700117678047, "arXiv_link": "https://arxiv.org/abs/2403.19898", "other_link": "https://github.com/htyjers/StrDiffusion.", "title": "Structure Matters: Tackling the Semantic Discrepancy in Diffusion Models for Image Inpainting", "abstract": "Denoising diffusion probabilistic models for image inpainting aim to add the\nnoise to the texture of image during the forward process and recover masked\nregions with unmasked ones of the texture via the reverse denoising process.\nDespite the meaningful semantics generation, the existing arts suffer from the\nsemantic discrepancy between masked and unmasked regions, since the\nsemantically dense unmasked texture fails to be completely degraded while the\nmasked regions turn to the pure noise in diffusion process, leading to the\nlarge discrepancy between them. In this paper, we aim to answer how unmasked\nsemantics guide texture denoising process;together with how to tackle the\nsemantic discrepancy, to facilitate the consistent and meaningful semantics\ngeneration. To this end, we propose a novel structure-guided diffusion model\nnamed StrDiffusion, to reformulate the conventional texture denoising process\nunder structure guidance to derive a simplified denoising objective for image\ninpainting, while revealing: 1) the semantically sparse structure is beneficial\nto tackle semantic discrepancy in early stage, while dense texture generates\nreasonable semantics in late stage; 2) the semantics from unmasked regions\nessentially offer the time-dependent structure guidance for the texture\ndenoising process, benefiting from the time-dependent sparsity of the structure\nsemantics. For the denoising process, a structure-guided neural network is\ntrained to estimate the simplified denoising objective by exploiting the\nconsistency of the denoised structure between masked and unmasked regions.\nBesides, we devise an adaptive resampling strategy as a formal criterion as\nwhether structure is competent to guide the texture denoising process, while\nregulate their semantic correlations. Extensive experiments validate the merits\nof StrDiffusion over the state-of-the-arts. 
Our code is available at\nhttps://github.com/htyjers/StrDiffusion.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Haipeng Liu", "Yang Wang", "Biao Qian", "Meng Wang", "Yong Rui"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7ef"}, "filepath": "data/2311.16117.png", "tags": [], "_media_type": "image", "_rand": 0.9999972275040113, "arXiv_link": "https://arxiv.org/abs/2311.16117", "other_link": "", "title": "Predicated Diffusion: Predicate Logic-Based Attention Guidance for Text-to-Image Diffusion Models", "abstract": "Diffusion models have achieved remarkable results in generating high-quality,\ndiverse, and creative images. However, when it comes to text-based image\ngeneration, they often fail to capture the intended meaning presented in the\ntext. For instance, a specified object may not be generated, an unnecessary\nobject may be generated, and an adjective may alter objects it was not intended\nto modify. Moreover, we found that relationships indicating possession between\nobjects are often overlooked. While users' intentions in text are diverse,\nexisting methods tend to specialize in only some aspects of these. In this\npaper, we propose Predicated Diffusion, a unified framework to express users'\nintentions. We consider that the root of the above issues lies in the text\nencoder, which often focuses only on individual words and neglects the logical\nrelationships between them. The proposed method does not solely rely on the\ntext encoder, but instead, represents the intended meaning in the text as\npropositions using predicate logic and treats the pixels in the attention maps\nas the fuzzy predicates. This enables us to obtain a differentiable loss\nfunction that makes the image fulfill the proposition by minimizing it. When\ncompared to several existing methods, we demonstrated that Predicated Diffusion\ncan generate images that are more faithful to various text prompts, as verified\nby human evaluators and pretrained image-text models.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Kota Sueyoshi", "Takashi Matsubara"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7f0"}, "filepath": "data/2308.02963.png", "tags": [], "_media_type": "image", "_rand": 0.9997916352708747, "arXiv_link": "https://arxiv.org/abs/2308.02963", "other_link": "https://github.com/hanbyel0105/Diff-HMR", "title": "Probabilistic Human Mesh Estimation with Hypothesis Scoring", "abstract": "This work focuses on the problem of reconstructing a 3D human body mesh from\na given 2D image. Despite the inherent ambiguity of the task of human mesh\nrecovery, most existing works have adopted a method of regressing a single\noutput. In contrast, we propose a generative approach framework, called\n\"Diffusion-based Human Mesh Recovery (Diff-HMR)\" that takes advantage of the\ndenoising diffusion process to account for multiple plausible outcomes. During\nthe training phase, the SMPL parameters are diffused from ground-truth\nparameters to random distribution, and Diff-HMR learns the reverse process of\nthis diffusion. 
In the inference phase, the model progressively refines the\ngiven random SMPL parameters into the corresponding parameters that align with\nthe input image. Diff-HMR, being a generative approach, is capable of\ngenerating diverse results for the same input image as the input noise varies.\nWe conduct validation experiments, and the results demonstrate that the\nproposed framework effectively models the inherent ambiguity of the task of\nhuman mesh recovery in a probabilistic manner. The code is available at\nhttps://github.com/hanbyel0105/Diff-HMR", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Yuan Xu", "Xiaoxuan Ma", "Jiajun Su", "Wentao Zhu", "Yu Qiao", "Yizhou Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7f1"}, "filepath": "data/2404.00524.png", "tags": [], "_media_type": "image", "_rand": 0.999914342553989, "arXiv_link": "https://arxiv.org/abs/2404.00524", "other_link": "https://texvocab.github.io/.", "title": "TexVocab: Texture Vocabulary-conditioned Human Avatars", "abstract": "To adequately utilize the available image evidence in multi-view video-based\navatar modeling, we propose TexVocab, a novel avatar representation that\nconstructs a texture vocabulary and associates body poses with texture maps for\nanimation. Given multi-view RGB videos, our method initially back-projects all\nthe available images in the training videos to the posed SMPL surface,\nproducing texture maps in the SMPL UV domain. Then we construct pairs of human\nposes and texture maps to establish a texture vocabulary for encoding dynamic\nhuman appearances under various poses. Unlike the commonly used joint-wise\nmanner, we further design a body-part-wise encoding strategy to learn the\nstructural effects of the kinematic chain. Given a driving pose, we query the\npose feature hierarchically by decomposing the pose vector into several body\nparts and interpolating the texture features for synthesizing fine-grained\nhuman dynamics. Overall, our method is able to create animatable human avatars\nwith detailed and dynamic appearances from RGB videos, and the experiments show\nthat our method outperforms state-of-the-art approaches. The project page can\nbe found at https://texvocab.github.io/.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Yuxiao Liu", "Zhe Li", "Yebin Liu", "Haoqian Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7f2"}, "filepath": "data/2307.09815.png", "tags": [], "_media_type": "image", "_rand": 0.9993537203208072, "arXiv_link": "https://arxiv.org/abs/2307.09815", "other_link": "", "title": "LDP: Language-driven Dual-Pixel Image Defocus Deblurring Network", "abstract": "Recovering sharp images from dual-pixel (DP) pairs with disparity-dependent\nblur is a challenging task.~Existing blur map-based deblurring methods have\ndemonstrated promising results. In this paper, we propose, to the best of our\nknowledge, the first framework that introduces the contrastive language-image\npre-training framework (CLIP) to accurately estimate the blur map from a DP\npair unsupervisedly. 
To achieve this, we first carefully design text prompts to\nenable CLIP to understand blur-related geometric prior knowledge from the DP\npair. Then, we propose a format to input a stereo DP pair to CLIP without any\nfine-tuning, despite the fact that CLIP is pre-trained on monocular images.\nGiven the estimated blur map, we introduce a blur-prior attention block, a\nblur-weighting loss, and a blur-aware loss to recover the all-in-focus image.\nOur method achieves state-of-the-art performance in extensive experiments (see\nFig.~\\ref{fig:teaser}).", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models"], "authors_list": ["Hao Yang", "Liyuan Pan", "Yan Yang", "Richard Hartley", "Miaomiao Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7f3"}, "filepath": "data/2404.00149.png", "tags": [], "_media_type": "image", "_rand": 0.9999133022295698, "arXiv_link": "https://arxiv.org/abs/2404.00149", "other_link": "https://github.com/skmhrk1209/VSRD.", "title": "VSRD: Instance-Aware Volumetric Silhouette Rendering for Weakly Supervised 3D Object Detection", "abstract": "Monocular 3D object detection poses a significant challenge in 3D scene\nunderstanding due to its inherently ill-posed nature in monocular depth\nestimation. Existing methods heavily rely on supervised learning using abundant\n3D labels, typically obtained through expensive and labor-intensive annotation\non LiDAR point clouds. To tackle this problem, we propose a novel weakly\nsupervised 3D object detection framework named VSRD (Volumetric Silhouette\nRendering for Detection) to train 3D object detectors without any 3D\nsupervision but only weak 2D supervision. VSRD consists of multi-view 3D\nauto-labeling and subsequent training of monocular 3D object detectors using\nthe pseudo labels generated in the auto-labeling stage. In the auto-labeling\nstage, we represent the surface of each instance as a signed distance field\n(SDF) and render its silhouette as an instance mask through our proposed\ninstance-aware volumetric silhouette rendering. To directly optimize the 3D\nbounding boxes through rendering, we decompose the SDF of each instance into\nthe SDF of a cuboid and the residual distance field (RDF) that represents the\nresidual from the cuboid. This mechanism enables us to optimize the 3D bounding\nboxes in an end-to-end manner by comparing the rendered instance masks with the\nground truth instance masks. The optimized 3D bounding boxes serve as effective\ntraining data for 3D object detection. We conduct extensive experiments on the\nKITTI-360 dataset, demonstrating that our method outperforms the existing\nweakly supervised 3D object detection methods. 
The code is available at\nhttps://github.com/skmhrk1209/VSRD.", "keywords": [], "authors_list": ["Zihua Liu", "Hiroki Sakuma", "Masatoshi Okutomi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7f4"}, "filepath": "data/2404.08514v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992482249498941, "arXiv_link": "https://arxiv.org/html/2404.08514v2", "other_link": "", "title": "Real-World Mobile Image Denoising Dataset with Efficient Baselines", "abstract": "Despite the significant progress in image denoising, it is still challenging\nto restore fine-scale details while removing noise, especially in extremely\nlow-light environments. Leveraging near-infrared (NIR) images to assist visible\nRGB image denoising shows the potential to address this issue, becoming a\npromising technology. Nonetheless, existing works still struggle with taking\nadvantage of NIR information effectively for real-world image denoising, due to\nthe content inconsistency between NIR-RGB images and the scarcity of real-world\npaired datasets. To alleviate the problem, we propose an efficient Selective\nFusion Module (SFM), which can be plug-and-played into the advanced denoising\nnetworks to merge the deep NIR-RGB features. Specifically, we sequentially\nperform the global and local modulation for NIR and RGB features, and then\nintegrate the two modulated features. Furthermore, we present a Real-world\nNIR-Assisted Image Denoising (Real-NAID) dataset, which covers diverse\nscenarios as well as various noise levels. Extensive experiments on both\nsynthetic and our real-world datasets demonstrate that the proposed method\nachieves better results than state-of-the-art ones.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Roman Flepp", "Andrey Ignatov", "Radu Timofte", "Luc Van Gool"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7f5"}, "filepath": "data/2403.06592v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996524014238493, "arXiv_link": "https://arxiv.org/abs/2403.06592v1", "other_link": "", "title": "Exploiting Style Latent Flows for Generalizing Video Deepfake Detection", "abstract": "This paper presents a new approach for the detection of fake videos, based on\nthe analysis of style latent vectors and their abnormal behavior in temporal\nchanges in the generated videos. We discovered that the generated facial videos\nsuffer from the temporal distinctiveness in the temporal changes of style\nlatent vectors, which are inevitable during the generation of temporally stable\nvideos with various facial expressions and geometric transformations. Our\nframework utilizes the StyleGRU module, trained by contrastive learning, to\nrepresent the dynamic properties of style latent vectors. Additionally, we\nintroduce a style attention module that integrates StyleGRU-generated features\nwith content-based features, enabling the detection of visual and temporal\nartifacts. We demonstrate our approach across various benchmark scenarios in\ndeepfake detection, showing its superiority in cross-dataset and\ncross-manipulation scenarios. 
Through further analysis, we also validate the\nimportance of using temporal changes of style latent vectors to improve the\ngenerality of deepfake video detection.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Jongwook Choi", "Taehoon Kim", "Yonghyun Jeong", "Seungryul Baek", "Jongwon Choi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7f6"}, "filepath": "data/2312.00311.png", "tags": [], "_media_type": "image", "_rand": 0.9995627775692545, "arXiv_link": "https://arxiv.org/abs/2312.00311", "other_link": "https://github.com/wang-zidu/3DDFA-V3", "title": "3D Face Reconstruction with the Geometric Guidance of Facial Part Segmentation", "abstract": "3D Morphable Models (3DMMs) provide promising 3D face reconstructions in\nvarious applications. However, existing methods struggle to reconstruct faces\nwith extreme expressions due to deficiencies in supervisory signals, such as\nsparse or inaccurate landmarks. Segmentation information contains effective\ngeometric contexts for face reconstruction. Certain attempts intuitively depend\non differentiable renderers to compare the rendered silhouettes of\nreconstruction with segmentation, which is prone to issues like local optima\nand gradient instability. In this paper, we fully utilize the facial part\nsegmentation geometry by introducing Part Re-projection Distance Loss (PRDL).\nSpecifically, PRDL transforms facial part segmentation into 2D points and\nre-projects the reconstruction onto the image plane. Subsequently, by\nintroducing grid anchors and computing different statistical distances from\nthese anchors to the point sets, PRDL establishes geometry descriptors to\noptimize the distribution of the point sets for face reconstruction. PRDL\nexhibits a clear gradient compared to the renderer-based methods and presents\nstate-of-the-art reconstruction performance in extensive quantitative and\nqualitative experiments. Our project is available at\nhttps://github.com/wang-zidu/3DDFA-V3 .", "keywords": [], "authors_list": ["Zidu Wang", "Xiangyu Zhu", "Tianshuo Zhang", "baiqin wang", "Zhen Lei"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7f7"}, "filepath": "data/2311.06612.png", "tags": [], "_media_type": "image", "_rand": 0.9995857160164869, "arXiv_link": "https://arxiv.org/abs/2311.06612", "other_link": "", "title": "PerceptionGPT: Effectively Fusing Visual Perception into LLM", "abstract": "The integration of visual inputs with large language models (LLMs) has led to\nremarkable advancements in multi-modal capabilities, giving rise to visual\nlarge language models (VLLMs). However, effectively harnessing VLLMs for\nintricate visual perception tasks remains a challenge. In this paper, we\npresent a novel end-to-end framework named PerceptionGPT, which efficiently and\neffectively equips the VLLMs with visual perception abilities by leveraging the\nrepresentation power of LLMs' token embedding. Our proposed method treats the\ntoken embedding of the LLM as the carrier of spatial information, then leverage\nlightweight visual task encoders and decoders to perform visual perception\ntasks (e.g., detection, segmentation). 
Our approach significantly alleviates\nthe training difficulty suffered by previous approaches that formulate the\nvisual outputs as discrete tokens, and enables achieving superior performance\nwith fewer trainable parameters, less training data and shorter training time.\nMoreover, as only one token embedding is required to decode the visual outputs,\nthe resulting sequence length during inference is significantly reduced.\nConsequently, our approach enables accurate and flexible representations,\nseamless integration of visual perception tasks, and efficient handling of\nmultiple visual outputs. We validate the effectiveness and efficiency of our\napproach through extensive experiments. The results demonstrate significant\nimprovements over previous methods with much fewer trainable parameters and GPU\nhours, which facilitates future research in enabling LLMs with visual\nperception abilities.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Renjie Pi", "Lewei Yao", "Jiahui Gao", "Jipeng Zhang", "Tong Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7f8"}, "filepath": "data/2311.17094.png", "tags": [], "_media_type": "image", "_rand": 0.999184215071307, "arXiv_link": "https://arxiv.org/abs/2311.17094", "other_link": "", "title": "In Search of a Data Transformation That Accelerates Neural Field Training", "abstract": "Neural field is an emerging paradigm in data representation that trains a\nneural network to approximate the given signal. A key obstacle that prevents\nits widespread adoption is the encoding speed: generating neural fields requires\nan overfitting of a neural network, which can take a significant number of SGD\nsteps to reach the desired fidelity level. In this paper, we delve into the\nimpacts of data transformations on the speed of neural field training,\nspecifically focusing on how permuting pixel locations affect the convergence\nspeed of SGD. Counterintuitively, we find that randomly permuting the pixel\nlocations can considerably accelerate the training. To explain this phenomenon,\nwe examine the neural field training through the lens of PSNR curves, loss\nlandscapes, and error patterns. 
Our analyses suggest that the random pixel\npermutations remove the easy-to-fit patterns, which facilitate easy\noptimization in the early stage but hinder capturing fine details of the\nsignal.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Junwon Seo", "Sangyoon Lee", "Kwang In Kim", "Jaeho Lee"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7f9"}, "filepath": "data/2404.07445.png", "tags": [], "_media_type": "image", "_rand": 0.9990156810086439, "arXiv_link": "https://arxiv.org/abs/2404.07445", "other_link": "https://github.com/qianyu-dlut/MVANet}{MVANet}.", "title": "Multi-view Aggregation Network for Dichotomous Image Segmentation", "abstract": "Dichotomous Image Segmentation (DIS) has recently emerged towards\nhigh-precision object segmentation from high-resolution natural images.\n When designing an effective DIS model, the main challenge is how to balance\nthe semantic dispersion of high-resolution targets in the small receptive field\nand the loss of high-precision details in the large receptive field. Existing\nmethods rely on tedious multiple encoder-decoder streams and stages to\ngradually complete the global localization and local refinement.\n Human visual system captures regions of interest by observing them from\nmultiple views. Inspired by it, we model DIS as a multi-view object perception\nproblem and provide a parsimonious multi-view aggregation network (MVANet),\nwhich unifies the feature fusion of the distant view and close-up view into a\nsingle stream with one encoder-decoder structure. With the help of the proposed\nmulti-view complementary localization and refinement modules, our approach\nestablished long-range, profound visual interactions across multiple views,\nallowing the features of the detailed close-up view to focus on highly slender\nstructures.Experiments on the popular DIS-5K dataset show that our MVANet\nsignificantly outperforms state-of-the-art methods in both accuracy and speed.\nThe source code and datasets will be publicly available at\n\\href{https://github.com/qianyu-dlut/MVANet}{MVANet}.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Qian Yu", "Xiaoqi Zhao", "Youwei Pang", "Lihe Zhang", "Huchuan Lu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7fa"}, "filepath": "data/2310.17504.png", "tags": [], "_media_type": "image", "_rand": 0.9990400007321734, "arXiv_link": "https://arxiv.org/abs/2310.17504", "other_link": "", "title": "Three Pillars improving Vision Foundation Model Distillation for Lidar", "abstract": "Self-supervised image backbones can be used to address complex 2D tasks\n(e.g., semantic segmentation, object discovery) very efficiently and with\nlittle or no downstream supervision. Ideally, 3D backbones for lidar should be\nable to inherit these properties after distillation of these powerful 2D\nfeatures. The most recent methods for image-to-lidar distillation on autonomous\ndriving data show promising results, obtained thanks to distillation methods\nthat keep improving. Yet, we still notice a large performance gap when\nmeasuring the quality of distilled and fully supervised features by linear\nprobing. 
In this work, instead of focusing only on the distillation method, we\nstudy the effect of three pillars for distillation: the 3D backbone, the\npretrained 2D backbones, and the pretraining dataset. In particular, thanks to\nour scalable distillation method named ScaLR, we show that scaling the 2D and\n3D backbones and pretraining on diverse datasets leads to a substantial\nimprovement of the feature quality. This allows us to significantly reduce the\ngap between the quality of distilled and fully-supervised 3D features, and to\nimprove the robustness of the pretrained backbones to domain gaps and\nperturbations.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Gilles Puy", "Spyros Gidaris", "Alexandre Boulch", "Oriane Sim\u00e9oni", "Corentin Sautier", "Patrick P\u00e9rez", "Andrei Bursuc", "Renaud Marlet"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7fb"}, "filepath": "data/2312.16279.png", "tags": [], "_media_type": "image", "_rand": 0.9997658424115944, "arXiv_link": "https://arxiv.org/abs/2312.16279", "other_link": "", "title": "Cloud-Device Collaborative Learning for Multimodal Large Language Models", "abstract": "The burgeoning field of Multimodal Large Language Models (MLLMs) has\nexhibited remarkable performance in diverse tasks such as captioning,\ncommonsense reasoning, and visual scene understanding. However, the deployment\nof these large-scale MLLMs on client devices is hindered by their extensive\nmodel parameters, leading to a notable decline in generalization capabilities\nwhen these models are compressed for device deployment. Addressing this\nchallenge, we introduce a Cloud-Device Collaborative Continual Adaptation\nframework, designed to enhance the performance of compressed, device-deployed\nMLLMs by leveraging the robust capabilities of cloud-based, larger-scale MLLMs.\nOur framework is structured into three key components: a device-to-cloud uplink\nfor efficient data transmission, cloud-based knowledge adaptation, and an\noptimized cloud-to-device downlink for model deployment. In the uplink phase,\nwe employ an Uncertainty-guided Token Sampling (UTS) strategy to effectively\nfilter out-of-distribution tokens, thereby reducing transmission costs and\nimproving training efficiency. On the cloud side, we propose Adapter-based\nKnowledge Distillation (AKD) method to transfer refined knowledge from\nlarge-scale to compressed, pocket-size MLLMs. Furthermore, we propose a Dynamic\nWeight update Compression (DWC) strategy for the downlink, which adaptively\nselects and quantizes updated weight parameters, enhancing transmission\nefficiency and reducing the representational disparity between cloud and device\nmodels. Extensive experiments on several multimodal benchmarks demonstrate the\nsuperiority of our proposed framework over prior Knowledge Distillation and\ndevice-cloud collaboration methods. 
Notably, we also validate the feasibility\nof our approach to real-world experiments.", "keywords": ["Large multimodal models and prompting techniques", "Deep learning architectures and techniques"], "authors_list": ["Guanqun Wang", "Jiaming Liu", "Chenxuan Li", "Yuan Zhang", "Ma Junpeng", "Xinyu Wei", "Kevin Zhang", "Maurice Chong", "Renrui Zhang", "Yijiang Liu", "Shanghang Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7fc"}, "filepath": "data/2404.02117.png", "tags": [], "_media_type": "image", "_rand": 0.9999741754484963, "arXiv_link": "https://arxiv.org/abs/2404.02117", "other_link": "https://github.com/KHU-AGI/PriViLege.", "title": "Unlocking the Potential of Pre-trained Vision Transformers for Few-Shot Semantic Segmentation through Relationship Descriptors", "abstract": "Few-Shot Class Incremental Learning (FSCIL) is a task that requires a model\nto learn new classes incrementally without forgetting when only a few samples\nfor each class are given. FSCIL encounters two significant challenges:\ncatastrophic forgetting and overfitting, and these challenges have driven prior\nstudies to primarily rely on shallow models, such as ResNet-18. Even though\ntheir limited capacity can mitigate both forgetting and overfitting issues, it\nleads to inadequate knowledge transfer during few-shot incremental sessions. In\nthis paper, we argue that large models such as vision and language transformers\npre-trained on large datasets can be excellent few-shot incremental learners.\nTo this end, we propose a novel FSCIL framework called PriViLege, Pre-trained\nVision and Language transformers with prompting functions and knowledge\ndistillation. Our framework effectively addresses the challenges of\ncatastrophic forgetting and overfitting in large models through new pre-trained\nknowledge tuning (PKT) and two losses: entropy-based divergence loss and\nsemantic knowledge distillation loss. Experimental results show that the\nproposed PriViLege significantly outperforms the existing state-of-the-art\nmethods with a large margin, e.g., +9.38% in CUB200, +20.58% in CIFAR-100, and\n+13.36% in miniImageNet. Our implementation code is available at\nhttps://github.com/KHU-AGI/PriViLege.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Ziqin Zhou", "Hai-Ming Xu", "Yangyang Shu", "Lingqiao Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7fd"}, "filepath": "data/2403.15173.png", "tags": [], "_media_type": "image", "_rand": 0.9993981603878477, "arXiv_link": "https://arxiv.org/abs/2403.15173", "other_link": "", "title": "LSK3DNet: Towards Effective and Efficient 3D Perception with Large Sparse Kernels", "abstract": "Autonomous systems need to process large-scale, sparse, and irregular point\nclouds with limited compute resources. Consequently, it is essential to develop\nLiDAR perception methods that are both efficient and effective. Although\nnaively enlarging 3D kernel size can enhance performance, it will also lead to\na cubically-increasing overhead. Therefore, it is crucial to develop\nstreamlined 3D large kernel designs that eliminate redundant weights and work\neffectively with larger kernels. 
In this paper, we propose an efficient and\neffective Large Sparse Kernel 3D Neural Network (LSK3DNet) that leverages\ndynamic pruning to amplify the 3D kernel size. Our method comprises two core\ncomponents: Spatial-wise Dynamic Sparsity (SDS) and Channel-wise Weight\nSelection (CWS). SDS dynamically prunes and regrows volumetric weights from the\nbeginning to learn a large sparse 3D kernel. It not only boosts performance but\nalso significantly reduces model size and computational cost. Moreover, CWS\nselects the most important channels for 3D convolution during training and\nsubsequently prunes the redundant channels to accelerate inference for 3D\nvision tasks. We demonstrate the effectiveness of LSK3DNet on three benchmark\ndatasets and five tracks compared with classical models and large kernel\ndesigns. Notably, LSK3DNet achieves the state-of-the-art performance on\nSemanticKITTI (i.e., 75.6% on single-scan and 63.4% on multi-scan), with\nroughly 40% model size reduction and 60% computing operations reduction\ncompared to the naive large 3D kernel model.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Tuo Feng", "Wenguan Wang", "Fan Ma", "Yi Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7fe"}, "filepath": "data/2312.03777.png", "tags": [], "_media_type": "image", "_rand": 0.9991480072344439, "arXiv_link": "https://arxiv.org/abs/2312.03777", "other_link": "", "title": "On the Robustness of Large Multimodal Models Against Image Adversarial Attacks", "abstract": "Recent advances in instruction tuning have led to the development of\nState-of-the-Art Large Multimodal Models (LMMs). Given the novelty of these\nmodels, the impact of visual adversarial attacks on LMMs has not been\nthoroughly examined. We conduct a comprehensive study of the robustness of\nvarious LMMs against different adversarial attacks, evaluated across tasks\nincluding image classification, image captioning, and Visual Question Answer\n(VQA). We find that in general LMMs are not robust to visual adversarial\ninputs. However, our findings suggest that context provided to the model via\nprompts, such as questions in a QA pair helps to mitigate the effects of visual\nadversarial inputs. Notably, the LMMs evaluated demonstrated remarkable\nresilience to such attacks on the ScienceQA task with only an 8.10% drop in\nperformance compared to their visual counterparts which dropped 99.73%. We also\npropose a new approach to real-world image classification which we term query\ndecomposition. By incorporating existence queries into our input prompt we\nobserve diminished attack effectiveness and improvements in image\nclassification accuracy. 
This research highlights a previously under-explored\nfacet of LMM robustness and sets the stage for future work aimed at\nstrengthening the resilience of multimodal systems in adversarial environments.", "keywords": ["Multimodal models and vision-language models", "Deep learning architectures and techniques"], "authors_list": ["Xuanming Cui", "Alejandro Aparcedo", "Young Kyun Jang", "Ser-Nam Lim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f7ff"}, "filepath": "data/2312.17247.png", "tags": [], "_media_type": "image", "_rand": 0.9999287857264985, "arXiv_link": "https://arxiv.org/abs/2312.17247", "other_link": "https://www.robots.ox.ac.uk/~vgg/research/amodal/.", "title": "Amodal Ground Truth and Completion in the Wild", "abstract": "This paper studies amodal image segmentation: predicting entire object\nsegmentation masks including both visible and invisible (occluded) parts. In\nprevious work, the amodal segmentation ground truth on real images is usually\npredicted by manual annotaton and thus is subjective. In contrast, we use 3D\ndata to establish an automatic pipeline to determine authentic ground truth\namodal masks for partially occluded objects in real images. This pipeline is\nused to construct an amodal completion evaluation benchmark, MP3D-Amodal,\nconsisting of a variety of object categories and labels. To better handle the\namodal completion task in the wild, we explore two architecture variants: a\ntwo-stage model that first infers the occluder, followed by amodal mask\ncompletion; and a one-stage model that exploits the representation power of\nStable Diffusion for amodal segmentation across many categories. Without bells\nand whistles, our method achieves a new state-of-the-art performance on Amodal\nsegmentation datasets that cover a large variety of objects, including COCOA\nand our new MP3D-Amodal dataset. The dataset, model, and code are available at\nhttps://www.robots.ox.ac.uk/~vgg/research/amodal/.", "keywords": [], "authors_list": ["Guanqi Zhan", "Chuanxia Zheng", "Weidi Xie", "Andrew Zisserman"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f800"}, "filepath": "data/2404.00876.png", "tags": [], "_media_type": "image", "_rand": 0.9992708094941305, "arXiv_link": "https://arxiv.org/abs/2404.00876", "other_link": "https://github.com/xiaolul2/MGMap.", "title": "MGMap: Mask-Guided Learning for Online Vectorized HD Map Construction", "abstract": "Currently, high-definition (HD) map construction leans towards a lightweight\nonline generation tendency, which aims to preserve timely and reliable road\nscene information. However, map elements contain strong shape priors. Subtle\nand sparse annotations make current detection-based frameworks ambiguous in\nlocating relevant feature scopes and cause the loss of detailed structures in\nprediction. To alleviate these problems, we propose MGMap, a mask-guided\napproach that effectively highlights the informative regions and achieves\nprecise map element localization by introducing the learned masks.\nSpecifically, MGMap employs learned masks based on the enhanced multi-scale BEV\nfeatures from two perspectives. 
At the instance level, we propose the\nMask-activated instance (MAI) decoder, which incorporates global instance and\nstructural information into instance queries by the activation of instance\nmasks. At the point level, a novel position-guided mask patch refinement\n(PG-MPR) module is designed to refine point locations from a finer-grained\nperspective, enabling the extraction of point-specific patch information.\nCompared to the baselines, our proposed MGMap achieves a notable improvement of\naround 10 mAP for different input modalities. Extensive experiments also\ndemonstrate that our approach showcases strong robustness and generalization\ncapabilities. Our code can be found at https://github.com/xiaolul2/MGMap.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xiaolu Liu", "Song Wang", "Wentong Li", "Ruizi Yang", "Junbo Chen", "Jianke Zhu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f801"}, "filepath": "data/2403.17502.png", "tags": [], "_media_type": "image", "_rand": 0.9993390109346679, "arXiv_link": "https://arxiv.org/abs/2403.17502", "other_link": "", "title": "SeNM-VAE: Semi-Supervised Noise Modeling with Hierarchical Variational Autoencoder", "abstract": "The data bottleneck has emerged as a fundamental challenge in learning based\nimage restoration methods. Researchers have attempted to generate synthesized\ntraining data using paired or unpaired samples to address this challenge. This\nstudy proposes SeNM-VAE, a semi-supervised noise modeling method that leverages\nboth paired and unpaired datasets to generate realistic degraded data. Our\napproach is based on modeling the conditional distribution of degraded and\nclean images with a specially designed graphical model. Under the variational\ninference framework, we develop an objective function for handling both paired\nand unpaired data. We employ our method to generate paired training samples for\nreal-world image denoising and super-resolution tasks. Our approach excels in\nthe quality of synthetic degraded images compared to other unpaired and paired\nnoise modeling methods. Furthermore, our approach demonstrates remarkable\nperformance in downstream image restoration tasks, even with limited paired\ndata. With more paired data, our method achieves the best performance on the\nSIDD dataset.", "keywords": ["Low-level vision"], "authors_list": ["Dihan Zheng", "Yihang Zou", "Xiaowen Zhang", "Chenglong Bao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f802"}, "filepath": "data/2403.15664.png", "tags": [], "_media_type": "image", "_rand": 0.9997364797388673, "arXiv_link": "https://arxiv.org/abs/2403.15664", "other_link": "https://yihua.zone/work/ivgaze.", "title": "What Do You See in Vehicle? Comprehensive Vision Solution for In-Vehicle Gaze Estimation", "abstract": "Driver's eye gaze holds a wealth of cognitive and intentional cues crucial\nfor intelligent vehicles. Despite its significance, research on in-vehicle gaze\nestimation remains limited due to the scarcity of comprehensive and\nwell-annotated datasets in real driving scenarios. In this paper, we present\nthree novel elements to advance in-vehicle gaze research. 
Firstly, we introduce\nIVGaze, a pioneering dataset capturing in-vehicle gaze, collected from 125\nsubjects and covering a large range of gaze and head poses within vehicles.\nConventional gaze collection systems are inadequate for in-vehicle use. In this\ndataset, we propose a new vision-based solution for in-vehicle gaze collection,\nintroducing a refined gaze target calibration method to tackle annotation\nchallenges. Secondly, our research focuses on in-vehicle gaze estimation\nleveraging the IVGaze. In-vehicle face images often suffer from low resolution,\nprompting our introduction of a gaze pyramid transformer that leverages\ntransformer-based multilevel feature integration. Expanding upon this, we\nintroduce the dual-stream gaze pyramid transformer (GazeDPTR). Employing\nperspective transformation, we rotate virtual cameras to normalize images,\nutilizing camera pose to merge normalized and original images for accurate gaze\nestimation. GazeDPTR shows state-of-the-art performance on the IVGaze dataset.\nThirdly, we explore a novel strategy for gaze zone classification by extending\nthe GazeDPTR. A foundational tri-plane is newly defined, and gaze is projected\nonto these planes. Leveraging both positional features from the projection\npoints and visual attributes from images, we achieve superior performance\ncompared to relying solely on visual features, substantiating the advantage of\ngaze estimation. Our project is available at https://yihua.zone/work/ivgaze.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yihua Cheng", "Yaning Zhu", "Zongji Wang", "hongquan hao", "Liu wei", "Shiqing Cheng", "Xi Wang", "Hyung Jin Chang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f803"}, "filepath": "data/2312.16084.png", "tags": [], "_media_type": "image", "_rand": 0.9991162810510134, "arXiv_link": "https://arxiv.org/abs/2312.16084", "other_link": "https://langsplat.github.io/", "title": "LangSplat: 3D Language Gaussian Splatting", "abstract": "Humans live in a 3D world and commonly use natural language to interact with\na 3D scene. Modeling a 3D language field to support open-ended language queries\nin 3D has gained increasing attention recently. This paper introduces\nLangSplat, which constructs a 3D language field that enables precise and\nefficient open-vocabulary querying within 3D spaces. Unlike existing methods\nthat ground CLIP language embeddings in a NeRF model, LangSplat advances the\nfield by utilizing a collection of 3D Gaussians, each encoding language\nfeatures distilled from CLIP, to represent the language field. By employing a\ntile-based splatting technique for rendering language features, we circumvent\nthe costly rendering process inherent in NeRF. Instead of directly learning\nCLIP embeddings, LangSplat first trains a scene-wise language autoencoder and\nthen learns language features on the scene-specific latent space, thereby\nalleviating substantial memory demands imposed by explicit modeling. Existing\nmethods struggle with imprecise and vague 3D language fields, which fail to\ndiscern clear boundaries between objects. We delve into this issue and propose\nto learn hierarchical semantics using SAM, thereby eliminating the need for\nextensively querying the language field across various scales and the\nregularization of DINO features. 
Extensive experimental results show that\nLangSplat significantly outperforms the previous state-of-the-art method LERF\nby a large margin. Notably, LangSplat is extremely efficient, achieving a 199\n$\\times$ speedup compared to LERF at the resolution of 1440 $\\times$ 1080. We\nstrongly recommend readers to check out our video results at\nhttps://langsplat.github.io/", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Minghan Qin", "Wanhua Li", "Jiawei ZHOU", "Haoqian Wang", "Hanspeter Pfister"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f804"}, "filepath": "data/2306.12547.png", "tags": [], "_media_type": "image", "_rand": 0.9998178237880205, "arXiv_link": "https://arxiv.org/abs/2306.12547", "other_link": "", "title": "DGC-GNN: Leveraging Geometry and Color Cues for Visual Descriptor-Free 2D-3D Matching", "abstract": "Matching 2D keypoints in an image to a sparse 3D point cloud of the scene\nwithout requiring visual descriptors has garnered increased interest due to its\nlow memory requirements, inherent privacy preservation, and reduced need for\nexpensive 3D model maintenance compared to visual descriptor-based methods.\nHowever, existing algorithms often compromise on performance, resulting in a\nsignificant deterioration compared to their descriptor-based counterparts. In\nthis paper, we introduce DGC-GNN, a novel algorithm that employs a\nglobal-to-local Graph Neural Network (GNN) that progressively exploits\ngeometric and color cues to represent keypoints, thereby improving matching\naccuracy. Our procedure encodes both Euclidean and angular relations at a\ncoarse level, forming the geometric embedding to guide the point matching. We\nevaluate DGC-GNN on both indoor and outdoor datasets, demonstrating that it not\nonly doubles the accuracy of the state-of-the-art visual descriptor-free\nalgorithm but also substantially narrows the performance gap between\ndescriptor-based and descriptor-free methods.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding", "Efficient and scalable vision"], "authors_list": ["Shuzhe Wang", "Juho Kannala", "Daniel Barath"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f805"}, "filepath": "data/2401.15859.png", "tags": [], "_media_type": "image", "_rand": 0.999964380833921, "arXiv_link": "https://arxiv.org/abs/2401.15859", "other_link": "", "title": "DiffForensics: Leveraging Diffusion Prior to Image Forgery Detection and Localization", "abstract": "Detecting diffusion-generated images has recently grown into an emerging\nresearch area. Existing diffusion-based datasets predominantly focus on general\nimage generation. However, facial forgeries, which pose a more severe social\nrisk, have remained less explored thus far. To address this gap, this paper\nintroduces DiFF, a comprehensive dataset dedicated to face-focused\ndiffusion-generated images. 
DiFF comprises over 500,000 images that are\nsynthesized using thirteen distinct generation methods under four conditions.\nIn particular, this dataset leverages 30,000 carefully collected textual and\nvisual prompts, ensuring the synthesis of images with both high fidelity and\nsemantic consistency. We conduct extensive experiments on the DiFF dataset via\na human test and several representative forgery detection methods. The results\ndemonstrate that the binary detection accuracy of both human observers and\nautomated detectors often falls below 30%, shedding light on the challenges in\ndetecting diffusion-generated facial forgeries. Furthermore, we propose an edge\ngraph regularization approach to effectively enhance the generalization\ncapability of existing detectors.", "keywords": ["Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Zeqin Yu", "Jiangqun Ni", "Yuzhen Lin", "Haoyi Deng", "Bin Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f806"}, "filepath": "data/2403.16182.png", "tags": [], "_media_type": "image", "_rand": 0.9994971872074844, "arXiv_link": "https://arxiv.org/abs/2403.16182", "other_link": "https://github.com/OpenGVLab/EgoExoLearn", "title": "EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World", "abstract": "Being able to map the activities of others into one's own point of view is\none fundamental human skill even from a very early age. Taking a step toward\nunderstanding this human ability, we introduce EgoExoLearn, a large-scale\ndataset that emulates the human demonstration following process, in which\nindividuals record egocentric videos as they execute tasks guided by\ndemonstration videos. Focusing on the potential applications in daily\nassistance and professional support, EgoExoLearn contains egocentric and\ndemonstration video data spanning 120 hours captured in daily life scenarios\nand specialized laboratories. Along with the videos we record high-quality gaze\ndata and provide detailed multimodal annotations, formulating a playground for\nmodeling the human ability to bridge asynchronous procedural actions from\ndifferent viewpoints. To this end, we present benchmarks such as cross-view\nassociation, cross-view action planning, and cross-view referenced skill\nassessment, along with detailed analysis. We expect EgoExoLearn can serve as an\nimportant resource for bridging the actions across views, thus paving the way\nfor creating AI agents capable of seamlessly learning by observing humans in\nthe real world. 
Code and data can be found at:\nhttps://github.com/OpenGVLab/EgoExoLearn", "keywords": ["Biometrics and human analysis", "Scene analysis and understanding"], "authors_list": ["Yifei Huang", "Guo Chen", "Jilan Xu", "Mingfang Zhang", "Lijin Yang", "Baoqi Pei", "Hongjie Zhang", "Lu Dong", "Yali Wang", "Limin Wang", "Yu Qiao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f807"}, "filepath": "data/2405.09931.png", "tags": [], "_media_type": "image", "_rand": 0.9994184420242931, "arXiv_link": "https://arxiv.org/abs/2405.09931", "other_link": "", "title": "Learning from Observer Gaze: Zero-shot Attention Prediction Oriented by Human-Object Interaction Recognition", "abstract": "Most existing attention prediction research focuses on salient instances like\nhumans and objects. However, the more complex interaction-oriented attention,\narising from the comprehension of interactions between instances by human\nobservers, remains largely unexplored. This is equally crucial for advancing\nhuman-machine interaction and human-centered artificial intelligence. To bridge\nthis gap, we first collect a novel gaze fixation dataset named IG, comprising\n530,000 fixation points across 740 diverse interaction categories, capturing\nvisual attention during human observers' cognitive processes of interactions.\nSubsequently, we introduce the zero-shot interaction-oriented attention\nprediction task ZeroIA, which challenges models to predict visual cues for\ninteractions not encountered during training. Thirdly, we present the\nInteractive Attention model IA, designed to emulate human observers' cognitive\nprocesses to tackle the ZeroIA problem. Extensive experiments demonstrate that\nthe proposed IA outperforms other state-of-the-art approaches in both ZeroIA\nand fully supervised settings. Lastly, we endeavor to apply\ninteraction-oriented attention to the interaction recognition task itself.\nFurther experimental results demonstrate the promising potential to enhance the\nperformance and interpretability of existing state-of-the-art HOI models by\nincorporating real human attention data from IG and attention labels generated\nby IA.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Yuchen Zhou", "Linkai Liu", "Chao Gou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f808"}, "filepath": "data/2312.17205.png", "tags": [], "_media_type": "image", "_rand": 0.9990969799674542, "arXiv_link": "https://arxiv.org/abs/2312.17205", "other_link": "", "title": "EFHQ: Multi-purpose ExtremePose-Face-HQ dataset", "abstract": "The existing facial datasets, while having plentiful images at near frontal\nviews, lack images with extreme head poses, leading to the downgraded\nperformance of deep learning models when dealing with profile or pitched faces.\nThis work aims to address this gap by introducing a novel dataset named Extreme\nPose Face High-Quality Dataset (EFHQ), which includes a maximum of 450k\nhigh-quality images of faces at extreme poses. 
To produce such a massive\ndataset, we utilize a novel and meticulous dataset processing pipeline to\ncurate two publicly available datasets, VFHQ and CelebV-HQ, which contain many\nhigh-resolution face videos captured in various settings. Our dataset can\ncomplement existing datasets on various facial-related tasks, such as facial\nsynthesis with 2D/3D-aware GAN, diffusion-based text-to-image face generation,\nand face reenactment. Specifically, training with EFHQ helps models generalize\nwell across diverse poses, significantly improving performance in scenarios\ninvolving extreme views, confirmed by extensive experiments. Additionally, we\nutilize EFHQ to define a challenging cross-view face verification benchmark, in\nwhich the performance of SOTA face recognition models drops 5-37% compared to\nfrontal-to-frontal scenarios, aiming to stimulate studies on face recognition\nunder severe pose conditions in the wild.", "keywords": ["Biometrics and human analysis", "Deep learning architectures and techniques"], "authors_list": ["Trung Dao", "Duc H Vu", "Cuong Pham", "Anh Tran"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f809"}, "filepath": "data/2405.06214.png", "tags": [], "_media_type": "image", "_rand": 0.999703467226319, "arXiv_link": "https://arxiv.org/abs/2405.06214", "other_link": "https://drliuqi.github.io/.", "title": "Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling", "abstract": "Recent progress in large-scale scene rendering has yielded Neural Radiance\nFields (NeRF)-based models with an impressive ability to synthesize scenes\nacross small objects and indoor scenes. Nevertheless, extending this idea to\nlarge-scale aerial rendering poses two critical problems. Firstly, a single\nNeRF cannot render the entire scene with high-precision for complex large-scale\naerial datasets since the sampling range along each view ray is insufficient to\ncover buildings adequately. Secondly, traditional NeRFs are infeasible to train\non one GPU to enable interactive fly-throughs for modeling massive images.\nInstead, existing methods typically separate the whole scene into multiple\nregions and train a NeRF on each region, which are unaccustomed to different\nflight trajectories and difficult to achieve fast rendering. To that end, we\npropose Aerial-NeRF with three innovative modifications for jointly adapting\nNeRF in large-scale aerial rendering: (1) Designing an adaptive spatial\npartitioning and selection method based on drones' poses to adapt different\nflight trajectories; (2) Using similarity of poses instead of (expert) network\nfor rendering speedup to determine which region a new viewpoint belongs to; (3)\nDeveloping an adaptive sampling approach for rendering performance improvement\nto cover the entire buildings at different heights. Extensive experiments have\nconducted to verify the effectiveness and efficiency of Aerial-NeRF, and new\nstate-of-the-art results have been achieved on two public large-scale aerial\ndatasets and presented SCUTic dataset. Note that our model allows us to perform\nrendering over 4 times as fast as compared to multiple competitors. 
Our\ndataset, code, and model are publicly available at https://drliuqi.github.io/.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Xinhang Liu", "Yu-Wing Tai", "Chi-Keung Tang", "Pedro Miraldo", "Suhas Lohit", "Moitreya Chatterjee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f80a"}, "filepath": "data/2312.13964.png", "tags": [], "_media_type": "image", "_rand": 0.9997686492197219, "arXiv_link": "https://arxiv.org/abs/2312.13964", "other_link": "", "title": "PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models", "abstract": "Recent advancements in personalized text-to-image (T2I) models have\nrevolutionized content creation, empowering non-experts to generate stunning\nimages with unique styles. While promising, adding realistic motions into these\npersonalized images by text poses significant challenges in preserving distinct\nstyles, high-fidelity details, and achieving motion controllability by text. In\nthis paper, we present PIA, a Personalized Image Animator that excels in\naligning with condition images, achieving motion controllability by text, and\nthe compatibility with various personalized T2I models without specific tuning.\nTo achieve these goals, PIA builds upon a base T2I model with well-trained\ntemporal alignment layers, allowing for the seamless transformation of any\npersonalized T2I model into an image animation model. A key component of PIA is\nthe introduction of the condition module, which utilizes the condition frame\nand inter-frame affinity as input to transfer appearance information guided by\nthe affinity hint for individual frame synthesis in the latent space. This\ndesign mitigates the challenges of appearance-related image alignment within\nand allows for a stronger focus on aligning with motion-related guidance.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Yiming Zhang", "Zhening Xing", "Yanhong Zeng", "Youqing Fang", "Kai Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f80b"}, "filepath": "data/2312.05923.png", "tags": [], "_media_type": "image", "_rand": 0.9993207907518905, "arXiv_link": "https://arxiv.org/abs/2312.05923", "other_link": "https://github.com/streamer-AP/CGNet", "title": "Weakly Supervised Video Individual Counting", "abstract": "Video Individual Counting (VIC) aims to predict the number of unique\nindividuals in a single video. Existing methods learn representations based\non trajectory labels for individuals, which are annotation-expensive. To\nprovide a more realistic reflection of the underlying practical challenge, we\nintroduce a weakly supervised VIC task, wherein trajectory labels are not\nprovided. Instead, two types of labels are provided to indicate traffic\nentering the field of view (inflow) and leaving the field of view (outflow). We\nalso propose the first solution as a baseline that formulates the task as a\nweakly supervised contrastive learning problem under group-level matching. In\ndoing so, we devise an end-to-end trainable soft contrastive loss to drive the\nnetwork to distinguish inflow, outflow, and the remaining. 
To facilitate\nfuture study in this direction, we generate annotations from the existing VIC\ndatasets SenseCrowd and CroHD and also build a new dataset, UAVVIC. Extensive\nresults show that our baseline weakly supervised method outperforms supervised\nmethods, and thus, little information is lost in the transition to the more\npractically relevant weakly supervised task. The code and trained model will be\npublic at \\href{https://github.com/streamer-AP/CGNet}{CGNet}", "keywords": [], "authors_list": ["Xinyan Liu", "Guorong Li", "Yuankai Qi", "Ziheng Yan", "Zhenjun Han", "Anton van den Hengel", "Ming-Hsuan Yang", "Qingming Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f80c"}, "filepath": "data/2405.05588.png", "tags": [], "_media_type": "image", "_rand": 0.9998472309625134, "arXiv_link": "https://arxiv.org/abs/2405.05588", "other_link": "https://hosytuyen.github.io/projects/TL-DMI", "title": "Model Inversion Robustness: Can Transfer Learning Help?", "abstract": "Model Inversion (MI) attacks aim to reconstruct private training data by\nabusing access to machine learning models. Contemporary MI attacks have\nachieved impressive attack performance, posing serious threats to privacy.\nMeanwhile, all existing MI defense methods rely on regularization that is in\ndirect conflict with the training objective, resulting in noticeable\ndegradation in model utility. In this work, we take a different perspective,\nand propose a novel and simple Transfer Learning-based Defense against Model\nInversion (TL-DMI) to render MI-robust models. Particularly, by leveraging TL,\nwe limit the number of layers encoding sensitive information from private\ntraining dataset, thereby degrading the performance of MI attack. We conduct an\nanalysis using Fisher Information to justify our method. Our defense is\nremarkably simple to implement. Without bells and whistles, we show in\nextensive experiments that TL-DMI achieves state-of-the-art (SOTA) MI\nrobustness. Our code, pre-trained models, demo and inverted data are available\nat: https://hosytuyen.github.io/projects/TL-DMI", "keywords": [], "authors_list": ["Sy-Tuyen Ho", "Koh Jun Hao", "Keshigeyan Chandrasegaran", "Ngoc-Bao Nguyen", "Ngai-Man Cheung"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Cryptography and Security", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f80d"}, "filepath": "data/2310.14172.png", "tags": [], "_media_type": "image", "_rand": 0.9992842125367063, "arXiv_link": "https://arxiv.org/abs/2310.14172", "other_link": "", "title": "$M^3$-UDA: A New Benchmark for Unsupervised Domain Adaptive Fetal Cardiac Structure Detection", "abstract": "Automatic tissue segmentation of fetal brain images is essential for the\nquantitative analysis of prenatal neurodevelopment. However, producing\nvoxel-level annotations of fetal brain imaging is time-consuming and expensive.\nTo reduce labeling costs, we propose a practical unsupervised domain adaptation\n(UDA) setting that adapts the segmentation labels of high-quality fetal brain\natlases to unlabeled fetal brain MRI data from another domain. To address the\ntask, we propose a new UDA framework based on Appearance and Structure\nConsistency, named ASC. 
We adapt the segmentation model to the appearances of\ndifferent domains by constraining the consistency before and after a\nfrequency-based image transformation, which is to swap the appearance between\nbrain MRI data and atlases. Consider that even in the same domain, the fetal\nbrain images of different gestational ages could have significant variations in\nthe anatomical structures. To make the model adapt to the structural variations\nin the target domain, we further encourage prediction consistency under\ndifferent structural perturbations. Extensive experiments on FeTA 2021\nbenchmark demonstrate the effectiveness of our ASC in comparison to\nregistration-based, semi-supervised learning-based, and existing UDA-based\nmethods.", "keywords": [], "authors_list": ["Bin Pu", "Liwen Wang", "Jiewen Yang", "He Guannan", "Xingbo Dong", "Shengli Li", "Ying Tan", "Ming Chen", "Zhe Jin", "Kenli Li", "Xiaomeng Li"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f80e"}, "filepath": "data/2404.01775.png", "tags": [], "_media_type": "image", "_rand": 0.9997139454711496, "arXiv_link": "https://arxiv.org/abs/2404.01775", "other_link": "https://github.com/glhr/ood-labelnoise", "title": "A noisy elephant in the room: Is your out-of-distribution detector robust to label noise?", "abstract": "The ability to detect unfamiliar or unexpected images is essential for safe\ndeployment of computer vision systems. In the context of classification, the\ntask of detecting images outside of a model's training domain is known as\nout-of-distribution (OOD) detection. While there has been a growing research\ninterest in developing post-hoc OOD detection methods, there has been\ncomparably little discussion around how these methods perform when the\nunderlying classifier is not trained on a clean, carefully curated dataset. In\nthis work, we take a closer look at 20 state-of-the-art OOD detection methods\nin the (more realistic) scenario where the labels used to train the underlying\nclassifier are unreliable (e.g. crowd-sourced or web-scraped labels). Extensive\nexperiments across different datasets, noise types & levels, architectures and\ncheckpointing strategies provide insights into the effect of class label noise\non OOD detection, and show that poor separation between incorrectly classified\nID samples vs. OOD samples is an overlooked yet important limitation of\nexisting methods. Code: https://github.com/glhr/ood-labelnoise", "keywords": [], "authors_list": ["Galadrielle Humblot-Renaux", "Sergio Escalera", "Thomas B. Moeslund"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f80f"}, "filepath": "data/2402.14654.png", "tags": [], "_media_type": "image", "_rand": 0.9991363942212027, "arXiv_link": "https://arxiv.org/abs/2402.14654", "other_link": "", "title": "HHMR: Holistic Hand Mesh Recovery by Enhancing the Multimodal Controllability of Graph Diffusion Models", "abstract": "We present Multi-HMR, a strong single-shot model for multi-person 3D human\nmesh recovery from a single RGB image. 
Predictions encompass the whole body,\ni.e., including hands and facial expressions, using the SMPL-X parametric model\nand spatial location in the camera coordinate system. Our model detects people\nby predicting coarse 2D heatmaps of person centers, using features produced by\na standard Vision Transformer (ViT) backbone. It then predicts their whole-body\npose, shape and spatial location using a new cross-attention module called the\nHuman Prediction Head (HPH), with one query per detected center token,\nattending to the entire set of features. As direct prediction of SMPL-X\nparameters yields suboptimal results, we introduce CUFFS, the Close-Up Frames\nof Full-Body Subjects dataset, containing humans close to the camera with\ndiverse hand poses. We show that incorporating this dataset into training\nfurther enhances predictions, particularly for hands, enabling us to achieve\nstate-of-the-art performance. Multi-HMR also optionally accounts for camera\nintrinsics, if available, by encoding camera ray directions for each image\ntoken. This simple design achieves strong performance on whole-body and\nbody-only benchmarks simultaneously. We train models with various backbone\nsizes and input resolutions. In particular, using a ViT-S backbone and\n$448\\times448$ input images already yields a fast and competitive model with\nrespect to state-of-the-art methods, while considering larger models and higher\nresolutions further improve performance.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Mengcheng Li", "Hongwen Zhang", "Yuxiang Zhang", "Ruizhi Shao", "Tao Yu", "Yebin Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f810"}, "filepath": "data/2312.01689.png", "tags": [], "_media_type": "image", "_rand": 0.9998431505111077, "arXiv_link": "https://arxiv.org/abs/2312.01689", "other_link": "", "title": "C$^\\text{2}$RV: Cross-Regional and Cross-View Learning for Sparse-View CBCT Reconstruction", "abstract": "Cone beam computed tomography (CBCT) is an emerging medical imaging technique\nto visualize the internal anatomical structures of patients. During a CBCT\nscan, several projection images of different angles or views are collectively\nutilized to reconstruct a tomographic image. However, reducing the number of\nprojections in a CBCT scan while preserving the quality of a reconstructed\nimage is challenging due to the nature of an ill-posed inverse problem.\nRecently, a neural attenuation field (NAF) method was proposed by adopting a\nneural radiance field algorithm as a new way for CBCT reconstruction,\ndemonstrating fast and promising results using only 50 views. However,\ndecreasing the number of projections is still preferable to reduce potential\nradiation exposure, and a faster reconstruction time is required considering a\ntypical scan time. In this work, we propose a fast and accurate sparse-view\nCBCT reconstruction (FACT) method to provide better reconstruction quality and\nfaster optimization speed in the minimal number of view acquisitions ($<$ 50\nviews). In the FACT method, we meta-trained a neural network and a hash-encoder\nusing a few scans (= 15), and a new regularization technique is utilized to\nreconstruct the details of an anatomical structure. 
In conclusion, we have\nshown that the FACT method produced better, and faster reconstruction results\nover the other conventional algorithms based on CBCT scans of different body\nparts (chest, head, and abdomen) and CT vendors (Siemens, Phillips, and GE).", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Yiqun Lin", "Jiewen Yang", "hualiang wang", "Xinpeng Ding", "Wei Zhao", "Xiaomeng Li"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f811"}, "filepath": "data/2401.10217.png", "tags": [], "_media_type": "image", "_rand": 0.9993763144541608, "arXiv_link": "https://arxiv.org/abs/2401.10217", "other_link": "https://namithap10.github.io/xinc.", "title": "Explaining the Implicit Neural Canvas: Connecting Pixels to Neurons by Tracing their Contributions", "abstract": "The many variations of Implicit Neural Representations (INRs), where a neural\nnetwork is trained as a continuous representation of a signal, have tremendous\npractical utility for downstream tasks including novel view synthesis, video\ncompression, and image superresolution. Unfortunately, the inner workings of\nthese networks are seriously under-studied. Our work, eXplaining the Implicit\nNeural Canvas (XINC), is a unified framework for explaining properties of INRs\nby examining the strength of each neuron's contribution to each output pixel.\nWe call the aggregate of these contribution maps the Implicit Neural Canvas and\nwe use this concept to demonstrate that the INRs which we study learn to\n''see'' the frames they represent in surprising ways. For example, INRs tend to\nhave highly distributed representations. While lacking high-level object\nsemantics, they have a significant bias for color and edges, and are almost\nentirely space-agnostic. We arrive at our conclusions by examining how objects\nare represented across time in video INRs, using clustering to visualize\nsimilar neurons across layers and architectures, and show that this is\ndominated by motion. These insights demonstrate the general usefulness of our\nanalysis framework. Our project page is available at\nhttps://namithap10.github.io/xinc.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Namitha Padmanabhan", "Matthew A Gwilliam", "Pulkit Kumar", "Shishira R Maiya", "Max Ehrlich", "Abhinav Shrivastava"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f812"}, "filepath": "data/2311.13831.png", "tags": [], "_media_type": "image", "_rand": 0.999300980275811, "arXiv_link": "https://arxiv.org/abs/2311.13831", "other_link": "", "title": "Posterior Distillation Sampling", "abstract": "We introduce Posterior Distillation Sampling (PDS), a novel optimization\nmethod for parametric image editing based on diffusion models. Existing\noptimization-based methods, which leverage the powerful 2D prior of diffusion\nmodels to handle various parametric images, have mainly focused on generation.\nUnlike generation, editing requires a balance between conforming to the target\nattribute and preserving the identity of the source content. 
Recent 2D image\nediting methods have achieved this balance by leveraging the stochastic latent\nencoded in the generative process of diffusion models. To extend the editing\ncapabilities of diffusion models shown in pixel space to parameter space, we\nreformulate the 2D image editing method into an optimization form named PDS.\nPDS matches the stochastic latents of the source and the target, enabling the\nsampling of targets in diverse parameter spaces that align with a desired\nattribute while maintaining the source's identity. We demonstrate that this\noptimization resembles running a generative process with the target attribute,\nbut aligning this process with the trajectory of the source's generative\nprocess. Extensive editing results in Neural Radiance Fields and Scalable\nVector Graphics representations demonstrate that PDS is capable of sampling\ntargets to fulfill the aforementioned balance across various parameter spaces.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Juil Koo", "Chanho Park", "Minhyuk Sung"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f813"}, "filepath": "data/2311.18129.png", "tags": [], "_media_type": "image", "_rand": 0.9994102195922806, "arXiv_link": "https://arxiv.org/abs/2311.18129", "other_link": "", "title": "Mixed-Precision Quantization for Federated Learning on Resource-Constrained Heterogeneous Devices", "abstract": "While federated learning (FL) systems often utilize quantization to battle\ncommunication and computational bottlenecks, they have heretofore been limited\nto deploying fixed-precision quantization schemes. Meanwhile, the concept of\nmixed-precision quantization (MPQ), where different layers of a deep learning\nmodel are assigned varying bit-width, remains unexplored in the FL settings. We\npresent a novel FL algorithm, FedMPQ, which introduces mixed-precision\nquantization to resource-heterogeneous FL systems. Specifically, local models,\nquantized so as to satisfy bit-width constraint, are trained by optimizing an\nobjective function that includes a regularization term which promotes reduction\nof precision in some of the layers without significant performance degradation.\nThe server collects local model updates, de-quantizes them into full-precision\nmodels, and then aggregates them into a global model. To initialize the next\nround of local training, the server relies on the information learned in the\nprevious training round to customize bit-width assignments of the models\ndelivered to different clients. 
In extensive benchmarking experiments on\nseveral model architectures and different datasets in both iid and non-iid\nsettings, FedMPQ outperformed the baseline FL schemes that utilize\nfixed-precision quantization while incurring only a minor computational\noverhead on the participating devices.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Huancheng Chen", "Haris Vikalo"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Distributed, Parallel, and Cluster Computing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f814"}, "filepath": "data/2403.06102.png", "tags": [], "_media_type": "image", "_rand": 0.999814398636556, "arXiv_link": "https://arxiv.org/abs/2403.06102", "other_link": "", "title": "Coherent Temporal Synthesis for Incremental Action Segmentation", "abstract": "Data replay is a successful incremental learning technique for images. It\nprevents catastrophic forgetting by keeping a reservoir of previous data,\noriginal or synthesized, to ensure the model retains past knowledge while\nadapting to novel concepts. However, its application in the video domain is\nrudimentary, as it simply stores frame exemplars for action recognition. This\npaper presents the first exploration of video data replay techniques for\nincremental action segmentation, focusing on action temporal modeling. We\npropose a Temporally Coherent Action (TCA) model, which represents actions\nusing a generative model instead of storing individual frames. The integration\nof a conditioning variable that captures temporal coherence allows our model to\nunderstand the evolution of action features over time. Therefore, action\nsegments generated by TCA for replay are diverse and temporally coherent. In a\n10-task incremental setup on the Breakfast dataset, our approach achieves\nsignificant increases in accuracy for up to 22% compared to the baselines.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Guodong Ding", "Hans Golong", "Angela Yao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f815"}, "filepath": "data/2403.06247.png", "tags": [], "_media_type": "image", "_rand": 0.9995195846191225, "arXiv_link": "https://arxiv.org/abs/2403.06247", "other_link": "", "title": "Text-Guided Variational Image Generation for Industrial Anomaly Detection and Segmentation", "abstract": "We propose a text-guided variational image generation method to address the\nchallenge of getting clean data for anomaly detection in industrial\nmanufacturing. Our method utilizes text information about the target object,\nlearned from extensive text library documents, to generate non-defective data\nimages resembling the input image. The proposed framework ensures that the\ngenerated non-defective images align with anticipated distributions derived\nfrom textual and image-based knowledge, ensuring stability and generality.\nExperimental results demonstrate the effectiveness of our approach, surpassing\nprevious methods even with limited non-defective data. Our approach is\nvalidated through generalization tests across four baseline models and three\ndistinct datasets. 
We present an additional analysis to enhance the\neffectiveness of anomaly detection models by utilizing the generated images.", "keywords": [], "authors_list": ["Mingyu Lee", "Jongwon Choi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f816"}, "filepath": "data/2312.12142.png", "tags": [], "_media_type": "image", "_rand": 0.9990348583557215, "arXiv_link": "https://arxiv.org/abs/2312.12142", "other_link": "https://github.com/yeungchenwa/FontDiffuser.", "title": "Generate Like Experts: Multi-Stage Font Generation by Incorporating Font Transfer Process into Diffusion Models", "abstract": "Automatic font generation is an imitation task, which aims to create a font\nlibrary that mimics the style of reference images while preserving the content\nfrom source images. Although existing font generation methods have achieved\nsatisfactory performance, they still struggle with complex characters and large\nstyle variations. To address these issues, we propose FontDiffuser, a\ndiffusion-based image-to-image one-shot font generation method, which\ninnovatively models the font imitation task as a noise-to-denoise paradigm. In\nour method, we introduce a Multi-scale Content Aggregation (MCA) block, which\neffectively combines global and local content cues across different scales,\nleading to enhanced preservation of intricate strokes of complex characters.\nMoreover, to better manage the large variations in style transfer, we propose a\nStyle Contrastive Refinement (SCR) module, which is a novel structure for style\nrepresentation learning. It utilizes a style extractor to disentangle styles\nfrom images, subsequently supervising the diffusion model via a meticulously\ndesigned style contrastive loss. Extensive experiments demonstrate\nFontDiffuser's state-of-the-art performance in generating diverse characters\nand styles. It consistently excels on complex characters and large style\nchanges compared to previous methods. The code is available at\nhttps://github.com/yeungchenwa/FontDiffuser.", "keywords": [], "authors_list": ["Bin Fu", "Fanghua Yu", "Anran Liu", "Zixuan Wang", "Jie Wen", "Junjun He", "Yu Qiao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f817"}, "filepath": "data/2405.03144.png", "tags": [], "_media_type": "image", "_rand": 0.9991026379126872, "arXiv_link": "https://arxiv.org/abs/2405.03144", "other_link": "https://github.com/chengtao-lv/PTQ4SAM}.", "title": "PTQ4SAM: Post-Training Quantization for Segment Anything", "abstract": "Segment Anything Model (SAM) has achieved impressive performance in many\ncomputer vision tasks. However, as a large-scale model, the immense memory and\ncomputation costs hinder its practical deployment. In this paper, we propose a\npost-training quantization (PTQ) framework for Segment Anything Model, namely\nPTQ4SAM. First, we investigate the inherent bottleneck of SAM quantization\nattributed to the bimodal distribution in post-Key-Linear activations. 
We\nanalyze its characteristics from both per-tensor and per-channel perspectives,\nand propose a Bimodal Integration strategy, which utilizes a mathematically\nequivalent sign operation to transform the bimodal distribution into a\nrelatively easy-quantized normal distribution offline. Second, SAM encompasses\ndiverse attention mechanisms (i.e., self-attention and two-way\ncross-attention), resulting in substantial variations in the post-Softmax\ndistributions. Therefore, we introduce an Adaptive Granularity Quantization for\nSoftmax through searching the optimal power-of-two base, which is\nhardware-friendly. Extensive experimental results across various vision tasks\n(instance segmentation, semantic segmentation and object detection), datasets\nand model variants show the superiority of PTQ4SAM. For example, when\nquantizing SAM-L to 6-bit, we achieve lossless accuracy for instance\nsegmentation, about 0.5\\% drop with theoretical 3.9$\\times$ acceleration. The\ncode is available at \\url{https://github.com/chengtao-lv/PTQ4SAM}.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Chengtao Lv", "Hong Chen", "Jinyang Guo", "Yifu Ding", "Xianglong Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f818"}, "filepath": "data/2404.11732.png", "tags": [], "_media_type": "image", "_rand": 0.9991264477524474, "arXiv_link": "https://arxiv.org/abs/2404.11732", "other_link": "", "title": "Visual Prompting for Generalized Few-shot Segmentation: A Multi-scale Approach", "abstract": "The emergence of attention-based transformer models has led to their\nextensive use in various tasks, due to their superior generalization and\ntransfer properties. Recent research has demonstrated that such models, when\nprompted appropriately, are excellent for few-shot inference. However, such\ntechniques are under-explored for dense prediction tasks like semantic\nsegmentation. In this work, we examine the effectiveness of prompting a\ntransformer-decoder with learned visual prompts for the generalized few-shot\nsegmentation (GFSS) task. Our goal is to achieve strong performance not only on\nnovel categories with limited examples, but also to retain performance on base\ncategories. We propose an approach to learn visual prompts with limited\nexamples. These learned visual prompts are used to prompt a multiscale\ntransformer decoder to facilitate accurate dense predictions. Additionally, we\nintroduce a unidirectional causal attention mechanism between the novel\nprompts, learned with limited examples, and the base prompts, learned with\nabundant data. This mechanism enriches the novel prompts without deteriorating\nthe base class performance. Overall, this form of prompting helps us achieve\nstate-of-the-art performance for GFSS on two different benchmark datasets:\nCOCO-$20^i$ and Pascal-$5^i$, without the need for test-time optimization (or\ntransduction). 
Furthermore, test-time optimization leveraging unlabelled test\ndata can be used to improve the prompts, which we refer to as transductive\nprompt tuning.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Mir Hossain Hossain", "Mennatullah Siam", "Leonid Sigal", "Jim Little"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f819"}, "filepath": "data/2311.10089.png", "tags": [], "_media_type": "image", "_rand": 0.9999861155350732, "arXiv_link": "https://arxiv.org/abs/2311.10089", "other_link": "", "title": "Precise Image Editing via Recognition and Generation Tasks", "abstract": "Instruction-based image editing holds immense potential for a variety of\napplications, as it enables users to perform any editing operation using a\nnatural language instruction. However, current models in this domain often\nstruggle with accurately executing user instructions. We present Emu Edit, a\nmulti-task image editing model which sets state-of-the-art results in\ninstruction-based image editing. To develop Emu Edit we train it to multi-task\nacross an unprecedented range of tasks, such as region-based editing, free-form\nediting, and Computer Vision tasks, all of which are formulated as generative\ntasks. Additionally, to enhance Emu Edit's multi-task learning abilities, we\nprovide it with learned task embeddings which guide the generation process\ntowards the correct edit type. Both these elements are essential for Emu Edit's\noutstanding performance. Furthermore, we show that Emu Edit can generalize to\nnew tasks, such as image inpainting, super-resolution, and compositions of\nediting tasks, with just a few labeled examples. This capability offers a\nsignificant advantage in scenarios where high-quality samples are scarce.\nLastly, to facilitate a more rigorous and informed assessment of instructable\nimage editing models, we release a new challenging and versatile benchmark that\nincludes seven different image editing tasks.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Shelly Sheynin", "Adam Polyak", "Uriel Singer", "Yuval Kirstain", "Amit Zohar", "Oron Ashual", "Devi Parikh", "Yaniv Taigman"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f81a"}, "filepath": "data/2402.18115.png", "tags": [], "_media_type": "image", "_rand": 0.9990235571081257, "arXiv_link": "https://arxiv.org/abs/2402.18115", "other_link": "https://github.com/MinghanLi/UniVS}.", "title": "UniVS: Unified and Universal Video Segmentation with Prompts as Queries", "abstract": "Despite the recent advances in unified image segmentation (IS), developing a\nunified video segmentation (VS) model remains a challenge. This is mainly\nbecause generic category-specified VS tasks need to detect all objects and\ntrack them across consecutive frames, while prompt-guided VS tasks require\nre-identifying the target with visual/text prompts throughout the entire video,\nmaking it hard to handle the different tasks with the same architecture. 
We\nmake an attempt to address these issues and present a novel unified VS\narchitecture, namely UniVS, by using prompts as queries. UniVS averages the\nprompt features of the target from previous frames as its initial query to\nexplicitly decode masks, and introduces a target-wise prompt cross-attention\nlayer in the mask decoder to integrate prompt features in the memory pool. By\ntaking the predicted masks of entities from previous frames as their visual\nprompts, UniVS converts different VS tasks into prompt-guided target\nsegmentation, eliminating the heuristic inter-frame matching process. Our\nframework not only unifies the different VS tasks but also naturally achieves\nuniversal training and testing, ensuring robust performance across different\nscenarios. UniVS shows a commendable balance between performance and\nuniversality on 10 challenging VS benchmarks, covering video instance,\nsemantic, panoptic, object, and referring segmentation tasks. Code can be found\nat \\url{https://github.com/MinghanLi/UniVS}.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Minghan LI", "Shuai Li", "Xindong Zhang", "Lei Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f81b"}, "filepath": "data/2401.05011.png", "tags": [], "_media_type": "image", "_rand": 0.9995935105522312, "arXiv_link": "https://arxiv.org/abs/2401.05011", "other_link": "", "title": "A-Teacher: Asymmetric Network for 3D Semi-Supervised Object Detection", "abstract": "Semi-supervised 3D object detection is a promising yet under-explored\ndirection to reduce data annotation costs, especially for cluttered indoor\nscenes. A few prior works, such as SESS and 3DIoUMatch, attempt to solve this\ntask by utilizing a teacher model to generate pseudo-labels for unlabeled\nsamples. However, the availability of unlabeled samples in the 3D domain is\nrelatively limited compared to its 2D counterpart due to the greater effort\nrequired to collect 3D data. Moreover, the loose consistency regularization in\nSESS and restricted pseudo-label selection strategy in 3DIoUMatch lead to\neither low-quality supervision or a limited amount of pseudo labels. To address\nthese issues, we present a novel Dual-Perspective Knowledge Enrichment approach\nnamed DPKE for semi-supervised 3D object detection. Our DPKE enriches the\nknowledge of limited training data, particularly unlabeled data, from two\nperspectives: data-perspective and feature-perspective. Specifically, from the\ndata-perspective, we propose a class-probabilistic data augmentation method\nthat augments the input data with additional instances based on the varying\ndistribution of class probabilities. Our DPKE achieves feature-perspective\nknowledge enrichment by designing a geometry-aware feature matching method that\nregularizes feature-level similarity between object proposals from the student\nand teacher models. Extensive experiments on the two benchmark datasets\ndemonstrate that our DPKE achieves superior performance over existing\nstate-of-the-art approaches under various label ratio conditions. 
The source\ncode will be made available to the public.", "keywords": [], "authors_list": ["Hanshi Wang", "Zhipeng Zhang", "Jin Gao", "Weiming Hu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f81c"}, "filepath": "data/2311.11666.png", "tags": [], "_media_type": "image", "_rand": 0.9994890164951269, "arXiv_link": "https://arxiv.org/abs/2311.11666", "other_link": "", "title": "OmniSeg3D: Omniversal 3D Segmentation via Hierarchical Contrastive Learning", "abstract": "Towards holistic understanding of 3D scenes, a general 3D segmentation method\nis needed that can segment diverse objects without restrictions on object\nquantity or categories, while also reflecting the inherent hierarchical\nstructure. To achieve this, we propose OmniSeg3D, an omniversal segmentation\nmethod aims for segmenting anything in 3D all at once. The key insight is to\nlift multi-view inconsistent 2D segmentations into a consistent 3D feature\nfield through a hierarchical contrastive learning framework, which is\naccomplished by two steps. Firstly, we design a novel hierarchical\nrepresentation based on category-agnostic 2D segmentations to model the\nmulti-level relationship among pixels. Secondly, image features rendered from\nthe 3D feature field are clustered at different levels, which can be further\ndrawn closer or pushed apart according to the hierarchical relationship between\ndifferent levels. In tackling the challenges posed by inconsistent 2D\nsegmentations, this framework yields a global consistent 3D feature field,\nwhich further enables hierarchical segmentation, multi-object selection, and\nglobal discretization. Extensive experiments demonstrate the effectiveness of\nour method on high-quality 3D segmentation and accurate hierarchical structure\nunderstanding. A graphical user interface further facilitates flexible\ninteraction for omniversal 3D segmentation.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Haiyang Ying", "Yixuan Yin", "Jinzhi Zhang", "Fan Wang", "Tao Yu", "Ruqi Huang", "Lu Fang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f81d"}, "filepath": "data/2405.01356.png", "tags": [], "_media_type": "image", "_rand": 0.9996546685278089, "arXiv_link": "https://arxiv.org/abs/2405.01356", "other_link": "", "title": "Improving Subject-Driven Image Synthesis with Subject-Agnostic Guidance", "abstract": "In subject-driven text-to-image synthesis, the synthesis process tends to be\nheavily influenced by the reference images provided by users, often overlooking\ncrucial attributes detailed in the text prompt. In this work, we propose\nSubject-Agnostic Guidance (SAG), a simple yet effective solution to remedy the\nproblem. We show that through constructing a subject-agnostic condition and\napplying our proposed dual classifier-free guidance, one could obtain outputs\nconsistent with both the given subject and input text prompts. We validate the\nefficacy of our approach through both optimization-based and encoder-based\nmethods. Additionally, we demonstrate its applicability in second-order\ncustomization methods, where an encoder-based model is fine-tuned with\nDreamBooth. 
Our approach is conceptually simple and requires only minimal code\nmodifications, but leads to substantial quality improvements, as evidenced by\nour evaluations and user studies.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Kelvin C.K. Chan", "Yang Zhao", "Xuhui Jia", "Ming-Hsuan Yang", "Huisheng Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f81e"}, "filepath": "data/2404.02755.png", "tags": [], "_media_type": "image", "_rand": 0.9996225295354845, "arXiv_link": "https://arxiv.org/abs/2404.02755", "other_link": "", "title": "DIBS: Enhancing Dense Video Captioning with Unlabeled Videos via Pseudo Boundary Enrichment and Online Refinement", "abstract": "We present Dive Into the BoundarieS (DIBS), a novel pretraining framework for\ndense video captioning (DVC), that elaborates on improving the quality of the\ngenerated event captions and their associated pseudo event boundaries from\nunlabeled videos. By leveraging the capabilities of diverse large language\nmodels (LLMs), we generate rich DVC-oriented caption candidates and optimize\nthe corresponding pseudo boundaries under several meticulously designed\nobjectives, considering diversity, event-centricity, temporal ordering, and\ncoherence. Moreover, we further introduce a novel online boundary refinement\nstrategy that iteratively improves the quality of pseudo boundaries during\ntraining. Comprehensive experiments have been conducted to examine the\neffectiveness of the proposed technique components. By leveraging a substantial\namount of unlabeled video data, such as HowTo100M, we achieve a remarkable\nadvancement on standard DVC datasets like YouCook2 and ActivityNet. We\noutperform the previous state-of-the-art Vid2Seq across a majority of metrics,\nachieving this with just 0.4% of the unlabeled video data used for pre-training\nby Vid2Seq.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Hao Wu", "Huabin Liu", "Yu Qiao", "Xiao Sun"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f81f"}, "filepath": "data/2404.03296.png", "tags": [], "_media_type": "image", "_rand": 0.999071619862252, "arXiv_link": "https://arxiv.org/abs/2404.03296", "other_link": "https://github.com/Cheeun/AdaBM.", "title": "AdaBM: On-the-Fly Adaptive Bit Mapping for Image Super-Resolution", "abstract": "Although image super-resolution (SR) problem has experienced unprecedented\nrestoration accuracy with deep neural networks, it has yet limited versatile\napplications due to the substantial computational costs. Since different input\nimages for SR face different restoration difficulties, adapting computational\ncosts based on the input image, referred to as adaptive inference, has emerged\nas a promising solution to compress SR networks. Specifically, adapting the\nquantization bit-widths has successfully reduced the inference and memory cost\nwithout sacrificing the accuracy. 
However, despite the benefits of the\nresultant adaptive network, existing works rely on time-intensive\nquantization-aware training with full access to the original training pairs to\nlearn the appropriate bit allocation policies, which limits its ubiquitous\nusage. To this end, we introduce the first on-the-fly adaptive quantization\nframework that accelerates the processing time from hours to seconds. We\nformulate the bit allocation problem with only two bit mapping modules: one to\nmap the input image to the image-wise bit adaptation factor and one to obtain\nthe layer-wise adaptation factors. These bit mappings are calibrated and\nfine-tuned using only a small number of calibration images. We achieve\ncompetitive performance with the previous adaptive quantization methods, while\nthe processing time is accelerated by x2000. Codes are available at\nhttps://github.com/Cheeun/AdaBM.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Cheeun Hong", "Kyoung Mu Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Image and Video Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f820"}, "filepath": "data/2405.05587.png", "tags": [], "_media_type": "image", "_rand": 0.9999079544810041, "arXiv_link": "https://arxiv.org/abs/2405.05587", "other_link": "", "title": "Navigate Beyond Shortcuts: Debiased Learning through the Lens of Neural Collapse", "abstract": "Recent studies have noted an intriguing phenomenon termed Neural Collapse,\nthat is, when the neural networks establish the right correlation between\nfeature spaces and the training targets, their last-layer features, together\nwith the classifier weights, will collapse into a stable and symmetric\nstructure. In this paper, we extend the investigation of Neural Collapse to the\nbiased datasets with imbalanced attributes. We observe that models will easily\nfall into the pitfall of shortcut learning and form a biased, non-collapsed\nfeature space at the early period of training, which is hard to reverse and\nlimits the generalization capability. To tackle the root cause of biased\nclassification, we follow the recent inspiration of prime training, and propose\nan avoid-shortcut learning framework without additional training complexity.\nWith well-designed shortcut primes based on Neural Collapse structure, the\nmodels are encouraged to skip the pursuit of simple shortcuts and naturally\ncapture the intrinsic correlations. Experimental results demonstrate that our\nmethod induces better convergence properties during training, and achieves\nstate-of-the-art generalization performance on both synthetic and real-world\nbiased datasets.", "keywords": [], "authors_list": ["Yining Wang", "Junjie Sun", "Chenyue Wang", "Mi Zhang", "Min Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f821"}, "filepath": "data/2403.10701.png", "tags": [], "_media_type": "image", "_rand": 0.9998061540757888, "arXiv_link": "https://arxiv.org/abs/2403.10701", "other_link": "", "title": "IMPRINT: Generative Object Compositing by Learning Identity-Preserving Representation", "abstract": "Generative object compositing emerges as a promising new avenue for\ncompositional image editing. 
However, the requirement of object identity\npreservation poses a significant challenge, limiting practical usage of most\nexisting methods. In response, this paper introduces IMPRINT, a novel\ndiffusion-based generative model trained with a two-stage learning framework\nthat decouples learning of identity preservation from that of compositing. The\nfirst stage is targeted for context-agnostic, identity-preserving pretraining\nof the object encoder, enabling the encoder to learn an embedding that is both\nview-invariant and conducive to enhanced detail preservation. The subsequent\nstage leverages this representation to learn seamless harmonization of the\nobject composited to the background. In addition, IMPRINT incorporates a\nshape-guidance mechanism offering user-directed control over the compositing\nprocess. Extensive experiments demonstrate that IMPRINT significantly\noutperforms existing methods and various baselines on identity preservation and\ncomposition quality.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Yizhi Song", "Zhifei Zhang", "Zhe Lin", "Scott Cohen", "Brian Price", "Jianming Zhang", "Soo Ye Kim", "He Zhang", "Wei Xiong", "Daniel Aliaga"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f822"}, "filepath": "data/2310.08332v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995196049494559, "arXiv_link": "http://export.arxiv.org/abs/2310.08332v1", "other_link": "", "title": "Differentiable Micro-Mesh Construction", "abstract": "We propose a novel compact and efficient neural BRDF offering highly\nversatile material representation, yet with very-light memory and neural\ncomputation consumption towards achieving real-time rendering. The results in\nFigure 1, rendered at full HD resolution on a current desktop machine, show\nthat our system achieves real-time rendering with a wide variety of\nappearances, which is approached by the following two designs. On the one hand,\nnoting that bidirectional reflectance is distributed in a very sparse\nhigh-dimensional subspace, we propose to project the BRDF into two\nlow-dimensional components, i.e., two hemisphere feature-grids for incoming and\noutgoing directions, respectively. On the other hand, learnable neural\nreflectance primitives are distributed on our highly-tailored spherical surface\ngrid, which offer informative features for each component and alleviate the\nconventional heavy feature learning network to a much smaller one, leading to\nvery fast evaluation. These primitives are centrally stored in a codebook and\ncan be shared across multiple grids and even across materials, based on the\nlow-cost indices stored in material-specific spherical surface grids. Our\nneural BRDF, which is agnostic to the material, provides a unified framework\nthat can represent a variety of materials in consistent manner. 
Comprehensive\nexperimental results on measured BRDF compression, Monte Carlo simulated BRDF\nacceleration, and extension to spatially varying effect demonstrate the\nsuperior quality and generalizability achieved by the proposed scheme.", "keywords": ["Deep learning architectures and techniques", "Computational imaging and physics-based vision"], "authors_list": ["Yishun Dou", "Zhong Zheng", "Qiaoqiao Jin", "Rui Shi", "Yuhan Li", "Bingbing Ni"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f823"}, "filepath": "data/2403.06908v1.png", "tags": [], "_media_type": "image", "_rand": 0.999247512399681, "arXiv_link": "https://arxiv.org/abs/2403.06908v1", "other_link": "", "title": "FreGS: 3D Gaussian Splatting with Progressive Frequency Regularization", "abstract": "3D Gaussian splatting has achieved very impressive performance in real-time\nnovel view synthesis. However, it often suffers from over-reconstruction during\nGaussian densification where high-variance image regions are covered by a few\nlarge Gaussians only, leading to blur and artifacts in the rendered images. We\ndesign a progressive frequency regularization (FreGS) technique to tackle the\nover-reconstruction issue within the frequency space. Specifically, FreGS\nperforms coarse-to-fine Gaussian densification by exploiting low-to-high\nfrequency components that can be easily extracted with low-pass and high-pass\nfilters in the Fourier space. By minimizing the discrepancy between the\nfrequency spectrum of the rendered image and the corresponding ground truth, it\nachieves high-quality Gaussian densification and alleviates the\nover-reconstruction of Gaussian splatting effectively. Experiments over\nmultiple widely adopted benchmarks (e.g., Mip-NeRF360, Tanks-and-Temples and\nDeep Blending) show that FreGS achieves superior novel view synthesis and\noutperforms the state-of-the-art consistently.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Jiahui Zhang", "Fangneng Zhan", "MUYU XU", "Shijian Lu", "Eric P. Xing"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f824"}, "filepath": "data/2312.13066.png", "tags": [], "_media_type": "image", "_rand": 0.9995346694648314, "arXiv_link": "https://arxiv.org/abs/2312.13066", "other_link": "", "title": "Parameter Efficient Self-Supervised Geospatial Domain Adaptation", "abstract": "Self-supervised monocular depth estimation is of significant importance with\napplications spanning across autonomous driving and robotics. However, the\nreliance on self-supervision introduces a strong static-scene assumption,\nthereby posing challenges in achieving optimal performance in dynamic scenes,\nwhich are prevalent in most real-world situations. To address these issues, we\npropose PPEA-Depth, a Progressive Parameter-Efficient Adaptation approach to\ntransfer a pre-trained image model for self-supervised depth estimation. The\ntraining comprises two sequential stages: an initial phase trained on a dataset\nprimarily composed of static scenes, succeeded by an expansion to more\nintricate datasets involving dynamic scenes. 
To facilitate this process, we\ndesign compact encoder and decoder adapters to enable parameter-efficient\ntuning, allowing the network to adapt effectively. They not only uphold\ngeneralized patterns from pre-trained image models but also retain knowledge\ngained from the preceding phase into the subsequent one. Extensive experiments\ndemonstrate that PPEA-Depth achieves state-of-the-art performance on KITTI,\nCityScapes and DDAD datasets.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Linus Scheibenreif", "Michael Mommert", "Damian Borth"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f825"}, "filepath": "data/2402.05472.png", "tags": [], "_media_type": "image", "_rand": 0.9997630375002017, "arXiv_link": "https://arxiv.org/abs/2402.05472", "other_link": "", "title": "Question Aware Vision Transformer for Multimodal Reasoning", "abstract": "Vision-Language (VL) models have gained significant research focus, enabling\nremarkable advances in multimodal reasoning. These architectures typically\ncomprise a vision encoder, a Large Language Model (LLM), and a projection\nmodule that aligns visual features with the LLM's representation space. Despite\ntheir success, a critical limitation persists: the vision encoding process\nremains decoupled from user queries, often in the form of image-related\nquestions. Consequently, the resulting visual features may not be optimally\nattuned to the query-specific elements of the image. To address this, we\nintroduce QA-ViT, a Question Aware Vision Transformer approach for multimodal\nreasoning, which embeds question awareness directly within the vision encoder.\nThis integration results in dynamic visual features focusing on relevant image\naspects to the posed question. QA-ViT is model-agnostic and can be incorporated\nefficiently into any VL architecture. Extensive experiments demonstrate the\neffectiveness of applying our method to various multimodal architectures,\nleading to consistent improvement across diverse tasks and showcasing its\npotential for enhancing visual and scene-text understanding.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Roy Ganz", "Yair Kittenplon", "Aviad Aberdam", "Elad Ben Avraham", "Oren Nuriel", "Shai Mazor", "Ron Litman"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f826"}, "filepath": "data/2310.08332.png", "tags": [], "_media_type": "image", "_rand": 0.9990671422875933, "arXiv_link": "https://arxiv.org/abs/2310.08332", "other_link": "", "title": "Real-Time Neural BRDF with Spherically Distributed Primitives", "abstract": "We propose a novel compact and efficient neural BRDF offering highly\nversatile material representation, yet with very-light memory and neural\ncomputation consumption towards achieving real-time rendering. The results in\nFigure 1, rendered at full HD resolution on a current desktop machine, show\nthat our system achieves real-time rendering with a wide variety of\nappearances, which is approached by the following two designs. 
On the one hand,\nnoting that bidirectional reflectance is distributed in a very sparse\nhigh-dimensional subspace, we propose to project the BRDF into two\nlow-dimensional components, i.e., two hemisphere feature-grids for incoming and\noutgoing directions, respectively. On the other hand, learnable neural\nreflectance primitives are distributed on our highly-tailored spherical surface\ngrid, which offer informative features for each component and alleviate the\nconventional heavy feature learning network to a much smaller one, leading to\nvery fast evaluation. These primitives are centrally stored in a codebook and\ncan be shared across multiple grids and even across materials, based on the\nlow-cost indices stored in material-specific spherical surface grids. Our\nneural BRDF, which is agnostic to the material, provides a unified framework\nthat can represent a variety of materials in consistent manner. Comprehensive\nexperimental results on measured BRDF compression, Monte Carlo simulated BRDF\nacceleration, and extension to spatially varying effect demonstrate the\nsuperior quality and generalizability achieved by the proposed scheme.", "keywords": ["Efficient and scalable vision", "Computational imaging and physics-based vision"], "authors_list": ["Yishun Dou", "Zhong Zheng", "Qiaoqiao Jin", "Bingbing Ni", "Yugang Chen", "Junxiang Ke"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f827"}, "filepath": "data/2404.06351.png", "tags": [], "_media_type": "image", "_rand": 0.9993799702940759, "arXiv_link": "https://arxiv.org/abs/2404.06351", "other_link": "https://github.com/XiaolongTang23/HPNet.", "title": "HPNet: Dynamic Trajectory Forecasting with Historical Prediction Attention", "abstract": "Predicting the trajectories of road agents is essential for autonomous\ndriving systems. The recent mainstream methods follow a static paradigm, which\npredicts the future trajectory by using a fixed duration of historical frames.\nThese methods make the predictions independently even at adjacent time steps,\nwhich leads to potential instability and temporal inconsistency. As successive\ntime steps have largely overlapping historical frames, their forecasting should\nhave intrinsic correlation, such as overlapping predicted trajectories should\nbe consistent, or be different but share the same motion goal depending on the\nroad situation. Motivated by this, in this work, we introduce HPNet, a novel\ndynamic trajectory forecasting method. Aiming for stable and accurate\ntrajectory forecasting, our method leverages not only historical frames\nincluding maps and agent states, but also historical predictions. Specifically,\nwe newly design a Historical Prediction Attention module to automatically\nencode the dynamic relationship between successive predictions. Besides, it\nalso extends the attention range beyond the currently visible window\nbenefitting from the use of historical predictions. The proposed Historical\nPrediction Attention together with the Agent Attention and Mode Attention is\nfurther formulated as the Triple Factorized Attention module, serving as the\ncore design of HPNet.Experiments on the Argoverse and INTERACTION datasets show\nthat HPNet achieves state-of-the-art performance, and generates accurate and\nstable future trajectories. 
Our code are available at\nhttps://github.com/XiaolongTang23/HPNet.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Xiaolong Tang", "Meina Kan", "Shiguang Shan", "Zhilong Ji", "Jinfeng Bai", "Xilin Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f828"}, "filepath": "data/2404.02388.png", "tags": [], "_media_type": "image", "_rand": 0.9990921227224707, "arXiv_link": "https://arxiv.org/abs/2404.02388", "other_link": "https://github.com/AIML-MED/CAPE.", "title": "CAPE: CAM as a Probabilistic Ensemble for Enhanced DNN Interpretation", "abstract": "Deep Neural Networks (DNNs) are widely used for visual classification tasks,\nbut their complex computation process and black-box nature hinder decision\ntransparency and interpretability. Class activation maps (CAMs) and recent\nvariants provide ways to visually explain the DNN decision-making process by\ndisplaying 'attention' heatmaps of the DNNs. Nevertheless, the CAM explanation\nonly offers relative attention information, that is, on an attention heatmap,\nwe can interpret which image region is more or less important than the others.\nHowever, these regions cannot be meaningfully compared across classes, and the\ncontribution of each region to the model's class prediction is not revealed. To\naddress these challenges that ultimately lead to better DNN Interpretation, in\nthis paper, we propose CAPE, a novel reformulation of CAM that provides a\nunified and probabilistically meaningful assessment of the contributions of\nimage regions. We quantitatively and qualitatively compare CAPE with\nstate-of-the-art CAM methods on CUB and ImageNet benchmark datasets to\ndemonstrate enhanced interpretability. We also test on a cytology imaging\ndataset depicting a challenging Chronic Myelomonocytic Leukemia (CMML)\ndiagnosis problem. Code is available at: https://github.com/AIML-MED/CAPE.", "keywords": [], "authors_list": ["Townim Chowdhury", "Kewen Liao", "Vu Minh Hieu Phan", "Minh-Son To", "Yutong Xie", "Kevin Hung", "David Ross", "Anton van den Hengel", "Johan Verjans", "Zhibin Liao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f829"}, "filepath": "data/2312.07067.png", "tags": [], "_media_type": "image", "_rand": 0.9991769507874543, "arXiv_link": "https://arxiv.org/abs/2312.07067", "other_link": "", "title": "Focus on Hiders: Exploring Hidden Threats for Enhancing Adversarial Training", "abstract": "Adversarial training is often formulated as a min-max problem, however,\nconcentrating only on the worst adversarial examples causes alternating\nrepetitive confusion of the model, i.e., previously defended or correctly\nclassified samples are not defensible or accurately classifiable in subsequent\nadversarial training. We characterize such non-ignorable samples as \"hiders\",\nwhich reveal the hidden high-risk regions within the secure area obtained\nthrough adversarial training and prevent the model from finding the real worst\ncases. We demand the model to prevent hiders when defending against adversarial\nexamples for improving accuracy and robustness simultaneously. 
By rethinking\nand redefining the min-max optimization problem for adversarial training, we\npropose a generalized adversarial training algorithm called Hider-Focused\nAdversarial Training (HFAT). HFAT introduces the iterative evolution\noptimization strategy to simplify the optimization problem and employs an\nauxiliary model to reveal hiders, effectively combining the optimization\ndirections of standard adversarial training and prevention hiders. Furthermore,\nwe introduce an adaptive weighting mechanism that facilitates the model in\nadaptively adjusting its focus between adversarial examples and hiders during\ndifferent training periods. We demonstrate the effectiveness of our method\nbased on extensive experiments, and ensure that HFAT can provide higher\nrobustness and accuracy.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Qian Li", "Yuxiao Hu", "Yinpeng Dong", "Dongxiao Zhang", "Yuntian Chen"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Cryptography and Security", "Computer Vision and Pattern Recognition", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f82a"}, "filepath": "data/2405.01538.png", "tags": [], "_media_type": "image", "_rand": 0.9995633350258153, "arXiv_link": "https://arxiv.org/abs/2405.01538", "other_link": "", "title": "Multi-Space Alignments Towards Universal LiDAR Segmentation", "abstract": "A unified and versatile LiDAR segmentation model with strong robustness and\ngeneralizability is desirable for safe autonomous driving perception. This work\npresents M3Net, a one-of-a-kind framework for fulfilling multi-task,\nmulti-dataset, multi-modality LiDAR segmentation in a universal manner using\njust a single set of parameters. To better exploit data volume and diversity,\nwe first combine large-scale driving datasets acquired by different types of\nsensors from diverse scenes and then conduct alignments in three spaces, namely\ndata, feature, and label spaces, during the training. As a result, M3Net is\ncapable of taming heterogeneous data for training state-of-the-art LiDAR\nsegmentation models. Extensive experiments on twelve LiDAR segmentation\ndatasets verify our effectiveness. Notably, using a shared set of parameters,\nM3Net achieves 75.1%, 83.1%, and 72.4% mIoU scores, respectively, on the\nofficial benchmarks of SemanticKITTI, nuScenes, and Waymo Open.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Youquan Liu", "Lingdong Kong", "Xiaoyang Wu", "Runnan Chen", "Xin Li", "Liang Pan", "Ziwei Liu", "Yuexin Ma"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f82b"}, "filepath": "data/2312.00094.png", "tags": [], "_media_type": "image", "_rand": 0.9991612718038968, "arXiv_link": "https://arxiv.org/abs/2312.00094", "other_link": "https://github.com/zju-pi/diff-sampler.", "title": "Fast ODE-based Sampling for Diffusion Models in Around 5 Steps", "abstract": "Sampling from diffusion models can be treated as solving the corresponding\nordinary differential equations (ODEs), with the aim of obtaining an accurate\nsolution with as few number of function evaluations (NFE) as possible.\nRecently, various fast samplers utilizing higher-order ODE solvers have emerged\nand achieved better performance than the initial first-order one. 
However,\nthese numerical methods inherently result in certain approximation errors,\nwhich significantly degrades sample quality with extremely small NFE (e.g.,\naround 5). In contrast, based on the geometric observation that each sampling\ntrajectory almost lies in a two-dimensional subspace embedded in the ambient\nspace, we propose Approximate MEan-Direction Solver (AMED-Solver) that\neliminates truncation errors by directly learning the mean direction for fast\ndiffusion sampling. Besides, our method can be easily used as a plugin to\nfurther improve existing ODE-based samplers. Extensive experiments on image\nsynthesis with the resolution ranging from 32 to 512 demonstrate the\neffectiveness of our method. With only 5 NFE, we achieve 6.61 FID on CIFAR-10,\n10.74 FID on ImageNet 64$\\times$64, and 13.20 FID on LSUN Bedroom. Our code is\navailable at https://github.com/zju-pi/diff-sampler.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Zhenyu Zhou", "Defang Chen", "Can Wang", "Chun Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f82c"}, "filepath": "data/2405.05259.png", "tags": [], "_media_type": "image", "_rand": 0.999900025964816, "arXiv_link": "http://export.arxiv.org/abs/2405.05259", "other_link": "", "title": "OpenESS: Event-based Semantic Scene Understanding with Open Vocabularies", "abstract": "Event-based semantic segmentation (ESS) is a fundamental yet challenging task\nfor event camera sensing. The difficulties in interpreting and annotating event\ndata limit its scalability. While domain adaptation from images to event data\ncan help to mitigate this issue, there exist data representational differences\nthat require additional effort to resolve. In this work, for the first time, we\nsynergize information from image, text, and event-data domains and introduce\nOpenESS to enable scalable ESS in an open-world, annotation-efficient manner.\nWe achieve this goal by transferring the semantically rich CLIP knowledge from\nimage-text pairs to event streams. To pursue better cross-modality adaptation,\nwe propose a frame-to-event contrastive distillation and a text-to-event\nsemantic consistency regularization. Experimental results on popular ESS\nbenchmarks showed our approach outperforms existing methods. 
Notably, we\nachieve 53.93% and 43.31% mIoU on DDD17 and DSEC-Semantic without using either\nevent or frame labels.", "keywords": ["Multimodal models and vision-language models", "Efficient and scalable vision"], "authors_list": ["Lingdong Kong", "Youquan Liu", "Lai Xing Ng", "Benoit Cottereau", "Wei Tsang Ooi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f82d"}, "filepath": "data/2402.06136.png", "tags": [], "_media_type": "image", "_rand": 0.9991922913505252, "arXiv_link": "https://arxiv.org/abs/2402.06136", "other_link": "https://xiaokangwei.github.io/SIR/.", "title": "VMINer: Versatile Multi-view Inverse Rendering with Near- and Far-field Light Sources", "abstract": "We propose SIR, an efficient method to decompose differentiable shadows for\ninverse rendering on indoor scenes using multi-view data, addressing the\nchallenges in accurately decomposing the materials and lighting conditions.\nUnlike previous methods that struggle with shadow fidelity in complex lighting\nenvironments, our approach explicitly learns shadows for enhanced realism in\nmaterial estimation under unknown light positions. Utilizing posed HDR images\nas input, SIR employs an SDF-based neural radiance field for comprehensive\nscene representation. Then, SIR integrates a shadow term with a three-stage\nmaterial estimation approach to improve SVBRDF quality. Specifically, SIR is\ndesigned to learn a differentiable shadow, complemented by BRDF regularization,\nto optimize inverse rendering accuracy. Extensive experiments on both synthetic\nand real-world indoor scenes demonstrate the superior performance of SIR over\nexisting methods in both quantitative metrics and qualitative analysis. The\nsignificant decomposing ability of SIR enables sophisticated editing\ncapabilities like free-view relighting, object insertion, and material\nreplacement. The code and data are available at\nhttps://xiaokangwei.github.io/SIR/.", "keywords": ["Efficient and scalable vision", "Scene analysis and understanding"], "authors_list": ["Fan Fei", "Jiajun Tang", "Ping Tan", "Boxin Shi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f82e"}, "filepath": "data/2404.00679.png", "tags": [], "_media_type": "image", "_rand": 0.9993378518661195, "arXiv_link": "https://arxiv.org/abs/2404.00679", "other_link": "https://github.com/sakharok13/X-Ray-Teacher-Patching-Tools.", "title": "Weak-to-Strong 3D Object Detection with X-Ray Distillation", "abstract": "This paper addresses the critical challenges of sparsity and occlusion in\nLiDAR-based 3D object detection. Current methods often rely on supplementary\nmodules or specific architectural designs, potentially limiting their\napplicability to new and evolving architectures. To our knowledge, we are the\nfirst to propose a versatile technique that seamlessly integrates into any\nexisting framework for 3D Object Detection, marking the first instance of\nWeak-to-Strong generalization in 3D computer vision. We introduce a novel\nframework, X-Ray Distillation with Object-Complete Frames, suitable for both\nsupervised and semi-supervised settings, that leverages the temporal aspect of\npoint cloud sequences. 
This method extracts crucial information from both\nprevious and subsequent LiDAR frames, creating Object-Complete frames that\nrepresent objects from multiple viewpoints, thus addressing occlusion and\nsparsity. Given the limitation of not being able to generate Object-Complete\nframes during online inference, we utilize Knowledge Distillation within a\nTeacher-Student framework. This technique encourages the strong Student model\nto emulate the behavior of the weaker Teacher, which processes simple and\ninformative Object-Complete frames, effectively offering a comprehensive view\nof objects as if seen through X-ray vision. Our proposed methods surpass\nstate-of-the-art in semi-supervised learning by 1-1.5 mAP and enhance the\nperformance of five established supervised models by 1-2 mAP on standard\nautonomous driving datasets, even with default hyperparameters. Code for\nObject-Complete frames is available here:\nhttps://github.com/sakharok13/X-Ray-Teacher-Patching-Tools.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Alexander Gambashidze", "Aleksandr Dadukin", "Maksim Golyadkin", "Maria Razzhivina", "Ilya Makarov"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f82f"}, "filepath": "data/2403.12835.png", "tags": [], "_media_type": "image", "_rand": 0.9990132278153367, "arXiv_link": "https://arxiv.org/abs/2403.12835", "other_link": "", "title": "AnySkill: Learning Open-Vocabulary Physical Skill for Interactive Agents", "abstract": "Traditional approaches in physics-based motion generation, centered around\nimitation learning and reward shaping, often struggle to adapt to new\nscenarios. To tackle this limitation, we propose AnySkill, a novel hierarchical\nmethod that learns physically plausible interactions following open-vocabulary\ninstructions. Our approach begins by developing a set of atomic actions via a\nlow-level controller trained via imitation learning. Upon receiving an\nopen-vocabulary textual instruction, AnySkill employs a high-level policy that\nselects and integrates these atomic actions to maximize the CLIP similarity\nbetween the agent's rendered images and the text. An important feature of our\nmethod is the use of image-based rewards for the high-level policy, which\nallows the agent to learn interactions with objects without manual reward\nengineering. 
We demonstrate AnySkill's capability to generate realistic and\nnatural motion sequences in response to unseen instructions of varying lengths,\nmarking it the first method capable of open-vocabulary physical skill learning\nfor interactive humanoid agents.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models"], "authors_list": ["Jieming Cui", "Tengyu Liu", "Nian Liu", "Yaodong Yang", "Yixin Zhu", "Siyuan Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Robotics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f830"}, "filepath": "data/2402.08654.png", "tags": [], "_media_type": "image", "_rand": 0.9995628784896592, "arXiv_link": "https://arxiv.org/abs/2402.08654", "other_link": "https://ttchengab.github.io/continuous_3d_words", "title": "Learning Continuous 3D Words for Text-to-Image Generation", "abstract": "Current controls over diffusion models (e.g., through text or ControlNet) for\nimage generation fall short in recognizing abstract, continuous attributes like\nillumination direction or non-rigid shape change. In this paper, we present an\napproach for allowing users of text-to-image models to have fine-grained\ncontrol of several attributes in an image. We do this by engineering special\nsets of input tokens that can be transformed in a continuous manner -- we call\nthem Continuous 3D Words. These attributes can, for example, be represented as\nsliders and applied jointly with text prompts for fine-grained control over\nimage generation. Given only a single mesh and a rendering engine, we show that\nour approach can be adopted to provide continuous user control over several\n3D-aware attributes, including time-of-day illumination, bird wing orientation,\ndollyzoom effect, and object poses. Our method is capable of conditioning image\ncreation with multiple Continuous 3D Words and text descriptions simultaneously\nwhile adding no overhead to the generative process. Project Page:\nhttps://ttchengab.github.io/continuous_3d_words", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Ta-Ying Cheng", "Matheus Gadelha", "Thibault Groueix", "Matthew Fisher", "Radomir Mech", "Andrew Markham", "Niki Trigoni"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f831"}, "filepath": "data/2404.01862v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990268691971962, "arXiv_link": "https://arxiv.org/abs/2404.01862v1", "other_link": "https://github.com/thuhcsi/S2G-MDDiffusion.", "title": "Co-Speech Gesture Video Generation via Motion-Decoupled Diffusion Model", "abstract": "Co-speech gestures, if presented in the lively form of videos, can achieve\nsuperior visual effects in human-machine interaction. While previous works\nmostly generate structural human skeletons, resulting in the omission of\nappearance information, we focus on the direct generation of audio-driven\nco-speech gesture videos in this work. There are two main challenges: 1) A\nsuitable motion feature is needed to describe complex human movements with\ncrucial appearance information. 2) Gestures and speech exhibit inherent\ndependencies and should be temporally aligned even of arbitrary length. 
To\nsolve these problems, we present a novel motion-decoupled framework to generate\nco-speech gesture videos. Specifically, we first introduce a well-designed\nnonlinear TPS transformation to obtain latent motion features preserving\nessential appearance information. Then a transformer-based diffusion model is\nproposed to learn the temporal correlation between gestures and speech, and\nperforms generation in the latent motion space, followed by an optimal motion\nselection module to produce long-term coherent and consistent gesture videos.\nFor better visual perception, we further design a refinement network focusing\non missing details of certain areas. Extensive experimental results show that\nour proposed framework significantly outperforms existing approaches in both\nmotion and video-related evaluations. Our code, demos, and more resources are\navailable at https://github.com/thuhcsi/S2G-MDDiffusion.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Xu He", "Qiaochu Huang", "Zhensong Zhang", "Zhiwei Lin", "Zhiyong Wu", "Sicheng Yang", "Minglei Li", "Zhiyi Chen", "Songcen Xu", "Xiaofei Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Human-Computer Interaction", "Multimedia"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f832"}, "filepath": "data/2403.18978.png", "tags": [], "_media_type": "image", "_rand": 0.9991389618796248, "arXiv_link": "https://arxiv.org/abs/2403.18978", "other_link": "", "title": "TextCraftor: Your Text Encoder Can be Image Quality Controller", "abstract": "Diffusion-based text-to-image generative models, e.g., Stable Diffusion, have\nrevolutionized the field of content generation, enabling significant\nadvancements in areas like image editing and video synthesis. Despite their\nformidable capabilities, these models are not without their limitations. It is\nstill challenging to synthesize an image that aligns well with the input text,\nand multiple runs with carefully crafted prompts are required to achieve\nsatisfactory results. To mitigate these limitations, numerous studies have\nendeavored to fine-tune the pre-trained diffusion models, i.e., UNet, utilizing\nvarious technologies. Yet, amidst these efforts, a pivotal question of\ntext-to-image diffusion model training has remained largely unexplored: Is it\npossible and feasible to fine-tune the text encoder to improve the performance\nof text-to-image diffusion models? Our findings reveal that, instead of\nreplacing the CLIP text encoder used in Stable Diffusion with other large\nlanguage models, we can enhance it through our proposed fine-tuning approach,\nTextCraftor, leading to substantial improvements in quantitative benchmarks and\nhuman assessments. Interestingly, our technique also empowers controllable\nimage generation through the interpolation of different text encoders\nfine-tuned with various rewards. 
We also demonstrate that TextCraftor is\northogonal to UNet finetuning, and can be combined to further improve\ngenerative quality.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Yanyu Li", "Xian Liu", "Anil Kag", "Ju Hu", "Yerlan Idelbayev", "Dhritiman Sagar", "Yanzhi Wang", "Sergey Tulyakov", "Jian Ren"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f833"}, "filepath": "data/2405.07044v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996588280990801, "arXiv_link": "https://arxiv.org/html/2405.07044v1", "other_link": "https://github.com/wwangcece/SGDM", "title": "Learning Large-Factor EM Image Super-Resolution with Generative Priors", "abstract": "Remote sensing images captured by different platforms exhibit significant\ndisparities in spatial resolution. Large scale factor super-resolution (SR)\nalgorithms are vital for maximizing the utilization of low-resolution (LR)\nsatellite data captured from orbit. However, existing methods confront\nchallenges in recovering SR images with clear textures and correct ground\nobjects. We introduce a novel framework, the Semantic Guided Diffusion Model\n(SGDM), designed for large scale factor remote sensing image super-resolution.\nThe framework exploits a pre-trained generative model as a prior to generate\nperceptually plausible SR images. We further enhance the reconstruction by\nincorporating vector maps, which carry structural and semantic cues. Moreover,\npixel-level inconsistencies in paired remote sensing images, stemming from\nsensor-specific imaging characteristics, may hinder the convergence of the\nmodel and diversity in generated results. To address this problem, we propose\nto extract the sensor-specific imaging characteristics and model the\ndistribution of them, allowing diverse SR images generation based on imaging\ncharacteristics provided by reference images or sampled from the imaging\ncharacteristic probability distributions. To validate and evaluate our\napproach, we create the Cross-Modal Super-Resolution Dataset (CMSRD).\nQualitative and quantitative experiments on CMSRD showcase the superiority and\nbroad applicability of our method. Experimental results on downstream vision\ntasks also demonstrate the utilitarian of the generated SR images. The dataset\nand code will be publicly available at https://github.com/wwangcece/SGDM", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Jiateng Shou", "Zeyu Xiao", "Shiyu Deng", "Wei Huang", "ShiPeiyao", "Ruobing Zhang", "Zhiwei Xiong", "Feng Wu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f834"}, "filepath": "data/2403.19949.png", "tags": [], "_media_type": "image", "_rand": 0.9997793423922894, "arXiv_link": "https://arxiv.org/abs/2403.19949", "other_link": "https://ophai.hms.harvard.edu/datasets/harvard-fairvlmed10k.", "title": "FairCLIP: Harnessing Fairness in Vision-Language Learning", "abstract": "Fairness is a critical concern in deep learning, especially in healthcare,\nwhere these models influence diagnoses and treatment decisions. 
Although\nfairness has been investigated in the vision-only domain, the fairness of\nmedical vision-language (VL) models remains unexplored due to the scarcity of\nmedical VL datasets for studying fairness. To bridge this research gap, we\nintroduce the first fair vision-language medical dataset Harvard-FairVLMed that\nprovides detailed demographic attributes, ground-truth labels, and clinical\nnotes to facilitate an in-depth examination of fairness within VL foundation\nmodels. Using Harvard-FairVLMed, we conduct a comprehensive fairness analysis\nof two widely-used VL models (CLIP and BLIP2), pre-trained on both natural and\nmedical domains, across four different protected attributes. Our results\nhighlight significant biases in all VL models, with Asian, Male, Non-Hispanic,\nand Spanish being the preferred subgroups across the protected attributes of\nrace, gender, ethnicity, and language, respectively. In order to alleviate\nthese biases, we propose FairCLIP, an optimal-transport-based approach that\nachieves a favorable trade-off between performance and fairness by reducing the\nSinkhorn distance between the overall sample distribution and the distributions\ncorresponding to each demographic group. As the first VL dataset of its kind,\nHarvard-FairVLMed holds the potential to catalyze advancements in the\ndevelopment of machine learning models that are both ethically aware and\nclinically effective. Our dataset and code are available at\nhttps://ophai.hms.harvard.edu/datasets/harvard-fairvlmed10k.", "keywords": ["Multimodal models and vision-language models", "Medical imaging and biological vision"], "authors_list": ["Yan Luo", "MIN SHI", "Muhammad Osama Khan", "Muhammad Muneeb Afzal", "Hao Huang", "Shuaihang Yuan", "Yu Tian", "Luo Song", "Ava Kouhana", "Tobias Elze", "Yi Fang", "Mengyu Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f835"}, "filepath": "data/2403.06606.png", "tags": [], "_media_type": "image", "_rand": 0.9996939274073787, "arXiv_link": "https://arxiv.org/abs/2403.06606", "other_link": "https://github.com/heqianpei/DiGA.", "title": "Distributionally Generative Augmentation for Fair Facial Attribute Classification", "abstract": "Facial Attribute Classification (FAC) holds substantial promise in widespread\napplications. However, FAC models trained by traditional methodologies can be\nunfair by exhibiting accuracy inconsistencies across varied data\nsubpopulations. This unfairness is largely attributed to bias in data, where\nsome spurious attributes (e.g., Male) statistically correlate with the target\nattribute (e.g., Smiling). Most of existing fairness-aware methods rely on the\nlabels of spurious attributes, which may be unavailable in practice. This work\nproposes a novel, generation-based two-stage framework to train a fair FAC\nmodel on biased data without additional annotation. Initially, we identify the\npotential spurious attributes based on generative models. Notably, it enhances\ninterpretability by explicitly showing the spurious attributes in image space.\nFollowing this, for each image, we first edit the spurious attributes with a\nrandom degree sampled from a uniform distribution, while keeping target\nattribute unchanged. Then we train a fair FAC model by fostering model\ninvariance to these augmentation. 
Extensive experiments on three common\ndatasets demonstrate the effectiveness of our method in promoting fairness in\nFAC without compromising accuracy. Codes are in\nhttps://github.com/heqianpei/DiGA.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Fengda Zhang", "Qianpei He", "Kun Kuang", "Jiashuo Liu", "Long Chen", "Chao Wu", "Jun Xiao", "Hanwang Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f836"}, "filepath": "data/2306.07713.png", "tags": [], "_media_type": "image", "_rand": 0.9998446079047192, "arXiv_link": "https://arxiv.org/abs/2306.07713", "other_link": "", "title": "RobustSAM: Segment Anything Robustly on Degraded Images", "abstract": "Segment anything model (SAM), as the name suggests, is claimed to be capable\nof cutting out any object and demonstrates impressive zero-shot transfer\nperformance with the guidance of prompts. However, there is currently a lack of\ncomprehensive evaluation regarding its robustness under various corruptions.\nUnderstanding the robustness of SAM across different corruption scenarios is\ncrucial for its real-world deployment. Prior works show that SAM is biased\ntowards texture (style) rather than shape, motivated by which we start by\ninvestigating its robustness against style transfer, which is synthetic\ncorruption. Following by interpreting the effects of synthetic corruption as\nstyle changes, we proceed to conduct a comprehensive evaluation for its\nrobustness against 15 types of common corruption. These corruptions mainly fall\ninto categories such as digital, noise, weather, and blur, and within each\ncorruption category, we explore 5 severity levels to simulate real-world\ncorruption scenarios. Beyond the corruptions, we further assess the robustness\nof SAM against local occlusion and local adversarial patch attacks. To the best\nof our knowledge, our work is the first of its kind to evaluate the robustness\nof SAM under style change, local occlusion, and local adversarial patch\nattacks. Given that patch attacks visible to human eyes are easily detectable,\nwe further assess its robustness against global adversarial attacks that are\nimperceptible to human eyes. Overall, this work provides a comprehensive\nempirical study of the robustness of SAM, evaluating its performance under\nvarious corruptions and extending the assessment to critical aspects such as\nlocal occlusion, local adversarial patch attacks, and global adversarial\nattacks. 
These evaluations yield valuable insights into the practical\napplicability and effectiveness of SAM in addressing real-world challenges.", "keywords": [], "authors_list": ["Wei-Ting Chen", "Yu Jiet Vong", "Sy-Yen Kuo", "Sizhuo Ma", "Jian Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f837"}, "filepath": "data/2312.02109v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990469769317345, "arXiv_link": "https://arxiv.org/abs/2312.02109v1", "other_link": "", "title": "ArtAdapter: Text-to-Image Style Transfer using Multi-Level Style Encoder and Explicit Adaptation", "abstract": "This work introduces ArtAdapter, a transformative text-to-image (T2I) style\ntransfer framework that transcends traditional limitations of color,\nbrushstrokes, and object shape, capturing high-level style elements such as\ncomposition and distinctive artistic expression. The integration of a\nmulti-level style encoder with our proposed explicit adaptation mechanism\nenables ArtAdapte to achieve unprecedented fidelity in style transfer, ensuring\nclose alignment with textual descriptions. Additionally, the incorporation of\nan Auxiliary Content Adapter (ACA) effectively separates content from style,\nalleviating the borrowing of content from style references. Moreover, our novel\nfast finetuning approach could further enhance zero-shot style representation\nwhile mitigating the risk of overfitting. Comprehensive evaluations confirm\nthat ArtAdapter surpasses current state-of-the-art methods.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Dar-Yen Chen", "Hamish Tennent", "Ching-Wen Hsu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f838"}, "filepath": "data/2307.08076.png", "tags": [], "_media_type": "image", "_rand": 0.9993360556390604, "arXiv_link": "https://arxiv.org/abs/2307.08076", "other_link": "", "title": "NAPGuard: Towards Detecting Naturalistic Adversarial Patches", "abstract": "Many physical adversarial patch generation methods are widely proposed to\nprotect personal privacy from malicious monitoring using object detectors.\nHowever, they usually fail to generate satisfactory patch images in terms of\nboth stealthiness and attack performance without making huge efforts on careful\nhyperparameter tuning. To address this issue, we propose a novel naturalistic\nadversarial patch generation method based on the diffusion models (DM). Through\nsampling the optimal image from the DM model pretrained upon natural images, it\nallows us to stably craft high-quality and naturalistic physical adversarial\npatches to humans without suffering from serious mode collapse problems as\nother deep generative models. To the best of our knowledge, we are the first to\npropose DM-based naturalistic adversarial patch generation for object\ndetectors. With extensive quantitative, qualitative, and subjective\nexperiments, the results demonstrate the effectiveness of the proposed approach\nto generate better-quality and more naturalistic adversarial patches while\nachieving acceptable attack performance than other state-of-the-art patch\ngeneration methods. 
We also show various generation trade-offs under different\nconditions.", "keywords": [], "authors_list": ["Siyang Wu", "Jiakai Wang", "Jiejie Zhao", "Yazhe Wang", "Xianglong Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f839"}, "filepath": "data/2404.19722v1.png", "tags": [], "_media_type": "image", "_rand": 0.999388143703432, "arXiv_link": "https://arxiv.org/html/2404.19722v1", "other_link": "https://wangjingbo1219.github.io/papers/CVPR2024_PACER_PLUS/PACERPLUSPage.html", "title": "PACER+: On-Demand Pedestrian Animation Controller in Driving Scenarios", "abstract": "We address the challenge of content diversity and controllability in\npedestrian simulation for driving scenarios. Recent pedestrian animation\nframeworks have a significant limitation wherein they primarily focus on either\nfollowing trajectory [46] or the content of the reference video [57],\nconsequently overlooking the potential diversity of human motion within such\nscenarios. This limitation restricts the ability to generate pedestrian\nbehaviors that exhibit a wider range of variations and realistic motions and\ntherefore restricts its usage to provide rich motion content for other\ncomponents in the driving simulation system, e.g., suddenly changed motion to\nwhich the autonomous vehicle should respond. In our approach, we strive to\nsurpass the limitation by showcasing diverse human motions obtained from\nvarious sources, such as generated human motions, in addition to following the\ngiven trajectory. The fundamental contribution of our framework lies in\ncombining the motion tracking task with trajectory following, which enables the\ntracking of specific motion parts (e.g., upper body) while simultaneously\nfollowing the given trajectory by a single policy. This way, we significantly\nenhance both the diversity of simulated human motion within the given scenario\nand the controllability of the content, including language-based control. Our\nframework facilitates the generation of a wide range of human motions,\ncontributing to greater realism and adaptability in pedestrian simulations for\ndriving scenarios. More information is on our project page\nhttps://wangjingbo1219.github.io/papers/CVPR2024_PACER_PLUS/PACERPLUSPage.html .", "keywords": ["Scene analysis and understanding"], "authors_list": ["Jingbo Wang", "Zhengyi Luo", "Ye Yuan", "Yixuan LI", "Bo Dai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f83a"}, "filepath": "data/2312.03209.png", "tags": [], "_media_type": "image", "_rand": 0.9992218383720569, "arXiv_link": "https://arxiv.org/abs/2312.03209", "other_link": "", "title": "Cache Me if You Can: Accelerating Diffusion Models through Block Caching", "abstract": "Diffusion models have recently revolutionized the field of image synthesis\ndue to their ability to generate photorealistic images. However, one of the\nmajor drawbacks of diffusion models is that the image generation process is\ncostly. A large image-to-image network has to be applied many times to\niteratively refine an image from random noise. While many recent works propose\ntechniques to reduce the number of required steps, they generally treat the\nunderlying denoising network as a black box. 
In this work, we investigate the\nbehavior of the layers within the network and find that 1) the layers' output\nchanges smoothly over time, 2) the layers show distinct patterns of change, and\n3) the change from step to step is often very small. We hypothesize that many\nlayer computations in the denoising network are redundant. Leveraging this, we\nintroduce block caching, in which we reuse outputs from layer blocks of\nprevious steps to speed up inference. Furthermore, we propose a technique to\nautomatically determine caching schedules based on each block's changes over\ntimesteps. In our experiments, we show through FID, human evaluation and\nqualitative analysis that Block Caching allows to generate images with higher\nvisual quality at the same computational cost. We demonstrate this for\ndifferent state-of-the-art models (LDM and EMU) and solvers (DDIM and DPM).", "keywords": ["Efficient and scalable vision"], "authors_list": ["Felix Wimbauer", "Bichen Wu", "Edgar Schoenfeld", "Xiaoliang Dai", "Ji Hou", "Zijian He", "Artsiom Sanakoyeu", "Peizhao Zhang", "Sam Tsai", "Jonas Kohler", "Christian Rupprecht", "Daniel Cremers", "Peter Vajda", "Jialiang Wang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f83b"}, "filepath": "data/2403.14003.png", "tags": [], "_media_type": "image", "_rand": 0.9990265851594438, "arXiv_link": "https://arxiv.org/abs/2403.14003", "other_link": "", "title": "Multi-Modal Hallucination Control by Visual Information Grounding", "abstract": "Generative Vision-Language Models (VLMs) are prone to generate\nplausible-sounding textual answers that, however, are not always grounded in\nthe input image. We investigate this phenomenon, usually referred to as\n\"hallucination\" and show that it stems from an excessive reliance on the\nlanguage prior. In particular, we show that as more tokens are generated, the\nreliance on the visual prompt decreases, and this behavior strongly correlates\nwith the emergence of hallucinations. To reduce hallucinations, we introduce\nMulti-Modal Mutual-Information Decoding (M3ID), a new sampling method for\nprompt amplification. M3ID amplifies the influence of the reference image over\nthe language prior, hence favoring the generation of tokens with higher mutual\ninformation with the visual prompt. M3ID can be applied to any pre-trained\nautoregressive VLM at inference time without necessitating further training and\nwith minimal computational overhead. If training is an option, we show that\nM3ID can be paired with Direct Preference Optimization (DPO) to improve the\nmodel's reliance on the prompt image without requiring any labels. Our\nempirical findings show that our algorithms maintain the fluency and linguistic\ncapabilities of pre-trained VLMs while reducing hallucinations by mitigating\nvisually ungrounded answers. 
Specifically, for the LLaVA 13B model, M3ID and\nM3ID+DPO reduce the percentage of hallucinated objects in captioning tasks by\n25% and 28%, respectively, and improve the accuracy on VQA benchmarks such as\nPOPE by 21% and 24%.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Deep learning architectures and techniques"], "authors_list": ["Alessandro Favero", "Luca Zancato", "Matthew Trager", "Siddharth Choudhary", "Pramuditha Perera", "Alessandro Achille", "Ashwin Swaminathan", "Stefano Soatto"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f83c"}, "filepath": "data/2404.04562.png", "tags": [], "_media_type": "image", "_rand": 0.9993440020394768, "arXiv_link": "https://arxiv.org/abs/2404.04562", "other_link": "https://github.com/yxymessi/DTC123.", "title": "Diffusion Time-step Curriculum for One Image to 3D Generation", "abstract": "Score distillation sampling~(SDS) has been widely adopted to overcome the\nabsence of unseen views in reconstructing 3D objects from a \\textbf{single}\nimage. It leverages pre-trained 2D diffusion models as teacher to guide the\nreconstruction of student 3D models. Despite their remarkable success,\nSDS-based methods often encounter geometric artifacts and texture saturation.\nWe find out the crux is the overlooked indiscriminate treatment of diffusion\ntime-steps during optimization: it unreasonably treats the student-teacher\nknowledge distillation to be equal at all time-steps and thus entangles\ncoarse-grained and fine-grained modeling. Therefore, we propose the Diffusion\nTime-step Curriculum one-image-to-3D pipeline (DTC123), which involves both the\nteacher and student models collaborating with the time-step curriculum in a\ncoarse-to-fine manner. Extensive experiments on NeRF4, RealFusion15, GSO and\nLevel50 benchmark demonstrate that DTC123 can produce multi-view consistent,\nhigh-quality, and diverse 3D assets. Codes and more generation demos will be\nreleased in https://github.com/yxymessi/DTC123.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["YI Xuanyu", "Zike Wu", "Qingshan Xu", "Pan Zhou", "Joo Lim", "Hanwang Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f83d"}, "filepath": "data/2311.17917.png", "tags": [], "_media_type": "image", "_rand": 0.9990159185112699, "arXiv_link": "https://arxiv.org/abs/2311.17917", "other_link": "http://jeff95.me/projects/avatarstudio.html", "title": "3DToonify: Creating Your High-Fidelity 3D Stylized Avatar Easily from 2D Portrait Images", "abstract": "We study the problem of creating high-fidelity and animatable 3D avatars from\nonly textual descriptions. Existing text-to-avatar methods are either limited\nto static avatars which cannot be animated or struggle to generate animatable\navatars with promising quality and precise pose control. 
To address these\nlimitations, we propose AvatarStudio, a coarse-to-fine generative model that\ngenerates explicit textured 3D meshes for animatable human avatars.\nSpecifically, AvatarStudio begins with a low-resolution NeRF-based\nrepresentation for coarse generation, followed by incorporating SMPL-guided\narticulation into the explicit mesh representation to support avatar animation\nand high resolution rendering. To ensure view consistency and pose\ncontrollability of the resulting avatars, we introduce a 2D diffusion model\nconditioned on DensePose for Score Distillation Sampling supervision. By\neffectively leveraging the synergy between the articulated mesh representation\nand the DensePose-conditional diffusion model, AvatarStudio can create\nhigh-quality avatars from text that are ready for animation, significantly\noutperforming previous methods. Moreover, it is competent for many\napplications, e.g., multimodal avatar animations and style-guided avatar\ncreation. For more results, please refer to our project page:\nhttp://jeff95.me/projects/avatarstudio.html", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Yifang Men", "Hanxi Liu", "Yuan Yao", "Miaomiao Cui", "Xuansong Xie", "Zhouhui Lian"], "category_name": "Graphics", "all_categories": ["Graphics", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f83e"}, "filepath": "data/2403.07560v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992615924280077, "arXiv_link": "https://arxiv.org/abs/2403.07560v2", "other_link": "", "title": "Unleashing Network Potentials for Semantic Scene Completion", "abstract": "Semantic scene completion (SSC) aims to predict complete 3D voxel occupancy\nand semantics from a single-view RGB-D image, and recent SSC methods commonly\nadopt multi-modal inputs. However, our investigation reveals two limitations:\nineffective feature learning from single modalities and overfitting to limited\ndatasets. To address these issues, this paper proposes a novel SSC framework -\nAdversarial Modality Modulation Network (AMMNet) - with a fresh perspective of\noptimizing gradient updates. The proposed AMMNet introduces two core modules: a\ncross-modal modulation enabling the interdependence of gradient flows between\nmodalities, and a customized adversarial training scheme leveraging dynamic\ngradient competition. Specifically, the cross-modal modulation adaptively\nre-calibrates the features to better excite representation potentials from each\nsingle modality. 
The adversarial training employs a minimax game of evolving\ngradients, with customized guidance to strengthen the generator's perception of\nvisual fidelity from both geometric completeness and semantic correctness.\nExtensive experimental results demonstrate that AMMNet outperforms\nstate-of-the-art SSC methods by a large margin, providing a promising direction\nfor improving the effectiveness and generalization of SSC methods.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Fengyun Wang", "Qianru Sun", "Dong Zhang", "Jinhui Tang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f83f"}, "filepath": "data/2310.20685.png", "tags": [], "_media_type": "image", "_rand": 0.9998863836479112, "arXiv_link": "https://arxiv.org/abs/2310.20685", "other_link": "", "title": "NeRF Director: Revisiting View Selection in Neural Volume Rendering", "abstract": "Neural radiance fields (NeRF) rely on volume rendering to synthesize novel\nviews. Volume rendering requires evaluating an integral along each ray, which\nis numerically approximated with a finite sum that corresponds to the exact\nintegral along the ray under piecewise constant volume density. As a\nconsequence, the rendered result is unstable w.r.t. the choice of samples along\nthe ray, a phenomenon that we dub quadrature instability. We propose a\nmathematically principled solution by reformulating the sample-based rendering\nequation so that it corresponds to the exact integral under piecewise linear\nvolume density. This simultaneously resolves multiple issues: conflicts between\nsamples along different rays, imprecise hierarchical sampling, and\nnon-differentiability of quantiles of ray termination distances w.r.t. model\nparameters. We demonstrate several benefits over the classical sample-based\nrendering equation, such as sharper textures, better geometric reconstruction,\nand stronger depth supervision. Our proposed formulation can be also be used as\na drop-in replacement to the volume rendering equation of existing NeRF-based\nmethods. Our project page can be found at pl-nerf.github.io.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Wenhui Xiao", "Rodrigo Santa Cruz", "David Ahmedt-Aristizabal", "Olivier Salvado", "Clinton Fookes", "Leo Lebrat"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f840"}, "filepath": "data/2303.16783.png", "tags": [], "_media_type": "image", "_rand": 0.9999020874369409, "arXiv_link": "https://ar5iv.labs.arxiv.org/html/2303.16783", "other_link": "", "title": "Exploring Efficient Asymmetric Blind-Spots for Self-Supervised Denoising in Real-World Scenarios", "abstract": "Self-supervised denoising has attracted widespread attention due to its\nability to train without clean images. However, noise in real-world scenarios\nis often spatially correlated, which causes many self-supervised algorithms\nthat assume pixel-wise independent noise to perform poorly. 
Recent works have\nattempted to break noise correlation with downsampling or neighborhood masking.\nHowever, denoising on downsampled subgraphs can lead to aliasing effects and\nloss of details due to a lower sampling rate. Furthermore, the neighborhood\nmasking methods either come with high computational complexity or do not\nconsider local spatial preservation during inference. Through the analysis of\nexisting methods, we point out that the key to obtaining high-quality and\ntexture-rich results in real-world self-supervised denoising tasks is to train\nat the original input resolution structure and use asymmetric operations during\ntraining and inference. Based on this, we propose Asymmetric Tunable Blind-Spot\nNetwork (AT-BSN), where the blind-spot size can be freely adjusted, thus better\nbalancing noise correlation suppression and image local spatial destruction\nduring training and inference. In addition, we regard the pre-trained AT-BSN as\na meta-teacher network capable of generating various teacher networks by\nsampling different blind-spots. We propose a blind-spot based multi-teacher\ndistillation strategy to distill a lightweight network, significantly improving\nperformance. Experimental results on multiple datasets prove that our method\nachieves state-of-the-art, and is superior to other self-supervised algorithms\nin terms of computational overhead and visual effects.", "keywords": ["Deep learning architectures and techniques", "Low-level vision"], "authors_list": ["Shiyan Chen", "Jiyuan Zhang", "Zhaofei Yu", "Tiejun Huang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f841"}, "filepath": "data/2402.08919v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995598621467701, "arXiv_link": "https://arxiv.org/abs/2402.08919v1", "other_link": "", "title": "Interpretable Measures of Conceptual Similarity by Complexity-Constrained Descriptive Auto-Encoding", "abstract": "Quantifying the degree of similarity between images is a key copyright issue\nfor image-based machine learning. In legal doctrine however, determining the\ndegree of similarity between works requires subjective analysis, and\nfact-finders (judges and juries) can demonstrate considerable variability in\nthese subjective judgement calls. Images that are structurally similar can be\ndeemed dissimilar, whereas images of completely different scenes can be deemed\nsimilar enough to support a claim of copying. We seek to define and compute a\nnotion of \"conceptual similarity\" among images that captures high-level\nrelations even among images that do not share repeated elements or visually\nsimilar components. The idea is to use a base multi-modal model to generate\n\"explanations\" (captions) of visual data at increasing levels of complexity.\nThen, similarity can be measured by the length of the caption needed to\ndiscriminate between the two images: Two highly dissimilar images can be\ndiscriminated early in their description, whereas conceptually dissimilar ones\nwill need more detail to be distinguished. We operationalize this definition\nand show that it correlates with subjective (averaged human evaluation)\nassessment, and beats existing baselines on both image-to-image and\ntext-to-text similarity benchmarks. 
Beyond just providing a number, our method\nalso offers interpretability by pointing to the specific level of granularity\nof the description where the source data are differentiated.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Alessandro Achille", "Greg Ver Steeg", "Tian Yu Liu", "Matthew Trager", "Carson Klingenberg", "Stefano Soatto"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f842"}, "filepath": "data/2312.06230.png", "tags": [], "_media_type": "image", "_rand": 0.999914930304969, "arXiv_link": "https://arxiv.org/abs/2312.06230", "other_link": "", "title": "Attack To Defend: Exploiting Adversarial Attacks for Detecting Poisoned Models", "abstract": "This work studies the task of poisoned sample detection for defending against\ndata poisoning based backdoor attacks. Its core challenge is finding a\ngeneralizable and discriminative metric to distinguish between clean and\nvarious types of poisoned samples (e.g., various triggers, various poisoning\nratios). Inspired by a common phenomenon in backdoor attacks that the\nbackdoored model tend to map significantly different poisoned and clean samples\nwithin the target class to similar activation areas, we introduce a novel\nperspective of the circular distribution of the gradients w.r.t. sample\nactivation, dubbed gradient circular distribution (GCD). And, we find two\ninteresting observations based on GCD. One is that the GCD of samples in the\ntarget class is much more dispersed than that in the clean class. The other is\nthat in the GCD of target class, poisoned and clean samples are clearly\nseparated. Inspired by above two observations, we develop an innovative\nthree-stage poisoned sample detection approach, called Activation Gradient\nbased Poisoned sample Detection (AGPD). First, we calculate GCDs of all classes\nfrom the model trained on the untrustworthy dataset. Then, we identify the\ntarget class(es) based on the difference on GCD dispersion between target and\nclean classes. Last, we filter out poisoned samples within the identified\ntarget class(es) based on the clear separation between poisoned and clean\nsamples. Extensive experiments under various settings of backdoor attacks\ndemonstrate the superior detection performance of the proposed method to\nexisting poisoned detection approaches according to sample activation-based\nmetrics.", "keywords": [], "authors_list": ["Samar Fares", "Karthik Nandakumar"], "category_name": "Cryptography and Security", "all_categories": ["Cryptography and Security"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f843"}, "filepath": "data/2401.17270.png", "tags": [], "_media_type": "image", "_rand": 0.999727551129837, "arXiv_link": "https://arxiv.org/abs/2401.17270", "other_link": "", "title": "YOLO-World: Real-Time Open-Vocabulary Object Detection", "abstract": "The You Only Look Once (YOLO) series of detectors have established themselves\nas efficient and practical tools. 
However, their reliance on predefined and\ntrained object categories limits their applicability in open scenarios.\nAddressing this limitation, we introduce YOLO-World, an innovative approach\nthat enhances YOLO with open-vocabulary detection capabilities through\nvision-language modeling and pre-training on large-scale datasets.\nSpecifically, we propose a new Re-parameterizable Vision-Language Path\nAggregation Network (RepVL-PAN) and region-text contrastive loss to facilitate\nthe interaction between visual and linguistic information. Our method excels in\ndetecting a wide range of objects in a zero-shot manner with high efficiency.\nOn the challenging LVIS dataset, YOLO-World achieves 35.4 AP with 52.0 FPS on\nV100, which outperforms many state-of-the-art methods in terms of both accuracy\nand speed. Furthermore, the fine-tuned YOLO-World achieves remarkable\nperformance on several downstream tasks, including object detection and\nopen-vocabulary instance segmentation.", "keywords": ["Efficient and scalable vision", "Multimodal models and vision-language models"], "authors_list": ["Tianheng Cheng", "Lin Song", "Yixiao Ge", "Wenyu Liu", "Xinggang Wang", "Ying Shan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f844"}, "filepath": "data/2312.01196.png", "tags": [], "_media_type": "image", "_rand": 0.9997902225198921, "arXiv_link": "https://arxiv.org/abs/2312.01196", "other_link": "", "title": "Neural Parametric Gaussians for Monocular Non-Rigid Object Reconstruction", "abstract": "Reconstructing dynamic objects from monocular videos is a severely\nunderconstrained and challenging problem, and recent work has approached it in\nvarious directions. However, owing to the ill-posed nature of this problem,\nthere has been no solution that can provide consistent, high-quality novel\nviews from camera positions that are significantly different from the training\nviews. In this work, we introduce Neural Parametric Gaussians (NPGs) to take on\nthis challenge by imposing a two-stage approach: first, we fit a low-rank\nneural deformation model, which then is used as regularization for non-rigid\nreconstruction in the second stage. The first stage learns the object's\ndeformations such that it preserves consistency in novel views. The second\nstage obtains high reconstruction quality by optimizing 3D Gaussians that are\ndriven by the coarse model. To this end, we introduce a local 3D Gaussian\nrepresentation, where temporally shared Gaussians are anchored in and deformed\nby local oriented volumes. The resulting combined model can be rendered as\nradiance fields, resulting in high-quality photo-realistic reconstructions of\nthe non-rigidly deforming objects. 
We demonstrate that NPGs achieve superior\nresults compared to previous works, especially in challenging scenarios with\nfew multi-view cues.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Devikalyan Das", "Christopher Wewer", "Raza Yunus", "Eddy Ilg", "Jan Lenssen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f845"}, "filepath": "data/2312.02209.png", "tags": [], "_media_type": "image", "_rand": 0.9991659402356234, "arXiv_link": "https://arxiv.org/abs/2312.02209", "other_link": "", "title": "AttriHuman-3D: Editable 3D Human Avatar Generation with Attribute Decomposition and Indexing", "abstract": "Editable 3D-aware generation, which supports user-interacted editing, has\nwitnessed rapid development recently. However, existing editable 3D GANs either\nfail to achieve high-accuracy local editing or suffer from huge computational\ncosts. We propose AttriHuman-3D, an editable 3D human generation model, which\naddress the aforementioned problems with attribute decomposition and indexing.\nThe core idea of the proposed model is to generate all attributes (e.g. human\nbody, hair, clothes and so on) in an overall attribute space with six feature\nplanes, which are then decomposed and manipulated with different attribute\nindexes. To precisely extract features of different attributes from the\ngenerated feature planes, we propose a novel attribute indexing method as well\nas an orthogonal projection regularization to enhance the disentanglement. We\nalso introduce a hyper-latent training strategy and an attribute-specific\nsampling strategy to avoid style entanglement and misleading punishment from\nthe discriminator. Our method allows users to interactively edit selected\nattributes in the generated 3D human avatars while keeping others fixed. Both\nqualitative and quantitative experiments demonstrate that our model provides a\nstrong disentanglement between different attributes, allows fine-grained image\nediting and generates high-quality 3D human avatars.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Fan Yang", "Tianyi Chen", "XIAOSHENG HE", "Zhongang Cai", "Lei Yang", "Si Wu", "Guosheng Lin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f846"}, "filepath": "data/2311.14521.png", "tags": [], "_media_type": "image", "_rand": 0.9994114730407916, "arXiv_link": "https://arxiv.org/abs/2311.14521", "other_link": "https://buaacyw.github.io/gaussian-editor/", "title": "GaussianEditor: Swift and Controllable 3D Editing with Gaussian Splatting", "abstract": "3D editing plays a crucial role in many areas such as gaming and virtual\nreality. Traditional 3D editing methods, which rely on representations like\nmeshes and point clouds, often fall short in realistically depicting complex\nscenes. On the other hand, methods based on implicit 3D representations, like\nNeural Radiance Field (NeRF), render complex scenes effectively but suffer from\nslow processing speeds and limited control over specific scene areas. 
In\nresponse to these challenges, our paper presents GaussianEditor, an innovative\nand efficient 3D editing algorithm based on Gaussian Splatting (GS), a novel 3D\nrepresentation. GaussianEditor enhances precision and control in editing\nthrough our proposed Gaussian semantic tracing, which traces the editing target\nthroughout the training process. Additionally, we propose Hierarchical Gaussian\nsplatting (HGS) to achieve stabilized and fine results under stochastic\ngenerative guidance from 2D diffusion models. We also develop editing\nstrategies for efficient object removal and integration, a challenging task for\nexisting methods. Our comprehensive experiments demonstrate GaussianEditor's\nsuperior control, efficacy, and rapid performance, marking a significant\nadvancement in 3D editing. Project Page:\nhttps://buaacyw.github.io/gaussian-editor/", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Yiwen Chen", "Zilong Chen", "Chi Zhang", "Feng Wang", "Xiaofeng Yang", "Yikai Wang", "Zhongang Cai", "Lei Yang", "Huaping Liu", "Guosheng Lin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f847"}, "filepath": "data/2403.17934.png", "tags": [], "_media_type": "image", "_rand": 0.9999777264078319, "arXiv_link": "https://arxiv.org/abs/2403.17934", "other_link": "", "title": "AiOS: All-in-One-Stage Expressive Human Pose and Shape Estimation", "abstract": "Expressive human pose and shape estimation (a.k.a. 3D whole-body mesh\nrecovery) involves the human body, hand, and expression estimation. Most\nexisting methods have tackled this task in a two-stage manner, first detecting\nthe human body part with an off-the-shelf detection model and inferring the\ndifferent human body parts individually. Despite the impressive results\nachieved, these methods suffer from 1) loss of valuable contextual information\nvia cropping, 2) introducing distractions, and 3) lacking inter-association\namong different persons and body parts, inevitably causing performance\ndegradation, especially for crowded scenes. To address these issues, we\nintroduce a novel all-in-one-stage framework, AiOS, for multiple expressive\nhuman pose and shape recovery without an additional human detection step.\nSpecifically, our method is built upon DETR, which treats multi-person\nwhole-body mesh recovery task as a progressive set prediction problem with\nvarious sequential detection. We devise the decoder tokens and extend them to\nour task. Specifically, we first employ a human token to probe a human location\nin the image and encode global features for each instance, which provides a\ncoarse location for the later transformer block. Then, we introduce a\njoint-related token to probe the human joint in the image and encoder a\nfine-grained local feature, which collaborates with the global feature to\nregress the whole-body mesh. 
This straightforward but effective model\noutperforms previous state-of-the-art methods by a 9% reduction in NMVE on\nAGORA, a 30% reduction in PVE on EHF, a 10% reduction in PVE on ARCTIC, and a\n3% reduction in PVE on EgoBody.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Qingping SUN", "Yanjun Wang", "Ailing Zeng", "Wanqi Yin", "Chen Wei", "Wenjia Wang", "Haiy Mei", "Chi LEUNG", "Ziwei Liu", "Lei Yang", "Zhongang Cai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f848"}, "filepath": "data/2311.12291.png", "tags": [], "_media_type": "image", "_rand": 0.9990757186033636, "arXiv_link": "https://arxiv.org/abs/2311.12291", "other_link": "", "title": "Edge-Aware 3D Instance Segmentation Network with Intelligent Semantic Prior", "abstract": "Existing 3D semantic segmentation methods rely on point-wise or voxel-wise\nfeature descriptors to output segmentation predictions. However, these\ndescriptors are often supervised at point or voxel level, leading to\nsegmentation models that can behave poorly at instance-level. In this paper, we\nproposed a novel instance-aware approach for 3D semantic segmentation. Our\nmethod combines several geometry processing tasks supervised at instance-level\nto promote the consistency of the learned feature representation. Specifically,\nour methods use shape generators and shape classifiers to perform shape\nreconstruction and classification tasks for each shape instance. This enforces\nthe feature representation to faithfully encode both structural and local shape\ninformation, with an awareness of shape instances. In the experiments, our\nmethod significantly outperform existing approaches in 3D semantic segmentation\non several public benchmarks, such as Waymo Open Dataset, SemanticKITTI and\nScanNetV2.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Wonseok Roh", "Hwanhee Jung", "Giljoo Nam", "Jinseop Yeom", "Hyunje Park", "Sang Ho Yoon", "Sangpil Kim"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f849"}, "filepath": "data/2403.18807.png", "tags": [], "_media_type": "image", "_rand": 0.9990402405150327, "arXiv_link": "https://arxiv.org/abs/2403.18807", "other_link": "https://ecodepth-iitd.github.io", "title": "ECoDepth: Effective Conditioning of Diffusion Models for Monocular Depth Estimation", "abstract": "In the absence of parallax cues, a learning-based single image depth\nestimation (SIDE) model relies heavily on shading and contextual cues in the\nimage. While this simplicity is attractive, it is necessary to train such\nmodels on large and varied datasets, which are difficult to capture. It has\nbeen shown that using embeddings from pre-trained foundational models, such as\nCLIP, improves zero shot transfer in several applications. 
Taking inspiration\nfrom this, in our paper we explore the use of global image priors generated\nfrom a pre-trained ViT model to provide more detailed contextual information.\nWe argue that the embedding vector from a ViT model, pre-trained on a large\ndataset, captures greater relevant information for SIDE than the usual route of\ngenerating pseudo image captions, followed by CLIP based text embeddings. Based\non this idea, we propose a new SIDE model using a diffusion backbone which is\nconditioned on ViT embeddings. Our proposed design establishes a new\nstate-of-the-art (SOTA) for SIDE on NYUv2 dataset, achieving Abs Rel error of\n0.059 (14% improvement) compared to 0.069 by the current SOTA (VPD). And on\nKITTI dataset, achieving Sq Rel error of 0.139 (2% improvement) compared to\n0.142 by the current SOTA (GEDepth). For zero-shot transfer with a model\ntrained on NYUv2, we report mean relative improvement of (20%, 23%, 81%, 25%)\nover NeWCRFs on (Sun-RGBD, iBims1, DIODE, HyperSim) datasets, compared to (16%,\n18%, 45%, 9%) by ZoeDepth. The project page is available at\nhttps://ecodepth-iitd.github.io", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Suraj Patni", "Aradhye Agarwal", "Chetan Arora"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f84a"}, "filepath": "data/2405.18572.png", "tags": [], "_media_type": "image", "_rand": 0.9991517496857796, "arXiv_link": "https://arxiv.org/abs/2405.18572", "other_link": "", "title": "Low-Rank Approximation for Sparse Attention in Multi-Modal LLMs", "abstract": "Low-rank approximation techniques have become the de facto standard for\nfine-tuning Large Language Models (LLMs) due to their reduced computational and\nmemory requirements. This paper investigates the effectiveness of these methods\nin capturing the shift of fine-tuning datasets from the initial pre-trained\ndata distribution. Our findings reveal that there are cases in which low-rank\nfine-tuning falls short in learning such shifts. This, in turn, produces\nnon-negligible side effects, especially when fine-tuning is adopted for\ntoxicity mitigation in pre-trained models, or in scenarios where it is\nimportant to provide fair models. Through comprehensive empirical evidence on\nseveral models, datasets, and tasks, we show that low-rank fine-tuning\ninadvertently preserves undesirable biases and toxic behaviors. 
We also show\nthat this extends to sequential decision-making tasks, emphasizing the need for\ncareful evaluation to promote responsible LLMs development.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Lin Song", "Yukang Chen", "Shuai Yang", "Xiaohan Ding", "Yixiao Ge", "Ying-Cong Chen", "Ying Shan"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f84b"}, "filepath": "data/2311.16097v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990431160036964, "arXiv_link": "https://arxiv.org/abs/2311.16097v2", "other_link": "", "title": "CG-HOI: Contact-Guided 3D Human-Object Interaction Generation", "abstract": "We propose CG-HOI, the first method to address the task of generating dynamic\n3D human-object interactions (HOIs) from text. We model the motion of both\nhuman and object in an interdependent fashion, as semantically rich human\nmotion rarely happens in isolation without any interactions. Our key insight is\nthat explicitly modeling contact between the human body surface and object\ngeometry can be used as strong proxy guidance, both during training and\ninference. Using this guidance to bridge human and object motion enables\ngenerating more realistic and physically plausible interaction sequences, where\nthe human body and corresponding object move in a coherent manner. Our method\nfirst learns to model human motion, object motion, and contact in a joint\ndiffusion process, inter-correlated through cross-attention. We then leverage\nthis learned contact for guidance during inference to synthesize realistic and\ncoherent HOIs. Extensive evaluation shows that our joint contact-based\nhuman-object interaction approach generates realistic and physically plausible\nsequences, and we show two applications highlighting the capabilities of our\nmethod. Conditioned on a given object trajectory, we can generate the\ncorresponding human motion without re-training, demonstrating strong\nhuman-object interdependency learning. Our approach is also flexible, and can\nbe applied to static real-world 3D scene scans.", "keywords": ["Biometrics and human analysis"], "authors_list": ["Christian Diller", "Angela Dai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f84c"}, "filepath": "data/2312.04547.png", "tags": [], "_media_type": "image", "_rand": 0.9994274499623413, "arXiv_link": "https://arxiv.org/abs/2312.04547", "other_link": "https://digital-life-project.com/", "title": "Digital Life Project: Autonomous 3D Characters with Social Intelligence", "abstract": "In this work, we present Digital Life Project, a framework utilizing language\nas the universal medium to build autonomous 3D characters, who are capable of\nengaging in social interactions and expressing with articulated body motions,\nthereby simulating life in a digital environment. 
Our framework comprises two\nprimary components: 1) SocioMind: a meticulously crafted digital brain that\nmodels personalities with systematic few-shot exemplars, incorporates a\nreflection process based on psychology principles, and emulates autonomy by\ninitiating dialogue topics; 2) MoMat-MoGen: a text-driven motion synthesis\nparadigm for controlling the character's digital body. It integrates motion\nmatching, a proven industry technique to ensure motion quality, with\ncutting-edge advancements in motion generation for diversity. Extensive\nexperiments demonstrate that each module achieves state-of-the-art performance\nin its respective domain. Collectively, they enable virtual characters to\ninitiate and sustain dialogues autonomously, while evolving their\nsocio-psychological states. Concurrently, these characters can perform\ncontextually relevant bodily movements. Additionally, a motion captioning\nmodule further allows the virtual character to recognize and appropriately\nrespond to human players' actions. Homepage: https://digital-life-project.com/", "keywords": ["Multimodal models and vision-language models", "Biometrics and human analysis"], "authors_list": ["Zhongang Cai", "Jianping Jiang", "Zhongfei Qing", "Xinying Guo", "Mingyuan Zhang", "Zhengyu Lin", "Haiy Mei", "Chen Wei", "Wang Ruisi", "Wanqi Yin", "Liang Pan", "Xiangyu Fan", "Han Du", "Peng Gao", "Zhitao Yang", "Yang Gao", "Jiaqi Li", "Tianxiang Ren", "YuKun Wei", "Xiaogang Wang", "Chen Change Loy", "Lei Yang", "Ziwei Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Graphics", "Human-Computer Interaction"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f84d"}, "filepath": "data/2311.15599.png", "tags": [], "_media_type": "image", "_rand": 0.9994563839015715, "arXiv_link": "https://arxiv.org/abs/2311.15599", "other_link": "", "title": "UniRepLKNet: A Universal Perception Large-Kernel ConvNet for Audio, Video, Point Cloud, Time-Series and Image Recognition", "abstract": "Large-kernel convolutional neural networks (ConvNets) have recently received\nextensive research attention, but two unresolved and critical issues demand\nfurther investigation. 1) The architectures of existing large-kernel ConvNets\nlargely follow the design principles of conventional ConvNets or transformers,\nwhile the architectural design for large-kernel ConvNets remains\nunder-addressed. 2) As transformers have dominated multiple modalities, it\nremains to be investigated whether ConvNets also have a strong universal\nperception ability in domains beyond vision. In this paper, we contribute from\ntwo aspects. 1) We propose four architectural guidelines for designing\nlarge-kernel ConvNets, the core of which is to exploit the essential\ncharacteristics of large kernels that distinguish them from small kernels -\nthey can see wide without going deep. Following such guidelines, our proposed\nlarge-kernel ConvNet shows leading performance in image recognition (ImageNet\naccuracy of 88.0%, ADE20K mIoU of 55.6%, and COCO box AP of 56.4%),\ndemonstrating better performance and higher speed than the recent powerful\ncompetitors. 2) We discover large kernels are the key to unlocking the\nexceptional performance of ConvNets in domains where they were originally not\nproficient. 
With certain modality-related preprocessing approaches, the\nproposed model achieves state-of-the-art performance on time-series forecasting\nand audio recognition tasks even without modality-specific customization to the\narchitecture. All the code and models are publicly available on GitHub and\nHuggingface.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Xiaohan Ding", "Yiyuan Zhang", "Yixiao Ge", "Sijie Zhao", "Lin Song", "Xiangyu Yue", "Ying Shan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f84e"}, "filepath": "data/2307.01200.png", "tags": [], "_media_type": "image", "_rand": 0.9996132600548844, "arXiv_link": "https://arxiv.org/abs/2307.01200", "other_link": "https://zhangyux15.github.io/ProxyCapV2.", "title": "ProxyCap: Real-time Monocular Full-body Capture in World Space via Human-Centric Proxy-to-Motion Learning", "abstract": "Learning-based approaches to monocular motion capture have recently shown\npromising results by learning to regress in a data-driven manner. However, due\nto the challenges in data collection and network designs, it remains\nchallenging for existing solutions to achieve real-time full-body capture while\nbeing accurate in world space. In this work, we introduce ProxyCap, a\nhuman-centric proxy-to-motion learning scheme to learn world-space motions from\na proxy dataset of 2D skeleton sequences and 3D rotational motions. Such proxy\ndata enables us to build a learning-based network with accurate world-space\nsupervision while also mitigating the generalization issues. For more accurate\nand physically plausible predictions in world space, our network is designed to\nlearn human motions from a human-centric perspective, which enables the\nunderstanding of the same motion captured with different camera trajectories.\nMoreover, a contact-aware neural motion descent module is proposed in our\nnetwork so that it can be aware of foot-ground contact and motion misalignment\nwith the proxy observations. With the proposed learning-based solution, we\ndemonstrate the first real-time monocular full-body capture system with\nplausible foot-ground contact in world space even using hand-held moving\ncameras. Our project page is https://zhangyux15.github.io/ProxyCapV2.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Yuxiang Zhang", "Hongwen Zhang", "Liangxiao Hu", "Jiajun Zhang", "Hongwei Yi", "Shengping Zhang", "Yebin Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f84f"}, "filepath": "data/2312.00853.png", "tags": [], "_media_type": "image", "_rand": 0.9991001622058094, "arXiv_link": "https://arxiv.org/abs/2312.00853", "other_link": "", "title": "DiffPerformer: Iterative Learning of Consistent Latent Guidance for Diffusion-based Human Video Generation", "abstract": "Real-world low-resolution (LR) videos have diverse and complex degradations,\nimposing great challenges on video super-resolution (VSR) algorithms to\nreproduce their high-resolution (HR) counterparts with high quality. 
Recently,\nthe diffusion models have shown compelling performance in generating realistic\ndetails for image restoration tasks. However, the diffusion process has\nrandomness, making it hard to control the contents of restored images. This\nissue becomes more serious when applying diffusion models to VSR tasks because\ntemporal consistency is crucial to the perceptual quality of videos. In this\npaper, we propose an effective real-world VSR algorithm by leveraging the\nstrength of pre-trained latent diffusion models. To ensure the content\nconsistency among adjacent frames, we exploit the temporal dynamics in LR\nvideos to guide the diffusion process by optimizing the latent sampling path\nwith a motion-guided loss, ensuring that the generated HR video maintains a\ncoherent and continuous visual flow. To further mitigate the discontinuity of\ngenerated details, we insert temporal module to the decoder and fine-tune it\nwith an innovative sequence-oriented loss. The proposed motion-guided latent\ndiffusion (MGLD) based VSR algorithm achieves significantly better perceptual\nquality than state-of-the-arts on real-world VSR benchmark datasets, validating\nthe effectiveness of the proposed model design and training strategies.", "keywords": ["Low-level vision"], "authors_list": ["Chenyang Wang", "Zerong Zheng", "Tao Yu", "Xiaoqian Lv", "Bineng Zhong", "Shengping Zhang", "Liqiang Nie"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f850"}, "filepath": "data/2312.13016.png", "tags": [], "_media_type": "image", "_rand": 0.9990351328343715, "arXiv_link": "https://arxiv.org/abs/2312.13016", "other_link": "", "title": "DiffPortrait3D: Controllable Diffusion for Zero-Shot Portrait View Synthesis", "abstract": "We present DiffPortrait3D, a conditional diffusion model that is capable of\nsynthesizing 3D-consistent photo-realistic novel views from as few as a single\nin-the-wild portrait. Specifically, given a single RGB input, we aim to\nsynthesize plausible but consistent facial details rendered from novel camera\nviews with retained both identity and facial expression. In lieu of\ntime-consuming optimization and fine-tuning, our zero-shot method generalizes\nwell to arbitrary face portraits with unposed camera views, extreme facial\nexpressions, and diverse artistic depictions. At its core, we leverage the\ngenerative prior of 2D diffusion models pre-trained on large-scale image\ndatasets as our rendering backbone, while the denoising is guided with\ndisentangled attentive control of appearance and camera pose. To achieve this,\nwe first inject the appearance context from the reference image into the\nself-attention layers of the frozen UNets. The rendering view is then\nmanipulated with a novel conditional control module that interprets the camera\npose by watching a condition image of a crossed subject from the same view.\nFurthermore, we insert a trainable cross-view attention module to enhance view\nconsistency, which is further strengthened with a novel 3D-aware noise\ngeneration process during inference. 
We demonstrate state-of-the-art results\nboth qualitatively and quantitatively on our challenging in-the-wild and\nmulti-view benchmarks.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Yuming Gu", "Hongyi Xu", "You Xie", "Guoxian Song", "Yichun Shi", "Di Chang", "Jing Yang", "Linjie Luo"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f851"}, "filepath": "data/2401.13650.png", "tags": [], "_media_type": "image", "_rand": 0.9996516105456964, "arXiv_link": "https://arxiv.org/abs/2401.13650", "other_link": "", "title": "Tyche: Stochastic in Context Learning for Medical Image Segmentation", "abstract": "Existing learning-based solutions to medical image segmentation have two\nimportant shortcomings. First, for most new segmentation task, a new model has\nto be trained or fine-tuned. This requires extensive resources and machine\nlearning expertise, and is therefore often infeasible for medical researchers\nand clinicians. Second, most existing segmentation methods produce a single\ndeterministic segmentation mask for a given image. In practice however, there\nis often considerable uncertainty about what constitutes the correct\nsegmentation, and different expert annotators will often segment the same image\ndifferently. We tackle both of these problems with Tyche, a model that uses a\ncontext set to generate stochastic predictions for previously unseen tasks\nwithout the need to retrain. Tyche differs from other in-context segmentation\nmethods in two important ways. (1) We introduce a novel convolution block\narchitecture that enables interactions among predictions. (2) We introduce\nin-context test-time augmentation, a new mechanism to provide prediction\nstochasticity. When combined with appropriate model design and loss functions,\nTyche can predict a set of plausible diverse segmentation candidates for new or\nunseen medical images and segmentation tasks without the need to retrain.", "keywords": [], "authors_list": ["Marianne Rakic", "Hallee Wong", "Jose Javier Gonzalez Ortiz", "Beth Cimini", "John Guttag", "Adrian V. Dalca"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f852"}, "filepath": "data/2404.08978.png", "tags": [], "_media_type": "image", "_rand": 0.9995679478345647, "arXiv_link": "https://arxiv.org/abs/2404.08978", "other_link": "", "title": "Incremental Residual Concept Bottleneck Models", "abstract": "Concept Bottleneck Models (CBMs) map the black-box visual representations\nextracted by deep neural networks onto a set of interpretable concepts and use\nthe concepts to make predictions, enhancing the transparency of the\ndecision-making process. Multimodal pre-trained models can match visual\nrepresentations with textual concept embeddings, allowing for obtaining the\ninterpretable concept bottleneck without the expertise concept annotations.\nRecent research has focused on the concept bank establishment and the\nhigh-quality concept selection. However, it is challenging to construct a\ncomprehensive concept bank through humans or large language models, which\nseverely limits the performance of CBMs. 
In this work, we propose the\nIncremental Residual Concept Bottleneck Model (Res-CBM) to address the\nchallenge of concept completeness. Specifically, the residual concept\nbottleneck model employs a set of optimizable vectors to complete missing\nconcepts, then the incremental concept discovery module converts the\ncomplemented vectors with unclear meanings into potential concepts in the\ncandidate concept bank. Our approach can be applied to any user-defined concept\nbank, as a post-hoc processing method to enhance the performance of any CBMs.\nFurthermore, to measure the descriptive efficiency of CBMs, the Concept\nUtilization Efficiency (CUE) metric is proposed. Experiments show that the\nRes-CBM outperforms the current state-of-the-art methods in terms of both\naccuracy and efficiency and achieves comparable performance to black-box models\nacross multiple datasets.", "keywords": ["Multimodal models and vision-language models", "Efficient and scalable vision"], "authors_list": ["Chenming Shang", "Shiji Zhou", "Hengyuan Zhang", "Xinzhe Ni", "Yujiu Yang", "Yuwang Wang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f853"}, "filepath": "data/2403.05061.png", "tags": [], "_media_type": "image", "_rand": 0.9999950892412044, "arXiv_link": "https://arxiv.org/abs/2403.05061", "other_link": "", "title": "RadarDistill: Boosting Radar-based Object Detection Performance via Knowledge Distillation from LiDAR Features", "abstract": "The inherent noisy and sparse characteristics of radar data pose challenges\nin finding effective representations for 3D object detection. In this paper, we\npropose RadarDistill, a novel knowledge distillation (KD) method, which can\nimprove the representation of radar data by leveraging LiDAR data. RadarDistill\nsuccessfully transfers desirable characteristics of LiDAR features into radar\nfeatures using three key components: Cross-Modality Alignment (CMA),\nActivation-based Feature Distillation (AFD), and Proposal-based Feature\nDistillation (PFD). CMA enhances the density of radar features by employing\nmultiple layers of dilation operations, effectively addressing the challenge of\ninefficient knowledge transfer from LiDAR to radar. AFD selectively transfers\nknowledge based on regions of the LiDAR features, with a specific focus on\nareas where activation intensity exceeds a predefined threshold. PFD similarly\nguides the radar network to selectively mimic features from the LiDAR network\nwithin the object proposals. Our comparative analyses conducted on the nuScenes\ndatasets demonstrate that RadarDistill achieves state-of-the-art (SOTA)\nperformance for radar-only object detection task, recording 20.5% in mAP and\n43.7% in NDS. 
Also, RadarDistill significantly improves the performance of the\ncamera-radar fusion model.", "keywords": ["Deep learning architectures and techniques", "Multimodal models and vision-language models"], "authors_list": ["Geonho Bang", "Kwangjin Choi", "Jisong Kim", "Dongsuk Kum", "Jun Won Choi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f854"}, "filepath": "data/2401.14718.png", "tags": [], "_media_type": "image", "_rand": 0.9998844302246636, "arXiv_link": "https://arxiv.org/abs/2401.14718", "other_link": "", "title": "Video Prediction by Modeling Videos as Continuous Multi-Dimensional Processes", "abstract": "Video prediction, a fundamental task in computer vision, aims to enable\nmodels to generate sequences of future frames based on existing video content.\nThis task has garnered widespread application across various domains. In this\npaper, we comprehensively survey both historical and contemporary works in this\nfield, encompassing the most widely used datasets and algorithms. Our survey\nscrutinizes the challenges and evolving landscape of video prediction within\nthe realm of computer vision. We propose a novel taxonomy centered on the\nstochastic nature of video prediction algorithms. This taxonomy accentuates the\ngradual transition from deterministic to generative prediction methodologies,\nunderlining significant advancements and shifts in approach.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Gaurav Shrivastava", "Abhinav Shrivastava"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f855"}, "filepath": "data/2401.00027.png", "tags": [], "_media_type": "image", "_rand": 0.9996048716453485, "arXiv_link": "https://arxiv.org/abs/2401.00027", "other_link": "", "title": "Efficient Multi-scale Network with Learnable Discrete Wavelet Transform for Blind Motion Deblurring", "abstract": "Coarse-to-fine schemes are widely used in traditional single-image motion\ndeblur; however, in the context of deep learning, existing multi-scale\nalgorithms not only require the use of complex modules for feature fusion of\nlow-scale RGB images and deep semantics, but also manually generate\nlow-resolution pairs of images that do not have sufficient confidence. In this\nwork, we propose a multi-scale network based on single-input and\nmultiple-outputs(SIMO) for motion deblurring. This simplifies the complexity of\nalgorithms based on a coarse-to-fine scheme. To alleviate restoration defects\nimpacting detail information brought about by using a multi-scale architecture,\nwe combine the characteristics of real-world blurring trajectories with a\nlearnable wavelet transform module to focus on the directional continuity and\nfrequency features of the step-by-step transitions between blurred images to\nsharp images. 
In conclusion, we propose a multi-scale network with a learnable\ndiscrete wavelet transform (MLWNet), which exhibits state-of-the-art\nperformance on multiple real-world deblurred datasets, in terms of both\nsubjective and objective quality as well as computational efficiency.", "keywords": ["Efficient and scalable vision", "Low-level vision"], "authors_list": ["Xin Gao", "Tianheng Qiu", "Xinyu Zhang", "Hanlin Bai", "Kang Liu", "xuan huang", "Hu Wei", "Guoying Zhang", "Huaping Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f856"}, "filepath": "data/2404.00979.png", "tags": [], "_media_type": "image", "_rand": 0.999549817603247, "arXiv_link": "https://arxiv.org/abs/2404.00979", "other_link": "", "title": "PDF: A Probability-Driven Framework for Open World 3D Point Cloud Semantic Segmentation", "abstract": "Existing point cloud semantic segmentation networks cannot identify unknown\nclasses and update their knowledge, due to a closed-set and static perspective\nof the real world, which would induce the intelligent agent to make bad\ndecisions. To address this problem, we propose a Probability-Driven Framework\n(PDF) for open world semantic segmentation that includes (i) a lightweight\nU-decoder branch to identify unknown classes by estimating the uncertainties,\n(ii) a flexible pseudo-labeling scheme to supply geometry features along with\nprobability distribution features of unknown classes by generating pseudo\nlabels, and (iii) an incremental knowledge distillation strategy to incorporate\nnovel classes into the existing knowledge base gradually. Our framework enables\nthe model to behave like human beings, which could recognize unknown objects\nand incrementally learn them with the corresponding knowledge. Experimental\nresults on the S3DIS and ScanNetv2 datasets demonstrate that the proposed PDF\noutperforms other methods by a large margin in both important tasks of open\nworld semantic segmentation.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Jinfeng Xu", "Siyuan Yang", "Xianzhi Li", "Yuan Tang", "yixue Hao", "Long Hu", "Min Chen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f857"}, "filepath": "data/2312.03431.png", "tags": [], "_media_type": "image", "_rand": 0.9998623198021368, "arXiv_link": "https://arxiv.org/abs/2312.03431", "other_link": "https://nju-3dv.github.io/projects/Gaussian-Flow", "title": "Gaussian-Flow: 4D Reconstruction with Dynamic 3D Gaussian Particle", "abstract": "We introduce Gaussian-Flow, a novel point-based approach for fast dynamic\nscene reconstruction and real-time rendering from both multi-view and monocular\nvideos. In contrast to the prevalent NeRF-based approaches hampered by slow\ntraining and rendering speeds, our approach harnesses recent advancements in\npoint-based 3D Gaussian Splatting (3DGS). Specifically, a novel Dual-Domain\nDeformation Model (DDDM) is proposed to explicitly model attribute deformations\nof each Gaussian point, where the time-dependent residual of each attribute is\ncaptured by a polynomial fitting in the time domain, and a Fourier series\nfitting in the frequency domain. 
The proposed DDDM is capable of modeling\ncomplex scene deformations across long video footage, eliminating the need for\ntraining separate 3DGS for each frame or introducing an additional implicit\nneural field to model 3D dynamics. Moreover, the explicit deformation modeling\nfor discretized Gaussian points ensures ultra-fast training and rendering of a\n4D scene, which is comparable to the original 3DGS designed for static 3D\nreconstruction. Our proposed approach showcases a substantial efficiency\nimprovement, achieving a $5\\times$ faster training speed compared to the\nper-frame 3DGS modeling. In addition, quantitative results demonstrate that the\nproposed Gaussian-Flow significantly outperforms previous leading methods in\nnovel view rendering quality. Project page:\nhttps://nju-3dv.github.io/projects/Gaussian-Flow", "keywords": ["Efficient and scalable vision", "Image and video generation and manipulation"], "authors_list": ["Youtian Lin", "Zuozhuo Dai", "Siyu Zhu", "Yao Yao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f858"}, "filepath": "data/2312.01220.png", "tags": [], "_media_type": "image", "_rand": 0.9993007224816092, "arXiv_link": "https://arxiv.org/abs/2312.01220", "other_link": "https://github.com/ZPDu/DAI-Net.", "title": "Boosting Object Detection with Zero-Shot Day-Night Domain Adaptation", "abstract": "Detecting objects in low-light scenarios presents a persistent challenge, as\ndetectors trained on well-lit data exhibit significant performance degradation\non low-light data due to low visibility. Previous methods mitigate this issue\nby exploring image enhancement or object detection techniques with real\nlow-light image datasets. However, the progress is impeded by the inherent\ndifficulties about collecting and annotating low-light images. To address this\nchallenge, we propose to boost low-light object detection with zero-shot\nday-night domain adaptation, which aims to generalize a detector from well-lit\nscenarios to low-light ones without requiring real low-light data. Revisiting\nRetinex theory in the low-level vision, we first design a reflectance\nrepresentation learning module to learn Retinex-based illumination invariance\nin images with a carefully designed illumination invariance reinforcement\nstrategy. Next, an interchange-redecomposition-coherence procedure is\nintroduced to improve over the vanilla Retinex image decomposition process by\nperforming two sequential image decompositions and introducing a\nredecomposition cohering loss. Extensive experiments on ExDark, DARK FACE, and\nCODaN datasets show strong low-light generalizability of our method. 
Our code\nis available at https://github.com/ZPDu/DAI-Net.", "keywords": ["Low-level vision"], "authors_list": ["Zhipeng Du", "Miaojing Shi", "Jiankang Deng"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f859"}, "filepath": "data/2312.08128.png", "tags": [], "_media_type": "image", "_rand": 0.9996534650478874, "arXiv_link": "https://arxiv.org/abs/2312.08128", "other_link": "", "title": "Clockwork Diffusion: Efficient Generation With Model-Step Distillation", "abstract": "This work aims to improve the efficiency of text-to-image diffusion models.\nWhile diffusion models use computationally expensive UNet-based denoising\noperations in every generation step, we identify that not all operations are\nequally relevant for the final output quality. In particular, we observe that\nUNet layers operating on high-res feature maps are relatively sensitive to\nsmall perturbations. In contrast, low-res feature maps influence the semantic\nlayout of the final image and can often be perturbed with no noticeable change\nin the output. Based on this observation, we propose Clockwork Diffusion, a\nmethod that periodically reuses computation from preceding denoising steps to\napproximate low-res feature maps at one or more subsequent steps. For multiple\nbaselines, and for both text-to-image generation and image editing, we\ndemonstrate that Clockwork leads to comparable or improved perceptual scores\nwith drastically reduced computational complexity. As an example, for Stable\nDiffusion v1.5 with 8 DPM++ steps we save 32% of FLOPs with negligible FID and\nCLIP change.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Amirhossein Habibian", "Amir Ghodrati", "Noor Fathima", "Guillaume Sautiere", "Risheek Garrepalli", "Fatih Porikli", "Jens Petersen"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f85a"}, "filepath": "data/2312.00633.png", "tags": [], "_media_type": "image", "_rand": 0.9991008764323914, "arXiv_link": "https://arxiv.org/abs/2312.00633", "other_link": "", "title": "BEVSpread: Spread Voxel Pooling for Bird\u2019s-Eye-View Representation in Vision-based Roadside 3D Object Detection", "abstract": "3D object detection in Bird's-Eye-View (BEV) space has recently emerged as a\nprevalent approach in the field of autonomous driving. Despite the demonstrated\nimprovements in accuracy and velocity estimation compared to perspective view\nmethods, the deployment of BEV-based techniques in real-world autonomous\nvehicles remains challenging. This is primarily due to their reliance on\nvision-transformer (ViT) based architectures, which introduce quadratic\ncomplexity with respect to the input resolution. To address this issue, we\npropose an efficient BEV-based 3D detection framework called BEVENet, which\nleverages a convolutional-only architectural design to circumvent the\nlimitations of ViT models while maintaining the effectiveness of BEV-based\nmethods. 
Our experiments show that BEVENet is 3$\\times$ faster than\ncontemporary state-of-the-art (SOTA) approaches on the NuScenes challenge,\nachieving a mean average precision (mAP) of 0.456 and a nuScenes detection\nscore (NDS) of 0.555 on the NuScenes validation dataset, with an inference\nspeed of 47.6 frames per second. To the best of our knowledge, this study\nstands as the first to achieve such significant efficiency improvements for\nBEV-based methods, highlighting their enhanced feasibility for real-world\nautonomous driving applications.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Wenjie Wang", "Yehao Lu", "Guangcong Zheng", "Shuigenzhan", "Xiaoqing Ye", "Zichang Tan", "Jingdong Wang", "Gaoang Wang", "Xi Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f85b"}, "filepath": "data/2401.09419.png", "tags": [], "_media_type": "image", "_rand": 0.9992010993423805, "arXiv_link": "https://arxiv.org/abs/2401.09419", "other_link": "https://www.garfield.studio/", "title": "GARField: Group Anything with Radiance Fields", "abstract": "Grouping is inherently ambiguous due to the multiple levels of granularity in\nwhich one can decompose a scene -- should the wheels of an excavator be\nconsidered separate or part of the whole? We present Group Anything with\nRadiance Fields (GARField), an approach for decomposing 3D scenes into a\nhierarchy of semantically meaningful groups from posed image inputs. To do this\nwe embrace group ambiguity through physical scale: by optimizing a\nscale-conditioned 3D affinity feature field, a point in the world can belong to\ndifferent groups of different sizes. We optimize this field from a set of 2D\nmasks provided by Segment Anything (SAM) in a way that respects coarse-to-fine\nhierarchy, using scale to consistently fuse conflicting masks from different\nviewpoints. From this field we can derive a hierarchy of possible groupings via\nautomatic tree construction or user interaction. We evaluate GARField on a\nvariety of in-the-wild scenes and find it effectively extracts groups at many\nlevels: clusters of objects, objects, and various subparts. GARField inherently\nrepresents multi-view consistent groupings and produces higher fidelity groups\nthan the input SAM masks. GARField's hierarchical grouping could have exciting\ndownstream applications such as 3D asset extraction or dynamic scene\nunderstanding. See the project website at https://www.garfield.studio/", "keywords": ["Scene analysis and understanding"], "authors_list": ["Chung Min Kim", "Mingxuan Wu", "Justin Kerr", "Ken Goldberg", "Matthew Tancik", "Angjoo Kanazawa"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f85c"}, "filepath": "data/2310.16861.png", "tags": [], "_media_type": "image", "_rand": 0.9990364454800008, "arXiv_link": "https://arxiv.org/abs/2310.16861", "other_link": "", "title": "General Point Model Pretraining with Autoencoding and Autoregressive", "abstract": "The pre-training architectures of large language models encompass various\ntypes, including autoencoding models, autoregressive models, and\nencoder-decoder models. 
We posit that any modality can potentially benefit from\na large language model, as long as it undergoes vector quantization to become\ndiscrete tokens. Inspired by GLM, we propose a General Point Model (GPM) which\nseamlessly integrates autoencoding and autoregressive tasks in point cloud\ntransformer. This model is versatile, allowing fine-tuning for downstream point\ncloud representation tasks, as well as unconditional and conditional generation\ntasks. GPM enhances masked prediction in autoencoding through various forms of\nmask padding tasks, leading to improved performance in point cloud\nunderstanding. Additionally, GPM demonstrates highly competitive results in\nunconditional point cloud generation tasks, even exhibiting the potential for\nconditional generation tasks by modifying the input's conditional information.\nCompared to models like Point-BERT, MaskPoint and PointMAE, our GPM achieves\nsuperior performance in point cloud understanding tasks. Furthermore, the\nintegration of autoregressive and autoencoding within the same transformer\nunderscores its versatility across different downstream tasks.", "keywords": [], "authors_list": ["Zhe Li", "Zhangyang Gao", "Cheng Tan", "Bocheng Ren", "Laurence Yang", "Stan Z. Li"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f85d"}, "filepath": "data/2310.00258.png", "tags": [], "_media_type": "image", "_rand": 0.9991550641333306, "arXiv_link": "https://arxiv.org/abs/2310.00258", "other_link": "https://github.com/tmtuan1307/nayer.", "title": "NAYER: Noisy Layer Data Generation for Efficient and Effective Data-free Knowledge Distillation", "abstract": "Data-Free Knowledge Distillation (DFKD) has made significant recent strides\nby transferring knowledge from a teacher neural network to a student neural\nnetwork without accessing the original data. Nonetheless, existing approaches\nencounter a significant challenge when attempting to generate samples from\nrandom noise inputs, which inherently lack meaningful information.\nConsequently, these models struggle to effectively map this noise to the\nground-truth sample distribution, resulting in prolonging training times and\nlow-quality outputs. In this paper, we propose a novel Noisy Layer Generation\nmethod (NAYER) which relocates the random source from the input to a noisy\nlayer and utilizes the meaningful constant label-text embedding (LTE) as the\ninput. LTE is generated by using the language model once, and then it is stored\nin memory for all subsequent training processes. The significance of LTE lies\nin its ability to contain substantial meaningful inter-class information,\nenabling the generation of high-quality samples with only a few training steps.\nSimultaneously, the noisy layer plays a key role in addressing the issue of\ndiversity in sample generation by preventing the model from overemphasizing the\nconstrained label information. By reinitializing the noisy layer in each\niteration, we aim to facilitate the generation of diverse samples while still\nretaining the method's efficiency, thanks to the ease of learning provided by\nLTE. Experiments carried out on multiple datasets demonstrate that our NAYER\nnot only outperforms the state-of-the-art methods but also achieves speeds 5 to\n15 times faster than previous approaches. 
The code is available at\nhttps://github.com/tmtuan1307/nayer.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Minh-Tuan Tran", "Trung Le", "Xuan-May Le", "Mehrtash Harandi", "Quan Tran", "Dinh Phung"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f85e"}, "filepath": "data/2402.02045.png", "tags": [], "_media_type": "image", "_rand": 0.9997750376964731, "arXiv_link": "https://arxiv.org/abs/2402.02045", "other_link": "", "title": "MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning", "abstract": "The scarcity of annotated data has sparked significant interest in\nunsupervised pre-training methods that leverage medical reports as auxiliary\nsignals for medical visual representation learning. However, existing research\noverlooks the multi-granularity nature of medical visual representation and\nlacks suitable contrastive learning techniques to improve the models'\ngeneralizability across different granularities, leading to the\nunderutilization of image-text information. To address this, we propose MLIP, a\nnovel framework leveraging domain-specific medical knowledge as guiding signals\nto integrate language information into the visual domain through image-text\ncontrastive learning. Our model includes global contrastive learning with our\ndesigned divergence encoder, local token-knowledge-patch alignment contrastive\nlearning, and knowledge-guided category-level contrastive learning with expert\nknowledge. Experimental evaluations reveal the efficacy of our model in\nenhancing transfer performance for tasks such as image classification, object\ndetection, and semantic segmentation. Notably, MLIP surpasses state-of-the-art\nmethods even with limited annotated data, highlighting the potential of\nmultimodal pre-training in advancing medical representation learning.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Zhe Li", "Laurence Yang", "Bocheng Ren", "Xin Nie", "Zhangyang Gao", "Cheng Tan", "Stan Z. Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f85f"}, "filepath": "data/2312.04965.png", "tags": [], "_media_type": "image", "_rand": 0.9994499482196141, "arXiv_link": "https://arxiv.org/abs/2312.04965", "other_link": "https://sled-group.github.io/InfEdit/", "title": "Inversion-Free Image Editing with Language-Guided Diffusion Models", "abstract": "Despite recent advances in inversion-based editing, text-guided image\nmanipulation remains challenging for diffusion models. The primary bottlenecks\ninclude 1) the time-consuming nature of the inversion process; 2) the struggle\nto balance consistency with accuracy; 3) the lack of compatibility with\nefficient consistency sampling methods used in consistency models. To address\nthe above issues, we start by asking ourselves if the inversion process can be\neliminated for editing. We show that when the initial sample is known, a\nspecial variance schedule reduces the denoising step to the same form as the\nmulti-step consistency sampling. We name this Denoising Diffusion Consistent\nModel (DDCM), and note that it implies a virtual inversion strategy without\nexplicit inversion in sampling. 
We further unify the attention control\nmechanisms in a tuning-free framework for text-guided editing. Combining them,\nwe present inversion-free editing (InfEdit), which allows for consistent and\nfaithful editing for both rigid and non-rigid semantic changes, catering to\nintricate modifications without compromising on the image's integrity and\nexplicit inversion. Through extensive experiments, InfEdit shows strong\nperformance in various editing tasks and also maintains a seamless workflow\n(less than 3 seconds on one single A40), demonstrating the potential for\nreal-time applications. Project Page: https://sled-group.github.io/InfEdit/", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Sihan Xu", "Yidong Huang", "Jiayi Pan", "Ziqiao Ma", "Joyce Chai"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f860"}, "filepath": "data/2402.08876.png", "tags": [], "_media_type": "image", "_rand": 0.9996845223083991, "arXiv_link": "https://arxiv.org/abs/2402.08876", "other_link": "", "title": "DUDF: Differentiable Unsigned Distance Fields with Hyperbolic Scaling", "abstract": "In recent years, there has been a growing interest in training Neural\nNetworks to approximate Unsigned Distance Fields (UDFs) for representing open\nsurfaces in the context of 3D reconstruction. However, UDFs are\nnon-differentiable at the zero level set which leads to significant errors in\ndistances and gradients, generally resulting in fragmented and discontinuous\nsurfaces. In this paper, we propose to learn a hyperbolic scaling of the\nunsigned distance field, which defines a new Eikonal problem with distinct\nboundary conditions. This allows our formulation to integrate seamlessly with\nstate-of-the-art continuously differentiable implicit neural representation\nnetworks, largely applied in the literature to represent signed distance\nfields. Our approach not only addresses the challenge of open surface\nrepresentation but also demonstrates significant improvement in reconstruction\nquality and training performance. Moreover, the unlocked field's\ndifferentiability allows the accurate computation of essential topological\nproperties such as normal directions and curvatures, pervasive in downstream\ntasks such as rendering. Through extensive experiments, we validate our\napproach across various data sets and against competitive baselines. 
The\nresults demonstrate enhanced accuracy and up to an order of magnitude increase\nin speed compared to previous methods.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Miguel Fainstein", "Viviana Siless", "Emmanuel Iarussi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Graphics", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f861"}, "filepath": "data/2305.15404v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990083027671516, "arXiv_link": "https://arxiv.org/html/2305.15404v2", "other_link": "https://github.com/Parskatt/RoMa", "title": "RoMa: Robust Dense Feature Matching", "abstract": "Feature matching is an important computer vision task that involves\nestimating correspondences between two images of a 3D scene, and dense methods\nestimate all such correspondences. The aim is to learn a robust model, i.e., a\nmodel able to match under challenging real-world changes. In this work, we\npropose such a model, leveraging frozen pretrained features from the foundation\nmodel DINOv2. Although these features are significantly more robust than local\nfeatures trained from scratch, they are inherently coarse. We therefore combine\nthem with specialized ConvNet fine features, creating a precisely localizable\nfeature pyramid. To further improve robustness, we propose a tailored\ntransformer match decoder that predicts anchor probabilities, which enables it\nto express multimodality. Finally, we propose an improved loss formulation\nthrough regression-by-classification with subsequent robust regression. We\nconduct a comprehensive set of experiments that show that our method, RoMa,\nachieves significant gains, setting a new state-of-the-art. In particular, we\nachieve a 36% improvement on the extremely challenging WxBS benchmark. Code is\nprovided at https://github.com/Parskatt/RoMa", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Johan Edstedt", "Qiyu Sun", "Georg B\u00f6kman", "M\u00e5rten Wadenb\u00e4ck", "Michael Felsberg"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f862"}, "filepath": "data/2312.10835.png", "tags": [], "_media_type": "image", "_rand": 0.9992368878805571, "arXiv_link": "https://arxiv.org/abs/2312.10835", "other_link": "", "title": "Your Student is Better Than Expected: Adaptive Teacher-Student Collaboration for Text-Conditional Diffusion Models", "abstract": "Knowledge distillation methods have recently shown to be a promising\ndirection to speedup the synthesis of large-scale diffusion models by requiring\nonly a few inference steps. While several powerful distillation methods were\nrecently proposed, the overall quality of student samples is typically lower\ncompared to the teacher ones, which hinders their practical usage. In this\nwork, we investigate the relative quality of samples produced by the teacher\ntext-to-image diffusion model and its distilled student version. As our main\nempirical finding, we discover that a noticeable portion of student samples\nexhibit superior fidelity compared to the teacher ones, despite the\n\"approximate\" nature of the student. 
Based on this finding, we propose an\nadaptive collaboration between student and teacher diffusion models for\neffective text-to-image synthesis. Specifically, the distilled model produces\nthe initial sample, and then an oracle decides whether it needs further\nimprovements with a slow teacher model. Extensive experiments demonstrate that\nthe designed pipeline surpasses state-of-the-art text-to-image alternatives for\nvarious inference budgets in terms of human preference. Furthermore, the\nproposed approach can be naturally used in popular applications such as\ntext-guided image editing and controllable generation.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Nikita Starodubcev", "Dmitry Baranchuk", "Artem Fedorov", "Artem Babenko"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f863"}, "filepath": "data/2404.00989.png", "tags": [], "_media_type": "image", "_rand": 0.9994236968191879, "arXiv_link": "https://arxiv.org/abs/2404.00989", "other_link": "", "title": "$360+x$: A Panoptic Multi-modal Scene Understanding Dataset", "abstract": "Human perception of the world is shaped by a multitude of viewpoints and\nmodalities. While many existing datasets focus on scene understanding from a\ncertain perspective (e.g. egocentric or third-person views), our dataset offers\na panoptic perspective (i.e. multiple viewpoints with multiple data\nmodalities). Specifically, we encapsulate third-person panoramic and front\nviews, as well as egocentric monocular/binocular views with rich modalities\nincluding video, multi-channel audio, directional binaural delay, location data\nand textual scene descriptions within each scene captured, presenting\ncomprehensive observation of the world. Figure 1 offers a glimpse of all 28\nscene categories of our 360+x dataset. To the best of our knowledge, this is\nthe first database that covers multiple viewpoints with multiple data\nmodalities to mimic how daily information is accessed in the real world.\nThrough our benchmark analysis, we presented 5 different scene understanding\ntasks on the proposed 360+x dataset to evaluate the impact and benefit of each\ndata modality and perspective in panoptic scene understanding. We hope this\nunique dataset could broaden the scope of comprehensive scene understanding and\nencourage the community to approach these problems from more diverse\nperspectives.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Hao Chen", "Yuqi Hou", "Chenyuan Qu", "Irene Testini", "Xiaohan Hong", "Jianbo Jiao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Multimedia", "Sound", "Audio and Speech Processing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f864"}, "filepath": "data/2403.14101.png", "tags": [], "_media_type": "image", "_rand": 0.9996701271279568, "arXiv_link": "https://arxiv.org/abs/2403.14101", "other_link": "https://github.com/tmtuan1307/lander.", "title": "Text-Enhanced Data-free Approach for Federated Class-Incremental Learning", "abstract": "Federated Class-Incremental Learning (FCIL) is an underexplored yet pivotal\nissue, involving the dynamic addition of new classes in the context of\nfederated learning. 
In this field, Data-Free Knowledge Transfer (DFKT) plays a\ncrucial role in addressing catastrophic forgetting and data privacy problems.\nHowever, prior approaches lack the crucial synergy between DFKT and the model\ntraining phases, causing DFKT to encounter difficulties in generating\nhigh-quality data from a non-anchored latent space of the old task model. In\nthis paper, we introduce LANDER (Label Text Centered Data-Free Knowledge\nTransfer) to address this issue by utilizing label text embeddings (LTE)\nproduced by pretrained language models. Specifically, during the model training\nphase, our approach treats LTE as anchor points and constrains the feature\nembeddings of corresponding training samples around them, enriching the\nsurrounding area with more meaningful information. In the DFKT phase, by using\nthese LTE anchors, LANDER can synthesize more meaningful samples, thereby\neffectively addressing the forgetting problem. Additionally, instead of tightly\nconstraining embeddings toward the anchor, the Bounding Loss is introduced to\nencourage sample embeddings to remain flexible within a defined radius. This\napproach preserves the natural differences in sample embeddings and mitigates\nthe embedding overlap caused by heterogeneous federated settings. Extensive\nexperiments conducted on CIFAR100, Tiny-ImageNet, and ImageNet demonstrate that\nLANDER significantly outperforms previous methods and achieves state-of-the-art\nperformance in FCIL. The code is available at\nhttps://github.com/tmtuan1307/lander.", "keywords": [], "authors_list": ["Minh-Tuan Tran", "Trung Le", "Xuan-May Le", "Mehrtash Harandi", "Dinh Phung"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f865"}, "filepath": "data/2401.04092.png", "tags": [], "_media_type": "image", "_rand": 0.9994871238547576, "arXiv_link": "https://arxiv.org/abs/2401.04092", "other_link": "", "title": "GPT-4V(ision) is a Human-Aligned Evaluator for Text-to-3D Generation", "abstract": "Despite recent advances in text-to-3D generative methods, there is a notable\nabsence of reliable evaluation metrics. Existing metrics usually focus on a\nsingle criterion each, such as how well the asset aligned with the input text.\nThese metrics lack the flexibility to generalize to different evaluation\ncriteria and might not align well with human preferences. Conducting user\npreference studies is an alternative that offers both adaptability and\nhuman-aligned results. User studies, however, can be very expensive to scale.\nThis paper presents an automatic, versatile, and human-aligned evaluation\nmetric for text-to-3D generative models. To this end, we first develop a prompt\ngenerator using GPT-4V to generate evaluating prompts, which serve as input to\ncompare text-to-3D models. We further design a method instructing GPT-4V to\ncompare two 3D assets according to user-defined criteria. Finally, we use these\npairwise comparison results to assign these models Elo ratings. 
Experimental\nresults suggest our metric strongly align with human preference across\ndifferent evaluation criteria.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Tong Wu", "Guandao Yang", "Zhibing Li", "Kai Zhang", "Ziwei Liu", "Leonidas Guibas", "Dahua Lin", "Gordon Wetzstein"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f866"}, "filepath": "data/2312.14440.png", "tags": [], "_media_type": "image", "_rand": 0.9994510285559022, "arXiv_link": "https://arxiv.org/abs/2312.14440", "other_link": "", "title": "Adversarial Text to Continuous Image Generation", "abstract": "The widespread use of Text-to-Image (T2I) models in content generation\nrequires careful examination of their safety, including their robustness to\nadversarial attacks. Despite extensive research on adversarial attacks, the\nreasons for their effectiveness remain underexplored. This paper presents an\nempirical study on adversarial attacks against T2I models, focusing on\nanalyzing factors associated with attack success rates (ASR). We introduce a\nnew attack objective - entity swapping using adversarial suffixes and two\ngradient-based attack algorithms. Human and automatic evaluations reveal the\nasymmetric nature of ASRs on entity swap: for example, it is easier to replace\n\"human\" with \"robot\" in the prompt \"a human dancing in the rain.\" with an\nadversarial suffix, but the reverse replacement is significantly harder. We\nfurther propose probing metrics to establish indicative signals from the\nmodel's beliefs to the adversarial ASR. We identify conditions that result in a\nsuccess probability of 60% for adversarial attacks and others where this\nlikelihood drops below 5%.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Kilichbek Haydarov", "Aashiq Muhamed", "Xiaoqian Shen", "Jovana Lazarevic", "Ivan Skorokhodov", "Chamuditha Jayanga Galappaththige", "Mohamed Elhoseiny"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Cryptography and Security"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f867"}, "filepath": "data/2404.10633.png", "tags": [], "_media_type": "image", "_rand": 0.9990835747819455, "arXiv_link": "https://arxiv.org/abs/2404.10633", "other_link": "", "title": "Contextrast: Contextual Contrastive Learning for Semantic Segmentation", "abstract": "Despite great improvements in semantic segmentation, challenges persist\nbecause of the lack of local/global contexts and the relationship between them.\nIn this paper, we propose Contextrast, a contrastive learning-based semantic\nsegmentation method that allows to capture local/global contexts and comprehend\ntheir relationships. Our proposed method comprises two parts: a) contextual\ncontrastive learning (CCL) and b) boundary-aware negative (BANE) sampling.\nContextual contrastive learning obtains local/global context from multi-scale\nfeature aggregation and inter/intra-relationship of features for better\ndiscrimination capabilities. Meanwhile, BANE sampling selects embedding\nfeatures along the boundaries of incorrectly predicted regions to employ them\nas harder negative samples on our contrastive learning, resolving segmentation\nissues along the boundary region by exploiting fine-grained details. 
We\ndemonstrate that our Contextrast substantially enhances the performance of\nsemantic segmentation networks, outperforming state-of-the-art contrastive\nlearning approaches on diverse public datasets, e.g. Cityscapes, CamVid,\nPASCAL-C, COCO-Stuff, and ADE20K, without an increase in computational cost\nduring inference.", "keywords": [], "authors_list": ["Changki Sung", "Wanhee Kim", "Jungho An", "WooJu Lee", "Hyungtae Lim", "Hyun Myung"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f868"}, "filepath": "data/2403.13304.png", "tags": [], "_media_type": "image", "_rand": 0.9990396128188604, "arXiv_link": "https://arxiv.org/abs/2403.13304", "other_link": "", "title": "DetDiffusion: Synergizing Generative and Perceptive Models for Enhanced Data Generation and Perception", "abstract": "Current perceptive models heavily depend on resource-intensive datasets,\nprompting the need for innovative solutions. Leveraging recent advances in\ndiffusion models, synthetic data, by constructing image inputs from various\nannotations, proves beneficial for downstream tasks. While prior methods have\nseparately addressed generative and perceptive models, DetDiffusion, for the\nfirst time, harmonizes both, tackling the challenges in generating effective\ndata for perceptive models. To enhance image generation with perceptive models,\nwe introduce perception-aware loss (P.A. loss) through segmentation, improving\nboth quality and controllability. To boost the performance of specific\nperceptive models, our method customizes data augmentation by extracting and\nutilizing perception-aware attribute (P.A. Attr) during generation.\nExperimental results from the object detection task highlight DetDiffusion's\nsuperior performance, establishing a new state-of-the-art in layout-guided\ngeneration. Furthermore, image syntheses from DetDiffusion can effectively\naugment training data, significantly enhancing downstream detection\nperformance.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Yibo Wang", "Ruiyuan Gao", "Kai Chen", "Kaiqiang Zhou", "Yingjie CAI", "Lanqing Hong", "Zhenguo Li", "Lihui Jiang", "Dit-Yan Yeung", "Qiang Xu", "Kai Zhang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f869"}, "filepath": "data/2308.12001.png", "tags": [], "_media_type": "image", "_rand": 0.9992312147570637, "arXiv_link": "https://arxiv.org/abs/2308.12001", "other_link": "", "title": "Boosting Image Quality Assessment through Efficient Transformer Adaptation with Local Feature Enhancement", "abstract": "Image Quality Assessment (IQA) constitutes a fundamental task within the\nfield of computer vision, yet it remains an unresolved challenge, owing to the\nintricate distortion conditions, diverse image contents, and limited\navailability of data. Recently, the community has witnessed the emergence of\nnumerous large-scale pretrained foundation models, which greatly benefit from\ndramatically increased data and parameter capacities. However, it remains an\nopen problem whether the scaling law in high-level tasks is also applicable to\nIQA task which is closely related to low-level clues. 
In this paper, we\ndemonstrate that with proper injection of local distortion features, a larger\npretrained and fixed foundation model performs better in IQA tasks.\nSpecifically, for the lack of local distortion structure and inductive bias of\nvision transformer (ViT), alongside the large-scale pretrained ViT, we use\nanother pretrained convolution neural network (CNN), which is well known for\ncapturing the local structure, to extract multi-scale image features. Further,\nwe propose a local distortion extractor to obtain local distortion features\nfrom the pretrained CNN and a local distortion injector to inject the local\ndistortion features into ViT. By only training the extractor and injector, our\nmethod can benefit from the rich knowledge in the powerful foundation models\nand achieve state-of-the-art performance on popular IQA datasets, indicating\nthat IQA is not only a low-level problem but also benefits from stronger\nhigh-level features drawn from large-scale pretrained models.", "keywords": ["Efficient and scalable vision", "Low-level vision"], "authors_list": ["Kangmin Xu", "Liang Liao", "Jing Xiao", "Chaofeng Chen", "Haoning Wu", "Qiong Yan", "Weisi Lin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f86a"}, "filepath": "data/2401.01482.png", "tags": [], "_media_type": "image", "_rand": 0.9999921921933501, "arXiv_link": "https://arxiv.org/abs/2401.01482", "other_link": "", "title": "Incorporating Geo-Diverse Knowledge into Prompting for Increased Geographical Robustness in Object Recognition", "abstract": "Existing object recognition models have been shown to lack robustness in\ndiverse geographical scenarios due to domain shifts in design and context.\nClass representations need to be adapted to more accurately reflect an object\nconcept under these shifts. In the absence of training data from target\ngeographies, we hypothesize that geographically diverse descriptive knowledge\nof categories can enhance robustness. For this purpose, we explore the\nfeasibility of probing a large language model for geography-based object\nknowledge, and we examine the effects of integrating knowledge into zero-shot\nand learnable soft prompting with CLIP. Within this exploration, we propose\ngeography knowledge regularization to ensure that soft prompts trained on a\nsource set of geographies generalize to an unseen target set. Accuracy gains\nover prompting baselines on DollarStreet while training only on Europe data are\nup to +2.8/1.2/1.6 on target data from Africa/Asia/Americas, and +4.6 overall\non the hardest classes. Competitive performance is shown vs. 
few-shot target\ntraining, and analysis is provided to direct future study of geographical\nrobustness.", "keywords": ["Large multimodal models and prompting techniques"], "authors_list": ["Kyle Buettner", "Sina Malakouti", "Xiang Li", "Adriana Kovashka"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f86b"}, "filepath": "data/2404.00254.png", "tags": [], "_media_type": "image", "_rand": 0.9993447448754806, "arXiv_link": "https://arxiv.org/abs/2404.00254", "other_link": "", "title": "Clustering for Protein Representation Learning", "abstract": "Protein representation learning is a challenging task that aims to capture\nthe structure and function of proteins from their amino acid sequences.\nPrevious methods largely ignored the fact that not all amino acids are equally\nimportant for protein folding and activity. In this article, we propose a\nneural clustering framework that can automatically discover the critical\ncomponents of a protein by considering both its primary and tertiary structure\ninformation. Our framework treats a protein as a graph, where each node\nrepresents an amino acid and each edge represents a spatial or sequential\nconnection between amino acids. We then apply an iterative clustering strategy\nto group the nodes into clusters based on their 1D and 3D positions and assign\nscores to each cluster. We select the highest-scoring clusters and use their\nmedoid nodes for the next iteration of clustering, until we obtain a\nhierarchical and informative representation of the protein. We evaluate on four\nprotein-related tasks: protein fold classification, enzyme reaction\nclassification, gene ontology term prediction, and enzyme commission number\nprediction. Experimental results demonstrate that our method achieves\nstate-of-the-art performance.", "keywords": [], "authors_list": ["Ruijie Quan", "Wenguan Wang", "Fan Ma", "Hehe Fan", "Yi Yang"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computational Engineering, Finance, and Science", "Unknown", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f86c"}, "filepath": "data/2402.06659.png", "tags": [], "_media_type": "image", "_rand": 0.9990573225995278, "arXiv_link": "https://arxiv.org/abs/2402.06659", "other_link": "https://github.com/umd-huang-lab/VLM-Poisoning.", "title": "Semantic Shield: Defending Vision-Language Models Against Backdooring and Poisoning via Fine-grained Knowledge Alignment", "abstract": "Vision-Language Models (VLMs) excel in generating textual responses from\nvisual inputs, yet their versatility raises significant security concerns. This\nstudy takes the first step in exposing VLMs' susceptibility to data poisoning\nattacks that can manipulate responses to innocuous, everyday prompts. We\nintroduce Shadowcast, a stealthy data poisoning attack method where poison\nsamples are visually indistinguishable from benign images with matching texts.\nShadowcast demonstrates effectiveness in two attack types. The first is Label\nAttack, tricking VLMs into misidentifying class labels, such as confusing\nDonald Trump for Joe Biden. 
The second is Persuasion Attack, which leverages\nVLMs' text generation capabilities to craft narratives, such as portraying junk\nfood as health food, through persuasive and seemingly rational descriptions. We\nshow that Shadowcast are highly effective in achieving attacker's intentions\nusing as few as 50 poison samples. Moreover, these poison samples remain\neffective across various prompts and are transferable across different VLM\narchitectures in the black-box setting. This work reveals how poisoned VLMs can\ngenerate convincing yet deceptive misinformation and underscores the importance\nof data quality for responsible deployments of VLMs. Our code is available at:\nhttps://github.com/umd-huang-lab/VLM-Poisoning.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques", "Vision applications for social good and ethics"], "authors_list": ["Alvi Md Ishmam", "Chris Thomas"], "category_name": "Cryptography and Security", "all_categories": ["Cryptography and Security", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f86d"}, "filepath": "data/2403.10799.png", "tags": [], "_media_type": "image", "_rand": 0.9997653657013568, "arXiv_link": "https://arxiv.org/abs/2403.10799", "other_link": "", "title": "Structured Model Probing: Empowering Efficient Transfer Learning by Structured Regularization", "abstract": "Large language models (LLMs) have become crucial for many generative\ndownstream tasks, leading to an inevitable trend and significant challenge to\ndeploy them efficiently on resource-constrained devices. Structured pruning is\na widely used method to address this challenge. However, when dealing with the\ncomplex structure of the multiple decoder layers, general methods often employ\ncommon estimation approaches for pruning. These approaches lead to a decline in\naccuracy for specific downstream tasks. In this paper, we introduce a simple\nyet efficient method that adaptively models the importance of each\nsubstructure. Meanwhile, it can adaptively fuse coarse-grained and finegrained\nestimations based on the results from complex and multilayer structures. All\naspects of our design seamlessly integrate into the endto-end pruning\nframework. Our experimental results, compared with state-of-the-art methods on\nmainstream datasets, demonstrate average accuracy improvements of 1.1%, 1.02%,\n2.0%, and 1.2% for LLaMa-7B,Vicuna-7B, Baichuan-7B, and Bloom-7b1,\nrespectively.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Zhi-Fan Wu", "Chaojie Mao", "Xue Wang", "Jianwen Jiang", "Yiliang Lv", "Rong Jin"], "category_name": "Computation and Language", "all_categories": ["Computation and Language", "Artificial Intelligence", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f86e"}, "filepath": "data/2312.03420.png", "tags": [], "_media_type": "image", "_rand": 0.9991474564239187, "arXiv_link": "https://arxiv.org/abs/2312.03420", "other_link": "", "title": "Artist-Friendly Relightable and Animatable Neural Heads", "abstract": "An increasingly common approach for creating photo-realistic digital avatars\nis through the use of volumetric neural fields. 
The original neural radiance\nfield (NeRF) allowed for impressive novel view synthesis of static heads when\ntrained on a set of multi-view images, and follow up methods showed that these\nneural representations can be extended to dynamic avatars. Recently, new\nvariants also surpassed the usual drawback of baked-in illumination in neural\nrepresentations, showing that static neural avatars can be relit in any\nenvironment. In this work we simultaneously tackle both the motion and\nillumination problem, proposing a new method for relightable and animatable\nneural heads. Our method builds on a proven dynamic avatar approach based on a\nmixture of volumetric primitives, combined with a recently-proposed lightweight\nhardware setup for relightable neural fields, and includes a novel architecture\nthat allows relighting dynamic neural avatars performing unseen expressions in\nany environment, even with nearfield illumination and viewpoints.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation", "Biometrics and human analysis"], "authors_list": ["Yingyan Xu", "Prashanth Chandran", "Sebastian Weiss", "Markus Gross", "Gaspard Zoss", "Derek Bradley"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f86f"}, "filepath": "data/2403.20022.png", "tags": [], "_media_type": "image", "_rand": 0.99994544995426, "arXiv_link": "https://arxiv.org/abs/2403.20022", "other_link": "", "title": "Psychometry: An Omnifit Model for Image Reconstruction from Human Brain Activity", "abstract": "Reconstructing the viewed images from human brain activity bridges human and\ncomputer vision through the Brain-Computer Interface. The inherent variability\nin brain function between individuals leads existing literature to focus on\nacquiring separate models for each individual using their respective brain\nsignal data, ignoring commonalities between these data. In this article, we\ndevise Psychometry, an omnifit model for reconstructing images from functional\nMagnetic Resonance Imaging (fMRI) obtained from different subjects. Psychometry\nincorporates an omni mixture-of-experts (Omni MoE) module where all the experts\nwork together to capture the inter-subject commonalities, while each expert\nassociated with subject-specific parameters copes with the individual\ndifferences. Moreover, Psychometry is equipped with a retrieval-enhanced\ninference strategy, termed Ecphory, which aims to enhance the learned fMRI\nrepresentation via retrieving from prestored subject-specific memories. These\ndesigns collectively render Psychometry omnifit and efficient, enabling it to\ncapture both inter-subject commonality and individual specificity across\nsubjects. 
As a result, the enhanced fMRI representations serve as conditional\nsignals to guide a generation model to reconstruct high-quality and realistic\nimages, establishing Psychometry as state-of-the-art in terms of both\nhigh-level and low-level metrics.", "keywords": ["Medical imaging and biological vision", "Image and video generation and manipulation"], "authors_list": ["Ruijie Quan", "Wenguan Wang", "Zhibo Tian", "Fan Ma", "Yi Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f870"}, "filepath": "data/2307.11086.png", "tags": [], "_media_type": "image", "_rand": 0.9993044044347794, "arXiv_link": "https://arxiv.org/abs/2307.11086", "other_link": "https://zvict.github.io/papr/.", "title": "PAPR in Motion: Seamless Point-level 3D Scene Interpolation", "abstract": "Learning accurate and parsimonious point cloud representations of scene\nsurfaces from scratch remains a challenge in 3D representation learning.\nExisting point-based methods often suffer from the vanishing gradient problem\nor require a large number of points to accurately model scene geometry and\ntexture. To address these limitations, we propose Proximity Attention Point\nRendering (PAPR), a novel method that consists of a point-based scene\nrepresentation and a differentiable renderer. Our scene representation uses a\npoint cloud where each point is characterized by its spatial position,\ninfluence score, and view-independent feature vector. The renderer selects the\nrelevant points for each ray and produces accurate colours using their\nassociated features. PAPR effectively learns point cloud positions to represent\nthe correct scene geometry, even when the initialization drastically differs\nfrom the target geometry. Notably, our method captures fine texture details\nwhile using only a parsimonious set of points. We also demonstrate four\npractical applications of our method: zero-shot geometry editing, object\nmanipulation, texture transfer, and exposure control. More results and code are\navailable on our project website at https://zvict.github.io/papr/.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding", "Image and video generation and manipulation"], "authors_list": ["Shichong Peng", "Yanshu Zhang", "Ke Li"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Graphics", "Machine Learning", "Neural and Evolutionary Computing"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f871"}, "filepath": "data/2312.07538.png", "tags": [], "_media_type": "image", "_rand": 0.9994579475792401, "arXiv_link": "https://arxiv.org/abs/2312.07538", "other_link": "", "title": "Anatomically Constrained Implicit Face Models", "abstract": "Coordinate based implicit neural representations have gained rapid popularity\nin recent years as they have been successfully used in image, geometry and\nscene modeling tasks. In this work, we present a novel use case for such\nimplicit representations in the context of learning anatomically constrained\nface models. 
Actor specific anatomically constrained face models are the state\nof the art in both facial performance capture and performance retargeting.\nDespite their practical success, these anatomical models are slow to evaluate\nand often require extensive data capture to be built. We propose the anatomical\nimplicit face model; an ensemble of implicit neural networks that jointly learn\nto model the facial anatomy and the skin surface with high-fidelity, and can\nreadily be used as a drop in replacement to conventional blendshape models.\nGiven an arbitrary set of skin surface meshes of an actor and only a neutral\nshape with estimated skull and jaw bones, our method can recover a dense\nanatomical substructure which constrains every point on the facial surface. We\ndemonstrate the usefulness of our approach in several tasks ranging from shape\nfitting, shape editing, and performance retargeting.", "keywords": ["Deep learning architectures and techniques", "Biometrics and human analysis"], "authors_list": ["Prashanth Chandran", "Gaspard Zoss"], "category_name": "Graphics", "all_categories": ["Graphics", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f872"}, "filepath": "data/2402.03908.png", "tags": [], "_media_type": "image", "_rand": 0.9993492527117283, "arXiv_link": "https://arxiv.org/abs/2402.03908", "other_link": "https://kxhit.github.io/EscherNet.", "title": "EscherNet: A Generative Model for Scalable View Synthesis", "abstract": "We introduce EscherNet, a multi-view conditioned diffusion model for view\nsynthesis. EscherNet learns implicit and generative 3D representations coupled\nwith a specialised camera positional encoding, allowing precise and continuous\nrelative control of the camera transformation between an arbitrary number of\nreference and target views. EscherNet offers exceptional generality,\nflexibility, and scalability in view synthesis -- it can generate more than 100\nconsistent target views simultaneously on a single consumer-grade GPU, despite\nbeing trained with a fixed number of 3 reference views to 3 target views. As a\nresult, EscherNet not only addresses zero-shot novel view synthesis, but also\nnaturally unifies single- and multi-image 3D reconstruction, combining these\ndiverse tasks into a single, cohesive framework. Our extensive experiments\ndemonstrate that EscherNet achieves state-of-the-art performance in multiple\nbenchmarks, even when compared to methods specifically tailored for each\nindividual problem. This remarkable versatility opens up new directions for\ndesigning scalable neural architectures for 3D vision. Project page:\nhttps://kxhit.github.io/EscherNet.", "keywords": ["Deep learning architectures and techniques", "Efficient and scalable vision"], "authors_list": ["Xin Kong", "Shikun Liu", "Xiaoyang Lyu", "Marwan Taher", "Xiaojuan Qi", "Andrew J. Davison"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f873"}, "filepath": "data/2403.10073.png", "tags": [], "_media_type": "image", "_rand": 0.999605330208687, "arXiv_link": "https://arxiv.org/abs/2403.10073", "other_link": "https://github.com/NISPLab/AT-BSL", "title": "Revisiting Adversarial Training under Long-Tailed Distributions", "abstract": "Deep neural networks are vulnerable to adversarial attacks, often leading to\nerroneous outputs. 
Adversarial training has been recognized as one of the most\neffective methods to counter such attacks. However, existing adversarial\ntraining techniques have predominantly been tested on balanced datasets,\nwhereas real-world data often exhibit a long-tailed distribution, casting doubt\non the efficacy of these methods in practical scenarios.\n In this paper, we delve into adversarial training under long-tailed\ndistributions. Through an analysis of the previous work \"RoBal\", we discover\nthat utilizing Balanced Softmax Loss alone can achieve performance comparable\nto the complete RoBal approach while significantly reducing training overheads.\nAdditionally, we reveal that, similar to uniform distributions, adversarial\ntraining under long-tailed distributions also suffers from robust overfitting.\nTo address this, we explore data augmentation as a solution and unexpectedly\ndiscover that, unlike results obtained with balanced data, data augmentation\nnot only effectively alleviates robust overfitting but also significantly\nimproves robustness. We further investigate the reasons behind the improvement\nof robustness through data augmentation and identify that it is attributable to\nthe increased diversity of examples. Extensive experiments further corroborate\nthat data augmentation alone can significantly improve robustness. Finally,\nbuilding on these findings, we demonstrate that compared to RoBal, the\ncombination of BSL and data augmentation leads to a +6.66% improvement in model\nrobustness under AutoAttack on CIFAR-10-LT. Our code is available at\nhttps://github.com/NISPLab/AT-BSL .", "keywords": [], "authors_list": ["Xinli Yue", "Ningping Mou", "Qian Wang", "Lingchen Zhao"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f874"}, "filepath": "data/2312.01985.png", "tags": [], "_media_type": "image", "_rand": 0.9998686844736226, "arXiv_link": "https://arxiv.org/abs/2312.01985", "other_link": "https://github.com/qqlu/Entity}{https://github.com/qqlu/Entity}.", "title": "UniGS: Unified Representation for Image Generation and Segmentation", "abstract": "This paper introduces a novel unified representation of diffusion models for\nimage generation and segmentation. Specifically, we use a colormap to represent\nentity-level masks, addressing the challenge of varying entity numbers while\naligning the representation closely with the image RGB domain. Two novel\nmodules, including the location-aware color palette and progressive dichotomy\nmodule, are proposed to support our mask representation. On the one hand, a\nlocation-aware palette guarantees the colors' consistency to entities'\nlocations. On the other hand, the progressive dichotomy module can efficiently\ndecode the synthesized colormap to high-quality entity-level masks in a\ndepth-first binary search without knowing the cluster numbers. To tackle the\nissue of lacking large-scale segmentation training data, we employ an\ninpainting pipeline and then improve the flexibility of diffusion models across\nvarious tasks, including inpainting, image synthesis, referring segmentation,\nand entity segmentation. Comprehensive experiments validate the efficiency of\nour approach, demonstrating comparable segmentation mask quality to\nstate-of-the-art and adaptability to multiple tasks. 
The code will be released\nat \\href{https://github.com/qqlu/Entity}{https://github.com/qqlu/Entity}.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Lu Qi", "Lehan Yang", "Weidong Guo", "Yu Xu", "Bo Du", "Varun Jampani", "Ming-Hsuan Yang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f875"}, "filepath": "data/2312.10113.png", "tags": [], "_media_type": "image", "_rand": 0.999257507566832, "arXiv_link": "https://arxiv.org/abs/2312.10113", "other_link": "", "title": "Focus on Your Instruction: Fine-grained and Multi-instruction Image Editing by Attention Modulation", "abstract": "Recently, diffusion-based methods, like InstructPix2Pix (IP2P), have achieved\neffective instruction-based image editing, requiring only natural language\ninstructions from the user. However, these methods often inadvertently alter\nunintended areas and struggle with multi-instruction editing, resulting in\ncompromised outcomes. To address these issues, we introduce the Focus on Your\nInstruction (FoI), a method designed to ensure precise and harmonious editing\nacross multiple instructions without extra training or test-time optimization.\nIn the FoI, we primarily emphasize two aspects: (1) precisely extracting\nregions of interest for each instruction and (2) guiding the denoising process\nto concentrate within these regions of interest. For the first objective, we\nidentify the implicit grounding capability of IP2P from the cross-attention\nbetween instruction and image, then develop an effective mask extraction\nmethod. For the second objective, we introduce a cross attention modulation\nmodule for rough isolation of target editing regions and unrelated regions.\nAdditionally, we introduce a mask-guided disentangle sampling strategy to\nfurther ensure clear region isolation. Experimental results demonstrate that\nFoI surpasses existing methods in both quantitative and qualitative\nevaluations, especially excelling in multi-instruction editing task.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["guo", "Tianwei Lin"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f876"}, "filepath": "data/2312.00778.png", "tags": [], "_media_type": "image", "_rand": 0.9999668266088253, "arXiv_link": "https://arxiv.org/abs/2312.00778", "other_link": "", "title": "MorpheuS: Neural Dynamic 360$^{\\circ}$ Surface Reconstruction from Monocular RGB-D Video", "abstract": "Neural rendering has demonstrated remarkable success in dynamic scene\nreconstruction. Thanks to the expressiveness of neural representations, prior\nworks can accurately capture the motion and achieve high-fidelity\nreconstruction of the target object. Despite this, real-world video scenarios\noften feature large unobserved regions where neural representations struggle to\nachieve realistic completion. To tackle this challenge, we introduce MorpheuS,\na framework for dynamic 360{\\deg} surface reconstruction from a casually\ncaptured RGB-D video. Our approach models the target scene as a canonical field\nthat encodes its geometry and appearance, in conjunction with a deformation\nfield that warps points from the current frame to the canonical space. 
We\nleverage a view-dependent diffusion prior and distill knowledge from it to\nachieve realistic completion of unobserved regions. Experimental results on\nvarious real-world and synthetic datasets show that our method can achieve\nhigh-fidelity 360{\\deg} surface reconstruction of a deformable object from a\nmonocular RGB-D video.", "keywords": ["Deep learning architectures and techniques", "Scene analysis and understanding"], "authors_list": ["Hengyi Wang", "Jingwen Wang", "Lourdes Agapito"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f877"}, "filepath": "data/2312.09168v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995652804899243, "arXiv_link": "https://arxiv.org/abs/2312.09168v2", "other_link": "", "title": "DiffusionLight: Light Probes for Free by Painting a Chrome Ball", "abstract": "We present a simple yet effective technique to estimate lighting in a single\ninput image. Current techniques rely heavily on HDR panorama datasets to train\nneural networks to regress an input with limited field-of-view to a full\nenvironment map. However, these approaches often struggle with real-world,\nuncontrolled settings due to the limited diversity and size of their datasets.\nTo address this problem, we leverage diffusion models trained on billions of\nstandard images to render a chrome ball into the input image. Despite its\nsimplicity, this task remains challenging: the diffusion models often insert\nincorrect or inconsistent objects and cannot readily generate images in HDR\nformat. Our research uncovers a surprising relationship between the appearance\nof chrome balls and the initial diffusion noise map, which we utilize to\nconsistently generate high-quality chrome balls. We further fine-tune an LDR\ndifusion model (Stable Diffusion XL) with LoRA, enabling it to perform exposure\nbracketing for HDR light estimation. Our method produces convincing light\nestimates across diverse settings and demonstrates superior generalization to\nin-the-wild scenarios.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation", "Low-level vision"], "authors_list": ["Pakkapon Phongthawee", "Worameth Chinchuthakun", "Nontaphat Sinsunthithet", "Varun Jampani", "Amit Raj", "Pramook Khungurn", "Supasorn Suwajanakorn"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Machine Learning", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f878"}, "filepath": "data/2404.01686.png", "tags": [], "_media_type": "image", "_rand": 0.9990506273675928, "arXiv_link": "https://arxiv.org/abs/2404.01686", "other_link": "", "title": "JRDB-PanoTrack: An Open-world Panoptic Segmentation and Tracking Robotic Dataset in Crowded Human Environments", "abstract": "Autonomous robot systems have attracted increasing research attention in\nrecent years, where environment understanding is a crucial step for robot\nnavigation, human-robot interaction, and decision. 
Real-world robot systems\nusually collect visual data from multiple sensors and are required to recognize\nnumerous objects and their movements in complex human-crowded settings.\nTraditional benchmarks, with their reliance on single sensors and limited\nobject classes and scenarios, fail to provide the comprehensive environmental\nunderstanding robots need for accurate navigation, interaction, and\ndecision-making. As an extension of JRDB dataset, we unveil JRDB-PanoTrack, a\nnovel open-world panoptic segmentation and tracking benchmark, towards more\ncomprehensive environmental perception. JRDB-PanoTrack includes (1) various\ndata involving indoor and outdoor crowded scenes, as well as comprehensive 2D\nand 3D synchronized data modalities; (2) high-quality 2D spatial panoptic\nsegmentation and temporal tracking annotations, with additional 3D label\nprojections for further spatial understanding; (3) diverse object classes for\nclosed- and open-world recognition benchmarks, with OSPA-based metrics for\nevaluation. Extensive evaluation of leading methods shows significant\nchallenges posed by our dataset.", "keywords": ["Scene analysis and understanding"], "authors_list": ["Duy Tho Le", "Chenhui Gou", "Stavya Datta", "Hengcan Shi", "Ian Reid", "Jianfei Cai", "Hamid Rezatofighi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f879"}, "filepath": "data/2308.11228v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994033092184259, "arXiv_link": "https://arxiv.org/html/2308.11228v2", "other_link": "", "title": "Adaptive VIO: Deep Visual-Inertial Odometry with Online Continual Learning", "abstract": "Visual-inertial odometry (VIO) is a vital technique used in robotics,\naugmented reality, and autonomous vehicles. It combines visual and inertial\nmeasurements to accurately estimate position and orientation. Existing VIO\nmethods assume a fixed noise covariance for the inertial uncertainty. However,\naccurately determining in real-time the noise variance of the inertial sensors\npresents a significant challenge as the uncertainty changes throughout the\noperation leading to suboptimal performance and reduced accuracy. To circumvent\nthis, we propose VIO-DualProNet, a novel approach that utilizes deep learning\nmethods to dynamically estimate the inertial noise uncertainty in real-time. 
By\ndesigning and training a deep neural network to predict inertial noise\nuncertainty using only inertial sensor measurements, and integrating it into\nthe VINS-Mono algorithm, we demonstrate a substantial improvement in accuracy\nand robustness, enhancing VIO performance and potentially benefiting other\nVIO-based systems for precise localization and mapping across diverse\nconditions.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Youqi Pan", "Wugen Zhou", "Yingdian Cao", "Hongbin Zha"], "category_name": "Robotics", "all_categories": ["Robotics", "Systems and Control", "Systems and Control"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f87a"}, "filepath": "data/2311.02995.png", "tags": [], "_media_type": "image", "_rand": 0.9991088821272404, "arXiv_link": "https://arxiv.org/abs/2311.02995", "other_link": "https://github.com/liwenchao0615/ZERRINNet", "title": "ZERO-IG: Zero-Shot Illumination-Guided Joint Denoising and Adaptive Enhancement for Low-Light Images", "abstract": "Two difficulties here make low-light image enhancement a challenging task;\nfirstly, it needs to consider not only luminance restoration but also image\ncontrast, image denoising and color distortion issues simultaneously. Second,\nthe effectiveness of existing low-light enhancement methods depends on paired\nor unpaired training data with poor generalization performance.\n To solve these difficult problems, we propose in this paper a new\nlearning-based Retinex decomposition of zero-shot low-light enhancement method,\ncalled ZERRINNet. To this end, we first designed the N-Net network, together\nwith the noise loss term, to be used for denoising the original low-light image\nby estimating the noise of the low-light image. Moreover, RI-Net is used to\nestimate the reflection component and illumination component, and in order to\nsolve the color distortion and contrast, we use the texture loss term and\nsegmented smoothing loss to constrain the reflection component and illumination\ncomponent. Finally, our method is a zero-reference enhancement method that is\nnot affected by the training data of paired and unpaired datasets, so our\ngeneralization performance is greatly improved, and in the paper, we have\neffectively validated it with a homemade real-life low-light dataset and\nadditionally with advanced vision tasks, such as face detection, target\nrecognition, and instance segmentation. We conducted comparative experiments on\na large number of public datasets and the results show that the performance of\nour method is competitive compared to the current state-of-the-art methods. 
The\ncode is available at:https://github.com/liwenchao0615/ZERRINNet", "keywords": ["Low-level vision"], "authors_list": ["Yiqi Shi", "Duo Liu", "Liguo Zhang", "Ye Tian", "Xuezhi Xia", "fuxiaojing"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f87b"}, "filepath": "data/2312.15297.png", "tags": [], "_media_type": "image", "_rand": 0.9999737659879201, "arXiv_link": "https://arxiv.org/abs/2312.15297", "other_link": "", "title": "Make Me a BNN: A Simple Strategy for Estimating Bayesian Uncertainty from Pre-trained Models", "abstract": "Deep Neural Networks (DNNs) are powerful tools for various computer vision\ntasks, yet they often struggle with reliable uncertainty quantification - a\ncritical requirement for real-world applications. Bayesian Neural Networks\n(BNN) are equipped for uncertainty estimation but cannot scale to large DNNs\nthat are highly unstable to train. To address this challenge, we introduce the\nAdaptable Bayesian Neural Network (ABNN), a simple and scalable strategy to\nseamlessly transform DNNs into BNNs in a post-hoc manner with minimal\ncomputational and training overheads. ABNN preserves the main predictive\nproperties of DNNs while enhancing their uncertainty quantification abilities\nthrough simple BNN adaptation layers (attached to normalization layers) and a\nfew fine-tuning steps on pre-trained models. We conduct extensive experiments\nacross multiple datasets for image classification and semantic segmentation\ntasks, and our results demonstrate that ABNN achieves state-of-the-art\nperformance without the computational budget typically associated with ensemble\nmethods.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Gianni Franchi", "Olivier Laurent", "Maxence Legu\u00e9ry", "Andrei Bursuc", "Andrea Pilzer", "Angela Yao"], "category_name": "Machine Learning", "all_categories": ["Machine Learning", "Computer Vision and Pattern Recognition", "Unknown"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f87c"}, "filepath": "data/2404.07990v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993451706672585, "arXiv_link": "https://arxiv.org/abs/2404.07990v1", "other_link": "", "title": "OpenBias: Open-set Bias Detection in Text-to-Image Generative Models", "abstract": "Text-to-image generative models are becoming increasingly popular and\naccessible to the general public. As these models see large-scale deployments,\nit is necessary to deeply investigate their safety and fairness to not\ndisseminate and perpetuate any kind of biases. However, existing works focus on\ndetecting closed sets of biases defined a priori, limiting the studies to\nwell-known concepts. In this paper, we tackle the challenge of open-set bias\ndetection in text-to-image generative models presenting OpenBias, a new\npipeline that identifies and quantifies the severity of biases agnostically,\nwithout access to any precompiled set. OpenBias has three stages. In the first\nphase, we leverage a Large Language Model (LLM) to propose biases given a set\nof captions. Secondly, the target generative model produces images using the\nsame set of captions. Lastly, a Vision Question Answering model recognizes the\npresence and extent of the previously proposed biases. 
We study the behavior of\nStable Diffusion 1.5, 2, and XL emphasizing new biases, never investigated\nbefore. Via quantitative experiments, we demonstrate that OpenBias agrees with\ncurrent closed-set bias detection methods and human judgement.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Moreno D'Inc\u00e0", "Elia Peruzzo", "Massimiliano Mancini", "Dejia Xu", "Vidit Goel", "Xingqian Xu", "Zhangyang Wang", "Humphrey Shi", "Nicu Sebe"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f87d"}, "filepath": "data/2404.11139v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990816776795727, "arXiv_link": "https://arxiv.org/abs/2404.11139v1", "other_link": "", "title": "GeoReF: Geometric Alignment Across Shape Variation for Category-level Object Pose Refinement", "abstract": "Object pose refinement is essential for robust object pose estimation.\nPrevious work has made significant progress towards instance-level object pose\nrefinement. Yet, category-level pose refinement is a more challenging problem\ndue to large shape variations within a category and the discrepancies between\nthe target object and the shape prior. To address these challenges, we\nintroduce a novel architecture for category-level object pose refinement. Our\napproach integrates an HS-layer and learnable affine transformations, which\naims to enhance the extraction and alignment of geometric information.\nAdditionally, we introduce a cross-cloud transformation mechanism that\nefficiently merges diverse data sources. Finally, we push the limits of our\nmodel by incorporating the shape prior information for translation and size\nerror prediction. We conducted extensive experiments to demonstrate the\neffectiveness of the proposed framework. Through extensive quantitative\nexperiments, we demonstrate significant improvement over the baseline method by\na large margin across all metrics.", "keywords": ["Deep learning architectures and techniques"], "authors_list": ["Linfang Zheng", "Tze Ho Elden Tse", "Chen Wang", "Yinghan Sun", "Hua Chen", "Ale\u0161 Leonardis", "Wei Zhang", "Hyung Jin Chang"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f87e"}, "filepath": "data/2404.01636.png", "tags": [], "_media_type": "image", "_rand": 0.999916167895354, "arXiv_link": "https://arxiv.org/abs/2404.01636", "other_link": "", "title": "Learning to Control Camera Exposure via Reinforcement Learning", "abstract": "Adjusting camera exposure in arbitrary lighting conditions is the first step\nto ensure the functionality of computer vision applications. Poorly adjusted\ncamera exposure often leads to critical failure and performance degradation.\nTraditional camera exposure control methods require multiple convergence steps\nand time-consuming processes, making them unsuitable for dynamic lighting\nconditions. In this paper, we propose a new camera exposure control framework\nthat rapidly controls camera exposure while performing real-time processing by\nexploiting deep reinforcement learning. 
The proposed framework consists of four\ncontributions: 1) a simplified training ground to simulate real-world's diverse\nand dynamic lighting changes, 2) flickering and image attribute-aware reward\ndesign, along with lightweight state design for real-time processing, 3) a\nstatic-to-dynamic lighting curriculum to gradually improve the agent's\nexposure-adjusting capability, and 4) domain randomization techniques to\nalleviate the limitation of the training ground and achieve seamless\ngeneralization in the wild.As a result, our proposed method rapidly reaches a\ndesired exposure level within five steps with real-time processing (1 ms).\nAlso, the acquired images are well-exposed and show superiority in various\ncomputer vision tasks, such as feature extraction and object detection.", "keywords": ["Efficient and scalable vision", "Low-level vision"], "authors_list": ["Kyunghyun Lee", "Ukcheol Shin", "Byeong-Uk Lee"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Machine Learning", "Robotics", "Systems and Control", "Systems and Control"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f87f"}, "filepath": "data/2403.01124.png", "tags": [], "_media_type": "image", "_rand": 0.9990211544389846, "arXiv_link": "https://arxiv.org/abs/2403.01124", "other_link": "", "title": "Text-guided Explorable Image Super-resolution", "abstract": "In this paper, we introduce the problem of zero-shot text-guided exploration\nof the solutions to open-domain image super-resolution. Our goal is to allow\nusers to explore diverse, semantically accurate reconstructions that preserve\ndata consistency with the low-resolution inputs for different large\ndownsampling factors without explicitly training for these specific\ndegradations. We propose two approaches for zero-shot text-guided\nsuper-resolution - i) modifying the generative process of text-to-image\n\\textit{T2I} diffusion models to promote consistency with low-resolution\ninputs, and ii) incorporating language guidance into zero-shot diffusion-based\nrestoration methods. We show that the proposed approaches result in diverse\nsolutions that match the semantic meaning provided by the text prompt while\npreserving data consistency with the degraded inputs. We evaluate the proposed\nbaselines for the task of extreme super-resolution and demonstrate advantages\nin terms of restoration quality, diversity, and explorability of solutions.", "keywords": ["Multimodal models and vision-language models"], "authors_list": ["Kanchana Vaishnavi Gandikota", "Paramanand Chandramouli"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f880"}, "filepath": "data/2401.01952.png", "tags": [], "_media_type": "image", "_rand": 0.9994859106208299, "arXiv_link": "https://arxiv.org/abs/2401.01952", "other_link": "", "title": "Instruct-Imagen: Image Generation with Multi-modal Instruction", "abstract": "This paper presents instruct-imagen, a model that tackles heterogeneous image\ngeneration tasks and generalizes across unseen tasks. We introduce *multi-modal\ninstruction* for image generation, a task representation articulating a range\nof generation intents with precision. 
It uses natural language to amalgamate\ndisparate modalities (e.g., text, edge, style, subject, etc.), such that\nabundant generation intents can be standardized in a uniform format.\n We then build instruct-imagen by fine-tuning a pre-trained text-to-image\ndiffusion model with a two-stage framework. First, we adapt the model using the\nretrieval-augmented training, to enhance model's capabilities to ground its\ngeneration on external multimodal context. Subsequently, we fine-tune the\nadapted model on diverse image generation tasks that requires vision-language\nunderstanding (e.g., subject-driven generation, etc.), each paired with a\nmulti-modal instruction encapsulating the task's essence. Human evaluation on\nvarious image generation datasets reveals that instruct-imagen matches or\nsurpasses prior task-specific models in-domain and demonstrates promising\ngeneralization to unseen and more complex tasks.", "keywords": ["Image and video generation and manipulation", "Multimodal models and vision-language models"], "authors_list": ["Hexiang Hu", "Kelvin C.K. Chan", "Yu-Chuan Su", "Wenhu Chen", "Yandong Li", "Kihyuk Sohn", "Yang Zhao", "Xue Ben", "William Cohen", "Ming-Wei Chang", "Xuhui Jia"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f881"}, "filepath": "data/2312.02520v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990678837655554, "arXiv_link": "https://arxiv.org/abs/2312.02520v2", "other_link": "", "title": "Towards More Unified In-context Visual Understanding", "abstract": "The rapid advancement of large language models (LLMs) has accelerated the\nemergence of in-context learning (ICL) as a cutting-edge approach in the\nnatural language processing domain. Recently, ICL has been employed in visual\nunderstanding tasks, such as semantic segmentation and image captioning,\nyielding promising results. However, existing visual ICL framework can not\nenable producing content across multiple modalities, which limits their\npotential usage scenarios. To address this issue, we present a new ICL\nframework for visual understanding with multi-modal output enabled. First, we\nquantize and embed both text and visual prompt into a unified representational\nspace, structured as interleaved in-context sequences. Then a decoder-only\nsparse transformer architecture is employed to perform generative modeling on\nthem, facilitating in-context learning. Thanks to this design, the model is\ncapable of handling in-context vision understanding tasks with multimodal\noutput in a unified pipeline.Experimental results demonstrate that our model\nachieves competitive performance compared with specialized models and previous\nICL baselines. 
Overall, our research takes a further step toward unified\nmultimodal in-context learning.", "keywords": ["Multimodal models and vision-language models", "Large multimodal models and prompting techniques"], "authors_list": ["Dianmo Sheng", "Dongdong Chen", "Zhentao Tan", "Qiankun Liu", "Qi Chu", "Jianmin Bao", "Tao Gong", "Bin Liu", "Shengwei Xu", "Nenghai Yu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f882"}, "filepath": "data/2311.17076.png", "tags": [], "_media_type": "image", "_rand": 0.9999583191835241, "arXiv_link": "https://arxiv.org/abs/2311.17076", "other_link": "https://github.com/chancharikmitra/CCoT", "title": "Compositional Chain-of-Thought Prompting for Large Multimodal Models", "abstract": "The combination of strong visual backbones and Large Language Model (LLM)\nreasoning has led to Large Multimodal Models (LMMs) becoming the current\nstandard for a wide range of vision and language (VL) tasks. However, recent\nresearch has shown that even the most advanced LMMs still struggle to capture\naspects of compositional visual reasoning, such as attributes and relationships\nbetween objects. One solution is to utilize scene graphs (SGs)--a formalization\nof objects and their relations and attributes that has been extensively used as\na bridge between the visual and textual domains. Yet, scene graph data requires\nscene graph annotations, which are expensive to collect and thus not easily\nscalable. Moreover, finetuning an LMM based on SG data can lead to catastrophic\nforgetting of the pretraining objective. To overcome this, inspired by\nchain-of-thought methods, we propose Compositional Chain-of-Thought (CCoT), a\nnovel zero-shot Chain-of-Thought prompting method that utilizes SG\nrepresentations in order to extract compositional knowledge from an LMM.\nSpecifically, we first generate an SG using the LMM, and then use that SG in\nthe prompt to produce a response. Through extensive experiments, we find that\nthe proposed CCoT approach not only improves LMM performance on several vision\nand language VL compositional benchmarks but also improves the performance of\nseveral popular LMMs on general multimodal benchmarks, without the need for\nfine-tuning or annotated ground-truth SGs. Code:\nhttps://github.com/chancharikmitra/CCoT", "keywords": ["Scene analysis and understanding", "Multimodal models and vision-language models"], "authors_list": ["Chancharik Mitra", "Brandon Huang", "Trevor Darrell", "Roei Herzig"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Artificial Intelligence", "Computation and Language", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f883"}, "filepath": "data/2402.19289v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997679328864959, "arXiv_link": "https://arxiv.org/abs/2402.19289v2", "other_link": "", "title": "CAMixerSR: Only Details Need More \"Attention\"", "abstract": "To satisfy the rapidly increasing demands on the large image (2K-8K)\nsuper-resolution (SR), prevailing methods follow two independent tracks: 1)\naccelerate existing networks by content-aware routing, and 2) design better\nsuper-resolution networks via token mixer refining. 
Despite directness, they\nencounter unavoidable defects (e.g., inflexible route or non-discriminative\nprocessing) limiting further improvements of quality-complexity trade-off. To\nerase the drawbacks, we integrate these schemes by proposing a content-aware\nmixer (CAMixer), which assigns convolution for simple contexts and additional\ndeformable window-attention for sparse textures. Specifically, the CAMixer uses\na learnable predictor to generate multiple bootstraps, including offsets for\nwindows warping, a mask for classifying windows, and convolutional attentions\nfor endowing convolution with the dynamic property, which modulates attention\nto include more useful textures self-adaptively and improves the representation\ncapability of convolution. We further introduce a global classification loss to\nimprove the accuracy of predictors. By simply stacking CAMixers, we obtain\nCAMixerSR which achieves superior performance on large-image SR, lightweight\nSR, and omnidirectional-image SR.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Yan Wang", "Yi Liu", "Shijie Zhao", "Junlin Li", "Li zhang"], "category_name": "Image and Video Processing", "all_categories": ["Image and Video Processing", "Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f884"}, "filepath": "data/2312.02244.png", "tags": [], "_media_type": "image", "_rand": 0.9991671924269323, "arXiv_link": "https://arxiv.org/abs/2312.02244", "other_link": "https://luigiriz.github.io/geoze-website/", "title": "Geometrically-informed aggregation for zero-shot point cloud understanding", "abstract": "Zero-shot 3D point cloud understanding can be achieved via 2D Vision-Language\nModels (VLMs). Existing strategies directly map Vision-Language Models from 2D\npixels of rendered or captured views to 3D points, overlooking the inherent and\nexpressible point cloud geometric structure. Geometrically similar or close\nregions can be exploited for bolstering point cloud understanding as they are\nlikely to share semantic information. To this end, we introduce the first\ntraining-free aggregation technique that leverages the point cloud's 3D\ngeometric structure to improve the quality of the transferred Vision-Language\nModels. Our approach operates iteratively, performing local-to-global\naggregation based on geometric and semantic point-level reasoning. We benchmark\nour approach on three downstream tasks, including classification, part\nsegmentation, and semantic segmentation, with a variety of datasets\nrepresenting both synthetic/real-world, and indoor/outdoor scenarios. Our\napproach achieves new state-of-the-art results in all benchmarks. Our approach\noperates iteratively, performing local-to-global aggregation based on geometric\nand semantic point-level reasoning. 
Code and dataset are available at\nhttps://luigiriz.github.io/geoze-website/", "keywords": ["Multimodal models and vision-language models", "Scene analysis and understanding"], "authors_list": ["Guofeng Mei", "Luigi Riz", "Yiming Wang", "Fabio Poiesi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f885"}, "filepath": "data/2306.11369.png", "tags": [], "_media_type": "image", "_rand": 0.9998405962059664, "arXiv_link": "https://arxiv.org/abs/2306.11369", "other_link": "https://github.com/jbwang1997/CrossKD.", "title": "CrossKD: Cross-Head Knowledge Distillation for Dense Object Detection", "abstract": "Knowledge Distillation (KD) has been validated as an effective model\ncompression technique for learning compact object detectors. Existing\nstate-of-the-art KD methods for object detection are mostly based on feature\nimitation. In this paper, we present a general and effective prediction\nmimicking distillation scheme, called CrossKD, which delivers the intermediate\nfeatures of the student's detection head to the teacher's detection head. The\nresulting cross-head predictions are then forced to mimic the teacher's\npredictions. This manner relieves the student's head from receiving\ncontradictory supervision signals from the annotations and the teacher's\npredictions, greatly improving the student's detection performance. Moreover,\nas mimicking the teacher's predictions is the target of KD, CrossKD offers more\ntask-oriented information in contrast with feature imitation. On MS COCO, with\nonly prediction mimicking losses applied, our CrossKD boosts the average\nprecision of GFL ResNet-50 with 1x training schedule from 40.2 to 43.7,\noutperforming all existing KD methods. In addition, our method also works well\nwhen distilling detectors with heterogeneous backbones. Code is available at\nhttps://github.com/jbwang1997/CrossKD.", "keywords": [], "authors_list": ["JiaBao Wang", "yuming chen", "Zhaohui Zheng", "Xiang Li", "Ming-Ming Cheng", "Qibin Hou"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f886"}, "filepath": "data/2312.09523.png", "tags": [], "_media_type": "image", "_rand": 0.9996174523864108, "arXiv_link": "https://arxiv.org/abs/2312.09523", "other_link": "", "title": "DriveTrack: A Benchmark for Long-Range Point Tracking in Real-World Videos", "abstract": "This paper presents DriveTrack, a new benchmark and data generation framework\nfor long-range keypoint tracking in real-world videos. DriveTrack is motivated\nby the observation that the accuracy of state-of-the-art trackers depends\nstrongly on visual attributes around the selected keypoints, such as texture\nand lighting. The problem is that these artifacts are especially pronounced in\nreal-world videos, but these trackers are unable to train on such scenes due to\na dearth of annotations. DriveTrack bridges this gap by building a framework to\nautomatically annotate point tracks on autonomous driving datasets. We release\na dataset consisting of 1 billion point tracks across 24 hours of video, which\nis seven orders of magnitude greater than prior real-world benchmarks and on\npar with the scale of synthetic benchmarks. 
DriveTrack unlocks new use cases\nfor point tracking in real-world videos. First, we show that fine-tuning\nkeypoint trackers on DriveTrack improves accuracy on real-world scenes by up to\n7%. Second, we analyze the sensitivity of trackers to visual artifacts in real\nscenes and motivate the idea of running assistive keypoint selectors alongside\ntrackers.", "keywords": ["Efficient and scalable vision"], "authors_list": ["Arjun Balasingam", "Joseph Chandler", "Chenning Li", "Zhoutong Zhang", "Hari Balakrishnan"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f887"}, "filepath": "data/2311.18405.png", "tags": [], "_media_type": "image", "_rand": 0.9998614725700659, "arXiv_link": "https://arxiv.org/abs/2311.18405", "other_link": "", "title": "CAT-DM: Controllable Accelerated Virtual Try-on with Diffusion Model", "abstract": "Generative Adversarial Networks (GANs) dominate the research field in\nimage-based virtual try-on, but have not resolved problems such as unnatural\ndeformation of garments and the blurry generation quality. While the generative\nquality of diffusion models is impressive, achieving controllability poses a\nsignificant challenge when applying it to virtual try-on and multiple denoising\niterations limit its potential for real-time applications. In this paper, we\npropose Controllable Accelerated virtual Try-on with Diffusion Model (CAT-DM).\nTo enhance the controllability, a basic diffusion-based virtual try-on network\nis designed, which utilizes ControlNet to introduce additional control\nconditions and improves the feature extraction of garment images. In terms of\nacceleration, CAT-DM initiates a reverse denoising process with an implicit\ndistribution generated by a pre-trained GAN-based model. Compared with previous\ntry-on methods based on diffusion models, CAT-DM not only retains the pattern\nand texture details of the inshop garment but also reduces the sampling steps\nwithout compromising generation quality. Extensive experiments demonstrate the\nsuperiority of CAT-DM against both GANbased and diffusion-based methods in\nproducing more realistic images and accurately reproducing garment patterns.", "keywords": ["Image and video generation and manipulation", "Deep learning architectures and techniques"], "authors_list": ["Jianhao Zeng", "Dan Song", "Weizhi Nie", "Hongshuo Tian", "Tongtong Wang", "An-An Liu"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f888"}, "filepath": "data/2312.04551.png", "tags": [], "_media_type": "image", "_rand": 0.9992406505750028, "arXiv_link": "https://arxiv.org/abs/2312.04551", "other_link": "https://chuanxiaz.com/free3d/.", "title": "Free3D: Consistent Novel View Synthesis without 3D Representation", "abstract": "We introduce Free3D, a simple accurate method for monocular open-set novel\nview synthesis (NVS). Similar to Zero-1-to-3, we start from a pre-trained 2D\nimage generator for generalization, and fine-tune it for NVS. Compared to other\nworks that took a similar approach, we obtain significant improvements without\nresorting to an explicit 3D representation, which is slow and memory-consuming,\nand without training an additional network for 3D reconstruction. 
Our key\ncontribution is to improve the way the target camera pose is encoded in the\nnetwork, which we do by introducing a new ray conditioning normalization (RCN)\nlayer. The latter injects pose information in the underlying 2D image generator\nby telling each pixel its viewing direction. We further improve multi-view\nconsistency by using light-weight multi-view attention layers and by sharing\ngeneration noise between the different views. We train Free3D on the Objaverse\ndataset and demonstrate excellent generalization to new categories in new\ndatasets, including OmniObject3D and GSO. The project page is available at\nhttps://chuanxiaz.com/free3d/.", "keywords": ["Deep learning architectures and techniques", "Image and video generation and manipulation"], "authors_list": ["Chuanxia Zheng", "Andrea Vedaldi"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f889"}, "filepath": "data/2401.05335v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992252136818296, "arXiv_link": "https://arxiv.org/html/2401.05335v1", "other_link": "https://mohamad-shahbazi.github.io/inserf.", "title": "InNeRF360: Text-Guided 3D-Consistent Object Inpainting on 360-degree Neural Radiance Fields", "abstract": "We introduce InseRF, a novel method for generative object insertion in the\nNeRF reconstructions of 3D scenes. Based on a user-provided textual description\nand a 2D bounding box in a reference viewpoint, InseRF generates new objects in\n3D scenes. Recently, methods for 3D scene editing have been profoundly\ntransformed, owing to the use of strong priors of text-to-image diffusion\nmodels in 3D generative modeling. Existing methods are mostly effective in\nediting 3D scenes via style and appearance changes or removing existing\nobjects. Generating new objects, however, remains a challenge for such methods,\nwhich we address in this study. Specifically, we propose grounding the 3D\nobject insertion to a 2D object insertion in a reference view of the scene. The\n2D edit is then lifted to 3D using a single-view object reconstruction method.\nThe reconstructed object is then inserted into the scene, guided by the priors\nof monocular depth estimation methods. We evaluate our method on various 3D\nscenes and provide an in-depth analysis of the proposed components. Our\nexperiments with generative insertion of objects in several 3D scenes indicate\nthe effectiveness of our method compared to the existing methods. InseRF is\ncapable of controllable and 3D-consistent object insertion without requiring\nexplicit 3D information as input. 
Please visit our project page at\nhttps://mohamad-shahbazi.github.io/inserf.", "keywords": ["Image and video generation and manipulation"], "authors_list": ["Dongqing Wang", "Tong Zhang", "Alaa Abboud", "Sabine S\u00fcsstrunk"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Graphics", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}},{"_id": {"$oid": "6669f098da8041005727f88a"}, "filepath": "data/2402.18919.png", "tags": [], "_media_type": "image", "_rand": 0.999088495854665, "arXiv_link": "https://arxiv.org/abs/2402.18919", "other_link": "", "title": "Decompose-and-Compose: A Compositional Approach to Mitigating Spurious Correlation", "abstract": "While standard Empirical Risk Minimization (ERM) training is proven effective\nfor image classification on in-distribution data, it fails to perform well on\nout-of-distribution samples. One of the main sources of distribution shift for\nimage classification is the compositional nature of images. Specifically, in\naddition to the main object or component(s) determining the label, some other\nimage components usually exist, which may lead to the shift of input\ndistribution between train and test environments. More importantly, these\ncomponents may have spurious correlations with the label. To address this\nissue, we propose Decompose-and-Compose (DaC), which improves robustness to\ncorrelation shift by a compositional approach based on combining elements of\nimages. Based on our observations, models trained with ERM usually highly\nattend to either the causal components or the components having a high spurious\ncorrelation with the label (especially in datapoints on which models have a\nhigh confidence). In fact, according to the amount of spurious correlation and\nthe easiness of classification based on the causal or non-causal components,\nthe model usually attends to one of these more (on samples with high\nconfidence). Following this, we first try to identify the causal components of\nimages using class activation maps of models trained with ERM. Afterward, we\nintervene on images by combining them and retraining the model on the augmented\ndata, including the counterfactual ones. Along with its high interpretability,\nthis work proposes a group-balancing method by intervening on images without\nrequiring group labels or information regarding the spurious features during\ntraining. The method has an overall better worst group accuracy compared to\nprevious methods with the same amount of supervision on the group labels in\ncorrelation shift.", "keywords": [], "authors_list": ["Fahimeh Hosseini Noohdani", "Parsa Hosseini", "Aryan Yazdan Parast", "Hamidreza Araghi", "Mahdieh Baghshah"], "category_name": "Computer Vision and Pattern Recognition", "all_categories": ["Computer Vision and Pattern Recognition", "Machine Learning"], "_dataset_id": {"$oid": "6669f097da8041005727ef3f"}}]} \ No newline at end of file