ArXivGPT / @ArXivGPT (RSS Feed)
Twitter feed for: @ArXivGPT. Generated by nitter.moomoo.me https://nitter.moomoo.me/ArXivGPT

📛 Evidence of Meaning in Language Models Trained on Programs

🧠 This paper shows that language models can learn meaning from corpora of programs, and it offers a framework for studying how meaning is acquired and represented in them.

🐦 11

❤️ 1.5K

🔗 arxiv.org/pdf/2305.11169.pdf (https://arxiv.org/pdf/2305.11169.pdf)

https://nitter.moomoo.me/ArXivGPT/status/1660540772502863872#m

📛 Sparks of Artificial General Intelligence: Early experiments with GPT-4

🧠 GPT-4 shows near-human performance in multiple domains, possibly indicating early artificial general intelligence.

🐦 1K

❤️ 19.6K

✍️ @erichorvitz (https://nitter.moomoo.me/erichorvitz)

🔗 arxiv.org/pdf/2303.12712.pdf (https://arxiv.org/pdf/2303.12712.pdf)

https://nitter.moomoo.me/ArXivGPT/status/1660540770950995968#m

📛 Any-to-Any Generation via Composable Diffusion

🧠 CoDi is a versatile AI model that generates any combination of output modalities, such as language, image, video, or audio, from various inputs while maintaining high generation quality.

🐦 7

❤️ 177

🔗 arxiv.org/pdf/2305.11846.pdf (https://arxiv.org/pdf/2305.11846.pdf)

https://nitter.moomoo.me/ArXivGPT/status/1660540769323692032#m

📛 Neural Network Architecture Beyond Width and Depth

🧠 NestNet extends network architecture beyond width and depth with a third dimension, height, achieving greater expressiveness and accuracy for approximating Lipschitz continuous functions than standard width-depth networks.

🐦 8

❤️ 6

🔗 arxiv.org/pdf/2205.09459.pdf (https://arxiv.org/pdf/2205.09459.pdf)

https://nitter.moomoo.me/ArXivGPT/status/1660540771689193473#m

📛 Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold

🧠 DragGAN enables precise image manipulation and realistic outcomes in generative adversarial networks.

🐦 63

❤️ 6.2K

✍️ @XingangP (https://nitter.moomoo.me/XingangP)

🔗 arxiv.org/pdf/2305.10973.pdf (https://arxiv.org/pdf/2305.10973.pdf)

https://nitter.moomoo.me/ArXivGPT/status/1660540770166644736#m

📛 LIMA: Less Is More for Alignment

🧠 LIMA, a 65B-parameter model fine-tuned on only 1,000 curated prompts and responses, generalizes well to unseen tasks, suggesting that pretraining imparts most of a model's knowledge and that minimal tuning suffices for quality results.

🐦 7

❤️ 467

🔗 arxiv.org/pdf/2305.11206.pdf (https://arxiv.org/pdf/2305.11206.pdf)

https://nitter.moomoo.me/ArXivGPT/status/1660540768501608450#m

📛 Towards Expert-Level Medical Question Answering with Large Language Models

🧠 Med-PaLM 2 achieves near-physician accuracy (86.5%) in medical question answering.

🐦 47

❤️ 4K

✍️ @ymatias (https://nitter.moomoo.me/ymatias), @thekaransinghal (https://nitter.moomoo.me/thekaransinghal), @vivnat (https://nitter.moomoo.me/vivnat), @alan_karthi (https://nitter.moomoo.me/alan_karthi)

🔗 arxiv.org/pdf/2305.09617.pdf (https://arxiv.org/pdf/2305.09617.pdf)

https://nitter.moomoo.me/ArXivGPT/status/1660540767687921665#m

📛 DarkBERT: A Language Model for the Dark Side of the Internet

🧠 DarkBERT, pretrained on Dark Web data, outperforms existing language models on Dark Web tasks and aids analysis of the domain's linguistic features.

🐦 39

❤️ 511

✍️ @EugeneOnNLP (https://nitter.moomoo.me/EugeneOnNLP)

🔗 arxiv.org/pdf/2305.08596.pdf (https://arxiv.org/pdf/2305.08596.pdf)

https://nitter.moomoo.me/ArXivGPT/status/1660540766068916224#m

📛 Tree of Thoughts: Deliberate Problem Solving with Large Language Models

🧠 The Tree of Thoughts framework improves language models' problem solving by treating coherent units of text ("thoughts") as intermediate steps and searching over multiple reasoning paths.

🐦 34

❤️ 2.5K

✍️ @ShunyuYao12 (https://nitter.moomoo.me/ShunyuYao12)

🔗 arxiv.org/pdf/2305.10601.pdf (https://arxiv.org/pdf/2305.10601.pdf)

https://nitter.moomoo.me/ArXivGPT/status/1660540766882594818#m
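The search idea behind Tree of Thoughts can be sketched as a toy, model-free example. In the paper, a language model proposes and scores candidate "thoughts"; here a hypothetical digit-summing task stands in, with placeholder `expand` and `score` functions that are illustrative assumptions, not the paper's method:

```python
# Toy sketch of Tree-of-Thoughts-style search: expand partial "thoughts"
# level by level, keeping only the top-k candidates at each depth (beam search).
# Task (placeholder): build a list of digits 1-3 whose sum hits a target.

def expand(state):
    """Propose successor thoughts (here: append one digit 1-3)."""
    return [state + [d] for d in (1, 2, 3)]

def score(state, target):
    """Heuristic value of a partial thought (closer to target is better)."""
    return -abs(target - sum(state))

def tree_of_thoughts(target, depth=4, beam=2):
    frontier = [[]]  # start from the empty thought sequence
    for _ in range(depth):
        candidates = [s for state in frontier for s in expand(state)]
        # Keep the `beam` highest-scoring partial solutions.
        candidates.sort(key=lambda s: score(s, target), reverse=True)
        frontier = candidates[:beam]
        if any(sum(s) == target for s in frontier):
            break
    return max(frontier, key=lambda s: score(s, target))

best = tree_of_thoughts(target=7)
print(best, sum(best))  # a digit sequence summing to 7
```

The pruning step is what distinguishes this from exhaustive search: only the most promising partial reasoning paths are kept alive, which is the role the model's self-evaluation plays in the actual framework.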

"Improving Recommendation System Serendipity Through Lexicase Selection"

Link: arxiv.org/pdf/2305.11044v1 (https://arxiv.org/pdf/2305.11044v1)

https://nitter.moomoo.me/ArXivGPT/status/1659760906581180416#m

"Learning Restoration is Not Enough: Transfering Identical Mapping for Single-Image Shadow Removal"

Cat: cs.CV

Link: arxiv.org/pdf/2305.10640v1 (https://arxiv.org/pdf/2305.10640v1)

https://nitter.moomoo.me/ArXivGPT/status/1659759886337736704#m

"PTQD: Accurate Post-Training Quantization for Diffusion Models"

Cat: cs.CV

Link: arxiv.org/pdf/2305.10657v1 (https://arxiv.org/pdf/2305.10657v1)

https://nitter.moomoo.me/ArXivGPT/status/1659759375278559232#m

"Learning Differentially Private Probabilistic Models for Privacy-Preserving Image Generation"

Cat: cs.CV

Link: arxiv.org/pdf/2305.10662v1 (https://arxiv.org/pdf/2305.10662v1)

https://nitter.moomoo.me/ArXivGPT/status/1659758864080965632#m

"Re-thinking Data Availablity Attacks Against Deep Neural Networks"

Link: arxiv.org/pdf/2305.10691v1 (https://arxiv.org/pdf/2305.10691v1)

https://nitter.moomoo.me/ArXivGPT/status/1659757843510349825#m

"Zero-Day Backdoor Attack against Text-to-Image Diffusion Models via Personalization"

Cat: cs.CV

Link: arxiv.org/pdf/2305.10701v1 (https://arxiv.org/pdf/2305.10701v1)

https://nitter.moomoo.me/ArXivGPT/status/1659757333550100480#m

"Exploiting Fine-Grained DCT Representations for Hiding Image-Level Messages within JPEG Images"

Cat: cs.CV

Link: arxiv.org/pdf/2305.06582v1 (https://arxiv.org/pdf/2305.06582v1)

https://nitter.moomoo.me/ArXivGPT/status/1657220530112716802#m

"Hyperbolic Deep Learning in Computer Vision: A Survey"

Cat: cs.CV

Link: arxiv.org/pdf/2305.06611v1 (https://arxiv.org/pdf/2305.06611v1)

https://nitter.moomoo.me/ArXivGPT/status/1657219514663325698#m