Commit aa16698

update the contents

1 parent a092a12

File tree

2 files changed: +121 −29 lines

PaperList/KnowledgeAugmentedPromptList.md

+109-29
@@ -5,35 +5,107 @@
 <div style="line-height:0.2em;">


+[**Retrieval-Augmented Mixture of LoRA Experts for Uploadable Machine Learning**](https://arxiv.org/abs/2406.16989) **2024.06.24**
+
+<font color="gray">Ziyu Zhao, Leilei Gan, Guoyin Wang, Yuwei Hu, Tao Shen, etc.</font>
+
+![](https://img.shields.io/badge/Citations-0-green)
+
+---
+
+[**Enhancing RAG Systems: A Survey of Optimization Strategies for Performance and Scalability**](https://doi.org/10.55041/ijsrem35402) **2024.06.04**
+
+<font color="gray">【International Journal of Scientific Research in Engineering and Management】</font>
+
+![](https://img.shields.io/badge/Citations-0-green)
+
+---
+
+[**Enhancing Noise Robustness of Retrieval-Augmented Language Models with Adaptive Adversarial Training**](https://doi.org/10.48550/arXiv.2405.20978) **2024.05.31**
+
+<font color="gray">Feiteng Fang, Yuelin Bai, Shiwen Ni, Min Yang, Xiaojun Chen, etc. - 【arXiv.org】</font>
+
+![](https://img.shields.io/badge/Citations-1-green)
+
+---
+
+[**Accelerating Inference of Retrieval-Augmented Generation via Sparse Context Selection**](https://doi.org/10.48550/arXiv.2405.16178) **2024.05.25**
+
+<font color="gray">Yun Zhu, Jia-Chen Gu, Caitlin Sikora, Ho Ko, Yinxiao Liu, etc. - 【arXiv.org】</font>
+
+![](https://img.shields.io/badge/Citations-0-green)
+
+---
+
+[**DocReLM: Mastering Document Retrieval with Language Model**](https://doi.org/10.48550/arXiv.2405.11461) **2024.05.19**
+
+<font color="gray">Gengchen Wei, Xinle Pang, Tianning Zhang, Yu Sun, Xun Qian, etc. - 【arXiv.org】</font>
+
+![](https://img.shields.io/badge/Citations-0-green)
+
+---
+
+[**UniRAG: Universal Retrieval Augmentation for Multi-Modal Large Language Models**](https://doi.org/10.48550/arXiv.2405.10311) **2024.05.16**
+
+<font color="gray">Sahel Sharifymoghaddam, Shivani Upadhyay, Wenhu Chen, Jimmy Lin. - 【arXiv.org】</font>
+
+![](https://img.shields.io/badge/Citations-0-green)
+
+---
+
+[**ChatHuman: Language-driven 3D Human Understanding with Retrieval-Augmented Tool Reasoning**](https://doi.org/10.48550/arXiv.2405.04533) **2024.05.07**
+
+<font color="gray">Jing Lin, Yao Feng, Weiyang Liu, Michael J. Black. - 【arXiv.org】</font>
+
+![](https://img.shields.io/badge/Citations-0-green)
+
+---
+
+[**REASONS: A benchmark for REtrieval and Automated citationS Of scieNtific Sentences using Public and Proprietary LLMs**](https://doi.org/10.48550/arXiv.2405.02228) **2024.05.03**
+
+<font color="gray">Deepa Tilwani, Yash Saxena, Ali Mohammadi, Edward Raff, Amit P. Sheth, etc. - 【arXiv.org】</font>
+
+![](https://img.shields.io/badge/Citations-1-green)
+
+---
+
 [**Superposition Prompting: Improving and Accelerating Retrieval-Augmented Generation**](https://arxiv.org/abs/2404.06910) **2024.04.10**

 <font color="gray">Thomas Merth, Qichen Fu, Mohammad Rastegari, Mahyar Najibi</font>

-![](https://img.shields.io/badge/Citations-0-green)
+![](https://img.shields.io/badge/Citations-0-green) ![](https://img.shields.io/badge/Mendeley%20Readers-14-red)

 ---

 [**Untangle the KNOT: Interweaving Conflicting Knowledge and Reasoning Skills in Large Language Models**](https://arxiv.org/abs/2404.03577) **2024.04.04**

 <font color="gray">Yan-Tie Liu, Zijun Yao, Xin Lv, Yuchen Fan, S. Cao, etc.</font>

-![](https://img.shields.io/badge/Citations-0-green)
+![](https://img.shields.io/badge/Citations-0-green) [![](https://img.shields.io/badge/Github%20Stars-1-blue)](https://github.com/thu-keg/knot)

 ---

 [**Unveiling LLMs: The Evolution of Latent Representations in a Temporal Knowledge Graph**](https://arxiv.org/abs/2404.03623) **2024.04.04**

 <font color="gray">Marco Bronzini, Carlo Nicolini, Bruno Lepri, Jacopo Staiano, Andrea Passerini</font>

-![](https://img.shields.io/badge/Citations-0-green)
+![](https://img.shields.io/badge/Citations-0-green) ![](https://img.shields.io/badge/Mendeley%20Readers-6-red)
+
+---
+
+[**JORA: JAX Tensor-Parallel LoRA Library for Retrieval Augmented Fine-Tuning**](https://doi.org/10.48550/arXiv.2403.11366) **2024.03.17**
+
+<font color="gray">Anique Tahir, Lu Cheng, Huan Liu. - 【arXiv.org】</font>
+
+![](https://img.shields.io/badge/Citations-0-green) [![](https://img.shields.io/badge/Github%20Stars-22-blue)](https://github.com/aniquetahir/JORA)

 ---

 [**Retrieval-Augmented Generation for AI-Generated Content: A Survey**](https://arxiv.org/abs/2402.19473) **2024.02.29**

 <font color="gray">Penghao Zhao, Hailin Zhang, Qinhan Yu, Zhengren Wang, Yunteng Geng, etc.</font>

-![](https://img.shields.io/badge/Citations-0-green)
+![](https://img.shields.io/badge/Citations-0-green) ![](https://img.shields.io/badge/Mendeley%20Readers-132-red) [![](https://img.shields.io/badge/Github%20Stars-893-blue)](https://github.com/hymie122/rag-survey)

 ---

@@ -45,6 +117,14 @@

 ---

+[**The Power of Noise: Redefining Retrieval for RAG Systems**](https://doi.org/10.1145/3626772.3657834) **2024.01.26**
+
+<font color="gray">Florin Cuconasu, Giovanni Trappolini, F. Siciliano, Simone Filice, Cesare Campagnano, etc. - 【arXiv.org】</font>
+
+![](https://img.shields.io/badge/Citations-37-green) ![](https://img.shields.io/badge/Mendeley%20Readers-51-red) [![](https://img.shields.io/badge/Github%20Stars-26-blue)](https://github.com/florin-git/The-Power-of-Noise)
+
+---
+
 [**LLM Augmented LLMs: Expanding Capabilities through Composition**](https://doi.org/10.48550/arXiv.2401.02412) **2024.01.04**

 <font color="gray">Rachit Bansal, Bidisha Samanta, Siddharth Dalmia, Nitish Gupta, Shikhar Vashishth, etc. - 【arXiv.org】</font>
@@ -57,7 +137,7 @@

 <font color="gray">Jon Saad-Falcon, O. Khattab, Christopher Potts, Matei Zaharia. - 【arXiv.org】</font>

-![](https://img.shields.io/badge/Citations-1-green) [![](https://img.shields.io/badge/Github%20Stars-368-blue)](https://github.com/stanford-futuredata/ares)
+![](https://img.shields.io/badge/Citations-1-green) [![](https://img.shields.io/badge/Github%20Stars-371-blue)](https://github.com/stanford-futuredata/ares)

 ---

@@ -89,7 +169,7 @@

 <font color="gray">Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, Hannaneh Hajishirzi. - 【arXiv.org】</font>

-![](https://img.shields.io/badge/Citations-13-green)
+![](https://img.shields.io/badge/Citations-13-green) [![](https://img.shields.io/badge/Github%20Stars-1.6k-blue)](https://github.com/AkariAsai/self-rag)

 ---

@@ -105,7 +185,7 @@

 <font color="gray">Yile Wang, Peng Li, Maosong Sun, Yang Liu. - 【Conference on Empirical Methods in Natural Language Processing】</font>

-![](https://img.shields.io/badge/Citations-3-green)
+![](https://img.shields.io/badge/Citations-3-green) [![](https://img.shields.io/badge/Github%20Stars-888-blue)](https://github.com/ruc-nlpir/flashrag)

 ---

@@ -121,7 +201,7 @@

 <font color="gray">Fangyuan Xu, Weijia Shi, Eunsol Choi. - 【arXiv.org】</font>

-![](https://img.shields.io/badge/Citations-16-green) [![](https://img.shields.io/badge/Github%20Stars-55-blue)](https://github.com/carriex/recomp)
+![](https://img.shields.io/badge/Citations-16-green) [![](https://img.shields.io/badge/Github%20Stars-56-blue)](https://github.com/carriex/recomp)

 ---

@@ -137,31 +217,31 @@

 <font color="gray">Ori Yoran, Tomer Wolfson, Ori Ram, Jonathan Berant. - 【arXiv.org】</font>

-![](https://img.shields.io/badge/Citations-17-green)
+![](https://img.shields.io/badge/Citations-17-green) [![](https://img.shields.io/badge/Github%20Stars-53-blue)](https://github.com/oriyor/ret-robust)

 ---

 [**RAGAS: Automated Evaluation of Retrieval Augmented Generation**](https://doi.org/10.48550/arXiv.2309.15217) **2023.09.26**

 <font color="gray">ES Shahul, Jithin James, Luis Espinosa Anke, Steven Schockaert. - 【arXiv.org】</font>

-![](https://img.shields.io/badge/Citations-12-green)
+![](https://img.shields.io/badge/Citations-12-green) [![](https://img.shields.io/badge/Github%20Stars-5.6k-blue)](https://github.com/explodinggradients/ragas)

 ---

 [**Benchmarking Large Language Models in Retrieval-Augmented Generation**](https://doi.org/10.48550/arXiv.2309.01431) **2023.09.04**

 <font color="gray">Jiawei Chen, Hongyu Lin, Xianpei Han, Le Sun. - 【arXiv.org】</font>

-![](https://img.shields.io/badge/Citations-8-green) [![](https://img.shields.io/badge/Github%20Stars-220-blue)](https://github.com/chen700564/RGB)
+![](https://img.shields.io/badge/Citations-8-green) [![](https://img.shields.io/badge/Github%20Stars-222-blue)](https://github.com/chen700564/RGB)

 ---

 [**RaLLe: A Framework for Developing and Evaluating Retrieval-Augmented Large Language Models**](https://doi.org/10.48550/arXiv.2308.10633) **2023.08.21**

 <font color="gray">Yasuto Hoshi, D. Miyashita, Youyang Ng, Kento Tatsuno, Yasuhiro Morioka, etc. - 【Conference on Empirical Methods in Natural Language Processing】</font>

-![](https://img.shields.io/badge/Citations-2-green)
+![](https://img.shields.io/badge/Citations-2-green) [![](https://img.shields.io/badge/Github%20Stars-50-blue)](https://github.com/yhoshi3/ralle)

 ---

@@ -201,7 +281,7 @@

 <font color="gray">Shufan Wang, Yixiao Song, Andrew Drozdov, Aparna Garimella, Varun Manjunatha, etc.</font>

-![](https://img.shields.io/badge/Citations-0-green) ![](https://img.shields.io/badge/Mendeley%20Readers-6-red)
+![](https://img.shields.io/badge/Citations-0-green) ![](https://img.shields.io/badge/Mendeley%20Readers-14-red)

 ---

@@ -281,23 +361,23 @@

 <font color="gray">Zhengbao Jiang, Frank F. Xu, Luyu Gao, Zhiqing Sun, Qian Liu, etc. - 【Conference on Empirical Methods in Natural Language Processing】</font>

-![](https://img.shields.io/badge/Citations-31-green) [![](https://img.shields.io/badge/Github%20Stars-431-blue)](https://github.com/jzbjyb/flare)
+![](https://img.shields.io/badge/Citations-31-green) [![](https://img.shields.io/badge/Github%20Stars-545-blue)](https://github.com/jzbjyb/flare)

 ---

 [**Augmented Large Language Models with Parametric Knowledge Guiding**](https://doi.org/10.48550/arXiv.2305.04757) **2023.05.08**

 <font color="gray">Ziyang Luo, Can Xu, Pu Zhao, Xiubo Geng, Chongyang Tao, etc. - 【arXiv.org】</font>

-![](https://img.shields.io/badge/Citations-19-green)
+![](https://img.shields.io/badge/Citations-19-green) [![](https://img.shields.io/badge/Github%20Stars-36-blue)](https://github.com/chiyeunglaw/lexlip-iccv23)

 ---

 [**Lift Yourself Up: Retrieval-augmented Text Generation with Self Memory**](https://doi.org/10.48550/arXiv.2305.02437) **2023.05.03**

 <font color="gray">Xin Cheng, Di Luo, Xiuying Chen, Lemao Liu, Dongyan Zhao, etc. - 【arXiv.org】</font>

-![](https://img.shields.io/badge/Citations-9-green)
+![](https://img.shields.io/badge/Citations-9-green) [![](https://img.shields.io/badge/Github%20Stars-40-blue)](https://github.com/hannibal046/selfmemory)

 ---

@@ -321,15 +401,15 @@

 <font color="gray">O. Khattab, Keshav Santhanam, Xiang Lisa Li, David Leo Wright Hall, Percy Liang, etc. - 【arXiv.org】</font>

-![](https://img.shields.io/badge/Citations-109-green)
+![](https://img.shields.io/badge/Citations-109-green) [![](https://img.shields.io/badge/Github%20Stars-14.0k-blue)](https://github.com/stanfordnlp/dsp)

 ---

 [**One Embedder, Any Task: Instruction-Finetuned Text Embeddings**](https://doi.org/10.48550/arXiv.2212.09741) **2022.12.19**

 <font color="gray">Hongjin Su, Weijia Shi, Jungo Kasai, Yizhong Wang, Yushi Hu, etc. - 【ArXiv】</font>

-![](https://img.shields.io/badge/Citations-2-green) [![](https://img.shields.io/badge/Github%20Stars-606-blue)](https://github.com/HKUNLP/instructor-embedding)
+![](https://img.shields.io/badge/Citations-2-green) [![](https://img.shields.io/badge/Github%20Stars-1.8k-blue)](https://github.com/HKUNLP/instructor-embedding)

 ---

@@ -353,7 +433,7 @@

 <font color="gray">Zhichao Yang, Shufan Wang, Bhanu Pratap Singh Rawat, Avijit Mitra, Hong Yu. - 【Conference on Empirical Methods in Natural Language Processing】</font>

-![](https://img.shields.io/badge/Citations-2-green) [![](https://img.shields.io/badge/Github%20Stars-28-blue)](https://github.com/whaleloops/KEPT)
+![](https://img.shields.io/badge/Citations-2-green) [![](https://img.shields.io/badge/Github%20Stars-44-blue)](https://github.com/whaleloops/KEPT)

 ---

@@ -385,7 +465,7 @@

 <font color="gray">Shuyan Zhou, Uri Alon, Frank F. Xu, Zhiruo Wang, Zhengbao Jiang, etc.</font>

-![](https://img.shields.io/badge/Citations-4-green) ![](https://img.shields.io/badge/Mendeley%20Readers-24-red) [![](https://img.shields.io/badge/Github%20Stars-164-blue)](https://github.com/shuyanzhou/docprompting)
+![](https://img.shields.io/badge/Citations-4-green) ![](https://img.shields.io/badge/Mendeley%20Readers-50-red) [![](https://img.shields.io/badge/Github%20Stars-230-blue)](https://github.com/shuyanzhou/docprompting)

 ---

@@ -401,39 +481,39 @@

 <font color="gray">Xiang Chen, Lei Li, Ningyu Zhang, Xiaozhuan Liang, Shumin Deng, etc. - 【ArXiv】</font>

-![](https://img.shields.io/badge/Citations-7-green) [![](https://img.shields.io/badge/Github%20Stars-660-blue)](https://github.com/zjunlp/promptkg)
+![](https://img.shields.io/badge/Citations-7-green) [![](https://img.shields.io/badge/Github%20Stars-662-blue)](https://github.com/zjunlp/promptkg)

 ---

 [**Relation Extraction as Open-book Examination: Retrieval-enhanced Prompt Tuning**](https://doi.org/10.1145/3477495.3531746) **2022.05.04**

 <font color="gray">Xiang Chen, Lei Li, Ningyu Zhang, Chuanqi Tan, Fei Huang, etc. - 【Annual International ACM SIGIR Conference on Research and Development in Information Retrieval】</font>

-![](https://img.shields.io/badge/Citations-4-green) ![](https://img.shields.io/badge/Mendeley%20Readers-35-red) [![](https://img.shields.io/badge/Github%20Stars-660-blue)](https://github.com/zjunlp/PromptKG/tree/main/research/RetrievalRE)
+![](https://img.shields.io/badge/Citations-4-green) ![](https://img.shields.io/badge/Mendeley%20Readers-35-red) [![](https://img.shields.io/badge/Github%20Stars-662-blue)](https://github.com/zjunlp/PromptKG/tree/main/research/RetrievalRE)

 ---

 [**Contrastive Demonstration Tuning for Pre-trained Language Models**](https://doi.org/10.48550/arXiv.2204.04392) **2022.04.09**

 <font color="gray">Xiaozhuan Liang, Ningyu Zhang, Siyuan Cheng, Zhen Bi, etc. - 【Conference on Empirical Methods in Natural Language Processing】</font>

-![](https://img.shields.io/badge/Citations-3-green) [![](https://img.shields.io/badge/Github%20Stars-660-blue)](https://github.com/zjunlp/PromptKG/tree/main/research/Demo-Tuning)
+![](https://img.shields.io/badge/Citations-3-green) [![](https://img.shields.io/badge/Github%20Stars-662-blue)](https://github.com/zjunlp/PromptKG/tree/main/research/Demo-Tuning)

 ---

 [**Multi-Stage Prompting for Knowledgeable Dialogue Generation**](https://doi.org/10.48550/arXiv.2203.08745) **2022.03.16**

 <font color="gray">Zihan Liu, M. Patwary, R. Prenger, Shrimai Prabhumoye, Wei Ping, etc. - 【Findings】</font>

-![](https://img.shields.io/badge/Citations-12-green) [![](https://img.shields.io/badge/Github%20Stars-5.1k-blue)](https://github.com/NVIDIA/Megatron-LM)
+![](https://img.shields.io/badge/Citations-12-green) [![](https://img.shields.io/badge/Github%20Stars-9.3k-blue)](https://github.com/NVIDIA/Megatron-LM)

 ---

 [**Training Data is More Valuable than You Think: A Simple and Effective Method by Retrieving from Training Data**](https://doi.org/10.48550/arXiv.2203.08773) **2022.03.16**

 <font color="gray">Shuo Wang, Yichong Xu, Yuwei Fang, Yang Liu, S. Sun, etc. - 【Annual Meeting of the Association for Computational Linguistics】</font>

-![](https://img.shields.io/badge/Citations-21-green) [![](https://img.shields.io/badge/Github%20Stars-103-blue)](https://github.com/microsoft/reina)
+![](https://img.shields.io/badge/Citations-21-green) [![](https://img.shields.io/badge/Github%20Stars-116-blue)](https://github.com/microsoft/reina)

 ---

@@ -457,31 +537,31 @@

 <font color="gray">Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, Maosong Sun. - 【AI Open】</font>

-![](https://img.shields.io/badge/Citations-172-green) ![](https://img.shields.io/badge/Mendeley%20Readers-274-red) [![](https://img.shields.io/badge/Github%20Stars-148-blue)](https://github.com/thunlp/PTR)
+![](https://img.shields.io/badge/Citations-172-green) ![](https://img.shields.io/badge/Mendeley%20Readers-297-red) [![](https://img.shields.io/badge/Github%20Stars-154-blue)](https://github.com/thunlp/PTR)

 ---

 [**Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks**](https://arxiv.org/abs/2005.11401) **2020.05.22**

 <font color="gray">Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, etc. - 【Neural Information Processing Systems】</font>

-![](https://img.shields.io/badge/Citations-551-green) ![](https://img.shields.io/badge/Mendeley%20Readers-719-red) [![](https://img.shields.io/badge/Github%20Stars-101.5k-blue)](https://github.com/huggingface/transformers)
+![](https://img.shields.io/badge/Citations-551-green) ![](https://img.shields.io/badge/Mendeley%20Readers-1.8k-red) [![](https://img.shields.io/badge/Github%20Stars-128.6k-blue)](https://github.com/huggingface/transformers)

 ---

 [**REALM: Retrieval-Augmented Language Model Pre-Training**](https://arxiv.org/abs/2002.08909) **2020.02.10**

 <font color="gray">Kelvin Guu, Kenton Lee, Z. Tung, Panupong Pasupat, Ming-Wei Chang. - 【ArXiv】</font>

-![](https://img.shields.io/badge/Citations-542-green) ![](https://img.shields.io/badge/Mendeley%20Readers-845-red) [![](https://img.shields.io/badge/Github%20Stars-1.4k-blue)](https://github.com/google-research/language/tree/master/language/realm)
+![](https://img.shields.io/badge/Citations-542-green) ![](https://img.shields.io/badge/Mendeley%20Readers-1.1k-red) [![](https://img.shields.io/badge/Github%20Stars-1.6k-blue)](https://github.com/google-research/language/tree/master/language/realm)

 ---

 [**Prompt as a Knowledge Probe for Chinese Spelling Check**](https://doi.org/10.1007/978-3-031-10989-8_41)

 <font color="gray">Kun Peng, Nannan Sun, Jiahao Cao, Rui Liu, Jiaqian Ren, etc. - 【Knowledge Science, Engineering and Management】</font>

-![](https://img.shields.io/badge/Citations-0-green)
+![](https://img.shields.io/badge/Citations-0-green) ![](https://img.shields.io/badge/Mendeley%20Readers-1-red)

 ---

README.md

+12
@@ -99,6 +99,18 @@ In the future, there will likely be two types of people on Earth (perhaps even o

 <!-- 🔥🔥🔥 -->
 ☄️ **EgoAlpha releases TrustGPT, which focuses on reasoning. Trust the GPT with the strongest reasoning abilities for authentic and reliable answers. You can click [here](https://trustgpt.co) or visit the [Playgrounds](./Playground.md) directly to experience it.**

+- **[2024.7.11]**
+  - 🔥🔥🔥 NVIDIA releases paper: [Data, Data Everywhere: A Guide for Pretraining Dataset Construction](https://arxiv.org/pdf/2407.06380)
+
+- **[2024.7.10]**
+  - Paper: [Multi-Object Hallucination in Vision-Language Models](https://arxiv.org/abs/2407.06192)
+
+- **[2024.7.9]**
+  - Paper: [Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision](https://arxiv.org/abs/2407.06189)
+
+- **[2024.7.8]**
+  - Paper: [Temporal Grounding of Activities using Multimodal Large Language Models](https://arxiv.org/abs/2407.06157)
+
 - **[2024.7.7]**
   - 🔥🔥🔥 [WAIC 2024: Turn a requirements document into a product in only 2 minutes; China's large-model development tools make a splash at WAIC](https://www.worldaic.com.cn/)

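Every badge line touched by this commit follows one fixed shields.io markdown pattern: a green citations badge, an optional red Mendeley readers badge, and an optional blue GitHub stars badge linked to the repository. A minimal sketch of a helper that could regenerate such a line when counts change; the function `badge_markdown` is hypothetical and not part of this repository:

```python
def badge_markdown(citations, readers=None, stars=None, repo=None):
    """Compose a paper entry's badge line in the shields.io style used in this list.

    citations: citation count (always shown, green badge)
    readers:   optional Mendeley reader count (red badge)
    stars/repo: optional GitHub star count and "owner/name" slug (blue, linked)
    """
    parts = [f"![](https://img.shields.io/badge/Citations-{citations}-green)"]
    if readers is not None:
        parts.append(f"![](https://img.shields.io/badge/Mendeley%20Readers-{readers}-red)")
    if stars is not None and repo is not None:
        # Linked badge: image nested inside a markdown link to the repo
        parts.append(
            f"[![](https://img.shields.io/badge/Github%20Stars-{stars}-blue)]"
            f"(https://github.com/{repo})"
        )
    return " ".join(parts)


# Reproduces the updated ARES line from this commit
print(badge_markdown(1, stars="371", repo="stanford-futuredata/ares"))
```

Note that shields.io requires spaces in the label to be URL-encoded (`%20`), which is why the badge labels above are written that way.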