arXiv-2024/09-Retrieval Augmented Generation (RAG) and Beyond: A Comprehensive Survey on How to Make your LLMs use External Data More Wisely
Summary:
This survey proposes a categorization of RAG tasks, dividing user queries into four levels: explicit fact queries, implicit fact queries, interpretable rationale queries, and hidden rationale queries. It defines each query level, lists relevant datasets, and summarizes the key challenges and effective techniques for addressing them. Finally, it discusses the three main forms of integrating external data into LLMs: context, small models, and fine-tuning, highlighting their respective strengths and weaknesses and the types of problems each is suited to.
Resource:
Paper information:
Notes:
Numerous studies [1, 2, 3, 4, 5] highlight the challenges and frustrations involved in constructing data-augmented LLM applications based on technologies like RAG and fine-tuning, particularly in specialized domains such as law, healthcare, and manufacturing.
These challenges range widely, from building data pipelines (e.g., data processing and indexing) to leveraging LLM capabilities for complex intelligent reasoning. For example, financial applications often need to understand and exploit high-dimensional time-series data, while in healthcare, medical images or time-series medical records are often essential. Enabling LLMs to understand these diverse forms of data is a recurring challenge. On the other hand, in legal and mathematical applications, LLMs often struggle to grasp long-distance dependencies between different structures. In addition, depending on the specific application domain, there are higher requirements for the interpretability and consistency of LLM responses.
(For finance-related data in CS research, a fairly reliable workflow is to find the domain-specific difficulties unique to that data and then target them directly.)
Through extensive discussions with domain experts and developers, and careful analysis of the challenges they face, we have come to understand that data-augmented LLM applications are not a one-size-fits-all solution. Real-world demands, especially in expert domains, are extremely complex and can vary significantly in their relationship to the given data and in the difficulty of the reasoning required. However, developers are often unaware of these distinctions and end up with a solution riddled with performance pitfalls (like a house that leaks everywhere). Conversely, if we can fully understand the different levels of demand and their unique challenges, we can build applications accordingly and improve them steadily (like building a solid, reliable house step by step).
However, research efforts and existing surveys [6, 7, 8, 9, 10, 11, 12, 13] often focus on only one of these levels or on a single technical topic. This motivated this comprehensive survey, which aims to clearly define the different levels of queries, identify the unique challenges associated with each level (Figure 1), and list relevant work and efforts to address them. The survey is intended to help readers build a global view of data-augmented LLM applications and to serve as a handbook for developing such applications systematically.
It is the third level that truly requires domain-specific knowledge. How to apply this part to financial-domain question answering is an open problem.
Classification criterion: question complexity and the LLM's comprehension ability.
The first two levels, Explicit Facts and Implicit Facts, focus on the retrieval of factual information, whether directly stated or requiring basic inference. These levels challenge the LLM's ability to extract and synthesize data into coherent facts. Conversely, the latter two levels, Interpretable Rationales and Hidden Rationales, shift the focus toward the LLM's capacity to learn and apply the rationales behind the data.
Table 1 shows the classification of common fact-query datasets according to this criterion.
(By organizing our company's datasets, we could build a dataset containing both Explicit and Implicit query types along this classification, then propose new solutions for the problems that surface.)
Use RAG to address the new problems discovered above.
3.3 Retrieval-augmented Generation (RAG)
3.3.1 Data Processing Enhancement
- Multimodal document parsing
  - Option 1: convert the multimodal data into text
  - Option 2: embed the multimodal data directly
- Chunking optimization
  - Many optimization methods exist, including special-purpose chunking schemes for particular problems

3.3.2 Data Retrieval Enhancement
The overall pipeline consists of: data indexing, processing queries, retrieving and matching, re-ranking, and evaluation.
Indexing: there are three main approaches. Sparse retrieval uses specific words to index text segments. In contrast, dense retrieval maps text segments into a dense vector space of features. Hybrid retrieval combines elements of both sparse and dense techniques.
Sparse Retrieval: methods such as TF-IDF and BM25 identify the most representative keywords of each text segment based on their relative frequency. The problem with this approach is that it cannot recognize synonyms; one remedy is to use KNN to find similar keywords, and another is to predict keywords directly from the query and context.
Dense Retrieval: using pre-trained or fine-tuned text encoders to map texts into a dense vector space that aligns with query requirements. LM-based dense retrieval is now the norm, e.g. LLM2Vec.
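The keyword-frequency idea behind sparse retrieval can be sketched in a few lines. This is a hedged, minimal illustration (raw TF times smoothed IDF, not the full BM25 formula with saturation and length normalization), and the toy corpus is invented:

```python
import math
from collections import Counter

def tfidf_scores(query, docs):
    """Score each doc by the summed TF-IDF weight of the query's terms."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()                      # document frequency per term
    for toks in tokenized:
        df.update(set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)              # term frequency in this doc
        score = sum(
            tf[t] * math.log(1 + n / df[t])
            for t in query.lower().split() if t in tf
        )
        scores.append(score)
    return scores

docs = [
    "the cat sat on the mat",
    "dogs chase cats in the park",
    "stock prices rose sharply today",
]
print(tfidf_scores("cat mat", docs))  # only doc 0 scores above zero
```

Note how doc 1 scores zero for the query term "cat" even though it contains "cats": this is exactly the synonym/inflection blindness described above, which dense retrieval is meant to fix.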
Others: Combining sparse retrieval and dense retrieval is an effective method to focus simultaneously on the central theme of text segments and global features. Feng et al. (2023) propose initially determining the knowledge domain needed to answer a query as a fixed area of expertise, and then using dense retrieval to recall supplementary information within this domain [68]. (This study involves domains; worth a closer look.) Many studies show that mixing the two methods helps capture semantic information. Tang et al. (2024) have enhanced the capabilities of an LLM by fine-tuning it for indexing and retrieval, effectively integrating these abilities directly into the LLM. This allows the LLM to autonomously generate data indices and text segments for each query [72, 73]. (Tang's work achieves better indexing and retrieval directly through fine-tuning.)
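One simple way to fuse a sparse and a dense ranking is reciprocal rank fusion; this is a generic sketch rather than the method of any paper cited above, and the ranked lists are invented:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of doc ids into one ranking.

    Each doc's fused score is the sum over lists of 1 / (k + rank);
    k=60 is the commonly used damping constant.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

sparse_ranking = ["d2", "d1", "d3"]   # e.g. from BM25
dense_ranking  = ["d1", "d3", "d4"]   # e.g. from an embedding model
print(reciprocal_rank_fusion([sparse_ranking, dense_ranking]))
```

Documents that rank reasonably high in both lists (here d1) rise to the top, which is why this kind of fusion captures both the keyword theme and the global semantic features.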
Query-Document Alignment: The goal of this step is to align the query with document segments in the external data to identify the best document segment for answering the query. As shown in the figure below, there are three alignment approaches; traditional alignment maps the query and the documents into the same encoding space.
Re-ranking and Correction: After retrieving the top-k text blocks, RAG systems must filter and reorder these segments. Some studies use perplexity or perplexity gain as the ranking metric; others use an LLM as the judge.
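The perplexity-based ranking idea can be illustrated as follows. In a real system, the per-token log-probabilities of the answer would come from the generator LLM conditioned on each candidate segment; the numbers here are invented stand-ins:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(-mean log-probability) over the answer tokens."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical log-probs of the answer tokens when each retrieved
# segment is placed in the context. Lower perplexity means the segment
# makes the answer more predictable, so it is ranked higher.
answer_logprobs_per_segment = {
    "seg_a": [-0.2, -0.1, -0.3],
    "seg_b": [-1.5, -2.0, -1.0],
}
ranked = sorted(answer_logprobs_per_segment,
                key=lambda s: perplexity(answer_logprobs_per_segment[s]))
print(ranked)  # seg_a first: lowest perplexity
```

"Perplexity gain" would compare each of these values against the perplexity of the answer with no retrieved segment in the context.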
Recursive Retrieval or Iterative Retrieval: Considering the inherent limitations in the accuracy of a single retrieval attempt, an effective mitigation strategy is to perform multiple retrievals to progressively address any omissions. Using multiple retrieval rounds avoids the weaknesses of any single attempt, for example with tree-like models or k-means clustering. (Which to use depends mainly on the scenario.)
3.3.3 Response Generation Enhancement
Handling conflicts between retrieved knowledge and the model's internal prior knowledge is also essential [84, 85, 86]. Contradictions among the retrieved data are indeed a problem: when the context is irrelevant or wrong, pre-trained models often produce incorrect results. Some studies show that with carefully designed training data, fine-tuning or pre-training can effectively mitigate irrelevant retrieval noise, relevant retrieval noise, and counterfactual retrieval noise.
4 Implicit Fact Queries (L2)
4.1 Overview
Questions at this level require fetching the relevant content from multiple data sources, plus a degree of reasoning ability.
4.2 Challenges and Solutions
Although this level still retrieves facts, the answer is not stated directly in the text; multiple facts must be combined and a conclusion derived through reasoning.
• Adaptive retrieval volumes: Different questions may require varying numbers of retrieved contexts, and the specific number of retrieved contexts can depend on both the question and the dataset. A fixed number of retrievals may result in either information noise or insufficient information.
• Coordination between reasoning and retrieval: Reasoning can guide the focus of what needs to be retrieved, while the insights gained from retrieved information can iteratively refine reasoning strategies. Addressing these complexities calls for an intelligent integration and selective harnessing of external data, capitalizing on the inherent reasoning prowess of LLMs.
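The adaptive-volume idea can be sketched by cutting the retrieved list at a relative score threshold instead of a fixed k. This is a simple heuristic of my own for illustration, not a method from the survey:

```python
def adaptive_top(scored_docs, min_ratio=0.5):
    """Keep documents whose score is at least min_ratio of the best score.

    Unlike a fixed top-k, the number of documents kept adapts to how
    sharply relevance drops off for this particular question.
    """
    if not scored_docs:
        return []
    ranked = sorted(scored_docs, key=lambda d: d[1], reverse=True)
    best = ranked[0][1]
    return [doc for doc, score in ranked if score >= min_ratio * best]

# A "narrow" question: one segment dominates, so keep only it.
print(adaptive_top([("a", 0.9), ("b", 0.2), ("c", 0.1)]))
# A "broad" question: several comparable segments, so keep all three.
print(adaptive_top([("a", 0.8), ("b", 0.7), ("c", 0.6), ("d", 0.1)]))
```

The same cutoff thus returns one segment for the first question and three for the second, avoiding both noise and insufficiency.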
4.3 Iterative RAG
• Planning-based: Generating a stepwise retrieval plan during the pre-retrieval stage or dynamically within the retrieval process can refine the focus of each retrieval, efficiently guiding the iterative RAG system. For example, ReAct [93] progressively updates the target of each step, reducing the knowledge gap required to answer the question. (Planning-based: generate a stepwise plan and, using the information obtained at each step, progressively retrieve content closer to the answer.)
• Information Gap Filling Based: ITRG [97] introduces an iterative retrieval-generation collaboration framework, generating answers based on existing knowledge and then continuing to retrieve and generate for the unknown parts of the response in subsequent rounds. Similarly, FLARE [50] revisits and modifies low-probability tokens in answers generated in each iteration. On the other hand, Self-RAG [92] fine-tunes a large model to autonomously decide when to search and when to stop searching and start answering questions.
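The retrieve-then-generate loop shared by these methods can be sketched as follows. The retriever, generator, and stopping rule here are toy stubs standing in for real components, not the mechanism of ITRG, FLARE, or Self-RAG specifically:

```python
def iterative_rag(question, retrieve, generate, is_complete, max_rounds=3):
    """Alternate retrieval and generation until the draft answer is
    judged complete or the round budget runs out."""
    context, draft = [], ""
    for _ in range(max_rounds):
        # Retrieve with the question plus the current draft, so each
        # round targets what is still missing (the information gap).
        context += retrieve(question + " " + draft)
        draft = generate(question, context)
        if is_complete(draft):
            break
    return draft

# Toy stubs standing in for a real retriever / LLM / self-check:
kb = {"capital": "Paris is the capital of France."}
retrieve = lambda q: [v for k, v in kb.items() if k in q]
generate = lambda q, ctx: " ".join(ctx) or "I don't know."
is_complete = lambda a: "Paris" in a
print(iterative_rag("What is the capital of France?", retrieve, generate, is_complete))
```

Self-RAG's contribution, in these terms, is to fine-tune the model so that `is_complete` and the decision to call `retrieve` are made by the LLM itself.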
4.4 Graph/Tree Question Answering
Decisions about the current retrieval target are made based on previously recalled information. Answering implicit fact queries requires synthesizing information from multiple references. Graph or tree structures, whether knowledge-based or data-structural, naturally express the relational structure between texts, making them well suited to this kind of retrieval problem.
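Multi-hop collection over such a structure can be sketched as a breadth-first walk from the query's entity; the toy knowledge graph below is invented:

```python
from collections import deque

def multi_hop_facts(graph, start, max_hops=2):
    """Collect (subject, relation, object) facts reachable from `start`
    within max_hops, breadth-first. This supports implicit-fact queries
    whose answer spans several connected references."""
    facts, seen = [], {start}
    queue = deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_hops:
            continue
        for relation, neighbor in graph.get(node, []):
            facts.append((node, relation, neighbor))
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, depth + 1))
    return facts

graph = {
    "CompanyA": [("acquired", "CompanyB")],
    "CompanyB": [("founded_by", "Alice")],
}
# "Who founded the company that CompanyA acquired?" needs two hops:
print(multi_hop_facts(graph, "CompanyA"))
```

Neither fact alone answers the question; the graph walk gathers both so the generator can combine them, which is exactly the synthesis that flat top-k retrieval tends to miss.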
4.6 Discussion on Fact Queries
5 Interpretable Rationale Queries (L3)
5.1 Overview
In this section and the next, we examine queries that require external data to supply the rationale for solving them. These queries demand not only mastery of factual content but also the ability to understand and apply domain-specific rationales closely tied to the data's context. Based on the nature of the rationales involved, we divide these queries into two categories, interpretable rationale queries and hidden rationale queries, as shown in Figure 4.
Interpretable rationale queries represent a relatively simple category among applications that rely on external data to provide rationales. The auxiliary data for these query types usually includes a clear explanation of the thought process used to solve the problem. The data can be organized in several forms:
Textual descriptions are the most common form of presenting interpretable rationales. These may include professional or official documents such as manuals or guidelines, as well as domain-specific handbooks or operating guides. These texts articulate the reasoning processes that drive decision-making in complex scenarios. For example, documents like FDA guidance for pharmaceutical plants or physicians' medication guides offer insight into how experts such as FDA officials or doctors handle specific cases. (This is also the problem most domain adaptation needs to solve.)
Structured instructions: More explicit reasoning relationships or decision paths may be presented in a structured format. These rationales can be understood as text-conditioned Moore machines or text-conditioned Mealy machines. In computation theory, a Moore machine is a finite-state machine whose output value is determined solely by its current state. The conditions governing state transitions are usually expressed in text, which requires interpretation by LLMs, unlike traditional programs running on native code. For example, consider a customer-support agent that follows a manual to handle users' product-replacement or refund requests. Similarly, a Mealy machine is a finite-state machine whose output value is determined jointly by the current state and the input. The difference here is that an action (such as an API call) is determined not only by the state but also by the text message associated with the preceding state transition. Naturally, these domain-specific rationales can be represented in formats such as workflows, decision trees, or pseudocode. (Information like operating manuals seems closest to this?)
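A text-conditioned Moore machine like the refund-handling example can be sketched as a state table whose transition conditions are natural-language checks. Here the checks are reduced to keyword matching for the sake of a runnable toy; in a real system an LLM would judge whether the user's free-form message satisfies each textual condition:

```python
# Each state has a fixed output (Moore: output depends only on the
# current state) and text-conditioned transitions. The keyword checks
# stand in for LLM judgments of free-form user messages.
workflow = {
    "start": {
        "output": "How can I help you today?",
        "transitions": [("refund", "verify"), ("replace", "verify")],
    },
    "verify": {
        "output": "Please provide your order number.",
        "transitions": [("order", "resolve")],
    },
    "resolve": {"output": "Your request has been processed.", "transitions": []},
}

def step(state, user_message):
    """Advance the machine on one user message; unmatched input stays put."""
    for keyword, nxt in workflow[state]["transitions"]:
        if keyword in user_message.lower():
            return nxt
    return state

state = "start"
for msg in ["I want a refund", "My order number is 123"]:
    state = step(state, msg)
    print(workflow[state]["output"])
```

A Mealy-machine variant would attach the output (e.g., an API call) to the transition itself rather than to the state, so the action depends on both the state and the triggering message.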
Here are some examples of queries at this level:
• How should a patient with chest pain and specific symptom descriptions be diagnosed and treated? (given a chest pain management guideline)
• How to respond to a user’s question in a real-life scenario? (given a customer service workflow)
5.2 Challenges and Solutions
In the realm of interpretable rationale queries, an additional challenge is integrating domain-specific rationales into LLMs in a comprehensible manner. The primary challenges are as follows:
• Prompt Optimization Costs: The process of optimizing prompts is marked by high time and computational demands. Distinct queries demand tailored background knowledge and decision-making criteria, necessitating diverse examples. While manually designed prompts can be highly effective, they are labor-intensive and time-consuming. Furthermore, training models to generate tailored prompts for various queries incurs significant computational overhead. (In this situation, few-shot prompting alone really cannot satisfy such diverse requirements.)
• Limited interpretability: The impact of prompts on LLMs is opaque. In many cases, access to the internal parameters of LLMs is restricted, complicating efforts to determine the impact of various prompts on these models. This lack of transparency hinders our ability to consistently understand and verify the interpretability of LLM responses to different prompts.
5.3 Prompt Tuning
For interpretable rationale queries, the key problem is how to effectively integrate the rationales supplied by external data into LLMs and ensure the models can accurately follow and react according to them. Text2MDT [112] offers a feasible demonstration, introducing two methods for automatically extracting medical decision trees from medical guidelines and textbooks. This process clarifies the logical chains in lengthy medical texts, making them easier to understand. Similarly, MedDM [113] developed a clinical guidance tree format executable by LLMs, proposing a methodology for reasoning over these executable CGTs and a framework for multi-turn dialogue between patients and LLMs. InstructRec [114] aims to exploit LLM capabilities in recommender systems, designing a universal format that describes users' preferences, intentions, task forms, and context in natural language, thereby creating a high-performance, language-based recommender system. (InstructRec looks interesting.)
Integrating rationales directly into LLMs as natural-language instructions does not necessarily yield the best performance, and manually designing prompts can be time-consuming. To address this, prompt tuning techniques become essential for strengthening LLMs' ability to follow specific rationales. One effective approach is reinforcement learning, as shown in the TEMPERA framework [115], which designs an action space of finite instructions, examples, and verbalizers for RL over prompts. Here, the probability of the LLM producing a correct response serves as the reward, guiding the model to discover the optimal prompt configuration on the dataset. Similarly, RLPrompt [116] adopts a reinforcement-learning approach, training an adapter that helps a smaller language model generate optimal prompts based on relative feedback.
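Stripped of the RL machinery, the search that TEMPERA and RLPrompt automate can be caricatured as: generate candidate prompt variants, score each with a task reward, and keep the best. The one-shot greedy version below is my own stand-in for the RL loop, and the reward function is a stub, not a real LLM-based scorer:

```python
def best_prompt(base_prompt, edits, reward):
    """One-shot stand-in for RL prompt tuning: score each candidate
    edit appended to the base prompt and keep the highest-reward one."""
    candidates = [base_prompt] + [base_prompt + " " + e for e in edits]
    return max(candidates, key=reward)

# Stub reward: pretend the task benefits from stepwise instructions.
# In TEMPERA/RLPrompt the reward is the LLM's probability of a correct answer.
reward = lambda p: float("step by step" in p)
edits = ["Think step by step.", "Answer briefly."]
print(best_prompt("Solve:", edits, reward))
```

The real methods differ in that the candidate space is explored sequentially by a learned policy rather than enumerated, which is what makes the computational overhead mentioned above significant.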
5.4 CoT Prompting
I won't go into much detail on this part.
6 Hidden Rationale Queries (L4)
6.1 Overview
This part concerns domain-specific reasoning. Hidden rationale queries involve domain-specific reasoning methods that may not be explicitly described and are too numerous to enumerate exhaustively. These rationales often span such diverse content that they cannot be fully explored within a typical context window, and they may lack explicit statements, representing domain expertise implicit in the data. Such data may include, but is not limited to:
Here are some examples of queries at this level:
• How will the economic situation affect the company's future development? (given a collection of financial reports, with economic and financial rationale required) (What I have in mind is not something at this scale, but rather the challenges of generating more personalized answers based on user data; that isn't covered here. Something like personalized RAG? Could that be a viable direction?)
• How to achieve 24 points using the numbers 5, 5, 5, and 1? (given a series of 24-point game examples and corresponding answers.)
• Does Afghanistan permit a parent to confer his or her citizenship on a child born abroad? (given the GLOBALCIT citizenship law dataset [136])
6.2 Challenges and Solutions
The construction of data-augmented LLM applications is significantly challenged by hidden rationale queries, with primary difficulties manifesting in the following areas:
• Logical retrieval: For questions involving hidden rationales, the helpfulness of external data does not simply depend on entity-level or semantic similarity, but rather on logical congruence or thematic alignment. Standard retrieval methods often struggle to capture the true target of the query or to identify text segments with logical similarities based on the problem presented. This necessitates the development of more sophisticated retrieval algorithms that can parse and identify underlying logical structures rather than relying solely on superficial textual similarities. (Identifying underlying logical structures... that's quite hard.)
• Data insufficiency: Fundamentally, external data may not explicitly contain the guidance or answers relevant to the current query. Instead, relevant information is often embedded in dispersed knowledge or illustrated through examples. This indirect presentation demands robust capabilities in data interpretation and synthesis, requiring LLMs to effectively derive coherent answers from fragmented or tangentially related data sources. Such challenges underscore the imperative for sophisticated data integration and reasoning capabilities within LLM frameworks to navigate the complexities of hidden rationale queries effectively. (Interpretation when the data is insufficient.)
Three approaches to injecting domain data.
Model Graph:
Result:
Thoughts:
I could first examine the dataset and compute the percentage of each question type, e.g. X% are L2. From observing the data, derive research topics, then check whether those topics are already well solved today.
Next Reading: