
Merge pull request #61 from topoteretes/moving_notebooks
Added docs
Vasilije1990 authored Mar 22, 2024
2 parents d45f8ee + 1717791 commit d516da3
Showing 5 changed files with 32 additions and 0 deletions.
6 changes: 6 additions & 0 deletions docs/concepts/graph_data_models.md
@@ -0,0 +1,6 @@
Graph data models are fundamental structures used to represent and store data in the form of graphs, which consist of nodes (or vertices) and edges (or links). This model is particularly effective for illustrating relationships and connections among various data entities, making it invaluable in domains such as social networks, recommendation systems, logistics, biological networks, and more. Here is an overview of the key concepts:

Key Concepts:
- Nodes (Vertices): Represent entities or objects within the graph, such as people in a social network, stations in a transportation map, or proteins in biological networks.
- Edges (Links): Depict the relationships or interactions between nodes. Edges can be directed (indicating a one-way relationship) or undirected (indicating a mutual relationship).
- Properties: Both nodes and edges can have properties (key-value pairs) that provide additional information, such as weights, types, or other attributes relevant to the application.
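
To make this concrete, here is a minimal sketch of a property graph in Python using the networkx library; the node names, relations, and attributes are illustrative only, not part of cognee:

```python
import networkx as nx

# A small property graph: nodes and edges both carry key-value properties.
graph = nx.DiGraph()  # directed edges; use nx.Graph() for mutual relationships

# Nodes represent entities, with properties attached as attributes.
graph.add_node("alice", type="person", age=34)
graph.add_node("bob", type="person", age=29)
graph.add_node("paris", type="city", country="France")

# Edges represent relationships; direction, type, and weight are edge properties.
graph.add_edge("alice", "bob", relation="follows", weight=0.8)
graph.add_edge("alice", "paris", relation="lives_in")

# Traverse the graph: who does Alice connect to, and how?
for _, target, props in graph.edges("alice", data=True):
    print(f"alice -[{props['relation']}]-> {target}")
```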
8 changes: 8 additions & 0 deletions docs/concepts/llm_structured_outputs.md
@@ -0,0 +1,8 @@
Function calling in the context of Large Language Models (LLMs) like GPT-3, GPT-4, and their derivatives extends beyond traditional programming paradigms. In this scenario, function calling involves prompting the LLM to simulate the behavior of a function within its generated output. This capability allows users to interact with LLMs in a structured way, effectively requesting specific operations or information retrieval tasks by framing their prompts as function calls.

How LLM Function Calling Works:

1. Prompt Construction: The user constructs a prompt that mimics a function call in programming. This prompt includes the "name" of the function (often a description of the task) and the "arguments" (the specific inputs or conditions for the task). For example, a prompt might look like "Generate a summary for the following article:" followed by the article text.
2. LLM Interpretation: The LLM interprets this structured prompt and understands it as a request to perform a specific task, similar to how a function in a program would be invoked. The model then generates an output that aligns with the expected behavior of the function described in the prompt.
3. Parameters and Outputs: The parameters are the details provided in the prompt, and the output is the generated text that the model produces in response. This output is intended to fulfill the function's "purpose" as inferred from the prompt.
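
As an illustration of this prompt-level pattern (not cognee's own API), here is a minimal Python sketch; `call_llm` and `summarize_article` are hypothetical placeholders for whichever model client and task are actually used:

```python
import json

def build_function_prompt(function_name: str, **arguments) -> str:
    """Frame a task as a function call so the model returns structured output."""
    return (
        f"You are simulating the function `{function_name}`.\n"
        f"Arguments: {json.dumps(arguments)}\n"
        "Respond only with a JSON object containing the function's return value."
    )

prompt = build_function_prompt(
    "summarize_article",
    text="The Eiffel Tower is in Paris. It was completed in 1889 ...",
    max_sentences=2,
)

# `call_llm` stands in for the actual client (OpenAI, Anthropic, a local model, ...).
# response = call_llm(prompt)
# result = json.loads(response)   # e.g. {"summary": "..."}
```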
1 change: 1 addition & 0 deletions docs/concepts/multilayer_graph_networks.md
@@ -0,0 +1 @@
A multilayer graph network is a sophisticated structure used to model complex systems where entities and their interactions can exist in multiple layers, each representing a different type of relationship, context, or domain. Unlike traditional graphs that capture connections in a single, uniform setting, multilayer graphs provide a more nuanced framework, allowing for the representation of diverse interconnections and dependencies across various dimensions or layers.
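
One simple way to sketch such a structure is to tag each edge with the layer it belongs to; the example below uses networkx and purely hypothetical entities:

```python
import networkx as nx

# Two layers over the same set of people: "social" and "work" relationships.
G = nx.MultiGraph()
G.add_edge("alice", "bob", layer="social", relation="friends")
G.add_edge("alice", "bob", layer="work", relation="coworkers")
G.add_edge("bob", "carol", layer="work", relation="manages")

# Project out a single layer when analysing one kind of relationship.
work_layer = nx.Graph()
work_layer.add_edges_from(
    (u, v, d) for u, v, d in G.edges(data=True) if d["layer"] == "work"
)
print(list(work_layer.edges(data=True)))
```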
11 changes: 11 additions & 0 deletions docs/concepts/propositions.md
@@ -0,0 +1,11 @@

Propositions are fundamental elements in the study of logic, linguistics, and natural language processing. They represent atomic expressions within texts that encapsulate distinct factoids, conveying specific pieces of information. In essence, a proposition is a declarative statement that can either be true or false, but not both simultaneously.
This binary nature makes propositions crucial for logical deductions, reasoning, and the construction of arguments.

In a natural language context, propositions are presented in a concise and self-contained format.
They are designed to convey information clearly and unambiguously, making them easily interpretable by humans and computable by machines. For example, the statement "The Eiffel Tower is in Paris" is a proposition because it presents a specific fact about the location of the Eiffel Tower, and its truth value can be assessed as either true or false.

The concept of propositions extends beyond mere statements of fact to include assertions about concepts, relationships, and conditions.
For instance, "If it rains, the ground gets wet" is a conditional proposition that establishes a cause-and-effect relationship between two events.

In computational linguistics and natural language processing, propositions are vital for tasks such as information extraction, knowledge representation, and question answering.
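
To give a flavour of how propositions can be handled programmatically (a sketch only, not cognee's actual data model), they might be represented as small, self-contained records:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Proposition:
    """A self-contained declarative statement with a single truth value."""
    text: str
    is_true: Optional[bool] = None  # None until the truth value has been assessed

facts = [
    Proposition("The Eiffel Tower is in Paris", is_true=True),
    Proposition("If it rains, the ground gets wet"),  # conditional, not yet evaluated
]

# Downstream tasks (information extraction, question answering) can then
# filter or reason over these atomic units.
verified = [p for p in facts if p.is_true]
print(verified)
```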
6 changes: 6 additions & 0 deletions mkdocs.yml
@@ -123,3 +123,9 @@ nav:
- "blog/index.md"
- Why cognee:
- "why.md"
- Concepts:
- Propositions: 'concepts/propositions.md'
- Multilayer graph network: 'concepts/multilayer_graph_networks.md'
- Data models: 'concepts/graph_data_models.md'
- LLM structured Outputs: "concepts/llm_structured_outputs.md"
