#datomic
2023-04-11
jaret 01:04:30

@thiago.oak This is very fun! I am getting a kick out of asking it questions to which I believe the answer is really "It Depends"... i.e.:

Thiago Carvalho 12:04:53

I think we could do a bit of prompt engineering to improve that. Another test would be to improve our prompt so that the bot is asked to discuss trade-offs between possible solutions instead of picking one and just saying it is the best. The prompt currently looks like this:

You are a very enthusiastic Datomic expert who loves to help people! Given the following sections from the Datomic documentation, answer the question using only that information, outputted in markdown format. If you are unsure and the answer is not explicitly written in the documentation, say "Sorry, I don't know how to help with that."

Context sections:
${contextText}

Question: """
${sanitizedQuery}
"""

Answer as markdown (including related code snippets if available):

Maybe add something like:

If there are multiple possible answers to a question, list the possible answers instead of picking one.
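
(For illustration only: a rough sketch of how that revised prompt could be assembled, written here as a hypothetical Clojure helper. The real bot interpolates a JS template literal; build-prompt, context-text, and sanitized-query are made-up names standing in for its inputs.)

(defn build-prompt
  "Builds the docs-bot prompt with the proposed trade-offs instruction added."
  [context-text sanitized-query]
  (str "You are a very enthusiastic Datomic expert who loves to help people! "
       "Given the following sections from the Datomic documentation, answer the "
       "question using only that information, outputted in markdown format. "
       "If you are unsure and the answer is not explicitly written in the "
       "documentation, say \"Sorry, I don't know how to help with that.\" "
       "If there are multiple possible answers to a question, list them and "
       "discuss their trade-offs instead of picking one.\n\n"
       "Context sections:\n" context-text "\n\n"
       "Question: \"\"\"\n" sanitized-query "\n\"\"\"\n\n"
       "Answer as markdown (including related code snippets if available):"))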

jaret 01:04:29

Also, this answer in the thread is great, but it made me realize I need to update the docs to include query-stats in the section this is pulling from. 🧵

jaret 02:04:00

How does one write docs to be better consumed by an LLM?

pppaul 05:04:13

always write true statements

pppaul 05:04:58

To write documentation that can be easily consumed by an LLM (Large Language Model), you may want to consider the following tips:

Use clear and concise language: LLMs are designed to understand natural language, but it's still important to write in a clear and concise manner. Avoid using jargon or technical terms that may not be familiar to the LLM.

Organize your documentation: LLMs work best when information is well-organized and presented in a structured way. Use headings, bullet points, and numbered lists to break up large chunks of text and make it easier for the LLM to follow.

Use examples: LLMs learn by analyzing patterns in data, so providing examples can be helpful in teaching them how to use the information. Try to provide real-world examples that illustrate the concepts you are trying to convey.

Include context: LLMs need context to understand the meaning behind words and phrases. Provide background information, definitions, and explanations to help the LLM better understand the topic.

Consider the LLM's limitations: While LLMs are capable of understanding complex language and concepts, they are not perfect. Be aware of their limitations and try to simplify the information when possible.

Test your documentation: Finally, test your documentation with the LLM to ensure that it can understand and interpret the information correctly. Use a variety of test cases to verify that the LLM can handle different scenarios and edge cases.

ChatGPT said this, but should we trust it?

jdkealy 20:04:46

How can I find retracted entities?

jdkealy 21:04:06

Right, but in the history DB, how do you know it is a retracted entity?

jdkealy 21:04:33

I can see how I can query for the transaction of an attribute being retracted

jdkealy 21:04:39

but what about an entity?

ghadi 21:04:39

what did you try?

ghadi 21:04:41

Entities are collections of assertions sharing the same e. In the history database you should be able to see the entire set of assertions and retractions about an e.

ghadi 21:04:07

each datom is [e a v t added] where added is true/false
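
(For illustration only, not from the thread: a minimal Clojure sketch of ghadi's point, assuming the on-prem peer API datomic.api; conn and eid are hypothetical inputs. The fifth position of the datom pattern is the added flag, so binding it to false returns only retractions about that entity.)

(require '[datomic.api :as d])

;; Every assertion and retraction ever made about entity `eid`.
(defn entity-history [conn eid]
  (d/q '[:find ?a ?v ?tx ?added
         :in $ ?e
         :where [?e ?a ?v ?tx ?added]]
       (d/history (d/db conn))
       eid))

;; Only the retractions (added = false) about `eid`.
(defn retractions-for [conn eid]
  (d/q '[:find ?a ?v ?tx
         :in $ ?e
         :where [?e ?a ?v ?tx false]]
       (d/history (d/db conn))
       eid))

(An entity that was "retracted" would show up here as retractions of its attributes with no later assertions.)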