Answerly vs ChatGPT Data Analysis

I uploaded the same DOCX file, containing an entire novel, to both ChatGPT’s Data Analysis and Answerly (using GPT-4 Turbo). ChatGPT gave me better answers than Answerly. For instance, when I asked for the main characters’ parents’ names, Answerly claimed the information wasn’t available in the provided text, whereas ChatGPT answered without any issues.

If Answerly could replicate the accuracy of ChatGPT’s Data Analysis, that would be great. Is it possible?


Hello @pkom79,

If you keep an agent on its default settings and set the response length to “Auto,” you essentially get direct communication with OpenAI, much like ChatGPT does.

Any differences in the answers, then, come down to chance: the same model can return different responses to the same prompt because its sampling is probabilistic.

There’s one other variable at play: how the book is fragmented and fed to the model to generate answers. At Answerly, we fragment the book so that we use the fewest tokens possible to get an answer, while OpenAI may take a different approach and, being less concerned with the number of tokens sent per request, feed more of the book’s content with each request.
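To make that trade-off concrete, here is a minimal, generic sketch of fragment-and-select retrieval under a token budget. This is not Answerly’s or OpenAI’s actual implementation; the chunk size, the word-count token estimate, and all function names are illustrative assumptions. The point is that a tighter budget saves tokens but can drop the fragment holding the answer.

```python
# Minimal sketch of fragmenting a document and packing the most relevant
# fragments into a token budget. All sizes and names are hypothetical.

def chunk(text: str, size: int = 50) -> list[str]:
    """Split text into fragments of roughly `size` words each."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(fragment: str, question: str) -> int:
    """Naive relevance: count fragment words that also appear in the question."""
    q_words = set(question.lower().split())
    return sum(1 for w in fragment.lower().split() if w in q_words)

def select_context(text: str, question: str, token_budget: int = 200) -> list[str]:
    """Pick the highest-scoring fragments that fit within the budget.
    A smaller budget is cheaper per request, but risks excluding the
    fragment that actually contains the answer."""
    fragments = sorted(chunk(text), key=lambda f: score(f, question), reverse=True)
    picked, used = [], 0
    for f in fragments:
        cost = len(f.split())  # crude token estimate: one token per word
        if used + cost <= token_budget:
            picked.append(f)
            used += cost
    return picked
```

With a large budget the answer-bearing fragment almost always makes it into the prompt; with an aggressive budget, relevance scoring has to be good enough to rank it first, which is where behavior between two systems can diverge on the same file.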

I’ve considered exposing the fragmentation strategy as a setting, but settings like this are becoming complex to represent clearly, so for now it isn’t one.

I really appreciate this; data and comparisons are the best kinds of feedback!


Thanks @Fatos

I created a brand-new agent and left all settings at their defaults, except for the response length, which I set to Auto, but unfortunately I got the same result.

It’s interesting to see that the agent knows this fact when asked in reverse. It can’t tell me the name of Celeste’s mom, but when I ask who Olivia Wilder is, it knows she’s Celeste’s mom.