I switched from keeping my Q&A’s in a Google Doc to the dedicated Q&A feature, because I saw that some answers had started to become less accurate.
However, now the chatbot replies with exactly what the Q&A says. So basically it’s no longer an AI chatbot; it’s a machine pulling up pre-written responses to questions. It has no flexibility for variations and just sends the pre-packaged response, which can be slightly weird depending on the exact question. Example:
“How much does X cost?”
Bot reply: “Yes! X costs this much.” Because I had another Q&A which said “Do you have X?”, it went with that reply, the “Yes!”, even though the customer didn’t ask anything that would prompt such a response. So it doesn’t even take the exact Q&A but something close to it, which makes it weird.
The Google Sheets and Docs integrations don’t work anymore; the bot pulls nothing out of them.
I’m getting really frustrated with this.
This is taking more time and energy and creating more problems than it solves; it seems like it’s not worth it.
ChatGPT can somehow extract accurate information from across the web and a million different sources. But when used in Answerly, it can’t read a simple Google Sheet, send an accurate link, or avoid hallucinating plainly wrong information.
Has anyone overcome obstacles like this in a good way? It feels like I’ve tried a million different things with my customers; with some I’ve gotten it good enough, but it’s still not ideal and takes a lot of compromising.
You also need to be very specific in the way you ask the question. And the bot can’t really pull data from other Q&A’s to build a more rounded knowledge of a subject.
This is just the worst. I’m sorry, but I’m at my wits’ end.
One specific bot that nothing works for is Olivia | AD Maskin, on the AD Maskin account.
Please advise. I was really close to investing in LTDs for both FacePop and WonderForm, but now I’m really close to giving this up and moving to a competitor (which I won’t disclose out of respect). I tried it with the most basic training, turning my Answerly Google Doc into a PDF, and got good responses.
I understand how frustrating this can be. It can be confusing at times, but providing more variations in questions can help the bot understand context better and respond more accurately.
Regarding the issue with the bot not accessing data from Google Sheets and Docs, I’ve tested it on my end, and everything seems to be working properly. If anyone else is experiencing similar issues, please reach out to hi@answerly.io for assistance.
This incident shows that OpenAI had issues with their vector API, which we use to connect your embedding context to the conversations.
All training that occurred during those two days failed, which also explains why Q&As only worked when you had a perfect match (an exact match bypasses the vector part).
We apologize for the inconvenience. Simply re-save your embeddings in the AI training, and that should cause our systems to re-embed them.
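To make the “a perfect match bypasses the vector part” point concrete, here is a minimal sketch of that kind of two-step Q&A lookup: exact match first, cosine similarity over embeddings second. This is an illustration in Python, not Answerly’s actual code; embed() and the 0.8 threshold are placeholders:

    import numpy as np

    def answer(query, qa_pairs, embed, threshold=0.8):
        # Step 1: an exact string match returns the stored reply directly,
        # skipping the vector search -- which is why perfect-match Q&As
        # kept working while the embeddings were broken.
        for question, reply in qa_pairs:
            if question.strip().lower() == query.strip().lower():
                return reply
        # Step 2: otherwise embed the query and pick the most similar
        # stored question by cosine similarity.
        qv = np.asarray(embed(query), dtype=float)
        best_score, best_reply = -1.0, None
        for question, reply in qa_pairs:
            v = np.asarray(embed(question), dtype=float)
            score = float(np.dot(qv, v) / (np.linalg.norm(qv) * np.linalg.norm(v)))
            if score > best_score:
                best_score, best_reply = score, reply
        # Without a minimum-similarity threshold, "How much does X cost?"
        # can land on the nearest neighbour "Do you have X?" and return
        # its "Yes!" reply. Defer instead of forcing a bad match.
        return best_reply if best_score >= threshold else None

If the embeddings failed to store (as during the outage), step 2 has nothing useful to search, so only step 1 keeps working; re-saving the embeddings triggers re-embedding and restores step 2.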
Can you replicate a minimal setup in English for me, so I can dive deeper into this and see what’s happening in the background? I’d be more than happy to offer some insight and a solution.
@Fatos & @paradisianway Same here; I deleted the embeddings and I’m still facing the same issue.
For example, there is a file with “Bank promos/discounts” and another with a list of 6 shop locations.
As part of the prompts, the bot is instructed not to make up info, to always rely on its knowledge base and training data first, and, if it doesn’t know something, to redirect the user to a contact link.
When testing, the bot would always redirect to the contact link, following its prompt, but clearly ignoring all its training data.
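For reference, that instruction pattern looks roughly like this (paraphrased placeholder wording, not the actual prompt; the contact link is a placeholder):

    Answer only from the knowledge base and training data provided.
    If the answer is not in that material, do not guess or make
    anything up; instead, direct the user to the contact link.

If retrieval silently returns nothing from the knowledge base, a prompt like this makes the bot fall through to the “redirect” branch every time, which matches the behaviour described above.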