At Kickstart, we regularly explore the latest technological developments and learn how to utilize them to continuously build a better and more effective organization. Generative AI is certainly the innovation of our time, so we invited an expert in this space, @Cyrill Glockner, for a hands-on workshop focused on LLM use-case discovery. Our goal was to learn about LLMs so we could apply them in our work.
Fortunately for us, Cyrill has years of experience in machine learning and most recently served as Director of Business AI at Microsoft in Seattle. So let’s ask him how his students did.
Kickstart (KS): Hi Cyrill. What did the Kickies learn today?
Cyrill Glockner (CG): Together we worked through a number of pre-selected use cases in a hands-on setting using a private ChatGPT-4 workspace. To get started, we tackled an example that almost every enterprise can benefit from: how do you summarize and understand the tone of customer feedback in large quantities of uncategorized text? We picked a local hotel, copied the most recent 100 or so reviews, and pasted them into ChatGPT’s context window. Then we asked for a summary and the key recurring themes. We also explored how to retrieve specific details from the reviews, e.g., “show me the original review that complained about noise”. Interacting with the reviews this way lets enterprises chat with their data and generate valuable insights by asking questions in plain language.
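The core of this exercise is simply packing the raw reviews into one prompt. A minimal sketch of that step in Python is below; the function name, sample reviews, and the commented-out API call are illustrative assumptions, not the workshop’s actual code.

```python
# Sketch: assemble a batch of uncategorized reviews into a single
# summarization prompt for an LLM's context window.

def build_review_prompt(reviews: list[str]) -> str:
    """Join the reviews and ask for a summary, recurring themes, and tone."""
    joined = "\n\n".join(f"Review {i + 1}: {r}" for i, r in enumerate(reviews))
    return (
        "Summarize the following hotel reviews, list the key recurring "
        "themes, and describe the overall tone:\n\n" + joined
    )

reviews = [
    "Great location, but the street noise kept us up all night.",
    "Friendly staff and a generous breakfast buffet.",
    "Room was clean, though the walls are thin and noisy.",
]
prompt = build_review_prompt(reviews)

# The prompt would then be sent to a chat model, e.g. (hypothetical client code):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4",
#     messages=[{"role": "user", "content": prompt}],
# )
```

Follow-up questions like “show me the original review that complained about noise” work the same way: the reviews are already in the context window, so each question is just another chat message.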
KS: Seems to have gotten off to a good start. What was next?
CG: Next, we looked at a use case involving company-internal data. We uploaded an Excel table listing the startups that applied to the Kickstart program, including the Sustainable Development Goals (SDGs) they support. We asked ChatGPT for the five most frequently mentioned SDGs and their respective percentages. A few seconds later we had the results. They matched exactly what the team had calculated in the past, only now delivered in no time.
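The tally ChatGPT produced is a straightforward frequency count, which you could also reproduce yourself in a few lines of Python. The records below are invented for illustration; the real input was the uploaded Excel table.

```python
# Sketch: count SDG mentions across applications and report the top five
# with their share of all mentions.
from collections import Counter

# Hypothetical application records (the real data came from an Excel upload)
applications = [
    {"startup": "A", "sdgs": [7, 13]},
    {"startup": "B", "sdgs": [3, 7]},
    {"startup": "C", "sdgs": [13, 12, 7]},
    {"startup": "D", "sdgs": [3, 13]},
]

counts = Counter(sdg for app in applications for sdg in app["sdgs"])
total = sum(counts.values())

# Five most frequently mentioned SDGs with their percentage of all mentions
for sdg, n in counts.most_common(5):
    print(f"SDG {sdg}: {n} mentions ({100 * n / total:.1f}%)")
```

The point of the exercise was not that the math is hard, but that ChatGPT wrote and ran the equivalent analysis from a plain-language question, with no spreadsheet work needed.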
KS: Impressive. And what else?
CG: The team was curious whether the data the startups provided during the application process matched what they had published on their websites. So we asked ChatGPT to browse the web and compare the submitted data with what was online. Most of it matched, with some minor differences, but we thought this was an excellent example of how you can verify or update data collected some time ago against currently available online data, without a lengthy browsing session.
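In the workshop, ChatGPT did the browsing and the comparison itself. As a minimal sketch of just the comparison step, assuming you already have the submitted and online values as records, you could diff them like this (field names and values are made up):

```python
# Sketch: flag fields where the submitted application data no longer
# matches the value found online.

def diff_records(submitted: dict, online: dict) -> dict:
    """Return {field: (submitted_value, online_value)} for every mismatch."""
    return {
        key: (submitted[key], online.get(key))
        for key in submitted
        if submitted[key] != online.get(key)
    }

submitted = {"employees": 12, "hq": "Zurich", "founded": 2019}
online = {"employees": 15, "hq": "Zurich", "founded": 2019}

print(diff_records(submitted, online))  # flags the outdated employee count
```

A mismatch here is a prompt to update the record, exactly the kind of stale-data check the exercise demonstrated.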
KS: We are hearing so much about hallucination. Any concerns here?
CG: For the use cases we looked at, hallucination is less of a concern because we feed validated data directly into the context window, so ChatGPT doesn’t need to rely on its vast neural network to generate answers. Hallucination becomes a problem when the LLM lacks a high-probability answer and embarks on a path of creative generation without a substantial basis. LLMs are built to always provide an answer, but they cannot verify the accuracy of their own responses.
KS: And any fun too?
CG: This depends on your definition of fun, but we did have some amusing exercises. We tried to classify muffins and Chihuahua dogs, reversed words, and worked through an interesting 'theory of mind' example involving chocolate and popcorn.
Thank you, Cyrill, for your time. We look forward to doing many more GenAI workshops together. If you want to know more, write to firstname.lastname@example.org.