Drew details the emergence of cloud data warehouses and the rapid adoption that followed, unpacks the practical uses of LLMs, and demystifies some of their reasoning-based limitations. He also sheds light on vector embeddings, their transformative potential, and what’s next for this dynamic space.
<iframe height="200px" width="100%" frameborder="no" scrolling="no" seamless src="https://player.simplecast.com/1a0d3b27-60da-49c5-b986-53a28727a42d?dark=true"></iframe>
“Our observation was [that] there needs to be some sort of way to prepare and curate data sets inside of a cloud data warehouse. And there was nothing out there that could do that on [Amazon] Redshift, so we set out to build it.” — Drew Banin [0:02:18]
“One of the things we're thinking a ton about today is how AI and the semantic layer intersect.” — Drew Banin [0:08:49]
“I don't fundamentally think that LLMs are reasoning in the way that human beings reason.” — Drew Banin [0:15:36]
“My belief is that prompt engineering will become less important over time for most use cases. I just think that there are enough people that are not well versed in this skill that the people building LLMs will work really hard to solve that problem.” — Drew Banin [0:23:06]
Links Mentioned in Today’s Episode:
Understanding the Limitations of Mathematical Reasoning in Large Language Models
Drew Banin on LinkedIn
dbt Labs