Jacob Wood

Gen AI Summit & Hackathon - Google Chicago

Google Chicago Office on Morgan Street


This was my first Google Hackathon [^1] and it was held at Google's Chicago office on Morgan Street [^2].

The reason Hackathons appeal to me personally is that I learn the fastest when I'm building things.

I attended on behalf of Brio Technologies and sat with engineers from Bitstrapped and Accenture.

The specific content shared during the Hackathon is confidential, but my personal opinions aren't, so I wanted to share seven small takeaways.


1. Measuring Performance

The metrics for measuring Generative AI performance are different in research than they are in business.

For example, a Generative AI customer support agent is measured on things like customer satisfaction: how well did the agent resolve the customer's problem, and did it do so in a way the customer enjoyed?

It's important to understand what these performance benchmarks are when designing Generative AI solutions.
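
As a toy illustration of business-side measurement, here's a minimal sketch that scores a support agent on resolution rate and average satisfaction; the feedback records and field names are hypothetical, not anything shared at the event.

from statistics import mean

# Hypothetical post-chat feedback: did the agent resolve the issue,
# and how satisfied was the customer (1-5)?
feedback = [
    {"resolved": True, "csat": 5},
    {"resolved": True, "csat": 4},
    {"resolved": False, "csat": 2},
]

resolution_rate = sum(f["resolved"] for f in feedback) / len(feedback)
avg_csat = mean(f["csat"] for f in feedback)
print(f"Resolution rate: {resolution_rate:.0%}, average CSAT: {avg_csat:.1f}/5")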


2. Prompt Engineering

Attempts are being made to understand how Large Language Models generate their outputs, but much about the process remains uncertain [^3].

We don't know why instructing an LLM to take a deep breath before providing an output can improve performance.

We know strategies we can employ to get consistently better outputs, but it is not an exact science [^4].
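
For instance, here's a minimal sketch of one such strategy: spelling out the role, the reasoning steps, and the output format, rather than asking a bare question. This uses the Vertex AI SDK as in the embeddings example further down; the model name is my assumption.

import vertexai
from vertexai.generative_models import GenerativeModel

# vertexai.init(project="your-project-id", location="us-central1")
model = GenerativeModel("gemini-1.0-pro")  # model name is a placeholder

# A structured prompt: role, explicit steps, and a fixed output format
# tend to produce more consistent answers than a bare question.
prompt = (
    "You are a customer support analyst.\n"
    "Think through the steps before answering.\n"
    "Classify this ticket as BILLING, TECHNICAL or OTHER, "
    "and reply with the label only.\n\n"
    "Ticket: I was charged twice for my subscription this month."
)
print(model.generate_content(prompt).text)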


3. ML v. Generative AI

There are many use cases that call for traditional ML practices (object detection, forecasting models) and are, at this time, ill-suited to Generative AI technologies.

AutoML is a great way to test that out without advanced ML expertise.
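
A rough sketch of what that looks like with the Vertex AI SDK; the project, bucket, and column names are placeholders, and a real forecasting problem would need its own time-series configuration.

from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

# Point AutoML at a tabular dataset and let it search for a model,
# no hand-built architecture required.
dataset = aiplatform.TabularDataset.create(
    display_name="sales-history",
    gcs_source="gs://your-bucket/sales.csv",
)
job = aiplatform.AutoMLTabularTrainingJob(
    display_name="sales-regression",
    optimization_prediction_type="regression",
)
model = job.run(
    dataset=dataset,
    target_column="units_sold",
    budget_milli_node_hours=1000,
)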


4. One Model to Rule Them All

A great decision tree was shared for deciding which model suits a use case. There is not one model that fits all use cases, and it's expensive to run an extremely large, general-purpose language model for a very specific task.

That's why Google provides access to many models through Model Garden, and it's about deciding what makes the most sense for your use case.

There are also domain-specific models, like MedLM for healthcare [^5].
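
I can't reproduce the decision tree itself, but its flavour can be sketched as a few branches; the cases and model names here are my own simplification, not the shared material.

def suggest_model(task: str) -> str:
    # A deliberately simplified stand-in for the real decision tree.
    if task == "medical-qa":
        return "MedLM"               # domain-specific model
    if task in ("classification", "forecasting"):
        return "AutoML"              # traditional ML, not an LLM
    if task == "embeddings":
        return "text-embedding-004"  # small task-specific model
    return "gemini-1.0-pro"          # general-purpose fallback

print(suggest_model("medical-qa"))  # -> MedLM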


5. Coding Assistance

AI coding assistance combined with reading the latest technical documentation is king.

Models have training cutoff periods, so when using SDKs or APIs, still refer to the documentation to ensure you're using the right versions, variables and functions.

Oftentimes code supplied by LLMs can be outdated or incompatible, especially when working in the Generative AI space, as it's changing so rapidly.

For example, are you calling the right embedding model [^6]?

import vertexai
from vertexai.language_models import TextEmbeddingModel

# vertexai.init(project="your-project-id", location="us-central1")
model = TextEmbeddingModel.from_pretrained("text-embedding-004")
embeddings = model.get_embeddings(["Hello, world"])

From the Embeddings API documentation [^7].


6. Agent Use Cases

Agents are a great place to start when thinking about implementing Generative AI in your business: they pair powerful LLMs with a simple UX to connect to your data and external tools.
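
As a taste of what "connecting to external tools" means in practice, here is a minimal function-calling sketch with the Vertex AI SDK; the tool, its schema, and the model name are assumptions of mine, not from the event.

from vertexai.generative_models import (
    FunctionDeclaration, GenerativeModel, Tool,
)

# A hypothetical tool the agent can ask the application to run.
get_order_status = FunctionDeclaration(
    name="get_order_status",
    description="Look up the status of a customer order by ID.",
    parameters={
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
    },
)

model = GenerativeModel(
    "gemini-1.0-pro",  # model name is a placeholder
    tools=[Tool(function_declarations=[get_order_status])],
)

response = model.generate_content("Where is order 8472?")
# The model may reply with a structured function call; the app executes
# it against its own systems, then feeds the result back for the final answer.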

A great public example of over 100 different Agent use cases was shared as inspiration at the event [^8].


7. 2Chains, LangChain

The glue to stitch everything together. It's a great framework for chaining components, and it gives a finer level of control when customising bespoke Generative AI solutions.
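
A minimal sketch of that chaining, assuming the langchain-google-vertexai integration and a placeholder model name:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_google_vertexai import ChatVertexAI

prompt = ChatPromptTemplate.from_template(
    "Summarise the following support ticket in one sentence:\n\n{ticket}"
)
llm = ChatVertexAI(model_name="gemini-1.0-pro")  # model name is a placeholder

# LCEL pipes prompt -> model -> parser into a single runnable chain.
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"ticket": "My order arrived damaged and support hasn't replied."}))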


Completed at Hackathon

  • [x] L300 - Generative AI with Vertex AI: Text Prompt Design: Challenge Lab
  • [x] L400 - Build a Generative AI solution using a RAG Framework: Challenge Lab

Footnotes

[^1]: Google Summit & Hackathon Event

[^2]: Linkedin Post

[^3]: Anthropic Research Paper

[^4]: Google's Prompt Engineering Advice

[^5]: Google's MedLM Model

[^6]: Google Embeddings Model Versions

[^7]: Google Embeddings API

[^8]: 101 Agent Use Cases