A brief summary of today’s JAIC × Silicon Valley event

Today I had the opportunity to attend the JAIC (Japan Association for IT) event, which gives Japanese startups, both those based locally and those coming from Japan, a stage to share their stories, services, and products in Silicon Valley.

One of the topics that came up was the difference between LLMs and SLMs. Many of the engineers there shared a sentiment I have held for a while: the LLMs we have today are a tool, and they foresee them being used to adapt SLMs (Small Language Models) for much wider use.

An SLM is a language model deployed and trained specifically for a business or an individual. While it can hold general knowledge much like an LLM, its output is prioritized around the data given to it by the entity that owns it.
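To make that prioritization idea concrete, here is a minimal, purely illustrative Python sketch. It is not a real SLM; the business knowledge, function names, and fallback behavior are all hypothetical stand-ins for the idea of answering from owner-supplied data first and only falling back to general knowledge when nothing specific matches.

```python
# Hypothetical business-specific knowledge supplied by the owner.
BUSINESS_KNOWLEDGE = {
    "return policy": "Returns are accepted within 30 days with a receipt.",
    "support hours": "Support is available weekdays, 9am to 5pm PT.",
}


def general_model_answer(question: str) -> str:
    # Stand-in for a call to a general-purpose LLM.
    return f"[general answer about: {question}]"


def slm_answer(question: str) -> str:
    # Check the owner's data first; fall back to general knowledge
    # only when nothing business-specific matches the question.
    for topic, answer in BUSINESS_KNOWLEDGE.items():
        if topic in question.lower():
            return answer
    return general_model_answer(question)


print(slm_answer("What is your return policy?"))
print(slm_answer("Who won the World Cup in 2018?"))
```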

This is a topic to dive into more deeply, but it goes back to a sentiment I have been sharing in many of these conversations: there needs to be a balance between user intervention and AI output. Feeding the model data to learn from, and telling it what not to learn (for example, avoiding certain keywords or a particular voice and tone), is going to be a crucial step in the AI world.
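As a deliberately simplified illustration of the "what not to learn" side, here is a hedged Python sketch of a post-generation check that flags banned keywords before a draft is used. The keyword list and function names are hypothetical, not any particular product's feature.

```python
# Purely illustrative guardrail: the banned-word list is an assumption,
# standing in for a business's real voice-and-tone rules.
BANNED_KEYWORDS = {"synergy", "disrupt", "guru"}


def violates_voice_rules(model_output: str) -> list[str]:
    """Return any banned keywords found in a model's draft output."""
    words = {w.lower().strip(".,!?") for w in model_output.split()}
    return sorted(words & BANNED_KEYWORDS)


draft = "Our platform will disrupt the market with pure synergy."
problems = violates_voice_rules(draft)
if problems:
    print(f"Regenerate: draft uses banned terms {problems}")
else:
    print("Draft passes the brand-voice check.")
```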

Currently, AI is treated as a blanket solution to everything. Just as in the real world, AI will need to be curated and customized to fit specific niches, markets, and customers in order to deliver value.

Matthew Talebi

With over 15 years of experience advising and working for companies around the world, Matthew now helps businesses discover new opportunities and improve product experiences through customer research and insights.

https://matthews.studio