Clibrain joins the generative AI race with Lince, an LLM optimized for Spanish


There is a long list of Large Language Models (LLMs) out in the wild already, from OpenAI’s GPT-4 to Google’s PaLM 2 to Meta’s LLaMA, to name three of the more high-profile examples. Differentiation between LLMs comes down to factors including the core architecture of the model, the training data used, the model weights applied and any fine-tuning for specific contexts/purposes, as well as the cost of development (and the relative budget of the model maker to splurge on those costs) — all of which can influence how a given flavor of generative AI performs in response to a user’s natural language query.

Thing is, this already lengthy list of LLMs seems unlikely to stop growing any time soon, given how many variables AI makers can toy with, and contexts they can lean into, to try to get the best performance from conversational generative AI for a given use-case.

Another factor influencing outputs is how much LLM development has focused on the English language — with less attention paid to training models on other languages (it typically being cheaper/easier to get hold of English language data for training). This means LLMs are likely to perform better in response to English language queries than asks in other languages. So models trained on non-English languages, arguably, present a pretty notable opportunity to keep building out that list.

To that end, meet Lince Zero: A Spanish-instruction tuned LLM, released last week by Madrid-based AI startup Clibrain, which reckons it’s spotted a gap to join the generative AI race by developing models optimized for Spanish speakers.

It points to Spanish not only being one of the most spoken languages globally but boasting considerable variety, in terms of dialects and variants, since it’s spoken across some 20 countries spanning multiple continents (and cultural contexts) — which it suggests muddies the waters for the performance of mainstream models that aren’t so comprehensively focused on español.

One such biggie, OpenAI’s ChatGPT, does handle Spanish. As can others. But Clibrain contends its full focus on the language means its forthcoming foundational model, plus a series of domain-trained models it plans to develop atop the big one, will be able to parse and understand more Spanish linguistic nuance than the average LLM, thanks to training on a dedicated corpus of Spanish language data.

The release of Lince Zero is the first step on its ambitious roadmap. This LLM is largely based on existing open source technologies — so it can’t yet boast its own foundational model. But it says that’s coming soon. 


Clibrain co-founders (Image credits: ClibrAIn)

Co-founder and CEO, Elena González-Blanco, brings an educational background in linguistics research and poetry to the startup, combined with a career focus on AI (or IA as it’s rendered in Spanish) — including years spent working on earlier iterations of natural language processing (NLP) tech and racking up industry experience in insurtech and fintech (at companies including Indra and Banco Santander).

But she points back to her years doing linguistics research as powering a particularly key contribution to the project — by enabling Clibrain to source unique training data to feed its model making ambitions now.

Counting on linguistic quality

“We have a corpus [of training data] which is unique,” she says. “I am a linguist; I have, let’s say, 15 years of research in terms of history of language, Spanish language… a lot of contacts that have not been used for training yet. So we have a unique corpus [as a differentiator].”

“We think that there is a super interesting opportunity for us because it’s true a lot of things are going on in the AI world but the Spanish speaking market is completely at a second level,” she also tells TechCrunch. “The quality of what we are building — linguistically — is significantly different. So point is not [to build] a massive model — but a very high quality model.”

Clibrain’s debut model release, which is called Lince Zero (and is being released under an open source license), is a 7BN parameter taster of a more powerful (foundational) model (40BN parameters) it has in the pipeline — which will simply be called Lince (a word that means lynx in English; a reference to Spain’s iconic but rarely glimpsed wild cat).

As you can tell from the parameter numbers, these LLMs are far from contending to be the biggest models on the block. But, as González-Blanco argues, Clibrain’s conviction is that model size, per se, won’t be the killer feature when it comes to generating a performance advantage around enhanced understanding of Spanish — rather, quality attention to linguistic detail will count (and, it hopes, give it an edge in Spanish markets). So, essentially, it’s anticipating there will be a bunch of Spanish speaking users willing to trade off a little in cutting-edge generative AI capabilities (and/or power) for a greater level of native linguistic understanding.

And on that front it’s fair to say that stuff getting lost in translation can generate a lot of irritating friction. So, assuming Lince really can deliver — and sustain — a linguistic edge for Spanish queries, it may be onto something for (at least) a chunk of the close to half a billion native Spanish speakers globally who could end up using these sorts of AI tools.

It’s not the first to see value in optimizing for a specific language, of course. There are a number of non-English language-optimized LLMs out there now, such as Baidu’s Chinese language model, Ernie, or an LLM family that’s being tuned for German. South Korean tech giant Naver is also working on generative AI models trained on Korean. And it’s a safe bet we’ll see more LLMs geared towards communities of non-English speakers — at least for more widely spoken languages.

Nor is Clibrain the first to build a conversational AI model focused on Spanish — the Barcelona Supercomputing Center’s MarIA project, which was launched back in 2021, claimed to be the first “massive” AI system in the Spanish language. But Clibrain argues it’s surpassed MarIA and pulled together the most technologically “advanced” model focused on the Spanish speaking market to date.

Per González-Blanco, the performance of Lince Zero is equivalent to GPT-3, whereas she says MarIA’s performance is equivalent to GPT-2. (Although benchmarking the linguistic performance of LLMs is a cutting-edge business in and of itself; on that front, Clibrain is encouraging Spanish speakers to check out what it’s built and start generating feedback.)

Unlike Lince Zero, the forthcoming (full-fat) Lince model won’t be open source. Instead the proprietary model will be made available via API to paying customers wanting to plug into a model that’s been trained on a corpus of data in Spanish. The startup will also offer access by embedding the model into a trio of comms and productivity apps of its own (called CliChat, CliCall and CliBot).

Development will also continue and it intends to offer more proprietary models down the line — including multimodal models that can respond to images and audio, not just text. So there’s plenty on its product roadmap to keep the team busy.

While Clibrain has drawn on a number of open source technologies to build Lince Zero (documentation on its Hugging Face model card stipulates it’s based on Falcon-7B, fine-tuned using a combination of Alpaca and Dolly datasets — translated into Spanish and “augmented” to 80k examples), it claims it’s not just using existing architectures — touting its own senior engineering talent in AI.
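For readers who want to poke at Lince Zero themselves, the snippet below is a minimal sketch of loading the open source checkpoint from the Hugging Face Hub with the transformers library. The repository id, precision and generation settings shown are assumptions based on the model card referenced above rather than details confirmed in this article, so verify them against Clibrain’s Hugging Face listing before running anything.

```python
# Minimal sketch: load Lince Zero (a Falcon-7B fine-tune) and generate a reply.
# Assumptions: repo id "clibrain/lince-zero" and a plain Spanish prompt; the
# model card documents the exact instruction template, so check it first.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "clibrain/lince-zero"  # assumed repo id; verify on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # 7BN params: half precision keeps memory manageable
    device_map="auto",           # requires the accelerate package
    trust_remote_code=True,      # Falcon-based checkpoints may ship custom code
)

prompt = "Explica brevemente qué es un modelo de lenguaje."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Running an instruction-tuned model without its expected prompt template can degrade output quality, so treat the above as a smoke test rather than a fair evaluation of the model’s Spanish fluency.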

The startup was only founded in April, so it’s only around three months old — which does seem to underline the blistering pace of development in the generative AI field these days, with so many rich open source libraries to tap into and compute costs for model training having fallen considerably versus even a few years ago. But it wasn’t exactly starting from scratch, since it was spun out of another of González-Blanco’s startups (a car-backed loan entity called Clidrive).

She explains they had been experimenting with AI internally at that business but decided the size of the opportunity to develop an LLM tuned for Spanish markets merited breaking out a separate startup — and so here they all are: A multidisciplinary team of close to 30 staff with an R&D lab focused on generative AI at the core.

“It was really deeply easy for us to build that research group and centre around the stuff that we had already been doing,” adds González-Blanco.

The other (four) co-founders are Pablo Fernández (president), Pablo Molina (CTO), Paul Martz (CPO), and David Villalón (CAIO). 

The co-founders have been bootstrapping development so far, using funds gleaned from previous startup exits. Which means — perhaps unusually in these AI hype-fuelled times, with large amounts of investor cash being re-routed to target AI-focused entrepreneurs — Clibrain doesn’t have a hefty investor roster nor a deep funding war chest as yet.

González-Blanco says they had wanted to focus on developing core models and getting their first products to market, rather than on external fundraising. But she adds they may look to raise a bigger round of investment than the founders were able to plough in themselves as they continue to progress with the Lince product roadmap.

By Natasha Lomas, originally published on TechCrunch.
