
OpenAI and the Network Effect (ft. Md Rafi and Ola Krutrim)

"Who is the greatest Bollywood singer of all times?" I typed into chat.krutrim.com

It listed seven, but missed Mohammad Rafi. 

Horrified, I followed up, "Why is Mohammad Rafi not in this list?" 

And it missed the context, replying, "Mohammad Rafi is not in the list because the list you are referring to is not provided."
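
In fairness, a chat interface only "remembers" what is sent back to the model with each new request. Here is a minimal sketch of the difference, in Python, assuming an OpenAI-style chat-completions payload; the model name is a placeholder and Krutrim's actual interface may differ:

    # Multi-turn context is carried by resending the prior turns with every request.
    # Hypothetical payloads, assuming an OpenAI-style chat-completions schema.
    with_context = {
        "model": "krutrim-chat",  # placeholder model id, not a confirmed name
        "messages": [
            {"role": "user", "content": "Who is the greatest Bollywood singer of all times?"},
            {"role": "assistant", "content": "<the seven-name list the model returned>"},
            {"role": "user", "content": "Why is Mohammad Rafi not in this list?"},
        ],
    }

    # If only the follow-up is sent, "this list" refers to nothing, and the model
    # can do little more than reply that the list "is not provided".
    without_context = {
        "model": "krutrim-chat",
        "messages": [
            {"role": "user", "content": "Why is Mohammad Rafi not in this list?"},
        ],
    }

Whether the web chat dropped the history or the model simply failed to use it, the effect for the user is the same.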

With a deep sigh, I was reminded of Altman's India visit in June last year. Someone asked him whether India should invest in building a foundational model (assuming funding and talent are not an issue). He replied, "it would be hopeless to compete with us on training foundation models... you shouldn't try".

Try they will, and they should. The world's fourth(?) largest economy has pockets of deep pockets that can sustain the demands of developing a resource-hungry technology such as foundational LLMs. But distribution, diffusion and monetisation remain challenging when ChatGPT, Copilot and Gemini in Indic languages are just an app away on the same device. 

The network effect is a game of volume and velocity. Targeting a niche market and offering a unique value proposition for that market helps gain the critical mass of adoption. Incentivising users has paid off in building the user base in a shorter time, but gaining customers and expanding the user base is as critical as retaining them. Differentiation and continuous innovation are needed to maintain the cutting edge and enable a quick pivot in response to market trends. Vertical integration with interoperability, and strategic partnerships, are necessities for expanding into untapped markets.

All of this is easier said than done. Google, the AI giant of the planet and the original innovator and patent holder for Transformers (the technology that underpins GenAI), is struggling against an upstart OpenAI. Google Bard has already been axed under the new marketing strategy of Google Gemini, which had its own share of bad PR after launch. Apparently, Microsoft has built a more effective network-effect strategy by integrating and building interoperability with OpenAI: it is able to drive mass user adoption through its browser and app ecosystem, incentivising users with free (though limited) access to the best GenAI technology; it takes the Android advantage and first-mover advantage away from Google; it keeps Apple and Amazon at bay; and it derives the competitive advantage for winning in the GenAI market. 

How long this competitive advantage will last is hard to tell at the moment. The pace and quantum of change in the industry are very high. Suffice it to say, however, that Microsoft and OpenAI have a clear head start. 

But sometimes, a head start is all that you need. The underlying phenomenon here is that the network effect will ensure that ChatGPT continues to become bigger, at the cost of others, with every incremental user adding to the value of the product and the LLM learning from every interaction. (Not to be confused with brand loyalty, which lacks any direct feedback loop with the user beyond an emotional connection with the product. Google is a much bigger brand and OpenAI is not even a fraction of it, but that hasn't helped Google gain in terms of network effect.)

Barriers to entry that are high in terms of resource intensity and cost, intense competition and rivalry, low substitution costs, and the absence of a curated, high-quality, India-specific dataset all reduce the overall industry attractiveness for a firm that wants to invest in building an India-specific foundational LLM from the ground up and make it a thriving, profitable endeavour.

What does that mean for BharatGPT? A list of 60+ LLMs is here. Only a handful are foundational LLMs, including Krutrim, Bhashini, OpenHathi and Project Indus. For them, the challenges are enormous. The rest of the crowd is largely startup-led, with models built on American open-source foundational LLMs. They have a much smaller parameter base and are targeted at solving defined use-cases. Some of them may survive the test of time, and a round of consolidation in a maturing industry would see two or three mid-size players emerging down the line by the end of the decade.

Other, smaller countries and societies are likely to break the ice first, especially in Europe, where local-language-driven use-cases would start coming into the market. Likewise, there are plenty of opportunities for carving out a niche in the Indian context. But whether all of them require their own foundational models is not clear.

  • What do you think about the future of such LLMs in India? 
  • Are there any specialised or nuanced use-cases for India that you know of? 
  • What do you think of the overall GenAI "bubble", and its capability to write code? 
  • What impact would it have on the IT/ITeS/BPO industry, which is USD 100Bn+ today? 

For Ola, this Krutrim tastes rather "unnatural" and raw, at least for now.


"Krutrim" is an India-centricFoundational LLM,
that seems to fall short on multiple accounts
including - bias in dataset, and Context limitations.

