Friday, November 01, 2024

I'm impressed* by Amazon Rufus (*when it works)

Earlier this week #MuchCurious dropped in with a suggestion - Alchemy. 

Over the millennia the word itself has become full to the brim with mysticism, mystery, magic, and disproportionate rewards. The last one is something every human secretly strives for. To be sure, The Alchemist made Paulo Coelho a legend in pop culture, with quotes from that book such as "when you want something, all the universe conspires in helping you to achieve it". Yes, it is that line which gave Shah Rukh Khan one of the defining dialogues of his career (if you are a Bollywood buff, you got the translation already; if you are not, it doesn't matter really).

Now, this Alchemy is by Rory Sutherland, the legend from Ogilvy whose dissections of human behavior and decision-making have helped make global brands what they are. The snippet read: "Imagine a world where a $100,000 watch is sold not because of its intricate mechanics, but because of the story of its craftsmanship. Where a bottle of perfume is prized not for its fragrance, but for the mystique of its packaging. And where a pair of sneakers is coveted not for its performance, but for the status symbol it represents.

Take, for instance, the story of the De Beers diamond cartel, which created an illusion of scarcity to make diamonds a symbol of luxury and romance. Or the tale of the Red Bull energy drink, which became a global phenomenon not because of its taste, but because of its association with extreme sports and a "can-do" attitude. These are just a few examples of how Alchemy can transform the way we think about products, services, and experiences."

Enough said, I clicked on the link that took me to Amazon.

But there was a problem.

(Sit tight. This is where the real story begins). 

It appears that Sutherland has written two books with the exact same title. The subtitles are different, the cover designs are different, and the pricing difference is significant. It was virtually impossible to determine which one to choose based on the cover alone. (No, not buying both, thank you. Because, Tsundoku!)

So I checked with little Rufus within the Amazon mobile app (Amazon's love for dogs has always bemused me). What we have in Rufus is a serious attempt at agentic AI (that which we call a GenAI, by any other name would smell as chatty!). The attempt is aimed at influencing purchase decisions for future transactions on the world's largest e-commerce platform, with product discovery as the primary objective of this agentic model.

Here is the exact chat with Rufus (beta) on mobile.

Not only did the Agent get the query right, it also provided inferences (yes!) based on data such as the number of pages in each book, their publication dates, and content references from the Amazon Preview of the books, to help make the right decision. I revised and rewrote the prompt a couple of times, but none of these decision points were explicitly asked for in the prompt. Given that Rufus is still in beta and a mobile-only interface for amazon.in, it was impressive. This was no less than checking for advice with someone who owns both the books.

Rufus (beta) is not perfect. It is buggy, forgetful, overconfident and "thin". It has earned its own Reddit haters. While shopping for a spray paint can, for instance, I asked Rufus what surface area this 245 g paint can would typically cover. It mined through the customer reviews and came back with the answer "enough for two coats on a complete 3-seater sofa" or "both rims of a motorcycle with three coats each". Interesting. This provided some perspective, but then I had a follow-up question: how many cans would I need to cover an 8 feet by 10 feet area with two coats? That is where the math overwhelmed the model, and it fumbled (by suggesting "at least two cans").
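For the record, the follow-up question is only a unit conversion and a division. Here is a minimal sketch of that arithmetic, assuming a per-can coverage figure (the 1.5 square metres per coat used below is purely an illustrative guess, not a number Rufus, the seller, or the manufacturer provided):

    import math

    # All figures are assumptions for illustration only.
    area_sq_ft = 8 * 10                  # 8 ft x 10 ft surface
    area_sq_m = area_sq_ft * 0.0929      # 1 sq ft ~ 0.0929 sq m -> ~7.4 sq m
    coats = 2
    coverage_per_can_sq_m = 1.5          # assumed coverage of one 245 g can, per coat

    cans_needed = math.ceil(area_sq_m * coats / coverage_per_can_sq_m)
    print(f"Area: {area_sq_m:.1f} sq m; cans for {coats} coats: {cans_needed}")
    # With these assumptions: 7.4 x 2 / 1.5 -> 10 cans, a far cry from "at least two".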

Can it be trained further and improved? Certainly. Will we be seeing such optimization in future releases? Less likely. You see, the design decision for this LLM leans towards information retrieval rather than towards math accuracy. As the Amazon release notes suggest (below), Rufus is a "Retrieval Augmented Generation (RAG) system with responses enhanced by retrieving additional information such as product information from Amazon search results."
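In other words, the model is not reasoning from first principles; it stitches retrieved snippets into a generated response. The toy sketch below shows the general retrieval-augmented pattern the release notes describe - the catalogue entries, the keyword-overlap retriever and the stubbed generate() call are my simplifications, not Amazon's implementation:

    # Toy RAG loop (assumed/simplified): retrieve relevant product snippets,
    # then hand them to the language model as context, instead of expecting
    # the model to "know" the catalogue or to do the math itself.

    CATALOGUE = {
        "Alchemy (edition A)": "page count, publication date, preview text ...",
        "Alchemy (edition B)": "page count, publication date, preview text ...",
        "The Alchemist (Paulo Coelho)": "a different book altogether ...",
    }

    def retrieve(query: str, k: int = 2) -> list[str]:
        """Rank catalogue entries by naive keyword overlap with the query."""
        words = set(query.lower().split())
        scored = sorted(
            CATALOGUE.items(),
            key=lambda item: len(words & set((item[0] + " " + item[1]).lower().split())),
            reverse=True,
        )
        return [f"{title}: {blurb}" for title, blurb in scored[:k]]

    def generate(prompt: str) -> str:
        # Stand-in for the underlying LLM call.
        return f"(model response grounded in)\n{prompt}"

    def answer(query: str) -> str:
        context = "\n".join(retrieve(query))
        prompt = f"Using only this product information:\n{context}\n\nQuestion: {query}"
        return generate(prompt)

    print(answer("Which Alchemy edition should I buy?"))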


At the moment Rufus (beta) is like that small puppy who doesn't know anything in life yet, but does that one trick well. And when it does, you are overjoyed as the owner. In this chat transaction, for example, it did drag in The Alchemist, a cough remedy by Alchem Pharma, and The Psychology of Money by Housel. When I "scolded" it, it revealed the search "pathways" that the model "saw" as product possibilities.

This is when, instead of continuing the same thread, I rewrote the prompt to make it more precise. It is likely that the missives until then were taken into consideration while generating the response to this final prompt (the impressive answers). Such contextual continuity in a beta model is promising for an involved decision conversation with an engaged customer.

Rufus is scaled to engage with millions of customers concurrently (the reason I called it "thin") on a day like Amazon Prime Day, with less than one second of latency to the first response, while providing product discovery and product feature details. This is very different from solving math or writing poems like a general-purpose LLM such as the Amazon-invested Claude.ai or ChatGPT. Keeping that perspective in sight is helpful.

Try here for Rufus technical architecture and release notes.

What do you think about an LLM or agentic AI use-case like Rufus? Rufus is already out of beta and on the main Amazon website in many markets, including the US. Have you tried it? How was your experience?

Saturday, March 30, 2024

OpenAI and the Network Effect (ft. Md Rafi and Ola Krutrim)

"Who is the greatest Bollywood singer of all times?" I typed into chat.krutrim.com

It listed seven, but missed Mohammad Rafi. 

Horrified, I followed up, "Why is Mohammad Rafi not in this list?" 

And it missed the context, replying, "Mohammad Rafi is not in the list because the list you are referring to is not provided."

With a deep sigh, I was reminded of Altman's India visit in June last year. Someone asked him if India should invest in building a foundational model (assuming funding and talent are not an issue). And he replied, "it would be hopeless to compete with us on training foundation models... you shouldn't try".

Try they will, and they should. The world's fourth(?) largest economy has pockets of deep pockets that can sustain the demands of developing a resource-hungry technology such as foundational LLMs. But distribution, diffusion and monetisation remain challenging when ChatGPT, Copilot and Gemini in Indic languages are just an app away on the same device.

Network Effect is a game of volume and velocity. Targeting a niche market and offering a unique value proposition for that market helps gain the critical mass of adoption. Incentivising users has paid off for building a user base in a shorter time, but gaining customers and expanding the user base is as critical as retaining them. Differentiation and continuous innovation are mandatory for maintaining the cutting edge and enable a quick pivot in response to market trends. Vertical integration with interoperability, and strategic partnerships, are necessities for expanding into untapped markets.

All of this is easier said than done. Google, the AI giant of the planet, and the original innovator and patent holder for Transformers (the technology that underpins GenAI), is struggling against an upstart OpenAI. Google Bard has already been axed in favour of the new marketing strategy of Google Gemini, which had its own share of bad PR after inception. Apparently, Microsoft has built a more effective Network Effect strategy by integrating and building interoperability with OpenAI: it is able to drive mass user adoption through its browser and app ecosystem, incentivising users with free (though limited) access to the best GenAI technology; it is able to take the Android advantage and first-mover advantage away from Google; it is able to keep Apple and Amazon at bay; and it is able to derive the competitive advantage needed to win in the GenAI market.

How long this competitive advantage will last is hard to tell at the moment. The pace and quantum of change in the industry are very high. It suffices to say, however, that Microsoft and OpenAI have a clear head-start.

But sometimes, a head-start is all that you need. The underlying phenomenon here is that the Network Effect will ensure that ChatGPT continues to become bigger, at the cost of others, with every incremental user adding to the value of the product and the LLM learning from every interaction. (Not to be confused with brand loyalty, which lacks any direct feedback loop with the user beyond an emotional connect with the product. Google is a much bigger brand and OpenAI is not even a fraction of it, but that hasn't helped Google gain in terms of the Network Effect.)
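One common way to formalise "every incremental user adds value" is Metcalfe's law: the value of a network grows roughly with the square of its users (the number of possible connections), while the cost of serving it grows roughly linearly. A toy sketch, with arbitrary constants:

    # Metcalfe-style illustration: value ~ n^2, cost ~ n. Constants are arbitrary;
    # only the shape of the curves matters for the head-start argument.
    def network_value(users: int) -> float:
        return users * (users - 1) / 2       # possible user-to-user connections

    def serving_cost(users: int, unit_cost: float = 1.0) -> float:
        return users * unit_cost

    for n in (1_000, 1_000_000, 100_000_000):
        print(n, network_value(n), serving_cost(n))
    # Value pulls away from cost as adoption compounds - the essence of the head-start.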

Barriers to entry that are high in terms of resource intensiveness and cost, intense competition and rivalry, low substitution costs, and the absence of a curated, high-quality, India-specific dataset all reduce the overall industry attractiveness for a firm that wants to invest in building an India-specific foundational LLM from the ground up and make it a thriving, profitable endeavor.

What does that mean for BharatGPT? A list of 60+ LLMs is here. Foundational LLMs are only a handful, including Krutrim, Bhashini, OpenHathi and Project Indus. For them, the challenges are enormous. The rest of the crowd is largely startup-led, with models built on 'American' open-source foundational LLMs. They have a much smaller parameter base and are targeted at solving defined use-cases. Some of them may survive the test of time, and a round of consolidation in a maturing industry would see two or three mid-size players emerge down the line, by the end of the decade.

Other, smaller countries and societies are likely to break the ice first - especially in Europe, where local-language-driven use-cases would start coming into the market. Likewise, there are plenty of opportunities for carving out a niche in the Indian context. But whether all of them require their own foundational models is not clear.

  • What do you think about the future of such LLMs in India? 
  • Are there any specialised or nuanced use-cases for India that you know of? 
  • What do you think of the overall GenAI "bubble", and its capability of writing code? 
  • What impact would it have on IT/ITeS/BPO industry that is USD100Bn+ today? 

For Ola, this Krutrim tastes rather "unnatural" and raw, at least for now.


"Krutrim" is an India-centricFoundational LLM,
that seems to fall short on multiple accounts
including - bias in dataset, and Context limitations.

Saturday, March 02, 2024

Digitally.me (or, your next unicorn startup idea!)

Clone thyself as the next unicorn startup idea.

Imagine you have the superpower to clone yourself. You can send this clone to as many places as you like. These clones will be present 24x7 for you. Serving you. As you.

Like the various hats that you like to wear, so too the clones. A clone for every avatar of you that you'd like to personify. A clone for a situation, a clone for every need.

Yes, imagine Agent Smith of the last act in The Matrix. Hundreds of “him”, doing work for him. Only, yours will be benign and avoid evil. Because you are not evil, and your clone is but what you are.

My son was playing with a phone while I was on a separate WhatsApp call. He was making videos of himself, and he ended up capturing my voice from the call in the background. When I heard myself there, it gave me a start. Have you listened to yourself on a Zoom call, video off, just a name displayed on the screen? Sarcasm aside, is that the real you? Or is it an audio representation of you for that hour? Does that change between a call with a boss, and a call with the team and subordinates? An interview Zoom call, and a friendly catch-up with a childhood buddy? There is an essence of you in all of these clones, and yet the persona changes based on the prompt that you give. Be yourself, they say. This is being yourself. Do you notice such differences in your digital interactions?

Recently, I went through my resume. I read it a few times and tried to imagine how my audience, a screening team (and algorithms), would read it. Truth be told, there is a reason for them to be skeptical in the process. My resume reads unreal to me at times. Some things are too good to be true when crammed on an A4 sheet back-to-back. It lists how many times I have cloned myself in my professional life, and every time I claim that I was the Neo of The Matrix. Supposedly, this claim to fame is the ultimate weapon in a knowledge worker's arsenal. And yet, it is one of the poorest and most imaginary representations of yourself. If a resume is the door opener that takes you to the interview panel, then a brief, impersonal, one-sided paper document, seemingly fraught with false victories and promises, is a rather poor medium of choice. How about your resume?

Do you have your own personal website or webpage? I have been maintaining this domain for the website for a few years now. A typical personal website would be (or should be) like a public resume. Ignoring the contradiction of that statement for a bit, because there is none, think about it. We want people who come to our personal website to be dazzled. Therefore, we clone the bits and pieces of us that we think are the best and most impressive and put them up in sections like portfolio, blog and pictures, testimonials, projects and client lists, and our opinions. We window-dress our website or personal pages and leave it for the visitor to entertain or dazzle herself. And as reinforcement learning for her, we go and SEO our links to get hits and likes, including paying for it. Most personal endeavors like this run out of steam sooner or later. (Years of neglect of this website is but one example.)

I'm fortunate to have known some of the LinkedIn top voices (no, no, the "blue tick" ones) personally. Their dedication to the craft is commendable. As any personal branding hack would tell you, they have personalised their messaging to the core. They try to bridge the gap between their product (mostly, opinions) and their personal presence by putting up their personal photos and anecdotes. This is making your digital clone more 'human'. Doing it a lot will take your clone to the brink of vanity; not doing it at all will make it sound mechanical. Admittedly, social acceptance of the norm varies and keeps evolving, but balance remains the key.

The YouTubers are the most interesting lot. Since smartphones with decent cameras and high-res screens became ubiquitous, you need virtually zero additional investment to be on YouTube, TikTok, Insta and the lot. Of all formats, these folks get the most bang for the buck - or the other way around - out there. Partly because it is the most 'direct' medium of them all, engaging the audio and visual faculties along with storytelling, and partly because it is the most personal of the mediums. In most cases, you are showing yourself and selling a part of your life as a product. When optimized for addictiveness, it becomes unstoppable. (YouTube is among the most-visited websites in the US in terms of daily hits.)

Therefore, if one were to look for a perfect digital clone - the Neo amongst all the Agent Smiths that we discussed above - it has to be an audio-visual presence. No more multiple personas on various platforms. One clone to rule them all.

When someone comes to your website, they meet you there - or rather, your digital clone. They "talk" to you the way they would interact with an online agent. It is a conversation. A conversation about your portfolio and projects, clients and testimonials, blogs, pictures and opinions.

Your resume is but a factual, historical subset of the information in your digital 'self'. For all practical purposes, the technical interview should be handled by your digital clone, while HR may call you in for the human touch and a personality assessment.
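What might such a clone look like under the hood? A minimal sketch is below: your own documents become the context, and a conversational model supplies the voice. Everything here - the field names, the ask_clone() helper, the stubbed model call - is a hypothetical illustration, not an existing product or API:

    # Hypothetical "digital clone": personal documents are the grounding data,
    # a conversational model answers in the owner's voice. The model call is stubbed.

    PERSONA = {
        "resume": "Roles, projects, dates ...",
        "blog_posts": ["Post on pricing strategy ...", "Post on GenAI ..."],
        "tone_examples": ["Short, wry sentences.", "Fond of Matrix references."],
    }

    def build_context(persona: dict) -> str:
        parts = [persona["resume"], *persona["blog_posts"], *persona["tone_examples"]]
        return "\n".join(parts)

    def call_model(prompt: str) -> str:
        # Stand-in for whatever LLM the clone would actually run on.
        return "(clone's answer, in the owner's voice)"

    def ask_clone(question: str) -> str:
        prompt = (
            "Answer as the person described below, in their own voice.\n"
            f"{build_context(PERSONA)}\n\nVisitor: {question}\nClone:"
        )
        return call_model(prompt)

    print(ask_clone("Tell me about your most recent project."))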

Your future professional blogs, LinkedIn posts, and tweets can be written by your digital clone, freeing you up for mending your garden, walking your kids to school, or getting onto that cruise for a world tour.

You can 'send' your digital self to participate in a podcast or a talk show and nobody would know the difference. After all, your clone has your perfect voiceprint. It has your exact vocabulary and a perfectly matching pitch and tonality. The 'thoughts' on the show are also a 'derivative' of your own. If it does blurt out some nonsense, you can always argue that it was "under the influence" at the time.

There are no deepfakes anymore. What is there to fake? My clone is the true copy and it is accountable all the time. The digital footprints shall negate any deepfake-induced claim contrary to the records. Can someone else create a deepfake of me? They can if they steal my identity credentials. That would be unauthorised cloning, not a deepfake. Unauthorised clones or parody can be regulated.

When the metaverse comes along - indeed, it is a question of when and not if - your digital clone will be the natural (sorry), logical extension of your presence there. The interaction between clones in the metaverse would be seamless and extremely enriching because it would have the potential to demonstrate very human-like behaviour. You would not need expensive headsets such as the Apple Vision Pro because your digital clone is truly digital and not a hybrid AR solution.

As you would agree, this is totally exciting. There are no doubts on the desirability part of the proposition. It's just there, and the potential is enormous.

In the next part of the blog we will look at the Feasibility and Viability aspects. Stay tuned!

Someone said that the idea is already a Unicorn. What do you think?






Friday, February 23, 2024

$NVDA: When You are The Moat

NVIDIA had its earnings call yesterday for the quarter ending Dec '23. Markets had been muted in anticipation. As expected, the S&P 500 rose by 2.5% on the back of a strong performance and pipeline.

The day after, NVIDIA stock rallied to an all-time high of $800. This gave the company a market cap of USD 2 Tn, surpassing Alphabet, Inc., and making it the fourth largest listed company in the world by market value.

For perspective, consider this: the single-day gain of USD 277 Bn was bigger than the market cap of the largest listed company in India - the world's 4th biggest equity market - and, by one estimate, NVIDIA's market cap was now larger than that of the entire SENSEX of India.

Who knew? Perhaps not even Berkshire Hathaway. (See the shareholding pattern in the links below.)

One of the simplest reasons for the meteoric rise of NVIDIA is that, in the sense Warren Buffett famously used when talking about resilient businesses, NVIDIA provides a moat to the software firms for their business of developing and productising AI and, specifically, GenAI.

Imagine a very, very large Excel sheet, where every cell is linked with other cells using some simple formula. That is, if you make a change in any one cell, the cascading effect will be seen across the entire grid, where every cell would require a re-computation. Such a computation need not be intensive at the cell level, but the sheer quantity (i.e., the number of cells requiring re-computation concurrently) can be overwhelming even for the fastest of Intel CPUs.

AI model training and data retrieval are similar. They require computation that is highly concurrent in nature. A Large Language Model (LLM) for a GenAI such as ChatGPT typically has billions of parameters, where a change in each parameter can affect all the other parameters.

Unprecedented resources are required for a task such as this. A large portion of those resources are GPUs - specialised computer chips designed to handle 'concurrent' calculation. NVIDIA is the current market leader in developing and supplying such GPUs to the resource-hungry AI companies.
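To make the Excel analogy concrete, here is a toy example of the same shape of work: one formula applied to millions of 'cells' at once. It runs on a CPU with NumPy, but it is exactly this "same arithmetic everywhere, simultaneously" workload that GPUs, with their thousands of cores, are built to accelerate:

    import time
    import numpy as np

    cells = np.random.rand(10_000_000)      # ten million "spreadsheet cells"

    # Recompute every cell with the same simple formula, one cell at a time.
    start = time.perf_counter()
    updated_loop = [2 * c + 1 for c in cells]
    loop_seconds = time.perf_counter() - start

    # The same recomputation expressed as one bulk, data-parallel operation.
    start = time.perf_counter()
    updated_vec = 2 * cells + 1
    vec_seconds = time.perf_counter() - start

    print(f"One at a time: {loop_seconds:.2f}s, all at once: {vec_seconds:.4f}s")
    # The bulk path is orders of magnitude faster even on a CPU; a GPU pushes the
    # same idea much further, which is why LLM training gravitates to it.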

Apart from the availability of very large, high-quality datasets and the algorithm development itself, this resource-intensive nature of developing and maintaining LLMs raises barriers to entry in the market. It is a very important force in the industry.

This is the moat.

Now, in May last year, an internal memo from Google leaked. It became famous by the meme "We have no moat, and neither does OpenAI". The memo centred on the availability of large datasets and algorithm development at Google and OpenAI, and how open-source models are catching up. "While our models still hold a slight edge in terms of quality, the gap is closing surprisingly quickly," lamented the memo.

So, when software doesn't hold as a moat, it has to default to hardware.

Therefore, NVIDIA.

But there is a problem.

This moat has a price tag. (Which moat doesn't? Well, those that do not, and are shrouded in causal ambiguity, become your competitive advantage!)

It was perhaps this moat-building that took Sam Altman to Saudi Arabia, and that fuelled the rumours that followed about trillion-dollar investments towards chip development. True or not, the threat perception for proprietary GenAI such as ChatGPT and Gemini from open-source, well-funded models such as Llama 3 by Meta is high.

To cement its entrenched position, NVIDIA is said to be in conversation with Alphabet, Amazon, Meta, Microsoft and OpenAI to build custom chips for their respective needs. It is not difficult to predict how this might play out. There can be only one Wintel-like alliance. Meta has announced an intent to purchase up to 350,000 NVIDIA H100 GPUs, taking its total stockpile to 600,000 GPUs. At a discounted rate of USD 25,000 apiece, that is about USD 15 Bn for the boxes alone (600,000 x USD 25,000).

As an analyst observed, "The people who made the most money in the Gold Rush of mid-1800s were the ones providing the tools to get the job done, and not those hunting for the precious metal. NVIDIA is effectively playing the same role today in this tech revolution." 

Therefore, what the moat is guarding is an entirely separate issue. 

In the next post we will look into the hype, the concentration of power, de-centralization, and the true democratisation of AI.

Did you read anything interesting this week on AI? Would love to know. Drop in a comment!

 

Moat and the Gold Rush: A GPU moat guarding the castle of shovel-makers during the Gold Rush of the mid-1800s.

References and Further reading:

Thursday, December 28, 2023

The Independent Directors at OpenAI

Sam Altman was the CEO and Greg Brockman was the chairman of the board at OpenAI.org, the parent company, which is registered as a not-for-profit organization in the US under section 501(c)(3).

On 17 Nov 2023 both of them were fired by the Independent Directors of the board. This post talks about the 4-day drama that ensued at the back of these events, focusing on the role of Independent Directors. (Try here for a related earlier post.)

One year ago the company launched ChatGPT, the Large Language Model that rose to prominence with its Generative AI capabilities ("GPT", or Generative Pre-trained Transformer) and its human-like responses and interactive interface ("Chat"). At launch, ChatGPT was based on the GPT-3.5 series. The launch took the internet by storm as Microsoft unveiled its commercial partnership with the firm, and its global marketing machine geared into action.


To accommodate this new profit-making "partnership" endeavor, the firm came up with another for-profit arm of OpenAI called "OpenAI Global LLC". There was already a "capped" for-profit subsidiary for fundraising et al. However, the supervisory board remained the same as before, and thereby the corporate governance became hybrid – the same 5-member board now oversaw all three entities of the company.


Elon Musk claims to have co-founded OpenAI in 2015 "to counter the AI dominance that Google had at that time". He claims that nearly 66% of the planet's top AI talent had been cornered by Google at that time, but that the Google founders did not have AI safety as their priority. The intention was to create an open-source AI rival to Google's DeepMind, and hence the name "Open"AI.


Over time, the corporate structure of OpenAI has evolved, and it has the following key constituents at present:

  • OpenAI (Not-for-profit): The main governing body that is (still) responsible for the overarching decisions of the entire group.
  • OpenAI (For-profit subsidiary): For fund-raising and other commercial purposes this subsidiary was created. It also helped in talent acquisition in the competitive AI market.
  • OpenAI LP (now OpenAI Global LLC): One of the unique features of the OpenAI corporate structure is this Limited Partnership entity. It has a hybrid structure and is a "capped profit" company. It keeps a check on "profit chasing" by investors (and employees) by capping their returns at 100x of their initial investments (a toy sketch of the cap follows this list).
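The sketch below shows how such a 100x cap would behave; the function, the flow-back assumption and the figures are illustrative, not OpenAI's actual terms:

    def investor_payout(initial_investment: float, gross_return: float,
                        cap_multiple: float = 100.0) -> float:
        """Amount an investor can take out under a capped-profit structure:
        anything above cap_multiple x the initial investment is assumed to stay
        with the controlling not-for-profit (illustrative simplification)."""
        return min(gross_return, cap_multiple * initial_investment)

    # A $10M investment that grows 500x still pays out only $1B (the 100x cap).
    print(investor_payout(10_000_000, 5_000_000_000))   # -> 1000000000.0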

But still, the structure did not have room for Microsoft, Inc., which had substantial investments and pledges, totalling USD 13 Bn+, with OpenAI. Microsoft does not have direct control or a board presence. (Though this should be more of a worry for the shareholders of Microsoft. Against the backdrop of this OpenAI drama, during the Q4 2023 earnings call, Satya Nadella said, "The approach we are always going to take is a broad tent approach".)


The Independent Directors of OpenAI.org (17 Nov 2023)


On November 17, the independent directors of the board - Helen Toner, Adam D'Angelo and Tasha McCauley - along with co-founder Ilya Sutskever, removed Altman as the CEO and Brockman as the Chair of OpenAI. The reason cited was a lack of confidence and candidness. No details have been made public. See their brief bios at the end of this post.


After 4 days of internal churn and external mayhem, Sam Altman returned to OpenAI as the CEO. Brockman also returned, as President. Except for D'Angelo, the entire board was replaced.



Let’s delve into the role of Independent Directors and related questions that surrounded the drama:


Whose interest should the Independent Directors have protected? The not-for-profit entity or the for-profit one?


Independent directors on the board of OpenAI should primarily protect the interests of not-for-profit OpenAI.org. This aligns with the original mission of OpenAI to ensure AI benefits all of humanity.


The for-profit arm, OpenAI LP (now OpenAI Global LLC), is a mechanism to attract capital and talent, but its activities should align with the overarching goals of OpenAI.org.

The governance structure must ensure that the profit-driven motives of OpenAI Global LLC do not overshadow the ethical and broader human-centric objectives of OpenAI.org.


Was the hybrid governance model a wrong choice to begin with? How could the Board ensure optimal performance given the challenging nature of the model?


The hybrid governance model was innovative but complex. It aimed to balance ethical AI development with a competitive market presence. An industry observer commented that the crisis of Altman's ouster would not have come to pass had someone like Vinod Khosla been on the board.


Let's also remind ourselves of Eric Schmidt, Google's CEO from 2001 to 2011 (and subsequently chair of the board until 2017). During his time as the CEO, Google, which was but a search algorithm at the time, came to transform the world of the Internet as we know it. Schmidt was 18 years senior to Google founders Brin and Page, and he famously described his role with them as that of "adult supervision".


In addition to the missing oversight of seasoned leadership, the criticism of this model centers on potential conflicts between profit motives and ethical AI development.


After Sam Altman's return as CEO (but without a board membership, at least for now), and the constitution of a new board of directors, better governance can be provided by:

  • Ensuring clear, transparent governance structures that align both entities' goals.
  • Implementing robust checks and balances to manage conflicts of interest.
  • Fostering a culture of ethical AI development, even within the for-profit arm.

Endurance of the ‘Startup’ and Influence of ‘Sharks’?

  • OpenAI, initially a startup, has rapidly evolved, partly due to significant investments from companies like Microsoft.
  • The influence of large tech firms, referred to as ‘sharks’, can be double-edged. While they provide necessary capital and resources, their business interests might conflict with OpenAI's original mission.
  • The longevity of OpenAI's startup ethos depends on its ability to maintain a balance between innovation, ethical AI development, and commercial pressures.
  • The governance structure and board's role are crucial in navigating these challenges and ensuring OpenAI's mission isn't compromised.

In conclusion, the governance of OpenAI, especially in light of recent events in Nov 2023, highlights the complexities of balancing ethical AI development with commercial success. The independent directors and the new board have a critical role in maintaining this balance and ensuring the organization's original mission is upheld.


Sources and further reading:

  • OpenAI announces leadership transition (Removal of Sam and Brockman) (try here.)
  • Elon Musk – comments on Founding OpenAI (try here.)
  • CNBC: Microsoft CEO Nadella says OpenAI governance needs to change no matter where Altman ends up (try here.)
  • Microsoft, Inc earnings call - Nadella hedges against the big bet on OpenAI (try here.)
  • OpenAI Blog: Moving AI Governance Forward (try here.)
  • Eric Schmidt on Twitter, “Day-to-day adult supervision no longer needed!  http://goo.gl/zC89p” (try here.)

Brief bios of the Board of Directors at OpenAI.org as of 17 Nov 2023:

Ilya Sutskever is the co-founder and chief scientist of OpenAI who leads the research in the artificial intelligence company. He has also been one of the architects behind the ChatGPT models. Sutskever is a Russian-born Israeli-Canadian who specialises in machine learning.

Prior to his involvement in OpenAI, Sutskever was the co-inventor of AlexNet and Sequence to Sequence Learning. He is also amongst the co-authors of the AlphaGo paper.


Helen Toner is the Director of Strategy and Foundational Research Grants at Georgetown’s Center for Security and Emerging Technology (CSET). She has previously worked as the Senior Research Analyst at Open Philanthropy.

She is a member of the board of directors at OpenAI. Toner holds a master's degree in Security Studies from Georgetown, as well as a Bachelor of Science degree in Chemical Engineering and a Diploma in Languages from the University of Melbourne.


Tasha McCauley is an independent director at the company and has been recognised for her work as a technology entrepreneur in Los Angeles. McCauley also enjoys a fan following as she is the spouse of American actor Joseph Gordon-Levitt.

She is the chief executive officer (CEO) at GeoSim Systems, a pioneering company developing 3D city modelling systems. She is also the co-founder of Fellow Robots. She holds an MBA degree from the USC Marshall School of Business.


Adam D'Angelo is an American internet entrepreneur known for co-founding and directing Quora. Earlier, D'Angelo held a pivotal position at Facebook, serving as its Chief Technology Officer (CTO), where he oversaw new product development and managed the engineering team.

D'Angelo earned a Bachelor's degree in Computer Science from the California Institute of Technology in 2002. After working for Facebook, he left to found Quora in 2009. Later, in 2018, he joined the board of OpenAI.


Tuesday, December 19, 2023