Saturday, March 2, 2024

How LLMs Could Change Smartphones

It remains unclear how generative artificial intelligence (large language models) will change our notion of what a “smartphone” can do. One reason is that LLMs can be likened to platforms, operating systems, applications, features or “simply” interfaces between humans and computer resources. 


Operating systems have typically been understood as managing and controlling computer resources and interactions, sitting between the hardware and the applications. LLMs might provide structurally similar functions, mediating between machines and their users. 


That role is closely related in some ways to the human-to-machine interface, but is much more pervasive in anticipating user needs. 


On the other hand, an LLM might sometimes function almost like an app, as when it supports image processing and editing for smartphone cameras or provides on-the-fly translation. In most cases, LLMs might support the features and operation of almost any other software. In such cases the LLM will appear to be a feature or capability. 


| Role | Function | Example Use Cases |
| --- | --- | --- |
| Operating System | Personalizes and adapts to user behavior and preferences. | Proactive suggestions for apps, settings and actions based on context and usage patterns; personalized notifications and reminders tailored to individual needs; adaptive user interface elements that adjust to user preferences and accessibility needs. |
| Application | Generates content, automates tasks and offers creative tools within specific apps. | Social media: AI-powered content creation (captions, images, even videos) based on user input or prompts. Productivity: generating reports, summarizing documents and drafting emails based on user instructions. Education: personalized learning content and practice exercises tailored to individual learning styles and knowledge gaps. Entertainment: creating short stories, music or games based on user preferences and prompts. |
| Computer-to-Human Interface | Enables natural language interaction and voice control with devices. | Conversational virtual assistants that understand complex queries and requests, providing relevant information and completing tasks; real-time language translation for spoken and written communication; voice-controlled interfaces that interpret natural language commands and execute actions. |
| Platform | Provides foundational tools and frameworks for developers to integrate generative AI into their apps. | APIs for tasks like text generation, image creation and voice synthesis; pre-trained AI models for specific domains (e.g., healthcare, finance) that developers can leverage; standardized security and ethical frameworks for responsible development and deployment of generative AI on smartphones. |
| Capability | Enhances existing smartphone features and adds new functionality using generative AI. | AI-powered image and video editing with filters, effects and automatic enhancements; personalized fitness coaching with workout plans generated for individual goals and capabilities; real-time language translation during phone calls and video conferences; smart assistants that manage personal finances, automate repetitive tasks and optimize daily routines. |


All of that will eventually come into play in creating the market for “AI smartphones.” What we do not yet know is how consumers will come to understand what an “AI smartphone” must be capable of doing. 


Friday, June 9, 2023

For Vision Pro, Category Reshaping or Creation Might Have Equal Outcomes

The debate over what to call Apple’s Vision Pro--“spatial computing” device or “virtual reality headset”--reflects an effort by Apple to create a new product category, distinct from extended reality “headsets.”


Spatial computing might be defined as devices or technologies that allow users to interact with computers in a three-dimensional space. 


This might be done through the use of head-mounted displays, augmented reality glasses or other devices that track the user's position and orientation in space.


Using that definition, all xR headsets are spatial computing devices. Apple is trying not to create a product within the existing category, but to create a new category. 


Some might argue this tactic is tried and true marketing, if difficult. 


source: SaaStr, Andres Botero 


Category creation essentially entails creating a business moat around a new product, separating it from possible competitors. It means creating “different” rather than “better” products.


So Apple hopes “spatial computing” will separate it from “VR headset” competitors. Financial rewards tend to follow when a firm succeeds in creating a new product category, often because the firm comes to dominate that category.


We do not yet know whether spatial computing will develop as a way to handle person-to-machine interfaces on a generalized or specialized basis, beyond those instances where three-dimensional space is part of the computation.


For example, spatial computing could become important for gaming, internet of things or training, without replacing other computing activities, such as productivity suites, that are two-dimensional. 


Probably most observers would presently argue that spatial computing is a specialized use case that augments, and does not replace, many other general computing use cases. It is quite unclear whether three-dimensional capabilities add much value for word processing, messaging, presentation or spreadsheet creation and use, for example. 


Apple might see a way to create a path towards an augmentation of its core product lines rather than a replacement. In other words, limited success rather than “full” success might be the desired outcome, somewhat on the pattern of the Apple Watch extending the value of the iPhone, rather than replacing that device. 


Many will remain unconvinced Apple can actually create a new category of computing appliances, but arguably many more will admit the possibility that Apple eventually could reshape the existing category of VR headsets. 


After all, the iPod reshaped the mobile music playback device market and the iPhone reshaped the mobile phone market. Some might also include the personal computer and tablet as among product categories Apple reshaped, if not created. 


source: Clear Purpose


As a practical matter, it might not much matter whether Apple creates a new category or reshapes an existing one. The iPod recreated the digital music player market, as the iPhone recreated the mobile phone market. The market outcomes might not have been markedly different from creating a whole new category of devices. 


Monday, June 5, 2023

Edge Computing Could Drive More Service Provider Revenue Than IoT or Private Networks

Some might argue that AT&T and other tier-one telcos are wasting their time on edge computing, internet of things and private networks unless each of those services can grow to generate many billions of dollars worth of revenue. 


If, for example, those three services together generated as much as $15 billion in revenue for AT&T by 2030 (perhaps an optimistic scenario), those three services collectively would represent about 10 percent of AT&T's total projected revenue, with edge computing the biggest contributor at about five percent. 


| Product | Expected percentage of total AT&T revenue |
| --- | --- |
| Edge computing | 5% |
| Internet of things | 3% |
| Private networks | 2% |
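The arithmetic behind that hypothetical scenario can be sketched quickly. The figures below are only the illustrative assumptions from the text ($15 billion total, roughly 10 percent of projected revenue), not actual AT&T forecasts:

```python
# Hypothetical 2030 scenario: $15B across three services is said to be
# about 10% of AT&T's total projected revenue, implying ~$150B total.
total_revenue = 150e9

shares = {
    "edge computing": 0.05,
    "internet of things": 0.03,
    "private networks": 0.02,
}

for name, share in shares.items():
    print(f"{name}: ${share * total_revenue / 1e9:.1f}B")

# The three shares together account for roughly the full $15B.
assert abs(sum(shares.values()) * total_revenue - 15e9) < 1.0
```

On these assumptions, edge computing alone would contribute about $7.5 billion, half of the three-service total.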


With the caveat that a general pattern rarely applies perfectly to any particular service provider, it might be suggested that a service provider owning both fixed and mobile assets could generate revenue on about this pattern: 

| Product or Service | Mobile | Fixed Network |
| --- | --- | --- |
| Mobile voice | 30% | 20% |
| Messaging | 10% | 5% |
| Internet access | 20% | 25% |
| Device sales (business and consumer) | 20% | 10% |
| Fixed network consumer voice | 10% | 20% |
| Home broadband | 5% | 25% |
| Video services | 5% | 5% |
| Business services | 10% | 10% |


Should that pattern generally hold, and to the extent we can isolate edge computing, IoT revenues and private networks from other business services, edge computing, for example, could represent as much revenue for a mobile operator as home broadband or video services.


Edge computing could rival fixed network operator revenue from video subscriptions.


Thursday, June 1, 2023

Will 70% of AI Efforts Fail?

Artificial intelligence, generative AI, large language models, ChatGPT and Bard are the latest examples of new technologies most observers believe will have a measurable positive impact on many industries and processes. 


On the other hand, maybe we should stop confusing the matter by insisting that applied AI is going to have outcomes different in quantity, if not quality, from the earlier waves of “digital transformation” and all the computing innovations of the past 50 years. 


Which is to say, applied AI is going to take some time to massively transform processes, even if we believe it will do so quickly. And we need to be ready for as much as 70 percent of efforts to fail. 


That is simply because 70 percent or more of all information technology programs fail.


“74 percent of cloud-related transformations fail to capture expected savings or business value,” say McKinsey consultants Matthias Kässer, Wolf Richter, Gundbert Scherf and Christoph Schrey. 


Those results would not be unfamiliar to anyone who follows success rates of information technology initiatives, where the rule of thumb is that 70 percent of projects fail in some way.


Of the $1.3 trillion that was spent on digital transformation--using digital technologies to create new or modify existing business processes--in 2018, it is estimated that $900 billion went to waste, say Ed Lam, CFO of Li & Fung; Kirk Girard, former Director of Planning and Development in Santa Clara County; and Vernon Irvin, president of Government, Education, and Mid & Small Business at Lumen Technologies. 


That should not come as a surprise, as historically, most big information technology projects fail. BCG research suggests that 70 percent of digital transformations fall short of their objectives. 


From 2003 to 2012, only 6.4 percent of federal IT projects with $10 million or more in labor costs were successful, according to a study by Standish, noted by Brookings.


IT project success rates range between 28 percent and 30 percent, Standish also notes. The World Bank has estimated that large-scale information and communication projects (each worth over U.S. $6 million) fail or partially fail at a rate of 71 percent. 


McKinsey says that big IT projects also often run over budget. Roughly half of all large IT projects—defined as those with initial price tags exceeding $15 million—run over budget. On average, large IT projects run 45 percent over budget and seven percent over time, while delivering 56 percent less value than predicted, McKinsey says. 


Beyond IT, virtually all efforts at organizational change arguably also fail. The rule of thumb is that 70 percent of organizational change programs fail, in part or completely. 


There is a reason for that experience. Assume you propose some change that requires just two approvals to proceed, with the odds of approval at 50 percent for each step. The odds of getting “yes” decisions in a two-step process are about 25 percent (.5x.5=.25). 


In other words, if only two approvals are required to make any change, and the odds of success are 50-50 for each stage, the odds of success are one in four. 


The odds of success get longer for any change process that actually requires multiple approvals. 


Assume there are five sets of approvals, and that your odds of success are fairly high--about 66 percent--at each stage. In that case, your odds of success are about one in eight for any change that requires five key approvals ((2/3)^5 = 32/243, or about 13 percent). 


In a more realistic scenario where odds of approval at any key chokepoint are 50 percent, and there are 15 such approval gates, the odds of success are about 0.0000305. 
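The compounding described above is just repeated multiplication of independent probabilities, which a few lines can verify:

```python
def approval_odds(p_per_gate: float, gates: int) -> float:
    """Probability that a proposal clears every approval gate,
    assuming each decision is independent with the same odds."""
    return p_per_gate ** gates

# Two gates at 50% each: one chance in four.
print(approval_odds(0.5, 2))        # 0.25
# Five gates, even at favorable 2/3 odds each: about one in eight.
print(approval_odds(2 / 3, 5))      # ~0.13 (32/243)
# Fifteen gates at 50% each: about 0.0000305.
print(approval_odds(0.5, 15))
```

The lesson is structural: adding gates multiplies failure odds, so even generous per-gate approval rates produce long odds overall.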


source: John Troller 


So it is not digital transformation or AI specifically which tends to fail. Most big IT projects fail.


If one defines AI as integrated “into all areas of a business resulting in fundamental changes to how businesses operate and how they deliver value to customers,” you can see why it is so hard to achieve early progress. 


Massive AI adoption affecting “all” of the business processes will take longer than we think. 

 

The e-conomy 2022 report produced by Bain, Google and Temasek provides an example of why massive new technology benefits are so hard to realize and measure. Literally “all” of a business--all processes and economic or social outcomes--must be changed to take advantage of a truly significant new technology.


We should not expect people and organizations to stop talking about “AI” impact.  But maybe we shouldn’t listen quite so much to claims that important outcomes will happen quite soon.


“Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years” is a quote whose provenance is unknown, though some attribute it to Stanford computer scientist Roy Amara. Some people call it “Gates’ Law.”


It will prove useful to keep that in mind as the hype over artificial intelligence, ChatGPT, large language models and generative AI eventually cools. It will. Outcomes will likely prove less than we expect early on. 


The expectation for virtually all technology forecasts is that actual adoption tends to resemble an S curve, with slow adoption at first, then eventually rapid adoption by users and finally market saturation.   


That sigmoid curve describes product life cycles, suggests how business strategy changes depending on where on any single S curve a product happens to be, and has implications for innovation and start-up strategy as well. 


source: Semantic Scholar 


Some say S curves explain overall market development, customer adoption, product usage by individual customers, sales productivity, developer productivity and sometimes investor interest. It often is used to describe adoption rates of new services and technologies, including the notion of non-linear change rates and inflection points in the adoption of consumer products and technologies.


In mathematics, the S curve is a sigmoid function. The Gompertz function, one such sigmoid, can be used to predict new technology adoption and is related to the Bass diffusion model.
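A minimal sketch of the logistic form of the S curve makes the shape concrete. The midpoint and growth-rate values below are arbitrary illustrative assumptions, not drawn from any real market:

```python
import math

def logistic_adoption(t, midpoint, rate, saturation=1.0):
    """Logistic (sigmoid) S curve: share of eventual adopters at time t.
    Slow early growth, an inflection at the midpoint, then saturation."""
    return saturation / (1.0 + math.exp(-rate * (t - midpoint)))

# Hypothetical product: adoption midpoint at year 10, growth rate 0.6/year.
for year in (0, 5, 10, 15, 20):
    print(year, round(logistic_adoption(year, midpoint=10, rate=0.6), 3))
```

The printed shares start near zero, cross 50 percent exactly at the midpoint, and approach saturation by year 20: the non-linear pattern the text describes.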


Another key observation is that some products or technologies can take decades to reach mass adoption.


It also can take decades before a successful innovation actually reaches commercialization. The next big thing will have first been talked about roughly 30 years earlier, says technologist Greg Satell. IBM’s Arthur Samuel coined the term “machine learning” in 1959, for example, and machine learning is only now coming into widespread use. 


Many times, reaping the full benefits of a major new technology can take 20 to 30 years. Alexander Fleming discovered penicillin in 1928, but it did not reach the market until 1945, nearly 20 years later.


Electricity did not have a measurable impact on the economy until the early 1920s, 40 years after Edison’s plant, it can be argued.


It wasn’t until the late 1990s, about 30 years after 1968, that computers had a measurable effect on the U.S. economy, many would note.



source: Wikipedia


The S curve is related to the product life cycle, as well. 


Another key principle is that successive product S curves are the pattern. A firm or an industry has to begin work on the next generation of products while existing products are still near peak levels. 


source: Strategic Thinker


There are other useful predictions one can make when using S curves. Suppliers in new markets often want to know “when” an innovation will “cross the chasm” and be adopted by the mass market. The S curve helps there as well. 


Innovations reach an adoption inflection point at around 10 percent. For those of you familiar with the notion of “crossing the chasm,” the inflection point happens when “early adopters” drive the market. The chasm is crossed at perhaps 15 percent of persons, according to technology theorist Geoffrey Moore.



For most consumer technology products, the chasm gets crossed at about 10 percent household adoption. Professor Geoffrey Moore does not use a household definition, but focuses on individuals. 

source: Medium


And that is why the saying “most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years” is so relevant for technology products. Linear demand is not the pattern. 


One has to assume some form of exponential or non-linear growth. And we tend to underestimate the gestation time required for some innovations, such as machine learning or artificial intelligence. 


Other processes, such as computing power, bandwidth prices or end user bandwidth consumption, are more linear. But the impact of those linear functions also tends to be non-linear. 


Each deployed use case, capability or function creates a greater surface for additional innovations. Futurist Ray Kurzweil called this the law of accelerating returns. Rates of change are not linear because positive feedback loops exist.


source: Ray Kurzweil  


Each innovation leads to further innovations and the cumulative effect is exponential. 


Think about ecosystems and network effects. Each new applied innovation becomes a new participant in an ecosystem. And as the number of participants grows, so do the possible interconnections between the discrete nodes.  

source: Linked Stars Blog 
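The combinatorics behind that claim are easy to sketch: among n participants, the number of distinct pairwise connections is n(n-1)/2, so each new node adds more potential links than the one before it:

```python
def possible_links(n: int) -> int:
    """Distinct pairwise connections among n nodes: n * (n - 1) / 2."""
    return n * (n - 1) // 2

# Each added participant creates more new links than the last.
for n in (2, 5, 10, 100):
    print(n, possible_links(n))   # 1, 10, 45, 4950
```

Going from 10 to 100 participants multiplies the nodes by 10 but the possible interconnections by 110: the super-linear growth that underlies network effects.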


Think of that as analogous to the way people can use one particular innovation to create another adjacent innovation. When A exists, then B can be created. When A and B exist, then C and D and E and F are possible, as existing things become the basis for creating yet other new things. 


So we often find that progress is slower than we expect, at first. But later, change seems much faster. And that is because non-linear change is the norm for technology products.