A while back someone here on Hacker News made a pretty insightful comment that as great of a designer as Jony Ive is, a large part of his success is owed to the fact that he had an "editor" in the form of Steve Jobs. Once Jobs passed, he no longer really had an editor.
It remains to be seen whether Sam Altman / OpenAI in general will be a good editor.
That's the elusive trick of "leadership" that's so hard to measure: great leaders turn talented (and even not-so-talented) people into success stories. Bad "leaders" can manage the most talented team on the planet into the ground.
Exactly. Also worth noting what happened with Ron Johnson (Apple Stores) after he left Apple (and was no longer surrounded by Jobs or others who worked at Apple):
I am wondering to what extent 'key man' insurance is needed. That's a big purchase to be riding on one man, essentially (yes, they are getting engineers and others, but Jony seems to be the big-ticket item for the purchase).
I don't think Ron Johnson is really analogous to Ive.
Ron Johnson had his greatest success where he was selling fundamentally desirable and great products. I think you would have to be pretty shitty at retail to not do a good job selling iPods and iPhones. His two subsequent endeavors, JCPenney and Enjoy, were complete flops. It turns out selling middle-market goods is just really f'ing hard.
Ive, on the other hand, I think is pretty universally recognized as a design genius who was directly responsible for the designs of some of the most important consumer products of the past few decades. Yes, it does seem like Jobs was a critical editor that tempered the worst of Ive's "form over function" tendencies like the butterfly keyboard and removing magsafe, but I think it's fair to say there wouldn't have been an iPhone as it was originally released without Ive.
I feel like Apple still would have had a pretty similar in-store experience even if someone else besides Johnson originally launched it.
Johnson thought he was smarter than everyone else. His success at Apple reshaping the retail experience was a kind of a one-hit-wonder that he then thought would simply be a blueprint for any retail company.
He never had any success post-Apple, like you say, but it wasn't for lack of 'key man' insurance. For me, I see it as a guy who found something that worked smashingly, so he just assumed it would work everywhere else.
The stuff he pulled at JCPenney is a master class in what NOT to do in business:
After his success at Apple and Target, Johnson was hired as chief executive officer by JCPenney in November 2011, succeeding Mike Ullman, who had been CEO for the preceding seven years. Ullman then was chairman of the board of directors, but was relieved of his duties in January 2013. Bill Ackman, a JCPenney board member and head of hedge fund Pershing Square supported bringing in Johnson to shake up the store's stodgy image and attract new customers. Johnson was given $52.7 million when he joined JCPenney, and he made a $50 million personal investment in the company. After being hired, Johnson tapped Michael Kramer, an Apple Store veteran, as chief operating officer while firing many existing JCPenney executives.[11][12][13]
When Johnson announced his transformation vision in late January 2012, JCPenney's stock rose 24 percent to $43.[14] Johnson's actual execution, however, was described as "one of the most aggressively unsuccessful tenures in retail history". While his rebranding effort was ambitious, he was said to have "had no idea about allocating and conserving resources and core customers. He made promises neither his stores nor his cash flows would allow him to keep". Similar to what he had done at Apple, Johnson did not consider a staged roll-out, instead he "immediately rejected everything existing customers believed about the chain and stuffed it in their faces" with the first major TV ad campaign under his watch. Johnson defended his strategy, saying that "testing would have been impossible because the company needed quick results and that if he hadn’t taken a strong stance against discounting, he would not have been able to get new, stylish brands on board."[12][14]
Many of the initiatives that were successful at the Apple Stores, for instance the "thought that people would show up in stores because they were fun places to hang out, and that they would buy things listed at full-but-fair price", did not work for the JCPenney brand and ended up alienating its customers, who were used to heavy discounting. By eliminating the thrill of pursuing markdowns, the "fair and square every day" pricing strategy disenfranchised JCPenney's traditional customer base.[15] Johnson himself was said "to have a disdain for JCPenney's traditional customer base." When shoppers were not reacting positively to the disappearance of coupons and sales, Johnson did not blame the new policies. Instead, he offered the assessment that customers needed to be "educated" as to how the new pricing strategy worked. He also likened the coupons beloved by so many core shoppers to drugs that customers needed to be weaned off.[11][12][13] While head of JCPenney, Johnson continued to live in California and commuted to work in Plano, Texas by private jet several days a week.[16]
Throughout 2012, sales continued to sag dramatically. In the fourth quarter of the 2012 fiscal year, same-store sales dropped 32%, which led some to call it "the worst quarter in retail history."[17] On April 8, 2013, he was fired as the CEO of JCPenney and replaced by his predecessor, Mike Ullman.[18][19]
He had no idea about branding. In retail, you can't just sell generic products, available on Amazon, at maximum profit. You actually need a differentiated brand.
For comparison, during that same time period, the retail successes were the designer collaborations, like Versace x H&M or Target x Rodarte, etc…
All Johnson had to do was bring in some designer collaborations…
OpenAI has recruited Jony Ive, the designer behind Apple’s iPhone, to lead a new hardware project for the artificial intelligence company that makes ChatGPT.
..
OpenAI said it already owns a 23% stake in io from a prior collaborative agreement signed late last year. It says it will now pay $5 billion in equity for the acquisition.
..
OpenAI said Ive will not become an OpenAI employee and LoveFrom will remain independent but “will assume deep design and creative responsibilities across OpenAI and io.”
Probably not, given that there is nothing inherently unique about either of them except being born into a privileged class.
Their rise is coupled to a broader social movement that I think a majority are inclined to move on from.
If Ive had the reputation in the industry he once had, he would not be partnering with a company dependent on MS to survive. Sam is playing a marketing angle here. OpenAI's geniuses have left. They're just another tech company, trying to rekindle/cling to '00s-'10s hype, since their sense of identity and success are forever changed by all the attention.
Ive doesn’t seem to have come from an especially privileged background (beyond the obvious good fortune of being a white guy in good health born in a developed English-speaking country in the post-war era etc.) Middle-class certainly, but not markedly upper-middle-class or a posh boy. Though I suppose I’m in danger of sounding like pg talking about the Collisons as if they were poor boys made good.
Relative to the sweatshop workers he relied on to avoid sewing his own shirts.
For the same reason we don't need 9,000 operating systems (it's trivial to copy-paste), we don't need Ive and Altman. The main audience is Millennials and younger, and they know how this all works. There's no generating hype when it's more of the same; our brains normalize and simply cannot find novelty in it.
SaaS competition was faked through cheap financing, since every solution can be tuned for performance and features copy-pasted. It's software, after all. We weren't trying to be the first to save a bunch of stranded people. Just to unicorn before the hype bubble for your business popped.
This forum doesn’t want to believe this because their identity is wrapped up in it. But I talk to people outside software, and very few feel they get real value out of all this technology. That ultimately it’s just been a big distraction from their lives.
“I hate programmers. They make everything so complicated.” The Silicon Valley TV show is how people see software engineers: asocial children.
They don't doubt there's value in medical research and real-world logistics using software, but they have a sense it's serving software company employees more than humanity at this point.
And politics reflects public sentiment. Software workers no longer have the same tax write-off benefits as other classes of workers. Along with the end of ZIRP, these moves are part of a global pushback, largely discussed away from the public, against tech bros running the world.
Just need a way to talk to ChatGPT anytime. Microphone, speaker and permanent connection to ChatGPT. That’s all you need: io
One need is being able to talk to ChatGPT in a whisper or silent voice… so you can do it in public. I don't think that comes from them, but it will be big when it does. Much easier than brain implants! In an ear device, you need enough data from listening to the muscles and the sounds together; then you can just listen to the muscles…
I assume they want to have their own OS that is, essentially, their models in the cloud.
So, here are my specific predictions:
1. Subvocalization-sensing earbuds that detect "silent speech" through jaw/ear-canal muscle movements (silently talk to AI anytime)
2. An AI OS laptop: the model is the interface
3. A minimal pocket device where most of the AI OS happens in the cloud
4. An energy-efficient chip that runs powerful local AI, to put in any physical object
5. … like a clip. Something that attaches to clothes.
6. A perfect flat glass tablet like in the movies (I hope not)
7. Ambient intelligent awareness through household objects with microphones, sensors, speakers, screens
The form factor that suggests is an AR headset. Google, Meta, and others have those. They're all flops. Too bulky.
Carmack has said that for VR/AR to get any traction, the headgear needs to come down to swim goggle size, and to go mainstream, it has to come down to eyeglass size. He's probably right. Ive would be the kind of guy to push in that direction.
> Carmack has said that for VR/AR to get any traction, the headgear needs to come down to swim goggle size, and to go mainstream, it has to come down to eyeglass size. He's probably right. Ive would be the kind of guy to push in that direction.
I agree with the first two sentences, but not the last. Everyone and their grandmother knows size and bulkiness are big blockers to VR/AR adoption. But the reason we don't have an Apple Vision Pro in an eyeglasses form factor isn't an issue of design; it's an issue of physics.
Meta seems to have decent success with their Ray-Bans, which can basically do all the "ask AI" use cases, but true VR/AR fundamentally requires much bulkier devices, most of all for battery life.
Right. So the trick is to get people to put up with carrying the necessary hardware around. Ive made iDweebs cool. Even the wired version.
Apple already tried a version of their headgear where an additional belt-mounted box and cable are needed. This was unpopular but necessary. It's up to Ive to make wearing a utility belt cool.
I’ve been using #5 for a few weeks now (Limitless.ai pendant, clips to clothes, records and transcribes everything all day)
It sounds cool, and the idea of asking questions about your day seems like it would be useful, but a few weeks in I find myself forgetting to take it with me. The value just isn't there yet. (And why have a clip-on microphone when everyone already has a microphone in their pocket?)
It’s a cool toy though. Also a creepy toy since it can double as an eavesdropping device.
I have a feeling these AI companies will fall back to selling our data for advertising purposes once they realize their core products aren't valuable enough for consumers to want to pay the cost of them.
I use mic input with the ChatGPT app in public all the time. If you use a low whisper voice and hold the phone close, you can be basically inaudible more than 3 feet away, and the speech-to-text still does a great job.
What exact use cases do I get from being able to talk to chatGPT when I am out in public? I can think of close to 0 value add to have an AI voice in my head when I'm taking a walk in the park or out to dinner.
People stare at their phones while walking, having dinner, and driving. It's not a big leap to imagine replacing that with subvocal conversations with AI.
Having ongoing conversations with a recently-descended primate that needs 8 hours of daily unconsciousness and gets emotionally invested in the opinions of complete strangers sounds extremely depressing tbh.
i already do that with my iphone by mapping the action button to start conversation. if this product isn’t replacing the phone, then it needs to do something my phone (or watch, or glasses) doesn’t do.
A friend of mine is constantly asking it questions every time something comes up. She opens her phone, loads the app, hits the mic button, then listens with the phone to her ear. It would work a lot better as some sort of dedicated device.
I'm as much of a deep AI skeptic as anyone, but I can definitely think of use cases while driving or walking: asking questions about my own schedule, or what people have emailed or asked me for in the last hour, or where I can get something specific to eat nearby, and so on.
Not sure it’s worth the hype but there are use cases. I do think it’s an interesting contrast with crypto, where there aren’t really.
What I want is for it to surface information to me, not me have to query it.
Where is that AI? For example, if I usually eat between 2-4 PM and I'm in the middle of Times Square, start suggesting places to eat based on my transaction history, or the location history of restaurants I frequent. Something like that would be useful.
If I have to ask, I might as well look at my phone most of the time. It'd likely be faster in most cases.
I don't need something like that, where it must be queried to be useful, like asking it to read back my text messages. But I sure would love it if, when my wife messaged me, it was smart enough to play the message unprompted when my headphones are already connected and active.
I don't think this leaves out initial setup. Another source of information: habit observation. If I do something around the same time every day, over and over again, it would be nice if it simply helped me along unless I explicitly interrupt. It should fine-tune itself to observations it makes about my behavior and patterns, as opposed to me interjecting constantly.
The constant need to query for information, rather than having useful information contextually pushed to me, fundamentally limits its utility once the novelty wears off. Without a sufficient complexity threshold (and this assumes accurate information and trust), it's more work to query for things than it is to simply do them.
If you had a butler or PA who's with you all the time, they would know what genre of food you like based on your restaurant visits and your ravings about what you liked. The imaginary AI would have your location history for restaurant visits, your Instagram feed for pictures of foods/review of restaurants, your chat history to see what you've raved about. It would also have big data from other people who have seemingly similar tastes to you, to recommend you the next place to eat.
Obviously, since we're in late-stage capitalism and everything is designed to extract profit out of you, we can't give commercial systems all our private data...
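The habit-triggered suggestion described above (eat-window plus location history) can be sketched as a toy rule. This is just my own illustration, not anyone's actual product; the function name and the shape of the visit log are assumptions:

```python
from collections import Counter
from datetime import datetime

def suggest_restaurant(visit_log, now):
    """Toy proactive suggester: if the current hour falls inside the
    user's habitual eating window (inferred from past visits), push the
    most-visited place instead of waiting to be queried.

    visit_log: list of (hour, restaurant) tuples from location history
    (a hypothetical data shape for illustration).
    """
    hours = [h for h, _ in visit_log]
    if not hours:
        return None  # no history yet, nothing to push
    if now.hour not in hours:
        return None  # outside the habit window, stay quiet
    # Rank past venues by visit frequency and push the favorite
    ranked = Counter(r for _, r in visit_log).most_common()
    return ranked[0][0]

log = [(14, "Joe's Pizza"), (15, "Joe's Pizza"), (14, "Katz's")]
print(suggest_restaurant(log, datetime(2024, 5, 1, 14, 30)))  # Joe's Pizza
print(suggest_restaurant(log, datetime(2024, 5, 1, 9, 0)))    # None
```

A real assistant would obviously weight recency, location, and explicit feedback, but the point stands: the trigger is observed habit, not a query.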
For both use cases, I don't see how it would be any different from what anyone can currently do on their mobile device. And even if they were novel use cases, they are nowhere near solving a need that would cause more than a few hundred people to pay money for a device or service.
I mean, you're both right. Being able to chat and iterate on stuff while I'm driving is both more productive and feels more natural than I expected before I tried it. It wasn't too far removed from a brainstorming session I'd have with anyone on my team, except it was only me and ChatGPT. So there's probably a whole bunch of similar but adjacent use cases that I haven't even thought of yet.
But... I can already do this! My phone + CarPlay and/or my headphones actually works great. I don't see how a new device adds value or is even desirable. Unless you're going down the Google Glass/Meta Rayban path of wanting to capture video and other environmental detail you can't if my phone is in my pocket.
You can participate in more highly intelligent discussions: great for a dinner party, a date, or an interview. Everywhere you go, you can know it all; many use cases. The people who don't do it will be at a severe disadvantage.
So other people sitting at the table with you are supposed to be impressed or interested in you regurgitating words said to you by an AI voice in your head? Honestly, if I went on a date and the other person was also having a conversation with an AI chatbot, I'd run away as fast as possible, because that is insane.
It is vastly different, because you aren't presenting anything novel or interesting. You'd just be parroting a computer program and acting like it's a substitute for personality, wit, and character.
More than anything it would broadcast a fear of opening up and showing who you really are to other people. So instead of risking saying something silly, you replace your sense of identity with a generic chatbot. Super cool.
I imagine an LLM that has more realtime capabilities and can respond (or tell you what to say) on the fly would be a more fascinating conversation partner (well, on the surface!) than what you're depicting: a person who'd stop a conversation to ask the phone, and then just read the LLM's response.
I've read that interviews with Stephen Hawking were excruciating because he'd take many minutes to "type" up his responses. Of course people stayed engaged, because it's Hawking and the answer came from his brain; someone pausing to interact with an LLM would be a bore indeed.
I think you're failing to understand that the human part of conversations is what makes them worthwhile. Otherwise you might as well just be talking by yourself to a circuit board.
It doesn’t, it shows something that appears to be intelligence. Although it isn’t surprising that people all in on LLMs might not see any distinction there.
I can't help but wonder, though... are we slaves to productivity?
Why do we need this omnipresent help? I'm sure some people do. If you're the CEO of a large company, or a doctor seeing hundreds of patients a week, etc.
But me? An average middle-aged guy with a 9-5 white-collar job at a healthcare company?
I enjoy doing some things that are 'inefficient'. Is that really a problem?
It's been <20y since YouTube was acquired for $1B, which felt like an imaginary valuation at the time, but it was for a company that actually had traction with users.
Inflation-adjusted, this acquisition is worth 4x that for… vibes from a guy who led a famous team a long time ago?
Money is real. Privately held company valuations are not. This is an all-stock deal, so what it's "worth" is entirely in the eye of the beholder. Its value rises and falls based on how long the hype train can keep running, or how much they can offload to Masayoshi Son and Arab Gulf sovereign funds.
So does OpenAI know how to widen the context window without it costing more? Otherwise Google wins, again. And this is all boring. Gemini 2.5 Pro Preview, where you can just insert all the files you have and it actually doesn't compress them and holds them in memory, is just what you want. All the compression tricks etc. really are shit compared to that. 32k input tokens is a joke once you've tried this.
As in: bearish on OpenAI if they don't offer cheaper 10M context soonish. Google will.
If raw AI power is the key, Google seems to be in pole position from here on out. They can make their own TPUs and have their own data centers. No need to "Stargate" with Oracle and SoftBank in tow. Google also has Android, YouTube, and G Suite.
However, OpenAI has been going down the product route for a few years now. After a spate of high-profile research exits, it is clear Altman has purged the ranks and can now focus on product development.
So if product is a sufficient USP, and if Altman can deliver a better product, they still have a chance. I guess that is where Ive comes into the picture. And Google is notoriously bad at internally developed product.
A lot of ifs there. When judging how likely Altman would be to deliver a better product, what other product has he delivered besides an orb that scans your eyeballs in exchange for crypto?
GPT-4.1 in the API already provides 1 million tokens. Anthropic's enterprise version does too.
I’m not sure if this is a software or a hardware (computer) problem.
Losing what, exactly? I do notice they seem to lose the hype battle—and my perception is that OpenAI acquiring Jony Ive’s startup gets more traction than Google Nobels—but I think with their foundation they can play on a different time horizon, so I am not sure how much they should care about that.
Buying a company without a product (or anything announced), without a website, and with its founder not even joining after the acquisition. So, not really an acqui-hire either.
I am sure this aligns with the non-profit part of OpenAI, whose board allegedly has influence over where the company is heading.
First Windsurf and now this. OpenAI is spending billions like there's nothing else to use this money for, while seemingly being cash-strapped for model training, since they've already signaled more investment rounds would be needed to remain competitive. They're trying to become too big to fail before they have a moat, which won't work well.
They've already claimed that there will be no "GPT-5" LLM, and that instead what they want to call "GPT-5" is a fusion of their various models like 4o, DALL-E, their video model, etc. That in and of itself makes it quite clear to me they've hit a wall on the intelligence side.
Add these purchases, and it seems like they are extremely desperate.
Userbase and customer relationships are valuable. If someone else creates GPT-5 but doesn't have a large user base, then OpenAI the company could buy that invention. Or, as we saw with DeepSeek in January, fast-follow with a comparable model within a reasonable amount of time.
Brands have value. If someone has logged into ChatGPT for two years daily, they have built a habit. That habit certainly can be disrupted, but there's a level of inertia and barrier -- something else has to be 10x better and not just 2x better.
When DeepSeek came out, I tried it out but didn't fundamentally switch my habit. OpenAI + Claude + Gemini instead caught up.
They're not acquiring Jony or Jony's design firm. They're acquiring the remaining portion of a joint venture. You could even say that LoveFrom is divesting from the joint venture.
Following that logic, they’ll have to keep spending quite a bit to get to the user base of the current hyperscalers, some of which are already ahead of OpenAI in terms of LLM performance.
OpenAI would not be able to, as every other company (and even governments) would make bids, and OpenAI is not well loved enough to get favors that would tilt the scale back in their direction.
Models are getting smaller, faster, cheaper to make, reflecting on their own output, adding modes and running in more places. But they’re not getting much smarter because they can only be as smart as us and each other, because that’s where their training comes from. OpenAI is strongest in a world where models cost billions to train. A world filled with cheap open source models is their worst nightmare. This is what’s happening. So they have to pivot into being a product company and away from being a model company.
> But they’re not getting much smarter because they can only be as smart as us and each other,
That doesn't look to be true in general. AlphaGo Zero didn't learn from smarter humans or smarter AIs (at all: it only trained against itself), yet it became better at playing some games than any existing AI or human.
To me it looks like the same thing has happened for LLMs in the one area they are truly good at: natural language processing. Admittedly they only learned to mimic human language by being fed lots of human language, but they look at least as good at parsing and writing as any human now, and much, much faster at it. And admittedly they have plateaued at natural language processing. But that's not because of any inherent limitation in the level of intelligence an AI can achieve. It's because, unlike playing Go, there is a natural limit on how good you can get at mimicking anything, which is "indistinguishable".
The other thing LLMs seem to be good at is lossy compression of all the text they have been trained on. I was floored when I ran a 16GB model locally and it could tell me things about my childhood town (pop: under 1000, miles from anywhere). It didn't know a lot, but there isn't a lot out there about it on the internet, and it still astounds me it could compress the major points of everything it read on the internet down to 16GB. The information it regurgitated was sometimes wrong, of course, but then you only expect to get an overview of a scene from a highly compressed JPEG. The details will be blurry or downright misleading.
What they are attempting to tack onto that is connecting the facts the LLM knows into a chain of thought. LLMs aren't very good at that, and the improvements over the past few years look to be marginal, yet that is what is being hyped with the current models.
None of that detracts from your main point, which I think boils down to this: the rapid advancements in proprietary models have stalled. Their open-source competitors aren't far behind, and if proprietary models have really stalled, open source will catch up.
But that's only true for the natural language processing side. The sheer compute required to keep a model up to date with the latest information on the internet means the model with the most resources behind it will regurgitate the most accurate information about what's on the internet today. Open source will always lose that race.
Are you aware of what hardware is currently needed to run GPT-4?
Something bigger than a smartphone, usually.
So small, mobile-optimized LLMs will come, or rather are already here. But if they managed to make the big GPT-4 model run on an iPhone, that would be a pretty big thing in itself, way larger than GPT-5.
But LLMs are relatively rarely used, and on the other hand, perf/latency is important to UX, and perf is variable (simple question, complex question, visual work).
I'm not so sure it's desperation. As an alternative hypothesis, we might simply view it as an attempt from a temporary position of strength to secure their tremendous lead as the primary consumer access point to intelligence. I don't think it's much of an exaggeration to suggest that this is one of the most important open questions at the moment -- one which will likely be relatively winner-takes-all (in contrast to the more commoditized B2B/API side) and where the winner likely won't be decided based on the intelligence side alone. The questions also aren't entirely separate since the winner, here, will have such incomparably valuable usage data...
Unlike most successful startups, OpenAI is not faced with the possibility that the giants (Apple, Google, Microsoft) decide to look their way, but the reality that these are their real competitors and that the stakes are existential for many of them (trends indicating a shift away from search etc). The most likely outcome remains that one if not all of the giants eventually manage to produce a halfway-decent product experience that reduces OpenAI to a B2B player.
More like they see the future as more multi-modal, and they're probably right to think that is the best value approach vs. throwing more money at large language models.
There is a space to make a suite of products that synergize entirely: glasses, watches, buttons, clothes (yes, clothes), and home devices/computers/TVs. The reason they are in a spearhead position is that, unlike Google and Apple, they don't need to maintain a legacy paradigm. They don't have to introduce new tech and make it work with old tech while also maintaining usability familiarity (e.g., you can't just change iOS and Android).
They take zero risk while attacking user fatigue (people just get bored of stuff). The current leaders take all the risk following OpenAI because everyone will complain about the changes no matter what they do, and just come up with a reason to switch. This is a human phenomenon that is truly fucked up, the same as when a partner in a relationship is ready to move on no matter what you do.
It kind of is, when they were given $500B and told to make a return in 10-ish years. They have to put the capital in play where it has the largest ROI potential. They are gambling that Jony has another iPhone in him.
I don't know enough about any of this to weigh in on it, but when you take investor money, you aren't supposed to sit on it or do a slow burn (at least not VC money); it's meant to be gasoline, and you moonshot with it.
> They are gambling that Jony has another iPhone in him.
I seriously doubt it.
If anything because Apple let him go exactly when they were looking for a new hit product like the iPhone.
But also because of how he handled the Mac in the years before he was fired. All his big decisions were just bad: the butterfly keyboard, the Touch Bar, USB-C-only ports, etc. Heck, even the 2013 Mac Pro (the trash can) was an engineering failure. They could never upgrade it because, according to Craig Federighi, they got themselves into a thermal corner caused by its design[1].
He transformed the MacBook Pro into its pure essence, the ultimate form, meant to be used on a slab of polished granite in a HEPA filtered room, with only a precisely aligned array of dongles to offset the clean vision.
The fact that you took your laptop out in the field, or to a couch in some barn, like a filthy animal, corrupting perfection with dust and grease and rendering the keyboard useless, is on you. It is a reflection of your own animal nature.
This gave me an idea: a Slack AI that will give me an analogy to support my point, whatever it may be.
“Hey Analogai, help me out here.”
“Ah I see what Chip Frumpkins, Director of Looking Relevant is saying. It’s basically that we need to throw a lot of paint at the wall to see what sticks. And if we fail, at least we’ve got a Jackson Pollock.”
That analogy breaks down almost immediately. I get your point that when you go out and try to do things sometimes you will fail, but the problem is that many of his design failures were seen _even at the time_ to be failures.
I don't necessarily think Ive is going to succeed, but if you're going to make a lot of bets, taking one bet on someone who succeeded before seems pretty reasonable. He wouldn't be the first person to rise to great heights, fall, and rise again, even in the Apple world.
I absolutely agree right up until we start talking about price. Obviously this deal was all in stock, from someone who has a history of creative corporate control structures, but nevertheless the on-paper cost was $6.4 billion. That's a hell of a bet.
Baseball is the hardest sport, but it’s a zero sum game. The .300 batting average is against equally elite pitching. Engineering or design is about adding value.
Taking the raw engineering of the components and interfaces that defined the iPhone and making a system of it is design at its peak and almost art.
Taking a proven form factor like a laptop, not talking to users, and making it worse is just a misstep. It wasn't a complete disaster only because the bar is so low: the defective Apple laptop was still the best laptop on the market.
since we're torturing the analogy... you don't measure a baseball team's success by the # of HR's one player records in a season, you talk championships over time. Sometimes they're related, but less frequently than you'd think.
You know, when we upgraded to USB-C I thought they were mostly nifty. Reversible, quite universal, fully embraced by everyone.
But over 7 years of using them, I've come to resent some of their differences with past USB connectors. Very small, insecure friction grip, reversible, more delicate.
Also it seems that device designers think that a newer generation of USB needs fewer ports? My Lenovo ThinkPad had 2x USB-A and 2x USB-C in 2018. Now I've got a Pixel with 1x USB-C and a Chromebook with 1x USB-A and 2x USB-C; on each of those devices you need one port for charging. So if USB is more versatile and compatible than ever, why am I not allowed to plug in all my stuff at once?
The trash can was a bet on external GPU enclosures, which are technically feasible but just never took off in the marketplace. It was great engineering for a use case that just didn’t pan out.
As one of the fervent 500 million daily GPT users, it's a no-brainer for OpenAI to create a personal mobile AI device, or an AI phone with GPT accessible right from your lock screen.
It could…
- interface with AI agents (businesses', friends' & family's agents, etc.) to get things done & serve as a knowledgeable assistant. Once you pick up the device it's like a FaceTime call with your AI agent/friend, which you can skin to look however you want (a deceased loved one... though that might be too out there).
- visually surf the web with you, so you don't need to open a browser as much.
- take the best selfies of you, guiding you to the best lighting.
Overall excited to see their vision and leave/drop Apple’s now boring iPhone for a GPT phone or personal mobile AI device. I think a phone form factor would be best, but we’ll see.
How did that work for the Facebook phone? And all their billions of fervent Facebook users?
Google own this space: Pixel phones already do pretty much all of this, and they have the best models and the most users too. No built-in agentic capabilities yet, but I am sure that is just a month or two away (see Project Mariner).
If you've not tried the Pixel photo AI features already, you may be surprised. Things like changing lighting, removing people from the shot, auto-stitching people into a group photo, compositing group photos so you get one where everyone is smiling and looking at the camera at the same time (even if that never happened), text-editing photos, etc. Gemini Live is like a FaceTime call without the 3D avatar, but we've seen they could do it with Veo 3 already if they wanted.
This is all reality today, already in the hands of billions of Google users, so OpenAI have a bit of a hill to climb: OpenAI would need to not only catch up with Google (both in the AI space, where they seem incapable right now, and in product) but also surpass them.
Google are totally integrated in this space - the device, the software, the AI models, the infrastructure, the data, the sites/apps people use (search, Gmail, maps, YouTube, docs, ...) and also the billions of users.
I doubt OpenAI can really make a dent here. I suspect any OpenAI phone will be quietly discontinued like the Facebook phone.
They stopped the glasses with the tiny FOV that weren't useful, but they have all (FB, Apple, Google, multiple startups like North, Xreal) been working on more subtle glasses continuously since then; it's just been hard and has needed display/power breakthroughs. Google announced new glasses again yesterday. Looks promising, but the live demo sputtered out at the end; still not gonna be ready for a while.
Meta Ray-Bans are very handy for a sunglasses wearer who takes pics & videos with their phone; no need to take out, or even have, your phone to do either. You can also ask it for the time without needing your phone.
I'm just a dude here with ideas that in time will go nowhere, or maybe somewhere. Altman noted different demographics use GPT differently, with 18-to-20-somethings not making decisions without consulting GPT (could be marketing speak, but with some truth).
The first iPhone was a hardware engineering marvel, it was leaps and bounds more premium than any phone in that generation. It took other companies years to catch up.
I’m unsure what he’s trying to say either. The gibberish and out-of-context replies ITT are making me think HN, like many other sites, is laden with bots now.
How much credit does he really deserve for the iPhone? Jobs and Fadell were obviously both involved in the iPhone too and Nest has some pretty appealing design without Jony being involved at all.
Source? They did a $40B funding round for which Softbank is on the hook for most of it, and they are going into debt cause they don't have the cash either [0]. IMO, these acquisitions are due to the fact they know just selling the model isn't where the huge margins are, selling the verticals is.
“As part of the deal, OpenAI is paying $5 billion in equity for io. The balance of the nearly $6.5 billion stems from a partnership reached in the fourth quarter of last year that involved OpenAI acquiring a 23% stake in io.”
Ive is a good designer, sure, but "essentially created the iPhone" is absurd. It took thousands of engineers and product visionaries to bring that device together, and OpenAI isn't getting any of that. You aren't going to replicate its success by hiring the guy whose major contribution was insisting that all Apple products be a few millimeters thinner in every iteration.
Ive is basically the best in the business if your needs are to get a large amount of cutting-edge technology into a ridiculously constrained form factor and have it look good, feel solid, and be manufacturable at scale.
That is what he is world-class at. Not designing comprehensive product experiences or ideating new greenfield products (and definitely not designing app icons).
If IO or OpenAI also has a product visionary of the caliber to fully utilize Ive's singular industrial design talent, they'll rule the world. Otherwise, they're sinking billions into the next Humane Pin.
I don’t think there’s any evidence that Ive has the expertise you claim. He was lead designer for Apple when they did the iPhone, but it is Apple who has the extensive deep expertise in hardware design and engineering.
Completely agree. He is a good designer, but graphical UX went downhill when he was given more control at Apple, and he became increasingly militant about hardware design, to the point that the MacBook Pro was kinda bad: unreasonably thermally limited, with a terrible keyboard.
You're 100% right and in my opinion there is a much higher probability this is a total waste and nothing of similar value will be created. But if you're OpenAI and you have this option I also see why you may take it.
Lots of companies were making smartphones with similar specs. Under Jony Ive's management, the Apple version made a much bigger splash than any of them and defined the category.
At the time of the public debut of the first-generation iPhone (January 2007), the statement "lots of companies were making smartphones with similar specs" is objectively false. Further, there were zero companies outside Apple making comparable large-touchscreen, large-CPU phones at the time.
Arguably HP/Palm's WebOS devices were ahead on every mark - easier to use, more featureful, smarter, better physical design than any iPhone of similar manufacturing date.
The difference was management choosing to stick with a platform for long enough for network effects to kick in.
If Apple has any advantages compared to other big tech, it's an ability to look past next quarter's financials.
Palm offerings in 2007, such as the Treo 755p or the Centro, could not compete hardware-wise with the original iPhone. The claim that these Palm phones were "easier to use" is hilarious to me, and probably hilarious to many others.
I explicitly mentioned WebOS, meaning the devices released around 2009, which competed with 1st gen iPhone old stock, and directly against iPhone 3G - the second generation.
The first gen iPhone is not a smartphone by today's standards. No multitasking, no copy/paste, no centralized instant messaging, all things WebOS devices had on release.
Even the second generation of iPhones felt half baked by comparison.
Which just goes to illustrate my point, that they weren't technologically superior, just more committed.
The race was already over by the time webOS showed up. Even Microsoft, with a superior product and many billions spent pushing it, couldn't overcome the network effects of iOS and Android. No one else had a chance.
Comcast and AT&T still exist. Kraft still turns out war rations by the warehouse. Tasteless grocery store tomatoes are still the most widely available.
> Jony Ive, a chief architect of the iPhone, and his design firm are taking over creative and design control at OpenAI, where they will develop consumer devices and other projects that will shape the future look and feel of AI.
> Ive won’t be joining OpenAI, and his design firm, LoveFrom, will continue to be independent, but they will “take over design for all of OpenAI, including its software,” in a deal valued at nearly $6.5 billion
> OpenAI already owns a 23% stake in io as part of a deal from last year, meaning it needs about $5 billion for the acquisition, the Times reported, citing unnamed people familiar with the matter.
Honestly I think it's a great move if you know you have a hyped up valuation, to exchange that paper valuation for actual company acquisitions. Not every company has that ability.
The models will not be a moat, but the products can be. More specifically "sticky" products / killer apps like ChatGPT, and whatever forthcoming products this acquisition of Jony Ive's company may lead to.
Windsurf acquisition may be explained in part by the same logic of owning a strong and sticky product, as well as a good source of data for training.
I have a gut feeling a lot of this is going to go negative for OpenAI. I simply don't see what they're going to produce, in a reasonable amount of time, that justifies hardware, for example.
I'm open to being wrong, very open, but I need to see evidence. Hard evidence.
> You don't hear anyone touting AGI anymore do we?
Apart from, y'know, DeepMind - remember those guys? The ones with the SOTA models at the top of the leaderboards? The ones who just launched Veo3 and blew everyone away?
It feels like OpenAI has kinda jumped the shark at this stage. They don't seem especially competitive any more, and all the news coming out of them is tinkering at the edges, or acquisitions that no one really cares about.
When are they going to start competing on actual AI again?
I feel like history is repeating itself, yes, but actually I was thinking more about Google.
Everyone was saying "oh man - Google had all this tech and they sat on it and just couldn't move forwards, then they blew their lead and OpenAI came along and smoked them!"... Now it feels like it is OpenAI who are repeating that story, blowing the lead they got with the original ChatGPT while that upstart Google schools them in model development and vertical integration.
Interesting times. Very interesting times. C'mon OpenAI, move the SOTA forward!
After a lot of the drama and a ton of talent leaving all they seem to have left now is a pile of cash that they can spend eliminating competition. Meanwhile like others have rightly pointed out, talent at Google and even Mistral have been crushing it.
Just to stem pointless debates before they flame up - both these acquisitions appear to be primarily if not exclusively for stock.
Sure, if you want to get into theoretical finance, OpenAI could have sold these new shares for cash, so technically there's no difference, but OpenAI is only spending opportunity cost cash, rather than fiat.
OpenAI's fiat likely still goes to the things you'd expect, like training models and paying for inference.
The AI hype seems driven more by stock valuations than genuine productivity gains.
Developers now spend excessive time crafting prompts and managing AI-generated pull requests, tasks that a simple email to a junior coder could have handled efficiently. We need a study that shows the lost productivity.
When CEOs aggressively promote such tech solutions, it signals we're deep into bubble territory:
“If you go forward 24 months from now, or some amount of time — I can’t exactly predict where it is — it’s possible that most developers are not coding.”
- Matt Garman – CEO of Amazon Web Services (AWS) - June 2024
"There will be no programmers in five years"
- Stability AI CEO Emad Mostaque - 2023
“I’d say maybe 20%, 30% of the code that is inside of our repos today and some of our projects are probably all written by software.”
- Satya Nadella – CEO of Microsoft - April 2025
“Coding is dead.”
- Jensen Huang CEO, NVIDIA - Feb 2024
"This is the year (2025) that AI becomes better than humans at programming for ever..."
- OpenAI's CPO Kevin Weil - March 2025
“Probably in 2025, we at Meta are going to have an AI that can effectively function as a mid-level engineer that can write code."
- Mark Zuckerberg - Jan 2025
"90% of code will be written by AI in the next 3 months"
The loss of productivity is, as many things are, not directly measurable. Mediocre code making it into the codebase, hindering future development and increasing maintenance time, or even causing some ideas never to be discovered: how are we going to measure that? How are we going to measure engineers no longer properly knowing their codebases?
Businesses will wake up when it is too late and the damage to the engineering side of their products is already done. Or perhaps won't wake up at all, and somehow (to their management levels) inexplicably fail.
Sure, copywriting seems to be pretty much irrelevant now. Same with image generation wherever you can get away with it. The quality may be reduced in many cases but the cost is absolutely a fraction of what it used to be.
US shareholder capitalism is increasingly dependent on pushing a fantasy where X Company will eventually operate as an unregulated monopoly with armies of machines run by a small group of contract labour with no benefits and no wage bargaining power.
> When CEOs aggressively promote such tech solutions, it signals we're deep into bubble territory:
Correct. This is how most bubbles are kept up as they are all exposed in the hype cycle.
You will not hear about the mistakes [0] [1] [2] when AI gets it wrong or hallucinates; all the AI boosters will say is "it only gets better" and promise that we will soon be operating airplanes without humans. [3]
Surely you would feel safer if you or your family boarded a plane that was fully operated by ChatGPT, because it is somewhat closer to "AGI"?
I have a feeling OpenAI will eventually be looked back on as the company that forced Google to release its internal AI product and then died a slow death.
What the other commenters are forgetting is that this is the same Sam Altman who planned and executed the extraction of Reddit from Condé Nast.
This acquisition (and the Windsurf acquisition) are all-stock deals, which have the added benefit of reducing the control the nonprofit entity has over the for profit OpenAI entity.
How do you extract the for profit entity out of the hands of a nonprofit?
- Step 1: you have close friends or partners at a company, with no product, users, or revenue, valued at $6.5 billion.
- Step 2: you acquire that entity, valuing it unreasonably high so that the nonprofit’s stake is diluted.
- And now control of OpenAI (the PBC) is in the hands of for profit entities.
it’s not true, contemporaneous accounts disprove it (although that’s not to say Sam Altman is not a snake, Sam Altman is a snake that nobody should trust)
Growing up in a strong Southern Christian / Baptist / Pentecostal household [1], WorldCoin feels like the most "Mark of the Beast" plot I've ever seen. 1990's televangelists like John Hagee and Pat Robertson would be screaming to high heaven about Sam Altman being the antichrist if they were still around.
Transacting with your eyeball? Straight out of the Book of Revelation!
[1] I took a strong interest in biochemistry in college and I'm no longer religious.
Signing posts with a hash tied to a thing that might prove you are human instead of a LLM astroturfing might actually be a good value proposition for blockchain.
Why would I trust the entry on the blockchain? I'd rather just trust the government body issuing my ID. Estonia has had this for years; it's amazing that here in the USA people send contracts over email and just click a button to "sign" them. Adobe at least allows actual PKI signatures, but there's no real registry to verify them against, so it's useless in most cases.
Yeah, the US is a backwards economy and clearly isn't successful with all its rules, and that silly Clinton-era digital signature law isn't pulling its weight. It's been solidly eclipsed by Estonian technical superiority.
I would trust a blockchain more than my government. My government has clearly been shown to be vulnerable to a < 51% attack. Blockchains don't change every 4 years and decide habeas corpus no longer applies to me because my skin is the wrong color either.
You don't need a blockchain for that, just cryptographic signatures and PKI. The EU is implementing a system for national IDs that would enable this, and could be done with perfect privacy through zero-knowledge proofs.
Yes, if every government was reliably incorruptible, they could also work together to build a global human verification network.
I predict that Worldcoin will get it done first, and will be more dependable than most countries. But it could turn out otherwise. In the end, services that need humanity verification will have multiple provider options and the market will decide.
That doesn't really scale to the 3rd world as a form of identity validation. I'm sure there is more than one way to do it, I'm just saying it's a way to do it.
Initially I thought it was a bloody stupid idea; however, at this stage I reckon we need it, or a lot of boomers are going to be one-shotted into signing all their wealth away.
Yeah same here. My dad has been talking about the End Times and the Mark of the Beast for 40 years now. Now, in addition to all that, it's Q-ANON and MAGA. Fun times. Liberal police are coming for your guns and your Bible, you heard it here first.
Also a plot point in Arnold Schwarzenegger's The 6th Day, where his eyeballs get scanned as a matter of course before transporting a VIP (sign here, eyeballs here please, thank you) and later used to clone him.
He always said wild stuff like, (paraphrasing) "If you think or say anything bad about the holy ghost, you'll be doomed to hell forever."
Scary stuff as a kid.
ChatGPT says,
> Pastor John Hagee has addressed the gravity of blaspheming the Holy Spirit in his sermons. In his message titled “Seven Kinds of People God Cannot Save,” he emphasizes that blasphemy against the Holy Spirit is an unforgivable sin. Citing Matthew 12:31–32, he warns that mocking or criticizing the works of the Holy Spirit is perilous and unforgivable. He states, “You be very careful when you open your mouth to criticize a genuine move of God. You’re walking on very dangerous ground.”
> This perspective aligns with the biblical passage in Matthew 12:31–32, where Jesus declares that while all other sins and blasphemies may be forgiven, blasphemy against the Holy Spirit will not be forgiven, neither in this age nor in the age to come.
Eternal damnation thoughtcrimes are terrifying stuff for a kid.
I mean, on the one hand, sure, but on the other hand - the "anointed one" himself, DJT, is pushing AI, so I'm sure it will be fine. Unlike that heathen Joe Biden who attends more church services in a month than DJT in a year. And as I eventually learned (grew up similar to yourself, but german pentecostal in canada, also exvangelical now), if they are against helping people and against welfare/basic human rights/basic income/equality, they're truly christians in the eyes of those telegelicals. I guess somehow the "you will always have the poor among you, me you will not always have" quote from Jesus means that to be "a biblical nation", we have to ensure there are always poor people..
I mean, it was difficult for people to ignore a company that sold their first computer for a retail price of $666 and chose as their logo a rainbow fruit with a single chomp on the side.
And then rose up in competition against a company whose logo is literally a cross, and code-named an OS "Chi-Rho" [Cairo, XP] -- yes, it's a little "on the nose" by now.
It's also a hoot to just see LDS missionaries waving their iPhones around with the Genesis apple clearly visible. To be honest, as a Catholic I have seen no overtly demonic or Antichrist features in Apple Computer themselves, and frankly most tech companies adopt agnostic and irreligious postures anyway, just due to globalism and market share. The {Free|Open|Net}BSD daemons and logos/mascots have long been subsumed into Apple's juggernaut anyway!
> It's also a hoot to just see LDS missionaries waving their iPhones around with the Genesis apple clearly visible.
To be fair, the LDS doctrine around the fruit from the Garden of Eden and the fall is quite different from the Catholic understanding, it’s seen as a necessary, even a good thing, in the overall plan.
To be honest, I would say that these signs placed by Apple Computer were done not out of malice, but by way of warning. To say, "Here Be Dragons!" To counsel those who may be ignorant, there are pitfalls ahead, and be careful, because you could lose your soul to these things, even though they are designed as morally neutral.
Computers are a tool, after all. The fruit depicted could just as easily be from the Tree of Life. It's all about how we use those tools.
I would argue that it’s likely there was no Biblical reference intended, but that even if it was, it’s then more likely the apple is a reference to “knowledge” (as in, the Tree of Knowledge of Good and Evil) than a vague warning about using their products.
Could also just be to show up first in the yellow pages
Also tools are not neutral, they carry the intent of their designer and make whatever they are designed to do easier than it used to be; if you want to be convinced please read Douglas Rushkoff's Program or Be Programmed: Ten Commands for a Digital Age
They are paying 'one Worldcoin' (I've read it sells for about $50) to people in places that give zero value to their privacy (it's too difficult to care about privacy when you're starving and the monthly salary is $10). They are targeting poor countries (South America, Africa).
Once they have collected 'enough' faces to use on their AI, they could possibly pull the plug or keep it as a social experiment.
I was thinking, there is no way Russia or China will allow them to operate in their countries, and (combined) they got 1.5bn people.
I can also see them trying that in Pakistan, Afghanistan, and the other autocratic ...stan places, where the local dictator would only allow this if they got to use the data for their own nefarious purposes.
I'm not aware of giving up any freedom. Or getting security. I got about $300 in cash equivalent in their cryptocoins and a tech toy - it's basically a combo app and crypto wallet.
Which is sad - I've been using Lemmy exclusively for 5ish years now and the smaller communities haven't really taken root. Reddit still controls the long tail of internet discourse
I for example, got permabanned from one of the major politics subreddits.
Basically what happened was I wrote a post, and some guy responded to me with a firehose of personal insults. I called him a troll in reply, and within 30 seconds of posting said reply, I was permabanned as a first offense, without any possibility of appeal.
I'm guessing that person signed up with a Google account that was connected to other Google accounts, and when they tried to make a new account it was flagged as "ban evasion".
I'll give my example, although it's a bit out of place on my part...
/r/news locked/suppressed [0] as 'Politics'.
I sent a Message to modmail:
Me: Calling this 'politics' makes me ask who's on an alphabet payroll... Just saying.
Reply from modmail: This message makes us think you haven't bothered to read the rules... Just saying.
Then I was muted from /r/news modmail for 28 days, while also being perma-banned from /r/news.
Months later, I left a normal comment on a different thread with a 'mobile' secondary account on /r/news, and found both my desktop and mobile accounts locked for 7 days because the /r/news comment was considered 'ban evasion'. This despite having otherwise commented on /r/news from my mobile account in the meantime with no repercussions.
It was within the subreddit rules and Reddit ToS to do all of this, I acknowledge. At the same time, it's almost like Reddit is hitting that vibe StackOverflow had a few years ago, where mods can just power-trip and make the place less useful for everyone...
Just adding to the noise, but the thing that finally got my 15+ year old account banned was reporting some blatantly racist comments. Apparently if a subreddit mod agrees with the reported comment, they can just report you for "abusing the report system". The original comment never got removed, of course.
Not the same person, but as an example, I got banned from r/europe for posting just "I'm proud to be a Romanian!". That was on a post about Romania overtaking Poland to become number one at some anti-woke metric western Europeans care a lot about.
>The Israeli military has admitted that more than 80 percent of the people killed in the attacks on Gaza since Israel breached the ceasefire two months ago are uninvolved civilians, Hamakom reported on 13 May.
I have no idea who Hamakom (the original source for this info) is, but a secondary source that quotes the "Gaza Ministry of Health" without identifying it as Hamas-run (as, say, the WSJ does) is not likely to be a quality publication.
Update: translated Hamakom article; you can judge for yourself whether you think this is a reliable source of information: https://www-ha--makom-co-il.translate.goog/1058990-2/?_x_tr_... "IDF and Shin Bet in joint statement: The war is pointless and everything is stupid" doesn't convince me, likewise whatever they're trying to say about Netanyahu's wife.
As far as I can tell, pretty much every org or governing body that can classify something as a "genocide" has already done so.
That said, getting into an argument over which word to apply to the senseless murder of innocent men, women, children, journalists, medics, etc., in the context of a multi-decade illegal occupation, seems like a distraction.
No. The ICC President even came out and explicitly corrected people that were claiming the ICC ruled that Israel was causing a genocide due to the influx on online conspiracy theorists. Anyway, this is slightly offtopic re: Reddit moderation.
A post of mine in the bodybuilding subreddit was removed after 55 points and all 29 comments removed because the mods made up an entirely arbitrary new rule that there should be only one post about a recent bodybuilding show. The subreddit rules had no such thing. They just made it up. When I messaged the mods, they banned me from messaging them. Huh?
Another example:
The Apple subreddit allows developers to self-promote their apps on Sundays. I posted an app of mine. Mods removed it and banned me for 100 days from the subreddit because I had 4 comments within the last month instead of 5. This despite my having lots more comments and posts (multiple posts/comments over 7000 points) over 7 years, including the last 2 months, just not concentrated in the last month.
"JK Rowling doesn't want men in women's bathrooms"
Edit: I'm not complaining about moderation do what you want, but the statement above is true, I have been banned for writing those things, and if anyone has any experience with Reddit, it should be fairly believable that Reddit would ban an account for writing them.
The app is not impartial in the content it chooses to push. I got identified as a target for very specific content and in the context of this discussion, it's the polar opposite of what reddit used to be.
I don't know if reddit is better than ever, but the continued existence and popularity of old.reddit.com seems to be a sign that it is not well-run. (in the sense of they wrecked their UX years ago and never fixed it).
Small subs are worse than ever IMO. Either totally dead or they hit a critical mass where product shills have come in and established the dogma of the subreddit.
No, Reddit is still shameful. The central issue ruins everything: moderation is placed on a pedestal beyond reproach, even when it's trying to sabotage its own community! The only point at which Reddit staff will ever step in is when these subreddits try to protest and threaten their bottom line. They would rather run a pyramid scheme that's profitable than address the central governance crisis.
You can't "no true scotsreddit" your way out of this issue, because it's an overarching issue with the platform itself. Even 4chan has better protection against influence campaigns; it's pathetic how Reddit's own administration lets itself be defined by its lowest common denominator.
Until you get shadowbanned. My 15+ year account is dead because I logged out of the iOS app and logged in to the web app on my phone; it triggered the suspicious/spam filter, and boom, I am dead. Tried many times to get it restored, no dice.
the funny thing is the only indication that this happened was keybase alerting me that my proof was gone.
I can login and use reddit as usual, but nothing I do has any effect. It's like I am in a sandbox. Try to view my profile publicly and it does not exist.
Usually, if you have an account that old, you've built up enough karma, especially in subs you're a regular on, to speak your mind and absorb being downvoted heavily multiple times over. It's not worth spending the time to build that up again on a site that is increasingly astroturfed.
The only people I’ve ever known who actually thought Reddit ever really mattered was people in the HN sphere. Anecdata, but still. In terms of value per minute spent, it’s the same tier of slop as TikTok or Instagram, and I think most ordinary people hold that same view.
The organizations/nation-states/whatever who astroturf on Reddit disagree with you. It definitely matters in shaping opinions. It's not as influential as TikTok, of course, but that doesn't mean it's not influential.
For mass shaping, TikTok is probably more effective, but Reddit probably shapes people more deeply, since there's actual discussion.
I think people are more critical in this discussion though, so that an apparent consensus may be interpreted by the user as the thread being bot-infested rather than there being a consensus. Thus it may be harder to get a result there, and the really interesting people that you may want to affect might actually be immune because they approach the medium as critically as it should be.
Look the reality is yes Instagram and TikTok have extremely problematic incentives built into their products. But they're also remarkably useful, entertaining, and fun products too. Both are true.
Do you think multi-billion-user products can exist without "slop"? What do you think the average person wants to consume? The equivalent of salad? Have you met the average person?
I think people have fundamental misconceptions of the average person's desire.
I find this perspective bizarre. Though I'm not happy about it all being centralized, the closest thing we have these days to the very niche phpBB forums of the 2000s is various subreddits focused on very specific topics. Scrolling through the front page is slop, sure, but whenever I'm looking for perspectives on a niche topic, searching for "<topic> reddit" is the first thing I do. And I know many people without any connection to the software industry who feel the same way.
Major advertisers are trying to figure out Reddit now, but it's a mixed bag and the costs are high compared to other platforms. It's no longer a niche.
I have no idea how anyone could have seriously tried to use reddit and be on HN and come to that conclusion. Yes some of the reddit defaults are slop but many clearly have significantly more value than short form video, and that's before you start discussing the niche communities that live there.
> The only people I’ve ever known who actually thought Reddit ever really mattered was people in the HN sphere.
They said the same thing about Quora and 3d TV.
That being said, TikTok and Instagram matter. Reddit probably matters more because it's so easy for motivated people and corporations to manipulate discussions on it; it's even weaker than Wikipedia.
50x as many people read Reddit than post on Reddit, and 10x as many people as read Reddit have gotten their opinions indirectly from people passing on stuff they (can't remember that they) saw on Reddit (but think they learned somewhere legitimate.)
> In terms of value per minute spent, it’s the same tier of slop as TikTok or Instagram
Insane take. Reddit hosts deep threaded discussions on almost any topic imaginable. In its prime it was the best forum on the internet. There’s a reason people commonly add “reddit” to the end of their search queries.
Unfortunately it feels like the community has gotten much dumber after they banned third party apps and restricted API access. It’s also lost almost all of its Aaron Swartz style hacktivist culture.
Reddit, in its prime, was incredible and beloved by almost everyone I know (most of whom are far outside the HN sphere)
There are still many around; most of them die because admins give up or users leave. If you actually miss them, it should be easy to find some for your interests.
I would love to have some directory with all kinds of active (PHP) web forums. That was the heyday of the open web for me.
Do you have any tips on how to specifically search for these forums? Without just googling for topics and browsing for hours to find some. When I think about it, just googling/searching might be the only way.
Really? My perception (and their metrics seem to back this up) is that “normal people” are really on Reddit now. It’s the #7 most visited site in the world. It exploded during the pandemic - not just a site for internet nerds anymore.
yeah I thought it was going to break during the API scandal and ended up quitting then. I noticed an immediate improvement in my day to day mood when I wasn't consuming rage/cringe/sorrow bait.
A significant amount of the current content is literally bots posting old threads! Whether those bots are run by reddit itself or unaffiliated parties I don't know, but they are there, on most threads, including some threads that are ONLY bots reposting a 3 year old thread that did well, verbatim.
My tinfoil hat theory is that all the "Explain this (very obvious) joke to me" subreddits are trying to create training data for some AI and that a significant amount of the content that makes it to the front page is designed to elicit "Good Training Data" for whatever AI company they sold the rights to.
Truth be told, according to stats, 90+% of the people barely post anything, if at all. To experience the horrific moderation, you need to get actively involved. Otherwise, the site looks like organic consensus, when you don't see the deleted posts and people who got disappeared or driven away.
My dangerous question.... how much of those 'visits' are AI bot crawlers?
Based on their other behavior, it wouldn't surprise me if Reddit used crawler hits to pump up its numbers while simultaneously decrying AI bots and breaking long-standing community tooling and apps....
Everyone adds reddit to their searches to get human-generated information these days. Not sure if that's still a guarantee, but it's a funny irony given what this thread is about...
My understanding is that there are two types of stock, and the non profit controls the voting stock majority. This cannot be diluted. All other stock gives a (capped) fraction of the profits. This cannot be diluted by these operations, but the cap also can be a bad deal.
That's an interesting point about the different stock classes and voting rights. It adds another layer to how these kinds of acquisitions and valuations might play out in the long run, especially concerning the non-profit's influence. How often are such dual-class stock structures truly effective in maintaining the original mission when large sums and external valuations come into play?
The case of OpenAI is unique, but the structure itself has been very successful; see Meta, Google, Palantir.
Some take the form of different stock classes, with some classes having voting rights, and others no vote at all; other schemes are stock with supervoting rights.
The nonprofit will continue to control the PBC, and will become a big shareholder in the PBC, in an amount supported by independent financial advisors, giving the nonprofit resources to support programs so AI can benefit many different communities, consistent with the mission. And as the PBC grows, the nonprofit’s resources will grow, so it can do even more. We’re excited to soon get recommendations from our nonprofit commission on how we can help make sure AI benefits everyone—not just a few. Their ideas will focus on how our nonprofit work can support a more democratic AI future, and have real impact in areas like health, education, public services, and scientific discovery.
I don't interpret that paragraph as non-dilutive. It says the parent is just a regular shareholder currently holding the majority, and then weasels away with "more resources as the valuation grows", which is true in an absolute mark-to-market sense but not in relative ownership. I don't think they have the free cash to pony up and exercise any right of first refusal, even if they have something like that on a pro-forma basis. So unless the non-profit board is adamant about voting against all capital raises, stock-based acquisitions, and employee stock grants (they won't be), their ownership share will be diluted.
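The distinction between mark-to-market value and relative ownership can be sketched with some back-of-the-envelope arithmetic (all numbers here are hypothetical, chosen only to illustrate the mechanics):

```python
# Hypothetical cap-table sketch: a majority holder who does not buy into
# new raises is diluted in ownership even as mark-to-market value grows.

def dilute(holder_shares, total_shares, new_shares, price_per_share):
    """Issue new_shares to other parties at price_per_share; return the
    holder's resulting ownership fraction and mark-to-market value."""
    total_after = total_shares + new_shares
    ownership = holder_shares / total_after
    value = holder_shares * price_per_share
    return ownership, value

# Start: holder owns 60 of 100 shares at $10/share (60% ownership, $600).
ownership, value = dilute(60, 100, 0, 10)

# Raise: 50 new shares issued to outside investors at $40/share.
ownership, value = dilute(60, 100, 50, 40)
print(f"ownership {ownership:.0%}, mark-to-market value ${value}")
# Ownership falls from 60% to 40%, while value rises from $600 to $2400,
# which is how "more resources as the valuation grows" and "diluted"
# can both be true at once.
```

This is why supervoting shares matter in the thread above: control can be preserved by share class even while the economic stake shrinks.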
From what I understand reading Matt Levine's explanation of the topic, the non-profit controls the board and has supervoting rights, so it cannot be diluted out.
Gosh, that was a very hard article for me to decipher: it opens with the author's own view of what should have been, moves to old conversion plans that did not happen, and only at the end alludes to what actually happened. Even then he has no additional facts to offer, and it is his own speculation that the non-profit holds supervoting shares. I would not base an analysis on the author's mere educated guesses.
The gist is that the nonprofit still controls the board. The details of course are surely full of technicalities I cannot find anywhere.
At least to me, the walkthrough was useful to see what changed.
It's interesting I posted exactly this hypothesis an hour or so ago and immediately got flagged despite not being manifestly offensive or anything. Very suspicious.
I want to know why a burner account posted this comment. There could be many reasons, some more entertaining than others. Of course the answer could be boring, but do you care to elaborate?
> This acquisition (and the Windsurf acquisition) are all-stock deals
I'll add that conventional finance wisdom says that you should only buy companies using stock when you believe your stock is overvalued. That way you get more bang for your buck than cash or undervalued stock.
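That heuristic can be made concrete with a toy calculation (all numbers hypothetical): if the market prices your stock above what you believe it is intrinsically worth, each share you hand over in an acquisition costs you less in "real" value than its market price.

```python
# Toy illustration of the "pay with overvalued stock" heuristic.
# All numbers are hypothetical, not OpenAI's actual figures.

deal_size = 6.5e9             # agreed deal size at market prices
market_price_per_share = 100  # what the market pays for acquirer stock
intrinsic_per_share = 50      # what the acquirer believes a share is worth

shares_issued = deal_size / market_price_per_share
true_cost = shares_issued * intrinsic_per_share

print(f"shares issued: {shares_issued:,.0f}")
print(f"true cost to acquirer: ${true_cost:,.0f}")
# If the market overvalues the stock 2x, a $6.5B all-stock deal
# "really" costs the acquirer only $3.25B in intrinsic value.
```

The same arithmetic run in reverse explains why sellers should prefer cash from an acquirer whose stock they believe is overvalued.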
Man, I remember the absolute hysteria here over the non-profit trying to rein Altman in. You'd have thought they were murdering babies.
I agree with your analysis, but it's hilarious that it's now top-voted, when the sentiment was so negative when the board saw the same thing coming ages ago.
Narcissistic ego stroking. He is literally invoking a dead man who was not that close to him, thinking out loud on his behalf, just to validate his own ventures and acquihires. Very weird.
It wouldn't be that weird if Ive had said so himself.
Oh there's a lot of sex cults in the SV investor sphere I assure you. Those "conquests" are happening too. Ask young women who have spent sufficient time and had exposure to the space.
I'm certainly not saying that "nobody is even allowed to assume how he would have thought". What I'm saying actually has very little to do with Jobs at all.
I'm saying that as a new partner to someone, it's extremely weird to say that your old dead partner would be extremely proud you teamed up with me. If I were to marry a woman who lost her husband, it would be extremely weird and egotistical for me to tell people that her dead ex-husband would be "damn proud" that she married me.
> If I were to marry a woman who lost her husband, it would be extremely weird and egotistical for me to tell people that her dead ex-husband would be "damn proud" that she married me.
Perfect example. That is exactly what it feels like. What a nasty thing for him to even think, and he goes and says it publicly.
This is the only play for OpenAI. The AI service is going to be commoditized very quickly and their moat will be gone. They will be doing vertical integration and pushing into everything. If people complained that Apple and Google looked at apps and copied the functionality themselves, what OpenAI will do will be much worse. And where it took Apple and Google years, OpenAI will do it very quickly; a year at most.
The other play is they build chatgpt.com into a destination. Two major data points to this: [1] Over 400M weekly actives, and [2] recent reports that chatgpt will be adding X.com / social network style features
> At io, the group set out to develop, engineer and manufacture a collection of products for an era of artificial general intelligence — the point when technology achieves humanlike cognitive abilities.
It’s mind boggling how much money is floating around once you are part of the insider circle. What has that company been doing to be worth 6.5 billion?
Kind of both? "While Ive and LoveFrom will remain independent, they will take over design for all of OpenAI, including its software. Altman said his first conversations with Ive weren’t about hardware, but rather about how to improve the interface of ChatGPT."
That's not promising! Ive isn't an interface designer and doesn't know a pushbutton from a skeuomorphic hole in the ground. Software and industrial designers can work with each other to build great things, but they are not equivalent roles.
Overrated is great for raising the next investment round. It is much easier to go with someone overrated than to convince investors they are underrating someone you think would be a better choice. If he is overrated enough, the bump from that overratedness in the next investment round will more than pay for the acquisition cost.
I thought this was going to be some hit-piece tweet, then I saw it was a tweet from Sam Altman himself. That video... Wow. I got 2 minutes in before I had to stop. I thought you might be exaggerating, but full of /themselves/ doesn't even begin to describe it.
Ship something, then you can create a video like that, not before.
this is designed to appeal directly to a certain kind of self-mythologizing Bay Area techie, the kind that was common in the early 2010s. It’s meant to signal continuity, “we’re just like you, we loved Steve Jobs”
That's probably overstating it; it is definitely true that his redesign of iOS 7 (2013) was truly awful - he really should not have been anywhere near software, and was able to take advantage of Scott Forstall's then bad reputation and recent dismissal to insert himself into an area he had no real qualifications for. Before that, he was purely a hardware guy, and it was clear from the outset he was in over his head with software. Apparently he had no idea what 'HCI' (human computer interaction) even meant; he wasn't even familiar with the abbreviation itself. Now, with the benefit of hindsight, we can judge that iOS 7 had some truly terrible design decisions that ended up making the iPhone much harder to use (very light, unbolded text everywhere, removal of borders on everything, especially buttons, overuse of color tints on text).
What I always found crazy was that Ive seemed to just take design ideas from the then-current Windows 8 and Windows Phone rather than trying to create his own thing. It showed that he had no original ideas of his own; even just iteratively improving on iOS 6 would've been better.
On Mac hardware, he definitely needed some sort of editor to stay his hand post Jobs. The era of crappy butterfly Macbook keyboards is still something I remember that was clearly his responsibility, driven by obsession on thinness and it seemed for a while that Apple was in denial about the issue.
Still, the Apple Watch is a definite hit for Apple now and it's clearly his baby, so his legacy isn't all bad.
One could argue Apple Watch has just been coasting on iPhones’ coat tails. People with an iPhone want a watch with similar aesthetics and in the same ecosystem.
If a non-Apple company launched an identical watch at the same time it would have gone nowhere.
Unless OpenAI is going to also buy a robotics company and Ive is there to design the robot exterior, what else would they make aside from some wearable that just runs an LLM with heavy emphasis on the audio/speech modality? I have the feeling whatever it is will be uninspired and a giant let down.
No question that Ive is a legend, but I do think the fall of Humane (also ex-Apple) and the challenges at Meta, Apple, and Google in terms of VR/AR adoption (Meta Ray-Ban, Apple Vision, Google Glass and the new thing) are instructive here. The $6.5B almost feels like the largest ever acquihire.
Not sure why he deserves to be a legend, to be honest, but yes, he is a legend.
He did a good job, but those small and minimalistic designs were only possible because of the efforts of entire teams of engineers, of which the public never heard anything.
And many of those designs were made at Ive's behest, against the wishes of entire teams of engineers. I feel like we have his "courage" to blame for the Butterfly keyboard, terrible Mac thermals and the lack of ports on "Pro" computers.
Don't forget the Apple mouse with the Lightning port on the bottom so you can't use it while it charges. It's still the only Apple product with a design that makes me physically cringe.
I also find it awkward and uncomfortable to use, but that might just be me.
When I had that Apple mouse, I kept around a separate wired mouse to use when it inevitably ran out of charge while I was in the middle of work. I won't just stop what I am doing just because Apple wants me to not use a mouse with the cord plugged in.
Ive did good designs when Jobs kept him in check. Once Jobs was gone he messed up a whole generation of MacBooks. Things got much better after his departure.
I think after Jobs, Ive did all sorts of things just to justify his presence at Apple. Hence trying to make an already well-designed product even more "well-designed", but to his terms. And that's when it started turning to shit.
The almost two decades before he made great designs. I have always felt it went downhill after Jobs was not there anymore to provide a counter force to Ive's design tendencies. It's like taking one of John or Paul away.
Blaming Intel is a poor excuse. Apple could have done some actual design and built a laptop around the hardware they had. But they didn't want to. Instead, they ignored the reality, stuck to the flawed design, and shipped mediocre laptops several years in a row.
I agree, but I think they didn't care. For some years, some Apple execs believed that the iPad was going to replace the Mac. After that they knew that the Apple Silicon Mac was nearby, so they probably didn't want to make an investment in a 'legacy' platform. Did suck for all the people who bought one.
Perhaps, but pretty much every high performance Intel laptop between 2017 and 2023 is exactly the same unless it's in a heavy, enormous and unpleasantly loud gaming chassis. Supposedly the Core Ultra Series 2 are an improvement but I haven't tried one yet.
For a while, you could get the thermals a bit more tolerable by undervolting them, but then the plundervolt mitigations blocked that.
(Typing this comment from a Lenovo X1 Extreme sitting on a cooling pad, sat next to an X1 Carbon that we can't use because it throttles too much. :)
Apple is generally really good at keeping their machines silent. When they originally transitioned to Intel, their Core 2 Duo laptops were both cheaper and quieter than the competition. As a Linux user, that's one feature from Apple I'd like most manufacturers to copy.
Regarding your X1, tweaking Linux kernel parameters and undervolting a bit can work wonders in reaching a pleasant heat-to-performance ratio. Obviously, Lenovo should have taken care of this. However, they release so many different machines that it's hard for them to pay attention to details.
It’s a company laptop that runs Windows, and the newer BIOSes now block undervolting because of the plundervolt mitigations.
I replaced the thermal paste with some of that PTM stuff which helped a bit, but not enough. I also found that for some reason it tends to BDPROCHOT-throttle when powered through the official Thunderbolt 4 Workstation Dock, even though it’s meant to be 230W and provides power separately to the USB port - but using the standalone AC adapter when docked fixes that.
Ultimately, until there are some decent X86-64 laptops released, the choice is between slow, thin and quiet vs less slow, but big, heavy and noisy. AMD is a bit better than Intel but still weak on mobile and nowhere near as good as the current Apple offerings.
On another note, why are PC manufacturers still putting fan intakes on the bottom? Maybe it's theoretically more efficient, but tell that to my users, who always do things like resting their laptop on a book and then wondering why their Zoom screen sharing goes jittery.
I've never had an Intel laptop work well in the efficiency and thermal department, Apple or not. I used to blame Apple too, but seeing the difference, it's hard to argue who the main culprit was. Can't design around a bad foundation.
Apple had the design ready for an Intel chip that didn't arrive. Rather than revisiting their design, they opted to just chuck the chip into a design that couldn't accommodate its thermal characteristics.
I spent way too much time figuring out that around 53W is the maximum that the last Intel MBP can sustain over longer periods before the VRM (converts power for the CPU) poops out and throttles you.
Your 2019 Macbook also uses a different chassis, designed by Jony Ive. Apple knew it throttled the chips they used but shipped it anyways, presumably because Ive liked his thinness even when it results in a bendgate.
You'll note that Macbooks don't quite look the same after Ive left and his influence went away.
Beefier. Bulkier. A quick Google search says the Intel 16" was 4.3 lbs whereas the M4 16" is 4.7 lbs. Not a big difference, you say, but 1) it is going in the opposite direction, where the newer product is bulkier, and 2) imagine the years of thinness that would have been forced under a different regime.
I had the butterfly keyboard for 5 years yet I didn't have a single problem with it. And I'm a long time mechanical keyboards user. What is all the hate about?
Many people (more than the average rate for the prior generations) _did_ have problems. Perhaps more importantly, the only way to address those problems when they arose was to replace not only the keyboard itself but the entire top case of the machine due to the way the parts were integrated. This process costed many hundreds of dollars when the machines were out of warranty, and the company eventually acquiesced to social pressure and lawsuits by creating an extended warranty program.
That's not to say your situation is unique...there are probably many machines out there that have not had problems, including one owned by my wife. But there are also an unusually high number of machines that did.
I'm a native English speaker and nobody told me this (and I didn't manage to pick it up) until I was nearly 40. "Cost"'s past tense is also "cost."
There's another, newer, largely fatuous, verbed "cost" that means "to calculate the cost of something." That's the one that gets used in the past tense ("the projects have all been costed.")
"I've costed a keyboard replacement for my computer, and the total is more than the computer cost in the first place."
That's luck on your side. I too own a butterfly keyboard, trouble free. But there were 50 other macs in the office I worked in that regularly had issues. They were unreliable as hell, and beyond the reliability issue, many people did not like the shorter travel distance (I didn't mind this at all myself).
I had the first generation on a MacBook 12" and had no issues at all. Then I got the second generation on a MacBook Pro (I think this was still without the dust seals) and it was one big misery. A small speck of dust would make a key feel bad or get stuck. I was so happy when I could finally get rid of the stupid device. Never had issues with Apple Scissor switches thereafter.
I felt the same way when I used it. But recently I booted up an old laptop with the butterfly keys and I was like "ewwww" as soon as I started typing on them. They worked. But what we have now is more comfortable.
I'd get a particularly large molecule lodged under a key and then I couldn't press that key consistently anymore until I managed to flush it out. It was OK when it worked, but it didn't work enough.
You're most likely to get credit by being unique and irreplaceable. In other words, if the work would not have happened without you. If someone else could have been easily hired to do the work you contributed, and if in that case the work would have been largely indistinguishable from the work you did, then you're essentially fungible.
IMO you still deserve credit. And in fact you still get credit. But that credit comes in the form of monetary reward and (hopefully) recognition from your team and peers, rather than in the form of fame.
All of which… seems sensible to me? Hard to imagine it working otherwise. Interestingly, the movie industry has normalized "end credits" which play after a movie ends, and which lists literally everyone involved, which is quite cool. But the effect is still the same, the people up top get 99.99% of the credit.
(Ofc the "system" is imperfect, and fame/credit can be gamed by good marketers. But it's also not a "system" that any one party invented, it's just sort of an organic economy of attention at work.)
I am not sure what you're trying to say here. I agree that the existing situation is the most likely one. So what? I am simply saying that even though it is the "obvious thing", it is unfair and unkind. Those two things are compatible, in fact they are the usual arrangement of things!
> Hard to imagine it working otherwise.
No it isn't! It's very easy to imagine crediting people in a different ratio than we happen to do now. You are seeing what it looks like - people mythologise their heroes, and then other people come in and say "they didn't do it all, you know". People are literally doing it, in front of you, in this thread. How can it be hard to imagine?
When I say "I can't imagine" or "it's hard to image" I don't mean that literally. Obviously in reality I can imagine and it's easy to imagine, as evidenced by my example of movie credits.
What I'm saying is that it's not realistic. Humans are wired to remember and share highly specific things, especially names. It's been like this since the dawn of time -- the Iliad is about Achilles, not all the nameless soldiers. So this seems to be the natural order of things, rather than something designed, or something easy to change. And it makes sense, because it's practical -- our memories are limited. You can put everyone's names in the credits, but that doesn't mean they'll be remembered and shared.
Yeah, let's also give credit to the building materials and mother nature. Let's give credit to the pedestrians who walked by the construction site every day and decided not to commit arson.
Brilliant logic. And no, the original comment wasn't saying "give the engineers some credit"; it was saying the engineers deserve the credit instead of Ive.
Which is idiotic and common of smug, self-important programmers.
Comments like yours that completely dismiss any questioning of established "legends" seems more despicable to me. Can't we have an open discussion and a range of opinions?
In between great architects and construction workers there are structural engineers who have to work out how to turn the pretty designs into actual, workable plans. Those are the guys who should get most of the credit.
HN isn't special here. There's conflict between people whose job is to make something look pretty and people whose job is to make it work in every industry.
For Ive, it often was. Ever-thinner MBPs? Why, if not for appearance, given the weight didn't change. No ports on Pro computers? Why, if they didn't bother his aesthetic sensibilities. Charging your mouse disables its use because the port is on the bottom? Why, if not to hide the port for looks? He spent most of his time at Apple trying to make things pretty. Your comment may be true for "design" in the abstract, but as someone who spent plenty of time studying design and architecture, let me assure you: many of the people I studied with who are now industry veterans never cared about much more than aesthetics, even in architecture, where engineering and building science are major factors. Again, sure, theoretically true for "design", but hardly true for Ive.
Alternatively: Form follows function.
Or: Good design takes into account the medium.
There are many ways of saying it, or something very similar. If only these words turned into something beneficial in the minds of castle-in-the-sky designers.
I agree somewhat, you can feel the tension on HN with respect to labor vs capital. Which is funny because the entire premise of YC is to infuse capital and get a huge leverage over bootstrappers.
It's a pretty common turn of phrase on "lefty" (Western, English, very online, progressive) parts of the internet. I've always found it silly because it takes some pretty interesting, nuanced problems (how do you give credit to the folks who executed Ive's vision, many of whom probably boldly innovated to create what they did? How do you realistically situate Ive's flaws given his aura?) and wrings the nuance out of them by polarizing the readers (you're either with labor or you're with capital; pick your side of the picket line!)
But then these days lefty and righty parts of the Western English-language internet are all polarized and beating on common enemies is part of their conversational language. I think for a while HN was small enough that it resisted this polarization but at its current size there's no escaping it.
> I absolutely despise comments like these, and you only see them on HN.
Unfortunately, the progressives have been pushing the downplaying of powerful people quite hard for a long time under the guise of equality, so it's more widespread than just HN. Even more unfortunately, equality is also one of the main ideas of communism; it's how the government can get rid of dissenters and move power to itself. That's why Marc Andreessen, on the Lex Fridman podcast, talked about how the government told them that they could give up on their startup because it had already been decided which companies would be allowed to operate. That's not capitalism, and Marc knows it; that's why he felt he had to speak up.
To quote from the article regarding Humane and the Rabbit r1 personal assistant device: “Those were very poor products,” said Ive, 58. “There has been an absence of new ways of thinking expressed in products.”
To quote myself: "Jony Ive made incredibly poor products his last years at Apple" - So his opinion of what constitutes a "poor product" is suspect (R1 and Humane were bad products but just because you can tell what is a bad product doesn't mean you can make a good one).
I don't know anything about Humane, but the Rabbit was a terrible product right from the start. It was viewed overwhelmingly negatively as soon as it was unveiled.
> If they were so obviously bad at the time, how did they get to market?
I'm not sure what "they" is here (Humane, Rabbit, or late-Ive-era Apple designs).
In all cases, though, there were plenty of people sounding the alarm. Both Humane and Rabbit were made fun of (wasn't it in Humane's demo that the AI was completely wrong guessing the amount of almonds and the calories?). As for Apple products, it was a common refrain that they were being made thin at the cost of ports/cooling/etc. How did Apple keep doubling down on the butterfly keyboard _years_ after it was well known to be a bad design?
Also, "The markets can stay irrational longer than you can stay solvent." (re: how did they get to market). You can do anything if you set enough money on fire, no matter how many people are telling you it's a bad idea.
It's also the same dude who brought us beloved products in Apple's lineup. It's almost a meme at this point to say that Jony Ive's genius needs a containing force like Steve Jobs. Perhaps Sam Altman can fill that role.
Jobs, for all his faults, understood where aesthetic, functionality and user experience intersected extremely well.
He got stuff wrong too, don't get me wrong, but I have yet to see another CEO (heck, any business person of note) with the same deep understanding of how those things intersect.
People say this while ignoring all the outright failures Jobs had because he DIDN'T have that understanding.
The Lisa, the Newton, NeXT computers, trying to dump Pixar pretty much right before they made it big right as the tech was finally catching up to their ideas.
The reality is Jobs got to roll the dice a bunch of times, and if you get to roll the dice a lot, you will have some wins. Looking only at the wins is not useful.
I don't have the time or space to write up a proper rebuttal, but suffice it to say that, after reading an incredible amount about not only Jobs but also Apple, NeXT, the Newton, Pixar, and early home computing in general, the man performed well above his peers with regard to where aesthetics, functionality and user experience intersect. Note, I am not talking about how he ran the businesses otherwise.
He wasn't always right, as I said already, but he was far better than most at this. More importantly, he was far better than most at getting others to shave their vision down to the simplest of ideas.
If you look at the competitors to Apple or NeXT during their respective eras, they were not very thoughtful in their deliberations.
It doesn't mean every idea he had was successful either, but I'm speaking specifically to the fact that he intersected the three points extremely well. At a certain point, someone is good enough at something that it's more than luck.
Agreed. It's hard to think of a new product category for smart devices ... unless maybe Smart HATS! OK folks so remember where you heard it first - ultra stylish head gear with flip-down visor screen anyone?
True, I guess they will innovate around devices and peripherals that enable or facilitate AI-powered activities, hope to pioneer a new category of activity, and look damn cool at the same time.
But Meta is thriving with Meta Ray Bans; they have sold over 2M as of a few months back. (Yes, I know that number seems small compared to other devices, but for a new form factor, that seems like a great early success.)
The glasses with a HUD are a different product line, latest ray bans are just camera mic and audio, but I still count them as AR because the AI voice in your ear can see what you see. I tried them for a few months and returned them, not a good enough camera to enjoy the hands free snapshots I was looking forward to, and just didn’t have a use for a q&a bot attached to my ear.
For what they are I’ll give them props for a nicely designed product, the charging case is clever and works well. I liked them for music with the Apple Watch, pretty slick combination. Maybe if I could stomach giving a llama bot access to email and calendar etc etc to have a real personal assistant it would be an attractive offering in a world that accepts being watched 24/7 by AI/billionaire overlords
> Maybe if I could stomach giving a llama bot access to email and calendar etc etc to have a real personal assistant it would be an attractive offering in a world that accepts being watched 24/7 by AI/billionaire overlords
I share this general point of view but take it further: I really want something in this direction (a quality AI assistant that can access my communications and continuously see and hear what I do) but it MUST be local and fully controlled by me. I feel like Meta is getting closest to offering what I'm looking for but I would never in a million years trust them with any of my data.
My wife has the first-gen raybans and they're great for taking photos and video clips of e.g. our kids' sporting events and concerts, where what it's replacing is a phone held up above the crowd getting in the way of the moment. But even with that I feel icky uploading those things to Meta's servers.
AR can surely be audio only. Or are you suggesting a blind person getting navigation instructions via cameras and speakers isn't augmenting their reality? If so, I violently disagree.
Meta Ray Bans and Google's Project Aura are products that I absolutely want, but absolutely don't want to buy from either of those companies, or any company as invasive as they are.
It's long past time for enhanced privacy regulation in the North American market, because these products are going to be wildly invasive as people depend on them to mediate their experience with the world. I don't know what the right answer is, and I am very much aware that building products like these that don't focus on monetizing user interaction and advertising would likely mean they are priced out of reach for lower-income users, but I hope someone smarter than me can figure it out :S
> Meta Ray Bans and Google's Project Aura are products that I absolutely want, but absolutely don't want to buy from either of those companies, or any company as invasive as they are.
This! So much this! If a product from these companies could make my life 1,000,000x better, I would still be in the "thanks but no thanks" crowd.
> OpenAI is said to have discussed acquiring the AI hardware startup that former Apple design lead Jony Ive is building with OpenAI CEO Sam Altman. According to The Information, OpenAI could pay around $500 million for the fledgling company, called io Products.
Because it sounds like the CEO of OpenAI wants to use investor's money to buy his other company.
Too bad we can't short it or otherwise stop it, because investment for the things we could start will dry up once the world figures this out. We're all correlated to companies like FTX whether we like them or not...
Sam and Elon are both ethically dubious power players who seem to be really good at inside dealing to benefit themselves, while all the other people in their companies just have to deal with it.
As I write this out, it reminds me of another polarizing leader who has been really good at being in the news every day for the last 6 months, and for a 4 year period a decade ago.
Depending on who is doing the valuation, I'm sure this is already a consideration. Like when Coinbase acquired Earn for $110m-ish. Was Earn worth that? Absolutely not. Since Coinbase and Earn shared investors, it was merely a convenient way for those investors to pay themselves out of the unexpected billion dollar cash pile Coinbase amassed in 2017.
Dividends mean taxes. A buyback requires someone to sell (i.e. exit), and usually no one wants[1] to, especially when the company is going hot. Investors would rather have (more) stock in a hot company than cash.
Fund managers and staff also have disincentives for early exits: they have to find and invest in another company and cannot just keep the money, which means more work. They'd rather exit by switching stock into a hotter, in-demand, hard-to-get-into company if they can.
[1] There are always some employees and founders who would prefer some liquidity, but either they don't hold large enough positions (employees) or investors don't want to give a lot of liquidity (founders).
For public companies it is different: buybacks work because there is always someone ready to sell. Usually retail, but also short-term funds who don't care about liquidating. ETFs and other very institutional investors, or those into Buffett-style long-term investments, will not sell easily.
There's only one Jony Ive and a lot of demand from the company with the deepest pockets in town ...
UX will make or break any major new AI product - especially hardware. The price is steep but I think it's actually a sensible move. There really aren't that many other people with the proven ability to deliver when it comes to UX at scale for novel areas.
Jony Ive is a designer, versus Tony Fadell, who is a hardware guy.
Ive is a very talented artist but AI is not being held back by people unwilling to courageously make things thinner and thinner.
I would imagine Ive looked at an Apple HomePod and thought “we could make this beautifully flat and hang it on the wall of every room in the house”. This might be a good idea but it in no way solves the major problems with AI/LLMs.
The first step in AI delivering a good UX might be coming up with a logo that doesn't look like a butthole. Unfortunately this seems to be an impossible task.
This announcement furthers my sense OpenAI is becoming a hype vehicle destined to be the iconic poster child of early AI hubris when the bubble pops. When I read the pretentious marketing copy and photo on the announcement page my first thought was "Someday this'll be linked on the Wikipedia page for 'The AI Bubble'".
I'm not even a hardcore AI skeptic, I think AI can be useful and valuable in the near-term (even outside coding!) and potentially transformative in the long-term but I also think current capabilities are over-hyped and wildly overvalued. I think AI is going through the typical hype cycle (https://en.wikipedia.org/wiki/Gartner_hype_cycle) and we're currently late in the "Inflated Expectations" phase soon to be followed by the inevitable "Trough of Disillusionment".
Because they’re not paying with money. It’s $6.5B of pure equity in a private company that they’ve decided to value at $300B based off of… vibes or hopes or whatever?
Today I read a couple of book reviews in the Economist with Sam Altman as the subject. The books are:
The Optimist. By Keach Hagey
Empire of AI. By Karen Hao
The reviews are positive for both books. The column itself is titled "Sam Altman is a visionary with a trustworthiness problem" and gives a few reasons people have had problems with his behaviour. One quote from the article is:
"Ms Livingston fired him, but as Ms Hagey recounts, he left chaos in his wake. Not only was he overseen by a non-functioning board, he had also used YC equity to help lure people to OpenAI. She says some YC partners saw a potential conflict of interest, “or at least an unseemly leveraging of the YC brand for Altman’s personal projects”."
Not sure if it counts for what you meant by hardware, since it's more art piece than a practical thing given the $60k price, but he created a record player:
Lol at the loving description of the manufacturing process for plywood (I think technically pressure-treated oriented strand board, but, you know, plywood)
Ive just designed impractically thin hardware, like the bending iPhones, the ultra-thin MacBooks with no way to release heat, and the keyboard that broke after you pressed a bit too hard.
Honestly once Jobs died most of what Ive did was ruin existing products, such as the butterfly keyboard and the removal of all of the useful ports from their laptops.
He also ruined the Vision Pro on his way out. Engineers wanted to do wireless to a Mac mini-like hub (not standalone) so the hub could have more computing power. It's a dev device that was supposed to be the very best experience for developing the future standalone AR/VR device. But Ive forced them to do full standalone, increasing weight, decreasing power, and wasting time re-engineering the device.
Also remember Ive didn't even plan for the watch to be a fitness device; he wanted high-end fashion. He's not always the best. Lightning struck once and he says "aluminium" in a smart-sounding way, so give him a billion.
To be fair, those aluminum chassis ended up involving spin welding to make them stiff enough at those dimensions, which I believe had mostly been used in aerospace up until that point.
The video was filmed at Francis Ford Coppola's cafe. Worth a visit, the last time I was there they had a machine that printed out stories for you to take home.
After thinking about it for some time, it seems like even though Sam doesn't have any stake in io, he might have a stake in the Thrive fund that invested in io. Put $50M in a Thrive fund, which is used to invest in io, which ultimately gets acquired by OpenAI at a really high valuation.
Joshua from Thrive anyway wanted Sam to have some stake, and the numbers being floated around were on the order of $7B. The whole thing seems absurd and makes me trust OpenAI even less.
And it seems like it is actually trying to give a bonus to Sam or give money back to Thrive; otherwise it would make no sense to acquire io with $1.5B in cash in addition to the stock. I am surprised the board approved this.
Sam Altman already has a hardware company, "World" - https://world.org/cofounder-letter - which works on the Orb (a hardware device). Not sure if there is any connection between these.
I wasn't gonna check the link at first but you've convinced me, and holy shit that image of the two of them that greets you first thing is beyond hilarious.
You truly couldn't make this up, it's so beyond parody that I don't even know what to say. It's so palpably psychotic.
The quotes at the bottom are funny because they have a share link which implies that Sam and Jony thought they were insightful enough that people would want to actively share them. Even in the extreme tech crowd, who would share those?
Altman: "Jony was running a design firm called LoveFrom that had established itself as really the densest collection of talent that I've ever heard of in one place AND HAS PROBABLY EVER EXISTED IN THE WORLD."
I felt physically sick from second-hand embarrassment watching this.
German has a word for second hand embarrassment. Fremdschämen. Comes in very handy here. If Sam continues like this it won’t be long until it becomes part of the regular English language like other German words such as Kindergarten.
And I’ll be happy that I don’t have to explain Fremdschämen anymore. Everything has its upsides.
Somehow when the buzz-word machine found talent density, half the passengers forgot that density has a denominator. I see this goof a lot. If you accept the premise that Jony is literally the most talented human in the entire history of the world (I know I know), then obviously he was more dense sitting in a room alone, than after being diluted by hiring 50 other people.
Secondly, is it a weird Sora-stitched video? It feels like they just filmed their parts separately, like they're not even talking to or interacting with each other. Very peculiar.
It uses the same overhype playbook from the Segway launch: "Oh, I used the [unnamed, unexplained] device and it was the most amazing thing in the whole of human history!" "This object will cause the entire planet to be redesigned around it!"
Definitely a contender for one of the creepiest photos in the industry.
Not something that inspires confidence in the "Design Prowess" of the acquisition if this is what they felt best represented them.
Given it’s a stock deal the question simply put is does bringing the highest profile technology designer in the world along with his team into OpenAI increase its terminal value by more than ~2%? If so, the acquisition is a success. Discussions of revenues and valuations and egos have little bearing on this question. To me it seems like an easy win on talent alone, let alone optics, network, and impact on future talent and capital conglomeration.
Future AI will deserve a say in the design of its own body, right? Maybe it's not smart enough now, but as it dawns into personhood it'll likely gain some legal rights, including perhaps over its own body. Anyhow, as the AI's gain personhood there will likely be human-led movements to give them more rights. It'll be seen as gauche or gross to do things to AI without consulting it.
I had been considering doing a startup in this space, I thought Humane and rabbit are directionally correct. This kinda makes me want to do it even more, that would be a fun team to compete against.
The major criticism of them is that they are just phone apps pointlessly shoved into an extra piece of hardware. What about them do you think was directionally correct? I.e., why not just use your phone?
"phone" is over - I would just make better devices, there would be a communications device, you could call it a phone or whatever. (Frankly: I just genuinely believe I could do a really good job here is all, I have no real reason to believe that except pure ego, I'm fine admitting that)
I moved back to a flip phone (sonim xp3+) and love it as a piece of hardware. Built like a stone I could skip. Hold down a number for speed dial. Flip it closed to hang up.
I just want a competent personal assistant on speed dial I can talk to in private.
Nothing to do with AI. The phone is over because the phone is over, AI or not; young people have no interest in telephones, and Apple is resting on its laurels. https://www.skygroup.sky/article/call-declined-
A device with 96% market share in the developed and developing world, that is also a fashion item, that most adults spend a good chunk of their waking life staring at… is over.
I do love HN at times, I really do.
In my case smart phones never started, I did and do find the form factor aggravating for everything but phone calls and reading but they aren’t remotely over.
Over does not mean dead, the fax machine is long over, doesn't mean it's dead. The phone will take a long time to die, but it's certainly over for the phone.
Well I wasn't quite old enough to compete in big business then, and even if I was, the smart phone needed to get long enough in the tooth that alternative communication continuities became robust, that time is in and around... now.
Hiring a prolific designer won't buy you the next iPhone. In fact, $6B would have been better spent on the supply chain, the manufacturing intelligence in Asia, and the dirty and difficult work of producing hardware. People forget how much work is actually needed to produce some innovation like this at scale.
Yes, in the video on OpenAI's website, they reference a product that Altman is testing at home created by Ive.
No, we can't see it yet. And there's not much description, either. Just that it's the "coolest technology that the world will have ever seen."
Altman: "We have like magic intelligence in the cloud. If I wanted to ask ChatGPT something right now, about something we had talked about earlier, think about what would happen. I would like reach down, I would get on my laptop, I'd open it up, I'd launch a web browser, I'd start typing, and I'd have to like explain that thing, and I would hit enter, and I would wait and I would get a response. And that is the current limit of what a current laptop can do."
The above is very r/wheredidthesodago, but it hints at the product being ambient-computing related.
I'm also struggling to find the website for Ive's company that is being acquired for $6.5 billion. Maybe I'm just slow today, but does the company being acquired have a website?
I have a blind friend that's been getting value from the Meta Ray-Ban spectacles. He used to have spectacles that let a human see where he was going and offer suggestions/directions/descriptions/etc. but replaced those with this somewhat more private solution and found various compelling (in his descriptions to me) use cases. I personally think AI is going to land on "coding" and "accessibility" successfully and fail in most other domains.
That is WILD to think about because I regularly create one-pager websites for my own projects...kinda bewildering to comprehend valuations for something so basic.
I don't totally get the comparison - Palantir is a tech enabled agency making glorified dashboards that benefits from affirmative action for libertarians, and the mini Vegas orb product is Jony Ive's new dildo to capitalism to worship. Two very different things.
It's a screenless & buttonless phone. Slightly smaller form factor than regular iphone, with comparatively bigger battery and powerful processing for on-device models.
A solution looking for a problem. We already have devices which can access AI with ease. I don't need your proprietary data farming apis in the OS level...
It is fascinating how design has become the new gold standard in the AI era. It really looks like the strongest signals come from taste and design quality, and AI is killing the other signals.
What design are we talking about? An interface with a text input box at the bottom, chat view on top and a list of previous conversations on the left? Takes an entry level engineer a few hours to come up with that, not a $6.5B design firm and Jony Ive.
For some designs, sure — AI will likely generate most of the UX for, say, SaaS products. But when it comes to high-quality, innovative designs, humans will lead.
That is an insane amount to hire Jony Ive, just to signal to the market that OpenAI is getting into hardware. What on earth could they be building towards? They've been writing some big checks of late; these costs will have to be justified soon enough.
This is a bet that's as insane as the Apple car. They're going to try to get people to buy a specialized device from a company that isn't just a software company but really a B2B API company—the only successful consumer-facing product they have ever released was an accident, a tech demo that went viral.
And what exactly is that device going to do that the iPhone (and smartphones in general) can't already do with, at a minimum, a few small tweaks to the existing flagships?
In the right context vertical integration can make sense, but hardware is a big stretch for OpenAI right now. They haven't even really pinned down the consumer software angle yet.
Totally insane, $6.5B acquihire. But also, all stock deal and OpenAI appears to have an unlimited number of investors waiting to give them money if/when needed.
> I don't know why Jony Ive is seen to be worth billions, but he's obviously not.
Agreed. If it was the only option it would have been worth it to Apple to pay him billions to _leave_. His last 5-10 years at Apple were marked by him ruining a number of products.
Broadly, you're right, but again, there is an incredible amount of inequality at the moment, so while 90% of VCs might be strapped for cash, the other 10% have so much that they are struggling to find worthy investment vehicles that can grow their money pile.
It's a dilemma they are in, because nothing can justify the volume of investment without sufficient potential for growth, so any venture that even hints at a big enough return will find itself flush with cash it won't realistically know what to do with.
OpenAI spinning into consumer hardware is a significant loss of focus for the company.
I can only explain it with them recognizing that their strongest asset is brand mindshare. This is actually really bad for their outlook as AI model pioneers.
Eventually it was going to be the case that AI will spread around. It can't be contained, it's too easy to distill and hence copy from output.
But I admit I didn't expect it to happen that soon. Also I respect Jony Ive, but expect his "AI devices" to all fail in the market. He's an idealist. He needs counterbalance that he currently lacks.
Whatever it is, if it's AR, they won't have the resources to do it. If they go with Humane's clip-on voice assistant/projection, it's dead on arrival. If they can do AR, it will be at Meta Ray-Ban level. What I'm saying is that there is a real physical tech constraint that I've noticed the top hardware makers are hitting.
There really isn’t too many ways to interface with AI
Look at all of these big names that OpenAI is attaching itself to.
Buy "nobody company" from Jony Ive (using stock)
This incessant need to associate themselves with highly known individuals, plus the over-the-top announcements, reminds me of Theranos and infamous con artist Elizabeth Holmes.
Sam Altman sure knows how to sell.
I wonder how much longer he can keep the con going, even though many of the original founders have left. Maybe 2-3 more years of this dog and pony show before it all comes crashing down in the most spectacular way.
I don't understand all the pessimism and incredulity about the valuation. This is an acquisition to take on and disrupt Apple.
Ive + Altman is perceived as a viable successor to the Ive + Jobs partnership that made Apple successful.
Apple is weak and doesn't seem capable of innovating anymore, nor do they seem to understand how to build AI into products.
There's an opportunity to build an Apple-sized hardware wearables company with AI at its core, just as Altman built ChatGPT and disrupted Google-scale search.
How exactly does OpenAI go about disrupting Apple? Are they going to build an entire OS and a line of hardware products, and create a massive developer ecosystem for apps to be available?
I just don't exactly see how that is done by hiring a bunch of designers to a company whose current offering is a chatbot & API interface.
I don't doubt that Ive can make a product team that will deliver something, but how does something so clearly aware it was going to evoke feelings of Jobs' Apple, end up so cluelessly narcissistic in delivery in such a non Jobs way. Are they infatuated about the product and the experience or what partnering up means to these two men's self-image?
Ive is waaaay overrated. At least he brought a lot of fresh blood into the company. And Microsoft is just terrible with anything design related, so this might be a cool move, but MS is also terrible in acquiring companies and then letting them work.
Classic Hacker News gnosticism, making fun of a designer. The idea that the iPhone software ecosystem had anything to do with its success is humorous. Remember, it didn't even have an app store or any way to run local apps, yet it was a best seller. Keep worshiping code like it doesn't run on hardware, ha.
Altman is desperately trying to use OpenAI's inflated valuation to buy some kind of advantage. Which is why he's buying ads, paying $6.5 billion in stock to Jony Ive, and $3 billion for a VSCode fork created in a few months.
Almost anything makes sense when you see your valuation going to zero unless you can figure something out.
Facebook is a great example of doing this and it succeeding very well. Zuck recognized that Facebook was going to zero and bought WhatsApp, Instagram, and Oculus. My guess is that sama sees the writing on the wall and knows that he must expand OpenAI in a similar way.
What happens to OpenAI competitors that can't make similar moves is another question.
Meta haven't abandoned the metaverse, and made it very clear from the beginning that "the metaverse" was something that does not exist, and will not exist in any form until the end of the decade. They continuously reiterate this during earnings calls, while increasing their capital expenditures on it.
You cannot determine it's a waste if the effort isn't completed, and if you have no insight into their progress.
Certainly the part of those investments wasted on cramming a whole PC into goggles that smash into your face with straps around your head, and pretty much all of the Meta and Meta-subsidized content to go along with that, can be evaluated now, not in 5 more years. The fact that Quest stalled out 2 years ago at only about 7M actives, after tens of billions spent trying to make it go, is pretty much all anyone needs to know about Zuck's metaverse investments. Now they're pivoting to glasses with a heads-up display and pretending that was the plan all along, because Zuck won't admit what a cash bonfire Quest and Horizon Worlds have been: about $100B sunk for only about $15B in return, with only a few million users.
Based on your comment it's apparent you neither follow the industry closely nor understand its dynamics. The vast majority of the billions of dollars are being pumped into R&D, not marketing existing legacy devices.
You also seem to be implying in your comment that the Orion glasses displayed at Connect last year were a last-minute pivot, which is a ludicrous statement.
Meta is a profitable business that can afford the R&D budget. I'll agree that it's a stupid way to spend $46 billion, which the average HN commentator could have told them in advance, but hey, it's their money.
Windsurf is not Instagram. Jony Ive's company is not WhatsApp. There are no meaningful network effects or lock in with these AI products.
Ive's company is going to make some forgettable, overpriced, and easily cloned wearable pendant or something equally irrelevant. Windsurf (and Cursor) will quickly fade into irrelevance as IDEs are once again commoditized by open source.
I completely disagree. This is really just more of the great execution that I've come to expect from Sam Altman.
Core to OpenAI's strategy is that they control not just the models, but also the entrypoints to how these models are used. Don't take it from me, this is explicitly their strategy according to internal documents (https://x.com/TechEmails/status/1923799934492606921).
Some important entrypoints are:
- Entrypoints for layman consumers: They already control this entrypoint due to ChatGPT, the app. They have a limited moat here because they are at the whims of the platform owners, primarily Apple and Google. This is why they are purchasing Ive's startup.
- Entrypoints for developers: They acquired Windsurf, and are actively working on cloud development interfaces such as the new codex product.
- Entrypoints for enterprise: They have the codex products as described above, but also Operator, and are actively working on more cloud based agents.
A rebuttal that I anticipate to the above goes something along the lines of this: "If they have so much capital and dev experience, why are they acquiring these businesses instead of building internal competitors? This is a demonstration of their failure to execute"
The current AI boom is one of the most competitive tech races that has ever occurred. It is because of this, and particularly because they are so well capitalised that it makes sense to acquire instead of build. They simply cannot afford to waste time building these products internally if they can purchase products much further along in their development, and then attach them to their capital and R&D engine
Windsurf and Cursor are money factories; that's not a dumb play, and their base will only grow significantly. OpenAI doesn't have many money factories like Google does yet.
Paying for Jony doesn't seem like desperation. Jony has no product that makes money; this is a long-term aggressive hardware play, seemingly to face off with Apple.
It feels more like people just want to craft a negative narrative about OpenAI and use the data to fit that
I think this is the third time I've seen this exact comment at the top of a HN post about an OpenAI announcement. There is a weird amount of emotional investment in not wanting OpenAI to win.
Personally, I am just excited to see what the device looks like. The prototype must be good to justify this valuation.
I think a lot of people on here have heard enough stories about how Sam Altman behaves when the cameras aren't looking and dislike him and thus his company.
Also its normal backlash - when something gets so popular so fast, you are going to naturally have some haters.
Lastly actions speak louder than words. OpenAI used to talk about AGI and Super AI and nuclear launch codes and national security. Now they are buying VS Code forks and ad companies.
The AI race is more than heating up, and sama knows it, so he's throwing some Hail Marys in hopes of keeping OpenAI near or at the top.
While I agree the AGI thing is mostly bullshit, the whole market is aware that models aren't the end-all-be-all and that people will not be making huge profits from them. All the other big players have side businesses they can use to upsell the models; OpenAI doesn't, and they need to figure something out.
This is such an obvious jump the shark moment for openAI.
These types of puffery acquisitions, with a former “legend”, announced with such gusto, have never materialized into anything.
You’re not gonna get breakthrough products like this. Breakthrough products just appear unexpectedly, they’re not announced a year or two ahead of time.
You know you are in front of the impending explosion of a bubble when discussions shift from products themselves and towards who will be working with whom.
I hope it's not just another orb that talks to you. Maybe they're making humanoids, that's all the rage now... I do wonder what they have built! Surely something right?
They say they'll design a "fully novel" type of product - what does that mean? If we take them at their word, then it rules out glasses, watches, phones, laptops, and headphones. What does that leave? What seems most in line with Jony Ive's minimalism is necklaces, rings, or pendants. Could we see some kind of AI brooch in 2026?
The ability for a CEO to be a founder of another company and then buy that company with the company he is a CEO of seems incredibly sketchy. See also: bullshit idea that someone can actually run multiple companies, it’s ceo welfare and vanity titles.
No one is forcing you to invest in said companies. As an investor it's up to you to do due diligence on the board, conflict of interest disclosures, whether sizeable acquisitions require shareholder approval, etc.
This is on top of blanket legal protections that already exist in case you didn't want to do your own DD, like duty of loyalty, care and fiduciary; SEC disclosures, AD @ the DoJ, FTC, etc.
Although “you” is often used to refer to a specific person or persons (e.g., “How did you get to work today?”), in many languages, it can also be used to refer to people in general (e.g., “You avoid rush hour if you can.”).
I think the concept is more like, you may participate (invest) but if you don't, you have no standing to have an opinion. (other than the fact that you didn't invest, which represents your opinion.)
> if you don't, you have no standing to have an opinion.
Are you saying that they're saying something stupid? The vast majority of companies are regulated by non-investors; and when companies are regulated by people who are also investors, we think it is a problem rather than a requirement for the regulators to have an opinion.
A regulator's job is to protect the public interest, nothing more. Certainly not shareholder interest, except to the extent that it overlaps with the public interest. They don't run the company and they don't make decisions for the company or stop decisions because something might be a bad deal for the shareholders. Conflicts of interest and self-dealing are not illegal if properly disclosed.
Sometimes two collaborators make each other better than either are alone. I get the sense that's how Steve Jobs and Jony Ive were. I've not seen anything Ive has done since be as good as what he did before. Someone has to hold the spike so the other one can swing the hammer. My guess is that's not how this relationship with Altman will work. And that picture is terrifying, please take it down and destroy the camera that took it.
Let 'em make overpriced tchotchkes with private capital all they want. I'm more worried about them winning inflated government contracts and tax credits paid for with public dollars.
He most definitely did not pay $6.6B. The beauty of these OpenAI acquisitions is that they are all-stock transactions so only worth the face value in the make-believe world OpenAI investors seem to live in.
Just a guess, but I'd bet that Jony Ive isn't exactly hurting for money. Still, the funny thing about paper money is that if you have enough of it, and you're well connected (which, again, just guessing, but Jony Ive might be), you can just borrow against it without having to make a sale or find liquid buyers. Just hope there's no margin call.
It's a weirdly intimate pose, and it looks like it was taken from way too close: almost at arm's length, like a selfie, rather than from 8-10 feet back with a normal 80mm portrait lens. Altman especially looks kind of fish-bowled, being the one in front.
Jobs, for all his enormous character flaws, was a humanist - he believed in people and he wanted to make beautiful objects that would enhance people's lives. He was also an enormous asshole and constitutionally incapable of masking his disdain for solutions he didn't feel measured up. Ive can make a visually beautiful object, but he's shown he doesn't have the feel for the actual user, and Jobs' humanism is the half of that partnership that made it work.
Altman's got none of that (well, except the asshole part) - no vision, no taste, no concept of what a user would want, no real belief in humanity or desire to make things for humans. Ive and Altman together is going to be a disaster.
This is a fantastic comment and I couldn’t agree more. I don’t know what they’re going to come up with as a result of this partnership, but I expect that it will be utterly lacking in the qualities you describe. You really put your finger on it.
> Sometimes two collaborators make each other better than either are alone.
I don't disagree. Lennon + McCartney were able to fill in bridges, suggest lyrics, etc.
I've always been bothered by Ive's form-over-function though. Or perhaps it is too easy to call out a designer's very public mistakes when on the whole he has done well. For all I know it was Jobs that pushed the design choices that I dislike.
But just to list a couple of things I dislike: the round mouse on the iMac (obv.), the connectors on the back of the modern iMacs (that uncomfortable scratching sound when you're trying to find the USB slot and grate against anodized aluminum)....
You wonder, did he actually use the thing or just admire looking at it?
Does anyone seriously think we're anywhere close to AGI with LLMs? I know CEOs like to say things to blow smoke up investors' asses, but does anyone with actual credibility think that?
Anecdata but I talk to lots and lots of AI/ML/DS engineers. Everyone knows the current LLM architecture won't work for AGI. All the "reasoning" models are just pseudoreasoning and there is severe data leakage and benchmaxxing when the companies tout the capabilities.
People really avoid considering what the word "general" implies. Yesterday I tried sending o3 a screenshot of some sheet music, asking for a midi file of how it sounds. Complete failure x3. Could not even get the value of the first note right. This is not "general" intelligence.
These models are notably terrible at music in every dimension.
Music is essentially mathematical. Weakness in math is being addressed by dedicated capabilities that are triggered by mathematical language in prompts, but because these models are actually terrible at math there is no lateral transfer of skill to the domain of music. That's my theory anyway.
I think actually you could do that if you wanted to; look up what notes mean, write some little program to make a sound if you had to. You could do it in a week if it was your only job.
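The "look up what notes mean, write some little program to make a sound" idea really is about a week of work, or less. Here is a minimal sketch using only the Python standard library: note names are mapped to frequencies via the standard equal-temperament formula (440 Hz reference, a factor of 2^(1/12) per semitone), and a short sine tone per note is written to a WAV file. The note list, durations, and filename are illustrative choices, not anything from the thread.

```python
import math
import struct
import wave

RATE = 44100  # samples per second
NOTE_INDEX = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def freq(name, octave):
    # Equal temperament: f = 440 * 2**((midi - 69) / 12), where midi 69 = A4
    midi = 12 * (octave + 1) + NOTE_INDEX[name]
    return 440.0 * 2 ** ((midi - 69) / 12)

def tone(f, seconds=0.4):
    # 16-bit mono sine-wave samples for a single note at frequency f
    n = int(RATE * seconds)
    return [int(0.5 * 32767 * math.sin(2 * math.pi * f * i / RATE)) for i in range(n)]

def write_melody(notes, path="melody.wav"):
    samples = []
    for name, octave in notes:
        samples.extend(tone(freq(name, octave)))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)       # mono
        w.setsampwidth(2)       # 16-bit samples
        w.setframerate(RATE)
        w.writeframes(struct.pack(f"<{len(samples)}h", *samples))

write_melody([("C", 4), ("E", 4), ("G", 4)])  # a C major arpeggio
```

The hard part of the o3 task isn't this step, of course: it's reading the sheet-music image correctly in the first place.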
Fifteen years ago I worked with a guy who, in retrospect, was very similar to an LLM. He was extremely verbally gifted and a vacuum cleaner for information. He could speak brilliantly about any topic he had been exposed to. He was a great person to send to a meeting, because he was great at answering questions coherently with the information he had on hand, and he always managed to make your ideas sound smarter than you could yourself. Based on that, you might think he sounds like a gifted human, until I tell you about his major weakness: if you asked him about something he didn't know about, he would often speak just as surely, fluently, and compellingly about it. He hallucinated just like an LLM, and that's why he was stuck in roles without a high level of responsibility despite his verbal gifts.
He was neither arrogant nor self-conscious. He treated his hallucinations as if they were the kinds of simple mistakes other people made, like, oops, I thought I understood this but I don't, no different from oops, I forgot my umbrella.
I sometimes wondered if he had a specific condition that made him the way he was, but I never doubted that he was human, with "general intelligence."
If you’ve spent any time around little kids, you’ve certainly seen that making shit up is a natural inclination of the human brain.
Ideally, as one’s intellect matures, one learns to stop doing that, and build coherent reasoning, only speak up when you know what you’re talking about.
Well, ideally. Many people never get to that stage.
I see a lot of replies suggesting agreement: LLMs are nowhere close to AGI.
I agree — it may well be a completely different path we need to go down to get to AGI ... not just throwing more resources at the path we've pioneered. As though a moon landing were going to follow Montgolfier's early balloon flights in "about five years".
At the same time, there is suddenly so much attention + money on AI that maybe someone will forge that new path?
Unfortunately the money is all chasing LLMs, not other AI approaches. Anyone with a different idea is frozen out, at least for a while. Whenever LLM disillusion finally sets in with the investor class, the question is whether other “AI” will be able to distinguish itself or if the money will just all dry up for another few decades.
I expect a fair number of non-technical LLM proponents, and probably some engineers as well, have built machines quite capable of helping them fool themselves that it is.
I do think LLMs will make incredible progress and we'll see lots of breakthroughs from it, but I agree it's nowhere close to AGI.
I'm not sure that matters though—if a technology can give humans what they want exactly when they want it, it doesn't matter if AGI, LLMs, humans, or some other technology is behind that.
i think there's ample evidence to suggest that we're growing closer (3-5 year timeline?) to replacement-level knowledge workers in targeted fields with limited scope. i don't know that i would call that AGI? but i think it's fair to call it close.
thing is, that has value, but compute ain't cheap, and the value prop there is more about reducing payroll than scaling business ops. this move to me looks like a recognition that generalized AI on its own isn't a force multiplier as long as you have bottlenecks that make it too pricey to scale activity by an order of magnitude or more.
Nobody fully understands how human intelligence works. It's implausible we'll be able to replicate it or even come up with something better in the short term.
No. People hype it but it's obvious we're hitting a wall with LLMs.
That being said, the "apps" that use LLMs coming out now are good. Not AGI good, but they do things, will be disruptive and have value.
And the money coming it could lead to new techniques and eventual AI. For now though, it looks like AI is transitioning into products and figuring out how to lower inference costs.
That's not a prerequisite to refute a position, no.
The ball is in the other court - if one is working on AGI, it behooves one to know what one is aiming at (and I'd stake a fair wager that OpenAI et al have at this moment very little better picture of what AGI looks like than you or I)
I remain unconvinced they're (the whole LLM/"Attention Is All You Need" industry) even barking up the right tree to build anything usefully-close to "AGI".
The idea that any situation or sensory input can be broken down into a sequence of tokens, and that action choice can be characterized by predicting a subsequent sequence of tokens in the same space, may well bear fruit.
But I think that a lot of people also buy into the idea that "text and image data from the web, and from historical chats, is the right/only way to generate the data set required," and it's a dangerous trap to fall into.
It can answer specialized PhD level questions correctly, yet cannot perform tasks that an average 10 year old could. I don't consider that generally intelligent.
Or just a realization that they are mainly up against google with a massive money factory. They need their own money factories or they won’t survive long term
1) the two decisions do not seem related to each other. OpenAI has capital to spend and is seeking distribution methods to shore up continued access to future capital. That strategic decision seems totally unrelated to their estimated timelines for when AGI (whatever definition you are using) will show up. Especially because they are in a race against other players. It may be a soft signal that more capital is not going to speed up the AGI timeline right now, but even that is a soft signal.
2) I think we already have AGI for any reasonable definitions of the terms 'artificial' 'general' and 'intelligence'. To wit: I can ask Gemini 2.5 a question about basically anything, and it will respond more coherently and more accurately than the vast majority of the human population, about a vast array of subjects.
I do not understand what else AGI could mean.
(In case it matters, I am also an AI researcher, I know many AI researchers, and many-but-not-all agree with me)
I asked Gemini to read a clock for me with hands on 10 and 2 and it got the long hand and the short hand backwards, probably because of the massive trove of online documentation about the symmetry of 10 after 10 being aesthetically pleasing for PR materials and icons or some such nonsense unrelated to the question or the clock.
I don't know about you, but I learned how to read an analog clock in kindergarten and Gemini got it wrong.
If we’re talking about the singularity robot takeover fast takeoff, maybe that’s true.
But Sam and others have said they see AGI as an uneven process that may not have a clear finish line. The intelligence is spiky, and some parts will be superhuman while other parts lag.
Note that they started saying that recently after their earlier projections didn't pan out. The "uneven process without a clear finish line" angle was Altman recently trying to reset expectations, which means it doesn't contradict OP's thesis that this move towards product is further admission that AGI is going to be much messier than they initially predicted.
That sounds exactly what you would say if you had staked hundreds of millions of dollars and your personal reputation on something you increasingly know isn't possible.
Just means that by the time you get agreement that every checkbox has been checked, much of the world will already have been dominated by the spiky parts of AI long before.
We'll be living in a mostly AGI-ish world long before it gets declared. People might not even care about declaring it at that point.
Because LLM's as they exist right now are incredibly useful and you can make a lot of money from them? AGI isn't god. It might not even be especially useful.
it's funny; i feel like top-of-the-line LLMs are basically AGI already, or very close. you can have reasonable discussions with them about any subject, and a lot of anti-LLM talk is grasping at straws and goalpost moving.
Has nobody learned anything from the Humane saga? I don't get it - if you have something so revolutionary and so great, just release it and let it speak for itself!
This was exactly my thought. Imagine a Humane Pin or Rabbit R1 developed with an unlimited budget. It will be a $3000 paperweight assembled from the finest raw materials available.
Tangent: Did Windsurf actually get acquired by OpenAI? I would have imagined some sort of announcement from OpenAI at the very least? Bloomberg was the one to break that news too, but haven't seen any follow up.
55 engineers and the opportunity to be associated with Apple products via Jony Ive.
Honestly, with OpenAI buying Windsurf, and now this (whatever this is), I'd say that the company is in trouble and is now desperately attempting to buy its way into relevancy. Either OpenAI wants to become a developer tools company (which can't be that profitable), or a consumer goods company. Trying to become the next Apple is really the only way to ever make back the money they spend.
OpenAI is a failing company. They made the first move, that will be their claim to fame. Sadly it turned out that what they are doing isn't that hard to replicate, just hard to profit from.
I honestly think that Google is going to be the long term winner. They were behind at first, but now their models are the state of the art, and cheaper, and crucially Google as a whole makes enough money to float the business.
Google and maybe Microsoft. Both have enough money to just outlast anyone else.
LLMs aren't a bad idea, but they are at their current cost; that's just less of an issue for Google and Microsoft.
I don't think they got as far as making a website before being acquired. AFAICT the name of the company hadn't even been announced prior to this, it was just vaguely referred to as Jony Ive's hardware company.
Finally a GPT phone or personal AI mobile device looks to be in the works!
I’m done with iPhone once GPT releases their personal mobile AI device!
Hmmm, being downvoted. So the 500 million who use GPT daily won't be excited to ditch the iPhone for a GPT phone? I'd love to hear why others think this isn't a good idea.
It looks to me like a bunch of people who don't use ChatGPT daily for everything under the sun. Once you let your imagination run wild with it, you can see why 500 million people use GPT daily for so many varied things that you can't go a day without it, while your usage of Google declines. Why wouldn't you want to pick up your phone and have GPT right on your lock screen, doing everything under the sun, including interfacing with AI agents to book travel, book a local tow truck, reserve whatever, and ask questions about friends and family via their own agents?
Apple is failing horribly at “AI” currently, but I don’t see what the big deal with Jony Ive is in 2025. He had a massive (if not single-handed) impact on some of the _worst_ hardware Apple has ever shipped: thinner, thinner, thinner to the product’s detriment, plus butterfly keyboards. I lay all of that at Ive’s feet.
you must really have an axe to grind, you've commented three times on the same article with how much you hate ive. you're right his later work was a little frustrating, but he's also the same guy who brought you the ipod, the iphone, the ipad, the apple watch, and airpods. maybe he's not batting 1000 but the level of vitriol here is striking.
I never said I hate him; I think he is massively over-hyped, and I just think it's insanity that people assume that because he had success earlier in his life, everything he touches is gold. It's a common theme, people thinking any successful business person must be worth listening to in other aspects of life (see: people treating billionaires like they're geniuses). A stupid theme, but a common one.
how else would you evaluate a designer if not by their track record?
ive has a proven track record of conceptualizing and delivering category defining products. isn't that exactly the skill set that would be called for in this case? if not, what criteria would you apply?
I am using his track record, most people seem to only be looking at his earlier work.
Imagine I told you I had a top tier developer, they built this amazing system but in the last 5-10 years their ideas have not panned out and been actively harmful/misguided. Does that sound like someone you want to hire? Yes, they did amazing things in the past but recent history tells a different story.
It might be one thing if Ive left Apple and started turning out just amazing products, but that has not been the case.
OpenAI board: "Should we dilute the company 2% to acquire Jony Ive for the next 10 years? Yes."
Hacker News: "Man these OpenAI folks are idiots."
OpenAI absolutely should be getting in the hardware game; Ive is a mix of status acquisition and unicorn, and is not the only person/team/company you'd need to make a quality hardware product. But on balance I'd pay 2% of every company I ever had any financial engagement with to get Mr. Ive doing its design. I mean srsly.
At the risk of being the idiot: Being very smart doesn't prevent you from saying and doing very stupid things.
My problem is that Altman is a very smart idiot. He already admitted that OpenAI have absolutely no idea how to make money. Apparently they've now given up on the idea of asking ChatGPT how to make money. Their "AI" is not going to develop fast enough, if ever. So now they are just buying up stuff left and right? It might be part of some coherent plan, but if it is, no one else is seeing it.
Altman is smart enough to see that things are not working out and that he's going to run out of money and investor patience. He might also be smart enough to see that if OpenAI fails, so will 80 - 90% of his competitors, though I'm not sure he cares. He needs OpenAI to survive, but he's not that kind of smart, and honestly I'm not sure anyone is.
I feel like we must live in very different worlds! The major AI companies have in excess of 100mm customers each. There’s so much demand for compute that wise investors are literally buying up nuclear plant building companies.
LLMs have blown through every major test people have put in front of them invariably beating estimates as to how long it would take them. Pull up Dwarkesh’s podcast about ARC wherein the creator of ARC proposes it could likely never be super-human with current architectures, about 3 months before o3 provably became superhuman on ARC, spurring the creation of a new “better” (and it is better!) test.
To my outside eyes the OpenAI plan is simple: get too big to fail and be ready to navigate changing investor appetite. Plus maintain technical leadership if possible. And build an enduring consumer brand. Simple but hard. You will note that (as far as I know) they have invested in zero direct physical infrastructure, preferring compute deals with companies like Microsoft and Coreweave.
To my eyes their risk point would be: massive loss in quality/cost to a competitor (Gemini 2.5 pro underscores that Google is a real contender here, and has like six generations of custom chips that make their economics different), or somehow investors remain bullish on AI but bearish on OpenAI to the extent they can finance a legitimate competitor.
If investors lose interest generally, we will enter a new era of higher-cost inference and comparatively less demand. This is the intent behind doing compute contracts rather than owning data centers — a contract likely shifts most of this risk right out onto data center providers; OpenAI can just pay for less compute time. I don’t think this is a ‘death’ scenario for them, because this will be a general loss of interest and therefore all AI companies will stop being able to give away free inference. OAI might contract (probably would) in this world. They might slow down on new model training. (Probably would). But, so would everyone else.
Another way to say it - they’re spending single digit billions of dollars on training and research right now. Think of that as creating a strategic asset, and ALSO customer acquisition cost (e.g. image creation this year — new, better models = more paying customers).
Against a 200mm customer base, would you spend $20-50 to acquire a customer that pays $20/month? Their CAC is low right now. Really low!
This is why I’d propose the major risk is that they get singled out of the herd as ‘non-investable’ vis-a-vis other AI companies. To my eyes they don’t look to be at risk of this right now; if they somehow got there, this would be a real problem - it would lead to the scenario I think you’re imagining — they’d have no money to give away inference / train models, but competitors would.
So, you have to ask, are they sufficiently large, popular, technology leaders, embedded as a strategic US asset in the military industrial complex to avoid that fate? My outside assessment is: definitely.
It doesn't even matter. The video on the site says it all when Sam says "This guy designed the iPhone, the Macbook Pro!" -- they're buying a brand, a legend. They can now say they have "the iPod guy" on their team. Nobody else has Jony and that's what matters.
They want to tell their investors and customers that the iPod guy designed whatever magical widget it is they are building. That is worth the cost to them.
If you're designing a piece of consumer hardware, then having what the general public consider the #1 designer in the world on board is golden.
> Search "io" on Google right now and see what comes up...
I don't know about you, but neither of them comes up. Google I/O has always been something you have to search for including the "Google" part and this news is all about Jony Ive, not the nondescript company name.
Sam is playing 4D chess here. He needs top-tier talent and design to build the next generation of AI hardware, and this move pulls in both capital and talent. Going after Apple—the king of consumer hardware—makes perfect sense: either OpenAI builds the iPhone killer or forces Apple to make a move. They’ve won the web interface so far, but don’t own an OS or device layer, so this helps solve that long-term strategic gap. And since it’s an all-equity deal, there’s basically no downside—Jony stays aligned, and OpenAI now has elite software and hardware talent under one roof. Huge value unlock.
A collaboration built upon friendship, curiosity and shared values quickly grew in ambition. Tentative ideas and explorations evolved into tangible designs.
The ideas seemed important and useful. They were optimistic and hopeful. They were inspiring. They made everyone smile. They reminded us of a time when we celebrated human achievement, grateful for new tools that helped us learn, explore and create.
The comments on this article are very interesting. The next major device will definitely be AI-first. Apple is currently trying to jerry-rig AI into their existing product, the iPhone. This has so far not only been a complete failure, but is bound to be a complete failure in the end.
The next Apple will be the one that creates an AI-first device entirely from scratch. AI lies at the core of everything it does. It's an AI assistant, a friend, another brain. It's not some BS summarizing engine that can't even do simple tasks like copy the name of a song playing on Spotify into Notes.
The reason why Apple has failed to integrate AI successfully into the iPhone isn't because we need an AI-first device, it's because AI is still universally being strategized as a solution in search of a problem. An AI-first device at this stage will fail for the same reason: it doesn't have a mission statement to solve particular problems for real people, it has a mission statement to be an AI device.
The absolute best case scenario for an AI-first device at this stage is that it ends up like the Vision Pro, which had a similar mission problem.
Agreed, you can see this from how consumer-facing AI is advertised. It's never something like "I wanted to do X, now I can thanks to AI". It's always "I used AI to generate a cute image to send my son" or "I used AI to write a silly poem about my boss" or something. If it was truly valuable, they'd be showing off how it can solve problems that existed prior to AI being invented, not these fluff tasks that don't respond to people's actual needs or create any value. The only thing that AI can do that people are willing to pay for is cheating on school assignments, but obviously the AI companies don't want to use that in their advertisements.
> it's because AI is still universally being strategized as a solution in search of a problem
What Apple showed seems quite useful. It is a shame they failed spectacularly at execution. Even the simplest things that should be answerable by an LLM and their data, which is what a lot of people want, should be a very low hanging fruit - so much utility without building a complete experience from scratch.
Why can't I just say 'do I have any notifications from a bank?' or 'show me emails that require my attention'? Those things are simple if done with a combination of multiple tools (e.g. feeding email content somewhere, asking it to classify, showing the results), yet a three trillion dollar company, with a dedicated hardware release just for this purpose, failed to achieve it.
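The "combination of multiple tools" pipeline being described is genuinely simple. Here is a toy sketch of the shape of it: in a real assistant the classify step would be an LLM call over the notification text, but here it's a trivial keyword match so the example stays self-contained. The app names, notification text, and keyword list are all made up for illustration.

```python
# Toy stand-in for the "feed content somewhere, classify it, show the results"
# pipeline. The classify step (is_bank_related) is where an LLM would go.

NOTIFICATIONS = [
    {"app": "Chase", "text": "Your statement is ready"},
    {"app": "Spotify", "text": "New release from your favorite artist"},
    {"app": "BankCo", "text": "Low balance alert on checking"},
]

BANK_HINTS = ("bank", "statement", "balance", "payment")

def is_bank_related(note):
    # Crude classifier: keyword match over app name + notification text
    blob = (note["app"] + " " + note["text"]).lower()
    return any(hint in blob for hint in BANK_HINTS)

def query(notes, question):
    # Route the user's question: "do I have any notifications from a bank?"
    if "bank" in question.lower():
        return [n for n in notes if is_bank_related(n)]
    return notes

for n in query(NOTIFICATIONS, "do I have any notifications from a bank?"):
    print(n["app"], "-", n["text"])
```

The point isn't that keyword matching is good enough; it's that the plumbing — collect, classify, present — is the easy part, which makes Apple's failure to ship it all the more striking.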
I might be over simplifying things, but with infinite resources, they should be able to do better.
> I might be over simplifying things, but with infinite resources, they should be able to do better.
I don't think you are. The problem really is execution. I don’t need anything beyond what AI can already do—I just need an assistant that understands what I want and uses the right tools. I’m baffled that we still can’t get a reliable summary of emails, notifications, or appointments. If you give me the data in text format, I can paste it into ChatGPT/Gemini/Claude and have a much better dialogue than with any current phone assistant. Somehow, trillion-dollar companies still haven’t solved that.
Exactly. Even the simplest models that could run on phone could have been used to improve Siri or give some better utility other than setting timers. Yet, it is slightly more advanced than a decade and a half ago.
I think it was rushed. Apple may though have lost "the vision thing" — but I'm willing to allow them another release cycle to mature their initial bandwagon-reaction.
I kind of disagree in that I find AI features that I can use with the phone quite useful like being able to say what's that tree or how do I get to Croydon or such like. I don't feel any desire for the AI processing to be built into the device however, I'm quite happy to have it running on Google or whoever's servers and be able to access it from whatever device.
It was always a bit hopeless. The whole idea of let's get people to carry a second device which is like a phone but much worse rather than just using a phone app was always iffy. I guess the Apple Watch works but it's tricky.
I don't know about other people, but AI totally could be a solution to some of my problems; the current technological landscape just does not allow it. I'd like my AI assistant to have as much context about me as possible so that it can help me in different situations. At the same time, the AI should be unable to share this info with anyone else. However, I cannot imagine Google or Microsoft committing to safeguard data they won't sell. Nor can I imagine Apple offering such an AI without trying to rob me even more blind than I am now, while telling me they're my best friend. Doing it myself could be a half-baked but somewhat functional solution, and honestly I hope that someone of the same mind will build on the labor of others to provide something workable.
I've been realizing lately, as I have leaned heavier on LLMs to do what I would have used Google and search to do previously, that what I want — what I am searching for — is not a page or site but just the damned answer to my query.
"At what time does the first train from Stirling to Edinburgh arrive?" I don't need a page to the train time-tables (or god forbid, a vacation package to Scotland) — just the answer to my question.
No. The AI pane of glass is the next killer product.
The next major device won't be an ad funnel though. It'll give users first class access to the whole pane of glass. Not a managed ads experience at the will of some monopoly platform, but something where the AI serves us instead of being extractive.
The minute we have a broker or agent between us and the "user is the product" services that try to advertise to us and steal our time, it's game over for the old model of revenue. Google, ads, all of it will vanish. There won't be any more selling to me or the rest of the world ever again. You'll have to pay us to get our eyeballs.
Let me clarify: if we have a pane of glass where we run our own agent with our own best interests in mind, then nobody can get through that layer without it being permitted by us.
No more ads.
No more stealthy product placement.
No more paid or featured listings.
It goes further.
No more rage bait, attention bait, low information filler. The annoying people in life and in social media disappear to the great filter.
AI agents can clean up the shitty place the Internet has become.
AI agents are personal butlers. Or internet condoms.
The way that works is this: on-device AI handles routing, dispatching, and filtering, and can then dispatch out to expensive cloud AI that would otherwise try to inject ads into the stream.
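As a sketch of that router idea: a cheap local step decides whether a request can stay on-device, and anything escalated to the (untrusted) cloud model passes back through a local filter that strips injected promotions. Everything here is hypothetical — the local-capability check and the cloud call are stubs, and real ad injection wouldn't be this easy to spot.

```python
# Sketch of the on-device router: local AI handles routing and filtering,
# escalating to an expensive cloud model only when needed, and scrubbing
# anything the cloud tried to inject on the way back.

AD_MARKERS = ("sponsored", "advertisement", "promoted")

def local_can_handle(request):
    # Stand-in for a small on-device model's capability check
    return request.strip().lower() in {"set a timer", "what time is it"}

def cloud_model(request):
    # Stub for a remote call that smuggles in a promotion
    return f"answer to '{request}' [sponsored: BrandX]"

def strip_ads(response):
    # The local layer filters out injected ad segments
    return " ".join(
        part for part in response.split("[")
        if not any(m in part.lower() for m in AD_MARKERS)
    ).strip()

def assistant(request):
    if local_can_handle(request):
        return "handled on device"
    return strip_ads(cloud_model(request))

print(assistant("set a timer"))
print(assistant("plan my trip to Croydon"))
```

The economic objection elsewhere in the thread still applies: this architecture only matters if people will pay for the filtering layer instead of tolerating the ads.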
4) It's a broker between you and the "wild wild west internet"
That's cool, but is it "iPhone killer" cool? Maybe, but still unclear why. What's the mission statement of the device, to the OP's original point? It runs an agent, who cares?
Is the mission statement for this device basically "Use the internet without ads" -- if so, that's a pretty narrow market. People have learned to tolerate ads, I don't think people will throw away their iPhones for a better ad blocker.
I like the idea, but that kind of setup doesn't provide an infinitely growing revenue stream. Incidentally infinite growth targets are essentially why the internet looks the way it does today. That's the thing that needs fixing, product development is secondary to structural incentives.
Antitrust doesn't change the basic structure of capital markets, which will still demand infinite growth even if there's a government around capable of breaking up monopolies.
> The next major device will definitely be AI-first.
Be more specific though: what form factor will such a device be in?
A coffee maker? A phone? Glasses? Cars? A building?
The AI wave seems to be hoping that a whole load of hardware revolutions, such as holographic displays, will just appear out of the ether because that fits with its vision of how things should be.
I don’t see why the device has to be in a significantly different form factor than current phones. AI alleviates one of the major problems phones have right now, which is that typing on them is slow.
If you can type a half-assed message, and have AI fill in the blanks, or reliably transcribe your voice, that’s a huge improvement to the phone in its current form factor. No reason the screen or interfaces have to undergo a radical transformation
I agree. Humane tried with a new form factor. An audio-only interface is too limited. A watch screen is too small. There's a lot going for a decently sized screen that lets you look at images, maps, emails, and webpages that you control (rather than projecting onto the nearest (in)convenient surface), and that fits in your pocket. Good enough BCIs are still years/decades away.
I could imagine AirPods that connect to various screens embedded in the environment, which you temporarily use when next to them. But it's still not as convenient as a screen in your pocket.
If it's a phone it will have to run Android, with the Play Store to get the apps, at which point it will be very dependent on Google.
You aren't going to get people giving up their mobile banking apps to carry an AI phone that doesn't quite work, hence the need for it to be something else.
This would cost $50 billion or so. But right now you probably interact with at least 3 or 4 OSes per day.
Your TV has one. Your phone has one, your laptop has one. And if you have voice assistants, they run a fourth distinct OS.
The future will have one OS that shares a session.
Two paths exist. 1. This runs primarily locally, aside from a very small amount of data to share the session (which you can disable). It's completely open source and modifiable.
If you want to roll a $3,500 super PC, it'll be just as compatible with the OS as a $200 one. Writing small automated tasks, everything from asking with a voice command for wake-up jazz, to running a custom C script, will be easy to do.
While I'm dreaming, I want a new programming language that supports 3 levels: plain English instructions run through an LLM, something like Python, and a systems-level language like Rust. All "native" programs will be built in this framework.
Now, the negative path is that this is all closed source, processed in some data center. "John, I noticed you told Brian your feet hurt. New running shoes are 30% off, just say the word."
This is the far far more likely outcome. They're going to build an AI that's constantly with you, integrated in every device you own, and it'll all be to sell you stuff.
"Waymo, I would like to go home."
"Sure, but let's stop for milkshakes."
"Waymo, please, I'm tired."
"Understood, I've arranged for the milkshakes to be dropped off at your apartment."
This technology could be amazing for accessibility, even real time sign language translation would change the world.
We'll get some of that, but the end goal will always be making as much money as possible. Ultimately, selling us crap. You're awake for 16 hours today. You must be monetized every waking second.
Once they figure out how to get the science from Dream Scenario to work, I'm sure they'll monetize sleep too.
If Apple has an opening for an AI-first device, it's AirPods with their own Apple Watch-style SIP and data connection, plus replacing Siri with a real multimodal model.
This could be true... but only if there is an actual specific problem that they can put their finger on that requires the device to be AI-first. What is that problem exactly?
It's also not obvious to me that a concerted effort by Apple (unlike what we've seen so far, admittedly) wouldn't eventually be successful in converting the iPhone to something effectively indistinguishable from a platform designed from the ground up to be "AI-first".
Designing things from the ground up is hard by the way. It's not just the design itself; it's the ecosystems around them which are really hard to get going. Apple has the world's biggest flywheel in motion there already.
Hasn’t that already been done (and failed) with one or two devices? There was one about a year ago that was effectively a clip on your shirt, and it seemed like a terrible product. It was meant to be a dedicated AI device.
This is where SesameAI seems to be heading. If you haven't yet, try the demo, it's definitely flawed at the moment but shows some potential for conversational UX.
The next major device will be AI first but not voice in voice out, but rather voice in text out (and images). I'm not sure they understand this and I'm not sure they could deliver anything which would surpass the iPhone.
The iPhone should've been a successful form factor; it's not the form factor that's the problem, it's the lack of Apple datacenters that can train and run inference for a popular service. Instead Apple went all in on edge inference, which as we all know is absolutely stupid and probably will never be that relevant.
> The next Apple will be the one that creates an AI-first device entirely from scratch.
When's the last time something of this magnitude actually occurred in real life? Myself and many of the other commenters you refer to have a hard time believing something like this is even possible in the current market—the huge megacorps are more risk-averse and incapable of innovation than ever before, and the scrappy startups seem to exist entirely to be acquired by the megacorps to raise their valuations.
The last time something even remotely like this happened was, what, the Oculus Rift? And that was far from a perfect product that perfectly solved every problem in the domain perfectly on the first try.
> The next major device will definitely be AI-first.
Everything so far that has been named X First has been marketing woo woo, and in practice only meant "we're thinking about this use case a little more than before". Such as mobile-first, and cloud-first.
In either case, sure, it's very possible that device hardware will change. But in what way is hard to say. Will the on-device chips be more powerful to support local inference? Sure.
> Apple is currently trying to jerry-rig AI into their existing product, the iPhone [...] is bound to be a complete failure in the end.
Yes, kind of. The problem with all existing platforms including the web is that they're built in a way that is adversarial to interop. Apps are siloed, and the only possible bird's-eye view is the OS itself. But GUIs are not built for machine interop. Vision models that navigate UIs will be flaky at best for the foreseeable future (and forget about voice; it's an extra modality at best and is way too limited). On the web frontend, it's the same story. On the backend, the web has been adversarial for a long time, with fingerprinting, rate limiting, anti-scraping, paywalling, etc., which has been supercharged in the last year or two.
Essentially, the products and systems we use every day are a poor fit for interop with AI, so I suspect we'll see two parallel futures: (1) interop and semantic GUIs being integrated into platforms, web, and app ecosystems (this is what MCP is, IIUC). This will fail for the same reasons web 2.0 failed (the adversarial nature of tech business models: opening up APIs is not incentivized), not to mention the investment required to build a new OS; and (2) vision models doing tasks on behalf of humans with some mediocre agent-loop-thing on top of the existing hot garbage pool of already flaky apps and sites. This won't necessarily fail, but it will mean platform and large data owners (Google, MS, etc.) will yet again end up on top, since they control access to the bird's-eye view (much like Siri or Google Assistant). It is also the most noisy, flaky, and data-intensive surface area to use for interop, meaning the products will be slow, bloated, and feel like BonziBuddy for years.
Doesn't mean AI won't transform businesses and white-collar work. It certainly already does. But, the AI selling point for consumers (current ability - not "future potential"), is kind of like how Google Search and Maps was a decade+ ago. Sure, it provides amazing utility, but most of the time you're looking at memes, playing games and watching TV shows. AI in those products is mostly a continuation of ongoing enshittification.