OpenAI Device Imagined

What Do Sam Altman & Jony Ive Have In Store For Us: Thoughts on OpenAI's Upcoming "Device"

Two years ago, Apple's former Chief Design Officer, Jony Ive, started working with Sam Altman and the team at OpenAI. About two weeks ago, they announced their joint venture, io. The goal of this new endeavor is to "create a family of devices that would let people use AI to create all sorts of wonderful things." Like some of you out there, I am completely fascinated by the idea that OpenAI is going to make physical hardware. While they haven't given us a lot to go on, what they have shared is intriguing, like the one hint given by both Jony Ive and Sam Altman: the device will have NO screen. So, what could OpenAI's device be? For the rest of this article, I'm going to share, from a purely speculative standpoint, what I think Sam and Jony's teams may have in store for us.

Let's get super nerdy about tech! 

Why Does OpenAI Need A Device?

Before we dive into this, a little story. I sent my wife the first draft of this blog. After reading it she asked me, "Can't this all be done in an app?" I found that to be an incredibly profound insight. I realized I was so captivated by the thought of what the device might look like that the question of whether it was needed at all had totally escaped me.

So, I think the short answer to my wife's question is, "Yes, my concept for what the OpenAI 'device' could be can pretty much all be done in an app." However, the longer, more complex answer is, "Although my concept can be done in an app, it likely cannot be done well without being broken into a family of devices." And I think that is the crux of why OpenAI would build its own hardware.

I wrote an article at the end of last year titled "Is the Next Meta Smartphone the Puck that Comes with the Orion?" Basically, Meta showed off a pair of very capable AR smartglasses, AND with them a required "Compute Puck." That puck was required because there is not enough space, power, or cooling capacity on the glasses themselves. Also, and maybe even more importantly, Meta needs the puck because neither Apple nor Google allows the deep system access Meta would want in order to create a seamless, great customer experience.

That, in essence, is exactly why I think OpenAI, with the help of Jony Ive, is embarking on a similar journey to create actual hardware. On this particular day you might also be asking, specific to Apple anyway, "Can't they just use the new Foundation Models framework that Apple introduced yesterday to do AI natively on the phone?" Although they technically could, the point is to use their own AI models, not someone else's. That is the biggest difference. It's all about controlling the platform, and all its elements, to deliver the best possible experience to customers. With both Apple and Google running significantly locked-down ecosystems, the only way for anyone to compete is to do it themselves. And that is where we get into the hardware, software, and services of it all.

So, What Do I Think OpenAI's Device Will Include?

Hardware

To me, this is the most fun part of all this, because I love hardware. It is what inspired this post to begin with. So, if you couldn't tell from the primary image for this article, I think OpenAI is going to make two pieces of hardware. One will be required and one will be optional. 

The OpenAI Intelligent Pack

This is what it's all about. As you can see from my mockups, I think this screenless device will be a medium-thick rectangle that is ALL about AI model processing power. That will be its main function, but it will likely require a pretty serious battery, as well as both Wi-Fi and Bluetooth (likely multipoint). I also think it will have MagSafe/Qi2, but more on that later. I think it will potentially be able to work totally disconnected, with an onboard model (this may even be an out-of-the-box, subscription-free product that "just works"), but it will work best when internet-connected to the much larger, and constantly adapting, main OpenAI model (requiring a subscription). That connection could happen over Wi-Fi, but will likely rely on your phone for constant connectivity to that larger model.

I think the idea would be that you would connect a set of Bluetooth headphones to the OpenAI Intelligent Pack and, because it has multipoint Bluetooth, they could also be connected directly to your phone, allowing all interactions to go through the OpenAI device while still allowing control of music, podcasts, and other phone-specific functions. I think this necessitates an OpenAI app as well, but more on that below. I wouldn't be surprised if they decided to make earbuds in the future, but that's for another time.

The OpenAI Camera Pendant 

This one is a bit out there, but I can't help but think that the device needs to "see" to achieve its full potential. Thus, I imagine a 360° fisheye, semi-flattened spherical pendant that can be attached to a necklace or a brooch-type pin. Yes, I know this conjures thoughts of the Humane AI Pin, BUT what I think is significantly different here is that, unlike Humane's device, what I'm imagining for OpenAI is nothing but a camera. It would connect to the OpenAI device, and that would be it. It would be a "dumb" device, literally just sending audio and video, with no processing and no interpreting, somewhat like a battery-powered video doorbell that can last a month or more on a charge precisely because that's all it does. The idea of a spherical 360° lens, with some small microphones, is that if it were on a necklace, it wouldn't matter if it flipped over, since the lens is on both sides. And, with integrated microphones, it can "listen" to the world around you as well.

Software

As has been intimated, the OpenAI device will likely have no screen and be optimized for voice interaction. But that doesn't mean it won't have access to a screen. That is where your phone, and an OpenAI app, come in. There are obviously going to be times when you ask a question that requires OpenAI to show you an image. In cases like that, it feels obvious that there will be an OpenAI app on your phone that you can reference for these visual answers. For example, if you asked, "What does the Transamerica building look like?" you would likely get an alert on your phone that, with one tap, takes you right to what you just asked about. Add to that probably an entire world of settings and options you could adjust to your heart's content, as well as a chat interface for discreet queries. Again, audio will be primary, but screens are just too important not to have access to one.

There will also, obviously, be software on the device itself. I'm guessing this will be a fairly robust AI model, optimized for the space, that can help and answer questions even when disconnected from the network. Like I said above, it would be neat if it just worked out of the box, no subscription required, but, of course, gave you access to updates, even larger models, and OpenAI through the cloud with a subscription. There is a lot of potential, and a lot of options, here.

Services 

OpenAI already runs on a subscription SaaS business model. Having a physical device attached to it, allowing the service to be accessed and used on an even more regular basis, just increases the likelihood that even more people will subscribe. Expanding on that, the more it knows and learns about you, your life, and your environment, the more valuable it becomes, to the point it might be hard to ever not subscribe. It has the potential to be a service unlike anything we have ever experienced before, in its ability to know us, and thus assist us, for better or worse.

Other Considerations

MagSafe/Qi2

Yes, I know this kind of feels out of left field right now, and a little self-serving coming from a guy whose business is all based around MagSafe/Qi2 products, but hear me out. Basically, who among us wants yet another device sloshing around in our pockets or purse, one that can easily be left behind because it's not our phone? Not me, not my wife.

Thus, the idea here is to use MagSafe/Qi2 to make the main OpenAI device part of the phone, in a way. Unless you make some stupid-looking case design, which would require making many, many different phone cases for every size and version of phone, changing every year, and which would make your customers mad because they'd need a new device just because they got a new phone, it doesn't make sense to do anything other than MagSafe/Qi2. MagSafe is basically ubiquitous at this point (ignore the iPhone 16e here) and universally loved (see every article written about how disappointed reviewers were that the iPhone 16e didn't have MagSafe). Add the fact that it is now part of a standard that can work with any phone, not just Apple's, via Qi2 (which all new Android phones will likely start adopting), and you have a near-universal platform to attach the mythical OpenAI device to, turning two devices into, effectively, one.

I just hope that, if this were to happen, they make it the same size as Apple's MagSafe Battery Pack (which all my mockups are based on) so that it fits perfectly into the opening of my OpenCase cases. Heck, that would only make it even thinner, lighter, and more secure. What more could you want? Well, actually, if the OpenAI device were built with dual-direction MagSafe/Qi2, so that it could receive a charge from supported phones and maybe even charge a phone back, now you're talking about a seriously robust device ready for a full day in the age of AI.

OpenAI, The Device & Our Future

In laying all this out, and especially in referencing the Meta phone article I wrote last year, this feels like an obvious step toward creating an interim platform for OpenAI to build on. Its potential usefulness is obvious, leading me to believe that smart AR glasses might be on the roadmap as well, with a potential stop at "make their own smartphone town" along the way.

I think it's clear that significant change is in all our futures, whether we like it or not, thanks to AI. That inspired me to take this fun romp into guessing what this device (or devices) from OpenAI might look like, and even some of the ways it might be part of a larger system, from Bluetooth headphones and connected camera orbs, to standalone services and subscription-based offerings, to MagSafe/Qi2 connections. I'm as nervous as I am interested to see what really comes together. And, as they say at the very end of the Sam Altman & Jony Ive acquisition video, "We look forward to sharing our work next year."

It's gonna be a long year.

————————————————————— 

Image explanation: I literally took a picture of my Apple MagSafe Battery Pack, then used Pixelmator to erase the Apple logo and add the OpenAI logo.