OpenAI’s Self-Hosted Option Changes the Game for Privilege-Sensitive Professions

With OpenAI’s release of a fully self-hosted model, the conversation around legal and medical AI use just shifted—subtly but significantly.

For years, the promise of generative AI has clashed with the hard boundaries of privilege and compliance. Lawyers and clinicians want to use LLMs for research, drafting, or triage—but uploading sensitive information to third-party tools, even “secure” ones, risks breaching attorney-client or doctor-patient privilege. Worse, under HIPAA, uploading protected health information (PHI) to a system without a signed Business Associate Agreement (BAA) is a clear violation.

OpenAI’s hosted offerings (like ChatGPT Enterprise) tried to split the difference—disabling training on user inputs, offering SOC 2 compliance, and claiming no retention of prompts. But they didn’t solve the core issue: from a legal standpoint, hosted AI tools are still third parties. And privilege waived, even unintentionally, is privilege lost.

Self-hosting changes that. By running the model entirely inside your infrastructure—air-gapped, audited, and access-controlled—you eliminate the ambiguity. There’s no third-party disclosure, no downstream training risk, no hand-waving about deletion. For legal and medical contexts, this architecture is a critical step toward preserving privilege by design, not just by policy.
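To make "inside your infrastructure" concrete, here's a minimal sketch of the client side, assuming you serve the weights behind an OpenAI-compatible endpoint (servers like vLLM expose one) on hardware you control. The hostname and model name are placeholders, not real values:

```python
from openai import OpenAI

# Everything stays on infrastructure you control: base_url points at your
# own server, never api.openai.com. Host and model name are placeholders.
client = OpenAI(
    base_url="http://llm.internal.example:8000/v1",
    api_key="unused",  # local OpenAI-compatible servers often ignore the key
)

response = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": "Summarize this deposition excerpt."}],
)
print(response.choices[0].message.content)
```

The client library is incidental; the point is that the prompt never leaves your network, so there is no third party to disclose to.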

But architecture is only part of the story. Most people—including many legal assistants and clinical support staff—don’t know that sending a document to a hosted chatbot could constitute a privilege-destroying act.

Even more importantly, hosted models are typically subject to subpoenas—not warrants. This distinction matters:

  • A warrant requires probable cause and judicial oversight.
  • A subpoena just needs a lawyer’s signature and a theory of relevance.

So if you’re using a third-party LLM provider—even one that claims “enterprise-grade security”—you’re often one subpoena away from disclosing sensitive information without your client or patient ever knowing. And the provider may not even be legally obligated to notify you.

This is not paranoia. It’s infrastructure-aware realism.

That’s why I’ve been working to design AI interfaces that don’t just assume good legal hygiene—they actively enforce it. Smart defaults. Guardrails. Warnings that clarify when a tool is protected vs. exposed.

We need AI tools that do the following (a rough sketch follows the list):

  • Detect and flag PHI or confidential content in real time
  • Provide proactive alerts (“This tool may not preserve privilege”)
  • Offer strict, admin-controlled retention and audit settings
  • Default to local-only, no-train, no-transmit modes for sensitive workflows
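As a gesture at the first two bullets, here's a minimal sketch of a pre-flight guardrail. To be clear, this is illustrative only: the regexes and function names are invented for the example, and real PHI detection needs a trained classifier, not pattern matching.

```python
import re

# Toy patterns for a few obvious U.S. identifiers. Regexes are only a first
# line of defense; real PHI detection needs a trained classifier.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the labels of any suspected identifiers found in the text."""
    return [label for label, pattern in PATTERNS.items() if pattern.search(text)]

def guarded_send(text: str, tool_is_local: bool) -> bool:
    """Refuse to send flagged text to anything that isn't local-only."""
    hits = screen_prompt(text)
    if hits and not tool_is_local:
        print(f"BLOCKED: possible PHI ({', '.join(hits)}). "
              "This tool may not preserve privilege; switch to local-only mode.")
        return False
    return True

guarded_send("Patient DOB 4/12/1987, MRN: 00482913", tool_is_local=False)
```

Even a screen this crude flips the default from "send and hope" to "warn first."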

Legal and healthcare use cases shouldn’t be an afterthought. They should be designed for from the start. Not just to avoid lawsuits—but because the trust at stake is deeper than compliance. You only get one shot at privilege. If you lose it, no one can claw it back.

OpenAI’s self-hosted model is a necessary foundation. But we still need purpose-built, context-aware product layers on top of it. The future of privileged AI won’t be one-size-fits-all. It’ll be legal, local, and locked down—by design.

(Written by me, in collaboration with ChatGPT. IANAL. Case law evolves, though at a snail’s pace compared with the hypersonic pace of technology.)

Disclaimer: As with all things AI, the industry moves at a rapid pace. Models evolve, tools update, and behaviors shift—sometimes overnight. By the time an author hits ‘publish,’ the example they’re using may already be obsolete. It’s not that the writer was wrong. It’s that the system changed while their post was still rendering. Disclaimer 2: The previous disclaimer (only) was written by AI. Disclaimer 3: Any future attempts to update Disclaimer 1 may invalidate Disclaimer 2.

Double‑Check Your Bot: Lessons From Yahoo! to Kosovo

A lot of users think AI is infallible. It’s often way off. A recent exchange:

Me:

What percentage of Yahoo!’s acquisitions were failures?

ChatGPT (first pass):

Roughly 50% of the 110 deals Yahoo! executed between 1997 and 2016 failed to meet stated objectives…

Me:

That’s a big understatement. Try again.

ChatGPT (second pass):

A more realistic post‑mortem puts the failure rate closer to 90%, once you count shutdowns (e.g., GeoCities, Broadcast.com) and talent walk‑outs…

Having worked in the space, I can confirm Yahoo hired some of the least‑talented corp‑dev people in tech; AOL scooped up the rest. Hi, AOL Time Warner—oh wait, neither entity exists anymore.

Because AI learned from coders, it expects humans to compile and rerun its answers—so you must iterate.

This is the approach that ordinary users should take: try, try again. Sometimes, you’ll get better answers just by saying “Are you sure?” or “That doesn’t seem right.”

Rule of thumb: ask twice, ask differently.
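If you want to bake that habit into a script, here's a minimal sketch using the OpenAI Python SDK; the model name is a placeholder for whatever you have access to.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_twice(question: str, model: str = "gpt-4o") -> tuple[str, str]:
    """Ask once, push back with 'Are you sure?', and return both answers."""
    history = [{"role": "user", "content": question}]
    first = client.chat.completions.create(model=model, messages=history)
    first_answer = first.choices[0].message.content
    history.append({"role": "assistant", "content": first_answer})
    # The pushback is the whole trick: a materially different second answer
    # is itself a red flag that neither answer should be trusted.
    history.append({"role": "user", "content": "Are you sure? Double-check that."})
    second = client.chat.completions.create(model=model, messages=history)
    return first_answer, second.choices[0].message.content

first, second = ask_twice("What percentage of Yahoo's acquisitions were failures?")
print("First pass:", first)
print("Second pass:", second)
```

If the two passes disagree as sharply as the Yahoo exchange above, treat both with suspicion.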

Here are some areas where I’ve found a lot of success.

Sparring partner

I like having deep, intellectual conversations with smart people. But they can’t be around 24×7. AI can. I can test out ideas. Even though chatbots are wired to be people pleasers, you can get them to poke holes. “What do you think about this idea?”

Travel planner

Want a 5-day trip to Kosovo? ChatGPT will give you ideas. Unlike the generic top 10 lists, it can customize to your preferences. “I like museums.” “I like war memorials.” “I’m vegetarian.” “I like beaches.” Yes, it’ll claim Kosovo has beaches—double‑check the map. (Kosovo is landlocked.)

Part of the reason AI is good at this is that there aren’t definitive answers. No matter what travel resource (or friend) you use for recommendations, they will always miss things.

Where it gets problematic is the actual geography and details. I’ve found it puts places far from where they actually are and gets directions wrong. When I asked about Southwest’s bag fees, it got that wrong, too.

To be fair, a lot of sites get that wrong. Southwest long touted that two bags fly free; that policy changed recently.

Psychologist

This one is going to be controversial, especially among psychologists. In my experience, having seen a lot of therapists over the years, most human ones are terrible.

There are some inherent benefits of using AI for therapy that humans can’t match:

Available 24×7. I had an issue recently where I needed to talk to my therapist. It was on the weekend and he said to call back during the week.

Cost. U.S. therapy isn’t cheap. Even online options like BetterHelp run about $70–$100 per live session once you annualize their weekly plans. Walk into a brick‑and‑mortar office in San Francisco and it is much more expensive. According to Psychology Today, the average is $185 a session, and in private practice it can be $300. Meanwhile, my AI “therapist” costs $20 a month for unlimited chats.

Longer context window. A human therapist sees you maybe one hour a week, more likely one hour a month. You talk about what you can remember since the last visit; things that mattered in the moment may already be forgotten. AI has near-perfect memory.

Less risk of confusion. AI isn’t going to conflate your experience with others it “sees,” like a human therapist might.

The biggest challenge is that (so far) there isn’t AI-client privilege or malpractice insurance. Your data can be subpoenaed. If AI gives you bad advice, it’s not responsible. (Check the Terms of Service.)

AI isn’t a psychiatrist. It can’t prescribe medications. When it does venture into medicine, be very careful. More on that later.

Lawyer

You can have AI write legal-sounding messages. Often, the implication that you are using an attorney can be enough to get intransigent parties to agree, especially on low-stakes cases. Your bank doesn’t want to spend $20,000 in legal fees if they can make you go away for $200.

It’s not a great lawyer. We’ve seen AI make up cases and citations. Attorneys have been sanctioned for using AI to generate briefs. Anthropic (one of the leading AI companies) had a court filing partly written by AI. Parts of it were wrong.

Again, there is no privilege. Your interactions can be subpoenaed, unlike if you pay an attorney. Unlike a real attorney, there is no malpractice insurance. I expect that this will change.

Writer

As a one-time journalist, I hate to say this because it will hurt my friends. Even more so because jobs are in short supply.

Sure, a lot of what is generated by AI is slop. But working collaboratively, you can write crisper prose with stronger arguments. You can use it to find sources. It’s easily the best assigning editor and copy editor I’ve worked with. It also has infinite time—something today’s newsrooms lack.

Unless I explicitly call it out, I don’t use AI to write posts, but I do have it look at my writing. It should be built into the editor tool in every CMS.

Recently, I listened to a podcast where three reporters who cover AI were asked how they use AI. Two said as a thesaurus. You’ve got gigantic supercomputers at your fingertips and you’re going to use them like a $16 paperback? NGMI.

Doctors and finance… more on that later.

TL;DR: As a college teacher told me, “If your mother tells you she loves you, check it out.” AI is great for low-stakes scenarios like bar bets; otherwise check it out.

Ask twice, ask differently.

Disclaimer: As with all things AI, the industry moves at a rapid pace. Models evolve, tools update, and behaviors shift—sometimes overnight. By the time an author hits ‘publish,’ the example they’re using may already be obsolete. It’s not that the writer was wrong. It’s that the system changed while their post was still rendering. Disclaimer 2: The previous disclaimer (only) was written by AI. Disclaimer 3: Any future attempts to update Disclaimer 2 may invalidate Disclaimer 1.

AI is wrong 75.3% of the time.

Ok, maybe not. Probably not.

But I just did something AI largely doesn’t do: I admitted uncertainty. AI acts like a first-year associate at an elite consulting firm, never allowing for the possibility that it is wrong.

“Hallucination” is the sanitized term. In plain English: “I made it up and hoped you wouldn’t notice.”

So why all the excitement?

Three big reasons:
– People are talking their book. Companies have already invested hundreds of billions in AI. They need it to work. (A big part of my portfolio is in AI-related companies, so I really want it to work, too!)

– A lot of the excitement around AI is coding tools. “Look at this app I built with a few sentences.” AI is a technology that is over-fitted for coding. There is so much high-quality training data out there from sources like Stack Overflow, language documentation, open-source code and university lectures, among other things. In product management, this is called “the happy path” – what happens when everything goes right. Clean specs, deterministic outputs, tons of labeled data. Real life isn’t like Stack Overflow.

– It feels like magic. No pop-ups, no autoplay videos, no sites inhaling 6 GB of RAM. Just a clean answer box. Take that, EU cookie banners. But “feels like magic” isn’t the same as being right.

Going outside that domain, things get a lot dicier:
– I did some medical queries and if I’d listened to ChatGPT’s advice, I would be having *literal* hallucinations. It confused a benign drug with a powerful psychotropic.

– Earlier this year, it still thought Biden was president. It didn’t even know that Biden had dropped out and that Kamala Harris had run.

– When I was researching Boeing, ChatGPT was 80 points off: Boeing’s 5‑year total return was roughly +30%; ChatGPT said −50%.

Across thousands of queries where I’m an SME, I saw a lot of cracks. Where I’m not, everything just “sounds right.” That’s dangerous.

There are a lot of technical reasons for this that I’ll get into in another post.

I’m still very excited about where AI will go. With tech, we way overestimate what it will do in the next year, but way underestimate what it will do in ten years. (See: cell phones, iPhones, apps, social, Amazon.)

We will get there. Until then, take the responses with a brick of salt.

(I reviewed this post with ChatGPT. It had some good thoughts, including adding the “feels like magic” bullet. It tightened my wording in a few places.)

Some of my favorite articles

Principles of mobile design

Five mistakes that product managers make

Facebook could make billions in search. Here’s how.

Creating great products isn’t just engineering them

11 questions for marketing and product interviews

A startup’s guide to doing research on the cheap – usability testing

Why Groupon Is Poised For Collapse

Use your client’s product (and your own)

Why I expect Allo to struggle

Google this week launched Allo, the latest in its efforts at social. We’ve seen a long Wave of Google social products that have failed: Buzz, Wave, OpenSocial, Google+ on the pure social side. When you look at the subset of messaging apps, the list includes gTalk, Google Voice and Google Hangouts, among others.

Allo is Google’s latest attempt to compete with Facebook Messenger, iMessage, WhatsApp and Skype.

There is no clear reason to adopt this. Why is a user going to adopt Allo? Is it for:

  • Tons of emojis. (Piece of cake to emulate.)
  • To play command-line games? (Zork 2016. Piece of cake to emulate.)
  • Google Assistant.
  • whisper SHOUT. (Piece of cake to emulate. iOS 10 includes this.)

Better to pick one thing and knock that out of the park. You aren’t going to win FB Messenger users over with emoji. Given Google and Facebook’s relative strengths and weaknesses, I’d bet it all on Google Assistant. Another plus: It adds virality to Google’s other products.

The initial implementation of the assistant is an OK start, but there’s a long, long way to go. Google Assistant is like most bots: it overpromises and underdelivers.


One of the challenges in natural language processing is understanding entities. When I asked a friend “Do you want to meet up at blue line pizza tonight?”, I got a search suggestion for “Pizza places nearby”. It didn’t recognize that “blue line pizza” is an actual place. When I said “How about tacorea?”, it gave me the correct suggestion of “Tacorea restaurant”.

Having worked in local, search and messaging, I know that entity extraction is an incredibly hard technical problem. So I’m going to be more forgiving than most people. A lot of users will just feel that the experience is broken.
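You can watch the problem happen with off-the-shelf tools. Here's a small sketch using spaCy's stock English model (my library choice for illustration, not anything Google uses); exact results vary by model version, which is part of the point.

```python
import spacy

# Setup: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

for text in [
    "Do you want to meet up at blue line pizza tonight?",
    "How about Tacorea?",
]:
    doc = nlp(text)
    # Stock NER models frequently miss lowercase venue names like
    # "blue line pizza"; catching them takes gazetteers of real-world
    # places plus conversational context.
    print(text, "->", [(ent.text, ent.label_) for ent in doc.ents])
```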

Google is also behind in another way: Unlike Facebook and iMessage (and even Google Hangouts), there is no desktop experience. I wanted to send a link to this post to a friend over Allo (after I wrote it on my Mac), but had to send it via Hangouts instead.

The biggest challenge for Allo will be distribution. I already have plenty of ways to message someone: Facebook Messenger, WhatsApp, Skype, SMS, Line, Twitter DM, iMessage.

iMessage succeeded because Apple just took over SMS transport for iPhone-to-iPhone messaging. (Apple was able to do this because it has always been able to dictate the rules to carriers.)

WhatsApp built its base outside the U.S. The primary reason people adopted it initially was to avoid paying the exorbitant cross-border SMS and MMS fees. There was an easy, compelling reason to switch.

Facebook Messenger used its insane time-on-site and hundreds of millions of users to build its user base. They had a massive (and personal) friend graph to work with.

So far, I haven’t seen anything from Google about how it’s going to attract users.

Tesla’s tragic reminder: we don’t have self-driving cars, yet

If you’ve been reading this blog or following my tweets, you know that I’m a huge proponent of self-driving cars. In the long run, they will save lives, reduce environmental costs of transportation and make more efficient use of capital. They will fundamentally change the nature of cities and society.

But we’re not there yet. And we won’t be for many years to come.

A Tesla enthusiast died recently when his Model S drove straight into a truck that was making a turn. The car’s “Autopilot” mode didn’t recognize the white truck against the brightly lit sky. Neither did he. (A portable DVD player was found in his car; it isn’t clear if he was watching it. A witness said that it was playing Harry Potter shortly after the accident.)

We’re in the midst of a long transition period in cars and car safety. I’m afraid this won’t be the last such incident.

We have many different kinds of safety and driver-assistance features in cars today. Some assist the driver. Others offer semi-automation. The last category is true autonomous vehicles. (No vehicles of this last type are in commercial production.)

Definitions of what belongs where will vary. But this is how I think about them.

Driver-assistance features

These help the driver with alerts or by managing small parts of the driving experience. They check the work of the driver. They include:

  • Anti-lock brakes. The system pulses the brakes to help prevent skidding. Before anti-lock brakes, drivers had to pump the brakes manually to keep hard braking from locking the wheels. With ABS, the system pulses the brakes much faster than a human can. The braking still has to be initiated by the driver. These are standard on U.S. cars.
  • Backup cameras and backup sensors. When the car is put into reverse, backup sensors beep when they detect an object behind you. The closer you get to the object, the more frequent the beeping. Cameras show you what’s behind the car, including things you wouldn’t see in the rearview mirror. Cameras are now in about 60% of new vehicles in the U.S.; they will be required in new cars by 2018.
  • Lane-departure warning systems. These notify you when you are drifting out of your lane. They use cameras to look for lane markings. The driver still has to do the steering; the system only alerts to mistakes. LDWS are options on mid- to high-end cars.
  • Blindspot detection. When you are changing lanes, blindspot detection systems alert you when there is a vehicle in your blindspot. This could be an audible alert or an indicator in the side-view mirror. BSDs are options on mid- to high-end cars.

Semi-automation systems

These are typically offered on mid- to high-end cars. They actively control the vehicle. They include:

  • Cruise control. Cruise allows a driver to set a steady speed for the vehicle, which it will maintain; the driver can then take their foot off the accelerator. Even in light traffic, this is a pretty useless feature: because other cars change speeds, you have to keep adjusting the cruise setting. This has been a common feature for decades.
  • Adaptive cruise control. Similar to cruise control, but the speed adapts to the car in front of you. If the car slows down, your car will slow down.
  • Lane management systems. They will keep you in your lane by using cameras to detect lane markings. They’re rarer than LDWS, but rely on the same basic technology.
  • Automatic braking. These detect imminent collisions and automatically apply the brakes.
  • Automatic parallel parking. These will park your car for you.

Fully autonomous

These systems use a range of sensors, including cameras, infrared and LIDAR, along with extensive map databases to drive without human intervention. Alphabet, the parent of Google, is furthest along in fully autonomous vehicles.

In Google’s testing, there have been no fatal accidents. The only accident caused by a Google vehicle was a very low-speed collision with no injuries.

A long transition

We are in the midst of a long transition. Unfortunately, accidents will happen because of a combination of human laziness, overselling of the product and confusing interfaces. The current semi-automation systems have a lot of limitations.

I recently rented a Cadillac STS with a lot of these features. As I drove it, I tried using the “lane keep assist” feature. In theory, the system would keep me in my lane. I tried it on curvy Interstate 280 in the Bay Area, in moderate traffic. As far as I can tell, the system didn’t work. When I took my hands off the wheel, the car would drift a foot into the other lane before pulling me back into my lane. Although I’m a big fan of testing products to the limit, I wasn’t about to do that in traffic.

It’s possible that it was user error. Or a confusing interface. Or I was outside the limitations of the system.

According to GM, Lane Keep Assist and Lane Departure Warning systems may not:

  • Provide an alert or enough steering assist to avoid a lane departure or crash
  • Detect lane markings under poor weather or visibility conditions, or if the windshield or headlamps are blocked by dirt, snow or ice, are not in proper condition, or if the sun shines directly into the camera
  • Detect road edges
  • Detect lanes on winding or hilly roads

And if Lane Keep Assist only detects lane markings on one side of the road, it will only assist or provide a Lane Departure Warning alert when approaching the lane on the side where it has detected a lane marking.

Lastly, GM says that using Lane Keep Assist while towing a trailer or on slippery roads could cause loss of control of the vehicle and a crash. Turn the system off.

GM also notes that LKA and LDW system performance may be affected by:

  • A close vehicle ahead
  • Sudden lighting changes, such as when driving through tunnels or direct sunlight on the camera sensor
  • Banked roads
  • Roads with poor lane markings, such as two-lane roads

Read more: http://gmauthority.com/blog/2014/11/gms-lane-departure-warning-and-lane-keep-assist-tech-feature-spotlight/

That is a lot of limitations to be aware of! It’s too easy to learn to rely on semi-autonomous features that might work 95% of the time but have dire consequences in the 5% case.

Marketing doesn’t help either. The benefits are highlighted in glamorous videos; the limitations buried in fine print. Even naming makes a big difference. Calling something “Autopilot” given the state of today’s technology is vastly overstating the case.

What does the A with the arrow that looks like a circle mean? Beats me.

Car companies aren’t the greatest at user-interface design, often using what look like hieroglyphics for controls. In my test of the STS, I thought the car had an automatic braking system based on the icons. I’m glad I didn’t try to test that — because it didn’t have one. Mine was a somewhat unfair test: if I owned the vehicle, I’d probably know what features I had. But anyone borrowing my car would face the same set of challenges.

Driver training on the proper use of new features is key. When I went through driver’s ed, I was taught to pulse the brakes to prevent the wheels from locking up. But with antilock brakes, you are supposed to step hard on the brakes. I was taught to put my hands at 10 and 2 on the steering wheel; with airbags, you want to put them at 5 and 7.

Not only are the controls of new features not intuitive, some companies even fiddle with basic features.

FCA’s redesign of the transmission shifter is mind-bogglingly stupid.

The National Highway Traffic Safety Administration’s investigation into the Monostable gear shifter used by a number of Chrysler, Dodge and Jeep vehicles is turning into a recall; FCA will recall approximately 1.1 million vehicles worldwide to modify the operation of the shifter that has now caused 121 accidents and 41 injuries.

The issue itself is not a fault of engineering but rather design, as the shifter returns to the default center position without giving the driver sufficient feedback as to the selected gear.

As a result, a number of owners have exited their vehicles thinking that they had put the vehicle into Park, while in reality it remained in Drive or Reverse position. The NHTSA has called the operation of the shifter “unintuitive” and had opened an investigation into the issue months ago.

Read more: http://autoweek.com/article/recalls/fca-recall-11-million-vehicles-confusing-shifter

With driver reliance on semi-automation systems, system limitations and confusing user interfaces, we can expect to see more cases like the Tesla accident.

Media frenzy and public irrationality

My big worry is that media hype around the small number of accidents will hurt the development of truly autonomous vehicles that can save a lot of lives.

Even in the current state, semi-automation features like lane management and automatic braking can save lives. IF drivers use them as backups.

But we’ll see endless stories about how dangerous automation is. Anything that is “new” is dangerous. It was worldwide news when a Tesla caught fire. Never mind that gas vehicles catch fire much more frequently.

Imagine if we had 24-7 news networks during the rise of aircraft. In the early years of aviation, lots of accidents happened. Every accident would have been covered nonstop.

With much less media scrutiny than we have today, we were able to improve airliner safety. With every accident, we investigated, learned what went wrong and improved.

The NTSB is great at what it does. Although we primarily hear about it in the context of airline accidents, it’s already looking into the Tesla accident.

They provide reasoned analysis, tradeoffs and recommendations. Unfortunately, government, politicians, media and the public don’t work that way. We will see negative hype around self-driving cars as politicians chase votes and media chase ratings.

When it comes to media, only the misses count. If your technology saves 4,999 people, you don’t get credit for that. But you get dinged for the one it doesn’t save.

Developing our safer future requires some reasonableness on the part of consumers, manufacturers, media, politicians, regulators and attorneys. Is that an unreasonable ask?

What will the car look like 35 years from now?

We know what self-driving cars look like today. We’ve seen two models so far: the Lexus SUV with a bunch of cameras and sensors on it and the Google bubble car.


But what might they look like 35 years from now? Here are some thoughts.

The obvious design changes will be removing the steering wheel, dashboard and gear shift. But those are some of the least interesting.

Cars today are designed for human drivers, safety and convenience. Over time, the first two become unnecessary and the convenience becomes dominant.

Driving

There are a number of features needed only for human driving that can go away:

  • Driver and passenger side mirrors and the rear view mirror. Nothing to see here. And getting rid of the exterior mirrors will make the cars more fuel efficient.
  • Windshield wipers. The car doesn’t need these to see. (But there are other reasons to keep them — see below.)
  • Windows. The car doesn’t need them, but also see below.
  • Turn signals.
  • Dashboard with gauges.

Some things can’t go away, not because the car or passengers need them, but for the safety of others. We don’t really need headlights. But pedestrians do. However, the headlights can remain off most of the time and be turned on when there are people or animals around. This will dramatically reduce light pollution. (The lack of signals has already caused problems for humans; the blind can have a hard time with hybrid and electric vehicles because they are nearly silent.)


Safety

We’ve added a lot of safety features over the years:

  • Airbags
  • Side impact door beams
  • Seatbelts
  • LATCH anchors for car seats
  • Crumple zones
  • Bumpers

These add substantial weight to cars.

With self-driving cars, car accidents will be incredibly rare events. The biggest safety feature will be the lack of human drivers.

Make driving safe enough and you can get rid of that weight, making cars more fuel efficient.

Convenience

Sure, we’ve added radios, CD players, rear-seat DVD players and cupholders. But self-driving cars will provide a whole new dimension of convenience.

Instead of having the typical front-facing seats, we can have different seating arrangements. Maybe a table for playing games, cards or just talking.

Recliner seats or beds for sleeping. Reading lights that dim the rest of the cabin.

Air travel is a model to look at: from the perspective of the passenger, an airplane is a self-driving transportation vehicle. We could have a big screen display for watching movies with thousands of options. Cameras for teleconferencing. Better, more immersive sound systems.

We could have the equivalent of Airshow: maps and stats on the journey.


These features could be segmented as airplanes are. Vehicles that are basic for short trips and luxury vehicles for long hauls. (Of course, you could order these on demand for a particular trip.)

There are some things that the car doesn’t need, but we might want to keep for humans. Windows and windshield wipers are two of them.

We still have windows on planes because people want to be able to look out. (Cargo planes don’t have windows because it is more fuel efficient.) Likewise, we need to provide visibility, especially on scenic roads. But we can improve these, too: windows and the windshield will have the ability to become opaque. This is better for having sex and watching movies. Or driving on an urban blight road full of billboards.

Now all of this will take a long time. The average vehicle on the road today in the United States is more than 11 years old. If we’re looking at individually owned vehicles, it would take 20 years or more to turn over the fleet. But this should be accelerated by purchases of self-driving cars by companies like Uber and Lyft for on-demand service.

The driving and convenience features are easier to change than the safety features.

Having a self-driving car on the road without a steering wheel is fine, because other vehicles don’t rely on it. We can’t get rid of turn signals because other cars need to know which way the self-driving car intends to go.

The safety features will take the longest to get rid of. If humans can cause accidents, we still need to protect the occupants of the self-driving car. This also affects the convenience features; we can’t have people standing up unbuckled if there’s a chance that the car will get hit.

Regulators and public fear will also play a role in further delaying the removal of outdated safety features.

All of these changes can’t happen fast enough for me.

Read my post on how cars have changed over the past 35 years.

Creating great products isn’t just engineering them


Last night’s episode of Silicon Valley is one of the must-watch episodes of the series.

Our friends at Pied Piper have accomplished an amazing technological feat. Their friends and family (except for Monica) love the product. Downloads are going crazy. We watch along as the counter reaches 500,000 downloads. But look under the hood and there are fewer than 20,000 daily users.

Pied Piper commissions a focus group where it becomes clear that consumers don’t understand the product. Richard spends hours trying to explain it to them.

What went wrong?

If you’re designing a product for the masses, it needs to sell itself. The benefits (not the features) have to be obvious quickly. Marketing like this isn’t the answer:

(If you hire an agency — which you shouldn’t do initially — and they create something like this, fire them.)

Pied Piper’s platform was a product by engineers for engineers. It was tested by an initial beta group that was mostly engineers.

Here’s the Pied Piper user interface:


This reminded me a lot of a product I worked on. Our CTO constantly wanted to add new features to the platform. As a result, the site looked a lot like this.

Our platform had more (and better and cheaper) features than the competitors. But we were putting all of them in front of the consumer at once. People couldn’t understand what they could do with it.

We broke the product up into three products that solved different consumer needs. (Benefits instead of features.) This was mostly a design and marketing effort, but obviously the engineers had to build it. All of the products ran on the platform we’d already built. (It was just different skins that adapted the platform to specific use cases.) We also raised prices closer to our competitors’. Despite the price increase, demand went up. Once people understood what we were selling, they were willing to buy.

Real-life examples abound. Wave and Google+ are two of the highest profile examples. Googlers tell me that internal testing was off the charts. (A more detailed post on why Google+ failed coming soon.)

If you want to design great consumer products, you need to have an interdisciplinary approach. You want a designer who has worked on complex consumer-facing products. You need a marketing person who has knowledge of consumer behavior. They all need to be working together on the product. These aren’t necessarily separate people; often, one person can wear multiple hats.

One of my recommendations: send your product to someone you know is not technical, maybe a parent or sibling. Give them a task list. See how they do. (Ideally, with screen sharing.) It’s important to do it without guidance because no one will be guiding them in real life.

When I tweeted about this, I got this response:

This sounds great — solve a problem that people have. But the greatest value comes from solving problems people didn’t know they had. The iPhone and Facebook are great examples.

What existing problem did iPhone solve? It created a completely new category that people fell in love with.

Facebook is similar.

It’s easy, in retrospect, to define problems that were solved. In Facebook’s case, you can easily keep in touch with tangential contacts, such as high school and college classmates, business acquaintances, etc. Sure, there is clear demand for this now. But I don’t know that there was an unmet need to share pictures of your lunch with your friend from 3rd grade.

Some of the key reasons for Facebook’s success:

  • Simplicity. The initial feature set was very limited and easy to understand. Even if you have engineered 4,000 features behind the scenes, the initial experience should be easy to get. You can expose some of the other features later.
  • Iteration. Facebook rolled out market by market (initially Harvard, then other elite schools). Only later did it expand to the masses. By the time it was rolled out, behaviors had been established. (Poking, status updates.) It’s easier for people to mimic the behavior of others than to create their own behaviors.
  • Growth and marketing built-in. Facebook is great at this; Google sucks at this. The way products succeed today is by having growth mechanisms built into the product. See my post on how people tagging was key to Facebook’s success.

When you’re designing products for consumers, there is no such thing as too simple.

 

Google has failed at social; Facebook has failed at search. Here’s why.

Today’s the 5-year anniversary of the launch of Google+. It was an unmitigated disaster for Google. Despite many man-years of development, endless hype in the media and Google’s attempt to cook the books on usage stats, the network is essentially dead.

Google+ failed for a simple reason: It blatantly tried to copy Facebook instead of playing to Google’s strengths.

We’ve seen a lot of attempts to copy successful products of others. Facebook tried to compete in search. Facebook tried to copy Flipboard (Paper), Instagram (Camera) and Snapchat (Poke). All of these attempts failed.

The only product in recent memory where the copy was more successful is Facebook Live, which is essentially Meerkat. I’d argue this was because Meerkat didn’t really solve a compelling user problem. Most people don’t need to broadcast 1-way video. Those that do need broad distribution, which Meerkat lost as soon as it was cut off from Twitter. (To the extent people want video, it’s 2-way, such as FaceTime, Skype or Hangouts.)

The reason these copies didn’t succeed? They didn’t incorporate what was unique about the copying platform: what made it successful in the first place. In Google’s case, that’s search. In Facebook’s case, that’s social.

Google+ required you to replicate what you’d already done on Facebook: create a profile, friend people and post. The unique and much better features of Google+ — Hangouts and Photos — were buried next to the Facebook-clone parts of the product. Why would anyone repeat all the work they were doing on Facebook on Google+? Or switch to a platform where none of their friends are for no real benefit?

Google embedded Google+ everywhere it possibly could (YouTube comments, giant alerts, etc.). But it didn’t effectively do it where it mattered: in search. Hundreds of my friends use Google every day. The results that they click on are more likely to matter to me than results that the general population clicks on. Despite the fact that I have a network of hundreds of people, I’m still searching in isolation.

If my buddy Bob spent 2 hours researching a trip to Senegal, shouldn’t I be able to learn from his efforts? Shouldn’t I be flagged that Bob did this work, maybe went to Senegal and had knowledge on the topic? Maybe I should reach out to him and learn about it? (Of course, this always needs appropriate privacy permissions. I shouldn’t be able to see Bob’s searches unless he makes them available to me.)

(Screenshot: a Google search results page for “rosewood sand hill”.)

A friend reviewed Rosewood Sand Hill in Google’s local product. That review should be front and center on this screen; it’s what I would consider by far the most relevant result. But it’s nowhere to be found.

The right way for Google to play in social is to add a social layer to Google. If the value proposition to the consumer was “have your friends help you search,” instead of “use a version of Facebook without your friends,” I imagine Google+ would have been much more successful.

People search on Facebook. All the time.

Conversely, most of Facebook’s efforts on search have focused on the search box. People search on Facebook all the time. But they don’t search in the search box; they search in the status field.


If Facebook copies Google’s definition of search, they will (and have) failed.

What do I mean by people search on Facebook? Consider this example:

(Screenshot: my status update asking friends about Senegal.)

This is no different than a Google search for “Senegal”. Except I am asking my friends, in a highly inefficient manner. There’s a high likelihood that someone in my friend network (of 600+ people) has been to Senegal or knows something about Senegal. But my post doesn’t efficiently reach those people. FB, through NLP, should identify this as a query for “Senegal” and present this post to my friends who have been to Senegal.

That creates a better search experience because I get expertise from people I actually trust.

If you expand distribution to friends of friends, you are almost guaranteed to find someone who has an answer. In this case, my friend Mandy manually expanded the search to her friend Chris in the last comment.
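Here's a toy sketch of that routing logic, assuming the entity ("Senegal") has already been extracted and that FB can tell, from photos, check-ins or logins, which friends have signals from a place. Every name and data structure is invented for illustration.

```python
# Toy data: person -> places with signals (photos, check-ins, logins).
FRIEND_SIGNALS = {
    "Bill":  {"Senegal", "France"},
    "Mandy": {"Japan"},
    "Chris": {"Senegal"},  # Mandy's friend, i.e., second degree from me
}
FRIENDS_OF = {"Mandy": {"Chris"}}  # second-degree edges, heavily simplified

def route_query(place: str, my_friends: set[str]) -> list[str]:
    """Return people to surface the query to, nearest connections first."""
    first_degree = [f for f in my_friends if place in FRIEND_SIGNALS.get(f, set())]
    second_degree = [
        fof
        for f in my_friends
        for fof in FRIENDS_OF.get(f, set())
        if place in FRIEND_SIGNALS.get(fof, set()) and fof not in first_degree
    ]
    return first_degree + second_degree

# A "Senegal?" status -> notify Bill directly and reach Chris through Mandy.
print(route_query("Senegal", my_friends={"Bill", "Mandy"}))
```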

It could either be highly prioritized in the news feed for them, or they could get a notification that says, “Your friend Rakesh is looking for information about Senegal. Want to help him out?”

Modifying Facebook in this way also helps improve the social experience and increases the liquidity in the market. By expanding the distribution to my friends most likely to know the answer, I get an answer faster. This also opens up the possibility of creating new relationships or renewing old ones.

Scenario:

  • I haven’t talked to friend Bill in a while.
  • I post a “query” for Senegal.
  • FB knows that Bill has been to Senegal. (Pictures posted from there, status updates from there, logins from there, etc.)
  • FB surfaces the “query” to Bill.
  • Bill sees it and responds.
  • Bill and I reconnect.

Fact-based queries vs. taste-based queries

This all works better for matters of taste vs. fact. Google is going to give you a much better, quicker answer for queries like the “value of pi” or “5+2” or “weather in Miami”.

Yes, I could ask this in Facebook — and I did:

(Screenshot: my status post asking for the value of pi.)

More than an hour later, I still had no answer. (And my non-technical friends, who didn’t know what I was doing, would think I’m an idiot.) Mihir asked about chatbots — I’ll get to this in a minute.

But those are matters of fact — and, btw, they have zero advertising against them.

Think about queries like “plumber,” “dentist,” “lawyer,” “auto insurance”. Those are queries of taste. And, it may shock people, but that’s where you make your money in search! Travel, law, professional services and insurance are among Google’s top money makers.

While many people, including Wall Street analysts, treat search as a monolith, search is actually a collection of verticals. Each has different levels of monetization. Many fact-based queries have no advertising against them.

Facebook doesn’t have to solve the queries of fact. Leave those to Google. (It could, but people aren’t searching FB for those.)

Facebook can pick off the higher-value queries and the ones that are most likely to add to the FB experience and value proposition: a place where you come to interact with your friends.

FB can also use these “queries” as a way to turn its ads into higher-revenue, intent-based ads. In addition to your friends’ comments, you’d see — clearly identified — responses from advertisers to your query.

Someone who posts a query “anyone know of a good hotel in London?” could be presented with an advertiser comment for “hotels in London.” This presents a highly relevant ad that someone could turn to immediately. (It could also be time delayed — if I don’t get a response from a friend, the advertiser comment shows up.)

Bots

Facebook is trying to do this in a ham-fisted — and annoying and needlessly interruptive — way.

I was recently hit by an Uber while walking across the street. My cousin asked me about it on Messenger. Here’s what happened:

(Screenshot: the Messenger thread, with an Uber ad inserted.)

My cousin is asking how I’m doing after I was hit by an Uber. Messenger is throwing an ad for Uber in both of our faces. (Not only once, but three times. See my post on bots.) There are some great uses for bots. Sticking irrelevant ads in front of people isn’t one. (I’ll talk about good use cases in a future post.)

Often, you’re forced into a space by business needs or the stock market demanding that you have a “search” or “social” strategy. Or there’s a hole in your business model. See also: wireless carriers in payments, video, content, pictures.

The easiest thing to do is to try to copy someone else who has been successful. But if they’re already dominant, how are you going to win? You can’t just create something to plug a hole in your business strategy; you need to plug a hole in the customer’s needs.

These are just two big examples of how you could win by playing to your own strengths — and your users’ frame of reference about your product.

When designing new products, you should figure out what makes you different and better. Then build off that.

11 questions for marketing and product interviews

I’ve always hated job interviews. On both sides. Not only are they poor indicators of eventual success, they also create a dynamic that isn’t good.

Some of the things I hate:

  • They don’t allow for the possibility that the interviewee is smarter or has a level of experience that the interviewer doesn’t have. This is especially true when you have a marketing person interviewing an engineer. Often, the only assessment that can be made is cultural fit.
  • There’s generally no way to assess the interviewer. There have been several cases in my professional life when I knew the interviewer was terrible and wasn’t getting any insight. In some cases, an asshole on the interview loop may annoy prospective candidates to the point that they don’t want to join the company. There should be a mechanism for an interviewee to rate an interviewer. (The incentives are complicated here, but I can think of some ways.)
  • The process rewards people who know the tricks of interviewing. Because it’s only 20-25 minutes per interviewer, it is often easy to blow through the process with prepackaged talking points.

The biggest issue is that they create a confrontational dynamic, instead of a conversational dynamic.

Here are some of the questions I use when interviewing marketing and product people. In most cases, there is no “right” answer. I’ve often learned something from interviewees’ answers. But for some of them, there is a definite wrong answer.

  1. Late-night talk show host Jimmy Kimmel comes up to you with a camera crew and asks you, “Who is the president of the United States?” What should you say? Why?
  2. A pizzeria charges $12 for a 9″ pizza. How much should it charge for an 18″ pizza?
  3. Financial analysts frequently beat up on Google because its CPC (the revenue generated per click) is declining. Is a declining CPC really bad for business? Why or why not?
  4. There are extremely rare circumstances where a self-driving car will have to choose which of two people to hit. How do you decide which person to hit?
  5. In question 4, what if the pedestrians could be identified as Stephen Hawking and a Wal-Mart clerk?
  6. In the self-driving car scenario, assume the car’s options are: hit a deer head-on or swerve into oncoming traffic. If you hit the deer, there is a high probability that the driver will die. If you swerve into oncoming traffic, there is a lesser probability that you will die, but you create a risk that the other driver could be injured or killed. What should the car do?
  7. You accidentally get a Vanguard statement that was supposed to go to a well-known psychic. Does this make you believe in her skills more or less? (For the purpose of this question, assume you have some level of belief. You can’t opt out of the question by saying psychics separate the gullible from their money.)
  8. In the heart of Times Square, there has been a tkts booth since 1973. The booth offers half-price tickets to many Broadway shows. People line up and wait for an hour or more to get these cheaper tickets. Obviously, technology has changed a lot since 1973. tkts has a great app whose functionality could be enhanced for online ticketing; the hour-long wait would go away, offering theatergoers instant access. Should they add app-based ticketing? Why or why not?
  9. You are the product manager for an OnStar-like service. The capabilities include remote door unlock, vehicle status reports, turn-by-turn navigation (talking to an agent to enter the destination) and warnings when you need to go to the dealer for service. The technical capability is in every car. The package costs $200 a year for unlimited use of all services. It’s possible to offer a one-time remote unlock service; it would cost you $1 to perform, and it happens instantly. The consumer’s other alternative is to call a locksmith. The locksmith has to pay a technician $40; the retail price is $80. Again, assuming it only costs you $1 to provide the service, should you offer the à la carte product? If so, how much should it cost?
  10. Back to Question 1. Does your answer change if you’re a 24-year-old aspiring actress? If so, why?
  11. How would you market Twitter?