LinkedIn’s AI could have you freezing your butt off in Alaska

I love rampies. Not only could I not get off the ground without them, I couldn’t leave the gate. They load the luggage, drive the tugs that push back the plane and use the orange wands to marshal airplanes. They are out on the ramp whether it’s 100 degrees in Phoenix or -40 degrees in Nome.

But does that mean I could be one? Nope.

Though I do know not to load bags into an engine. That won’t end well. Note to Alaska: don’t hire Gemini to load bags.

On to LinkedIn…

Imagine my surprise when LinkedIn’s AI told me I was a great fit… for a ramp service agent role. LinkedIn has a feature for premium members that will tell you how good a match you are for a role.

Here’s my rating for a Ramp Service Agent role.

That is not a job I remotely qualify for. Location doesn’t match. Comp doesn’t match. Skills don’t match. But that rating is one of the benefits I get for $40 a month.

This role at Alaska Airlines is a great fit. I could fly this at 40,000 feet. Fortunately, I’m also a “high” fit for this.

The only way these answers help is if candidates are doing spray-and-pray applications. That’s a waste of time for applicants.

Now imagine a recruiter looking at this same view. (I don’t have access to LinkedIn Recruiter, so I don’t know its sort order.) It wouldn’t help them either. If anything, it would destroy credibility for LinkedIn as a recruiting tool. And when a company owned by Microsoft (one of the biggest backers of AI) ships something this sloppy, it casts a shadow over all their other products.

But what about that “BETA” label, Rakesh? For those not familiar with tech talk, it means they’re testing the product for release.

I’ve designed and launched search products for much of my career. I’d never put this out beyond a closed internal beta, much less as a premium feature. As it stands, this isn’t a product — it’s unpaid labor for LinkedIn’s AI. The thumbs up/thumbs down will train their model. Even better: just hire a large RLHF team in India.

Amy Miller, a recruiter at Amazon, hates AI for “scoring.” This is a good reason why.

Rampies don’t need AI scores — they get planes moving. LinkedIn’s “match” feature should aspire to that kind of utility: useful, reliable, and grounded in reality. Until then, it feels more like unpaid labor for their AI than a benefit for members.

And if you’re looking for a product executive who knows AI — and knows when not to trust it — send me a message.

Parting shot: Here’s what WordPress generated. The NTSB will want a conversation.

Written by me, lightly edited by ChatGPT, illustrated by ChatGPT & Gemini. Unlike LinkedIn’s AI, none of them tried to send me to Nome at 40 below.

OpenAI’s Self-Hosted Option Changes the Game for Privilege-Sensitive Professions

With OpenAI’s release of a fully self-hosted model, the conversation around legal and medical AI use just shifted—subtly but significantly.

For years, the promise of generative AI has clashed with the hard boundaries of privilege and compliance. Lawyers and clinicians want to use LLMs for research, drafting, or triage—but uploading sensitive information to third-party tools, even “secure” ones, risks breaching attorney-client or doctor-patient privilege. Worse, under HIPAA, uploading protected health information (PHI) to a system without a signed Business Associate Agreement (BAA) is a clear violation.

OpenAI’s hosted offerings (like ChatGPT Enterprise) tried to split the difference—disabling training on user inputs, offering SOC 2 compliance, and claiming no retention of prompts. But they didn’t solve the core issue: from a legal standpoint, hosted AI tools are still third parties. And privilege waived, even unintentionally, is privilege lost.

Self-hosting changes that. By running the model entirely inside your infrastructure—air-gapped, audited, and access-controlled—you eliminate the ambiguity. There’s no third-party disclosure, no downstream training risk, no hand-waving about deletion. For legal and medical contexts, this architecture is a critical step toward preserving privilege by design, not just by policy.

But architecture is only part of the story. Most people—including many legal assistants and clinical support staff—don’t know that sending a document to a hosted chatbot could constitute a privilege-destroying act.

Even more importantly, hosted models are typically subject to subpoenas—not warrants. This distinction matters:

  • A warrant requires probable cause and judicial oversight.
  • A subpoena just needs a lawyer’s signature and a theory of relevance.

So if you’re using a third-party LLM provider—even one that claims “enterprise-grade security”—you’re often one subpoena away from disclosing sensitive information without your client or patient ever knowing. And the provider may not even be legally obligated to notify you.

This is not paranoia. It’s infrastructure-aware realism.

That’s why I’ve been working to design AI interfaces that don’t just assume good legal hygiene—they actively enforce it. Smart defaults. Guardrails. Warnings that clarify when a tool is protected vs. exposed.

We need AI tools that:

  • Detect and flag PHI or confidential content in real time
  • Provide proactive alerts (“This tool may not preserve privilege”)
  • Offer strict, admin-controlled retention and audit settings
  • Default to local-only, no-train, no-transmit modes for sensitive workflows
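A minimal sketch of what one of those guardrails could look like, written in Python. Everything here is hypothetical and illustrative: the regex patterns, the route_prompt helper, and the “local” vs. “hosted” endpoint names are stand-ins rather than any vendor’s real API, and a production PHI detector would use a vetted classifier, not three regexes.

```python
import re

# Hypothetical, illustrative patterns. A real PHI/PII detector would rely on a
# vetted library or classifier, not a handful of regexes.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the categories of possible PHI/confidential content found."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]

def route_prompt(text: str, endpoint: str) -> str:
    """Block flagged content from hosted endpoints; allow the local, self-hosted one."""
    hits = flag_sensitive(text)
    if hits and endpoint != "local":
        raise PermissionError(
            f"Possible PHI detected ({', '.join(hits)}). "
            "This tool may not preserve privilege; use the self-hosted endpoint."
        )
    return endpoint  # safe to proceed

# Example: a note containing an SSN is refused by the hosted endpoint.
# route_prompt("Patient DOB 04/12/1961, SSN 123-45-6789", endpoint="hosted")
```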

Legal and healthcare use cases shouldn’t be an afterthought. They should be designed for from the start. Not just to avoid lawsuits—but because the trust at stake is deeper than compliance. You only get one shot at privilege. If you lose it, no one can claw it back.

OpenAI’s self-hosted model is a necessary foundation. But we still need purpose-built, context-aware product layers on top of it. The future of privileged AI won’t be one-size-fits-all. It’ll be legal, local, and locked down—by design.

(Written by me, in collaboration with ChatGPT. IANAL. Case law evolves, though at a snail’s pace compared with the hypersonic pace of technology.)

Disclaimer: As with all things AI, the industry moves at a rapid pace. Models evolve, tools update, and behaviors shift—sometimes overnight. By the time an author hits ‘publish,’ the example they’re using may already be obsolete. It’s not that the writer was wrong. It’s that the system changed while their post was still rendering. Disclaimer 2: The previous disclaimer (only) was written by AI. Disclaimer 3: Any future attempts to update Disclaimer 1 may invalidate Disclaimer 2.

Agentic Commerce or Agentic Con Job? Whose Side Will Your AI Really Be On?

[Image: someone searching for flights]

There’s a lot of talk in the AI world about “agentic commerce.” My big question: whose agent?

We have “agents” today, but most of them aren’t working for you. They aren’t fiduciaries.

Travel agents? They steer you to whoever gives them the largest commission. Online travel sites are no different: despite all their targeting data, the default sort order is Recommended. I usually want Price or Rating, but Expedia wants to steer my booking—so it shows Recommended. (Expedia takes a 15–30 percent commission on hotels.)

Real-estate buyer’s agents? Same story. When I bought my place, my agent urged me to pay full asking price. I bid lower. He wanted the higher price because his commission would be higher.

Almost every service you buy, no matter how it’s marketed, isn’t designed in your best interest. “Free” stock trades? Robinhood makes its money on the back end through payment for order flow—$677 million of it in 2023. The finance industry even lobbied successfully against a rule that would have required brokers to act as fiduciaries.

Google is the ultimate example, even though Google’s founders were skeptical of the advertising model.

“We expect that advertising-funded search engines will be inherently biased toward the advertisers and away from the needs of the consumers.” (Sergey Brin and Larry Page, 1998, The Anatomy of a Large-Scale Hypertextual Web Search Engine)

In the U.S., the only mainstream professions actually required to put your interests first are lawyers and fee-only financial advisers. (Yes, lawyers still have a conflict: the more hours they bill, the more you pay.)

What AI could change

With a fundamental shift like AI, there’s a possibility of turning that model on its head.

Take Amazon Marketplace: merchants compete for visibility, and Amazon takes 30–50 percent for the privilege. If my agent could talk directly to their agents, we could split that spread—I’d pay less, the merchant would earn more.

My agent could negotiate on my behalf:

“Hey airlines, I need to fly SFO → JFK next Tuesday. Give me your best bid.”

Instead of spending 20 minutes sifting through Expedia, my agent could strike a deal in milliseconds. I could even place limit orders: “When someone offers this trip for $300, buy it.”
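That last idea, a standing limit order for travel, is simple enough to sketch. Here’s a toy version in Python; the FlightOffer fields, the date, and the book callback are hypothetical placeholders for whatever agent-to-agent protocol eventually emerges.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class FlightOffer:
    carrier: str
    origin: str
    destination: str
    date: str
    price: float  # USD

@dataclass
class LimitOrder:
    origin: str
    destination: str
    date: str
    max_price: float

    def matches(self, offer: FlightOffer) -> bool:
        return (offer.origin == self.origin
                and offer.destination == self.destination
                and offer.date == self.date
                and offer.price <= self.max_price)

def watch(order: LimitOrder, offers: list[FlightOffer],
          book: Callable[[FlightOffer], None]) -> None:
    """Buy the first offer that satisfies the standing order."""
    for offer in offers:
        if order.matches(offer):
            book(offer)  # e.g., call the airline agent's booking endpoint
            return

# "When someone offers SFO -> JFK next Tuesday for $300, buy it."
order = LimitOrder(origin="SFO", destination="JFK", date="2025-06-10", max_price=300.0)
```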

Airlines and intermediaries would hate that, but it’s closer to an efficient market. Intermediaries wouldn’t skim 30 percent just for hosting a platform. (Yes, platforms claim they provide protection and customer service, but in reality most offer little of either.)

Today’s model is expensive because there are many mouths to feed along the way.

Will it happen?

I’d happily pay my agent a subscription fee if it truly worked for me. Will it happen? I’d love it—but history says no.

We’re addicted to “free,” even though “free” often costs us more.

(Written by me, lightly edited by ChatGPT.)

Disclaimer: As with all things AI, the industry moves at a rapid pace. Models evolve, tools update, and behaviors shift—sometimes overnight. By the time an author hits ‘publish,’ the example they’re using may already be obsolete. It’s not that the writer was wrong. It’s that the system changed while their post was still rendering. Disclaimer 2: The previous disclaimer (only) was written by AI. Disclaimer 3: Any future attempts to update Disclaimer 1 may invalidate Disclaimer 2.

2 + 2 = 5: Why AI can’t do math

Since the dawn of calculators, we’ve trusted computers to do math. Unless there’s a bug in the code, computers have been great at math. Better and faster than the fastest humans. We use them in everything from stock trading to calculating change at the register. Some grocery stores even have automated coin dispensers; the register calculates the coins due and tells another machine to spit out the right coins.

What are marketed as the most powerful supercomputers ever… can’t do math. They can write sonnets, pass the bar exam, summarize Tolstoy… and botch 17 + 5. When I gave AIs a spreadsheet of places I’ve traveled and asked for a simple count, they kept coming back with the wrong numbers. (The spreadsheet itself made the count easy and accurate: the number of rows matched the number of places I’ve visited.) ChatGPT can get something as basic as less-than versus greater-than wrong. When it comes to math, AI might not be smarter than a 5th grader.

It has to do with the way AI works. Unlike the spreadsheets and other tools we’re used to, an LLM isn’t actually doing math; it’s predicting what comes next based on patterns inferred from its training data. The name is a tell for people in the industry: LLM stands for Large Language Model, not Large Math Machine.

But people who’ve been told that AI is a magic machine don’t know that. As with everything else, the AI confidently spits out an answer. The better implementations run the inference, recognize that the result might be wrong (“hey, I just did math!”), go back and write Python code, execute it, and return an accurate, calculated answer.
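That “notice it’s math, hand it to real code” pattern is easy to illustrate. Below is a toy sketch in Python; the regex heuristic and the ask_llm callable are hypothetical, and real systems route far more carefully, but the principle is the same: compute the arithmetic deterministically instead of predicting it.

```python
import ast
import operator
import re

# Safe evaluator for plain arithmetic like "17 + 5".
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr: str) -> float:
    """Evaluate arithmetic deterministically instead of predicting the answer."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval").body)

def answer(question: str, ask_llm) -> str:
    """Route obvious arithmetic to the calculator; everything else to the model."""
    if re.fullmatch(r"[\d\s.+\-*/()]+", question.strip()):
        return str(calc(question))
    return ask_llm(question)  # hypothetical call into the language model

# answer("17 + 5", ask_llm=print)  -> "22"
```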

Yes, there is a small print disclosure that says “AI can make mistakes.” It should add “My math might not be great.”

If you do use AI to do math, ask it to double check the answer as a math problem. Or, you can just use a spreadsheet until the technology gets better.

Disclaimer: As with all things AI, the industry moves at a rapid pace. Models evolve, tools update, and behaviors shift—sometimes overnight. By the time an author hits ‘publish,’ the example they’re using may already be obsolete. It’s not that the writer was wrong. It’s that the system changed while their post was still rendering. Disclaimer 2: The previous disclaimer (only) was written by AI. Disclaimer 3: Any future attempts to update Disclaimer 2 may invalidate Disclaimer 1.

Double-Check Your Bot: Lessons From Yahoo! to Kosovo

A lot of users think AI is infallible. It’s often way off. A recent exchange:

You:

What percentage of Yahoo!’s acquisitions were failures?

Me (first pass):

Roughly 50% of the 110 deals Yahoo! executed between 1997 and 2016 failed to meet stated objectives…

You:

That’s a big understatement. Try again.

Me (second pass):

A more realistic post-mortem puts the failure rate closer to 90%, once you count shutdowns (e.g., GeoCities, Broadcast.com) and talent walk-outs…

Having worked in the space, I can confirm Yahoo hired some of the least-talented corp-dev people in tech; AOL scooped up the rest. Hi, AOL Time Warner—oh wait, neither entity exists anymore.

Because AI learned from coders, it expects humans to compile and rerun its answers—so you must iterate.

This is the approach that ordinary users should take: try, try again. Sometimes, you’ll get better answers just by saying “Are you sure?” or “That doesn’t seem right.”

Rule of thumb: ask twice, ask differently.

Here are some areas where I’ve found a lot of success.

Sparring partner

I like having deep, intellectual conversations with smart people. But they can’t be around 24×7. AI can. I can test out ideas. Even though they’re wired to be people pleasers, you can get them to poke holes. “What do you think about this idea?”

Travel planner

Want a 5-day trip to Kosovo? ChatGPT will give you ideas. Unlike the generic top 10 lists, it can customize to your preferences. “I like museums.” “I like war memorials.” “I’m vegetarian.” “I like beaches.” Yes, it’ll claim Kosovo has beaches—double-check the map.

Part of the reason AI is good at this is that there aren’t definitive answers. No matter what travel resource (or friend) you use for recommendations, they will always miss things.

Where it gets problematic is the actual geography and details. I’ve seen it put places far from where they actually are and get directions wrong. When I asked about Southwest’s bag fees, it got that wrong, too.

To be fair, a lot of sites get that wrong. Southwest long touted that two bags fly free; that policy changed recently.

Psychologist

This one is going to be controversial, especially among psychologists. Based on my experience with a lot of therapists in the past, most human ones are terrible.

There are some inherent benefits of using AI for therapy that humans can’t match:

Available 24×7. I had an issue recently where I needed to talk to my therapist. It was on the weekend and he said to call back during the week.

Cost. U.S. therapy isn’t cheap. Even online options like BetterHelp run about $70–$100 per live session once you annualize their weekly plans. Walk into a brick-and-mortar office in San Francisco and it is much more expensive: according to Psychology Today, the average is $185, and in private practice it can be $300. Meanwhile, my AI “therapist” costs $20 a month for unlimited chats.

Longer context window. A human therapist sees you maybe an hour a week, more likely an hour a month. You talk about what you can remember since the last visit, but things that were relevant in the moment may already be forgotten. AI has near-perfect memory.

Less risk of confusion. AI isn’t going to conflate your experience with others it “sees,” like a human therapist might.

The biggest challenge is that (so far) there isn’t AI-client privilege or malpractice insurance. Your data can be subpoenaed. If AI gives you bad advice, it’s not responsible. (Check the Terms of Service.)

AI isn’t a psychiatrist. It can’t prescribe medications. When it does venture into medicine, be very careful. More on that later.

Lawyer

You can have AI write legal-sounding messages. Often, the implication that you are using an attorney can be enough to get intransigent parties to agree, especially on low-stakes cases. Your bank doesn’t want to spend $20,000 in legal fees if they can make you go away for $200.

It’s not a great lawyer. We’ve seen AI make up cases and citations. Attorneys have been sanctioned for using AI to generate briefs. Anthropic (one of the leading AI companies) had a court filing partly written by AI. Parts of it were wrong.

Again, there is no privilege. Your interactions can be subpoenaed, unlike if you pay an attorney. Unlike a real attorney, there is no malpractice insurance. I expect that this will change.

Writer

As a one-time journalist, I hate to say this because it will hurt my friends. Even more so because jobs are in short supply.

Sure, a lot of what is generated by AI is slop. But working collaboratively, you can write better: crisper prose, stronger arguments. You can use it to find sources. It’s easily the best assigning editor and copy editor I’ve worked with. It also has infinite time—something today’s newsrooms lack.

Unless I explicitly call it out, I don’t use AI to write posts, but I do have it look at my writing. It should be built into the editor tool in every CMS.

Recently, I listened to a podcast where three reporters who cover AI were asked how they use AI. Two said as a thesaurus. You’ve got gigantic supercomputers at your fingertips and you’re going to use them like a $16 paperback? NGMI.

Doctors and finance… more on that later.

TL;DR: As a college teacher told me, “If your mother tells you she loves you, check it out.” AI is great for low-stakes scenarios like bar bets; otherwise check it out.

Ask twice, ask differently.

Disclaimer: As with all things AI, the industry moves at a rapid pace. Models evolve, tools update, and behaviors shift—sometimes overnight. By the time an author hits ‘publish,’ the example they’re using may already be obsolete. It’s not that the writer was wrong. It’s that the system changed while their post was still rendering. Disclaimer 2: The previous disclaimer (only) was written by AI. Disclaimer 3: Any future attempts to update Disclaimer 2 may invalidate Disclaimer 1.

AI is wrong 75.3% of the time.

Ok, maybe not. Probably not.

But I just did something AI largely doesn’t do. I admitted uncertainty: AI acts like a first-year associate at an elite consulting firm, not allowing for the possibility that it is wrong.

“Hallucination” is the sanitized term. In plain English: “I made it up and hoped you wouldn’t notice.”

So why all the excitement?

Three big reasons:
– People are talking their book. Companies have already invested hundreds of billions in AI. They need it to work. (A big part of my portfolio is in AI-related companies, so I really want it to work, too!)

– A lot of the excitement around AI is about coding tools. “Look at this app I built with a few sentences.” AI is a technology that is over-fitted for coding. There is so much high-quality training data out there: Stack Overflow, language documentation, open-source repositories, and university lectures, among other things. In product management, this is called “the happy path” – what happens when everything goes right. Clean specs, deterministic outputs, tons of labeled data. Real life isn’t like Stack Overflow.

– It feels like magic. No pop-ups, no autoplay videos, no sites inhaling 6 GB of RAM. Just a clean answer box. Take that, EU cookie banners. But “feels like magic” isn’t the same as being right.

Going outside that domain, things get a lot dicier:
– I did some medical queries and if I’d listened to ChatGPT’s advice, I would be having *literal* hallucinations. It confused a benign drug with a powerful psychotropic.

– Earlier this year, it still thought Biden was president. It didn’t even know that Biden had dropped out and Kamala had run for president.

– When I was researching Boeing, ChatGPT was 80 points off: Boeing’s 5-year total return was roughly +30%, but ChatGPT said –50%.

Across thousands of queries where I’m an SME, I saw a lot of cracks. Where I’m not, everything just “sounds right.” That’s dangerous.

There are a lot of technical reasons for this that I’ll get into in another post.

I’m still very excited about where AI will go. With tech, we way overestimate what it will do in the next year, but way underestimate what it will do in ten years. (See: cell phones, iPhones, apps, social, Amazon.)

We will get there. Until then, take the responses with a brick of salt.

(I reviewed this post with ChatGPT. It had some good thoughts, including adding the “feels like magic” bullet. It tightened my wording in a few places.)

“When will AI be able to replace a product thinker like me?”

Whenever I meet people, they tell me that AI will get rid of most jobs, except theirs.

I know that there is a good chance that at some point AI will develop skills that will help it get closer to my skill level.

I’ve written about how AI can replace PMs with up to 7 years of experience. So I asked ChatGPT “When will AI be able to replace a product thinker like me?”

The TL;DR: Keep building the future. AI can (currently) only do what is in its training set. If you do things that haven’t been done before, you can stay ahead of the tsunami. If you can manage complex organizations to ship. If you’re a system-level thinker.

Is this good advice? That’s TBD. But it’s also the best answer that I’ve been able to come up with when talking with many long-term tech execs.

The following is a transcript of ChatGPT’s answer. When drafting this, I told ChatGPT that I’d put my own assessment in italics. It misunderstood and put its assessment of what my assessment would be. They were spot on, so I left them in.

Executive Summary

AI can already outperform junior and mid-level PMs at execution. It writes PRDs, prioritizes roadmaps, handles tradeoff matrices, and synthesizes user feedback—all faster, cheaper, and more thoroughly than most human PMs. But it still struggles with judgment, taste, and navigating political, temporal, and organizational complexity. Those are the layers where top product thinkers still thrive.

I’m watching this happen in real time. What used to take days of back-and-forth now takes a well-structured prompt and 30 seconds. But the AI still doesn’t know when to say no—or when to break the rules.

This memo breaks the question into three timeframes: what AI can do today, what it will likely master soon, and what may remain uniquely human.


1. What AI Can Already Do (Today)

AI now matches or outpaces $150K+/year PMs at:

  • Feature scoping & prioritization
  • Drafting PRDs and user stories
  • Competitive teardown and synthesis
  • UX tradeoff analysis
  • Roadmap justification and market segmentation
  • Edge-case anticipation
  • Implementation feasibility checks (tooling, latency, systems integration)

Key insight: For structured thinking and repeatable processes, AI is already a high-functioning, tireless, no-ego PM.

In side-by-side comparisons, I’ve seen AI outperform real PMs in breadth, speed, and edge-case awareness. It’s humbling—and a little unnerving.


2. What AI Will Likely Master Soon (1–3 Years)

  • Modeling political/organizational dynamics: understanding who needs buy-in and where blockers live
  • Multi-system reasoning: how product changes ripple through infrastructure, marketing, and user trust
  • Value-based prioritization: balancing revenue, delight, risk, and cost with contextual nuance
  • Taste mimicry: copying the instincts of great product thinkers based on exposure to their prior work

Key risk: If you stay in the realm of optimization and extrapolation, you’ll get outpaced by a machine that never sleeps and never second-guesses.

It’s already good at seeing the chessboard. It’s learning how the players behave. Soon it will start asking which games are worth playing.


3. What’s Hard to Replace (Maybe Never)

This is the true job of senior product leadership:

a. Knowing when to ignore the data

“The metrics say it’s fine, but I know it’s not.”

b. Calling organizational bullshit

“This isn’t a feature request. It’s a political workaround.”

c. Pattern recognition across domains

“This reminds me of something that failed five years ago in a different vertical.”

d. Taste

You can’t A/B test your way to Pixar. You don’t user research your way to vision. You feel when it’s right.

Key moat: Judgment, taste, and courage remain hard to automate—especially when success requires going against the grain.

These are the moments where senior PMs still earn their title. When to kill a darling. When to greenlight a moonshot. When to say “trust me.” AI doesn’t have that instinct—yet.


Final Assessment

If your job is to execute, AI is already competitive.

If your job is to bet, guide, interpret, and influence—it’s still your game to lose.

The PMs who will survive and thrive are those who:

  • Integrate systems thinking and storytelling
  • Understand power and incentives
  • Apply historical and cross-domain reasoning
  • Develop and defend taste

AI isn’t coming for your job. It’s coming for the parts of your job that weren’t your differentiators anyway.

ChatGPT: A $150,000 PM for $20 a month

I have a new product manager that outperforms most mid-level PMs I’ve worked with. It’s faster, more thorough, and has ideas veteran PMs miss. It’s also an AI. As someone who has recruited and managed PMs for over a decade, this is what keeps me up at night.

To see how far AI has really come, I started feeding ChatGPT the same product design and strategy questions I use to interview human candidates. The answer: it does great. For most of the tasks, it has easily outperformed entry-level PMs and PMs with 5–7 years of experience. It has come up with solutions that even veteran PMs haven’t. All for the low, low price of $20/month. And, of course, it does it faster.

The humble volume buttons

Here’s one example: In the latest hardware refresh, Google moved the volume buttons on the remote for their TV streamer from the side of the remote to the face.

New (left) and old Google streaming remote

ChatGPT came up with the expected answers: the buttons on the side have become very familiar to users because that’s the way cell phone buttons work. It also lets the remote be smaller.

Putting the buttons on the face is closer to traditional remote controls in terms of discoverability. That’s where they’ve always been. But it makes the remote substantially bigger. (See picture above.)

That’s where most PMs would stop. ChatGPT went into the details of tooling and manufacturing costs.

The absurdity test

I also did something I frequently do with PMs: suggest absurd ideas to see if 1) they understand that the ideas are absurd and 2) they are willing to push back.

I suggested doing a split test, with 5,000 units with the volume buttons on the side and 5,000 units with the buttons on the face.

Many junior PMs say “Sure, sounds like a good experiment.” They are trained to be data-driven.

Although that works well in a software environment, that’s a really bad idea for hardware. Doing a split run is prohibitively expensive due to tooling costs. You’d also have to come up with different packaging and marketing materials.

ChatGPT came up with the idea I was looking for: 3D print a few samples and bring in people to test them.

Absent that, ChatGPT recommended putting the volume controls on the side. So did Gemini. (If I meet the team who designed the new remote, I will definitely ask about the reason for the swap – and the swap of the home and assistant buttons.)

What does it mean for entry-level PMs?

I’m afraid the answer isn’t great. I can get $150k of productivity for $20/month. That’s not a tough call.

That raises the question: if there isn’t a pipeline for entry-level and mid-level PMs, where do senior-level PMs come from? The best answer for now is that PMs need to expand their breadth to handle more complexity: integrate design, development, business and systems-level thinking into their repertoire.

As Scott Belsky says, taste becomes more important than ever.

So does the ability to see what the AI doesn’t: power dynamics, company incentives, unquantifiable friction — and what’s not on the roadmap, but should be.

A snippet of the ChatGPT response is below.

20 years of Google Maps

Today marks 20 years since Google changed the online mapping paradigm. Instead of Mapquest’s static bitmapped maps, Google offered smooth panning and zooming right in the browser by dynamically loading map tiles.
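The tile idea is worth spelling out: the world is pre-rendered into a pyramid of small square images, and the browser fetches only the tiles covering your viewport as you pan and zoom. Here’s a quick sketch of the standard Web Mercator (“slippy map”) tile math in Python; the coordinates and the URL pattern in the comments are illustrative, not any provider’s actual endpoint.

```python
import math

def lat_lon_to_tile(lat: float, lon: float, zoom: int) -> tuple[int, int]:
    """Standard Web Mercator tile coordinates for a point at a given zoom level."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Roughly the Googleplex at zoom 15. As you pan, the browser fetches this tile
# and its neighbors instead of reloading one big bitmap the way Mapquest did.
x, y = lat_lon_to_tile(37.422, -122.084, 15)
print(f"tile z=15 x={x} y={y}")  # e.g., served from a URL like .../15/{x}/{y}.png
```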

I’ve been working in mapping and local products for longer than Google Maps has existed. Here are my observations of the industry, with a focus on Google.

  • The seed of Google Maps was an acquisition of an Australian company called Where 2.
  • Google Earth came from another acquisition, Keyhole.
  • When John Hanke, founder of Keyhole, asked Larry and Sergey to buy better imagery of the US, they asked how much it would cost to buy the whole world. They bought the whole world.
  • Before the acquisition, Keyhole was running out of money. They asked who on the team was willing to trade salary for more equity. Clearly, the latter group made out.
  • When Maps launched, the head of Mapquest at the time would often send all-hands emails to AOL employees about how no one would ever use Google Maps and how poor its traction was. (Somehow Mapquest is still around, but Yahoo! Maps is gone.) A lot of Mapquest’s dev team was based in Lancaster, PA. Lancaster is not the home of prime engineering talent and the product reflected it.
  • I was at a Google shareholder meeting where someone asked why Google was wasting so much money on Maps. The answer was essentially, “it’s our company, next question.” Of course it is now a key differentiator.
  • Amazon’s A9 division launched a version of Street View earlier than Google. Ironically, I was interviewing at Google at the time and one of my interviewers said “that will never scale.” (Even earlier than that, I launched street views for real estate in Minneapolis.)
  • The day Google announced turn-by-turn directions, Garmin shares plummeted. Google might have done Garmin a favor: Garmin instead focused on the more lucrative aviation, marine and sports enthusiast markets.
  • The launch of offline maps put another nail in the coffin of portable navigation devices that used to dot the windshields of cars across the country.
  • Google launched an extensive marketing campaign in Portland for Maps. I guess it wasn’t that successful because it wasn’t deployed elsewhere. I did get a lot of swag and some free drinks out of it.
  • Apple Maps was a disaster when it launched in 2012. I did an interview with NPR’s Science Friday about it. Now it is by far my preferred mapping product.
  • As you would expect from Apple, the visualizations are gorgeous. The integration with Apple Watch and AirPods is brilliant when I’m walking. I primarily use it when I’m walking, taking transit or renting a car (CarPlay). Unfortunately, the Tesla doesn’t allow CarPlay, so I’m stuck with an ugly version of Google Maps that looks like Maps did in 2005 and is worse than the later generations of PNDs.

Local is one of the most difficult problems out there. Businesses open and close all the time. (POI data is especially hard!) New roads get added. Construction temporarily re-routes roads. Roads are temporarily closed for events like marathons. Traffic data can be inaccurate.

Maps are ever evolving and there’s a long road ahead. Check out some of my wishlist and writings about maps. If you really want to, go back through the history of maps on my older blog.

Disclosure: I’m an investor in all of the public companies named. Mapquest is part of Yahoo!, which is primarily owned by Apollo after another failed content play by Verizon.

Three things I got right as a PM leader

Previous post: Three things I got wrong as a PM leader.

Understanding customer psychology is key

The best products come from the intersection of technology and psychology. Part of the fun of creating new products is trying to figure out things other people haven’t. Imagine it’s 1948 and someone dumps a pile of small, multi-colored, interlocking plastic shapes in front of you. You’d think it was junk.

Put a picture of a house or an airplane on the box and they’ll be able to fill in the gaps: “This is what I can do with those Legos.” You’ve provided people a framework for understanding and a spark for their creativity.

Understanding psychology includes using all of the senses. Incorporate sight, sound, touch, smell and taste. (OK, smell and taste aren’t necessarily applicable to online products.)

I was at a ski resort and their lift ticket scanners would beep when a ticket was scanned. But the beep was just a confirmation that it was scanned, not an indicator of whether it was valid. The liftie had to look at the display to see the ticket status. That could mean fumbling with gloves out in the cold. If I were designing it, the scanner would beep differently based on whether the ticket was valid or not. There would also be big green and red lights on top of the scanner.

Haptics are often overlooked, but they can be very useful. When you’re using walking directions, Apple Watch will tap you on the wrist to indicate that you need to make a turn. What they could do better: have a different tap pattern based on whether you need to make a left turn or right. You wouldn’t have to look down at the watch to see the arrow.
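Both examples boil down to the same idea: map every state to its own distinct signal so nobody has to look at a screen. A trivial sketch in Python, with made-up state and signal names purely for illustration:

```python
from enum import Enum, auto

class ScanResult(Enum):
    VALID = auto()
    INVALID = auto()
    UNREADABLE = auto()

# Hypothetical mapping: every state gets its own sound, light, and haptic pattern,
# so a liftie (or a walker wearing a watch) never has to look at a display.
FEEDBACK = {
    ScanResult.VALID:      {"beep": "single_high", "light": "green", "haptic": "tap"},
    ScanResult.INVALID:    {"beep": "double_low",  "light": "red",   "haptic": "buzz_buzz"},
    ScanResult.UNREADABLE: {"beep": "long_low",    "light": "amber", "haptic": "long_buzz"},
}

def signal(result: ScanResult) -> dict:
    """Return the multi-sensory feedback bundle for a scan result."""
    return FEEDBACK[result]

print(signal(ScanResult.INVALID))  # red light, low double beep, double buzz
```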

Price is not everything

Yes, price matters. But understanding and being able to contextualize price is important. 

We had a feature-rich product that you could use in a lot of different ways — making phone calls, checking email, storing files and sending faxes (!). It was a great set of features, but because it was a new product, people had no understanding of how much it should cost. In fact, we were underpricing it. I was able to create bundles of features that were more widely understood and comparable to how competitors priced things. We were able to double prices and double adoption.

Think carefully about whether you want to charge at all. There is a much bigger psychological difference between $0.00 and $0.01 than between $0.01 and $1.00.

Simplicity of payment also matters. In the Bay Area, there are more than two dozen transit agencies. Each has its own pricing and fare structure. Passes are different. Not only did you have to figure out how much it cost, you had to figure out how to pay. The payment part was simplified by having an NFC card that worked across the systems.

Some systems have gotten even simpler. In NYC and London, you can use your contactless credit card. No more having to find and buy a separate card.

If you’re shipping physical products, it’s a giant mistake to not incorporate Apple Pay. Apple created a great system to minimize friction in online commerce. Use it. This is especially true if you have low frequency customers.

Whoever sets the defaults controls the world

In general, people want to expend the least amount of effort, especially on things they aren’t super interested in. They will do whatever is easiest.

The new tablet-based point-of-sale systems make it easy to tip 15%, 18%, 20% etc. (depending on the system). You can tip less or more, but that usually requires going to a submenu and entering an amount. Not only is picking the pre-filled amounts easier, it tells users that they should tip one of those amounts. (Hey cheapskate!)

Think about walking through a supermarket. The big brands make it convenient to buy their products. They pay slotting fees to grocers to ensure that their products are at eye level or on the end caps. The better values, either in terms of quality or price, aren’t at eye level.

By setting the right defaults, you can push the metrics you care about in your preferred direction.