One function that some companies serve that I think is underacknowledged is creating Schelling points. Schelling points, or focal points, are an idea from game theory: when people benefit from making the same choice but can't communicate, one option in a set can stand out enough that each person can be confident the others will pick it too. From Wikipedia:

Consider a simple example: two people unable to communicate with each other are each shown a panel of four squares and asked to select one; if and only if they both select the same one, they will each receive a prize. Three of the squares are blue and one is red. Assuming they each know nothing about the other player, but that they each do want to win the prize, then they will, reasonably, both choose the red square. Of course, the red square is not in a sense a better square; they could win by both choosing any square. And it is only the “right” square to select if a player can be sure that the other player has selected it; but by hypothesis neither can. However, it is the most salient and notable square, so—lacking any other one—most people will choose it, and this will in fact (often) work.
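The payoff structure of this game is easy to check numerically. Here's a minimal sketch (my own illustration, not from the original example) that compares two uncoordinated players picking squares at random against two players who both defer to the salient red square:

```python
import random

# A sketch of the four-squares coordination game described above.
# Assumption: each player either picks uniformly at random, or always
# picks the single salient (red) square. The prize pays 1 on a match.

SQUARES = ["blue1", "blue2", "blue3", "red"]

def random_pick():
    return random.choice(SQUARES)

def salient_pick():
    return "red"  # the focal point: the one square that stands out

def expected_payoff(strategy_a, strategy_b, trials=100_000):
    """Estimate the match rate: 1 point whenever both picks agree."""
    wins = sum(strategy_a() == strategy_b() for _ in range(trials))
    return wins / trials

# Random players match only 1 time in 4 on average; players who both
# defer to the focal point match every time.
print(expected_payoff(random_pick, random_pick))    # ~0.25
print(expected_payoff(salient_pick, salient_pick))  # 1.0
```

The focal point doesn't change the payoffs at all; it only changes which equilibrium the players can find without talking to each other.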

It can be tricky to create a focal point in real life because it’s not quite the same thing as making something that appeals to people—it’s making something that people think will be selected by other people, or possibly even something that people think that other people will think will be selected by other people, and so on. This is arguably how most advertising works: the goal is less to convince you (personally) that you’re buying something good, and more to convince you that other people will understand what you’re buying to be good, or to at least understand what message you’re trying to send with it. When this is successful, your personal opinion of the company stops being the thing the ad depends on. From the linked post, which I recommend reading:

For each of these products, an ad campaign seeds everyone with a basic image or message. Then it simply steps back and waits — not for its emotional message to take root and grow within your brain, but rather for your social instincts to take over, and for you to decide to use the product (or not) based on whether you’re comfortable with the kind of cultural signals its brand image allows you to send.

It’s the same idea as common knowledge—even if you know a product is good, and everyone else knows it’s good, there’s an additional benefit if everyone also knows that everyone knows it’s good: you can invest in the product without explicitly coordinating with other people.

Having this quality can make a product significantly more valuable than could be guessed by its performance alone. This is basically what the Console Wars are fought over: not which game console is the most powerful (which would be relatively easy to settle) but which console is the best gathering point: where players can predict that developers will release good games, and where developers can predict that they’ll have an audience of players. Factors like graphical power clearly help a console become a focal point, but so do features like included control types, cost of manufacturing games, exclusive series, and online multiplayer—all of these benefit the owners of a console, even those who don’t personally use them, because other players will. Consoles that have failed have almost exclusively done so because they didn’t manage to become a focal point, whether because they attempted a surprise launch (the Sega Saturn) or because they were too niche (the Ouya).

I think this is most of what Apple does at this point. Once their focus changed from actual innovation to refinement, they became an efficient producer of focal points. When Apple Pay came out in 2014, the ability to pay for things with just your phone suddenly became a huge national talking point, even though it was only possible due to the phone payment terminals created for use with Android phones in 2011. They weren’t creating new functionality, but by making it a thing-that-all-iPhones-do, they were making it something that users could trust developers to support, and that developers could trust users to use. This is also what happened with the Apple Watch, which spawned dozens of mainstream thinkpieces about “what is a smartwatch good for?” a full year after Android Wear was released, and two years after the Pebble. Today they announced the HomePod, which appears to be an attempt to move the “Amazon Echo But By A Smartphone Company” focal point away from Google’s version.

This also seems to be what Google is trying to do with the Pixel phone: as a platform, it’s a clearer focal point when there are more first-party exclusives packed in, compared to the Nexus line, which was more of an all-around decent-quality reference model. This is also a perceived benefit for users of platforms like Android Auto: just by being associated with a big name, the platform is a better investment than whatever crappy OEM UI a car would come with otherwise.

It’s great when standards like Bluetooth (which was named after a king who united Denmark) do work out, instead of becoming yet another competing “universal” standard, because adoption costs for users drop dramatically (you don’t accidentally end up buying an HD-DVD player anymore, for example) and developers waste less effort jockeying over proprietary standards and spend more of it actually creating useful work. Ideally, these end up being open standards that anyone can implement, rather than proprietary standards that everyone ends up having to use anyway.