
@fortelabs Agreed. Best thing I watched this year. "Come on Jeffrey you can do it, pave the way put your back into it." Bo's special and "In and Of Itself" by @derek_del are the two things I beg strangers to watch this year.

Saved to
Recommendations
about 1 year ago

How ergodicity reimagines economics for the benefit of us all

Mark Buchanan is a physicist and science writer. Formerly an editor at Nature and New Scientist, his work has been published in Science, The New York Times and Wired, among others, and he writes monthly columns for Nature Physics and Bloomberg Views. His latest book is Forecast: What Physics, Meteorology, and the Natural Sciences Can Teach Us About Economics (2013). He lives in Dorset, England.

The World Clock in Alexanderplatz, Berlin. Photo by Tony Webster/Flickr

The principles of economics form the intellectual atmosphere in which most political discussion takes place. Its prevailing ideas are often invoked to justify the organisation of modern society, and the positions enjoyed by the most wealthy and powerful. Any threat to these ideas could also be an implicit threat to that power – and to the people who possess it. Their response might be brutal.

And so it was, after rumours recently spread that a widely known economist had redeveloped much of economic theory, and reached conclusions suggesting that the economic world could be greatly improved if it was radically reorganised. The ideas leaked out before their official publication, and drew intense interest from economists, politicians and social activists who sensed a potential moment of world-changing importance. Just hours before he could present his results to a global audience, however, the economist was killed in a mysterious car accident in Berlin. His manuscript went missing. But the accident was no accident – the economist was murdered by a conspiracy of political and financial interests determined to suppress thinking that could erode their power.

The story above is fiction – but plausible fiction taking place in the murky nexus of power, ideology and economics. It’s the focus of the German-language novel Gier (2019), by the Austrian author Marc Elsberg, who was inspired by research articulated in the paper ‘Evaluating Gambles Using Dynamics’ (2016) by Ole B Peters of the London Mathematical Laboratory (LML) and the late Nobel laureate Murray Gell-Mann of the Santa Fe Institute (SFI) in New Mexico. In the novel, Elsberg tries to imagine how a new way of thinking about economics could provoke a violent backlash by those benefiting from current illusions about the field. The thriller follows a dramatic scavenger hunt across Berlin, as authorities try to piece together who was behind the murder – and more importantly, what were the incendiary ideas that the economist was about to present.

In the real world, through the pages of scientific journals, in blog posts and in spirited Twitter exchanges, the set of ideas now called ‘Ergodicity Economics’ is overturning a fundamental concept at the heart of economics, with radical implications for the way we approach uncertainty and cooperation. The economics group at LML is attempting to redevelop economic theory from scratch, starting with the axiom that individuals optimise what happens to them over time, not what happens to them on average in a collection of parallel worlds.

The new concept is a key theme of research initiated by Peters about a decade ago, and developed with the collaboration of Gell-Mann and the late Ken Arrow at SFI, and of Alex Adamou, Yonatan Berman and many others at the LML. Much of this view rests on a careful critique of a model of human decision-making known as expected utility theory. Everyone faces uncertainties all the time, in choosing to take one job rather than another, or deciding how to invest money – in education, travel or a house. The view of expected utility theory is that people should handle them by calculating the expected benefit to come from each possible choice, and choosing the largest. Mathematically, the expected ‘return’ from a choice can be calculated by summing up the possible outcomes, weighting the benefit each gives by the probability of its occurrence.
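In symbols: for a gamble with possible payoffs $x_i$ occurring with probabilities $p_i$, the expected return is the probability-weighted sum

$$\mathbb{E}[x] \;=\; \sum_i p_i \, x_i ,$$

and expected utility theory says to pick the option that maximises $\mathbb{E}[u(x)]$ for some utility function $u$.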

But there is one odd feature in this framework of expectations – it essentially eliminates time. Yet anyone who faces risky situations over time needs to handle those risks well, on average, over time, with one thing happening after the next. The seductive genius of the concept of probability is that it removes this historical aspect by imagining the world splitting with specific probabilities into parallel universes, one thing happening in each. The expected value doesn’t come from an average calculated over time, but from one calculated over the different possible outcomes considered outside of time. In doing so, it simplifies the problem – but actually solves a problem that is fundamentally different from the real problem of acting wisely through time in an uncertain world.

Expected utility theory has become so familiar to experts in economics, finance and risk-management in general that most see it as the obvious method of reasoning. Many see no alternatives. But that’s a mistake. This inspired LML efforts to rewrite the foundations of economic theory, avoiding the lure of averaging over possible outcomes, and instead averaging over outcomes in time, with one thing happening after another, as in the real world. Many people – including most economists – naively believe that these two ways of thinking should give identical results, but they don’t. And the differences have big consequences, not only for people trying to do their best when facing uncertainty, but for the basic orientation of all of economic theory, and its prescriptions for how economic life might best be organised.
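To see the gap concretely, here is a minimal simulation sketch using the multiplicative coin-toss gamble that is standard in the ergodicity literature (the 1.5×/0.6× payoffs are illustrative, and the code is mine, not the paper’s). The ensemble expectation grows 5 per cent per round, yet the typical individual trajectory decays:

```python
# Multiplicative coin toss: heads multiplies wealth by 1.5, tails by 0.6.
# Ensemble expectation per round: 0.5*1.5 + 0.5*0.6 = 1.05 (5% growth),
# but the time-average growth factor is sqrt(1.5*0.6) ≈ 0.95 (decay).
import random
import statistics

def play(rounds: int, wealth: float = 1.0) -> float:
    for _ in range(rounds):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

trials = [play(100) for _ in range(100_000)]
print("expected wealth (ensemble theory):", 1.05 ** 100)      # ≈ 131.5
print("median simulated wealth:", statistics.median(trials))  # ≈ 0.005
print("fraction below starting wealth:", sum(w < 1 for w in trials) / len(trials))
```

The expectation value, averaged across parallel worlds, is enormous; the median world – the one you should expect to live in – loses almost everything.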

The upshot is that a subtle and mostly forgotten centuries-old choice in mathematical thinking has sent economics hurtling down a strange path. Only now are we beginning to learn how it might have been otherwise – and how a more realistic approach could help re-align economic orthodoxy with reality, to the benefit of all.

Of particular importance, the approach brings a new perspective to our understanding of cooperation and competition, and the conditions under which beneficial cooperative activity is possible. Standard thinking in economics finds limited scope for cooperation, as individual people or businesses seeking their own self-interest should cooperate only if, by working together, they can do better than by working alone. This is the case, for example, if the different parties have complementary skills or resources. In the absence of possibilities for beneficial exchange, it would make no sense for an agent with more resources to share or pool them together with an agent who has less. The standard economic approach, by nature, tends to come down in favour of splintering society into individuals who see only their own interests, and it suggests that they do better by this approach.

Things change dramatically, however, if one considers how parties do when facing uncertainty and repeatedly undertaking risky activities through time. As Elsberg illustrates in his novel, such conditions greatly expand the scope for pooling and sharing resources to be beneficial to all parties. From a basic point of view, pooling resources provides all parties with a kind of insurance policy protecting them against occasional poor outcomes of the risks they face. If a number of parties face independent risks, it is highly unlikely that all will experience bad outcomes at the same time. By pooling resources, those who do suffer bad outcomes can be aided by those who don’t. Mathematically, it turns out that such pooling increases the growth rate of resources or wealth for all parties. Even those with more resources do better by cooperating with those who have less. This insight needs further development, but suggests that the scope for beneficial cooperation is much greater than previously believed.
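A toy extension of the same coin-toss idea illustrates the pooling claim (my construction, for illustration only, not the paper’s model): two players face independent 1.5×/0.6× gambles each round, and the ‘poolers’ merge and split their wealth equally after every round.

```python
# Resource pooling under multiplicative risk (illustrative assumptions):
# pooling raises the time-average growth rate for both players.
import random

def grow() -> float:
    return 1.5 if random.random() < 0.5 else 0.6

rounds = 5_000
solo = a = b = 1.0
for _ in range(rounds):
    solo *= grow()
    a, b = a * grow(), b * grow()  # two independent gambles
    a = b = (a + b) / 2            # pool and share equally each round

print("solo per-round growth:  ", solo ** (1 / rounds))  # ≈ 0.95 (decay)
print("pooled per-round growth:", a ** (1 / rounds))     # ≈ 0.998 (near break-even)
```

With more players pooling, the per-round growth factor climbs toward the ensemble value of 1.05 – which is why, in this picture, even the resource-rich gain by sharing.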

The developing ideas of Ergodicity Economics are described in a set of lecture notes, in the aforementioned 2016 paper, and in a number of blog posts that describe some of the ideas and their implications. The ideas offer a completely new perspective on matters ranging from optimal portfolio management to the dynamics of wealth inequality, and the circumstances under which sharing and pooling resources can be beneficial to all. If spread widely, these ideas could exert influence over the economics profession and encourage governments to take a fundamentally different approach to policy.

As such, one might expect these ideas to generate considerable controversy, perhaps even forcible resistance – as explored in the novel Gier.

To read Mark Buchanan’s interview with Marc Elsberg, visit the LML blog.

almost 3 years ago

Why people jump red lights and what it says about startup failure

One of the many things that used to baffle me was behavior that’s evidently harmful to the very people engaging in it. Take the case of Pune (a city in India where I live). It simultaneously has the lowest rate of helmet adoption and the highest number of two-wheeler casualties. How do you explain that?

Obviously, my confusion was due to a cognitive bias that affects many people. It’s the mind projection fallacy: assuming that how I think is how other people must also be thinking. It’s an understandable bias, as we know no other mind better than our own. We have direct access to our own thoughts, but for others we can only guess why they’re behaving a certain way. My mistake was assuming that since I understand the tradeoff between the cost of wearing a helmet and the benefit of avoiding a potential accident (conditioned on how frequently I use a two-wheeler * the probability of an accident each time I use it), everyone else must see that it would be foolish not to wear one.

It’s easy to judge people and call them irrational. If it’s done for warm, fuzzy boosts to ego in an idle chat, that’s ok. But it gets frustrating when your personal or professional life depends on it. Say, if you are a parent and your children refuse to take obvious precautions. Or take, for example, a common frustration of new entrepreneurs while selling a product. During a pitch (either on a sales call or on the product / landing page), new entrepreneurs usually end up explaining all the benefits that are obviously beneficial to the prospect, and yet when the customer hesitates or doesn’t end up buying, the automatic response that leaps to the entrepreneur’s mind is: “why don’t they get it?”. I know this first hand, as my first product (the Wingify platform) was built over 8-9 months during which I added one feature after another to make it into a very comprehensive marketing platform. Here’s how it looked.

My first SaaS attempt (2009)

In my head, it had everything that a marketer would need: analytics, testing, and personalization. And yet when it was launched on Hacker News, feedback (below) wasn’t encouraging.

“Unfortunately, it took me 14 page views and 1738 seconds to find the product tour on the page. In general, I found the organization of the different product descriptions to be confusing.”

“I looked at this and completely failed to “get it”.
Then I read the summary here and it sounds like a grab bag of stuff to both analyse and deploy content to sections of your site’s audience.”

“Wooah, jargon overload – you’ve managed to put pretty much every technical term on one page and I think that could be a problem to your audience”

If you have experience with designing products, the problem with the screenshot will be obvious to you. I’m using this example to highlight a core principle of human behavior that comes up in multiple situations and it explains both why people jump red lights and why good products fail.

Risk aversion in a gamble, risk seeking on the road

The difference between what researchers say people’s behavior should be (anybody who jumps a red light is making a wrong choice) and what their actual behavior is (if the police aren’t around, people do jump red lights) comes from the description-experience gap. Many psychological experiments give people explicit choices with clear probabilities, and participants know they’re in an experiment. But in real life, people rarely have direct access to such probabilities. Rather, as they go through life, they sample their experiences and estimate the probabilities of different situations themselves. Research studying people’s actual behavior shows that when learning from experience, we significantly underweight (or completely ignore) probabilities at the extremes: we treat very small probabilities as never-gonna-happen, and very large probabilities as certain-to-happen.
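A toy simulation shows why experience-based estimates erase rare events (my illustration, with made-up numbers): given a 1-in-1,000 risk and a few hundred lived experiences, most people never observe the bad outcome even once, so their felt probability is exactly zero.

```python
# Estimating a rare probability purely from personal experience.
import random

p_true, n_experiences, n_people = 0.001, 300, 10_000
never_saw_it = sum(
    all(random.random() > p_true for _ in range(n_experiences))
    for _ in range(n_people)
)
print(f"true risk per exposure: {p_true}")
# (1 - 0.001)^300 ≈ 74% of people have never seen the event happen
print(f"people whose experience says 'never happens': {never_saw_it / n_people:.0%}")
```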

Research cited in this paper confirms this intuition.

Using Israeli data, Bar-Ilan (2000) calculated that the expected gain from jumping one red traffic light is, at most, one minute (the length of a typical light cycle). Given the known probabilities they find that: if a slight injury causes a loss greater than or equal to 0.9 days, a risk-neutral person will be deterred by that risk alone. However, the corresponding numbers for the additional risks of serious and fatal injuries are 13.9 days and 69.4 days respectively.

From this data, they derive the conclusion: “Thus, red traffic light running is simply caused by some individuals ignoring (or seriously underweighting) the very low probability of an accident.”
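The deterrence logic is a plain expected-value comparison: jump only if the time saved exceeds probability × loss. A back-of-the-envelope reconstruction of the thresholds quoted above (the break-even probabilities are my derivation from the paper’s numbers, not quoted from it):

```python
# Risk-neutral deterrence: don't jump if p(injury) * loss > expected gain.
gain_days = 1.0 / (24 * 60)  # at most one minute saved, expressed in days

for injury, loss_days in [("slight", 0.9), ("serious", 13.9), ("fatal", 69.4)]:
    # probability at which the expected loss equals the one-minute gain
    p_breakeven = gain_days / loss_days
    print(f"{injury:7s} injury deters a risk-neutral person if p > {p_breakeven:.1e}")
```

A slight injury needs only a perceived probability above roughly 8 in 10,000 to make jumping a bad bet – and people who never experience accidents round that probability down to zero.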

The obvious question is why does this happen?

To explain this risk-seeking behavior in red light jumping, it’s helpful to understand when we’re risk-averse.

We’re risk averse in situations that don’t repeat often. In such one-off cases, since we can’t estimate the odds from our own experience, we take the safer approach and become significantly loss averse (a mistake in a situation that doesn’t repeat could be fatal). That’s how our brain has evolved: when you see a new pattern for the first time in the savannah, it’s safer to assume it’s a tiger preying on you. However, if you walk the same path every day to fetch water and one day come across a researcher who tells you there’s a 1% chance of a deadly spider in the grass, you are less likely to believe him, because every day you’ve been estimating your own probabilities (and so far you haven’t died).

It isn’t that you are ignoring the possibility of a spider, it’s simply that doing the same thing daily has given you an intuitive feeling for probabilities that informs your tradeoffs. We know ourselves best, so we overindex on our experience (spider bites never happen). This is why people don’t wear helmets. They’re overconfident in their driving abilities and every single ride without a helmet increases their confidence.

However, as I wrote above, when something is new and we don’t have experience with it, we become risk averse.

Losses hurt us much more than gains benefit us

Risk aversion was one of the major findings of Kahneman and Tversky, who used it to propose Prospect Theory (for which Kahneman later won the Nobel Prize; the original paper is the second most cited in economics. If you haven’t read it, go read it now.). We’re risk-averse when it comes to things that have never happened to us before: deciding to do a startup, trying a new product, or marrying someone. In each of these cases, our natural inclination is to play it safe (not quitting the job, not buying that product, not marrying the first person we meet). We take the safer choice unless the benefit * probability of benefit far outweighs the potential loss. Different people have different attitudes towards risk, but Kahneman and Tversky empirically estimated the loss/gain ratio to be about 2 (i.e. you value what you have – the status quo – twice as much as a probable gain).
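The ~2× ratio corresponds to the loss-aversion coefficient in the prospect-theory value function. A minimal sketch (the functional form is Kahneman and Tversky’s; the λ ≈ 2.25 and α = 0.88 parameters come from Tversky and Kahneman’s 1992 follow-up study):

```python
# Prospect-theory value function: gains are discounted, losses are
# amplified by the loss-aversion coefficient lam.
def value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

print(value(100))   # ≈ 57.5   (subjective value of gaining $100)
print(value(-100))  # ≈ -129.5 (losing $100 hurts ~2.25x as much)
```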

Prospect Theory applied to products

When customers interact with new businesses or products, it’s a new experience for them. Among many others, three psychological phenomena shape such interactions: customers mistrust the value promised by sellers; they are impatient to receive that promised value; and, while interacting with the product, they may not get enough value to counterbalance the costs they incur.

1) The Market for Lemons Problem
There’s typically an information asymmetry between what sellers know about their products and what buyers know. From the buyers’ perspective, while weighing a potential purchase, they never have enough information to know whether what they’re getting is worth the cost (in time, money, effort) being asked of them. Even when the seller gives information about the value the buyer will get, buyers remain suspicious, because both honest and dishonest sellers say similar things. So for new products, as buyers can’t tell good products from bad ones, they typically end up wanting to pay (in time, money, effort) much less than what the seller demands. In many markets, this drives away good, honest sellers, leaving only dishonest ones (which further aggravates the mistrust). The most famous example of the market for lemons is used cars, but some version of this plays out in all markets.

Who would you buy lemons from?

In the tech startup world, one of the major reasons newly launched, obviously good products fail is mistrust in the market for new tech products. Honest entrepreneurs get penalized for the dishonest entrepreneurs (who overpromise and underdeliver). So, for success, a key objective for an entrepreneur becomes winning the customer’s trust. (This is why brands and social proof matter a lot for new products.)

2) The description-experience gap

Trust is not binary: customers are continuously updating their beliefs about what products can do for them. Every time a customer uses a product, she is evaluating the costs (what she’ll lose) against the benefits (what she’ll gain). Money is only one of many costs. Others include time, effort and reputation.

My mistake with the Wingify marketing platform was asking people to invest their time in learning what the platform was capable of when they had just started interacting with it. I was essentially telling them: “hey, you don’t know me and have every reason to suspect that I’m overpromising, but please spend several minutes trying to understand what is going on in the product. Trust me, I know my product will deliver you value worth many times the cost of your time“. This didn’t fly with people, and so for my second product (Visual Website Optimizer), I stripped everything away. In this iteration, all the customer could do was just one thing: enter the URL of her website.

No cognitive investment required from user

After this step, a visual interface opened up where the benefit I had promised (“Visual Website Editor“) became immediately evident to the customer.

User made a small investment (enter the URL), product delivered immediate benefit

In the next step, I asked for some more information, then delivered some more value. And so on. For complex B2B products, doing this is hard, but it’s worth remembering that: a) customers are rational in mistrusting you upfront, as they don’t know what you, the seller, know; b) customers weigh multiple costs (money, time, effort, cognitive load, reputation) when making the cost vs benefit tradeoff. Their expectation of potential benefit continuously changes as they gain more experience with a product or a company (new products usually fail because customers never reach the point of gaining enough experience to derive full value from them).

My thesis is that for successful products, the cost vs benefit curve goes something like below.

Good vs bad sales, onboarding, product usage or any other customer experience

3) Risk aversion
Because of risk aversion, benefits for customers must be many times the demanded cost. And through their experience during repeated interactions with the seller, they’re making their own estimates of those benefits. At the moment (for new products, usually early in the journey) when their estimate of the costs (which they know for sure – fill a form, talk to sales, pay $299/mo) seems higher than the evidence of current and future value, they drop off and abandon the interaction. (Unless, of course, it’s something they’ve already bought and later discover was a lemon. Then they go on social media.)

This is also the reason why startups find their chances with early adopters who are much less risk-averse (and demand less evidence of benefits upfront).

What are YOUR ideas for higher helmet adoption in Pune?

Dear reader, if you have read so far, I have a question for you that’s related to this article.

Pune has highest rate of two wheeler accidents and lowest rate of helmet adoption. Authorities have tried to change behaviour but failed.

Using any feasible technology or technique, if you were traffic police chief, how would you do so?

I’ll RT interesting proposals.

— Paras Chopra (@paraschopra) December 8, 2017

Tweet your response to me as a reply to this thread and I’ll retweet the most intriguing responses. In the same thread, you can also check out and comment on what others proposed.

Have an opinion on this essay? You can tweet it to me (tagging my handle @paraschopra) or you can send your feedback on email. Thank you!

Saved to
Behaviour Paras Chopra
almost 3 years ago

Do you believe in progress? Here's why you're wrong

Saved to
Progress Paras Chopra
almost 3 years ago

Hundreds of books and movies have depicted World War II. Eugene Sledge, a Marine who fought on Okinawa and Peleliu, says almost all of them ignore one of the most important stories of the war: how hard it was to keep soldiers stocked with ammunition.

U.S. soldiers were supplied with 40 billion rounds of ammunition during the war, according to historian Rick Atkinson. A typical soldier carried 80 rounds – ten eight-round clips. So soldiers had to be resupplied with ammunition something like 500 million times, or an average of about 100 times per soldier in combat.

Sledge writes in his memoir that the ammo is often portrayed as just being there, ready to take. It wasn’t:

We spent a great deal of time in combat carrying this heavy ammunition on our shoulders to places where it was needed—spots often totally inaccessible to all types of vehicles—and breaking it out of the packages and crates. On Okinawa this was often done under enemy fire, in driving rain, and through knee-deep mud for hours on end. Such activity drove the infantryman, weary from the mental and physical stress of combat, almost to the brink of physical collapse.

Carrying bullets doesn’t make for good stories or movie scenes. Shooting them does. But the combat part of war relies on the supply logistics part having first been met. There’s a long history of armies running out of food and ammo before bravery and spirit. You can’t have one without the other. They’re inseparable pairs. The amazing stuff doesn’t work without the boring stuff.

A lot of things work that way.

Some skills are necessary for big achievements and also nearly irrelevant unless they’re matched with other skills. And those other skills tend to be less exciting than the things that get the attention.

A few similar things in business and investing:

Strategy is important, but it has to be matched with good communication to have a shot at working.

Tim Chen, founder of NerdWallet, was asked what the hardest part of starting his business was. “I think one of the hardest parts was convincing other people I wasn’t crazy,” he said.

Most great investing or business strategies sound crazy at first because few people have tried them, or they’ve tried and failed. Getting people on board relies on your ability to explain your vision in a way that convinces investors, employees, and customers that you’re worth betting on. Good storytellers with OK ideas are more persuasive than average people with the right answers. This is obvious because everyone knows how much money and effort goes into marketing. But communication isn’t a skill a lot of technical minds have. Communication is easy to downplay if you’re technically minded because it’s a soft skill, the kind of thing low-IQ people resort to. The person capable of discovering a cancer drug isn’t always the kind of person skilled at PowerPoint design. But communicating your idea in a pitch meeting or quarterly letter requires one as much as the other. The crazier the idea, the truer this becomes. The most technical geniuses I’ve met are, to a person, frustrated that not everyone can understand what they see.

Chen said:

You can’t just put your head down and work hard and do things. You have to communicate well what it is you’re trying to do—the vision behind what you’re trying to do—to get other people inspired to understand what you’re doing and help you out.

Part of this is managing longevity. Few strategies work right away, or work all the time. Keeping people around during the downtimes requires them to believe in your story, which requires you to be able to tell a story.

Investing skills are important, but they have to be paired with personal finance skills to be sustainable.

I’m surprised how many good investors I know with terrible personal finance habits. Maybe I shouldn’t – they are completely different skills. The ability to uncover an undervalued investment is not associated with your propensity to avoid lifestyle bloat. The irony is that people who will move mountains to gain a few basis points of return bleed ten times that amount on personal spending that all science says adds little to their net life happiness.

But investing and personal finance rely on each other, because few industries are as cyclical as investing, and, as Charlie Munger says, “the first rule of compounding is to never interrupt it unnecessarily.”

Compounding works only to the extent that your lifestyle doesn’t force you to sell investments at inopportune times. Someone earning average but uninterrupted returns may be better off than someone outperforming by 50 basis points a year yet forced to liquidate a portion of their portfolio during every bear market to pay off lenders.
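A toy compounding calculation makes the point (all numbers hypothetical, chosen only for illustration): a small return edge is easily erased by a handful of forced liquidations.

```python
# Investor A: 7.0% a year, never interrupted.
# Investor B: 7.5% a year, but forced to liquidate 20% of the portfolio
# in each bear-market year (illustrative bear years below).
bear_years = {2000, 2001, 2002, 2008, 2020}

a = b = 100_000.0
for year in range(1995, 2025):
    a *= 1.07
    b *= 1.075
    if year in bear_years:
        b *= 0.80  # forced sale near the bottom

print(f"uninterrupted 7.0%: ${a:,.0f}")  # ≈ $761,000
print(f"interrupted 7.5%:   ${b:,.0f}")  # ≈ $287,000
```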

An unrelated but similar point: one divorce can erase a lifetime of alpha.

Marketing is great, but it has to be matched with a great product to be convincing.

A lot of marketing problems are just product problems. Marketing that’s too pushy or too gimmicky is rarely the fault of the marketer. It’s hard to sell a product no one needs and few want.

CBS Moneywatch published a list of 10 companies with great marketing and 10 companies with “insanely bad” marketing. The great marketers (Apple, Nike, Geico, FedEx, Toyota) all sell products with high rates of consumer satisfaction. The bad marketers (Motorola, Sprint, Blackberry) … don’t. This is not a coincidence. Buffett’s line – that when good managers run bad businesses, it’s the business’s reputation that prevails – works for the relationship between marketing and products, too.

In the eager quest for growth, some companies plow disproportionate energy into marketing that, in hindsight, often should have been spent further developing a product people need and want.

It’s not that product is superior to marketing – every company has to tell its story – but one doesn’t work without the other. This is especially important today because the young generation, who grew up with deep access to information, has sharp marketing BS detectors.

Bold ideas are great, but they have to be matched with the realities of accounting, finance, and corporate norms to be practical.

Kanye West recently told David Letterman that “if you guys want these crazy ideas and these crazy stages, this crazy music and this crazy way of thinking, there’s a chance it might come from a crazy person.”

The same idea applies to business. Everyone wants unique vision. But no one should be shocked when people who think about the world in unique ways you like also think about the world in unique ways you don’t like.

A big part of this is that the kind of personality that lets you take big risks with strategy and product ideas – which is good – will also let you take big risks with accounting, finance, and HR, which can implode quickly. As a rule of thumb, people who are extremely talented at one thing tend to be deficient at another. The world isn’t fair enough for it to be otherwise, and amazing skills tend to come from a hyper focus that monopolizes mental bandwidth away from learning how other things work.

Pairing a big-idea thinker with a pragmatic operator and an unimaginative accountant is vital to giving big ideas a fighting chance of survival.


Jul 9, 2019 by Morgan Housel · @morganhousel

Saved to
Startup Strategy Morgan Housel
almost 3 years ago

If you ask people what the most important thing is in a relationship, you’ll get a myriad of answers — big ones being trust, communication, respect, etc. — but all of the answers really tie back to one singular factor. And it’s: emotional stability.

Without it, there effectively is no relationship.

With it, everything else will naturally follow.

I’ve written about this before — in fact, “emotional stability” is number one on my own list, and one of only three things I absolutely need in a partner. Emotional stability is the sexiest trait a partner can have.

And I’m not alone in saying this — many people agree on this being the number one most important thing.

Mark Manson calls it “people who manage their insecurities well” or “the ability to see one’s own flaws and be accountable for them.”

Karen Salmansohn called it “good character values” — i.e., “not a psychopath” (and then includes a list of “psychopath” characteristics — thanks, Karen.)

Leo Babauta of Zen Habits uses the term “emotionally self-reliant,” saying,

“We look for happiness from others, but this is an unreliable source of happiness… And here’s the thing: it’s not their job to fill our emotional needs.”

Zaid Dahhaj describes emotional self-sufficiency as “your relationship with yourself,” which is the same thing. He goes on to say,

“If you do not love yourself entirely and actively ensure your own needs are met, you will find it difficult to do the same for others.”

And when we talk about “actively ensuring your own needs are met,” we do not mean “actively asking others to meet them.” We mean “actively working to meet them yourself.”

Healthy relationships do not start from a standpoint of “scarcity,” “shortage,” or “something missing.” Contrary to popular cliches, they are not about finding our “other half,” or someone to “complete” us. Healthy relationships are built only with people who are already complete going in.

And even the other biggies — communication, trust, respect, etc. — will come along afterwards, fluidly and organically, if emotional stability is well-nurtured and in place by each individual (regarding their own, not each other’s) going in. You will foster good values — and find partners who mirror them — if you have emotional self-sufficiency and solid self-respect.

How to build it:

There are many better resources out there than this list. But to give you an idea:

  • Sit by yourself. Sit “without a device or distraction, for a few minutes. Look inside. Notice your thoughts as they come up. Get to know your mind.” — Leo Babauta
  • Learn to fix your own problems. “If you are bored, fix it. If you are lonely or hurt, comfort yourself. If you are jealous, don’t hope that someone will reassure you … reassure yourself.” — Leo Babauta
  • Take responsibility. We only control ourselves — we do not control other people, or the environment. Figure out what falls within your real control (yourself) and focus on that.
Saved to
Relationships
almost 3 years ago

This is my advice on improving the UX of your designs WITHOUT hours of user research sessions, paper prototyping playtime, or any other trendy UX buzzwords.

(Seriously, search “design thinking”. 0 results. Nailed it!)

Who’s this article for? I’m looking at you:

  • Developers. You created your own app, but every time someone downloads it, they struggle to use it. And you know if they’re telling you this, then it’s really bad.
  • Graphic designers. Looking to make the transition into digital, but trying to learn UX by reading articles online is… a very painful way to die 😬
  • PMs. Your job is already like 25% UX designer. Would be nice to level up those skills.
  • And the hustlers. Anyone working on digital side projects nights/weekends. This one’s for you too 🍻

If you’re already a UX designer, I don’t expect this article to go over super well with you. I’m basically skipping over entire chunks of our field in favor of focusing entirely on the single most lacking skill in aspiring UX designers (or UX-adjacent folks who find themselves designing screens).

I call it “speaking interface”.

When I started as a professional UX designer, I was shocked how many times my clients would hand me the initial wireframes (or the living, breathing, in-browser MVP) and there’d be completely obvious UX mistakes all over them. I’m not talking about things you need hours of research and A/B testing to discover. I’m talking, like, dead simple mistakes.

For lack of a better example:

Somewhere out there, there's a team that knows HTML, but doesn't know the difference between a radio button and a checkbox. pic.twitter.com/VBwk8Jxekd

— Erik D. Kennedy (@erikdkennedy) May 24, 2017

Now my clients weren’t this bad, but look - you don’t need to be Bret Victor to understand that if you can only select ONE thing from a list, you need RADIO BUTTONS, not checkboxes. To understand that, you just need to be able to speak interface. And that’s the craziest thing to me. Interface fluency is something anyone can achieve. You don’t need college, you don’t need Lambda school, yadda yadda.

Frankly, you just need the presence of mind to (A) pause every single time you’re confused or frustrated by some app, (B) verbalize what about the interface makes you confused/frustrated/etc., and then (C) figure out how you could avoid that specific snafu in your own designs.

Rinse and repeat that non-stop and you’ll be a pro in no time.

What I want to talk about today is four little rules that will help eliminate these pain points in your own designs. They’re the heuristics that are a level or two deeper than “use radio buttons if the user can only select one thing”. But, if you can remember to obey the things in this checklist, you’ll be that much closer to creating designs that your users can use easily right off the bat, freeing up your time for other, more important things.

(That’s when the other UX designers can lecture you on the newest academic user research methodologies!)

Here’s what we’ll be covering:

  1. Obey the Law of Locality
  2. ABD: Anything but Dropdowns
  3. Pass the Squint Test
  4. Teach by example

Any questions? Let’s dive right in.

1. Obey the Law of Locality

Put interface elements where they effect change.

All else being equal, you should put the elements in your interface near where they effect change. This is because, when a user wants to make a change to the system, they will unwittingly glance at where that change will happen.

So, let’s say you have a list of things. Where do you put the “ADD A NEW THING” button?

law of locality in playlist illustration

Q: Well, where does the change happen?

A: At the end of the list.

Great, put the button at the end of the list.

WAIT. You’d think this would be pretty simple. But there’s a temptation.

The temptation is to just put it where we have space for it.

For instance, if you have a menu, maybe you’d think “We have a menu! Why not just put it in the menu!?”

the law of locality violated in a music UI

The answer is, of course, because users won’t look for it there.

(And the ultimate answer is that having a place where “we just put things” will ultimately render your app an unusable mess that people will abandon the first chance they see a half-viable alternative)

Don’t think I’m joking. Have you ever noticed this interface?

the law of locality violated in evernote's interface

An equally-bad/common alternative is to just take a solution that you’ve seen applied by A Respected Tech Company without any thought as to if it makes sense for you. “We need an ‘Add’ button? I’ve seen one of those. Hold my beer!”

the law of locality violated with a floating action button

Look. Another button in a place users will never look for it. To compound things, users will suspect this button actually adds a new whatever-is-currently-displayed-on-the-big-blank-white-space. Because that’s where the control is.

Your users want you to follow the Law of Locality.

So, now that we know it, let’s use it.

the law of locality in a list of music playlists

Bam.

But maybe you’re a born UX designer and you always visualize what happens when there are 1000 items instead of 5, and you realize: there’s still an issue here. If the user creates a TON of playlists, this button will disappear hundreds of pixels offscreen!

So maybe you could anchor the button near the bottom of the list, but have it always be visible, no matter how many hundreds of playlists the user has created.

the law of locality in Spotify's UI
For bonus points, (1) use the inline button UNTIL it's about to go offscreen, and at that point switch to the anchored solution and (2) make it more visible than Spotify's button, which took me months to notice while I haplessly right-clicked individual songs to add them to my playlists!

Brilliant! And this is what Spotify has done.

Another possibility is to say “Hey, we can’t reliably and consistently show the button at the bottom of the list. Where’s the nearest logical place to put it?”

And the answer is, (I think pretty obviously) the top of the list.

the law of locality obeyed in Spotify's UI
I wish.

Sacrebleu! This is actually just what Spotify-competitor Rdio did, before they were acqui-shut-down by Pandora.

law of locality obeyed in rdio's UI
Reconstructed from memory (like all reality, if you think about it)

The lesson here is clear. Never sell your company, and always always obey the Law of Locality.

(There are actually 3 laws of locality, and “Put UI elements where they effect change” is only the first. If you’re interested, read more here)

Next!

2. ABD: Anything but Dropdowns

Any time you feel tempted to use a dropdown, ask yourself if one of these 12 controls is better instead.

One non-obvious lesson of UX design is that dropdowns are pretty much the worst control.

3 dropdowns to specify one simple date
Welcome to hell!

They’re not always bad, but you’re working against the following:

  • Dropdowns take too many clicks/taps. One to open, a few more to scroll around to the right option (on mobile), another to select the right option, and (on mobile) another to close. (Compare to the single click use-cases of many of the options listed below)
  • Dropdowns don’t show you the options! You have to click into them to see the possible values, and on mobile, you can often only see a couple at a time.
  • Long dropdowns are ridiculous to navigate. A country dropdown for an app used worldwide could have 195+ countries. At some point, almost any other method of asking a user their country would be quicker than having them scroll through a dropdown (“Smoke signals?” AGCKKHKGH).

This is pretty straightforward, so let’s just cover some examples for the various major cases of dropdown replacement.

If you’re choosing between 2 options…

We already have some fantastic options for allowing users to choose 1 of 2 things, all of which (A) show the options right away and (B) require fewer taps/clicks.

For questions to which there is no “default” answer, and either might be picked with roughly equal frequency, try a segmented button.

segmented button instead of dropdown control

If there is a “default state” that corresponds to “Off”, try a checkbox. A checkbox is also good for settings that don’t take effect until the user presses Save or Submit.

checkbox instead of dropdown control

Similar to the checkbox is the switch, which is good for changes that should apply immediately.

switch instead of dropdown control

Checkboxes and switches only make sense when there are two options. However, the following controls make sense for 2 to roughly 5 options, so you might try some of the following instead.

If you’re choosing between 2–5 options…

We covered segmented buttons above (and they apply here too) but it’s worth mentioning that when there are more options, vertical segmented buttons allow even more flexibility of answer length.

vertical segmented button instead of dropdown control

Radio buttons are similar, but particularly useful if you need to display a couple sub-elements for each choice.

radio button instead of dropdown control

For detailed displays of just a few choices, cards are where it’s at.

cards instead of dropdown control

One trick I like is displaying visual options literally.

visual options instead of dropdown control
Tesla likes it too, apparently.

If you’re choosing between many options…

When there are enough options that scrolling through them is annoying, consider a typeahead control. It’s like a search bar that shows top matching results as you type.

typeahead control instead of dropdown control

If you’re choosing a date…

Picking a date from dropdowns is the worst. If I ever do this, then I’ve really failed as a UX designer.

don't use dropdown controls for choosing dates

But what do you use instead? Well, it depends. First question: what type of date are you picking?

  1. Poisson dates. Dates most likely to be in the near future, tapering off as you go farther into the future (or nearer to the present), e.g. date of an appointment you’re scheduling, date of a flight you’re purchasing
  2. High-variability dates. Dates that have a similar probability of being anywhere in a wide range of time, e.g. date of birth, day-and-month of your birthday

(Yes, I named “Poisson dates” after the mathematical distribution 🤓)

For different types of date-picking, you should use different controls.

chart of poisson vs. wide-range dates

For Poisson dates, you want to make it DEAD SIMPLE to pick dates in the most common range (e.g. for scheduling an appointment, it might be the next, say, 14 days). It’s perfectly OK if picking dates outside of that range is a little tougher.

A calendar control fits the bill rather well for Poisson dates. If you know the date to-be-picked is most likely in the next 2–4 weeks, you’re golden.

calendar control instead of dropdown control

Rather creatively, Google Flights defaults to you selecting a flight roughly 2 weeks in the future, which is perhaps an opportunity for confusion (“I didn’t choose this!”), but probably a better date to default to, and closer to the hump in the Poisson curve.

Google Flights defaults to hump in Poisson distribution of flight dates

Date text inputs are probably the best option for high-variability dates, where (A) there’s no reason to favor any date over another, meaning (B) all options will be equally difficult to select.

date text input instead of dropdown control
Remember, input[type=date] is your friend… on desktop, at least

If you’re choosing a number…

Numbers come in all kinds of flavors, but you’re most likely to be tempted to use dropdowns when you’re dealing with counts - e.g. the number of tickets, the number of people, the number of rooms, etc.

How often do you need 1 ticket? Plenty.

How often do you need 10 tickets? Not so much.

How often do you need 10,000 tickets? Is this some kind of cruel joke?

For counts of things, you’re also dealing with Poisson distributions, and should use a control that biases towards lower numbers - like a stepper.

stepper control instead of dropdown control

For wide-range numbers (like, say, SSNs), you weren’t going to use a dropdown anyways… I hope.

So can I ever use a dropdown?

Sure.

Remember, they work OK when…

  • Users rarely need to change the default value
  • There are very few options - e.g. only 3 will be visible on the default iOS control
  • The user is not on mobile (whereby many of these problems are mitigated)

The particularly observant among you may have noticed that the Google Flights interface I lauded above actually has three prominent dropdowns!

dropdown controls on google flights
Brilliant detail: on mobile, the 'Economy' dropdown is removed.

They actually do a great job with this. The potential usability issues are swiftly mitigated with:

  • Custom controls that show all options on tap (including on mobile) – and replace 4 dropdowns (for Adults, Children, and Seated Infants and Lap Infants) with 4 steppers in a single dropdown.
  • Removing the “Economy” dropdown on mobile
  • Few options and smart defaults for each control

If you want to print this section out and stick it on your wall, I’ve created a printable cheatsheet of dropdown replacements.

Anyhow. Let’s move on.

3. Pass the Squint Test

If you squint your eyes, the Most Important Thing should catch your eye first - and the least important elements should catch your eye last.

Pop quiz: what does a user need to do to use this page?

(NB: I’ve blurred it out so you have to go by gut instinct, but it’s a data entry form, to give you a hint)

blurred out version of a train ticketing ui

My best guess is two things:

  1. Check any applicable checkboxes (??) in the yellow area
  2. Press the blue “Submit” button

Did you guess the same?

Wrong and wrong.

train ticketing ui violates squint test
  1. The “checkboxes” are actually very small numerical text inputs. (If you already read Anything But Dropdowns, you know Poisson numbers should be steppers)
  2. The Most Important Thing (“Find Options” – which is a very confusing way to say “Submit”, by the way) is gray and unnoticeable. A much less important thing (“Help”) is immediately next to it, but bigger and more visible.

The Squint Test says the Most Important Thing must be the most visible thing. What’s the MIT? The ticket textbox (or stepper 😉) controls and “Submit” button.

If you make it past this page, the next page is even worse.

blurred out version of a train ticketing ui

What will you click: gray button on the left, or identical gray button on the right?

Hope you chose left!

train ticketing ui violating the squint test
In rushing through this form, I actually clicked 'help' first. Oops. My second time on this page, I clicked 'Go Back', having processed that there were an 'Add' and a 'Go Back' button, and in the other 99.999% of (left-to-right language) websites, 'Go Back' is always on the left.

Again, when I squint my eyes and look at the design, I can’t tell what’s important.

Like the Law of Locality and Anything But Dropdowns, the Squint Test is a fairly simple law to enforce. Here’s like a 30-second wireframey redesign.

wireframe redesign of a train ticketing ui to pass the squint test
In a real redesign, I'd also want to consider allowing the user to specify number of tickets ON THIS PAGE. But that's another law for another time.

Does it work?

blurred out version of a train ticketing ui wireframe reddesign to pass the squint test

You tell me. Four radios and a button. And a tiny little link below it.

I’m not trying to pick on AlaskaTrain.com. You see this kind of stuff all over.

Here’s the signup screen for my beloved recommendation-based social network, Foursquare (blurred, of course).

blurred out foursquare ui

How do you actually submit the required data? (i.e. the Most Important Thing)

Hint: it’s hidden in plain text in the upper-right corner.

foursquare ui redesigned to pass squint test

But Foursquare is just following Apple’s design standards here. Unfortunately, violating the Squint Test is a tradition even among industry leaders.

ios calendar app failing the squint test

One way to find the Most Important Thing is to consider what percentage of pageviews will involve a certain action. Here’s flashcard/memorization software Anki analyzed in this way.

action frequency analysis of Anki UI

For every 100 flashcards I view, I will then go on to…

  • Show the answer (approx. 95 times)
  • Navigate back to the list of decks (twice)
  • Start adding cards (twice)
  • Use some other feature (very rarely)
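To make the tallying concrete, here’s a trivial sketch of the same action-frequency analysis (the per-100 figures are the ones above; the 10-per-100 emphasis cutoff is my own arbitrary choice):

```python
# Rank actions by how often they follow a card view; the most frequent
# action is the candidate Most Important Thing.
actions = {"show the answer": 95, "back to deck list": 2, "add cards": 2, "other": 1}

for action, per_100 in sorted(actions.items(), key=lambda kv: -kv[1]):
    verdict = "emphasize" if per_100 >= 10 else "deemphasize, hide, or remove"
    print(f"{action:18s} {per_100:3d}/100 -> {verdict}")
```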

This sort of analysis really hints at what kind of interface would work better here.

  • Emphasize the most-commonly used functionality (at first approximation, “most used” equals “most important”)
  • Deemphasize, hide, or remove the less commonly used functionality
wireframe redesign of Anki UI to pass the squint test

Now this is just a start (I’d want to see if users understood that the unlabelled plus button added cards, for instance). But with just a couple simple heuristics, we’ve reduced a cluttered, confusing interface of 10 UI elements down to just 5. A reduction of… check my math here… 50%.

For more on the Squint Test, check out my YouTube video redesign of the Timezon.es web app. Or, if you don’t have 10 minutes, here’s a scannable, illustrated blog post with the same step-by-step redesign.

4. Teach by example

If you’re introducing users to new concepts, a few examples can be worth 1000 words - which your users wouldn’t read, anyways.

We have a weird tendency to try and explain things in words when examples would be much clearer.

Consider the new startup Teeming.ai, who recently reached out to me to ask about their homepage design. Headlines on the page read:

  • “Teeming takes the isolation out of remote work”
  • “Teeming helps with remote team building” as well as “learning, problem solving, having fun, and motivating each other”
  • “Teeming and video for synchronous [communication]”
  • “Works with all your favorite video platforms”

But here’s my question for you. What does Teeming actually do?

teeming.ai UI doesn't teach by example

It’s tough to tell. I know it has something to do with… good vibes for remote workers? But I have no concrete idea how it would help me, so I wouldn’t otherwise try it, recommend it, etc.

(Sorry Teeming, you know I ❤️ you)

Next, let’s look at IFTTT. Maybe you already know what they do - in which case, pretend you don’t, and try to figure it out from these headlines on their homepage:

  • Automatically light the way for the pizza delivery guy (Domino’s + Hue)
  • Post your photo anywhere and see it everywhere (Instagram + Twitter)
  • Make your voice assistant more personal (Google Assistant + iOS Calendar)
IFTTT UI teaches by example

You don’t have to list too many examples to paint a decently clear picture: IFTTT hooks apps together to do things they couldn’t do alone.

The crazy part is, if you visit their homepage, they first explain it in text:

IFTTT helps your apps and devices work together in new ways. IFTTT is the free way to get all your apps and devices talking to each other. Not everything on the internet plays nice, so we’re on a mission to build a more connected world.

YAAAAWN.

My question: which gives you a better idea of the app? The examples, or the description? 🤔

I think it’s the examples. The description only resonates once I see a few examples of how it can help me.


But examples aren’t just for landing pages. Here’s what you see when you first sign into project management tool Basecamp.

Basecamp UI teaches by example

Rather than seeing a totally blank page, you see two obviously pre-fabricated example projects that teach you, by example, how the whole app works (and also gives you an idea of what the tool will look and feel like when you’ve been using it a while).

Seriously, I can browse through fake chat logs by fake users discussing fake file uploads and fake to-do items.

Basecamp UI teaches by example

There’s even a friendly… mountain?… telling me I can watch a 2-minute explanatory video about this sample project.

And thank you, Mr. Mountain, for the lead-in: providing videos showing usage is another way of teaching by example! Not only does the sample project model teach by example what my projects will look/feel like, but the video teaches by example what it looks like to use the software.

Brilliant.

If your app allows users to create something, a showcase is a great way to teach by example just what’s possible.

The beloved painting app Procreate won an Apple Design Award, the App Store Editor’s Choice, the App Store Essential awards, and John Gruber called it “groundbreaking”, etc. – and yet none of this is as viscerally exciting as seeing what you can create with it.

Procreate gallery UI teaches by example
This ain't no ordinary painting app.

Whoa.

That’s no MS Paint.

The showcase is a powerful tool for making it clear just what’s possible with your app.

So: if your app does something new and unfamiliar – or relies on new and unfamiliar concepts – you should get acquainted with the ways of teaching by example. The moment you realize that you’re introducing something users won’t have seen before, you should start thinking: how can I give an example to make this clearer?

In review, my favorite ways of doing this:

  1. On any page that tries to get the user to use a feature/app/etc., show examples of what they can do with your tool
  2. Use the “first load” experience to provide sample data, showing by example what the properly-working app will look like
  3. Strategically inject help content (like articles, videos, or tooltips) inline with the feature, showing how to use it
  4. Does your app allow users to create something? Include a user-submitted gallery of examples to spur imaginations

Make sense? Let’s call it a day.

Alright, that wraps things up.

There are plenty more rules for “speaking interface” that I cover in my video course Learn UX Design, but these are some of the ones that I’ve used the most over the years. If you like these, check out more of my design writing on the Design Newsletter, where I send occasional, original design writing – as well as updates when Learn UX Design is open for enrollment.


Saved to
UX
almost 3 years ago

Whenever a new technology emerges, the first priority for a technologist is to understand the implications of adopting it. Serverless architecture is a case in point.

Unfortunately, much of the current literature around serverless architecture focuses solely on its benefits. Many of the articles — and the examples they use — are driven by cloud providers, so, unsurprisingly, they talk up the positives. This write-up attempts to give a better understanding of the traits of serverless architecture.

I’ve deliberately chosen the word trait, and not characteristic, because these are the elements of serverless architecture that you can’t change. Characteristics are malleable; traits are inherent. Traits are also neutral: a trait is neither positive nor negative. In some cases, a trait I describe may have positive connotations, but I’ll be keeping a neutral head here so that you understand what you’ll be facing.

Because traits are inherent, you’ll have to embrace them, not fight them – such attempts are likely to prove quite costly. Characteristics, on the other hand, need effort spent to mold them, and you can still get them wrong.

I should also point you to this article by Mike Roberts — who also explores the traits of serverless services. Even though we are sharing the same terminology here, it’s useful to note that this article is looking into the traits of your architecture, not the services that you use.

This article doesn’t aim to help you understand all of the topics in depth, but to give you a general overview of what you are in for: the traits of serverless architecture. The first of these traits is a low barrier to entry.

It’s relatively straightforward to start getting your code running in a serverless architecture. You can follow any tutorial to get started and get your code running in a production-grade ecosystem. In many ways, the learning curve for serverless architecture is less daunting than that for typical DevOps skills — many of the elements of DevOps aren’t necessary when you adopt serverless architecture. For instance, you won’t have to pick up server-management skills like configuration management or patching. This is why low barrier-to-entry is one of the traits of serverless architecture.
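As a concrete illustration of the low barrier to entry, here’s roughly what a complete serverless ‘application’ can look like – a single Python AWS Lambda handler (the event shape assumes an API Gateway proxy integration, and the names are just examples):

```python
# A minimal AWS Lambda handler. Deploying this one function yields a
# managed, auto-scaled endpoint with no server to provision or patch.
import json

def handler(event, context):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```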

Initially, then, developers face a lower learning curve than with many other architectural styles. That doesn’t mean the curve stays low; indeed, the overall learning curve gets steeper as developers continue along their journey.

As a result of this architectural trait, I’ve seen many new developers on-boarded onto projects very quickly, and they’ve been able to contribute effectively to the project. This ability for devs to get up to speed quickly might be one reason that serverless projects have a faster time to market.

As we noted, though, things do get more complex. For instance, infrastructure as code, log management, monitoring, and sometimes networking are still essential, and you have to understand how to achieve them in the serverless world. If you’re coming from a different development background, there are a number of serverless architecture traits — covered in this article — that you need to understand.

Despite the initial low barrier to entry, devs shouldn’t assume that they can ignore important architectural principles.

One thing I’ve noticed is the tendency for some developers to think that serverless architecture means they don’t have to think about code design, the justification being that they’re just dealing with functions, so code design is irrelevant. In fact, design principles, like SOLID, are still applicable — you can’t outsource code maintainability to your serverless platform. Even though you can just bundle up and upload your code to the cloud to make it run, I strongly discourage doing this, as Continuous Delivery practices are still relevant in a serverless architecture.

Hostless

One of the obvious traits of serverless architecture is the idea that you are no longer dealing with servers directly. In this age, where you have a wide variety of hosts on which you can install and run a service — physical machines, virtual machines, containers, and so on — it’s useful to have a single word to describe this. To avoid using the already overloaded term serverless, I’m going to use the word host¹ here, hence the name of the trait: hostless.

One advantage of being hostless is that you’ll have significantly less operational overhead for server maintenance. You won’t need to worry about upgrading your servers, and security patches will be applied for you automatically. Being hostless also means you’ll be monitoring different kinds of metrics in your application, because most of the underlying services you use won’t publish traditional metrics like CPU, memory, or disk size. This means you no longer have to interpret the low-level operational details of your architecture.

But different monitoring metrics mean that you’ll have to relearn how to tune your architecture. AWS DynamoDB exposes read and write capacity for you to monitor and tune, a concept you’ll have to understand — and the learning isn’t transferable to other serverless platforms. Each of the services you use will also have its own limitations. AWS Lambda has a concurrent-executions limit, rather than a number of CPU cores; to make it a little quirkier, changing your Lambda’s memory allocation changes the number of CPU cores you get. If you share one AWS account between your performance-testing and production environments, you might bring production down when a performance test unexpectedly consumes your entire concurrent-execution limit. AWS documents the limits for each of these services pretty well, so make sure you check them so that you can make the right architecture decisions.
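For example, here’s a minimal sketch (the function name is hypothetical, and boto3 credentials are assumed to be configured) of one mitigation: capping a single function’s concurrency so a test or runaway workload can’t drain the shared account-wide pool.

```python
import boto3

lambda_client = boto3.client("lambda")

# Reserve (and cap) concurrency for one function. This guarantees the
# function up to 100 concurrent executions while preventing it from
# consuming the rest of the account-wide concurrency pool.
lambda_client.put_function_concurrency(
    FunctionName="checkout-handler",  # hypothetical function name
    ReservedConcurrentExecutions=100,
)
```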

It’s a common misconception that serverless applications are more secure as security patches are applied to your underlying servers automatically. It’s a dangerous assumption to make.

Traditional security protections won’t all apply, as serverless architecture has different attack vectors. Your application security practices still apply, and storing secrets in your code is still a big no-no. AWS has outlined this in its shared responsibility model, where, for example, you still need to secure your data if it contains sensitive information. I highly encourage you to read the OWASP Serverless Top 10 for more insight on this topic.

While you have significantly less operational overhead, it’s worth noting that in rare cases you’ll still need to manage the impact of changes to the underlying servers. Your application might rely on native libraries, and you will need to ensure that they still work when the underlying operating system is upgraded. In AWS Lambda, for example, the OS has recently been upgraded to AMI 2018.03.

Stateless

Functions as a Service (FaaS) are ephemeral, so you can’t store anything in memory: the compute containers running your code will automatically be created and destroyed by your platform. Stateless is, therefore, a trait of serverless architecture.

Being stateless is a good trait for scaling applications horizontally. The idea is that you’re discouraged from storing state in your application; by not doing so, you can spin up more instances, without worrying about application state, to scale horizontally. What I find interesting here is that you’re actually forced to be stateless, so the room for error is greatly reduced. Yes, there are caveats — compute containers may be reused, and you can store state in them — but if you take that approach, proceed with care.
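As a minimal sketch of what “not storing state in your application” looks like in practice (the table name and event shape here are hypothetical), any state that must outlive the invocation is written to an external store rather than kept in the process:

```python
import boto3

# State lives in an external store (DynamoDB here), not in the function.
# A module-level cache would only survive as long as this container does,
# and containers are created and destroyed at the platform's discretion.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("user-carts")  # hypothetical table name

def handler(event, context):
    # Persist the update externally instead of mutating in-memory state.
    table.put_item(Item={
        "user_id": event["user_id"],
        "cart": event["cart"],
    })
    return {"statusCode": 200}
```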

In terms of application development, you won’t be able to use technology that requires state, as the burden of state management is forced onto the caller. HTTP sessions, for example, can’t be used, as you don’t have a traditional web server with persistent file storage. If you want to use a technology that requires state, like WebSockets, you have to wait until it’s supported by the corresponding Backend as a Service, or apply your own workaround.

Elasticity

As your architecture is hostless, it will have the trait of being elastic as well. Most serverless services you use are designed to be highly elastic: you can scale from zero to the maximum allowed, then back to zero, with scaling mostly managed automatically. Elasticity is a trait of serverless architecture.

The benefit of being elastic is huge for scalability. It means you won’t have to manage resource scaling manually, and many challenges of resource allocation disappear. In some cases, being elastic also means you’ll pay only for what you use, lowering your running costs if you have a low usage pattern.

Chances are you’ll have to integrate your serverless architecture with legacy systems that don’t support such elasticity. You might break your downstream systems when this occurs, as they may not be able to scale as well as your serverless architecture. If your downstream systems are critical, it’s important to think about how you’re going to mitigate this issue — perhaps by limiting your AWS Lambda concurrency, or by using a queue to talk to your downstream systems, as sketched below.
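A minimal sketch of the queue idea (the queue URL and payload are hypothetical): the elastic function only enqueues work, and a separately concurrency-capped consumer drains the queue at a rate the legacy system can absorb.

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/legacy-orders"  # hypothetical

def handler(event, context):
    # Instead of calling the legacy system directly (and scaling past what
    # it can handle), buffer the work in a queue. A consumer with capped
    # concurrency then drains it at a sustainable rate.
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"order_id": event["order_id"]}),
    )
    return {"statusCode": 202}
```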

While ‘denial of service’ is going to be more difficult with such high elasticity, you’ll be vulnerable to ‘denial of wallet’ attacks instead. This is where an attacker attempts to break your application by forcing you to exceed your cloud account limits, driving up your resource allocation. To prevent this attack, you may find it helpful to add DDoS protection, such as AWS Shield, to your application. In AWS, it’s also useful to set AWS Budgets, so that you’re notified when your cloud bill is skyrocketing. And if high elasticity isn’t something you’re expecting, it’s useful, again, to set a constraint on your application — such as by limiting your AWS Lambda concurrency.

Distributed

As stateless compute is a trait, all of your persistence requirements will be stored in a backend as a service (BaaS), typically a combination of them. Once you embrace FaaS more, you’ll also discover that your deployment units — the functions — are smaller than you may be used to. As a result, serverless architecture is distributed by default, and there are many components you have to integrate with over the network. Your architecture will also consist of wiring together services like authentication, database, distributed queue, and so on.

There are many benefits of distributed systems, including elasticity, as we’ve previously discussed. Being distributed also brings single-region high availability to your architecture by default. In the serverless context, when one availability zone in your cloud vendor’s region fails, your architecture can utilize the availability zones that are still up — all of it invisible from the developers’ perspective.

There is always a trade-off in your choice of architecture. With this trait, you are trading away consistency for availability. Typically in the cloud, each serverless service also has its own consistency model. In AWS S3, for example, you get read-after-write consistency for PUTs of new objects in your S3 bucket; for object updates, S3 is eventually consistent. It’s quite common to have to decide which BaaS to use, so watch out for the behavior of its consistency model.

The other challenge is becoming familiar with distributed message delivery methods. You need to understand the hard problem of exactly-once delivery, for example, because the common delivery method for a distributed queue is at-least-once delivery. An AWS Lambda can be invoked more than once due to this delivery method, so you have to make sure that your implementation is idempotent (it’s also important to understand your FaaS retry behavior, where an AWS Lambda may be executed more than once upon failure); a minimal sketch of an idempotent handler follows below. Other challenges you’ll need to understand include the behavior of distributed transactions. The learning resources for building distributed systems, however, are improving all the time, as the popularity of microservices blooms.
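One common way to implement idempotency is a conditional write against a dedupe table — a minimal sketch, assuming a hypothetical DynamoDB table `processed-events` keyed on `event_id`:

```python
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource("dynamodb")
processed = dynamodb.Table("processed-events")  # hypothetical dedupe table

def do_work(event):
    """Placeholder for the actual (side-effecting) processing."""
    print("processing", event["event_id"])

def handler(event, context):
    try:
        # The conditional write succeeds only the first time this event_id
        # is seen, so a redelivery of the same message becomes a no-op.
        processed.put_item(
            Item={"event_id": event["event_id"]},
            ConditionExpression="attribute_not_exists(event_id)",
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return {"status": "duplicate, skipped"}
        raise
    do_work(event)
    return {"status": "processed"}
```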

Event-driven

Many of the BaaS provided by your serverless platform will naturally support events. This is a good strategy for third-party services to provide extensibility to their users, as you won’t have any control over the code of their services. As you’ll be utilizing a lot of BaaS in your serverless architecture, your architecture is event-driven by trait.

Even though your architecture is event-driven by trait, that doesn’t mean you need to fully embrace an event-driven architecture. However, I have observed that teams tend to embrace event-driven architecture when it’s naturally provided to them. This is similar to elasticity as a trait: you can still opt out of it.

Being event-driven brings many benefits. You’ll have a low level of coupling between your architecture components. In serverless architecture, it’s easy for you to introduce a new function that listens to a change in a blob store:

Figure 1: Adding new serverless functions

Notice how function A is not changed when you add function B (see figure 1). This increases the cohesiveness of your functions. There are many benefits to having highly cohesive functions; one is that you can easily retry a single operation when it fails. Retrying function B when it fails means you don’t need to re-run the expensive function A.

Especially in the cloud, vendors make sure that their FaaS integrates easily with their BaaS; FaaS is designed to be triggered by event notifications.
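For instance, the function B from the figure could be as small as this sketch, which handles the S3 “object created” notification shape that Lambda receives (the processing itself is left as a placeholder):

```python
# A function wired to S3 "object created" notifications. Note that the
# code that writes the objects needs no change for this function to exist.
def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"new object: s3://{bucket}/{key}")  # real work would go here
```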

The downside of an event-driven architecture is that you may start to lose the holistic view of the system as a whole, which makes the system challenging to troubleshoot. Distributed tracing is an area you should look into, even though it’s still maturing in the serverless world. AWS X-Ray is a service that you can use out of the box in AWS. X-Ray comes with its own limitations, and if you’ve outgrown it, watch this space, as third-party offerings are emerging. This is also why the practice of logging correlation IDs is essential, especially where you’re using multiple BaaS in a transaction — so do make sure you implement correlation IDs.

There are six traits of serverless architecture covered in this article: low barrier-to-entry, hostless, stateless, elasticity, distributed, and event-driven. My intention is to go as broad as possible so that you can adopt serverless architecture well. Serverless architecture has brought an interesting paradigm shift, which makes a lot of software development aspects better. But it also introduces new challenges that technologists have to get comfortable with. I’ve also included brief recommendations on how to tackle the challenges each trait brings, so hopefully those challenges won’t stop you from adopting serverless architecture.

Thanks to James Andrew Gregory and Troy Towson for a thorough review of this article. Thanks to Gareth Morgan for proofreading and copy-editing this article.

1: The term host is used in the book Building Microservices.

Saved to
Serverless
almost 3 years ago
Saved to
Zettlekasten Productivity
almost 3 years ago

Real World Location Virtually Recreated to Scale in Minutes

This is a pretty impressive tech demonstration from 6D.AI, a startup that's creating a 3D mirror world of real life from the input of standard smartphones.

"Building a 3D model of the world means fusing crowdsourced scans into huge models and here's a demo of that in action," 6D CEO Matt Miesnieks explains. "The model is a SLAM map, it improves [with] every scan and map coordinates are real world coordinates." In this particular demo, he goes on, "The ~5000 square foot space took 3 minutes for 4 people to scan.... these are regular phones, no depth cameras involved. Scans can be from one phone doing partial scans over many days, or lots of people scanning at once. The fused model will figure itself out and assemble based on any new scan that overlaps a previous scanned area."

The implications are pretty amazing and open-ended: This potentially means that, say, a whole city block could be slurped up and imported into a virtual world/MMO in a matter of days or weeks. And if the smartphone capture works as well as presented, the potential is far more staggering:

Volumetric mirror world mapping 6D_AI
Get a few million people around the world using this app, uploading their travels as they go, and an exact mirror of the entire planet could be modeled in a year or two.

Hat tip: Philip Rosedale, who sees this as a huge breakthrough for mixed reality projects: "This is the key missing piece to make augmented reality useful," as he puts it. "Accurate positioning when indoors, by comparing volumetric scans to a stored database."

Saved to
AR VR
almost 3 years ago

Theory of Constraints 101: Applying the Principles of Flow to Knowledge Work

The Theory of Constraints is deceptively simple. It starts out proposing a series of “obvious” statements. Common sense really. And then before you know it, you find yourself questioning the fundamental tenets of modern business and society.

Eliyahu Goldratt laid out the theory in his 1984 best-selling book The Goal. It was an unusual book for its time — a “business novel” — telling the story of a factory manager in the post-industrial Midwest struggling with his plant. The problems this manager faces are universal, of course, and not only for manufacturing. For 30 years now, readers have recognized their own situation in this fictional story. This flash of recognition is the hook drawing you deeper into the TOC world.

The first statement that TOC makes is that every system has one bottleneck tighter than all the others, in the same way a chain has only one weakest link.

For some systems you can see it plainly: the cars always slowing to a stop in that same section of freeway; that doorway in the office where everyone’s paths seem to converge; that one curve of your plumbing that you can’t seem to keep unclogged.

For other types of systems, it’s less obvious. It helps to broaden the concept from bottlenecks, which describe material flows, to constraints of any kind.

What is the constraint preventing a coffee shop from serving more customers? It’s not the size of the doorway. It could be the rate of cappuccino preparation, the speed of credit card approval, or the number of people wanting coffee in that place and time for that price. It’s not always easy to tell where the constraint actually is. But we know it’s there, or else the shop could serve an infinite number of customers at infinite speed.

What is the constraint keeping the human body, let’s take Usain Bolt’s for example, from running faster? It could be his body’s ability to metabolize glucose or oxygenate his muscles; or his shoe’s grip on the track surface; or a limiting belief somewhere in the depths of his mind.

Clearly, it gets even harder to find the constraint once we enter the world of the abstract, the psychological, and the immaterial. But we’re getting ahead of ourselves.

The second statement is that the performance of the system as a whole is limited by the output of the tightest bottleneck or most limiting constraint.

In other words, if the water in a pipe is slowed down to a trickle by a narrow section, the outflow at the end of the pipe will be no more than a trickle. This one is harder to see intuitively, because it is obscured by the messiness of the systems we typically interact with.

That coffee shop cannot serve customers one iota faster than the speed of its cash register (if that is the constraint). Usain Bolt cannot shave one microsecond off his time without increasing his proportion of fast twitch muscle fibers (if that’s his most limiting constraint).

The third statement follows from the first two, but is the hardest to swallow. It is the red pill of TOC: if the first and second assumptions hold, then the only way to improve the overall performance of the system is to improve the output at the bottleneck (or more broadly, the performance of the constraint).

Using the coffee shop example, if the most limiting constraint is the cash register, literally nothing else will make a difference to the bottom line except an improvement in cash register speed. Not better customer service, not higher quality food, not better interior decoration, not faster WiFi or cleaner bathrooms or stronger coffee or any one of the million other ideas we could come up with in a freewheeling Design Thinking workshop. Any improvement not at the constraint is an illusion, for the same reason there’s no way to strengthen a chain without strengthening its weakest link.
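The arithmetic behind this claim is easy to check. Here’s a tiny illustrative sketch (the stage rates are invented): a serial system’s throughput is the minimum of its stage rates, so improving any non-constraint stage changes nothing.

```python
# Throughput of a serial pipeline is set by its slowest stage.
def throughput(stage_rates):
    return min(stage_rates)

shop = {"order": 60, "register": 20, "espresso": 35}  # customers/hour, made up

print(throughput(shop.values()))   # 20 -- the register is the constraint

shop["espresso"] = 70              # improve a non-constraint stage...
print(throughput(shop.values()))   # still 20 -- no change at all

shop["register"] = 30              # improve the constraint...
print(throughput(shop.values()))   # 30 -- the whole system improves
```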

Now think about how a typical company operates. The CEO announces it’s time for the company to improve. That command gets translated down the ranks, each manager impressing upon his team the importance of their individual efforts. Each person hears what they want to hear: the accountants understand that they must improve the usefulness of the books they keep (with each person interpreting “usefulness” differently). The software devs nod in agreement that better code is crucial (with each person interpreting “better” differently). The marketing people agree that more creative promotions are the only solution (with no one bothering to define “creativity”).

Each person marches off on their personal mission, nose to the grindstone, performance bonuses on the horizon, without realizing that their collective efforts imply a management philosophy:

Local improvements everywhere automatically translate into the global improvement of the organization.

John Maynard Keynes once said, “Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.”

What if this implied management philosophy was dead wrong? What if, by believing that we can improve a system as a whole by individually improving each part, we are living and working according to an economic paradigm that has been defunct for decades?

That is what the Theory of Constraints proposes, and the problem it seeks to resolve.

Sign up here for a free 30-day trial of the new Praxis blog, or subscribe to the newsletter to receive notifications of free articles. You can also follow us on Twitter, Facebook, LinkedIn, or YouTube.

Saved to
Theory of constraints Tiago Forte
almost 3 years ago

The Genius Neuroscientist Who Might Hold the Key to True AI

Karl Friston’s free energy principle might be the most all-encompassing idea since Charles Darwin’s theory of natural selection. But to understand it, you need to peer inside the mind of Friston himself.

When King George III of England began to show signs of acute mania toward the end of his reign, rumors about the royal madness multiplied quickly in the public mind. One legend had it that George tried to shake hands with a tree, believing it to be the King of Prussia. Another described how he was whisked away to a house on Queen Square, in the Bloomsbury district of London, to receive treatment among his subjects. The tale goes on that George’s wife, Queen Charlotte, hired out the cellar of a local pub to stock provisions for the king’s meals while he stayed under his doctor’s care.

More than two centuries later, this story about Queen Square is still popular in London guidebooks. And whether or not it’s true, the neighborhood has evolved over the years as if to conform to it. A metal statue of Charlotte stands over the northern end of the square; the corner pub is called the Queen’s Larder; and the square’s quiet rectangular garden is now all but surrounded by people who work on brains and people whose brains need work. The National Hospital for Neurology and Neurosurgery—where a modern-day royal might well seek treatment—dominates one corner of Queen Square, and the world-renowned neuroscience research facilities of University College London round out its perimeter. During a week of perfect weather last July, dozens of neurological patients and their families passed silent time on wooden benches at the outer edges of the grass.


On a typical Monday, Karl Friston arrives on Queen Square at 12:25 pm and smokes a cigarette in the garden by the statue of Queen Charlotte. A slightly bent, solitary figure with thick gray hair, Friston is the scientific director of University College London’s storied Functional Imaging Laboratory, known to everyone who works there as the FIL. After finishing his cigarette, Friston walks to the western side of the square, enters a brick and limestone building, and heads to a seminar room on the fourth floor, where anywhere from two to two dozen people might be facing a blank white wall waiting for him. Friston likes to arrive five minutes late, so everyone else is already there.

His greeting to the group is liable to be his first substantial utterance of the day, as Friston prefers not to speak with other human beings before noon. (At home, he will have conversed with his wife and three sons via an agreed-upon series of smiles and grunts.) He also rarely meets people one-on-one. Instead, he prefers to hold open meetings like this one, where students, postdocs, and members of the public who desire Friston’s expertise—a category of person that has become almost comically broad in recent years—can seek his knowledge. “He believes that if one person has an idea or a question or project going on, the best way to learn about it is for the whole group to come together, hear the person, and then everybody gets a chance to ask questions and discuss. And so one person’s learning becomes everybody’s learning,” says David Benrimoh, a psychiatry resident at McGill University who studied under Friston for a year. “It’s very unique. As many things are with Karl.”

At the start of each Monday meeting, everyone goes around and states their questions. Friston walks in slow, deliberate circles as he listens, his glasses perched at the end of his nose, so that he is always lowering his head to see the person who is speaking. He then spends the next few hours answering the questions in turn. “A Victorian gentleman, with Victorian manners and tastes,” as one friend describes Friston, he responds to even the most confused questions with courtesy and rapid reformulation. The Q&A sessions—which I started calling “Ask Karl” meetings—are remarkable feats of endurance, memory, breadth of knowledge, and creative thinking. They often end when it is time for Friston to retreat to the minuscule metal balcony hanging off his office for another smoke.

Saved to
Free Energy Principle Karl Friston
almost 3 years ago
Saved to
Language
almost 3 years ago

The root cause of nearly all bad planning processes is lack of understanding of roles.

Saved to
Project Management
almost 3 years ago

I cannot believe it took me this long to find this out about myself. How could I have had the same brain my whole life and yet no major life complications until… every single life complication all at once with the volume cranked up? The first 75% of my life: swell. The next 8%: absolute crap. The most recent 17%: harnessing that crap for good.

I don’t twiddle my pencil. I’m not hyper. I don’t engage in reckless behaviors. I am a full-grown woman. And, yes, I have ADHD.

It took me 3 years to figure out I had attention deficit hyperactivity disorder (ADHD). Actually 35, if you start from the very beginning. And then 6 more (and counting) to know what to do with it.

I’ll get right to it and start with the climax that marks the start of finding out I had ADHD, at last: I went nuts.

What I mean by nuts is that my mind, generally a pretty likable place where you might find birds chirping and lots of plants in brightly painted pots, became unrecognizable. It became a place I wanted to avoid – my birds silent, my plants putrid.

I became perpetually nervous, my heart beating rapidly in the way hearts are only supposed to do at the starting (or finishing) line of a race. I was struggling to get through my workdays, uncertain of how much longer I’d be able to fake not being on the brink of losing it. My sleep was crap. Since my body was constantly worked up, my appetite waned; eating became forced.

[Self Test: ADHD Symptoms in Women and Girls]

My thoughts raced. Everything was hard. Even figuring out how to spend my time became this big goliath of a task. I was wilting and scared as hell about it. Scared as hell, specifically, that the men from the psychiatric ward, armed with gauze and a gurney, were going to show up at my doorstep any day to wheel me away from my life.

Now that you have a handle on the low that led to my ADHD diagnosis, I’m going to start at the beginning.

A child of the ‘80s and a first-born do-gooder, I was fortunate enough to thrive in the classic, straightforward classroom of my childhood. Because I liked learning and I liked gold stars and I liked all opportunities to socialize, there was never a moment for me when school felt dreadful. Luckily, my report cards revealed my school ease; I was an Honor Roll sort of gal.

In college it was more of the same, plus a new path to earn success: 11th-hour victories. I became an epic procrastinator. During intended study sessions in the library, I almost always abandoned my work at first opportunity to socialize in whispers with fellow distractors. As a result, I relied almost entirely on charged bolts of inspiration under my dorm room desk lamp within hours of deadlines. And I almost always struck gold.

There were no problems, World.

I was still on track, competent, and confident.

After graduation, I was still rocking through life, except now — with my job charging me with lots of event planning and orchestration of details — I started feeling like I had half a brain. It was taking me way longer to do stuff than it seemed my co-workers would take to do the same stuff. I took a lot home. I worked more hours. I couldn’t help but feel wildly inefficient, even though I was paddling underwater twice as fast.

Then came the speeding tickets. Around the same time, on road trips to and from visits to my hometown, I got ticketed however many times it takes to be within an inch of having your license revoked. To slap my wrist prior to it getting to that, I earned a seat in a tutorial driving class. Except that I opted for the alternative self-guided option: they sent me an instructional DVD with a paper test. I got the test back to them; I had to pay for a replacement DVD (because I most certainly lost my copy).

I’ll spare the smaller details, but here are some other highlights:

  • Despite having graduated college with a degree in mathematics, my checkbook-balancing deficiencies had me pleading regularly with bank representatives to waive overdraft fees.
  • My go at serving tables at a restaurant was short-lived: I couldn’t answer questions about the menu under pressure and diners kept asking me for things while I was getting other diners’ things – the nerve.
  • I once paid to have my car, which wouldn’t start, towed to the mechanic only to find out that I had simply run out of gas.
  • The era of cell phones had begun and in pretty much every single situation when I had need to use mine, it was almost always reliably dead: remembering to charge things was way above my operating level.
  • I apologize to Mother Earth for the countless extra loads of laundry I did, necessary because of how soured my clothes would get left sitting in the washing machine for too many days.
  • More and more, simple communication would fail me — like there was a barrier between all my juicy intelligence and the words to share it. My fiancé and I developed language for this: When I got stuck, I’d just say, “I can’t find my words,” with a sigh.
  • My wedding weekend was an absolute miracle. To everyone who helped pull it off and might be reading this: Thanks. The planning sure as hell can’t be credited to me.
  • Speeding tickets. Did I mention speeding tickets?

So, while all of these realities were going on in the background, the foreground of my life had been very affirming: I was a woman who was educated, employed, married, and even keeping a small child alive. With flying colors, I might add.

When and why did I go nuts, then?

I mean, I suppose it was gradual. But if I had to pinpoint – in retrospect — I would say the trigger was the second kid and then definitely the third kid (and then most definitely the fourth). Doing the wife thing and the house management thing and the working thing and the one kid thing was what my neurological makeup could handle.

After layering in additional kiddos, my “engine – despite its strength – couldn’t pull the weight of life any longer with all those flat tires.” (Those are not my words. They are the words of the ADHD testing specialist responsible for diagnosing me. The engine is my brain. The flat tires are the challenges my ADHD puts before me. The weight is all my responsibilities, including needy babes.)

And for me, it wasn’t just that my vehicle’s speed slowed. And it wasn’t just that it was protesting with grunts, sputters, and grumbles.

It fully blew out.

My interior world went with it… to that overwhelmed, panicky, scary place. There was a growing disparity between what was required of me and what I was capable of, and fear was more than eager to fill the space. Not surprisingly, my feelings of competency, confidence, and self-reliance hit the road, too. I doubted myself more and more, trusted myself less and less, resorted to hiding more and more, and became smaller and smaller and smaller.

Except, and this is important to make clear, I didn’t have knowledge that that last paragraph was what was actually happening.

What I thought was happening: I was going nuts.

Now I’d like to point out that there are many different launch pads that can propel one to a place of impairing anxiety and bottomed-out wellness like mine at that time. And believe me, in the beginning a couple of therapists and I explored every single one. We poked around in my childhood for trauma, dabbled with the possibility of grief from some losses in my life, tried to make Acute Adjustment Disorder fit due to several cross-country moves in a short period of time, and thought we’d struck gold with much of what I was experiencing fitting post-partum symptoms.

It took a cunning ear from Therapist Number Three to hear the quiet whispers of ADHD through all my squabbling. It was she who suggested the ADHD testing, and – even though I was stubbornly resistant to this discovery of hers (“No way! I did great in school! I was never out of control! ADHD is the picture of someone else, NOT me!”) – that therapist stuck with it. She nudged me further and further away from denial and imprinted upon me that my neurological deficits might be exactly what was painting the dark picture of my days.

Fast-forward to now: Since that day in the ADHD testing office when the doc used car imagery to explain in layman’s terms that I had Inattentive ADHD (the kind without the H – that is to say without the hyperactivity – which is much more nuanced and difficult to uncover), I’ve committed to learning about it like a PhD student. I have books and articles all around my house (and I’d show you, if only I could find them). My brain and I have become incredibly well-acquainted. I’ve devised, executed, and abandoned at different times innumerable systems to organize better, time manage better, file better, decrease distractions better, meal plan better… you name it.

I’ve tried ADHD medications. I’ve stopped medications. I’ve tried them again. I’ve sharpened the fine art of self-care, waxing and waning the frequency of my massages, naps, meditations, outsourced house cleanings, journaling, babysitters, and exercise based on how my engine is handling my tires. I’ve seen therapists and ADHD life coaches and attended local CHADD chapter meetings. And I’ve definitely prayed.

And I’m happy to say that I’m not worried about the loony bin anymore.

It’s also certainly not perfect. As my adult-ADHD-specialized psychiatrist recently said, “We’re not looking for a silver bullet here, but how about we aim for a bronze one?” Bronze for me is that I finally can place my anxiety and mood disorder and wilty, songless interior life – whenever they show up again – as byproducts of my cognitive challenges. I can see that I’m working too hard and my mind is bucking. And – pretty importantly – that I’m not nuts.

Most of all — and what I want to communicate with fervor here — I cannot believe it took me this long to find this out about myself. How could I have had the same brain my whole life and yet have no major life complications result from it until major complications started resulting from it? First 75% of my life: SWELL. Next 8%: WENT TO CRAP. Most recent 17%: HARNESSING THAT SHIT.

It certainly makes me want to be for other young women what Therapist Number Three was for me (inattentive ADHD is most common in females and, since it does not show up in behavioral or scholastic ways in school – at least in the beginning – is often overlooked). It makes me want to crack open every youngster’s head and help expose any invisible learning disabilities lingering in there. It makes me want to educate all teachers, parents, coaches, relatives about what signs might point to ADHD in the kids they hang with, even when nothing dramatic is yet going on.

Basically, I’d like for flat tires to be known entities by our young generation of vehicles… long before — like me — a blow-out does the revealing.

  • Lady who wasn't diagnosed til later in life.
  • First place I realised difference between ADD & ADHD

"My job charging me with lots of event planning and orchestration of details — I started feeling like I had half a brain. It was taking me way longer to do stuff than it seemed my co-workers would take to do the same stuff. I took a lot home. I worked more hours. I couldn’t help but feel wildly inefficient, even though I was paddling underwater twice as fast."

"First 75% of my life: SWELL. Next 8%: WENT TO CRAP. Most recent 17%: HARNESSING THAT SHIT."

Saved to
ADHD
almost 3 years ago

Open Source: From Community to Commercialization - Andreessen Horowitz

Editor’s Note: The open source software (OSS) movement has created some of our most important and widely used technologies, including operating systems, web browsers, and databases. Our world would not function, or at least not function as well, without open source software.

While open source has delivered amazing technological innovation, commercial innovation – most recently and notably the rise of software-as-a-service – has been just as vital to the success of the movement. And since open source is by definition software that is free for anyone to use, modify, and distribute, open source businesses have required different models and a different go-to-market than other kinds of software companies.

Peter Levine has been working with open source as a developer, entrepreneur, and investor for more than thirty years. He recently gave a talk called “Open Source: From Community to Commercialization” (you can download the full presentation here), that drew on his own experiences as well as interviews with dozens of open source experts. Below is a written version of that presentation, in which Peter tracks the rise of open source software and provides a practical, end-to-end framework for turning an open source project into a successful business.

Open source started as a fringe activity, but has since become the center of software development. One of my favorite anecdotes from the early days of open source – when it wasn’t yet even called open source, and was just “free software” – is around the Unix command biff, which informs a user when a new email arrives. The command was named after a hippie’s dog in Berkeley that would run out to bark at the mailman. I love this anecdote because open source was so on the fringes in the early days that an important command was named after a developer’s dog.

I’ve been working with open source for many decades, originally as a developer at MIT’s Project Athena and the Open Software Foundation, then as the CEO of the open source company XenSource. And for the past 10 years, I’ve been on the boards of a number of companies involved with open source. From developer to board member, I’ve seen the evolution of open source and watched as large companies have been built on the foundation of open source projects.

I really believe that there has never been a better time to build an open source business. Commercial innovation is essential to the open source movement and here I provide a framework for taking open source products to market.

The Open Source Renaissance is Underway

The last 10 years have been a renaissance for open source. Over the past 30 years, approximately 200 companies have been founded with open source as the core technology. Collectively, these companies have raised over $10B in capital, with a trend in the last 10 years towards bigger and bigger deals. In fact, three-quarters of the companies, and 80% of the capital raised, have come after 2005. And I think we are just at the start of this renaissance.

These investments are leading to bigger IPOs and larger M&A deals.


It’s interesting to note that, in 2008, MySQL was purchased by Sun Microsystems (which Oracle later acquired) for $1B. At that time, I was convinced that $1B represented the maximum that any open source company would ever get. It was the bar for many years, and $1B was seen as a tremendous exit by the software industry, which viewed open source as a commodity.

But look at what happened in the last few years. We have Cloudera, MongoDB, Mulesoft, Elastic, and GitHub, all part of multi-billion-dollar IPOs or M&A deals. Then, of course, there is RedHat: in 1999, it went public at $3.6B, and this year it was sold to IBM for $34B. In the future, I’m excited to see what new bars will be set.


Open source is also expanding to more areas of software. Traditionally, OSS was mostly developed around enterprise infrastructure, such as databases and operating systems (e.g. Linux & MySQL). With the current renaissance, active OSS development is occurring in almost every industry – fintech, ecommerce, education, cybersecurity and more.

So what is behind this renaissance? To understand that, let me take a walk down memory lane and the history of open source.

The History of Open Source from Free to SaaS


Open Source 0.0 – The “Free Software” era
Open source started in the mid-70s, and programmer that I am, I call this era 0.0 – the “free software” era. Academics and hobbyists developed software, and the whole ethos was: give software away for free. As ARPANET gave way to the internet, networks made it much easier to collaborate and exchange code.

I remember going to work at MIT or the Open Software Foundation at the time, and I had no idea where my paycheck came from. There was no concept of a business model, and the money behind “free software” development, if there was any, came in the form of university or corporate research grants.

Open Source 1.0 – The Support and Services era
With the arrival of Linux in 1991, open source became important for enterprises and proved a better, faster way to develop core software technologies. With more and more foundational open source technologies, open source communities and enterprises began to experiment with commercialization.

In 1998, the Open Software Initiative coined the term “open source,” and around that time, the first real business model emerged with RedHat, MySQL, and many others doing paid support and services on top of free software. This was the first time we saw a viable economic model to support the development of these organizations.

The other notable thing at this time was that the value of open source companies paled in comparison to their proprietary counterparts. When I look at RedHat versus Microsoft, MySQL versus Oracle, or XenSource versus VMware, the value of the closed source company was so much greater than that of its open source counterpart. The industry deemed OSS a commodity that could never realize anywhere near the economic value of a proprietary company.

Open Source 2.0 – The SaaS & Open Core Era
By the mid-2000s, those valuations started to shift. Cloud computing opened the aperture and allowed companies to run open source software-as-a-service (SaaS). Once an open source service is hosted in the cloud, the user does not know or care whether there is open source or proprietary software under the hood. This has led to similar valuations for open source and proprietary companies, indicating that open source has real economic and strategic value.

A wave of acquisitions, including of my own startup XenSource by Citrix (not to mention MySQL by Sun and then by Oracle), has also made open source a key component of large tech companies. In 2001, Microsoft CEO Steve Ballmer called Linux “a cancer.” Now, even Microsoft uses open source in its technology stack and invests heavily in contributing to open source projects. As a result, the next open source startup is as likely to spin out of a major tech company as out of an academic research lab or a developer’s garage.

The Virtuous Cycle of Open Source


The history of open source highlights that its rise is due to a virtuous cycle of technology and business innovation. On the technical side, open source is the best way to create software because it speeds product feedback and innovation, improves software reliability, scales support, drives adoption, and pools technical talent. Open source was a technologically driven model, and these traits have been there since the “free software” era.

The full potential of open source is only realized, however, when technological innovation is paired with commercial innovation. Without business models, such as pay for support, Open Core, and the SaaS model, there would be no open source renaissance.

Economic interest creates a virtuous cycle, or flywheel. The more business innovation we have, the larger the developer community, which spurs more technological innovation, which increases the economic incentives for open source. I’ll talk at the end of the presentation about what I think is the future of open source in 3.0 and point out some of the interesting innovations that are currently happening on both the technological and business side.

But first, let’s talk about how to build open source businesses.

Business Success Centers on Three Pillars


The success of open source businesses rests on three pillars. These unfold initially as stages with one leading to the next. In a mature company, they then become pillars that need to be maintained and balanced for a sustainable business:

  1. Project-community fit, where your open source project creates a community of developers who actively contribute to the open source code base. This can be measured by GitHub stars, commits, pull requests or contributor growth.
  2. Product-market fit, where your open source software is adopted by users. This is measured by downloads and usage.
  3. Value-market fit, where you find a value proposition that customers want to pay for. The success here is measured by revenue.

All three pillars have to be present over the life of a company, and each has a measurable objective.

Project-Community Fit


Project-community fit is the first pillar and is about critical community mass and the traction of a project with developers. Though the size of OSS communities varies, strong followings and increasing popularity are key indicators that an OSS project spurs strong interest in a group of developers. The indicators include GitHub stars, the number of collaborators, and the number of pull requests.

Open source projects can start in many places, including inside large companies or academia. But where a project starts matters less than having a project leader to drive the effort — and that project leader typically becomes the CEO of the commercial entity.

Achieving project-community fit requires high touch engagement and continual recognition of the developer community. The best project leaders will reach a delicate balance between inclusion and assertion — making clear decisions to provide project direction while making sure everyone’s voice is heard and contributions are recognized. When this balance is struck, the project will sustain healthy growth and attract more people to contribute to and distribute the project.


As investors, we are strongly biased towards funding OSS project leaders since they know the codebase in and out and are custodians of an ethos and vision that sustains a community of developers.

Product-Market Fit


Once you have a project leader and an active group of collaborators, the next phase is to understand and measure product-market fit. In this process, project leaders need to crystallize: what is the problem the open source software helps to solve? Who is it solving that problem for? And what are the alternatives in the market? Without a clear understanding of your users and their use cases, projects can be pulled in many directions and lose momentum.

When the above questions are answered, you’ll observe organic adoption, as measured by the number of downloads. Product-market fit is the precursor to later sales engagement. Ideally, OSS users become top-of-funnel leads for value-added products or services – something we’ll look at more in our go-to-market section.

While working on product-market fit, it is important to think about what will delineate your commercial product and how you will deliver value that someone is willing to pay for. A common pitfall that I want to note is that sometimes an OSS product can be too good. The product-market fit may be phenomenally good, such that there is no need for value-market fit, meaning there is no natural extension to drive revenue. As a result, while you are driving organic adoption, you and your community should consider carefully what you may expect to commercialize in the future.

Value-Market Fit

The last stage, and often the most difficult, is finding value-market fit and generating revenue. While product-market fit often accrues to individual users, value-market fit typically centers on departmental and enterprise buyers. The secret to value-market fit is to focus on what customers care about and are willing to pay for, not what you can monetize.

Often value-market fit is less about what the product does and more about how it gets adopted and the type of value it drives. The value OSS provides is not just its functionality, but its operational benefits and at-scale features. So when thinking through a commercial offering, some questions to consider are: does your product solve a core business problem or provide clear operational benefits? Is it hard to replicate or find alternatives to? Are there at-scale capabilities required by teams or organizations that are not realized in the OSS?


Though not an exhaustive list, open source companies have found value-market fit around features, such as:

  • RAS (reliability, availability, security)
  • Tooling, add-ons
  • Performance
  • Auditing
  • Services

Choosing a Business Model


What business model you choose comes down to what value you can provide to customers and how best to provide it. It’s important to note that these business models are not exclusive, and it’s possible to build a hybrid business with elements of multiple models.

Support and Services was the model of the Open Source 1.0 era, and RedHat has really cornered the market on this and achieved scale. If you decide to go down this path, you will likely end up competing with RedHat (which is why five years ago, I wrote the blog post “Why There Will Never Be Another Red Hat: The Economics of Open Source.”)

The Open Core model, which layers value-added proprietary code on top of the open source software, is a good model for on-premise software. If you have super valuable components (such as security or integrations) that can be kept proprietary without harming the open source adoption, Open Core will be a fine model. A note of caution: with Open Core, community alienation can become a problem when deciding which features belong in which code. I saw that at my own company, and finding the right calibration with the community was very important. The ultimate pitfall here is that your community decides they don’t like your behavior on the proprietary side and they fork the project, or start a new project around the same codebase.

In a SaaS model, you provide a complete hosted offering of the software. If your value and competitive edge is in the operational excellence of the software, then SaaS is a good choice. However, since SaaS is usually based around cloud hosting, there is the potential risk that public clouds will choose to take your open source code and compete.

The Cloud & Competitive Moats


Once an open source business reaches a certain maturity, the threat of public clouds and topic of licensing is likely to come up. Licensing is a heavily debated topic, and while it’s important, I see companies spend too much time debating licensing in their early days.

I also think we have over-rotated on the threat from public cloud vendors. While these vendors may host open source projects, to date, there isn’t a single open source company I am aware of that has been fully displaced by a cloud provider.

The more important question for an open source company to answer is: if code isn’t a competitive moat, what is?

The answer goes back to what makes open source so powerful to begin with: community and how you think about development. Independent open source companies have three big competitive advantages:

  1. Enterprise customers don’t want vendor lock-in.
  2. They want to buy from people who have written the code.
  3. Big companies don’t have your expertise.

When you combine those three things, I think that’s a real competitive value-add and why we have not yet seen big clouds fully displace stand-alone open source companies.

Now that we’ve covered the three pillars, let’s look at how you build an organization around them.

The more important question for an open source company to answer is: if code isn’t a competitive moat, what is? Community.

The Go-to-Market: Open Source is Top-of-Funnel


Your open source community is a developer-driven top-of-funnel activity. Building a business is about connecting that open source top-of-funnel to a strong value-driven commercial product. The idea of a funnel is not new, but it does play out differently for open source companies. In this section, I want to cover how the open source effort integrates and weaves into the commercial effort in different stages of the funnel, and how to get each stage to work harmoniously with the others.

The open source go-to-market funnel breaks down into four stages and key organizational functions.

  • Developer community management drives awareness and interest in your open source product.
  • Effective product management leads to a base of users for your free open source product.
  • Lead generation and business development evaluate the intent of those users to identify potential value and selling opportunities in the enterprise.
  • Self-serve (bottom up) and sales-serve (top down) motions deliver and expand the value of a paid product or service to the enterprise.

Let’s look at each of these stages and functions in more detail.

Stage 1: Awareness & Interest – Developer Community Management


Driving word of mouth with developers, as measured by user registrations and downloads, is absolutely critical to success in later stages of the funnel. In the early days of a company, founders often act as the first evangelists. As the company expands, having a dedicated team of developer evangelists with a mix of technical expertise and strong communication skills becomes important. That combination is typically rare, but it’s usually easy to spot when someone has it. Once you do, hire those developers and get them to conferences and onto social media, engaging with developer communities and explaining the importance and value of your project.

While you need to align your sales and developer evangelist message, you do not want your community managers “selling.” They should generate interest and inform people about the product, and any emphasis on selling is likely to erode their credibility within the community.

As you launch your business, you will also need to decide if it will carry the same name and brand as the open source project. Companies have succeeded both ways, and there are pros and cons to each. Separate names, such as Databricks and Spark, prevent brand dilution and provide licensing flexibility, while the same name often provides more momentum from the OSS project, but can risk alienating the open source community if they perceive that they are being exploited for profit.

Finally, user registrations and downloads are a common measurement for both open source and proprietary software, so the secret sauce is not what you measure, but how accurately. At XenSource, we had inaccurate numbers at one point because our download metric included a large number of downloads that had been initiated, but not completed. Honing how you measure user registration and interest will keep downloads from becoming a vanity metric that doesn’t lead to the next stage of the funnel.

Stage 2: Consideration – Product Management


Next is consideration. Once you have engaged a developer community, your goal is to maximize developer and user love, adoption, and value. This second stage of the open source funnel is typically accomplished with product management.

Effective product management executes several functions to support this stage: managing the closed and open source roadmaps, communicating those decisions to your developers and users, and building analytics into the product to collect insights into usage patterns and predict sales opportunities.

Unlike proprietary software companies, open source businesses typically have two roadmaps to manage. OSS CEOs and founders often spend the majority of their time managing these roadmaps and how one feeds the other. I like to see them laid out on a single page showing how the two interlock and which features are available in each.

The most successful companies and founders have a framework or set of guidelines that helps them delineate and communicate what will be paid and what will be free. For instance, PlanetScale is committed to keeping open source anything that would produce vendor lock-in, a stance that maintains the goodwill of both its open source community and its enterprise customers. A feature comparison table also helps customers and your community understand the different value the free and paid software provide.

Transparency around research and development, and incorporating community feedback into your product roadmap, are particularly important for maintaining community trust. Many successful open source companies remain active, and often leading, contributors to the corresponding open source projects. For instance, Databricks has 10x the contributions to Spark of any other company.

When it comes to the product itself, build in analytics that help you understand your users and predict the percentage of OSS users who will convert to buyers. Once users have the product, usage analytics will help you distinguish product-market fit from value-market fit and estimate how many users are likely to go from free to paid, which in turn predicts sales opportunities. For example, if five out of every hundred users consistently convert to paid users, you can use 5% as an estimate in your financial models.
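
As a minimal sketch of that arithmetic (the user count, 5% conversion rate, and contract value below are invented assumptions for illustration, not data from any company):

```python
# Illustrative free-to-paid conversion model; all numbers are assumptions.
free_users = 10_000            # current open source / free-tier user base
conversion_rate = 0.05         # observed: ~5 of every 100 users convert
revenue_per_paid_user = 1_200  # hypothetical annual contract value ($)

expected_paid_users = free_users * conversion_rate
projected_revenue = expected_paid_users * revenue_per_paid_user

print(f"Expected paid users: {expected_paid_users:.0f}")
print(f"Projected annual revenue: ${projected_revenue:,.0f}")
```

The point is not the specific numbers but the discipline: once your analytics produce a stable conversion rate, your funnel becomes a quantitative planning tool rather than a hope.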

This will be a complex process, and you should experiment with product packaging to find the optimal line between free and paid. For many open source founders, this product experimentation is a never-ending journey, and the success of their go-to-market hinges on tight product feedback cycles.

Stage 3: Evaluation & Intent – Lead Generation & Business Development

The next stage of the funnel – evaluation and intent – is about validating and refining those theories through lead generation and sales development. The goal is to find the path from your users to an enterprise buyer, with success measured by sales qualified leads (SQLs).

The first part of this is outbound marketing. Outbound marketing should prioritize campaigns focused on specific market segments, based on the patterns you observed during developer evangelism at the top of the funnel. By paying attention to your open source users, you will learn which roles and departments are using the product and what their interests are. You can then target your outbound marketing at the engineering managers, DevOps engineers, or IT staff who will find your product valuable.

Next is a sales development effort. Sales development reps (SDRs) should take a customer success approach rather than an overly salesy one, and be insatiably curious about your users' needs and what they are doing with the product.

As your campaigns generate leads, apply two primary filters to qualify them, as sketched below:

  • What organization does the developer or user represent?
  • Did they download or engage with your project in connection with a larger enterprise goal?
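
A minimal sketch of those two filters as code (the field names, organization-size threshold, and use-case labels are hypothetical, meant to illustrate the logic rather than prescribe an implementation):

```python
# Hypothetical lead record and qualification logic; all fields are illustrative.
def qualify_lead(lead: dict) -> bool:
    """Apply the two primary filters for a sales qualified lead (SQL)."""
    # Filter 1: does the user represent an organization worth selling to?
    has_target_org = bool(lead.get("organization")) and lead.get("org_size", 0) >= 50

    # Filter 2: is their engagement tied to a larger enterprise goal?
    has_enterprise_intent = lead.get("use_case") in {"production", "team_evaluation"}

    return has_target_org and has_enterprise_intent


lead = {"organization": "Acme Corp", "org_size": 500, "use_case": "production"}
print(qualify_lead(lead))  # True: worth routing to sales development
```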

Stage 4: Purchase & Expansion – Inside & Field Sales

Once you have sales qualified leads, you may have two sales motions into the enterprise. The first is a self-serve, bottom up motion, in which users within the enterprise organically adopt and pay for a product; typically, this product is aimed at individuals. The second is a sales-serve motion that uses a more traditional top down approach to land deals at the departmental level or expand usage across the enterprise.

What Success and Failure Look Like

Coordinating organic growth with enterprise sales is tricky, and there are a few common failure modes for open source businesses, as my colleague Martin Casado pointed out in his Growth, Sales, and a New Era of B2B talk. In the first, your open source users never lead to buyers: you have great product-market fit, but no value-market fit.

In the second failure mode, your OSS project's growth falls behind your enterprise sales, a sign that your product-market fit may not be that strong. In the third, your commercial offering kills your credibility with developer communities: too much value sits in the proprietary product and not enough in open source, and your open source project withers.

The top of the funnel provides the key to all that follows, so invest in your developer community, open source project, and users ahead of formal marketing and sales. And never lose sight of these three central questions: Who are your users? Who is your buyer? And how are your open source and commercial offerings providing value to each?

If successful, your revenue will look like a layer cake: revenue per customer on the y-axis, time on the x-axis, with top down and bottom up selling aggregating into total revenue. This pattern reflects my direct observation as a former board member of GitHub. The takeaway: it's a good sign if your revenue looks like a layer cake. The first layer is bottom up revenue from individuals, which is typically a single revenue line. The next layer comes from selling to departmental buyers top down, usually through inside sales. The final layer is field sales, or direct sales, selling into or expanding the account across the enterprise. To optimize each of these revenue lines, don't let the sales motion just happen; have someone who owns that specific function and drives the effort purposefully.
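
As a toy illustration of the layer cake (every number below is invented for the example), total revenue is simply the sum of the motion-specific revenue lines over time, with each new motion layering on top of the last:

```python
# Made-up quarterly revenue lines ($K) for each sales motion.
self_serve   = [10, 12, 15, 18]   # bottom up: individual users
inside_sales = [0, 20, 30, 45]    # top down: departmental deals
field_sales  = [0, 0, 50, 120]    # top down: enterprise expansion

quarters = ["Q1", "Q2", "Q3", "Q4"]
for q, s, i, f in zip(quarters, self_serve, inside_sales, field_sales):
    print(f"{q}: self-serve={s}, inside={i}, field={f}, total={s + i + f}")
```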

Finally, depending on your product, you may only have self-serve or only have inside sales. That’s okay, and it really depends on the complexity of the product and where and how it is best used. I do find that most open source companies have some combination of top down and bottom up motions, often starting with bottom up and then building a revenue expansion function on top of it.

OSS 3.0 – Open Source Is Part of Every Software Company

As software has eaten the world, open source is eating software.

Today, almost every major technology company, from Facebook to Google, is built on open source software. Increasingly, these companies are creating their own open source projects as well – Airbnb, for example, has more than 30 open source projects, and Google more than 2,000!

In the future, the virtuous cycle will continue. Technologically, AI, open source data, and blockchain are some examples of emerging innovations. The next generation of business models may include ad-supported OSS, as when a large proprietary enterprise underwrites open source projects; data-driven revenue; and crypto tokens that monetize blockchains.

I believe Open Source 3.0 will expand how we think of and define open source businesses. Open source will no longer mean just Red Hat, Elastic, Databricks, and Cloudera; it will be – at least in part – Facebook, Airbnb, Google, and any other business that has open source as a key part of its stack. Seen this way, the renaissance underway may be only in its infancy. The market and possibilities for open source software are far greater than we have yet realized.

COMMUNITY CONTRIBUTORS: This guide was a true open source effort. It draws on wisdom from across the open source community. In particular, we would like to thank Ali Ghodsi (CEO, Databricks), Maxime Beauchemin (Project Leader, Superset), Jiten Vaidya (CEO, PlanetScale), Sugu Sougoumarane (CTO, PlanetScale), Marco Palladino (CTO, Kong), Paul St John (SVP, WW Sales, GitHub), Jono Bacon (Community Manager), Kevin Wang (CEO, FOSSA), Stuart West (SVP Ops, Automattic), Zeeshan Yoonas (CRO, Netlify), and Heather Meeker (OSS Licensing Lawyer).

***

The views expressed here are those of the individual AH Capital Management, L.L.C. (“a16z”) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation.

This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments for which the issuer has not provided permission for a16z to disclose publicly as well as unannounced investments in publicly traded digital assets) is available at https://a16z.com/investments/.

Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.
