Essays

Remaking Education

To continue with the topic of education: today we increasingly hear complaints that our education systems are growing inadequate to the realities of the world around us. It's impossible not to see merit in some of those complaints, too. In a world rapidly moving towards a gig economy, characterized by a continuing decline in average job tenure, and with a lot of jobs likely to disappear in the next 10-20 years, many aspects of traditional education systems look questionable at best.

But in order to understand which parts of the system work well, and which are outdated and require revamping, it's useful to understand the history and context in which the current system came into existence in the first place, and the purposes it was set up to serve. Otherwise, proposing any changes would be akin to moving ahead in the dark: we might still stumble upon something useful, but it is just as likely that we would do more harm than good. This is particularly true for something as complex and intertwined with every aspect of our lives as education.

Our current education system as we know it was largely established in the second half of the 19th century and the first decades of the 20th century, coinciding with the Second Industrial Revolution. In his (absolutely brilliant, in my opinion) book "The End of Average", Todd Rose argues that to a significant extent, the motivation behind it had less to do with the desire to create a truly meritocratic society — instead, it was largely driven by the ever increasing demand for workers that the new businesses were experiencing. Therefore, the key purpose of education was not to provide everyone with the opportunities to discover their talents and use them in the best possible way, but rather to educate people to a minimum level that would be sufficient for them to fill the new vacancies.

The Second Industrial Revolution has long since become history; today, we are in the middle of what is widely regarded as the Digital Revolution, or the Third Industrial Revolution. This new era has brought tremendous change to societies throughout the world and to the global economy; it's hard to argue that the needs of both society and individuals today aren't very different from what they were during the Second Industrial Revolution more than a hundred years ago. And yet, we still to a significant extent rely upon a system that was designed for a different age and different circumstances.

That raises several important questions. First, given how much the world has changed over the last 100 years, how suitable are our education approaches for the new circumstances? Yes, it remains possible that a lot could be achieved through the gradual evolution of the existing offerings. But is it too far-fetched to imagine that at least for some aspects of the current system, disruption might make more sense than evolution?

Personally, I don't think so. The idea of providing personalized education in schools required changing pretty much every aspect of the traditional school experience - and yet, the early results seem very promising. The same goes for the notion that bootcamps, nanodegrees and other unconventional options for professional education might one day turn into a viable alternative to college: while it might raise some eyebrows, there is a lot of promising work happening in the space right now. And the list goes on.

Second, if we want to bring positive change to the current education system, we need to focus on designing new solutions that can be successfully scaled. One reason why the entire world still relies on a system that was put in place over a hundred years ago is that it was built to scale. Therefore, if the goal is to have a wide impact, it's important to consider, for whatever solutions we propose, whether there is a way to implement them throughout a single state, a country, or the entire globe, as was done with school and college education in the past.

To that point, it's also crucial to consider the implications the proposed solutions would have for the existing system: we no longer live in a world that is a blank canvas, and therefore the implications of a change can sometimes be unexpected and profound. The concept of personalized learning illustrates some of these issues well: while students might get tremendous benefits from the new process, we need to consider what happens when the real world inevitably starts interfering with it. What happens when families move, and students find themselves in areas where no schools offer personalized learning options? Would the introduction of personalized learning only deepen the gap between the well-performing schools that are well-staffed and have access to funding, and the ones that are already struggling? Would it hamper job mobility for the teachers? I'm sure it's not impossible to find answers to those questions, but in order to do that, we need to be asking those questions in the first place.

Finally, one day the context will change again, and we will need to rethink the education system once more. I believe we could do a great service to future generations if we keep that in mind, and focus on designing solutions that can be adjusted as needed, and are made to be iterated upon.

The Future Of Online Education: Udacity Nanodegrees

In its 20+ year history, the online education market has experienced quite a few ups and downs. From the launch of lynda.com way back in 1995 (back then, strictly speaking, it wasn't even an EdTech company yet; it only started offering courses online in 2002), to Udemy, with its marketplace for online courses on every conceivable topic, to the MOOC revolution, which promised to democratize higher education — I guess it would be fair to say that the EdTech space has tried a lot of things over the years, and has gone through quite a few attempts to re-imagine itself.

On the last point, while MOOCs (massive open online courses) might not have exactly lived up to the (overhyped) expectations so far, the industry continues to live on and evolve, with companies like Coursera, edX and Udacity continuing to expand their libraries and experimenting with new approaches and programs.

Most recently, Udacity shared some metrics that allow us to get a sense of how the company has been doing so far. In short, we could describe it as "not bad at all". Apparently, in 2017 the company had 8 million users on the platform (a number that includes users engaged with Udacity's free offerings), up from 5 million the year before. Udacity also doubled its revenue to $70 million, which constitutes an impressive growth rate for a company at this stage.

Now, the reason I believe those numbers are particularly interesting is the monetization approach Udacity took a few years ago, when it first introduced its Nanodegrees: 6-12 month long programs developed in collaboration with industry partners, such as AT&T, IBM and Google, designed to allow students to build a deep enough skillset in a specific area to successfully find jobs.

While the idea itself isn't necessarily unique - other companies have also been trying to create similar programs, be it in the form of online bootcamps, as is the case for Bloc.io, or the Specializations offered by Coursera - I would argue that Udacity's Nanodegrees offer the most appealing approach. Nanodegrees are developed in close partnership with industry players (unlike Coursera's Specializations, which are university-driven), and require lower commitment (both financially and time-wise) compared to online bootcamps. Finally, Udacity's marketing has been vastly superior to that of its key competitors, especially when the Nanodegrees were first launched (it announced them in partnership with AT&T, with AT&T committing to provide internships for up to 100 of the best students - a great move).

Some of the metrics Udacity shared this week were specifically related to Nanodegrees, and provided a glimpse into how they have been doing so far. In particular, Udacity reported that 50,000 students are currently enrolled in Nanodegrees, and 27,000 have graduated since 2014.

The price per Nanodegree varies quite a bit, and can also depend on whether the program consists of a single term or several, but with the current pricing, it seems reasonable to assume that the average program costs around $500-700. With 50,000 students enrolled, that should amount to $25-35 million in run-rate revenues (strictly speaking, that's not exactly run-rate, but that's unimportant here). The actual number might be a bit different, depending on a number of factors (the actual average price per course, the pricing Udacity offers to its legacy users, etc.), but I'd assume it shouldn't be off by much.
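For what it's worth, here is that back-of-envelope estimate as a minimal sketch; the enrollment figure comes from Udacity's announcement, while the $500-700 average price per program is my assumption based on current list prices:

```python
# Back-of-envelope estimate of Nanodegree run-rate revenue.
ENROLLED_STUDENTS = 50_000       # currently enrolled, per Udacity
AVG_PRICE_RANGE = (500, 700)     # assumed average program price, USD

low, high = (ENROLLED_STUDENTS * price for price in AVG_PRICE_RANGE)
print(f"Estimated run-rate revenue: ${low / 1e6:.0f}-{high / 1e6:.0f}M")
# -> Estimated run-rate revenue: $25-35M
```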

Those numbers ($25-35 million, give or take) are interesting, because they clearly show that Udacity must have other significant revenue streams. There are several possibilities here. In addition to offering learning opportunities to consumers, Udacity also works with businesses, which theoretically could account for a hefty chunk of the money it earned last year. Besides that, Udacity runs an online Master's in Computer Science program with Georgia Tech, which is a fairly large program today, and offers some other options to its users, such as the rather pricey Udacity Connect, which provides in-person learning opportunities, and a few Nanodegrees that still operate under its legacy monthly subscription pricing model, such as the Full Stack Web Developer Nanodegree. All of those could also contribute to the revenue numbers, of course.

And yet, if you look at the Udacity website today, and compare it to how it looked a couple of years ago, everything seems to be focused on the Nanodegrees now, whereas in the past, Udacity felt much more like Coursera, with its focus on free courses, with users required to pay only for additional services such as certificates. The obvious conclusion is that Udacity apparently considers Nanodegrees a success, and believes there is significant potential to scale them further.

One last interesting thing to consider is the number of people who have completed at least one Nanodegree since their introduction in 2014. According to Udacity, only 27,000 people have graduated so far, which is curious, given that it reports 50,000 people currently enrolled in at least one program, and most programs are designed to be completed in 6 to 12 months.

This can only mean one of two things: either Udacity has recently experienced very significant growth in the number of people enrolling in Nanodegrees (which would explain the discrepancy between those two numbers), or the completion rates for the Nanodegrees have historically been relatively low.

Now, completion rates were one of the key issues for MOOCs, where they proved to be quite dismal. However, the situation for Udacity is somewhat different: here, the users have already paid for the program, so in a way, completion rates are less of a concern (and with the legacy pricing model, where Udacity charged users a monthly subscription, longer times to completion could have actually benefitted the company). On the other hand, low completion rates might ultimately contribute to poor reviews, negatively affect user retention, and damage the company's brand, so this issue still needs to be managed very carefully.

Will Udacity's Nanodegrees prove to be a success in the long run? That remains to be seen, but so far, it looks like the company has been doing a pretty good job with them, so the future certainly looks promising.

The Challenge Of Attracting The Best Talent

In one of the classes I'm currently taking at Kellogg, we've recently touched on the issue of top K-12 teachers gravitating to the better performing schools, with the schools that represent a more challenging case often facing significant difficulties attracting and retaining top talent.

This problem, of course, isn't unique to the K-12 system. If you think about it, most of us would probably move to a job that offers higher pay and a better working environment, whenever the opportunity presents itself, without a second thought. And if we believe the new job would be just as meaningful as the old one, or more so, that typically seals the deal. And who could blame us?

And yet, once you start thinking about what that truly means, the answer becomes less clear. While it most certainly makes sense to look for greener pastures from an individual's perspective, we might wonder what kind of impact it has on the world around us. More importantly, are we even serving our own needs in the best possible way by following this line of thinking?

One particularly interesting example that immediately comes to mind is Google. For years now, it has been highlighted as one of the most desirable employers in the world. It has the resources required to offer its employees extremely competitive pay, and it is also famous for its great work environment - hey, it even tries to assess people's "Googliness" before hiring them, in order to determine whether they'll fit well with the company's culture.

Google is undoubtedly a great place to work, so it isn't really surprising that people from all over the world aspire to work there. However, there is another side to that story. Almost every person I've talked to who's worked at Google has at some point brought up the issue of being surrounded by people who were overqualified for their jobs. Yes, Google's immense profitability has made it possible for the company to pay for the best available talent. But hiring the best people doesn't automatically mean that you have meaningful problems for them to work on.

That, of course, doesn't mean that Google shouldn't aim to hire people of the highest caliber - after all, as long as it has the resources and the appeal required to attract them, both the employees and Google seem to be better off when it does. And yet, one might wonder: what could many of those people have achieved otherwise? Would the companies they'd work for have more challenging problems for them to work on? Or would some of those people actually start their own companies that'd eventually change the world?

The same goes for the K-12 system. Nobody could ever blame teachers for wanting to work for schools that offer better environments - even setting aside the compensation and surroundings, it can be much more fulfilling to work in such a place. The question, however, is what impact those teachers might have had at the lower-performing schools: those often have a much more pressing need for the best talent, but have trouble attracting such candidates.

So, what could be done to address this issue? I am afraid there are no easy answers here. The best talent is, and will always remain, a scarce commodity, and the best organizations often have a higher appeal (not to mention more resources to offer) to those workers - that is not going to change, nor should anyone want it to, really.

What we could do, however, is create additional incentives for people to take risks, whether that means going to work for a struggling school, or taking a leap of faith and starting a company. Some of those incentives might be financial in nature, but what seems even more crucial to me is for us as a society to promote the importance of rising to the challenge, especially when it doesn't bring one any immediate rewards, and to celebrate those who choose to do so. This, of course, might be easier said than done, but it's not impossible, and is very much worth the effort.

The Benefits Of Raising Less Money

A couple of weeks ago, TechCrunch published an essay by Jason Rowley called "Raise softly and deliver a big exit". In it, he set out to explore the relationship between the amount of funding startups raise and the success of their exits, measured by the ratio of exit valuation to invested capital (VIC).

The analysis, unfortunately, doesn't provide a breakdown by the space the startups operate in, and is thus relatively high level. It also raises some questions about the validity of comparing VIC to the amount of capital raised or to the valuation: as both of those are in fact used in the calculation of VIC, any inferences about the correlations between either of them and VIC aren't really meaningful.

Still, even if the conclusions aren't statistically meaningful, the analysis itself raises some interesting points, all of which can be summarized in a single phrase: "raising a lot of money makes getting high return on investment less likely".

One could argue that this is a fairly obvious conclusion that doesn't require looking at any specific data, and she'll be right about that: earning high returns (meaning returns as a percentage of capital invested, not absolute numbers) at scale is often harder than when you invest relatively small amounts of money.

For startups raising venture capital funding, that appears to be particularly true. Selling your company for $50 million is a success if it only raised $5 million in funding; things become much more complicated if it attracted $100 million in funding - in that case, to deliver the same multiple, you'll need to sell it for at least $1 billion, which drastically limits the number of potential buyers (and also the chances that the company would ever get to the stage where it could be sold for such an amount of money).
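To make the multiple math explicit, here is a minimal sketch (the dollar figures are just the illustrative ones from above):

```python
def vic_multiple(exit_value_m: float, capital_raised_m: float) -> float:
    """Ratio of exit valuation to invested capital (the VIC from Rowley's essay)."""
    return exit_value_m / capital_raised_m

print(vic_multiple(50, 5))      # $50M exit on $5M raised   -> 10.0
print(vic_multiple(1000, 100))  # $1B exit on $100M raised  -> 10.0, far fewer buyers
```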

So why are we so focused on the huge rounds raised, "unicorn" startups and the outsized exits?

Part of the story is tied to the business model of VC firms: most of them receive a fixed percentage of assets under management (AuM) as a management fee (typically 2% per year), plus carry (say, 20% of the overall proceeds from exits, once the investors in the fund have been paid back their principal). Both of those pieces are tied to AuM, creating an incentive to raise more money from the limited partners.
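As a rough sketch of how those two pieces add up, assuming the "2 and 20" numbers above and a made-up fund size and exit total (real fee schedules often step down over a fund's life, and hurdle rates are ignored here):

```python
# Hypothetical fund economics under the "2 and 20" model.
fund_size = 500e6         # AuM, dollars (made-up)
fund_life_years = 10
exit_proceeds = 1500e6    # total proceeds from exits (made-up)

management_fees = 0.02 * fund_size * fund_life_years   # 2% of AuM per year
carry = 0.20 * max(exit_proceeds - fund_size, 0)       # 20% after LPs recoup principal

print(f"Management fees over fund life: ${management_fees / 1e6:.0f}M")  # $100M
print(f"Carry: ${carry / 1e6:.0f}M")                                     # $200M
```

Note that the management fee piece depends only on fund size, which is exactly the incentive described next.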

What that means is that there is a misalignment between the interests of limited partners (who care about returns as a percentage of capital invested) and those of general partners (whose compensation, and especially their salaries, is determined to a significant extent by AuM size, followed by absolute returns).

This compels the general partners to raise larger funds, which in turn means that they need to pour more money into each startup (or do more deals per fund, which brings the risk of spreading their resources too thin). And investing more money per startup creates obvious pressure for larger exits.

While the VC piece is relatively straightforward, the situation for startup founders is more complicated. Unlike the general partners of VC firms, founders care almost exclusively about the returns: their compensation isn't really tied to the amount of money they raise, only to the proceeds from selling their companies. Another interesting point to consider is that for the vast majority of individuals, the amount of money required to completely change their lives is much lower than the amounts that might be deemed satisfactory for VC firms, especially the larger ones.

To illustrate this point: for a firm with $1 billion under management, selling a company it invested $5 million in at a $10 million pre-money valuation for $50 million isn't really attractive - even though it would make a decent return on the investment, the absolute gains are too small to make much of a difference.

For the founders of that same company, however, such a deal can be very attractive: if there were 3 of them, it would yield them more than $11 million apiece - a huge sum of money for any first-time entrepreneur. Accepting a deal like that would also leave them free to pursue their next ventures, knowing that they can now take bigger risks, with their financial security already established.
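Here is that deal math worked through as a quick sketch (ignoring option pools and liquidation preferences for simplicity):

```python
# Deal from the example: $5M invested at a $10M pre-money valuation,
# the company later sold for $50M, split among 3 founders.
investment = 5e6
pre_money = 10e6
exit_price = 50e6
num_founders = 3

post_money = pre_money + investment              # $15M
investor_stake = investment / post_money         # ~33.3%
founders_proceeds = exit_price * (1 - investor_stake)
per_founder = founders_proceeds / num_founders

print(f"Investor stake: {investor_stake:.1%}")   # 33.3%
print(f"Per founder: ${per_founder / 1e6:.1f}M") # ~$11.1M
```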

So again, why does the entire industry pay so much attention to the largest deals and exits?

Well, for one, it's just more interesting for the public to follow those deals - they create a rock-star aura around the most prominent founders and VCs, something that is obviously lacking for the smaller investments and exits. Next, some of the more exciting ventures do require outsized investments: that is often true for the most well-known B2C startups (e.g. social networks, or on-demand marketplaces) - it certainly isn't the case, however, for a lot of companies out there. Finally, the VC agenda described above certainly plays a role as well.

And yet, while all those reasons might be legitimate, it's worth remembering that for every $1 billion exit there could be dozens of $50-100 million sales, and while such deals don't always sound as cool, they surely do have the potential to change the lives of the entrepreneurs involved in them.

Airbnb's Latest Announcements: Hassle-Free Travel And Luxury Properties

Yesterday, Airbnb hosted a large keynote presentation, announcing two important additions to its product - Airbnb Plus and Beyond - as well as a number of smaller updates and changes.

According to the company, "Airbnb Plus is a new selection of only the highest quality homes with hosts known for great reviews and attention to detail. Every Airbnb Plus home is one-of-a-kind, thoughtfully designed, and equipped with a standard set of amenities — whether you're in a private room or have the entire place to yourself." At launch, Airbnb Plus features 2,000 listings across 13 cities, with more to follow. To join Airbnb Plus, hosts need to submit an application, which requires paying a $149 fee, and then satisfy the company's 100-point quality checklist.

The other service announced yesterday was Beyond, although it won't launch until late spring, and the amount of information available so far is limited. As Airbnb puts it, Beyond will bring "extraordinary homes with full service hospitality" to the platform.

Besides that, Airbnb is now formally recognizing boutique hotels for the first time: while some hotels have been represented on the platform for years, Airbnb never paid much attention to them. That is about to change, with the inventory now being separated into several categories, including vacation homes, unique spaces, bed & breakfasts and boutique hotels.

***

In my opinion, those changes are extremely significant. They also provide us with a glimpse into the direction Airbnb wants to head in the future. While it was the idea of a marketplace for people to rent their apartments to other travelers that made Airbnb into the company it is today, at some point it had to find a way to transcend the limitations of that niche, while also utilizing its strengths to expand into additional areas.

One of the key challenges for Airbnb to solve at the beginning was convincing people to put their trust in the platform and allow strangers to stay in their homes. Once Airbnb managed to overcome this initial mistrust, the ratings system allowed it to quickly scale the platform, with both untrustworthy guests and untrustworthy hosts being filtered out by the market.

With Airbnb Plus, it's now taking this further, using its already established ratings system for hosts (as well as the "superhost" status some of them hold) to identify the most promising rentals, and then working with their owners to ensure an even higher level of comfort for the guests. This seems very smart, as it fully utilizes the existing advantages that come with Airbnb's scale and its crowdsourced ratings, thus allowing the company to scale the program fast, while also providing guests with enhanced convenience.

The same goes for the idea of recognizing boutique hotels. In many ways, Airbnb is better positioned to serve this niche than the regular hotel booking systems - not to mention the fact that Airbnb only charges hosts 3%, charging the rest to the guests in a transparent way, while platforms like Booking.com charge hotels 15-20% of the booking value. Before now, however, finding boutique hotels on the platform was slow and inconvenient, damaging the experience for users. The introduction of separate categories for different types of inventory should improve the user experience, and potentially help attract additional hotels to the platform.

It's harder to make any definitive conclusions about Airbnb Beyond at this point. On the one hand, judging from the way Airbnb positioned it in the announcement, it represents the company's long awaited move directly onto the hotels' turf, which significantly expands its total addressable market, and should also allow it to better serve the entire spectrum of its clients' needs.

On the other hand, unlike the Plus and boutique hotel moves, the expansion into full service hospitality doesn't necessarily utilize the existing strengths of the platform, and it's also not a space the company has much experience in. In order to leverage its scale, Airbnb would most likely need to find local partners in each geography, and then figure out a way to provide the consistent, high quality experience guests are accustomed to with traditional luxury hotels. This can be a very difficult challenge to tackle, but at the same time, the sheer size of the hospitality industry makes the attempt worth the effort.

Fighting The Ivory Trade: The Lessons Learned

According to estimates, in 1979 there were at least 1.3 million African elephants. By the early 1990s, that number had dropped by more than half, to 600,000. Today, the estimates stand at around 415,000, with an additional 100 elephants being lost every day, mostly to poachers engaged in the ivory trade.

Recently, The Economist published a film describing the scope of the problem, and the efforts African countries are making to reduce, and ultimately eliminate, poaching - I'd highly suggest watching it (it's only 6 minutes long).

The fight to stop poaching is a tough and complicated one, and as one can learn from the film, the best of intentions can sometimes lead to terrible consequences, undoing a lot of the good work that had been done previously. This is something I wanted to focus on, as I believe it's helpful to learn about some of the strategies described in the video, and the reasoning behind them, as those can be widely applicable to a number of other issues as well.

The fight to end the ivory trade has been going on for decades now, and while it hasn't always been a success, some progress has been made. However, while killing elephants for ivory was made illegal, the trade itself wasn't completely banned: exceptions were made for some countries that made an effort to control poaching, and the ivory trade also remained legal, albeit with restrictions, in the countries that generated the majority of demand (China, Japan, the U.K.). That, in turn, created a surreal situation in which legal and illegal trade co-existed side by side.

The problem is that while one can declare that trading in tusks carved before a certain date is legal and trading in tusks carved after that date is not (this is exactly how the system was set up in the U.K., where trading in tusks carved before 1947 remained legal), there is no real way to separate the demand into those artificial buckets. Moreover, as it turned out, the very fact that the ivory trade was still allowed, even with all of the restrictions, legitimized the desire to own ivory in the eyes of those looking to purchase it.

This became particularly clear in 2008, when the decision was made to legally sell 102 tons of stockpiled tusks. As tusks were seized over the years, it was never clear what to do with them in the long run, and guarding them remained expensive and often unsafe. So the argument was made that a legal sell-off would help raise the money needed to continue the conservation efforts, and would also help depress the price of ivory, making poaching less economically attractive.

That decision, however, backfired terribly. Those involved in the illegal trade viewed it as a signal that the ivory trade, legal or illegal, was back. Moreover, the huge amount of legal ivory flooding the market created a perfect cover for the expansion of illegal trade, as it was often impossible to trace the origin of the tusks. And as it turned out, the legal sell-off didn't even depress prices; instead, they continued rising. There were multiple theories on why that was the case, with the main explanation accepted today being that the excess demand for ivory was already there, and the legal sell-off certainly didn't help to promote the idea that purchasing ivory might be wrong or immoral.

In 2016, Kenya, trying to decide what to do with a huge amount (105 tons) of stockpiled tusks, and mindful of the terrible outcome of the 2008 legal sell-off, took a different approach: it chose to burn them. It wasn't the first time Kenya had done that: it first burned 12 tons in 1989 in a widely publicized (and criticized) event, but it had never before aimed to destroy such an unbelievably huge amount of tusks.

At first glance, this idea might seem insane: those 105 tons were valued in the hundreds of millions of dollars, money that could have been used to fund further conservation efforts. Moreover, burning so much ivory could have created a sense of scarcity, driving the price of ivory even higher. Finally, some argued that destroying tusks denigrated the dead animals, and sent the message that they were of no value. And yet, Kenya chose to proceed with its plan, widely publicizing the event.

The result: the price of ivory went from $2,000 per kilo in 2013 to around $700 today. That wasn't, of course, the result of Kenya's burning of the stockpiled tusks alone. Rather, it came as a result of a series of orchestrated efforts to raise awareness of the terrible consequences that the demand for ivory had for African elephants, as well as the bans on legal trade gradually imposed throughout the world (in particular, in China and Hong Kong).

One might argue that the collapse of the legal trade should have just shifted the demand to the illegal market, creating scarcity and driving prices even higher. However, that didn't actually happen, and that's what made this strategy so valuable to learn from.

As it turned out, the demand for ivory was driven to a significant extent by the justification that the existence of legal trade provided, and also by buyers' general unawareness of the real source of most of the ivory they were buying, and of the suffering their demand generated. The phase-out of the legal ivory trade that's happening right now, together with the public efforts of the African governments to draw attention to the issue, stripped away those moral justifications, and as a result, the demand for ivory collapsed.

The laws of supply and demand were, of course, still in place, but the relationship between the two turned out to be much more complicated than many might have expected. This isn't unique to the ivory trade, either - there are other cases where the relationship between supply and demand is complex, and therefore requires very careful management to avoid disastrous consequences. I sincerely hope that the lessons of 2008 and 2016 will be further researched and publicized, as the price paid for these insights was surely too high to let them go to waste.

Assessing The "Worthiness" Of Companies' Missions

Recently, I stumbled upon a discussion somewhere (I believe it was on Quora) about whether people who state that they'd only work for companies whose missions they can strongly empathize with are mostly being hypocritical. The author of this argument used Uber as an example, stating that it seemed highly unlikely that so many people had suddenly found themselves so interested in working for a company that at its core remained a taxi hailing business.

While I believe that Uber and its competitors are so much more than just taxi hailing services, I found the question itself rather intriguing. How much does the company's mission actually matter to most of us? Should it even matter? And how might we approach assessing this mission in the first place?

Those are, of course, deeply philosophical questions, and I don't entertain any illusions about my ability to provide answers that would be universally applicable. Nor do I have any desire to argue whether people should care about the mission of the companies they work for. Rather, I wanted to share my take on the second half of the question - how we might approach thinking about companies' missions and values - in the context of the tech industry today.

First, it's useful to consider how the nature of the tech products we use has evolved over the last 10-15 years. In the past, we often evaluated the products and services we used based on the new features or functionality they offered. Today, that's no longer the case: we would often be hard pressed to name even a few new features the latest generations of the apps and services we use have brought us - not because we don't care for additional functionality, but because what we really value today, above everything else, is convenience.

The New York Times has just published an essay called "The Tyranny of Convenience" on exactly this topic, which I found rather entertaining. Still, the reason I believe this change in our priorities matters in the context of this post is that it makes it much more difficult to answer the questions asked above in any reasonable fashion.

In the world that valued features, assessing the importance of the work done by many tech companies was relatively easy: users typically had a certain type of problem, or pain point, that required solving, which in turn created an opportunity for companies to come up with a software product, a hardware device, or some combination of both, that would solve that issue. Of course, it was still entirely possible that users didn't realize they actually had a problem, but most of the time, at least some indicators were there: the office workers of the early 1980s probably didn't know that they desperately needed spreadsheets, but someone who paid close attention to the work they were doing for a reasonable amount of time might have noticed that there was a huge potential to digitize their activities, in one way or another.

In the world of convenience, however, the situation is remarkably different. First of all, users often might not even realize that they have a problem to begin with. Did most of us know that we actually needed 2-hour guaranteed delivery? What about those sophisticated algorithms that allow us to discover new music based on what we listened to in the past? Or the ability to upload photos to the cloud from a PC and immediately access them from our phones? There are a thousand things we can't imagine our lives without today that we would have had a really hard time even dreaming of 10 or 15 years ago.

Next, in this strange new world, there is no way to solve a problem once and for all, as there is often no limit to how much something can be improved upon. Getting guaranteed 1-day delivery from Amazon is nice, but why not aim for 2-hour delivery next? Uber and Lyft might be so much more convenient than traditional taxi services, but the cars could still arrive even faster, and, by the way, wouldn't it be nice if it cost less? And while Amazon Echo is making some of our routines so much more efficient, it can definitely be further improved by integrating with additional services and employing more sophisticated machine learning capabilities.

All of that implies that there is no task or problem that isn't worth solving, as long as it improves the experience for the end users (not to mention that large advances can sometimes start as minor efforts to solve a particular issue). By extension, this also means that the true complexity of the systems aiming to provide users with a more convenient way of doing something is often hidden from those users' eyes. Most of us have no idea of the amount of effort behind the Spotify recommendation engine we take for granted nowadays, or the cloud infrastructure that the majority of the services we interact with today are built upon, or Amazon's 1-day guaranteed delivery. Moreover, we don't care: the entire point of convenience is that users don't need to concern themselves with any of that to benefit from those services.

Now, going back to the initial argument: if we look at Uber from the perspective of the feature-driven world, it undoubtedly is just another taxi hailing service, albeit a much more convenient one than the traditional services, where you had to call a local company on the phone. In the world of convenience, however, Uber and the likes of it are so much more than that: they help us save time and money, while also letting us feel more secure and travel in a nicer way. Moreover, they hold the promise of an entirely new level of convenience in the years to come, once self-driving cars hit the roads. And when you frame the problems Uber is trying to solve like that, it seems unsurprising that it might want to recruit some of the best engineers and businesspeople, or that those people would be genuinely interested in coming to work for Uber and its peers.

That, obviously, doesn't mean that everyone out there does (or even should) care about the mission of the company they work for. Rather, I'd argue that the question itself is somewhat flawed, at least in relation to tech companies, given the realities of the world today.

The Peculiar Pricing Model Of The Live TV Services

A couple of weeks ago, I was chatting with a friend about YouTube TV, when she expressed her frustration with the high price of the service ($35/month at launch, since bumped to $40). What I found interesting is that she had learned about YouTube TV from someone who worked at Google, and when he heard her grievances about the price, his response was that it didn't matter much, as one could always find a group of friends to share the cost with (the subscription comes with 6 accounts per household, and allows streaming to 3 different devices at the same time).

Now, if you look at the competition, its pricing is actually quite comparable to that of YouTube TV: e.g., Hulu Live TV would cost you the same $40/month. So the price point might be justified, although one might argue that the target demographic of YouTube TV, which includes millennials who've never subscribed to cable, might naturally keep comparing it to the likes of Netflix, Hulu and HBO Now, all of which cost significantly less (albeit arguably providing a different kind of service altogether). However, the specific price is not the point here - instead, I wanted to focus on the potential issue of creating the wrong incentives among customers, which might in turn threaten the long-term prospects of those services.

The vast majority of content subscription services (Netflix, Spotify, Apple Music, etc.) today offer some kind of family plan option. The exact implementations vary, but the general pattern remains the same: you typically get 3-5 separate accounts at a price point below the cost of 2 individual accounts, though with some limitations (e.g. all users are technically required to live at the same address, the bill needs to be paid in a single transaction, etc.).

One might argue that such a structure already creates some incentives for foul play: for example, today a lot of students choose to become part of Spotify family plans with their classmates, even though this is technically a violation of Spotify's terms of service.

At first glance, one way to keep this situation from becoming widespread would be to limit the discounts that come with a family plan and create tiered pricing, where the overall price depends on the number of people on the plan (thus limiting the advantages of the family plan for each individual user, while still providing some discount).

If you do some digging, though, you'll discover that this is exactly what Spotify did when it first introduced the family plan option in 2014. Back then, the cheapest option was to pay $14.99/month for 2 users, and in order to get a plan for 5 users, you were required to pay $29.99/month. Since then, however, it got rid of the tiered pricing, and now offers the family plan for up to 5 users at a flat rate of $14.99/month.
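A quick per-user comparison makes the difference stark (the $9.99 individual price is my assumption for reference; the rest are the numbers above):

```python
# Per-user monthly cost: Spotify's 2014 tiered family plans vs. today's flat rate.
plans = {
    "individual (assumed $9.99)": (9.99, 1),
    "2014 tier, 2 users":         (14.99, 2),
    "2014 tier, 5 users":         (29.99, 5),
    "current flat rate, 5 users": (14.99, 5),
}

for name, (price, users) in plans.items():
    print(f"{name}: ${price / users:.2f} per user/month")
# -> $9.99, $7.50, $6.00 and $3.00 per user respectively
```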

Whether the decision to go with a flat rate was driven by the desire to capture larger market share (even if it meant accepting lower margins), or actually made sense from a unit economics standpoint, remains unclear. Still, at least we know that tiered pricing was tested in the marketplace before being discarded. Moreover, even at the flat rate, one can see how the convenience of having your own individual account can trump the hassle of setting up a family account with your friends. The last point, however, is predicated on the (relatively) low price of Spotify (and most of the other streaming services as well): saving a few dollars might not be worth it for a lot of people.

With YouTube TV and Hulu Live TV, however, the situation seems to be remarkably different. The significantly higher price point means the incentives to share the plan (and thus split the costs) with your friends are much stronger, and the lack of cheaper individual plans further strengthens the case for doing so. Moreover, by making what is essentially a family plan the default option, both YouTube and Hulu are effectively making sharing the default behavior among their users.
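To see how strong that incentive is, consider the arithmetic of fully sharing a single plan (prices and account limits as quoted above):

```python
# Cost of a $40/month live TV plan, solo vs. split across its 6 accounts.
price, accounts = 40, 6
print(f"Solo: ${price:.2f}/month; fully shared: ${price / accounts:.2f}/month each")
# -> Solo: $40.00/month; fully shared: $6.67/month each
```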

Now, both YouTube and Hulu have some very smart people working for them, which means they might have some very sound reasons to set up the pricing the way they did. Maybe both companies believe that this pricing will get more people to use the service in the first place, and that later on, some of them will choose to set up their own plans for the sake of convenience, instead of sharing with friends. That assumption would actually make a lot of sense, especially considering that both companies seem to target millennials, many of whom might not have families of their own yet, but undoubtedly will in the future.

Alternatively, it might be the case that the data from existing subscription services shows that, from a unit economics standpoint, it makes sense to let users share plans. It's entirely possible that some users aren't that active, and thus don't cost the companies much in licensing payments to content providers, while helping to alleviate the pain of paying too much for the service for the other users on the plan.

Still, the decision to use such a pricing scheme seems peculiar, and I'd definitely love an opportunity to take a peek at the reasoning behind it, as well as to see whether it proves to be a success in the marketplace.

Uber International Growth Strategy

On February 16, news leaked that Uber is reportedly preparing to sell its Southeast Asian business to Grab, in exchange for a stake in the company. This is the third time Uber has decided to abandon its efforts to establish itself as the leader in a region, choosing instead to join forces with the strongest regional player: in 2016, Uber China was acquired by Didi Chuxing, and then in 2017, Uber merged its Russian assets with Yandex.Taxi.

In both of the earlier cases, Uber didn't actually fully exit those markets, but received significant minority equity stakes in the newly formed entities. This allows it to benefit from future growth in those regions, while also freeing up resources to be invested elsewhere. Still, the fact that Uber is now about to abandon another market long considered one of the most promising in terms of long-term potential is telling: a war of attrition can become too costly even for the most well-funded companies, especially if they have to compete against the strongest regional players in multiple geographies simultaneously. And the fact that those players are often very well funded themselves, with a significant head start and a much better understanding of the local specifics, doesn't help.

To be fair, I don't think it's all doom and gloom for Uber: merging with the leading regional players, instead of continuing to wage often doomed wars, actually makes total sense. However, it also raises a couple of interesting questions. First, Uber might find itself effectively locked into some of those entities, unless those companies become large enough on their own to make public offerings an attractive path to liquidity. If, for some reason, an IPO isn't an option, the only viable buyer for those companies will most likely be Uber itself. Second, there is a very real possibility that Didi will soon emerge as a truly global player, with its latest acquisition of 99 in Brazil only strengthening the case. If, or rather when, that happens, Uber will face a significant conflict of interest, being at the same time a major shareholder in, and a key competitor to, Didi. This is, of course, not the first time in history such a conflict has emerged, but still, it will be curious to see how Uber chooses to deal with it.

The (Huge) Impact Of The Culture

As part of my experience at Kellogg, I've had the opportunity to work on a few projects with industry partners, and one thing that struck me as surprising was the huge impact that culture often had on every aspect of those companies' operations. By culture here I mean not just the internal culture, but the all-encompassing perception of the organization, and the values associated with it, by both customers and employees.

Having spent most of my time around tech companies in the past, I grew to appreciate the importance of building the right culture for an organization, but at the same time came to view culture as something that constantly evolves and can be changed over time, if need be. Part of this impression definitely comes from the fact that a lot of tech companies haven't been around for that long, but even the ones that have existed for a while and at some point faced the need to adjust their culture and mission often managed to do so quite successfully (take Microsoft and the transition it has gone through in the last few years, for example).

What I've discovered at Kellogg, however, is that this is most certainly not the case for a lot of companies in other industries. While the right company culture often serves as an amplifier for any initiative the company might be willing to undertake, it can also become a huge barrier to successfully introducing necessary changes. What's also interesting is that probably no company starts with the wrong culture in the first place; rather, over time, some organizations find themselves in a situation where certain aspects of the culture require adjustment due to changes in the market environment, customer preferences or the competition. What happens when this moment comes is very hard to predict, and depends on a wide range of factors: the customers' and the employees' perception of the company's mission; the employees' attitudes towards the company, which are again often rooted in that perception; the governance structure (e.g. being franchised definitely makes introducing changes more complex); whether the company is product- or service-driven (changing the culture of product-driven companies appears to be somewhat easier, but can bring other challenges); and so on.

What are some of the steps companies might take to make such adjustments easier in the future? For starters, it seems generally a good idea to start paying special attention to the company's culture while it's still emerging, and then keep re-evaluating its different aspects continuously, as incremental changes certainly come easier than all-encompassing reforms. Second, figuring out how the internal culture shapes the employees' and customers' perceptions of the company is crucial: once customers make up their minds, it's often extremely hard to do anything about it, and that in turn can affect the types of people the company is able to attract (especially for a B2C company). Finally, if the company's business model involves franchising, or is service-driven, bringing in the right people, who can empathize with the vision of the founders and top management and who share the same values, becomes especially important. After all, culture is by definition shaped by people, and if you're in a people-driven business, the culture essentially becomes your product.