Essays

Tech's Biggest Fear: Lack Of Growth Opportunities (Part 2)

This is the second article in a 2-post series — the first one covering Facebook can be found here.

When I was originally planning this series, I wanted to cover — and contrast — two companies that had recently started to experience peculiarly similar issues, but that in my mind nonetheless faced completely different challenges and outcomes going forward, thus creating an interesting case for comparison. Those companies were Facebook and Apple.

I already wrote a lengthy post about Facebook a few weeks back, and some of the recent developments have made parts of my Apple write-up unnecessary (the things I intended to cover have already played out). So I decided to make this post a bit less focused on stock valuation, and instead to spend more time brainstorming the challenges and opportunities I believe Apple is going to face going forward.

Apple: a trillion dollar company of a single product

Today, Apple’s story is almost legendary — the company nearly went bankrupt in 1997, and was worth less than $5 billion in 2000, but then rose to a valuation of $150 billion by 2007, claimed the top spot as the most valuable publicly traded company in the world in 2012, and finally became the first publicly traded company to surpass the coveted $1 trillion mark.

While one might argue that Apple started on the path to becoming the company it is today with the launch of the iMac in 1998, it seems fairer to say that it was actually the iPod, announced in 2001, that clearly signaled the beginning of a new era for Apple, followed by the iTunes Store (called the “iTunes Music Store” back then) that launched in 2003, and finally the original iPhone, introduced in 2007. From there, Apple has gone on to become the most valuable company in the world and has built what is perhaps the largest (and certainly the most profitable) consumer tech business of all time, culminating with its market capitalization surpassing $1.1 trillion by the end of September 2018.

And yet, after the announcement of Q3 results (well, Q4 in Apple’s case, given how its fiscal year aligns with the calendar one) on November 1, its share price fell more than 7%, the worst decline since 2014, amidst widespread investor backlash against Apple’s decision to stop reporting the number of devices sold starting next quarter.

So far, the stock price hasn’t shown any signs of recovery; instead, the ensuing news about weaker-than-anticipated demand for the new iPhones dragged it even lower, and as of yesterday’s close (December 17), Apple’s stock was worth less than $164, down almost 30% from its 52-week high of $233.47. (To be fair, part of the decline could be attributed to the horrible performance of the broader markets over the last few months, but even so, Apple’s stock performed much worse than the overall market in that time frame.) This decline cost Apple the top spot as the most valuable company, passing the title, as luck would have it, to Microsoft, which weathered the volatility of the last few months somewhat better than other big tech companies.

It’s not particularly hard to figure out the nature of the issue Apple’s facing — in fact, it can be summarized in a single chart:

Source: Statista.com


As you can see here, it has now been four years since iPhone unit sales last delivered any growth, which, coupled with the fact that iPhone sales account for about 60% of Apple’s overall revenue, poses an obvious problem for the company.

Over the last year or so, the introduction of the iPhone X, and then the iPhones XS and XS Max, has helped the company temporarily address those concerns by substantially increasing the average selling price of an iPhone ($793 in Q4 2018, vs. only $618 a year earlier). Still, the broader issue remains largely unsolved: with iPhone unit sales stagnant, and the iPhone accounting for such a large share of overall revenue, it’s hard to see how Apple could continue to deliver substantial revenue growth in the years to come. The revenue growth driven by rising average device prices came in handy (and helped Apple’s stock price continue climbing these last few years), but there was always a natural limit to how far Apple could raise prices before hitting the ceiling. Now, with nearly $800 in average selling price, that ceiling might finally be close — the recent news about less-than-stellar sales of the new, more-expensive-than-ever iPhones has further confirmed this hypothesis, and led to the extremely sharp decline in the stock price we’ve witnessed over the last few months.

To that point, this is why Apple’s decision to stop reporting the number of devices sold is viewed as troubling by investors. Even taking at face value Apple’s argument that the number of devices no longer provides a good estimate of the company’s performance, there would be no reason to get rid of this metric now, unless the company doesn’t expect any further substantial and sustained increases in either the number of devices sold or the average selling price (and would therefore prefer to stop reporting numbers that are set to lack growth altogether).

Finding new growth points

In many ways, the problems Facebook and Apple are currently experiencing are of the same nature. Both companies have sustained impressive growth rates for years, but now, nearing the saturation point, are facing limits to this growth. Both have done a great job monetizing their existing user base, but have no obvious way to continue growing these revenues indefinitely (the situation is a bit more nuanced for Facebook, where this argument mostly applies to its user base in the incumbent markets; more on that later). Finally, both are heavily dependent on a single product generating the majority of their revenues and profits.

And yet, despite all those similarities, what I personally find interesting here is that the future opportunities Facebook and Apple are facing are actually vastly different. This is also why I previously mentioned that I wanted to compare and contrast these companies — in my opinion, the markets were right to significantly discount Apple’s stock over the last few months (moreover, the expansion of its P/E multiple over the last few years had been largely unwarranted); Facebook, on the contrary, remains in my mind one of the most undervalued big tech companies right now.

Let’s dig a bit more into that, starting with Facebook. The company effectively holds a monopoly in the social media space: it owns two of the most popular social networks and the most popular global messenger app. While it might be facing limited growth opportunities in North America, it continues to grow its user base in other markets — one might argue that currently, Facebook isn’t doing a great job monetizing its user base outside of North America and Europe, but that could also be viewed as an opportunity, given that Facebook isn’t facing competitive pressure and isn’t losing its users because of its monopoly position in most markets — despite all the recent outrage against Facebook, most users simply have nowhere to go.

Now compare this to Apple. The company is heavily dependent on the sales of a single hardware product (the iPhone) that faces both ever-increasing competition from Android phones and longer upgrade cycles, simply because the smartphone category is now mature, and users don’t feel any significant pressure to upgrade frequently. The lock-in effect of the Apple ecosystem that helped the company cross-sell its products is mostly gone — today, nobody has to use iTunes to update iOS devices or to download content onto them (iTunes arguably acted as a central hub, effectively locking you into the iOS ecosystem, not to mention helping boost the sales of laptops and desktops running macOS, considering how terrible the iTunes experience on Windows was at one point); besides, most applications are now cross-platform and sync to the cloud, making switching devices, even ones running different operating systems, much easier.

True, iPhone as a product still has an army of loyal followers (with me being one of them, actually) and remains one of the best smartphones available in the market. It also continues to benefit from a substantial lock-in effect created by the easy migration to the newest device, and the app ecosystem (on the surface, this statement might seem contradictory to what I wrote in the previous paragraph, but it’s not: what’s gone — and what I was talking about in the previous paragraph — is the opportunity to sell more iPads or MacBooks to people who use iPhones simply because of this fact). So I personally wouldn’t expect iPhone sales to decline going forward, but there is also no substantial opportunity to sell more of them, and, judging by the events of this fall, the opportunity to continue to grow the average price is most likely gone too.

This leaves Apple in a precarious state, as it really needs to find new growth opportunities. Let’s now take a quick look into what those might be.

Services. Revenue from Services in Q4 FY18 reached $10 billion, or 16% of total revenues — this includes revenue from the App Store, AppleCare, Apple Pay, and Apple Music. Apple forecasts that Services revenue will increase to $14 billion per quarter by FY20, and Morgan Stanley predicts that Services will account for more than 50% of Apple’s total revenue growth going forward, the idea being that the potential for Apple to improve monetization of its existing users, as well as to add additional revenue streams, is very significant.

To be honest, I find this outlook to be all too rosy. It’s true that Apple has already built a truly humongous business in Services (especially with App Store and iTunes/Apple Music), and has the potential to grow it further over time. However, there are several factors that could make scaling Services business at a fast pace challenging, to say the least.

First of all, the idea that Apple is currently under-monetizing its user base is, in my mind, a fallacy. Unlike Facebook, which doesn’t sell anything to its users but rather essentially sells their eyeballs to advertisers (and thus can indeed improve revenue per user, as long as there is sufficient demand from advertisers and space on its website/mobile app for additional advertisements without ruining the user experience), Apple needs to sell additional services directly to its users in order to improve user monetization. The problem is, in most cases Apple users aren’t restricted to purchasing their content from Apple, and chances are, if they are interested in certain services, they are already spending their money elsewhere — which means that for Apple to grow its Services revenue (especially without a corresponding increase in hardware sales, and thus the overall user base), it has to convince those users to abandon whatever service providers they currently use (such as Spotify, Netflix, or Amazon) and switch to Apple’s offerings instead. To an extent, this is a zero-sum game, and it’s not easy to win — as demonstrated, for example, by the fact that Spotify’s user base remains substantially larger than Apple Music’s, despite the huge number of iOS devices that come with Apple Music pre-installed, and years of effort spent promoting it on Apple’s part.

The second issue is that Apple simply wasn’t built with the idea of selling services in mind. Its highly secretive, centralized, and hierarchical environment is very well-suited to building the best hardware devices, where Apple controls every aspect of the user experience on both the hardware and software side. Building a successful Services business, however, is a different story altogether. So far, Apple has proved that it can do a good job curating a catalog of content (meaning the App Store and iTunes) and collecting a percentage of revenue from any sales; but, for example, creating a video streaming service would mean competing against extremely data-driven and highly flexible competitors like Netflix and Amazon, which might prove very challenging for Apple. Not to mention that Apple’s desire to decide what types of content are allowed on its platforms could prove really harmful here, as its competitors aren’t bound by any such considerations, and there is no evidence that users would appreciate them.

Finally, while the marketplace model of the App Store and iTunes remains highly profitable for Apple, that wouldn’t necessarily be the case for new businesses such as streaming services — Spotify and Netflix can be viewed as two representative examples, with the first company yet to break even, and the second turning in minuscule profits while being saddled with significant debt ($8 billion+ as of September, and counting). What’s even more important, some of the key players in the market (see Amazon) don’t actually need to make their streaming services profitable — instead, they might offer such services to improve customer retention and deepen their relationship with users. The same strategy could, in principle, work for Apple as well, as long as hardware sales continued to rise, but it becomes a problem if the Services business is now regarded as a growth opportunity for a company whose investors are accustomed to high margins.

iPad. In itself, iPad sales (or the overall tablet market) are unlikely to grow much — while the iPad’s initial success got a lot of people to expect tablets to eventually overshadow the traditional PC industry, similar to what happened with smartphones, that never came to pass, and over the last few years tablet sales have been largely stagnant or even declining. Part of this is simply that the lifecycle of tablets turned out to be much longer than previously thought. Another reason is that truly valuable use cases for tablets never materialized — basic entertainment functionality doesn’t require the newest and most powerful hardware (not to mention that the never-ending increases in smartphone screen sizes turned that category into a formidable competitor to tablets), and the limitations of mobile operating systems restricted the opportunities to abandon laptops in favor of tablets (the fact that two-in-one devices from Microsoft and its partners, offering a combination of a full-scale desktop OS and tablet inputs, continued to get better didn’t help either).

However, I believe that the recent developments in the tablet space have created an interesting and unexpected opportunity that, if executed in the right fashion, might open a valuable new market for Apple.

Gaming. The third generation of iPad Pro that was released earlier this year was equipped with A12X chips — a proprietary 64-bit system on a chip developed by Apple and currently used to power both iPhones and iPads. Apple’s CPUs had been getting better and better throughout the last few years, but the truly outstanding performance of the latest chips was nonetheless a surprise — according to some tests, iPad Pro now boasts a performance comparable to that of the latest generation of 13-inch MacBook Pro which is powered by the most recent Intel processors belonging to the Coffee Lake series. In my mind, this achievement could now open a path to some pretty exciting new opportunities for Apple.

The gaming industry in 2017 was approaching $110 billion in annual revenues, with mobile accounting for $46 billion (tablets brought in a bit less than 25% of that), and consoles representing the second-largest segment, estimated at $33.5 billion. Apple already draws very decent revenues from gaming on smartphones and tablets, but the incredible performance of its newest chips, coupled with the success the Nintendo Switch has seen over the last 2 years (it sold 20 million devices in 15 months, which, according to some estimates, is on par with or better than PS4 sales at the same point of its lifecycle), indicates that there is very clear demand for a more serious portable gaming device, and Apple might just have the tech necessary to build such a product.

To be fair, I am not saying that this would be easy to do, or that Apple is necessarily the right company to execute on such an opportunity. Building a console business would likely require Apple to learn how to collaborate with the largest game studios much more closely than it currently does, and possibly would also mean that it would need to build its own production and publishing business, similar to what Microsoft currently does with Microsoft Studios; it might also face challenges creating a device that provides gamers with an experience comparable or superior to the one currently offered by the Switch, given Apple’s relative lack of expertise in the field (although it’s worth noting that Apple already made a couple of attempts to expand into the space, so it won’t be completely clueless).

And yet, I would argue that Apple stands a much better chance of expanding its presence in gaming than it does, say, building a successful streaming service. Its success in gaming would rely on the very same capabilities that made Apple so successful in the first place, namely, manufacturing a hardware product that offers users a superior experience through tight integration of hardware and software, whereas creating a streaming service (and the same is true for some of the other businesses classified as Services) often requires capabilities that Apple currently doesn’t possess.

The final question here is: if Apple enters the console gaming space and manages to build a successful presence there, would it help the company alleviate concerns about future growth? That remains to be seen. First of all, Apple hasn’t publicly announced any intention to do so, so everything described above remains speculation on my part. Second, even if it manages to eventually replicate the success of the Nintendo Switch, it would still need to build a huge business in the area for it to start making a difference in the broader context of the company’s financial performance.

To put the Nintendo Switch numbers into context, 20 million devices sold at a $300 retail price means bringing in $6 billion in revenue, which is a great deal of money for Nintendo, but not so much for Apple — it would likely need to do even better for this opportunity to be really worth it. However, the gaming market is growing at a healthy pace, and device revenue is not the only possible revenue stream here (game production and publishing is another; so is streaming), so in the long term, gaming might indeed turn out to be Apple’s best bet for growth.
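To make the scale argument concrete, here is a quick back-of-the-envelope check (a sketch, not a forecast; the FY2018 revenue figure is an approximation of Apple’s reported number):

```python
# Back-of-the-envelope: Switch-scale console revenue vs. Apple's top line.
switch_units = 20_000_000             # Switch devices sold in ~15 months
retail_price = 300                    # USD per unit
console_revenue = switch_units * retail_price

apple_fy18_revenue = 265_600_000_000  # approx. Apple FY2018 revenue, USD

print(f"Console revenue: ${console_revenue / 1e9:.0f}B")                               # $6B
print(f"Share of Apple's annual revenue: {console_revenue / apple_fy18_revenue:.1%}")  # ~2.3%
```

In other words, even a Switch-sized hit would move Apple’s top line by only a couple of percentage points, which is exactly why the opportunity has to be bigger than Nintendo’s to matter.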

macOS. I believe the chance that macOS will present any substantial growth opportunities for Apple going forward is very low; it is actually much more probable that the revenue share of desktops and laptops in the overall mix will continue to decline. The introduction of the iMac Pro in 2017 was an interesting development, but it remains a niche product with limited appeal to a wider audience, and the rest of the updates over the last few years were thoroughly unexciting. There is a small chance that the rumored upcoming switch to ARM processors would help Apple create a more differentiated offering in the space, but even then, the chances of substantial growth coming from this segment remain slim.

Apple Watch and AirPods. The Apple Watch is arguably the most successful wearable device today, but at this point the category is fairly mature, and while Apple reported 50% year-over-year revenue growth for the category in Q4 FY18, growth is likely to be slower going forward.

AirPods, however, are a different case — according to some estimates, Apple will sell 26 to 28 million units in 2018, vs. 14 to 16 million in 2017, a growth rate of 62% to 100%, and it could continue to grow the category aggressively, potentially reaching 100 to 110 million units in annual sales by 2021. If that turns out to be the case, Apple could draw annual revenue of up to $18 billion from this category.

On the surface, that’s a huge number, but in Apple’s case, the same argument I made about Gaming above applies here: even $18 billion in annual revenues would constitute only about 6.5% of Apple’s total revenue for FY 2018, and 6.1% of $294 billion in revenues forecasted by Morgan Stanley for 2021, which means that AirPods, however successful, might not become a category that is large enough to make a real difference going forward. Still, right now, AirPods might actually be Apple’s best bet for short-term growth.
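The arithmetic behind these estimates is easy to check (a small sketch using only the unit and revenue figures cited above):

```python
# Implied AirPods growth range from the cited unit estimates.
low_2017, high_2017 = 14_000_000, 16_000_000
low_2018, high_2018 = 26_000_000, 28_000_000

growth_low = low_2018 / high_2017 - 1    # most conservative pairing: 26M vs 16M
growth_high = high_2018 / low_2017 - 1   # most aggressive pairing: 28M vs 14M
print(f"Implied growth: {growth_low:.1%} to {growth_high:.0%}")  # 62.5% to 100%

# Potential AirPods revenue vs. Morgan Stanley's 2021 forecast for Apple.
airpods_revenue = 18_000_000_000
apple_revenue_2021 = 294_000_000_000
print(f"AirPods share of forecast 2021 revenue: {airpods_revenue / apple_revenue_2021:.1%}")  # ~6.1%
```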

And this brings us to the final conclusion:

Facing the future

In my opinion, it would be a mistake to either underestimate or overestimate the scale and seriousness of the challenges Apple is facing today. On the one hand, Apple remains one of the most successful tech companies in the world, and it is highly likely that the business it has built will continue to bring in huge profits for years to come; to that point, predictions of Apple’s inevitable demise are very unlikely to materialize. On the other hand, tech companies today are to a very significant extent judged by their ability to keep growing (indefinitely, if possible), and that’s where Apple is likely to face serious, if not altogether insurmountable, challenges.

Yes, there are some promising opportunities that the company can execute upon with its existing products and capabilities — Gaming and AirPods being the two that in my mind represent the most attractive targets. But even if both of those pan out, the main issue Apple faces today stems from its sheer size: it is really hard to find opportunities large enough to make a difference for a company with annual revenue of over $260 billion (AirPods are a great illustration of the issue: even if the wildest forecasts prove true, they would remain a small percentage of the company’s overall revenue).

What Apple really needs, if it has any hope of continuing to grow at the pace it has enjoyed over the last 15 years or so, is to find another opportunity of iPhone-like size that also aligns well with the company’s know-how and organizational capabilities. The problem is, such opportunities are so extremely rare that there is simply no guarantee one will emerge over the next decade, not to mention that even if it does, there is a good chance that Apple wouldn’t be the best organization to act upon it. And this is the key thing that distinguishes Apple from Facebook: the latter doesn’t really need to go looking for new opportunities (not that it shouldn’t search for them, of course, but there is no immediate pressure to do so) and instead has the luxury of focusing on its existing products, while for Apple finding new opportunities is an economic imperative if the company wants to continue to grow.

Disclosure: This article expresses my own opinions, and my opinions only. I am not receiving any compensation for it. I have no business relationship with either Facebook or Apple. I hold no position in Apple stock, and a long position in Facebook stock, and have no plans to adjust those positions or initiate new ones within the next 72 hours.

Amazon's HQ2: Why Blaming The Company Misses The Point

Not to worry, the second part of my previous post is coming, but today I wanted to focus on a different topic altogether — that is, on Amazon HQ2 announcement.

As I am sure most of you have heard by now, Amazon finally announced the location of its HQ2 yesterday — which, after all, isn’t going to be exactly an HQ2 (surprise!). Instead, the company has decided to split the jobs between Crystal City in Northern Virginia and Long Island City in Queens. This news was met, well, unfavorably by most, to say the least.

HQ1, HQ2, HQ[X]?

Perhaps the first thing one would notice is the outrage over the fact that HQ2 ended up being, well, not exactly a full-blown HQ. On the surface, that’s a fair thing to fume about: after all, Amazon spent over a year promoting its plans to create a huge office that would be comparable to its current base in Seattle in both influence and the number of jobs, and the company obviously used that framing to try to extract concessions from the cities lining up to compete for the opportunity — which again might justify the outrage over the fact that there will actually be no HQ2, at least not as it was previously described.

That being said, focusing too much on this would mean missing two important points. First, if we are really talking about global headquarters here, there is simply no such thing as a second headquarters, no matter the rhetoric. A global headquarters is by definition singular, and Amazon is by now so deeply rooted in Seattle that no other office would stand a chance of assuming a comparable position, no matter how large it might be. And while one might argue that, if that’s the case, then it was deceptive of Amazon from the start to describe its plans for a new office the way it did, this point is so simple that the only people who might have been deceived by Amazon’s marketing spiel were, well, willing to be deceived in the first place.

With that comes the second point: while there is no such thing as a second HQ, you don’t really need the same number of jobs as the original HQ to wield very significant influence over the company — just take Google’s Zurich office, for example, which houses only about 2,000 employees, but has still managed to gain wide recognition as a very well-respected engineering and research location, wielding significant influence over the company. And there are many more examples like that.

So, if influence is what you’re after, how many jobs your local office houses doesn’t really matter that much — what really matters is the kind of jobs, and whether a substantial number of them are in engineering, research, and product, not just sales and marketing (which is not to say that those jobs are in any way less important, but it’s often the engineering jobs, or lack thereof, that define the nature of an office). Quantity should really be at best a second priority here.

In any case, I find the claim that going from the initially promised 50,000 jobs in one city to 25,000 in each of two somehow changes the nature of those offices and turns them into “glorified satellite offices” quite ridiculous. To my knowledge, no tech company today has offices even remotely approaching this size — again, just as an example, Google recently stated that it plans to grow its NYC presence to (only) 14,000 employees over the next 10 years — so there is simply no point of reference for judging what those offices might turn into; if anything, it seems highly unlikely that an office of 25,000 corporate jobs would ever be reduced to a mere satellite of the headquarters.

The poisoned apple of tax subsidies

Next, the announcement has surfaced (or, rather, resurfaced) a growing wave of complaints about Amazon securing substantial tax subsidies that it doesn’t really need while putting additional strain on the already overextended infrastructure of NYC and Northern Virginia.

To that point, I am not here to argue that Amazon really needs or deserves those tax subsidies (there is simply no way to judge something like that objectively in any case), or that tax incentives always work out as intended (hint: they don’t). Truth is, tax incentives can in some cases be quite harmful to local municipalities, creating situations where they have to spread thin the tax dollars they collect from the rest of the taxpayers, or even downright depleting cities’ and states’ coffers (see cash grants). To echo a recent article in The Atlantic, one could make a sound argument that at least some of the tax subsidies currently being offered by regional governments to woo corporations to their dominions should be made downright illegal, or at the very least frowned upon (although, to be fair, that mostly applies to subsidies used to move existing jobs across state lines, which is very different from creating new jobs, as is the case with Amazon here).

Still, while it might be fair to pose the question of whether the big tech companies are the best recipients of the significant tax breaks they are often able to extract from the local governments, as long as such subsidies are legal and available, it seems strange to blame Amazon, or any other company, for taking advantage of such opportunities — after all, big tech companies remain commercial entities whose key purpose for existence is to make money for their shareholders, not try and solve the complex social issues of the cities or states they happen to reside in (which, realistically, they are simply unequipped to tackle too, for all their size and power).

Moreover, tax subsidies are, well, subsidies, meaning that, at least in theory, they shouldn’t really make local budgets poorer if set up correctly (which is why I am personally inclined to argue that cash grants shouldn’t be classified as mere subsidies, and probably deserve to be banned altogether). It is still possible, of course, for subsidies to become so large that the company receiving them is essentially capitalizing on existing infrastructure and using city/state services without ever (or at least for a long time) paying its fair share for those — but the point is, it doesn’t have to be that way. Instead, smart tax subsidies could be tied to specific KPIs, be it the number of jobs created, the amounts invested in the region, or something else, essentially functioning as profit-sharing agreements between companies and local authorities, and thus benefiting both; it’s the job of the governments to ensure that this is indeed the case.

Going back to Amazon HQ2, if we look at the arrangement negotiated with Long Island City as an example, the company is expected to receive up to $1.2 billion in tax subsidies over 10 years, in exchange for investing $2.5 billion in office space and then paying up to $10 billion in taxes over the next 20 years, which roughly translates into $48,000 to $61,000 in subsidies for each of the 25,000 jobs it promised to create there, according to TechCrunch’s calculation. With the cited 31% tax rate on a job paying $150,000, the subsidy translates into roughly 1 to 1.3 years of foregone tax revenue per job. Is that a lot? Maybe, but I am inclined to say that it’s not necessarily an outrageous price to pay, especially taking into account all the additional spending that typically follows those high-paying tech jobs (again, quoting the same article, Amazon claims that it boosted the local economy by $38 billion from 2010 to 2016).
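To make the arithmetic explicit, here is the same calculation as a small sketch (a simple ratio of the figures cited above; it ignores discounting and which level of government actually collects the tax):

```python
# Subsidy per job and years of foregone tax, using the cited deal figures.
total_subsidy = 1_200_000_000        # up to $1.2B in subsidies over 10 years
jobs = 25_000
subsidy_low = total_subsidy / jobs   # $48,000 per job (TechCrunch's low end)
subsidy_high = 61_000                # TechCrunch's high-end estimate per job

salary = 150_000
tax_rate = 0.31                      # cited effective tax rate
annual_tax = salary * tax_rate       # $46,500 in taxes per job per year

print(f"Subsidy per job: ${subsidy_low:,.0f} to ${subsidy_high:,.0f}")
print(f"Years of foregone tax: {subsidy_low / annual_tax:.1f} to {subsidy_high / annual_tax:.1f}")  # ~1.0 to ~1.3
```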

Now, one might argue that Amazon was likely to pick NYC for its HQ2 in any case, even if the city had decided not to offer any tax subsidies. That might well be true (after all, NYC was deemed a top contender from the start), but it’s also possible that Amazon would have chosen a different location (e.g. one that still offered subsidies), and all the upside from capturing those jobs would have been lost to NYC. The point is, there is simply no way of knowing, so it again comes down to the question of whether tax subsidies are beneficial and should remain lawful. But whatever the answer, it seems reasonable that as long as they exist, companies will (and should) continue to use them, and cities and states should continue to offer them whenever they believe it will boost their chances of securing the business — after all, that’s what rational market participants are supposed to do.

Actually, we don’t want those jobs here

Finally, the announcement led to an outcry from some of the people who live in the neighborhoods selected to house the new Amazon offices, as well as from local politicians, who are already blaming the company for the upcoming increases in the cost of living, and the inevitable displacement of local communities that would come with it.

While the other two issues discussed above are rather complicated and hard to untangle (and might also justify differing opinions), I find this one to be ridiculous in the sense that it completely misplaces the blame, and also risks throwing the baby out with the bathwater, metaphorically speaking.

Of course, it’s true that the ever-increasing cost of living is a huge issue in many places with a strong tech presence, and it’s a tragedy when people in local communities find themselves unable to stay in the neighborhoods they often spent a significant chunk of their lives in, and would have had no intention of leaving, if not for the rising costs. But it is hard to see how this is the fault of the companies based in those places; if anything, the blame should really lie with the local politicians (and sometimes, in the end, with the people themselves, however unfortunate that might sound), who often help to create the conditions that lead to these problems in the first place.

Therefore, I am willing to argue that it would do much more good to question the city and state officials’ decisions to grant permission to build in already congested areas, to call for investigations into underinvestment in local infrastructure where that appears to be the case, or to gather the political will to repeal regulations that prevent construction of additional housing, than to lash out at the companies as the ultimate evil. Those housing regulations often have a lot to do with the NIMBY attitudes of the locals: take SF, whose ridiculous cost of living is to a significant extent a direct result of laws passed 30 years ago that put severe restrictions on the amount of housing that could be built in any given year, coupled with the desire of those who already own real estate in the city to preserve their way of life.

The main point, however, is to remember that in order to pull people out of poverty and help them, one still needs to command the necessary economic resources, which in turn can only come with jobs. At the same time, it’s not the job (no pun intended) of private companies to address broad social issues; that is the exact reason local governments exist, and one of the means to achieve it is to tax the economic output of those companies and then responsibly spend the collected resources. This idea, however simple, seems to be escaping a lot of people lately: I’ve previously written how, for all their benefits, some of the EU members are starting to forget this, and now it increasingly seems that some parts of the U.S. are following suit. This is most unfortunate, if only because vilifying successful companies doesn’t really help to resolve any issues; instead, it diverts attention away from the real source of the problem, which could eventually deepen the issues even more, and make the processes that led to them in the first place even more broken.

P.S. I do think it’s possible that creating multiple relatively large offices in tier-2 tech hubs, instead of putting two huge ones in already saturated markets, would have been a better idea. But I also reckon that even if Amazon had decided to do that, those would still have been cities that are already doing quite well (Denver, Austin, etc.), and not the midwestern or deep south cities many hoped Amazon would help to revitalize. That, again, goes back to the point that it’s not exactly Amazon’s job to rebuild the economies of struggling regions, and the hard truth is that the company has to go where the people who will fill the jobs it creates are, or want to be, not where others need them to be.

Tech's Biggest Fear: Lack of Growth Opportunities (Part 1)

It’s been a while since I’ve published anything here; hopefully, that’s about to change. I’m looking forward to getting back to writing on a regular basis, aiming for a post per week or so.

This is the first part of a 2-post series. The second article can be found here.

An IPO priced at $104 billion, followed by the company’s valuation dipping below $50 billion within 3 months, then recouping all the losses within a year and climbing ever since, with the valuation peaking at over $600 billion this year. Recognize the company? I bet you do: this is, of course, Facebook.

Where does one go from 2 billion users?

As with any platform relying on advertising revenue, there are 4 key metrics Facebook’s performance depends upon: the number of active users on the platform, the time they spend there daily/monthly, advertising space utilization (which has 2 aspects to it: whether it’s possible to add more advertising blocks per screen, and whether a higher percentage of the existing ones can be sold), and, finally, the price per ad. It’s worth noting that the last metric, price per ad, often isn’t really an independent one, in the sense that unless utilization is at capacity, and is coupled with excess demand, thus creating scarcity for the advertising space, it’s unlikely that the price per ad would increase on its own (and even then, the price per ad remains subject to competition from other platforms).
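The decomposition above can be written down as a simple formula. The function below is an illustrative sketch, not Facebook’s actual model; every input except the 2.27 billion MAU figure cited in this post is a made-up toy number.

```python
# A minimal sketch of the ad-revenue decomposition described above:
# revenue = users x time x ad load x utilization x price per ad.
# The inputs below are illustrative assumptions, not Facebook's real numbers.

def quarterly_ad_revenue(active_users: float,
                         minutes_per_user_per_day: float,
                         ad_slots_per_minute: float,
                         fill_rate: float,
                         price_per_ad: float,
                         days: int = 90) -> float:
    """Quarterly ad revenue from the four levers named in the text."""
    impressions_sold = (active_users * minutes_per_user_per_day
                        * ad_slots_per_minute * fill_rate * days)
    return impressions_sold * price_per_ad

# Toy inputs: 2.27B users, 40 min/day, 0.4 ad slots/min, 80% fill, $0.005/ad
revenue = quarterly_ad_revenue(2.27e9, 40, 0.4, 0.8, 0.005)  # ~$13.1B
```

Plausible toy inputs land in the ballpark of the quarterly revenue figure discussed below, which is exactly the point of the decomposition: once user count, time spent and ad load all plateau, price per ad is the only lever left.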

Source: Facebook Q3 2018 Earnings Presentation

Now, when Facebook went public in May 2012, it had around 900 million monthly active users, and a bit over $1 billion in quarterly revenue. Today, the number of users has increased to 2.27 billion, while the revenue has grown to $13.7 billion. Granted, in 2018 not all of Facebook’s revenue is coming from Facebook.com: while the company doesn’t break out Instagram’s revenue, it’s estimated to have reached about $2 billion per quarter this year, and is poised to continue to grow. Still, it’s obvious that since 2012, Facebook has significantly ramped up its efforts to monetize its user base, to the point where it appears that it can no longer increase the number of ads in the News Feed. That, coupled with slowing user growth (which has come almost to a halt in its most lucrative markets, Europe and US & Canada, in recent quarters), makes it challenging for Facebook to continue growing revenue at the same pace in the future.

Source: Facebook Q3 2018 Earnings Presentation

Concerns over slowing revenue growth and revised guidance for the upcoming quarters were the main reason for the extremely sharp 20%+ decline in Facebook’s stock price in July, after it announced Q2 results. Granted, the Cambridge Analytica scandal and overall concerns over privacy probably didn’t help either; however, it’s worth noting that by the second half of July Facebook was back to trading at an all-time high, so it seems unlikely that either the Cambridge Analytica incident or the privacy concerns had much to do with the drop in stock price that followed the Q2 earnings announcement.

Reported on October 30, Q3 earnings did little to ease investors’ concerns over the company’s ability to continue to grow at a fast pace: while Facebook exceeded expectations on earnings, it missed forecasts on revenue and DAUs/MAUs, which arguably are more indicative of growth potential. It’s therefore curious that despite that, the market didn’t register any further decline in the stock price (although there were a couple of wild swings before the price settled at more or less the same level as before the earnings announcement). That being said, given that the current price of ~$145 per share is 18% lower than the $176 per share after the disastrous Q2 earnings announcement, and 33% lower than the 2018 peak price, one could argue that the not-so-stellar Q3 results were simply already baked into the stock’s price.

Now, it’s obvious that Facebook in 2018 is quite different from the company it was back in 2013, when its stock started its mind-blowing ascent (close to 10x returns in 5 years, from July 2013 to July 2018). Many of the concerns surrounding Facebook’s potential to continue to grow are justified, and require addressing. And yet, at 22x P/E, Facebook appears to be much cheaper than most of the other big tech companies: for instance, Google is currently trading at 42x P/E, Netflix at 108x, Microsoft at 45x, and Amazon at 96x (to be fair, P/E isn’t always a good benchmark for comparing valuations, but in this case, the discrepancy between Facebook’s valuation and that of the others is too substantial to be ignored, even if the metric itself is flawed).

Some might notice that one company is conspicuously absent from the list above. That company is, of course, Apple, which represents another potentially interesting case, as it’s currently trading slightly above 17x P/E.

It might indeed be worth spending some time to dig into Apple here. In many ways, the problems Facebook and Apple are currently experiencing are of the same nature. Both companies have sustained impressive growth rates for years, but now, nearing the saturation point, are facing limits to this growth. Both have done a great job monetizing their existing user base, but have no obvious way to continue to grow those revenues indefinitely. Finally, both are heavily dependent on a single product generating the majority of their revenues and profits.

And yet, despite all those similarities, what I personally find particularly interesting here is that the future opportunities that Facebook and Apple are likely to face are actually vastly different.

Side note: Apple was originally supposed to be a part of this article, as I intended to cover both Facebook and Apple in one post. However, since it was getting way too long, I’ve decided to split it in two, so Apple will be covered in the next piece.

Growing beyond original limitations

I think it would be fair to state that Facebook.com is unlikely to grow its MAUs much further, and the opportunities to monetize the geographies where it is still acquiring new users are likely to lag behind those of North America and Europe. The lagging monetization of the new content formats, namely Stories, represents another growing problem for the company.

That being said, however, it seems unwise to underestimate just how well protected Facebook’s market position currently is.

At the time of its public offering in 2012, Facebook’s monopoly on the market wasn’t yet solidified. Since then, however, it managed to acquire and then very successfully scale Instagram, which arguably would have represented its strongest competitor otherwise; it also removed the threat of WhatsApp by acquiring it, and managed to copy and then improve on some of the key features of Snap that are now available both on Facebook and on Instagram, thus crippling Snap’s ability to grow or retain its user base. All that, coupled with the decline of some of the stronger regional social networks (e.g. VK.com and Orkut), left users with no alternative but to continue to use Facebook’s properties (which include Instagram and WhatsApp). You do, of course, still have Twitter and LinkedIn, but the nature of those networks is substantially different from Facebook’s, to the point where it could be fair to argue that the common scenarios all three could possibly compete for are exceedingly rare.

Another trend that, while doing Facebook some (limited) harm short-term, might actually be helping to further entrench the company as the market leader is the current push towards privacy. Today, with developers no longer enjoying the freedom of access to Facebook’s entire social graph as they did in the earlier years, it becomes even more challenging for any aspiring competitor to scale its services, as it now has to compete against an incumbent whose services are already being used by everyone (and that thus enjoys tremendous network effects), while no longer being able to piggyback on Facebook’s social graph for its own benefit.

The same goes for regulations like GDPR: instead of, or rather, in addition to, putting oversight over what tech giants can and cannot do, it also makes it much more challenging for data to flow freely, which (surprise!) helps the incumbents the most, and actively harms smaller startups that often depend on 3rd-party user data being shared with them. This is exactly why, as I wrote previously, I believe GDPR stands to do more harm than good, and other countries, including the U.S., would do well to be wary of implementing similar regulatory frameworks (that’s not to say that privacy isn’t important, but it’s crucial to clearly understand the trade-offs you’re making).

In that sense, while Facebook might currently be struggling to monetize some of its geographies and/or content formats, the one thing it has (an extremely unusual one for a tech company, too) is time: it has by now all but eliminated competition, and its network effects are simply too strong for any new players to successfully compete against it. That, of course, doesn’t mean that no company could ever unseat Facebook from its throne as the king of social, but it could be a long time before anyone figures out a way to do that, which leaves Facebook the opportunity to continue to extract rents from its user base, while also figuring out better ways to do so.

Finally, Ben Thompson made an interesting point in his August post about Facebook: it’s possible that the new content formats, and Stories in particular, might eventually allow Facebook to finally tap into the brand advertising market, unlocking a huge market that the company hasn’t been able to substantially penetrate yet. If that turns out to be the case, the short-term hit Facebook has to take now from the limited opportunities to monetize Stories would be more than worth it in the longer term; indeed, intentionally pushing customers towards Stories, even if it means sacrificing some of the revenue from the News Feed, might prove to be exactly the right strategy.

Still a great opportunity

Source: Macrotrends.net

To that end, I believe that Facebook today is considerably undervalued compared to its big tech peers. While the company has a number of issues to work through, some of which weren’t apparent earlier (which might justify some of the correction in stock price we’ve witnessed), none of the fundamentals have really changed in 2018, which makes the extremely aggressive decline in its P/E ratio this year (or, even more generally, over the last 3 years) hard to justify.

After all, Facebook remains the largest social platform on the planet, and, in many cases, the only viable option, with plenty of opportunities for growth, and, more importantly, the luxury of having the time to figure out how to execute on those (again, the time here is simply a function of the lack of alternatives to Facebook/Instagram from the users’ perspective), which is really a rare thing in the tech space.

Disclosure: This article expresses my own opinions, and my opinions only. I am not receiving any compensation for it. I have no business relationship with either Facebook, or Apple. I hold no position in Apple stock, and a small position in Facebook stock, and have no plans to adjust my position(-s) or initiate new ones within the next 72 hours.

Why We Need To Rethink The Existing Safety Nets

These days, it seems that the discussion of how AI is going to disrupt the vast majority of industries in just a few short years rages everywhere. According to PitchBook, in 2017 VCs poured more than $10.8 billion into AI & machine learning companies, while the incumbents spent over $20 billion on AI-related acquisitions; according to Bloomberg, mentions of AI and machine learning on earnings calls of public companies have soared 7-fold since 2015; and just this week, The Economist published a series of articles, framed as a Special Report, on the topic.

In today's context, AI typically refers to machine learning, rather than any kind of attempt to create general intelligence. That, however, doesn't change the fact that the current technology has clearly moved past the point when it was of limited use to non-tech companies, and is now beginning to disrupt a large number of industries, including the ones that weren't particularly tech-savvy in the past. To quote McKinsey Global Institute's "Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation" report:

"We estimate that between 400 million and 800 million individuals could be displaced by automation and need to find new jobs by 2030 around the world, based on our midpoint and earliest (that is, the most rapid) automation adoption scenarios. New jobs will be available, based on our scenarios of future labor demand and the net impact of automation, as described in the next section. However, people will need to find their way into these jobs. Of the total displaced, 75 million to 375 million may need to switch occupational categories and learn new skills, under our midpoint and earliest automation adoption scenarios."

To be fair, McKinsey also states that less than 5% of all occupations consist entirely of activities that can be fully automated. Still, here's another valuable quote from the report:

"In about 60 percent of occupations, at least one-third of the constituent activities could be automated, implying substantial workplace transformations and changes for all workers."

Overall, there seems to be little doubt today that even with the current level of technology, the global workforce is about to enter a very volatile period that would require large numbers of people to learn new skills or be altogether retrained, or else risk losing their jobs, and face difficulties finding new employment.

The peculiar nature of disruptive technology adoption

I would also argue that while the tech industry, as well as the broader society, has often been overly optimistic when trying to forecast how soon certain revolutionary advances in technology would happen (heck, in their 1955 proposal the fathers of AI, including Marvin Minsky, John McCarthy and others, expressed their belief that significant progress towards a machine with general intelligence could be made in a single summer), once the core new technology became available, even the most daring forecasts for adoption rates often turned out to be too conservative.

This is especially true in cases where the technology in question was impactful enough, and its nature allowed for the formation of an ecosystem around it; in such cases, within just a few years, hundreds of thousands of stakeholders got involved, coming up with new creative ways to benefit from the advantages brought by the new tech.

With AI, or rather, with machine learning (in this case, the distinction is quite important), while the underlying technology is still evolving and will continue to do so, it's already good enough for a wide variety of applications, which prompted a rapid rise in the number of tech companies, startups, consultancies and independent developers involved in the space — today we already have a vast ecosystem around AI, with the ever-growing number of stakeholders involved, and it can only be expected to grow larger in the next few years.

Rethinking the safety nets

What that means is that even the most daring forecasts produced by McKinsey or anyone else might still underestimate the change that's coming. And if that turns out to be true, figuring out how to help all the people who are going to be displaced becomes of utmost importance: society will need to find ways to support those people through periods of unemployment, provide them with training that is effective in bringing them back into the workforce (the current government-run retraining programs, while costing taxpayers a lot, often turn out to be painfully ineffective, at least in the U.S.), and, ultimately, take care of those who for various reasons can't get back into the workforce, all on an unprecedented scale.

This calls for the creation of robust safety nets, while also making sure that they don't stifle economic growth: while the safety nets of some European countries are great for their citizens, they also place an undue burden on employers, and incentivize both mature companies and startups to move their business elsewhere, if possible (and in an increasingly global and interconnected world, it is indeed becoming possible to do so more and more frequently).

At first glance, there is a paradox here: the safety net is becoming increasingly important, but if a robust safety net stands to hurt economic growth, then there will be fewer jobs to go around, in turn making the safety net even more essential, and more costly to provide. This paradox, in turn, brings up the ultimate question: why are our safety nets designed around the assumption that the end goal for people is to have a formal full-time job? Note that this is the case for most developed countries, including the U.S.: while it might be easier to fire people in the States compared to many European countries, the system is still designed to incentivize people to seek full-time employment, in some ways even more so than in Europe.

If you think about it, it doesn't seem to make much sense to force people to look for full-time employment above everything else, or to force employers to make long-term commitments to their employees and to bear most of the burden associated with their safety nets, in a world that is increasingly global and going through rapid changes at an accelerating pace. Wouldn't it be better if at least most of the safety net came from the state, while employers were incentivized to optimize for efficiency and growth, bringing in people (and letting them go) as needed?

This added flexibility for employers doesn't need to be free either: it's no secret that corporate taxation is dysfunctional, but it's hard to fix without offering companies a decent reason to play nice (instead of moving the profit center to Ireland) and comply, and the added flexibility in managing their workforce could be a powerful incentive (especially in the HQ markets, where the workforce constitutes a significant expense and can't easily be moved elsewhere). For businesses, that would mean they are still being asked to pay their fair share, but at least they won't have to make upfront, long-term commitments that can often have perilous consequences in changing markets. That is particularly true for smaller companies.

Would such a world be more volatile for regular people? Alas, it most likely would. But it also stands to reason that in a world where your health insurance isn't tied to your employer but is instead provided by the state no matter what, and where you have the opportunity to go back to school as needed without having to worry about the cost, people would be much more daring in pursuing the career options that are best for them long-term.

The final piece: UBI

There is still one component missing, of course. If there is nothing preventing your employer from firing you without much notice, the safety net has to include some mechanism to account for that, and, most likely, it has to be more robust than the currently available programs, which brings the conversation to the concept of UBI, or universal basic income.

Now, that's an incredibly broad topic, and one that has been under discussion for decades, if not longer (for example, few people know that the U.S. actually conducted a number of negative taxation experiments back in the 1960s, and came close to implementing a form of basic income). Also, basic income doesn't stand for one particular idea, but rather covers a range of concepts: from offering everyone the same lump sum regardless of their income or wealth, to ideas of negative taxation that would help create an income floor for everyone, to proposals that are more limited in scope, but might still play a valuable role in helping to eliminate poverty and provide a safety net for people.

The most realistic concept I've seen so far, and the one I like the most, is described in the recently released book "Fair Shot: Rethinking Inequality and How We Earn", written by Facebook co-founder Chris Hughes. I'd highly recommend the book to anyone interested in the topic, but in short, the idea is to supplement the earnings of every household with an annual income of $50,000 or less with an additional $500/month per working adult (less, if the income is close to $50,000), building on top of the existing EITC program, and to pay for it by eliminating the preferential tax treatment of capital gains and imposing additional taxes on those who earn $250,000 or more per year.

While this idea is less daring than some of the more sweeping concepts of UBI, it has several extremely interesting components. First, it's much less expensive than some of the other UBI proposals, which in theory means it could be implemented even today. Second, unlike the calls to provide basic income to everyone regardless of their wealth or whether they are working, Chris proposes to provide this supplementary income to working adults with relatively low earnings, but to use a much broader definition of work than the one currently used in the EITC: the idea is to count as work any kind of paid gig (e.g. working for Uber, TaskRabbit and the like), as well as homemaking and studying. That way, people would remain incentivized to engage in productive activities, but wouldn't be limited in what they could do as much as they are now (although, interestingly, the vast majority of UBI experiments actually provide evidence that people receiving it continue to work, and even work more, instead of withdrawing from the workforce, so this concern is artificial to begin with). Third, while $500/month won't be enough to support someone with no other income, its value shouldn't be underestimated: studies show that even small amounts of cash can help people get by during the hardest periods and optimize their careers for the longer term.
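To make the mechanics concrete, here is a rough sketch of how such a supplement could be computed. The linear phase-out shape and the $40,000 phase-out start below are my assumptions for illustration only; the book describes the benefit tapering off near the $50,000 cap but does not pin it to this exact schedule.

```python
# A sketch of the "Fair Shot" style supplement described above:
# $500/month per working adult for households earning up to ~$50,000,
# tapering off near the cap. The taper shape is an assumption.

def monthly_supplement(household_income: float,
                       full_benefit: float = 500.0,
                       phaseout_start: float = 40_000.0,  # assumed
                       cap: float = 50_000.0) -> float:
    """Supplement per working adult, in dollars per month."""
    if household_income >= cap:
        return 0.0
    if household_income <= phaseout_start:
        return full_benefit
    # Linear taper between phaseout_start and cap (assumed shape).
    remaining = (cap - household_income) / (cap - phaseout_start)
    return full_benefit * remaining

monthly_supplement(30_000)  # 500.0 (full benefit)
monthly_supplement(45_000)  # 250.0 (halfway through the assumed taper)
monthly_supplement(60_000)  # 0.0 (above the cap)
```

The appeal of a schedule like this is that the benefit never drops off a cliff: earning an extra dollar always leaves the household better off overall, which preserves the incentive to work that the proposal is built around.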

The path ahead

Even with UBI in some form, guaranteed health insurance and access to free education, people wouldn't exactly get to enjoy their lives without having to worry about work: the goal, at least for now, should be to provide a safety net for periods of turmoil and to incentivize people to pursue riskier and more rewarding career opportunities, rather than to eliminate the need to worry about finding employment altogether. Still, having this safety net would mean a great deal for someone whose job has been eliminated by automation and who now has trouble finding work, or who needs to go back to college to get retrained, or who simply wants to quit her less-than-inspiring job and try to launch a business.

The change brought by globalization and automation is inevitable, and so most places will have to find a way to adapt to it, one way or another. Right now, places like the Netherlands or the Nordic countries already have well-developed safety nets, but often represent a challenging environment for new businesses to grow in, while other places (e.g. the U.S.) can be much more business-friendly, but don't offer all the protections necessary to support people who find themselves worse off than before. What remains to be seen is which path each of those countries chooses to pursue going forward, and how it plays out for them over the next 10-20 years.

Data Privacy And GDPR: Treading Carefully Is Still The Best Course

As the rage over the Facebook/Cambridge Analytica situation continues, calls for much more rigorous regulation of tech companies are becoming more and more common. On the surface, this seems reasonable: it's hard to argue that the handling of users' data by many companies remains messy, with users often left confused and frustrated, having no idea about the scope of the data they're sharing with those companies. And yet, I am going to argue that we, as users, customers and society as a whole, stand to lose a lot if we act purely on our instincts here: excessive regulation, if handled poorly, can harm the market immensely in the years to come, and ultimately leave us worse off, not better.

The current discussion around data privacy didn't actually start with the recent Facebook scandal. Over the last few weeks, you might have received notices from multiple tech companies about updated terms of service; those are driven by the companies' preparations for the General Data Protection Regulation, or GDPR, a new set of rules aimed at governing data privacy in the EU, set to kick in on May 25th this year. If you're interested, here are a couple of decent pieces providing an overview of GDPR, from TechCrunch and The Verge.

Now, it is still an EU regulatory framework, so naturally it only governs the handling of data that belongs to users who reside in the European Union, which prompts the question: why should people in other geographies bother to learn about it? Well, to answer that, here's a quote from the recent The Verge article:

"The global nature of the internet means that nearly every online service is affected, and the regulation has already resulted in significant changes for US users as companies scramble to adapt."

And that's exactly right: while GDPR only applies to the data of people in the EU, it's often hard, if not altogether impossible, to build a separate set of processes and products for a subset of your users, especially a subset as large, diverse and interconnected as the European users. Therefore, quite a few companies have already announced their intention to use GDPR as the "gold standard" for their operations worldwide, rather than just in the EU.

Quite a few things about GDPR are great: the new "terms of service" are about to become significantly more readable; companies will be required to ask users to explicitly opt in to data sharing arrangements, instead of opting their users in by default and then forcing them to look for buried "opt out" options; and the opportunity for users to request that any company provide a snapshot of all the data it has on them is likely to prove extremely useful. Abuse, as in the Facebook/Cambridge Analytica case (irrespective of who's to blame there), is also about to become much harder, not to mention much costlier for the companies involved (under GDPR, maximum fines can reach 4% of the company's global turnover, or €20 million, whichever number is larger).
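That fine cap is simple enough to express precisely; a one-line sketch of the rule quoted above:

```python
# GDPR maximum administrative fine for the most serious infringements:
# the greater of 4% of global annual turnover or EUR 20 million.

def gdpr_max_fine(global_turnover_eur: float) -> float:
    """Upper bound on a GDPR fine, in euros."""
    return max(0.04 * global_turnover_eur, 20_000_000.0)

gdpr_max_fine(10e9)   # EUR 10B turnover: the 4% turnover cap dominates (EUR 400M)
gdpr_max_fine(100e6)  # EUR 100M turnover: the flat EUR 20M floor applies
```

Note how the `max` works against large companies: the flat €20 million floor matters only for smaller firms, while for any company with turnover above €500 million the 4% branch takes over and the exposure scales with size.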

So what's the problem then? Well, first of all, GDPR compliance is going to be costly. Europe has already witnessed the rise of a large number of consultants helping companies satisfy all the requirements of GDPR before it kicks in in May. The issue is that large companies can typically afford to pay the consultants and lawyers to optimize their processes, while it's often the smaller companies, or the emerging startups, that can't afford the costs associated with becoming fully compliant with the new regulations.

That, in turn, can mean one of two things: either the authorities choose not to enforce the new laws to their full extent for companies below a certain threshold in terms of revenue or number of users, or GDPR threatens to seriously thwart competition, aiding the incumbents and harming the emerging players. The second scenario is hardly something that regulators, not to mention ordinary citizens, can consider a satisfactory outcome, especially in light of the recent outcry over Facebook, Google and a few other big tech companies: most people have no desire to see these companies become even more powerful than they are today, and yet that's exactly what GDPR might end up accomplishing, if it's enforced in the same fashion for all companies, irrespective of their size or influence.

The second problem is that while the first of the principles of GDPR, "privacy by design", isn't really new to the market, the second, "privacy by default", is a significant departure from how many tech companies, in particular those in the marketing/advertising space, operate today. In short, GDPR puts significant restrictions on the data about the user that companies are allowed to collect, and on the situations in which they're allowed to share it with their partners (in most cases, they'd need to obtain explicit consent from the user before her data could be shared). That potentially puts the entire marketing industry at risk, as most of the current advertising networks employ various mechanisms to track users throughout the internet, and routinely acquire data from third parties on users' activities and preferences in order to enable more effective targeted advertising. Right now, this way of doing things seems to be under direct threat from GDPR.

Now, there are plenty of people who believe that the current advertising practices of many companies are shady at best, and downright outrageous at worst, and that any regulation that forces the companies to rethink their business models should be welcomed. To that end, I want to make three points on why the situation isn't necessarily that simple:

1. Advertising is what makes many of the services we routinely use free. Therefore, if the current business model of the vast majority of those companies comes under threat, we need to accept that we'll be asked to pay for many more of the services we engage with than we do now. The problem, of course, is that most consumers, for better or worse, really hate to pay for the services they use online, which means that a lot of companies might find themselves without a viable business model to go on with.

2. The incumbents are the ones who stand to win here. What comes to mind when you think about the companies that don't need to rely upon third-party data about their users to successfully advertise to them? Facebook, LinkedIn, Google. Those companies already possess huge amounts of information about their users, and therefore they'd actually be the ones that are the least threatened by tightened regulations on data sharing, and likely to become even stronger, if their competitors for the advertising dollars are put out of business.

3. A "separate web" for the EU users. Right now, it looks like many companies are inclined to treat GDPR as the "gold standard". However, it's worth remembering that they still have another option. If GDPR compliance proves to be too harmful for their businesses, instead of adopting it globally, they might choose to go to the trouble of creating a separate set of products and processes for the EU users. That, of course, would most likely mean that those products would receive less attention than their counterparts used by the rest of the world, and would feature more limited functionality, harming the users who reside in the EU. It would also harm the competitiveness of the European companies, as well as their ability to scale globally, as, unlike their foreign-based peers, they would face more restrictive regulations that are expensive to comply with from the start, while, say, their U.S. peers would have the luxury of scaling in the more loosely regulated markets first, before expanding to Europe - at which point, they'd be more likely to have the resources necessary to withstand the costs of compliance.

Once all of this is taken into consideration, I'd argue that it becomes obvious that the benefits that come with the stricter regulation, however significant, don't necessarily outweigh the costs and the long-term consequences. Data privacy is, of course, a hugely important issue, but there is little to be gained from pursuing it above everything else, and a lot to lose. With GDPR, the EU has chosen to put itself through a huge experiment, with its outcome far from certain; the rest of the world might benefit from watching how the situation around GDPR unfolds, waiting to see the first results, and learning from them, before rushing to introduce similar proposals at home.

Cambridge Analytica Crisis: Why Vilifying Facebook Can Do More Harm Than Good

Throughout the week, I've been following the Facebook and Cambridge Analytica scandal as it's been raging on, and I've grown more and more incredulous. Yes, this is a pretty bad crisis for Facebook (which it inadvertently made even worse by its clumsy actions last week). But it still felt to me that the public outrage was overblown and to a significant degree misdirected. Here are the key things that contributed to those feelings:

1. Don't lose sight of the actual villains. Aleksandr Kogan and Cambridge Analytica are the ones truly responsible for this, not Facebook. Facebook's practices for managing users' data might have been inadequate, but it was Kogan who passed the data to Cambridge Analytica in violation of Facebook's policies, and then Cambridge Analytica who chose to keep the data instead of deleting it, as Facebook requested.

2. Nobody has a time machine. It might seem almost obvious that Facebook should have reacted differently when it learned that Kogan had passed the data to Cambridge Analytica in 2015 - e.g. an extensive data audit of Cambridge Analytica's machines would have certainly helped. The problem is, it's always easy to make such statements now, yet nobody has a time machine to go back and adjust her actions. Was Facebook sloppy and careless when it decided to trust the word of a company that had already been caught breaking the rules? Sure. Should it be punished for that? Perhaps, but rather than using the benefit of hindsight to argue that it should have acted differently in this particular case, it seems more worthwhile to focus on how most companies dealing with users' data approach those "breach of trust" situations in general.

3. Singling out Facebook doesn't make sense. To the previous point, Facebook isn't the only company operating in such a fashion. If one wants to put this crisis to good use, it makes more sense to demand more transparency and better regulatory frameworks for managing users' data, rather than single out Facebook and argue that it needs to be regulated and/or punished.

4. Don't lose sight of the forest for the trees. It's also important to remember that data privacy regulation is a two-way road, and by making the regulations tighter, we might actually make the Facebooks of the world stronger, not weaker, harming the emerging startups instead. This is a topic for another post, but in short, strict data regulation usually aids the incumbents while harming the startups, which find it more difficult to comply with all the requirements.

5. Data privacy is a right - since when? Finally, while the concept of data privacy as a right certainly seems attractive, it's not as obvious as it might seem. Moreover, it raises an important question - when exactly did data privacy become a right? This isn't a rhetorical question either. It certainly wasn't one in the past: many of the current incumbents enjoyed (or even continue to enjoy) periods of loose data regulation (e.g. Facebook in 2011-2015, or so). So if we pronounce data privacy to be a right today, we are essentially stifling competition going forward by denying the startups of today similar opportunities. Does this sound nice? Of course not, but that's the reality of the market, and we have to own it before making any rash decisions, even if some things seem long overdue.

Overall, this crisis is indicative of multiple issues around data management, and can serve to launch a productive discussion on how we might address data privacy concerns going forward. At the same time, it doesn't do anyone any good to vilify Facebook beyond what's necessary (and some of the reporting these days was utterly disgusting and irresponsible), the #deletefacebook campaign doesn't really seem justified (again, why not get rid of the vast majority of the apps then, given that Facebook isn't that different from the rest), and any further discussion about data privacy should be carefully managed to avoid potentially harmful consequences - most of us have no desire to find ourselves in a world where we have perfect data privacy, and no competition.

Designing Accessible Products

On Thursday, Microsoft announced Soundscape, an app that aims to make it easier for people who are blind or visually impaired to navigate cities by enriching their perception of their surroundings through 3D audio cues.

According to Microsoft:

"Unlike step-by-step navigation apps, Soundscape uses 3D audio cues to enrich ambient awareness and provide a new way to relate to the environment. It allows you to build a mental map and make personal route choices while being more comfortable within unfamiliar spaces."

To me, this appears to be a wonderful idea, and an app like this could eventually make a huge difference for people who are visually impaired, helping them navigate unfamiliar environments and make better use of everything cities have to offer.

While interning at Microsoft this summer, I was very impressed by the commitment the company has demonstrated to building more accessible tools. If you're interested in learning more about the work they are doing, there is a dedicated section on the company's website highlighting the principles Microsoft uses to think about inclusive design, and providing specific examples of their work.

Of course, Microsoft isn't the only major tech company that has demonstrated a commitment to building products that are truly accessible. Apple has long been known for its attention to accessibility, and continues to work to make its products accessible. Google, while not necessarily doing a great job in the past, seems to be catching up. And Amazon finally made its Kindle e-readers accessible once again in 2016, after 5 years of producing devices that weren't suited for those who are visually impaired (the early versions of Kindle readers were actually accessible too, but then Amazon gave up on this functionality).

And yet there are a lot of areas where tech products' accessibility leaves much to be desired, and many companies simply don't pay enough attention to it. They often come up with multiple reasons to justify it, too. Some companies state that designing with accessibility in mind is too hard or too expensive, or that it just makes their products look dull. Others believe that by ignoring accessibility issues, they're only foregoing a small percentage of the market (the figures typically mentioned are 5%, or less).

To be clear, none of those arguments should be viewed as acceptable. Moreover, designing with no regard to accessibility today is often classified as discrimination based on disabilities, and over the last 25 years, it has been made illegal in multiple countries (including the U.S. and U.K.), with the customers successfully suing companies who weren't providing accessible options.

But even if we put aside the legal aspect of the issue, do any of the excuses typically used by companies to avoid paying attention to accessibility actually have merit in them? As it turns out, not really.

According to the U.S. Census Bureau, in 2010 nearly 1 in 5 people (19%) had a disability, with more than half of them reporting their disability as severe. About 8.1 million people had difficulty seeing, including 2.0 million who were blind or unable to see. About 7.6 million people experienced difficulty hearing, including 1.1 million whose difficulty was severe. About 5.6 million used a hearing aid. Roughly 30.6 million had difficulty walking or climbing stairs, or used a wheelchair, cane, crutches or walker. About 19.9 million people had difficulty lifting and grasping. This includes, for instance, trouble lifting an object like a bag of groceries, or grasping a glass or a pencil.

Now, if you look at those numbers, the argument that by ignoring accessibility companies are foregoing only a small chunk of the market proves to be obviously incorrect. Even if you single out a particular disability, like having difficulty seeing, it still affects millions of people.

What is perhaps even more important, those numbers don't necessarily include everyone who might benefit from products being designed with accessibility in mind: a well thought-out design might also benefit people who are temporarily disabled, as well as the youngest and the oldest users. So it's not just about ensuring that people with disabilities would be able to use your products, but also about creating better products in general.

Here is one great quote related to this discussion, from the Slate.com article "The Blind Deserve Tech Support, Too: Why don’t tech companies care more about customers with disabilities?":

"When you make a product that’s fully accessible to the blind, you are also making a product accessible to the elderly, to people with temporary vision problems, and even to those who might learn better when they listen to a text read aloud than when reading it themselves. This is the idea of universal design: that accessible design is just better design."

Is designing for accessibility time-consuming and expensive? Sometimes, but overall, it really doesn't have to be. A lot of it has to do with learning about and following the best practices related to accessibility, and ensuring that the products you build adhere to the industry standards. Starting to do that might require a certain amount of resources, but in most cases it would be a one-time investment. Besides that, some of the things related to accessibility require very little effort on your part, e.g. adjusting your color scheme to make it easier for people who are color-blind to interact with your product. And in the process of making your products accessible, you are likely to materially improve the experience for your current users as well.
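To illustrate how mechanical some of these checks are, here's a small sketch of the WCAG 2.x contrast-ratio formula that color-scheme guidelines boil down to (the AA threshold for normal-sized text is 4.5:1); the specific colors below are just examples:

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color given as 0-255 channels."""
    def linearize(c):
        c = c / 255
        # sRGB gamma expansion, per the WCAG definition
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background gives the maximum possible ratio of 21:1
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # → 21.0
```

Running a check like this over a product's palette is exactly the kind of low-effort, one-time investment described above.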

Finally, we are entering an era when the new technology (AI, voice assistants, VR/AR, novel ways to input information, etc.) can contribute a great deal to making it easier for people with disabilities to interact with the products around them. Take, for example, this description of what could be achieved even with the current generation of voice assistants, from "Brave In The Attempt" article on Microsoft's accessibility efforts:

"One of the best Windows tools for people with mobility challenges is Cortana. Just with their voice, users can open apps, find files, play music, check reminders, manage calendars, send emails, and play games like movie trivia or rock, paper, scissors, lizard, Spock. The speech recognition software takes this even further. You can turn all the objects on your screen into numbers to help you choose with your voice. You can vocally select or double-click, dictate, or specify key presses. You can see the full list of speech recognition commands to see all that it can do."

Isn't such a tremendous opportunity to empower people to live much richer lives worth working just a little bit harder for?

The Future Of Online Education: Udacity Nanodegrees

In its 20+ year history, the online education market has experienced quite a few ups and downs. From the launch of lynda.com way back in 1995 (back then, it wasn't even an EdTech company yet, strictly speaking; it only started offering courses online in 2002), to Udemy, with its marketplace for online courses on every conceivable topic, to the MOOC revolution, which promised to democratize higher education - I guess it would be fair to say that the EdTech space has tried a lot of things over the years, and has gone through quite a few attempts to re-imagine itself.

On the last point, while MOOCs (massive open online courses) might not have exactly lived up to the (overhyped) expectations so far, the industry continues to live on and evolve, with startups like Coursera, edX and Udacity continuing to expand their libraries and experimenting with new approaches and programs.

Most recently, Udacity has shared some metrics that allow us to get a sense of how the company has been doing so far. And, in a word, we could describe it as "not bad at all". Apparently, in 2017 the company had 8 million users on the platform (that includes the users engaged with Udacity's free offerings), up from 5 million the year before. Udacity also doubled its revenue to $70 million, which constitutes an impressive growth rate for a company at this stage.

Now, the reason I believe those numbers are particularly interesting is the monetization approach Udacity took a few years ago, when it first introduced its Nanodegrees: 6-12 month long programs done in collaboration with industry partners, such as AT&T, IBM and Google, that should presumably allow students to build a deep enough skill set in a specific area to be able to successfully find jobs.

While the idea itself isn't necessarily unique - other companies have also been trying to create similar programs, be it in the form of online bootcamps, as is the case for Bloc.io, or the Specializations offered by Coursera - I would argue that Udacity's Nanodegrees offer the most appealing approach. Nanodegrees are developed in close partnership with industry players (unlike Coursera's Specializations, which are university-driven), and require a lower commitment (both financially and time-wise) compared to online bootcamps. Finally, Udacity's marketing approach is vastly superior to that of its key competitors, especially when the Nanodegrees were first launched (they were announced in partnership with AT&T, with AT&T committing to provide internships for up to 100 of the best students, which was a great move).

Some of the metrics Udacity shared this week were specifically related to Nanodegrees, and provided a glimpse into how those have been doing so far. In particular, Udacity reported that 50,000 students are currently enrolled in Nanodegrees, and 27,000 have graduated since 2014.

The price per Nanodegree varies quite a bit, and it can also depend on whether the program consists of a single term or several, but with the current pricing, it seems reasonable to assume that the average program probably costs around $500-700. With 50,000 students enrolled, that should amount to $25-35 million in run-rate revenues (strictly speaking, that isn't exactly run-rate, but that's unimportant here). The actual number might be a bit different, depending on a number of factors (the actual average price per course, the pricing Udacity offers to its legacy users, etc.), but I'd assume it shouldn't be off by much.
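The back-of-the-envelope math here is simple enough to spell out; keep in mind the $500-700 average price is my own assumption, not a figure Udacity has published:

```python
enrolled = 50_000  # students currently enrolled, per Udacity's reported metrics

# Assumed average price paid per Nanodegree, in USD (my estimate, not official)
avg_price_low, avg_price_high = 500, 700

low_estimate = enrolled * avg_price_low    # lower bound of the revenue estimate
high_estimate = enrolled * avg_price_high  # upper bound of the revenue estimate
print(low_estimate, high_estimate)  # → 25000000 35000000
```

A $100 shift in the assumed average price moves the estimate by $5 million, which is why the range is quoted so loosely.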

Those numbers ($25-35 million, give or take) are interesting, because they clearly show that Udacity must have other significant revenue streams. There are several possibilities here. In addition to offering learning opportunities to consumers, Udacity also works with businesses, which theoretically could account for a hefty chunk of the money it earned last year. Besides that, Udacity runs an online Master of Science in Computer Science program with Georgia Tech, which is a fairly large program today, and offers some other options to its users, such as the rather pricey Udacity Connect, which provides in-person learning opportunities, and a few Nanodegrees that still operate under its legacy monthly subscription pricing model, such as the Full Stack Web Developer Nanodegree. All of those could also contribute to the revenue numbers, of course.

And yet, if you look at the Udacity website today, and compare it to how it looked a couple of years ago, everything seems to be focused around the Nanodegrees now, whereas in the past, Udacity felt much more like Coursera, with its focus on free courses, with users required to pay only for additional services, such as certificates. The obvious conclusion here is that Udacity apparently considers Nanodegrees a success, and believes there is significant potential to scale them further.

One last interesting thing to consider is the number of people who have completed at least one Nanodegree since their introduction in 2014. According to Udacity, only 27,000 people have graduated so far, which is curious, given that it reports 50,000 people currently enrolled in at least one program, and most programs are designed to be completed in 6 to 12 months.

This can only mean one of two things: either Udacity has recently experienced a very significant growth in the number of people enrolling in Nanodegrees (which would explain the existing discrepancy between those two numbers), or the completion rates for the Nanodegrees historically have been relatively low.

Now, completion rates were one of the key issues for MOOCs, where they proved to be quite dismal. However, the situation for Udacity is somewhat different: here, the users have already paid for the program, so in a way, completion rates are less of a concern (and with the legacy pricing model, where Udacity charged users a monthly subscription, longer times to completion could have actually benefitted the company). On the other hand, low completion rates might ultimately contribute to poor reviews, negatively affect user retention, and damage the company's brand, so this issue still needs to be managed very carefully.

Will Udacity's Nanodegrees prove to be a success in the long run? That remains to be seen, but so far, it looks like the company has been doing a pretty good job with them, so the future certainly looks promising.

The Challenge Of Attracting The Best Talent

In one of the classes I'm currently taking at Kellogg, we recently touched on the issue of top K-12 teachers gravitating toward the better-performing schools, with the schools that represent a more challenging case often facing significant difficulties attracting and retaining top talent.

This problem, of course, isn't unique to the K-12 system. If you think about it, most of us would probably choose to move to a job that offers higher pay and a better working environment, whenever the opportunity presents itself, without a second thought. And if we believe that the new job would be just as, or more, meaningful than the old one, that typically seals the deal. And who could blame us?

And yet, once you start thinking about what that truly means, the answer becomes less clear. While it most certainly makes sense to look for greener pastures from an individual's perspective, we might wonder what kind of impact it has on the world around us. More importantly, are we even serving our own needs in the best possible way by following this line of thinking?

One particularly interesting example that immediately comes to mind to illustrate this point is Google. For years now, it has been highlighted as one of the most desirable employers in the world. It has the resources required to offer its employees extremely competitive levels of pay, and it is also famous for its great work environment - hey, it even tries to assess people's "Googliness" before hiring them, in order to determine whether they'll fit well with the company's culture.

Google is undoubtedly a great place to work, so it isn't really surprising that people from all over the world aspire to work there. However, there is also another side to that story. Almost every person I've talked to who's worked at Google has at some point brought up the issue of being surrounded by people who were overqualified for their jobs. Yes, Google's immense profitability has made it possible for the company to pay for the best available talent. But hiring the best people doesn't automatically mean that you have meaningful problems for them to work on. 

That, of course, doesn't mean that Google shouldn't aim to hire people of the highest caliber - after all, as long as it has the resources and the appeal required to attract them, both the employees and Google seem to be better off if it does. And yet, one might wonder, what could many of those people have achieved otherwise? Would the companies they'd work for have more challenging problems for them to work on? Or would some of those people actually start their own companies that'd eventually change the world?

The same goes for the K-12 system. Nobody could ever blame the teachers for the desire to work for the schools that offer better environments - even if one doesn't care for the compensation and surroundings, it can be much more fulfilling to work in such a place. The question, however, is what impact those teachers might have had at the lower-performing schools: some of those often have a much more pressing need for the best talent, but have trouble attracting such candidates.

So, what could be done to address this issue? I am afraid there are no easy answers here. The best talent is, and will always remain, a scarce commodity, and the best organizations often have a higher appeal (not to mention more resources to offer) to those workers - that is not going to change, nor should anyone want it to, really.

What we could do, however, is create additional incentives for people to take risks, whether that means going to work for a struggling school, or taking a leap of faith and starting a company. Some of those incentives might be financial in nature, but what seems even more crucial to me is for us as a society to promote the importance of rising to the challenge, especially when it doesn't bring any immediate rewards, and to celebrate those who choose to do so. This, of course, might be easier said than done, but it's not impossible, and it is very much worth the effort.

Fighting The Ivory Trade: The Lessons Learned

According to estimates, in 1979 there were at least 1.3 million African elephants. By the early 1990s, that number had dropped by more than half, to 600,000. Today, the estimates stand at around 415,000, with an additional 100 elephants being lost every day, mostly to poachers engaged in the ivory trade.

Recently, The Economist published a film describing the scope of the problem, and the efforts African countries are currently engaged in to reduce, and ultimately eliminate, poaching - I'd highly suggest watching it (it's only 6 minutes long).

The fight to stop poaching is a tough and complicated one, and as one can learn from the film, the best of intentions can sometimes lead to terrible consequences, undoing a lot of the good work that had been done previously. This is something I wanted to focus on, as I believe it's helpful to learn about some of the strategies described in the video, and the reasoning behind them, as those can be widely applicable to a number of other issues as well.

The fight to end the ivory trade has been going on for decades now, and while it hasn't always been a success, some progress has been made. However, while killing elephants for ivory had been made illegal, the trade itself wasn't completely banned: exceptions were made for some countries that made an effort to control poaching, and the ivory trade also remained legal, albeit with restrictions, in the countries that generated the majority of demand (China, Japan, the U.K.). That, in turn, created a surreal situation in which the legal and illegal trades co-existed side by side.

The problem is, one can declare that trading in tusks carved before a certain date is legal while trading in any tusks carved after that date is not (this is exactly how the system was set up in the U.K., where trading in tusks carved before 1947 remained legal), but there is no real way to separate the demand into those artificial buckets. Moreover, as it turned out, the very fact that the ivory trade was still allowed, even with all of the restrictions, legitimized the desire to own ivory in the eyes of those looking to purchase it.

This became particularly clear in 2008, when the decision was made to legally sell 102 tons of stockpiled tusks. As tusks had been seized over the years, it was never clear what to do with them in the long run, and guarding them remained expensive and often unsafe. So the argument was made that a legal sell-off would help raise the money needed to continue the conservation efforts, and would also help depress the price of ivory, making poaching less economically attractive.

That decision, however, backfired terribly. Those involved in the illegal trade viewed it as a signal that the ivory trade, legal or illegal, was back. Moreover, the huge amount of legal ivory flooding the market created a perfect cover for the expansion of the illegal trade, as it was often impossible to trace the origin of the tusks. And as it turned out, the legal sell-off didn't even depress prices; instead, they continued to rise. There were multiple theories on why that was the case, with the main explanation accepted today being that the excess demand for ivory was already there, and the legal sell-off certainly didn't help to promote the idea that purchasing ivory might be wrong or immoral.

In 2016, Kenya, trying to decide what to do with a huge amount (105 tons) of stockpiled tusks, and mindful of the terrible outcome of the 2008 legal sell-off, decided to take a different approach: it chose to burn them. It wasn't the first time Kenya had done that: it first burned 12 tons in 1989 in a widely publicized (and criticized) event, but it had never before aimed to destroy such an unbelievably huge amount of tusks.

At first glance, this idea might seem insane: those 105 tons were valued in the hundreds of millions of dollars that could have been used to fund further conservation efforts. Moreover, burning so much ivory could have created a sense of scarcity, driving the price of ivory even higher. Finally, some argued that destroying the tusks denigrated the dead animals, and sent the message that they were of no value. And yet, Kenya chose to proceed with its plan, widely publicizing the event.

The result: the price of ivory went from $2,000 per kilo in 2013 to around $700 today. That wasn't, of course, the result of Kenya's burning of the stockpiled tusks alone. Rather, it came as the result of a series of orchestrated efforts to raise awareness of the terrible consequences that the demand for ivory had for African elephants, as well as the gradually imposed bans on the legal trade throughout the world (in particular, in China and Hong Kong).

One might argue that the collapse of the legal trade should have just shifted the demand to the illegal market, creating scarcity and driving prices even higher. However, that didn't actually happen, and that's what made this strategy so valuable to learn from.

As it turned out, the demand for ivory was driven to a significant extent by the justification that the existence of a legal trade provided, and also by buyers' general unawareness of the real source of most of the ivory they were buying, and of the suffering their demand had generated. The phase-out of the legal ivory trade that's happening right now, together with the public efforts of the African governments to draw attention to the issue, stripped away those moral justifications, and as a result, the demand for ivory collapsed.

The laws of supply and demand were, of course, still in place, but the relationship between the two turned out to be much more complicated than many might have expected. This isn't unique to the ivory trade, either - there are other cases where the relationship between supply and demand is complex, and therefore requires very careful management to avoid disastrous consequences. I sincerely hope that those lessons of 2008 and 2016 will be further researched and publicized, as the price paid for these insights was surely too high to let them go to waste.