Why We Need To Rethink The Existing Safety Nets

These days, it seems that the discussion of how AI is going to disrupt the vast majority of industries in just a few short years rages everywhere. According to PitchBook, in 2017 VCs poured more than $10.8 billion into AI & machine learning companies, while the incumbents spent over $20 billion on AI-related acquisitions; according to Bloomberg, mentions of AI and machine learning on the earnings calls of public companies have soared 7-fold since 2015; and just this week, The Economist published a series of articles, framed as a Special Report, on the topic.

In today's context, AI typically refers to machine learning, rather than any kind of attempt to create general intelligence. That, however, doesn't change the fact that the current technology has clearly moved past the point where it was of limited use to non-tech companies, and is now beginning to disrupt a large number of industries, including ones that weren't particularly tech-savvy in the past. To quote McKinsey Global Institute's "Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation" report:

"We estimate that between 400 million and 800 million individuals could be displaced by automation and need to find new jobs by 2030 around the world, based on our midpoint and earliest (that is, the most rapid) automation adoption scenarios. New jobs will be available, based on our scenarios of future labor demand and the net impact of automation, as described in the next section. However, people will need to find their way into these jobs. Of the total displaced, 75 million to 375 million may need to switch occupational categories and learn new skills, under our midpoint and earliest automation adoption scenarios."

To be fair, McKinsey also states that less than 5% of all occupations consist entirely of activities that can be fully automated. Still, here's another valuable quote from the report:

"In about 60 percent of occupations, at least one-third of the constituent activities could be automated, implying substantial workplace transformations and changes for all workers."

Overall, there seems to be little doubt today that even with the current level of technology, the global workforce is about to enter a very volatile period that will require large numbers of people to learn new skills or be retrained altogether, or else risk losing their jobs and facing difficulties finding new employment.

The Peculiar Nature Of Disruptive Technology Adoption

I would also argue that while the tech industry, as well as the broader society, has often been overly optimistic when forecasting how soon certain revolutionary advances in technology would happen (heck, in their 1955 proposal, the fathers of AI, including Marvin Minsky, John McCarthy and others, stated their belief that significant progress towards a machine with general intelligence could be made in a single summer), once the core new technology became available, even the most daring forecasts for adoption rates often turned out to be too conservative.

This is especially true in cases where the technology in question was impactful enough, and its nature allowed an ecosystem to form around it — in which case, within just a few years, hundreds of thousands of stakeholders were involved, coming up with new and creative ways to benefit from the advantages the new tech brought.

With AI, or rather, with machine learning (in this case, the distinction is quite important), the underlying technology, while still evolving, is already good enough for a wide variety of applications. That has prompted a rapid rise in the number of tech companies, startups, consultancies and independent developers involved in the space: today we already have a vast ecosystem around AI, with an ever-growing number of stakeholders, and it can only be expected to grow larger in the next few years.

Rethinking The Safety Nets

What that means is that even the most daring forecasts produced by McKinsey or anyone else might still underestimate the change that's coming. And if that turns out to be true, figuring out how to help all the people who are going to be displaced becomes of utmost importance. Society will need to find ways to support those people through periods of unemployment, provide them with training that is effective in bringing them back into the workforce (the current government-run retraining programs, while costing taxpayers a great deal, often turn out to be painfully ineffective, at least in the U.S.), and, ultimately, take care of those who for various reasons can't get back into the workforce, while doing all of the above on an unprecedented scale.

This calls for the creation of robust safety nets for people, while also making sure that they don't stifle economic growth: while the safety nets of some European countries are great for their citizens, they also place an undue burden on employers, and incentivize both mature companies and startups to move their business elsewhere when possible (and in an increasingly global and interconnected world, that is indeed becoming possible more and more frequently).

At first glance, there is a paradox here: the safety net is becoming increasingly important, but if a robust safety net stands to hurt economic growth, then there will be fewer jobs to go around, in turn making the safety net even more essential, and more costly to provide. This paradox, in turn, brings up the ultimate question: why are our safety nets designed around the assumption that the end goal for people is to have a formal full-time job? Note that this is the case for most developed countries, including the U.S.: while it might be easier to fire people in the States than in many European countries, the system is still designed to incentivize people to seek full-time employment, in some ways even more so than in Europe.

If you think about it, it doesn't make much sense to force people to look for full-time employment above everything else, or to force employers to make long-term commitments to their employees and bear most of the burden associated with their safety nets, in a world that is increasingly global and going through rapid changes at an accelerating pace. Wouldn't it be better if at least most of the safety net came from the state, while employers were incentivized to optimize for efficiency and growth, bringing in people (and letting them go) as needed?

This added flexibility for employers doesn't need to be free, either: it's no secret that corporate taxation is dysfunctional, but it's hard to fix without offering companies a decent reason to play nice (instead of moving the profit center to Ireland) and comply, and added flexibility in managing their workforce can potentially be a powerful incentive (especially in HQ markets, where the workforce constitutes a significant expense and can't easily be moved elsewhere). For businesses, that would mean they are still being asked to pay their fair share, but at least they won't have to make upfront, long-term commitments that can often have perilous consequences in changing markets. That is particularly true for smaller companies.

Would such a world be more volatile for regular people? Alas, it most likely would. But it also stands to reason that in a world where your health insurance isn't tied to your employer but is provided by the state no matter what, and where you have the opportunity to go back to school as needed without having to worry about the cost, people would be much more daring in pursuing the career options that are best for them long-term.

The Final Piece: UBI

There is still one component missing, of course. If there is nothing preventing your employer from firing you without much notice, the safety net has to include some mechanism to account for that, and, most likely, it has to be more robust than the currently available programs, which brings the conversation to the concept of UBI, or universal basic income.

Now, that's an incredibly broad topic, and one that has been discussed for decades, if not longer (for example, few people know that the U.S. actually conducted a number of negative income tax experiments back in the 1960s, and came close to implementing a form of basic income). Also, basic income doesn't stand for one particular idea, but rather covers a range of concepts: from offering everyone the same lump sum regardless of their income or wealth, to negative income taxation that would create an income floor for everyone, to proposals that are more limited in scope, but might still play a valuable role in helping to eliminate poverty and in providing a safety net for people.

The most realistic concept I've seen so far, and the one I like the most, is described in the recently released book "Fair Shot: Rethinking Inequality and How We Earn" by Facebook co-founder Chris Hughes. I'd highly recommend the book to anyone interested in the topic, but in short, the idea is to supplement the earnings of every household with an annual income of $50,000 or less with an additional $500/month per working adult (less, if the income is close to $50,000), building on top of the existing EITC program, and to pay for it by eliminating the preferential tax treatment of capital gains and imposing additional taxes on those who earn $250,000 or more per year.
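To make the mechanics concrete, here is a minimal sketch of the proposed supplement. The $500/month figure, the per-working-adult multiplier and the $50,000 cap come from the description above; the exact taper for incomes "close to $50,000" isn't specified here, so the linear phase-out (and its $40,000 starting point) is purely my assumption for illustration:

```python
def monthly_supplement(household_income: float,
                       working_adults: int,
                       full_amount: float = 500.0,
                       phase_out_start: float = 40_000.0,  # assumed, not from the book
                       cap: float = 50_000.0) -> float:
    """Estimate the monthly supplement for a household under the
    Fair Shot-style proposal sketched above (taper is hypothetical)."""
    if household_income >= cap:
        return 0.0
    if household_income <= phase_out_start:
        fraction = 1.0
    else:
        # Taper linearly from 100% down to 0% across the phase-out band.
        fraction = (cap - household_income) / (cap - phase_out_start)
    return full_amount * fraction * working_adults

# Under these assumptions, a two-earner household on $30,000/year
# would receive $1,000/month; one on $45,000 with a single working
# adult would receive $250/month.
```

The interesting design property this makes visible is that the benefit phases out gradually rather than vanishing at a cliff, so earning one more dollar never leaves a household worse off overall.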

While this idea is less daring than some of the more sweeping concepts of UBI, it has several extremely interesting components. First, it's much less expensive than some of the other UBI proposals, which in theory means it could be implemented even today. Second, unlike the calls to provide basic income to everyone regardless of their wealth or employment status, Chris proposes to provide this supplementary income to working adults with relatively low earnings, but to use a much broader definition of work than the one currently used in the EITC: the idea is to count any kind of paid gig as work (e.g. working for Uber, TaskRabbit and the like), and to count homemaking and studying as work as well. That way, people would remain incentivized to engage in productive activities, but wouldn't be limited in what they could do as much as they are now (although, interestingly, the vast majority of UBI experiments actually provide evidence that people receiving it continue to work, and even work more, rather than withdrawing from the workforce, so this concern is artificial to begin with). Third, while $500/month won't be enough to support someone with no other income, its value shouldn't be underestimated: studies show that even small amounts of cash can help people get by during the hardest periods and optimize their careers for the longer term.

The Path Ahead

Even with UBI in some form, guaranteed health insurance and access to free education, people wouldn't exactly get to enjoy their lives without having to worry about work: the goal, at least for now, should be to provide a safety net for periods of turmoil and to incentivize people to pursue riskier and more rewarding career opportunities, rather than to eliminate the need to worry about finding employment altogether. Still, having this safety net would mean a great deal for someone whose job has been eliminated by automation and who now has trouble finding work, or who needs to go back to college to get retrained, or who simply wants to quit her less than inspiring job and try to launch a business.

The change brought by globalization and automation is inevitable, and so most places will have to find a way to adapt to it, one way or the other. Right now, places like the Netherlands or the Nordic countries already have well-developed safety nets, but often represent a challenging environment for new businesses to grow in, while other places (e.g. the U.S.) can be much more business-friendly, but don't offer all the protections necessary to support people who find themselves worse off than before. What remains to be seen is which path each of those countries chooses to pursue going forward, and how it plays out for them over the next 10-20 years.

Data Privacy And GDPR: Treading Carefully Is Still The Best Course

As the rage over the Facebook/Cambridge Analytica situation continues, calls for much more rigorous regulation of tech companies are becoming more and more common. On the surface, this seems reasonable: it's hard to deny that many companies' handling of users' data remains messy, with users often left confused and frustrated, having no idea of the scope of the data they're sharing with those companies. And yet, I am going to argue that we — as users, customers and society as a whole — stand to lose a lot if we act purely on our instincts here: excessive regulation, if handled poorly, can harm the market immensely in the years to come, and ultimately leave us worse, not better, off.

The current discussion around data privacy didn't actually start with the recent Facebook scandal. Over the last few weeks, you might have received notices from multiple tech companies about updated terms of service — those are driven by the companies' preparations for the General Data Protection Regulation, or GDPR, a new set of rules governing data privacy in the EU, set to kick in on May 25th this year. If you're interested, here are a couple of decent pieces providing an overview of GDPR, from TechCrunch and The Verge.

Now, it is still an EU regulatory framework, so naturally, it only governs the handling of data that belongs to users who reside in the European Union — which prompts the question: why should people in other geographies bother to learn about it? Well, to answer that, here's a quote from the recent The Verge article:

"The global nature of the internet means that nearly every online service is affected, and the regulation has already resulted in significant changes for US users as companies scramble to adapt."

And that's exactly right: while GDPR only applies to the data of EU residents, it's often hard, if not altogether impossible, to build a separate set of processes and products for a subset of your users, especially a subset as large, diverse and interconnected as the European users. Therefore, quite a few companies have already announced an intention to use GDPR as the "gold standard" for their operations worldwide, rather than just in the EU.

Quite a few things about GDPR are great: the new "terms of service" are about to become significantly more readable; companies will be required to ask users to explicitly opt in to data sharing arrangements, instead of opting users in by default and then forcing them to hunt for buried "opt out" options; and the opportunity for users to request that any company provide a snapshot of all the data it has on them is likely to prove extremely useful. Abuse, as in the Facebook/Cambridge Analytica case (irrespective of who's to blame there), is also about to become much harder, not to mention much costlier for the companies involved: under GDPR, maximum fines can reach 4% of a company's global turnover, or €20 million, whichever is larger.
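That "whichever is larger" clause is worth pausing on, because it means the €20 million floor dominates for smaller companies while the 4% rule dominates for giants. A one-line sketch of the ceiling described above:

```python
def max_gdpr_fine(global_turnover_eur: float) -> float:
    """Upper bound on a GDPR fine for the most serious infringements:
    the greater of 4% of global annual turnover or a flat EUR 20 million."""
    return max(0.04 * global_turnover_eur, 20_000_000.0)

# For a company with EUR 1 billion in annual turnover, the ceiling is
# EUR 40 million; for a firm with EUR 10 million in turnover, the flat
# EUR 20 million floor applies — i.e. double its entire revenue.
```

Note that these are maximums the regulators may impose, not automatic penalties; actual fines are set case by case.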

So what's the problem then? Well, first of all, GDPR compliance is going to be costly. Europe has already witnessed the rise of a large number of consultants helping companies satisfy all the requirements of GDPR before it kicks in in May. The issue is that large companies can typically afford to pay the consultants and the lawyers to optimize their processes, while it's often the smaller companies and the emerging startups that can't afford the costs associated with becoming fully compliant with the new regulations.

That, in turn, can mean one of two things: either the authorities choose not to enforce the new laws to their full extent for companies below a certain threshold in terms of revenue or number of users, or GDPR threatens to seriously thwart competition, aiding the incumbents and harming the emerging players. The second scenario is hardly something that the regulators, not to mention ordinary citizens, can consider a satisfactory outcome, especially in light of the recent outcry over Facebook, Google and a few other big tech companies — most people have no desire to see these companies become even more powerful than they are today, and yet that's exactly what GDPR might end up accomplishing if it's enforced in the same fashion for all companies, irrespective of their size or influence.

The second problem is that while the first of GDPR's principles, "privacy by design", isn't really new to the market, the second, "privacy by default", is a significant departure from how many tech companies, in particular those in the marketing/advertising space, operate today. In short, GDPR puts significant restrictions on the data about users that companies are allowed to collect, and on the situations in which they're allowed to share it with their partners (in most cases, they'd need to obtain explicit consent from the user before her data could be shared). That potentially puts the entire marketing industry at risk, as most of the current advertising networks employ various mechanisms to track users across the internet, and routinely acquire data from third parties on users' activities and preferences in order to enable more effective targeted advertising. Right now, this way of doing things seems to be under direct threat from GDPR.

Now, there are plenty of people who believe that the current advertising practices of many companies are shady at best, and downright outrageous at worst, and that any regulation forcing the companies to rethink their business models should be welcomed. To that end, I want to make three points on why the situation isn't necessarily that simple:

1. Advertising is what makes many of the services we routinely use free. Therefore, if the current business model of the vast majority of those companies comes under threat, we need to accept that we'll be asked to pay for many more of the services we engage with than we do now. The problem, of course, is that most consumers, for better or worse, really hate to pay for the services they use online, which means that a lot of companies might find themselves without a viable business model to go on with.

2. The incumbents are the ones who stand to win here. What comes to mind when you think about the companies that don't need to rely upon third-party data about their users to successfully advertise to them? Facebook, LinkedIn, Google. Those companies already possess huge amounts of information about their users, and therefore they'd actually be the ones that are the least threatened by tightened regulations on data sharing, and likely to become even stronger, if their competitors for the advertising dollars are put out of business.

3. A "separate web" for the EU users. Right now, it looks like many companies are inclined to treat GDPR as the "gold standard". However, it's worth remembering that they still have another option. If GDPR compliance proves too harmful for their businesses, instead of adopting it globally, they might choose to go to the trouble of creating a separate set of products and processes for the EU users. That would most likely mean those products receive less attention than their counterparts used by the rest of the world, and feature more limited functionality, harming the users who reside in the EU. It would also harm the competitiveness of European companies, as well as their ability to scale globally: unlike their foreign-based peers, they would face more restrictive and expensive-to-comply-with regulations from the start, while, say, their U.S. peers would have the luxury of scaling in more loosely regulated markets first, before expanding to Europe — at which point they'd be more likely to have the resources necessary to withstand the costs of compliance.

Once all of this is taken into consideration, I'd argue it becomes obvious that the benefits that come with stricter regulation, however significant, don't necessarily outweigh the costs and the long-term consequences. Data privacy is, of course, a hugely important issue, but there is little to be gained from pursuing it above everything else, and a lot to lose. With GDPR, the EU has chosen to put itself through a huge experiment, with an outcome that is far from certain; the rest of the world might benefit from watching how the situation around GDPR unfolds, waiting to see the first results, and learning from them, before rushing to introduce similar proposals at home.

Cambridge Analytica Crisis: Why Vilifying Facebook Can Do More Harm Than Good

Throughout the week, I've been following the Facebook and Cambridge Analytica scandal as it's been raging on, growing more and more incredulous. Yes, this is a pretty bad crisis for Facebook (which it inadvertently made even worse with its clumsy actions last week). But it still felt to me that the public outrage was overblown and, to a significant degree, misdirected. Here are the key things that contributed to those feelings:

1. Don't lose sight of the actual villains. Aleksandr Kogan and Cambridge Analytica are the ones truly responsible for this, not Facebook. Facebook's practices for managing users' data might have been inadequate, but it was Kogan who passed the data to Cambridge Analytica in violation of Facebook's policies, and then Cambridge Analytica who chose to keep the data instead of deleting it as Facebook requested.

2. Nobody has a time machine. It might seem almost obvious that Facebook should have reacted differently when it learned in 2015 that Kogan had passed the data to Cambridge Analytica — an extensive data audit of Cambridge Analytica's machines, for example, would certainly have helped. The problem is, it's always easy to make such statements now, yet nobody has a time machine to go back and adjust her actions. Was Facebook sloppy and careless when it decided to trust the word of a company that had already been caught breaking the rules? Sure. Should it be punished for that? Perhaps, but rather than using the benefit of hindsight to argue that it should have acted differently in this particular case, it seems more worthwhile to focus on how most companies dealing with users' data approach those "breach of trust" situations in general.

3. Singling out Facebook doesn't make sense. To the previous point, Facebook isn't the only company operating in such a fashion. If one wants to put this crisis to good use, it makes more sense to demand more transparency and better regulatory frameworks for managing users' data, rather than to single out Facebook and argue that it needs to be regulated and/or punished.

4. Don't lose sight of the forest for the trees. It's also important to remember that data privacy regulation is a two-way road, and by making the regulations tighter, we might actually make the Facebooks of the world stronger, not weaker, harming the emerging startups instead. This is a topic for another post, but in short, strict data regulation usually aids the incumbents while harming the startups that find it more difficult to comply with all the requirements.

5. Data privacy is a right — since when? Finally, while the concept of data privacy as a right certainly seems attractive, it's not as obvious as it might seem. Moreover, it raises an important question: when exactly did data privacy become a right? This isn't a rhetorical question, either. It certainly wasn't a right in the past: many of the current incumbents have enjoyed (or even continue to enjoy) periods of loose data regulation (e.g. Facebook in 2011-2015, or so). So if we pronounce data privacy to be a right today, we are essentially stifling competition going forward by denying the startups of today similar opportunities. Does this sound nice? Of course not, but that's the reality of the market, and we have to own it before making any rash decisions, even if some things seem long overdue.

Overall, this crisis is indicative of multiple issues around data management, and can serve to launch a productive discussion on how we might address data privacy concerns going forward. At the same time, it doesn't do anyone any good to vilify Facebook beyond what's necessary (and some of the reporting these days was utterly disgusting and irresponsible), the #deletefacebook campaign doesn't really seem justified (again, why not get rid of the vast majority of apps then, given that Facebook isn't that different from the rest), and any further discussion about data privacy should be carefully managed to avoid potentially harmful consequences: most of us have no desire to find ourselves in a world where we have perfect data privacy, and no competition.

Why We Should Focus On Our Similarities, Not Uniqueness

"Define America in one word... Possibilities. Americans always believe anything is possible."

Tonight, Joe Biden, the 47th Vice President of the U.S., came to Kellogg to deliver a talk on unequal economic growth. For me, it was the first time I got to witness such a high-profile politician speak in person, so, as you can imagine, I was fairly excited about it. And I definitely wasn't disappointed: overall, it was a very interesting and insightful talk. The unequal economic growth of the last decades remains a significant issue that should not be overlooked, and Vice President Biden touched on many of the key points in his speech.

In particular, his push for healthcare and education to be treated as people's basic rights, and not privileges, felt appropriate and refreshing. His comments about the unfair restrictions that companies today often force onto workers, limiting their job mobility and bargaining power, or about the unreasonably harsh licensing requirements that stifle competition for many jobs, were spot on, while also staying reasonable: he focused only on the right of workers to compete for jobs and fair pay in an open marketplace, and not on how people are entitled to those jobs in the first place (an argument that a certain person-who-must-not-be-named likes to appeal to so much).

Was Biden's speech mostly focused on the U.S.? Well, yes, but in a way that was to be expected. In business school, it's easy to grow accustomed to the idea of bringing a global perspective into every discussion, but one can't expect everyone to follow this approach, nor is it really necessary. After all, most of us probably didn't come to Kellogg today expecting Biden to deliver a lecture on the issues of inequality globally - we can always look to Gates and others for that.

However, there was one thing in today's talk that rubbed me the wrong way. In his speech, the Vice President repeatedly emphasized the uncanny ability of the U.S. to reimagine itself, the unique qualities that the U.S. and its people possess, and its special place in the history of the world, in the process making a few unflattering remarks about China, and also, to my surprise, the U.K., France and Germany.

Curiously enough, I actually agree with most of those remarks: in my opinion, it's quite fair to say that the U.S. holds a unique place in the world today, to talk about the very special traits and qualities that brought many people to the U.S. in the first place, then helped them succeed there and build the country as we know it, and to point to the country's exceptional ability to reimagine itself and push forward.

Still, I feel that it's not enough for a statement to simply be correct to make for a compelling, and, more importantly, right argument, and in my opinion, that was exactly the case here. In today's global world, there is more to be gained from focusing on how everyone might benefit from increased cooperation, predicated on every country acknowledging its strong and weak sides, as well as taking the time to praise, and learn to work with, the strengths of its partners. It's not that the U.S. (or any other place) needs to suddenly lose its unique advantages, or forget its history, of course. Rather, it's about the country seeing itself as an essential part of a larger world made of equals, and then promoting that kind of worldview among its citizens.

There is also another argument to be made here. The sense of uniqueness can be a source of pride, but it can also easily lead to feelings of superiority or entitlement. Yes, Vice President Biden did specifically mention that to him, this discussion isn't about entitlement, but that's the issue with the concept of uniqueness: what it actually means is open to everyone's interpretation. Coming from another country with a long history of viewing itself, and its people, as a unique and powerful force in the world (for those of you who don't know, I'm originally from Russia), I've seen firsthand some of the issues that often stem from such positioning. Yes, a sense of national pride can do a lot of good for any country and its people, but it can also be a dangerous force if taken too far, with the sentiments of people around it easily manipulated — which makes me convinced that now is not the time to appeal to it, as the dangers far outweigh any possible benefits.

So while I agree with the essence of the comments Vice President Biden made in his speech, I also strongly believe that in today's world, which is becoming increasingly global and yet is riddled with xenophobia, civil unrest, and white supremacy movements gaining ground, the "identity of uniqueness", if you will, even when tied to a country rather than a race, ethnicity, or religion, should perhaps make way for the idea that everyone in the world is essentially the same, and that it is ever more important for all of us to work together. After all, whether we like it or not, the world we live in is already global, and nothing will ever reverse that, so the sooner we adjust our philosophies and rhetoric accordingly, the better off we'll all be.

Silicon Valley's Imminent Demise Is Way Overrated

It has become almost obligatory for every publication to occasionally run a piece on the imminent decline of Silicon Valley, and the opportunities that exist elsewhere (the latest being the NYT, with its "Silicon Valley Is Over, Says Silicon Valley" article). The only issue with such statements? They often don't have a shred of evidence to support them.

All right, maybe that wasn't entirely fair. The declining costs of starting a company and bringing a product to market, combined with the ever-increasing interconnectedness of the world, have in fact created fertile ground for startup hubs to grow throughout the world — although I would argue that the actual vitality of many of the places typically mentioned by the media is rather overrated. Then, there are also several cities/regions that have done a great job attracting the biggest tech companies to open offices there, or even establish regional headquarters (with Ireland and Singapore being perhaps the most prominent examples). Finally, there are, of course, plenty of organizations that were established in places other than Silicon Valley to begin with, and have grown to successfully challenge their Silicon Valley rivals.

But all in all, Silicon Valley is alive and well, and isn't going anywhere. And the surest way to confirm that would be to try to find any reasonable metric showing that something is amiss in the Valley, compared to the previous year, or the years before that. Mind you, I'm not talking about data showing that other places are doing well (thankfully, building and growing companies isn't a zero-sum game, after all). Rather, if one wants to claim the decline of Silicon Valley, it seems reasonable to ask for data that clearly shows just that.

The problem is, it's extremely hard to build a case for it with data, especially if you try to account for the influence that the outside factors (e.g. the economy experiencing a recession, or a period of active growth) might have on the tech world, or the natural variability that is an inherent part of any sort of business (e.g. a single mega-round of financing might influence the statistics for VC investments, yet wouldn't mean much to the overall health of the ecosystem).

Consider, for example, the metric NYT references in its recent article: according to Redfin, in the last three months of 2017, San Francisco lost more residents to outward migration than any other city in the country. At first glance, this fact seems impressive. However, there are several issues with this kind of logic. First, drawing conclusions from a single data point is dangerous in itself. Second, the Redfin report doesn't provide any insight into the demographics of those leaving the city. If anything, it stands to reason that people fleeing SF is more likely an indicator of the success of the tech sector (which, unfortunately, also brings the issue of the ever-rising cost of living), and there is no reason to conclude that it's the tech workers who are leaving the city. Finally, drawing conclusions about the entire Silicon Valley based on data for San Francisco alone seems presumptuous at best.

Other arguments typically used to argue for the decline of Silicon Valley don't stand up to scrutiny either. Major tech companies opening offices in other cities and countries? Why is that necessarily a sign of things going awry? As long as they aren't downsizing their offices in Silicon Valley, it seems more reasonable to view it as a confirmation that everything is going well, with those new offices being part of expansion plans. Some places offering attractive real estate at bargain prices? How is this different from 5, or 10, or 15 years ago? Yes, real estate prices in Silicon Valley are through the roof, but they've been like that for years, and if anything, there is some evidence of prices actually declining a bit recently, at least in SF. Not to mention that offering affordable real estate has never once helped anyone attract the best companies - and believe me, this has been tried many times all over the world. Investors moving to a new city? What is the context for that move? Is it really about the great future of that place, or does it have more to do with a bold thesis for the new fund they're raising right now?

Again, I'm not saying that other places cannot successfully compete with, or even rival, Silicon Valley. There are plenty of cities that have done extremely well in the last 10-20 years, both in the U.S. (Seattle, New York, Boston, LA and so on), and elsewhere (London, Berlin, Tel Aviv, Shanghai, to name just a few). But I would argue that the rise of those places doesn't automatically constitute the decline of Silicon Valley — if anything, in the modern world, the well-being of many of them is tied to that of Silicon Valley, just as Silicon Valley significantly benefits from the existence of vibrant ecosystems in those places. Still, if you want to argue that Silicon Valley is experiencing a decline right now, that's fine - just try to come up with at least somewhat convincing data first.

Designing Accessible Products

On Thursday, Microsoft announced Soundscape, an app that aims to make it easier for people who are blind or visually impaired to navigate cities, by enriching their perception of their surroundings through 3D audio cues.

According to Microsoft:

"Unlike step-by-step navigation apps, Soundscape uses 3D audio cues to enrich ambient awareness and provide a new way to relate to the environment. It allows you to build a mental map and make personal route choices while being more comfortable within unfamiliar spaces."

To me, this appears to be a wonderful idea, and an app like this could eventually make a huge difference for people who are visually impaired, helping them navigate unfamiliar environments and make better use of everything cities have to offer.

While interning at Microsoft this summer, I was very impressed by the company's commitment to building more accessible tools. If you're interested in learning more about the work they are doing, there is a dedicated section on the company's website highlighting the principles Microsoft uses to think about inclusive design, along with specific examples of their work.

Of course, Microsoft isn't the only major tech company that has demonstrated a commitment to building products that are truly accessible. Apple has long been known for its attention to accessibility, and continues to work to make its products accessible. Google, while not necessarily doing a great job in the past, seems to be catching up. And Amazon finally made its Kindle e-readers accessible once again in 2016, after 5 years of producing devices that weren't suited for those who are visually impaired (the early versions of Kindle readers were actually accessible too, but then Amazon gave up on this functionality).

And yet there are a lot of areas where tech products' accessibility leaves much to be desired, and many companies simply don't pay enough attention to it. These companies often come up with multiple reasons to justify it, too. Some state that designing with accessibility in mind is too hard or too expensive, or that it just makes their products look dull. Others believe that by ignoring accessibility issues, they're only forgoing a small percentage of the market (the figures typically mentioned are 5% or less).

To be clear, none of those arguments should be viewed as acceptable. Moreover, designing with no regard for accessibility today is often classified as discrimination based on disability, and over the last 25 years, it has been made illegal in multiple countries (including the U.S. and U.K.), with customers successfully suing companies that weren't providing accessible options.

But even if we put aside the legal aspect of the issue, do any of the excuses typically used by companies to avoid paying attention to accessibility actually have merit? As it turns out, not really.

According to the U.S. Census Bureau, in 2010 nearly 1 in 5 people (19%) had a disability, with more than half of them reporting their disability as severe. About 8.1 million people had difficulty seeing, including 2.0 million who were blind or unable to see. About 7.6 million people experienced difficulty hearing, including 1.1 million whose difficulty was severe. About 5.6 million used a hearing aid. Roughly 30.6 million had difficulty walking or climbing stairs, or used a wheelchair, cane, crutches or walker. About 19.9 million people had difficulty lifting and grasping. This includes, for instance, trouble lifting an object like a bag of groceries, or grasping a glass or a pencil.

Now, if you look at those numbers, the argument that by ignoring accessibility companies are forgoing only a small chunk of the market proves to be obviously incorrect. Even if you single out a particular disability, like difficulty seeing, it still affects millions of people.

What is perhaps even more important, those numbers don't necessarily include everyone who might benefit from products designed with accessibility in mind: a well thought-out design might also benefit people who are temporarily disabled, as well as the youngest and the oldest users. So it's not just about ensuring that people with disabilities are able to use your products, but also about creating better products in general.

Here is one great quote related to this discussion, from the Slate.com article "The Blind Deserve Tech Support, Too: Why don’t tech companies care more about customers with disabilities?":

"When you make a product that’s fully accessible to the blind, you are also making a product accessible to the elderly, to people with temporary vision problems, and even to those who might learn better when they listen to a text read aloud than when reading it themselves. This is the idea of universal design: that accessible design is just better design."

Is designing for accessibility time-consuming and expensive? Sometimes, but overall, it really doesn't have to be. Much of it comes down to learning about and following accessibility best practices, and ensuring that the products you build adhere to industry standards. Getting started might require a certain amount of resources, but in most cases it's a one-time investment. Besides that, some accessibility improvements require very little effort on your part, e.g. adjusting your color scheme to make it easier for people who are color-blind to interact with your product. And in the process of making your products accessible, you are likely to materially improve the experience for your current users as well.
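To make the color-scheme point concrete, here is a minimal Python sketch of the contrast check at the heart of the WCAG guidelines (the function names are my own; WCAG level AA asks for a contrast ratio of at least 4.5:1 for normal text):

```python
def relative_luminance(hex_color):
    """Relative luminance of an sRGB color, per the WCAG 2.0 definition."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))

    def linearize(c):
        # Undo the sRGB gamma curve for each channel.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)


def contrast_ratio(fg, bg):
    """Contrast ratio between two colors; 4.5:1 or better passes WCAG AA for normal text."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)


# Black text on a white background: the maximum possible ratio of 21:1.
print(round(contrast_ratio("#000000", "#ffffff"), 1))  # 21.0
```

Checks like this are cheap to automate, which is part of why the "too hard or too expensive" argument rarely holds up.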

Finally, we are entering an era when the new technology (AI, voice assistants, VR/AR, novel ways to input information, etc.) can contribute a great deal to making it easier for people with disabilities to interact with the products around them. Take, for example, this description of what could be achieved even with the current generation of voice assistants, from "Brave In The Attempt" article on Microsoft's accessibility efforts:

"One of the best Windows tools for people with mobility challenges is Cortana. Just with their voice, users can open apps, find files, play music, check reminders, manage calendars, send emails, and play games like movie trivia or rock, paper, scissors, lizard, Spock. The speech recognition software takes this even further. You can turn all the objects on your screen into numbers to help you choose with your voice. You can vocally select or double-click, dictate, or specify key presses. You can see the full list of speech recognition commands to see all that it can do."

Isn't such a tremendous opportunity to empower people to live much richer lives worth working just a little bit harder for?

Remaking Education

To continue with the topic of education: today, we increasingly hear complaints that our education systems are growing inadequate for the realities of the world around us. It's impossible not to see merit in some of those complaints, too. In a world that is rapidly moving towards a gig economy, characterized by a continuing decline in average job tenure, with a lot of jobs likely to disappear in the next 10-20 years, many aspects of traditional education systems are questionable at best.

But in order to understand which parts of the system work well, and which are outdated and require revamping, it's useful to understand the history and the context in which the current system came into existence in the first place, and the purposes it was set up to serve. Otherwise, proposing any changes would be akin to moving ahead in the dark: we might still stumble upon something useful, but it is just as likely that we would do more harm than good. This is particularly true for something as complex and intertwined with every aspect of our lives as education.

Our education system as we know it was largely established in the second half of the 19th century and the first decades of the 20th, coinciding with the Second Industrial Revolution. In his (absolutely brilliant, in my opinion) book "The End of Average", Todd Rose argues that to a significant extent, the motivation behind it had less to do with the desire to create a truly meritocratic society than with the ever-increasing demand for workers that the new businesses were experiencing. The key purpose of education was not to provide everyone with the opportunity to discover their talents and put them to the best possible use, but rather to educate people to the minimum level sufficient for them to fill the new vacancies.

The Second Industrial Revolution has long since become history; today, we are in the middle of what is widely regarded as the Digital Revolution, or the Third Industrial Revolution. This new era has brought tremendous change to societies throughout the world and to the global economy; it's hard to argue that the needs of both society and individuals today aren't very different from what they were during the Second Industrial Revolution more than a hundred years ago. And yet, to a significant extent, we still rely on a system that was designed for a different age and different circumstances.

That raises several important questions. First, given how much the world has changed over the last 100 years, how suitable are our education approaches for the new circumstances? Yes, it remains possible that a lot could be achieved through gradual evolution of the existing offerings. But is it too far-fetched to imagine that, at least for some aspects of the current system, disruption might make more sense than evolution?

Personally, I don't think so. The idea of providing personalized education in schools required changing pretty much every aspect of the traditional school experience - and yet, the early results seem to be very promising. Same goes for the notion that bootcamps, nanodegrees and other unconventional options for professional education might one day turn into a viable alternative to college education — while it might raise some eyebrows, there is a lot of promising work happening in the space right now. And the list goes on.

Second, if we want to bring positive change to the current education system, we need to focus on designing new solutions that can be successfully scaled. One reason why the entire world still relies on a system that was put in place over a hundred years ago is that it was built to scale. Therefore, if the goal is to have a wide impact, for whatever solutions we propose, it's important to consider whether there is a way to implement them throughout a single state, a country, or the entire globe, as it was done with the school and college education in the past.

To that point, it's also crucial to consider the implications the proposed solutions would have for the existing system: we no longer live in a world that is a blank canvas, and therefore the implications of change can sometimes be unexpected and profound. The concept of personalized learning illustrates some of these issues well: while students might get tremendous benefits from the new process, we need to consider what happens when the real world inevitably starts interfering with it. What happens when families move, and students find themselves in areas where there are no schools with personalized learning options? Would the introduction of personalized learning only deepen the gap between the well-performing schools that are well-staffed and have access to funding, and the ones that are already struggling? Would it hamper job mobility for teachers? I'm sure it's not impossible to find answers to those questions, but in order to do that, we need to be asking them in the first place.

Finally, one day the context will change again, and we will need to rethink the education system once more. I believe we could do a great service to future generations by keeping that in mind, and by focusing on designing solutions that can be adjusted as needed and are made to be iterated upon.

The Future Of Online Education: Udacity Nanodegrees

In its 20+ year history, the online education market has experienced quite a few ups and downs. From the launch of lynda.com way back in 1995 (back then, it wasn't even an EdTech company yet, strictly speaking; it only started offering courses online in 2002), to Udemy, with its marketplace for online courses on every conceivable topic, to the MOOC revolution, which promised to democratize higher education — I guess it would be fair to say that the EdTech space has tried a lot of things over the years, and has gone through quite a few attempts to re-imagine itself.

On the last point, while MOOCs (massive open online courses) might not have exactly lived up to the (overhyped) expectations so far, the industry continues to live on and evolve, with startups like Coursera, edX and Udacity continuing to expand their libraries and experimenting with new approaches and programs.

Most recently, Udacity shared some metrics that allow us to get a sense of how the company has been doing so far. And, in a word, we could describe it as "not bad at all". Apparently, in 2017 the company had 8 million users on the platform (including users engaged with Udacity's free offerings), up from 5 million the year before. Udacity also doubled its revenue to $70 million, an impressive growth rate for a company at this stage.

Now, the reason I believe those numbers are particularly interesting is the monetization approach Udacity took a few years ago, when it first introduced its Nanodegrees: 6-12 month programs, done in collaboration with industry partners such as AT&T, IBM and Google, that should presumably allow students to build a deep enough skill set in a specific area to successfully find jobs.

While the idea itself isn't necessarily unique - other companies have been trying to create similar programs, be it in the form of online bootcamps, as is the case for Bloc.io, or the Specializations offered by Coursera - I would argue that Udacity's Nanodegrees offered the most appealing approach. Nanodegrees are developed in close partnership with industry (unlike Coursera's Specializations, which are university-driven), and require lower commitment, both financially and time-wise, compared to online bootcamps. Finally, Udacity's marketing approach was vastly superior to that of its key competitors, especially when the Nanodegrees were first launched (announcing them in partnership with AT&T, with AT&T committing to provide internships for up to 100 of the best students, was a great move).

Some of the metrics Udacity shared this week were specifically related to Nanodegrees, and provided a glimpse into how they have been doing so far. In particular, Udacity reported that there are 50,000 students currently enrolled in Nanodegrees, and that 27,000 have graduated since 2014.

The price per Nanodegree varies quite a bit, and it can also depend on whether the program consists of a single term or several, but with the current pricing, it seems reasonable to assume that the average program costs around $500-700. With 50,000 students enrolled, that should amount to $25-35 million in run-rate revenues (strictly speaking, that isn't exactly run-rate, but that's unimportant here). The actual number might be a bit different, depending on a number of factors (the actual average price per program, the pricing Udacity offers to its legacy users, etc.), but I'd assume it shouldn't be off by much.
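For what it's worth, the back-of-the-envelope arithmetic is simple enough to write down (the $500-700 price range is my assumption from above, not an official figure):

```python
# Rough estimate of Udacity's Nanodegree run-rate revenue, using the
# reported enrollment and an assumed $500-700 average price per program.
enrolled = 50_000
price_low, price_high = 500, 700

revenue_low = enrolled * price_low    # $25,000,000
revenue_high = enrolled * price_high  # $35,000,000

print(f"${revenue_low / 1e6:.0f}-{revenue_high / 1e6:.0f} million")  # $25-35 million
```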

Those numbers ($25-35 million, give or take) are interesting, because they clearly show that Udacity must have other significant revenue streams. There are several possibilities here. In addition to offering learning opportunities to consumers, Udacity also works with businesses, which theoretically could account for a hefty chunk of the money it earned last year. Besides that, Udacity also runs an online Master's in Computer Science program with Georgia Tech, which is a fairly large program today, and offers some other options to its users, such as the rather pricey Udacity Connect, which provides in-person learning opportunities, and a few Nanodegrees that still operate under the legacy monthly subscription pricing model, such as the Full Stack Web Developer Nanodegree. All of those could contribute to the revenue numbers, of course.

And yet, if you look at Udacity's website today and compare it to how it looked a couple of years ago, everything seems to be focused around Nanodegrees now, whereas in the past, Udacity felt much more like Coursera, with its focus on free courses and users required to pay only for additional services, such as certificates. The obvious conclusion is that Udacity apparently considers Nanodegrees a success, and believes there is significant potential to scale them further.

One last interesting thing to consider is the number of people who have completed at least one Nanodegree since its introduction in 2014. According to Udacity, only 27,000 people have graduated so far, which is curious, given that it reports 50,000 people currently enrolled in at least one Nanodegree, and most programs are designed to be completed in 6 to 12 months.

This can only mean one of two things: either Udacity has recently experienced a very significant growth in the number of people enrolling in Nanodegrees (which would explain the existing discrepancy between those two numbers), or the completion rates for the Nanodegrees historically have been relatively low.

Now, completion rates were one of the key issues for MOOCs, where they proved to be quite dismal. However, the situation for Udacity is somewhat different: here, users have already paid for the program, so in a way, completion rates are less of a concern (and with the legacy pricing model, where Udacity charged users a monthly subscription, longer times to completion could actually have benefitted the company). On the other hand, low completion rates might ultimately contribute to poor reviews, negatively affect user retention, and damage the company's brand, so this issue still needs to be managed very carefully.

Will Udacity's Nanodegrees prove to be a success in the long run? That remains to be seen, but so far, the company seems to be doing a pretty good job with them, and the future certainly looks promising.

The Challenge Of Attracting The Best Talent

In one of the classes I'm currently taking at Kellogg, we recently touched on the issue of top K-12 teachers working at the better-performing schools, with the schools that represent a more challenging case often facing significant difficulties attracting and retaining top talent.

This problem, of course, isn't unique to the K-12 system. If you think about it, most of us would probably choose to move to a job that offers higher pay and a better working environment, whenever the opportunity presents itself, without a second thought. And if we believe the new job would be just as, or more, meaningful than the old one, that typically seals the deal. And who could blame us?

And yet, once you start thinking about what that truly means, the answer becomes less clear. While it most certainly makes sense to look for greener pastures from an individual's perspective, we might wonder what kind of impact it has on the world around us. More importantly, are we even serving our own needs in the best possible way by following this line of thinking?

One particularly interesting example that immediately comes to mind is Google. For years now, it has been highlighted as one of the most desirable employers in the world. It has the resources to offer its employees extremely competitive pay, and it is also famous for its great work environment - hey, it even tries to assess people's "Googliness" before hiring them, in order to determine whether they'll fit well with the company's culture.

Google is undoubtedly a great place to work, so it isn't really surprising that people from all over the world aspire to work there. However, there is also another side to that story. Almost every person I've talked to who's worked at Google has at some point brought up the issue of being surrounded by people who were overqualified for their jobs. Yes, Google's immense profitability has made it possible for the company to pay for the best available talent. But hiring the best people doesn't automatically mean that you have meaningful problems for them to work on. 

That, of course, doesn't mean that Google shouldn't aim to hire people of the highest caliber - after all, as long as it has the resources and the appeal required to attract them, both the employees and Google seem to be better off when it does. And yet, one might wonder: what could many of those people have achieved otherwise? Would the companies they'd have worked for have more challenging problems for them to tackle? Or would some of those people actually start their own companies that would eventually change the world?

The same goes for the K-12 system. Nobody could ever blame teachers for wanting to work for schools that offer better environments - even if one doesn't care about the compensation and surroundings, it can be much more fulfilling to work in such a place. The question, however, is what impact those teachers might have had at the lower-performing schools: those often have a much more pressing need for the best talent, but have trouble attracting such candidates.

So, what could be done to address this issue? I am afraid there are no easy answers here. The best talent is, and will always remain, a scarce commodity, and the best organizations often have a higher appeal (not to mention more resources to offer) to those workers - that is not going to change, nor should anyone want it to, really.

What we could do, however, is create additional incentives for people to take risks, whether that means going to work for a struggling school, or taking a leap of faith and starting a company. Some of those incentives might be financial in nature, but what seems to me even more crucial is for us as a society to promote the importance of rising to the challenge, especially when it doesn't bring one any immediate rewards, and to celebrate those who choose to do so. This, of course, might be easier said than done, but it's not impossible, and it is very much worth the effort.

The Benefits Of Raising Less Money

A couple of weeks ago, TechCrunch published an essay by Jason Rowley called "Raise softly and deliver a big exit". In it, he set out to explore the relationship between the amount of funding startups raise and the success of their exits, measured by the ratio of exit valuation to invested capital (VIC).

The analysis, unfortunately, doesn't provide a breakdown by the space the startups operate in, and is thus relatively high-level. It also raises some questions about the validity of comparing VIC to the amount of capital raised or to the valuation: as both of those are used in the calculation of VIC itself, any inferences about the correlations between either of them and VIC aren't really meaningful.

Still, even if the conclusions aren't statistically meaningful, the analysis itself raises some interesting points, all of which can be summarized in a single phrase: "raising a lot of money makes getting high return on investment less likely".

One could argue that this is a fairly obvious conclusion that doesn't require looking at any specific data, and she'd be right about that: making high returns (as a percentage of capital invested, not in absolute numbers) at scale is often harder than when you invest relatively small amounts of money.

For startups raising venture capital funding, that appears to be particularly true. Selling your company for $50 million is a success if it only raised $5 million in funding; it becomes much more complicated if it attracted $100 million in funding - in that case, to deliver the same multiple, you'll need to sell it for at least $1 billion, which drastically limits the number of potential buyers (and also the chances that the company would ever get to the stage where it could be sold for such an amount of money).

So why are we so focused on the huge rounds raised, "unicorn" startups and the outsized exits?

Part of the story is tied to the business model of the VC firms: most of them receive a fixed percentage of the assets under management (AuM) as a management fee (typically, 2% per year), plus carry (say, 20% of the overall proceeds from exits, once the investors in the fund are paid the principal back). Both of those pieces are directly tied to the AuM, creating the incentive to raise more money from the limited partners.
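To see how both pieces scale with assets under management, here is an illustrative calculation for a hypothetical $500 million fund; the fund size, fund life, and 2.5x gross return are made-up numbers for illustration, not figures from the essay:

```python
# Illustrative "2 and 20" economics for a hypothetical $500M fund.
fund_size = 500_000_000
mgmt_fee_rate = 0.02   # 2% of AuM per year
carry_rate = 0.20      # 20% of profits after LPs get their principal back
fund_life_years = 10

# Management fees accrue regardless of how the fund performs.
total_mgmt_fees = fund_size * mgmt_fee_rate * fund_life_years

# Carry only materializes if the fund returns more than the principal;
# assume a 2.5x gross return on the whole fund.
gross_proceeds = fund_size * 2.5
carry = (gross_proceeds - fund_size) * carry_rate

print(total_mgmt_fees / 1e6, carry / 1e6)  # 100.0 150.0
```

Note that the fee stream is guaranteed by the fund's size alone, while the carry depends on performance - which is why raising a bigger fund is attractive to general partners even before a single investment is made.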

What that means is that there is a misalignment between the interests of limited partners (who care about returns as a percentage of capital invested), and those of general partners (whose compensation, and especially their salaries, is to a significant extent determined by the AuM size, followed by the absolute returns).

This compels the general partners to raise larger funds, which in turn means that they need to pour more money into each startup (or do more deals per fund, which brings the risk of spreading their resources too thin). And investing more money per startup creates obvious pressure for larger exits.

While the VC piece is relatively straightforward, the situation for startup founders is more complicated. Unlike the general partners of VC firms, founders care almost exclusively about the returns: their compensation isn't really tied to the amount of money they raise, only to the proceeds from selling their companies. Another interesting point is that for the vast majority of individuals, the amount of money required to completely change their lives is much lower than the amounts that might be deemed satisfactory by VC firms, especially the larger ones.

To illustrate this point, for a firm with $1 billion under management, selling a company they've invested $5 million in at $10 million pre-money valuation, for $50 million, isn't really attractive: even though they'd make a decent return on this investment, the absolute gains are too small to make much of a difference.

For the founders of that same company, however, such a deal can be very attractive: if there were 3 of them, it would yield them more than $11 million apiece - a huge sum of money for any first-time entrepreneur. Accepting a deal like that would also leave them free to pursue their next ventures, knowing that they can now take bigger risks, with their financial security already established.
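A quick sketch of the cap-table arithmetic behind this example, ignoring option pools, liquidation preferences, and later dilution:

```python
# $5M invested at a $10M pre-money valuation; the company is later sold
# for $50M, with the remainder split between three founders.
invested = 5_000_000
pre_money = 10_000_000
post_money = pre_money + invested        # $15M post-money valuation
investor_share = invested / post_money   # the investor owns 1/3 of the company

exit_price = 50_000_000
investor_proceeds = exit_price * investor_share    # ~$16.7M, a 3.3x return
founder_proceeds = exit_price - investor_proceeds  # ~$33.3M
per_founder = founder_proceeds / 3                 # ~$11.1M apiece

print(round(per_founder / 1e6, 1))  # 11.1
```

The same $50 million exit that barely moves the needle for a $1 billion fund is, on a per-founder basis, life-changing money.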

So again, why does the entire industry pay so much attention to the largest deals and exits?

Well, for one, it's just more interesting for the public to follow those deals - they create a rock-star aura around the most prominent founders and VCs, something that is obviously lacking for smaller investments and exits. Next, some of the more exciting ventures do require outsized investments: that is often true for the most well-known B2C startups (e.g. social networks, or on-demand marketplaces) - but it certainly isn't the case for a lot of companies out there. Finally, the VC agenda certainly plays a role here as well.

And yet, while all those reasons might be legitimate, it's worth remembering that for every $1 billion exits there could be dozens of $50-100 million sales, and while such deals don't always sound as cool, there surely do have the potential to change the lives of the entrepreneurs involved in them.