
Amazon's HQ2: Why Blaming The Company Misses The Point

Not to worry, the second part of my previous post is coming, but today I wanted to focus on a different topic altogether — that is, on the Amazon HQ2 announcement.

As I am sure most of you have heard by now, Amazon finally announced the location of its HQ2 yesterday, which, after all, isn’t going to be exactly an HQ2 (surprise!). Instead, the company has decided to split the jobs between Crystal City in Northern Virginia and Long Island City in Queens. This news was met, well, unfavorably by most, to say the least.

HQ1, HQ2, HQ[X]?

Perhaps the first thing one would notice is the outrage over the fact that HQ2 ended up being, well, not exactly a full-blown HQ. On the surface, that’s a fair thing to fume about — after all, Amazon spent over a year promoting its plans to create a huge office that would be comparable to its current base in Seattle in both influence and number of jobs, and the company obviously used that framing to extract concessions from the cities lining up to compete for the opportunity. That, in turn, might justify the outrage over the fact that there will actually be no HQ2, at least not as it was previously described.

That being said, focusing too much on this would mean missing two important points. First, if we are really talking about the global headquarters here, there is simply no such thing as a second headquarters, no matter the rhetoric. A global headquarters is by definition singular, and Amazon is by now so deeply rooted in Seattle that no other office, however large, would stand a chance of assuming a comparable position. One might argue that if that’s the case, then it was deceptive of Amazon to describe its plans for a new office the way it did from the start — but this point is so simple that the only people who might have been deceived by Amazon’s marketing spiel were, well, willing to be deceived in the first place.

With that comes the second point: while there is no such thing as a second HQ, an office doesn’t need to match the original HQ’s headcount to wield very significant influence over the company. Take Google’s Zurich office, for example: it houses only about 2,000 employees, yet it has gained wide recognition as a well-respected engineering and research location with significant sway within the company. And there are many more examples like that.

So, if influence is what you’re after, how many jobs your local office houses doesn’t really matter that much. What really matters is the kind of jobs, and whether a substantial number of them are in engineering, research, and product rather than just sales and marketing — not because the latter are in any way less important, but because it’s often the engineering jobs (or lack thereof) that define the nature of the office. Quantity should be at best a second priority here.

In any case, I find the claim that going from the initially promised 50,000 jobs in one city to 25,000 in each of two cities somehow changes the nature of those offices and turns them into “glorified satellite offices” to be quite ridiculous. To my knowledge, no tech company today has offices even remotely approaching this size — again, just as an example, Google recently stated that it plans to grow its NYC office to (only) 14,000 employees over the next 10 years — so there is simply no point of reference for judging what those offices might turn into. If anything, it seems highly unlikely that an office of 25,000 corporate jobs would ever be reduced to a mere satellite of the headquarters.

The poisoned apple of tax subsidies

Next, the announcement surfaced (or, rather, resurfaced) a growing wave of complaints about Amazon securing substantial tax subsidies it doesn’t really need, while putting additional strain on the already overextended infrastructure of NYC and Northern Virginia.

To that point, I am not here to argue that Amazon really needs or deserves those tax subsidies (there is simply no way to judge something like that objectively in any case), or that tax incentives always work out as intended (hint: they don’t). The truth is, tax incentives can in some cases be quite harmful to local municipalities, forcing them to spread thin the tax dollars they collect from the rest of the taxpayers, or even downright depleting cities’ and states’ coffers (see cash grants). To echo a recent article in The Atlantic, one could make a sound argument that at least some of the tax subsidies currently offered by regional governments to woo corporations to their dominions should be made downright illegal, or at the very least frowned upon (although, to be fair, that mostly applies to subsidies used to move existing jobs across state lines, which is very different from creating new jobs, as is the case with Amazon here).

Still, while it might be fair to ask whether big tech companies are the best recipients of the significant tax breaks they are often able to extract from local governments, as long as such subsidies are legal and available, it seems strange to blame Amazon, or any other company, for taking advantage of them. After all, big tech companies remain commercial entities whose key purpose is to make money for their shareholders, not to solve the complex social issues of the cities or states they happen to reside in (which, realistically, they are simply unequipped to tackle anyway, for all their size and power).

Moreover, tax subsidies are, well, subsidies, meaning that, at least in theory, they shouldn’t make local budgets poorer if set up correctly (that’s why I am personally inclined to argue that cash grants shouldn’t be classified as mere subsidies, and probably deserve to be banned altogether). It is still possible, of course, for subsidies to become so large that the receiving company essentially capitalizes on existing infrastructure and uses city or state services without ever (or at least for a long time) paying its fair share for them — but the point is, it doesn’t have to be that way. Smart tax subsidies can be tied to specific KPIs, be it the number of jobs created, the amounts invested in the region, or something else, essentially functioning as profit-sharing agreements between companies and local authorities, and thus benefiting both; it’s the job of the governments to ensure that this is indeed the case.

Going back to Amazon HQ2, if we look at the arrangement it negotiated with Long Island City as an example, the company is expected to receive up to $1.2 billion in tax subsidies over 10 years, in exchange for investing $2.5 billion in office space and then paying up to $10 billion in taxes over the next 20 years — which roughly translates into $48,000 to $61,000 in subsidies for each of the 25,000 jobs it promised to create there, according to TechCrunch’s calculations. With the cited 31% tax rate for a job paying $150,000, the subsidy would translate into roughly 1.5 years of foregone tax revenue for the city. Is that a lot? Maybe, but I am inclined to say that it’s not necessarily an outrageous price to pay, especially taking into account all the additional spending that typically follows those high-paying tech jobs (again, quoting the same article, Amazon claims that it boosted the local economy by $38 billion from 2010 to 2016).
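To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch, using only the figures quoted above; the exact assumptions behind TechCrunch’s $48,000-$61,000 range aren’t spelled out here, so its endpoints are simply taken as given.

```python
# Per-job math for the Long Island City deal, using the figures cited above.
subsidy_range = (48_000, 61_000)  # TechCrunch's per-job estimate, taken as given
salary = 150_000                  # salary in the cited example
tax_rate = 0.31                   # cited tax rate

annual_tax_per_job = salary * tax_rate  # $46,500 in taxes per job per year
for subsidy in subsidy_range:
    years = subsidy / annual_tax_per_job
    print(f"${subsidy:,} subsidy ≈ {years:.1f} years of foregone tax revenue")

# Prints roughly 1.0 and 1.3 years; the ~1.5-year figure cited above
# presumably bakes in assumptions beyond this simple division.
```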

Now, one might argue that Amazon was likely to pick NYC for its HQ2 in any case, even if the city had decided not to offer any tax subsidies. That might well be true (after all, NYC was deemed a top contender from the start), but it’s also possible that Amazon would have chosen a different location (e.g. one that still offered subsidies), and all the upside from capturing those jobs would have been lost to NYC. The point is, there is simply no way of knowing, so it all comes down, again, to the question of whether tax subsidies are beneficial and should remain lawful. Whatever the answer, it seems reasonable that as long as they exist, companies will (and should) continue to use them, and cities and states should continue to offer them whenever they believe it boosts their chances of securing the business — after all, that’s what rational market participants are supposed to do.

Actually, we don’t want those jobs here

Finally, the announcement led to an outcry from some of the people who live in the neighborhoods selected to house the new Amazon offices, as well as from local politicians, who are already blaming the company for the coming increases in the cost of living and the inevitable displacement of local communities that would follow.

While the other two issues discussed above are rather complicated and hard to untangle (and might justify differing opinions), I find this one ridiculous in the sense that it completely misplaces the blame, and also risks throwing the baby out with the bathwater, metaphorically speaking.

Of course, it’s true that the ever-increasing cost of living is a huge issue in many places with a strong tech presence, and it’s a tragedy when people find themselves unable to stay in neighborhoods where they often spent a significant chunk of their lives, and which they had no intention of leaving if not for the rising costs. But it is hard to see how this is the fault of the companies based in those places — if anything, the blame should really lie with the local politicians (and sometimes, in the end, with the people themselves, however unfortunate that might sound), who often help create the conditions that lead to these problems in the first place.

Therefore, I am willing to argue that questioning city and state officials’ decisions to grant permission to build in already congested areas, calling to investigate underinvestment in local infrastructure where that appears to be the case, or gathering the political will to repeal regulations that prevent the construction of additional housing would do much more good than lashing out at the companies as the ultimate evil. (Those housing restrictions often have a lot to do with the NIMBY attitudes of the locals — take SF, for example, whose ridiculous cost of living is to a significant extent a direct result of laws passed 30 years ago that severely restrict the amount of housing that can be built in any given year, coupled with the desire of those who already own real estate in the city to preserve their way of life.)

The main point, however, is to remember two things: pulling people out of poverty requires commanding the necessary economic resources, which can only come with jobs; and addressing broad social issues is not the job (no pun intended) of private companies, but the exact reason local governments exist — with one of the means to that end being to tax the economic output of those companies and then responsibly spend the collected resources. This idea, however simple, seems to be escaping a lot of people lately. I’ve previously written about how, for all their benefits, some of the EU members are starting to forget this, and now it increasingly seems that some parts of the U.S. are following suit. This is most unfortunate, if only because vilifying successful companies doesn’t really help resolve any issues; instead, it diverts attention away from the real source of the problem, which could eventually deepen the issues even more, and make the processes that led to them in the first place even more broken.

P.S. I do think it’s possible that creating multiple relatively large offices in tier-2 tech hubs, instead of putting two huge ones in already saturated markets, would have been a better idea. But I also reckon that even if Amazon had decided to do that, it would have still picked cities that are already doing quite well (Denver, Austin, etc.), and not the Midwestern or Deep South cities many hoped Amazon would help revitalize. That, again, goes back to the point that it’s not exactly Amazon’s job to rebuild the economies of struggling regions, and the hard truth is that the company has to go where the people who will fill the jobs it creates are, or want to be — not where others need them to be.

Why We Need To Rethink The Existing Safety Nets

These days, the discussion of how AI is going to disrupt the vast majority of industries in just a few short years seems to rage everywhere. According to PitchBook, in 2017 VCs poured more than $10.8 billion into AI & machine learning companies, while the incumbents spent over $20 billion on AI-related acquisitions; according to Bloomberg, mentions of AI and machine learning on the earnings calls of public companies have soared 7-fold since 2015; and just this week, The Economist published a series of articles, framed as a Special Report, on the topic.

In today's context, AI typically refers to machine learning, rather than any kind of attempt to create general intelligence. That, however, doesn't change the fact that the current technology has clearly moved past the point where it was of limited use to non-tech companies, and is now beginning to disrupt a large number of industries, including ones that weren't particularly tech-savvy in the past. To quote McKinsey Global Institute's "Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation" report:

"We estimate that between 400 million and 800 million individuals could be displaced by automation and need to find new jobs by 2030 around the world, based on our midpoint and earliest (that is, the most rapid) automation adoption scenarios. New jobs will be available, based on our scenarios of future labor demand and the net impact of automation, as described in the next section. However, people will need to find their way into these jobs. Of the total displaced, 75 million to 375 million may need to switch occupational categories and learn new skills, under our midpoint and earliest automation adoption scenarios."

To be fair, McKinsey also states that less than 5% of all occupations consist entirely of activities that can be fully automated. Still, here's another valuable quote from the report:

"In about 60 percent of occupations, at least one-third of the constituent activities could be automated, implying substantial workplace transformations and changes for all workers."

Overall, there seems to be little doubt today that even with the current level of technology, the global workforce is about to enter a very volatile period — one that will require large numbers of people to learn new skills or be retrained altogether, or else risk losing their jobs and facing difficulties finding new employment.

The peculiar nature of disruptive technology adoption

I would also argue that while the tech industry, as well as the broader society, has often been overly optimistic when forecasting how soon certain revolutionary advances in technology would happen (heck, in their 1955 proposal, the fathers of AI, including Marvin Minsky, John McCarthy and others, outlined their belief that significant progress toward a machine with general intelligence could be made in a single summer), once the core new technology became available, even the most daring forecasts for adoption rates often turned out to be too conservative.

This is especially true in cases where the technology in question was impactful enough, and its nature allowed an ecosystem to form around it — in which case, within just a few years, hundreds of thousands of stakeholders would be involved, coming up with new creative ways to benefit from the advantages brought by the new tech.

With AI, or rather with machine learning (the distinction is quite important here), the underlying technology is still evolving and will continue to do so, but it's already good enough for a wide variety of applications. That has prompted a rapid rise in the number of tech companies, startups, consultancies and independent developers involved in the space: today we already have a vast ecosystem around AI, with an ever-growing number of stakeholders, and it can only be expected to grow larger in the next few years.

Rethinking the safety nets

What that means is that even the most daring forecasts produced by McKinsey or anyone else might still underestimate the change that's coming. And if that turns out to be true, figuring out how to help all the people who are going to be displaced becomes of utmost importance. Society will need to find ways to support those people through periods of unemployment, provide them with training that is actually effective in bringing them back into the workforce (the current government-run retraining programs, while costing taxpayers a lot, often turn out to be painfully ineffective, at least in the U.S.), and, ultimately, take care of those who for various reasons can't get back into the workforce — all of it on an unprecedented scale.

This calls for the creation of robust safety nets, while also making sure they don't stifle economic growth: while the safety nets of some European countries are great for their citizens, they also place an undue burden on employers, and incentivize both mature companies and startups to move their business elsewhere if possible (and in an increasingly global and interconnected world, that is indeed becoming possible more and more frequently).

At first glance, there is a paradox here: the safety net is becoming increasingly important, but if a robust safety net hurts economic growth, there will be fewer jobs to go around, in turn making the safety net even more essential, and more costly to provide. This paradox brings up the ultimate question: why are our safety nets designed around the assumption that a formal full-time job is the end goal for everyone? Note that this is the case for most developed countries, including the U.S.: while it might be easier to fire people in the States than in many European countries, the system is still designed to incentivize people to seek full-time employment, in some ways even more so than in Europe.

If you think about it, it doesn't make much sense to push people to look for full-time employment above everything else, or to force employers to make long-term commitments to their employees and bear most of the burden associated with their safety nets, in a world that is increasingly global and going through rapid changes at an accelerating pace. Wouldn't it be better if most of the safety net came from the state, while employers were incentivized to optimize for efficiency and growth, bringing people in (and letting them go) as needed?

This added flexibility for employers doesn't need to be free either: it's no secret that corporate taxation is dysfunctional, but it's hard to fix without offering companies a decent reason to play nice (instead of moving the profit center to Ireland) and comply, and added flexibility in managing their workforce could be a powerful incentive (especially in the HQ markets, where the workforce constitutes a significant expense and can't be easily moved elsewhere). For businesses, that would mean they are still asked to pay their fair share, but at least they won't have to make upfront, long-term commitments that can have perilous consequences in changing markets. That is particularly true for smaller companies.

Would such a world be more volatile for regular people? Alas, it most likely would be. But it also stands to reason that in a world where your health insurance isn't tied to your employer but is provided by the state no matter what, and where you have the opportunity to go back to school as needed without having to worry about the cost, people would be much more daring in pursuing the career options that are best for them long-term.

The final piece: UBI

There is still one component missing, of course. If there is nothing preventing your employer from firing you without much notice, the safety net has to include some mechanism to account for that — and, most likely, one more robust than the currently available programs. That brings the conversation to the concept of UBI, or universal basic income.

Now, that's an incredibly broad topic, and one that has been discussed for decades, if not longer (for example, few people know that the U.S. actually conducted a number of negative income tax experiments back in the 1960s, and even came close to implementing a form of basic income). Also, basic income doesn't stand for one particular idea; rather, it covers a range of concepts, from offering everyone the same lump sum regardless of income or wealth, to negative income taxation that would create an income floor for everyone, to proposals that are more limited in scope but might still play a valuable role in helping to eliminate poverty and provide a safety net.

The most realistic concept I've seen so far, and the one I like the most, is described in the recently released book "Fair Shot: Rethinking Inequality and How We Earn" by Facebook co-founder Chris Hughes. I'd highly recommend the book to anyone interested in the topic, but in short, the idea is to supplement the earnings of every household with an annual income of $50,000 or less with an additional $500/month per working adult (less, if the income is close to $50,000), building on top of the existing EITC program, and to pay for it by eliminating the preferential tax treatment of capital gains and imposing additional taxes on those who earn $250,000 or more per year.
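To make the schedule concrete, here is a minimal sketch of the supplement as summarized above. Note that the summary doesn't pin down the exact phase-out curve near the $50,000 threshold, so the linear taper (and its $40,000 starting point) below is purely an illustrative assumption, not the book's actual formula.

```python
# A sketch of the proposed supplement, per working adult per year.
# The $500/month figure and the $50,000 household-income cap come from the
# proposal as summarized above; the linear phase-out starting at an assumed
# $40,000 is illustrative, since the exact taper isn't specified here.

FULL_SUPPLEMENT = 500 * 12   # $6,000 per working adult per year
INCOME_CAP = 50_000          # household income ceiling for eligibility
PHASE_OUT_START = 40_000     # assumed start of the linear taper

def annual_supplement(household_income: float) -> float:
    if household_income >= INCOME_CAP:
        return 0.0
    if household_income <= PHASE_OUT_START:
        return FULL_SUPPLEMENT
    # Taper linearly to zero between PHASE_OUT_START and INCOME_CAP (assumption)
    fraction = (INCOME_CAP - household_income) / (INCOME_CAP - PHASE_OUT_START)
    return FULL_SUPPLEMENT * fraction

print(annual_supplement(30_000))  # 6000.0: the full $500/month
print(annual_supplement(45_000))  # 3000.0: reduced, since income is close to the cap
print(annual_supplement(55_000))  # 0.0: above the threshold
```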

While this idea is less daring than some of the more sweeping concepts of UBI, it has several extremely interesting components. First, it's much less expensive than some of the other UBI proposals, which in theory means it could be implemented even today. Second, unlike the calls to provide basic income to everyone regardless of wealth or employment, Chris proposes to provide this supplementary income to working adults with relatively low earnings, but to use a much broader definition of work than the one currently used in the EITC: the idea is to count any kind of paid gig as work (e.g. working for Uber, TaskRabbit and the like), and to count homemaking and studying as work too. That way, people would remain incentivized to engage in productive activities, but wouldn't be limited in what they could do as much as they are now (although, interestingly, the vast majority of UBI experiments actually provide evidence that people receiving it continue to work, and even work more, instead of withdrawing from the workforce, so this concern is artificial to begin with). Third, while $500/month won't be enough to support someone with no other income, its value shouldn't be underestimated: studies show that even small amounts of cash can help people get by during the hardest periods and optimize their careers for the longer term.

The path ahead

Even with UBI in some form, guaranteed health insurance and access to free education, people wouldn't exactly get to enjoy their lives without having to worry about work: the goal, at least for now, should be to provide a safety net for periods of turmoil and to incentivize people to pursue riskier and more rewarding career opportunities, rather than to eliminate the need to worry about finding employment altogether. Still, having this safety net would mean a great deal for someone whose job has been eliminated by automation and who now has trouble finding work, or who needs to go back to college to get retrained, or who simply wants to quit her less-than-inspiring job and try to launch a business.

The change brought by globalization and automation is inevitable, and most places will have to find a way to adapt to it, one way or another. Right now, places like the Netherlands or the Nordic countries already have well-developed safety nets, but often represent a challenging environment for new businesses to grow in, while other places (e.g. the U.S.) can be much more business-friendly, but don't offer all the protections necessary to support people who find themselves worse off than before. What remains to be seen is which path each of those countries chooses going forward, and how it plays out for them over the next 10-20 years.

Data Privacy And GDPR: Treading Carefully Is Still The Best Course

As the rage over the Facebook/Cambridge Analytica situation continues, calls for much more rigorous regulation of tech companies are becoming more and more common. On the surface, this seems reasonable: it's hard to deny that the handling of users' data by many companies remains messy, with users often left confused and frustrated, having no idea about the scope of the data they're sharing with those companies. And yet, I am going to argue that we, as users, customers and society as a whole, stand to lose a lot if we act purely on our instincts here: excessive regulation, if handled poorly, can harm the market immensely in the years to come, and ultimately leave us worse, not better, off.

The current discussion around data privacy didn't actually start with the recent Facebook scandal. Over the last few weeks, you might have received notices from multiple tech companies about updated terms of service — those are driven by the companies' preparations for the General Data Protection Regulation, or GDPR, a new set of rules governing data privacy in the EU, set to kick in on May 25th this year. If you're interested, here are a couple of decent pieces providing an overview of GDPR, from TechCrunch and The Verge.

Now, it is still an EU regulatory framework, so naturally, it only governs the handling of data belonging to users who reside in the European Union — which prompts the question: why should people in other geographies bother to learn about it? Well, to answer that, here's a quote from the recent piece in The Verge:

"The global nature of the internet means that nearly every online service is affected, and the regulation has already resulted in significant changes for US users as companies scramble to adapt."

And that's exactly right: while GDPR only applies to the data of EU residents, it's often hard, if not altogether impossible, to build a separate set of processes and products for a subset of your users, especially a subset as large, diverse and interconnected as the European users. Therefore, quite a few companies have already announced an intention to use GDPR as the "gold standard" for their operations worldwide, rather than just in the EU.

Quite a few things about GDPR are great: the new terms of service are about to become significantly more readable; companies will be required to ask users to explicitly opt in to data sharing arrangements, instead of opting them in by default and forcing them to hunt for buried "opt out" options; and the ability of users to request from any company a snapshot of all the data it holds on them is likely to prove extremely useful. Abuse like that in the Facebook/Cambridge Analytica case (irrespective of who's to blame there) is also about to become much harder, not to mention much costlier for the companies involved (under GDPR, maximum fines can reach 4% of a company's global turnover, or €20 million, whichever is larger).
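That fine ceiling is simple enough to state in code. Here is a tiny sketch of the rule as described above; the turnover figures in the example are made up purely for illustration.

```python
# GDPR's maximum fine for the most serious infringements: 4% of global annual
# turnover or EUR 20 million, whichever is larger.

def max_gdpr_fine(global_turnover_eur: float) -> float:
    return max(0.04 * global_turnover_eur, 20_000_000)

# Hypothetical turnover figures, purely for illustration:
print(max_gdpr_fine(100_000_000))    # 20,000,000: the EUR 20M floor dominates
print(max_gdpr_fine(5_000_000_000))  # 200,000,000: 4% of turnover dominates
```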

So what's the problem then? Well, first of all, GDPR compliance is going to be costly. Europe has already witnessed the rise of a large number of consultants helping companies satisfy all of GDPR's requirements before it kicks in in May. The issue is that large companies can typically afford to pay the consultants and the lawyers to optimize their processes, while it's often the smaller companies, or the emerging startups, that can't afford the costs of becoming fully compliant with the new regulations.

That, in turn, can mean one of two things: either the authorities choose not to enforce the new laws to their full extent for companies below a certain threshold of revenue or number of users, or GDPR threatens to seriously thwart competition, aiding the incumbents and harming the emerging players. The second scenario is hardly something the regulators, not to mention ordinary citizens, can consider a satisfactory outcome, especially in light of the recent outcry over Facebook, Google and a few other big tech companies: most people have no desire to see these companies become even more powerful than they are today, and yet that's exactly what GDPR might end up accomplishing if it's enforced in the same fashion for all companies, irrespective of their size or influence.

The second problem is that while the first of GDPR's principles, "privacy by design", isn't really new to the market, the second, "privacy by default", is a significant departure from how many tech companies, in particular those in the marketing/advertising space, operate today. In short, GDPR puts significant restrictions on the data companies are allowed to collect about a user, and on the situations in which they're allowed to share it with partners (in most cases, they'd need to obtain explicit consent from the user before her data could be shared). That potentially puts the entire marketing industry at risk, as most of the current advertising networks employ various mechanisms to track users across the internet, and routinely acquire third-party data on users' activities and preferences to enable more effective targeted advertising. Right now, this way of doing things is under direct threat from GDPR.

Now, there are plenty of people who believe that the current advertising practices of many companies are shady at best and downright outrageous at worst, and that any regulation forcing companies to rethink their business models should be welcomed. To that end, I want to make three points on why the situation isn't necessarily that simple:

1. Advertising is what makes many of the services we routinely use free. Therefore, if the current business model of the vast majority of those companies comes under threat, we need to accept that we'll be asked to pay for many more of the services we engage with than we do now. The problem, of course, is that most consumers, for better or worse, really hate to pay for the services they use online, which means that a lot of companies might find themselves without a viable business model at all.

2. The incumbents are the ones who stand to win here. What comes to mind when you think of companies that don't need third-party data about their users to advertise to them successfully? Facebook, LinkedIn, Google. Those companies already possess huge amounts of information about their users, so they'd actually be the least threatened by tightened regulations on data sharing — and they're likely to become even stronger if their competitors for advertising dollars are put out of business.

3. A "separate web" for the EU users. Right now, it looks like many companies are inclined to treat GDPR as the "gold standard". However, it's worth remembering that they still have another option: if GDPR compliance proves too harmful for their businesses, instead of adopting it globally, they might choose to go to the trouble of creating a separate set of products and processes for the EU users. That would most likely mean those products receive less attention than their counterparts used by the rest of the world, and feature more limited functionality, harming the users who reside in the EU. It would also harm the competitiveness of European companies, as well as their ability to scale globally: unlike their foreign-based peers, they would face more restrictive and more expensive regulations from the start, while, say, their U.S. peers would have the luxury of scaling in more loosely regulated markets first before expanding to Europe — at which point they'd be more likely to have the resources necessary to withstand the costs of compliance.

Once all of this is taken into consideration, I'd argue it becomes obvious that the benefits of stricter regulation, however significant, don't necessarily outweigh the costs and the long-term consequences. Data privacy is, of course, a hugely important issue, but there is little to be gained from pursuing it above everything else, and a lot to lose. With GDPR, the EU has chosen to put itself through a huge experiment whose outcome is far from certain; the rest of the world might benefit from watching how the situation unfolds, waiting to see the first results, and learning from them, before rushing similar proposals through at home.

Cambridge Analytica Crisis: Why Vilifying Facebook Can Do More Harm Than Good

Throughout the week, I've been following the Facebook and Cambridge Analytica scandal as it raged on, growing more and more incredulous. Yes, this is a pretty bad crisis for Facebook (which the company inadvertently made even worse with its clumsy actions last week). But it still felt to me that the public outrage was overblown and to a significant degree misdirected. Here are the key things that contributed to that feeling:

1. Don't lose sight of the actual villains. Aleksandr Kogan and Cambridge Analytica are the ones truly responsible for this, not Facebook. Facebook's practices for managing users' data might have been inadequate, but it was Kogan who passed the data to Cambridge Analytica in violation of Facebook's policies, and then Cambridge Analytica who chose to keep the data instead of deleting it to comply with Facebook's requests.

2. Nobody has a time machine. It might seem almost obvious that Facebook should have reacted differently when it learned that Kogan had passed the data to Cambridge Analytica in 2015 — an extensive data audit of Cambridge Analytica's machines, for example, would certainly have helped. The problem is, it's always easy to make such statements now, yet nobody has a time machine to go back and adjust her actions. Was Facebook sloppy and careless when it decided to trust the word of a company that had already been caught breaking the rules? Sure. Should it be punished for that? Perhaps, but rather than using the benefit of hindsight to argue that it should have acted differently in this particular case, it seems more worthwhile to focus on how companies dealing with users' data approach those "breach of trust" situations in general.

3. Singling out Facebook doesn't make sense. To the previous point, Facebook isn't the only company operating in such a fashion. If one wants to put this crisis to good use, it makes more sense to demand more transparency and better regulatory frameworks for managing users' data, rather than to single out Facebook and argue that it needs to be regulated and/or punished.

4. Don't lose sight of the forest for the trees. It's also important to remember that data privacy regulation cuts both ways: by making the regulations tighter, we might actually make the Facebooks of the world stronger, not weaker, harming the emerging startups instead. This is a topic for another post, but in short, strict data regulation usually aids the incumbents while hurting the startups that find it more difficult to comply with all the requirements.

5. Data privacy is a right — since when? Finally, while the concept of data privacy as a right certainly seems attractive, it's not as obvious as it might appear, and it raises an important question: when exactly did data privacy become a right? This isn't a rhetorical question either. It certainly wasn't treated as one in the past: many of the current incumbents have enjoyed (or even continue to enjoy) periods of loose data regulation (e.g. Facebook in 2011-2015 or so). So if we pronounce data privacy a right today, we are essentially stifling competition going forward by denying the startups of today similar opportunities. Does that sound nice? Of course not, but that's the reality of the market, and we have to own it before making any rash decisions, even if some things seem long overdue.

Overall, this crisis is indicative of multiple issues around data management, and can serve to launch a productive discussion on how we might address data privacy concerns going forward. At the same time, it doesn't do anyone any good to vilify Facebook beyond what's necessary (and some of the reporting these days has been utterly disgusting and irresponsible); the #deletefacebook campaign doesn't really seem justified (again, why not get rid of the vast majority of apps then, given that Facebook isn't that different from the rest); and any further discussion about data privacy should be carefully managed to avoid potentially harmful consequences — most of us have no desire to find ourselves in a world with perfect data privacy and no competition.

Why We Should Focus On Our Similarities, Not Uniqueness

"Define America in one word... Possibilities. Americans always believe anything is possible."

Tonight, Joe Biden, the 47th Vice President of the U.S., came to Kellogg to deliver a talk on unequal economic growth. It was the first time I got to witness such a high-profile politician speak in person, so, as you can imagine, I was fairly excited about it. And I definitely wasn't disappointed: overall, it was a very interesting and insightful talk. The unequal economic growth of the last decades remains a significant issue that should not be overlooked, and Vice President Biden touched on many of the key points in his speech.

In particular, his push for healthcare and education to be treated as basic rights, and not privileges, felt appropriate and refreshing. His comments about the unfair restrictions that companies today often force onto workers, limiting their job mobility and bargaining power, and about the unreasonably harsh licensing requirements for many jobs that stifle competition as a result, were spot on, while also staying reasonable: he focused only on the right of workers to compete for jobs and fair pay in an open marketplace, and not on how people are entitled to those jobs in the first place (an argument that a certain person-who-must-not-be-named likes to appeal to so much).

Was Biden's speech mostly focused on the U.S.? Well, yes, but in a way that was to be expected. In business school, it's easy to grow accustomed to the idea of bringing a global perspective into every discussion, but one can't expect everyone to follow that approach, nor is it really necessary. After all, most of us probably didn't come to Kellogg today expecting Biden to deliver a lecture on the issues of inequality globally — we can always look to Gates and others for that.

However, there was one thing in today's talk that rubbed me the wrong way. In his speech, the Vice President repeatedly emphasized the uncanny ability of the U.S. to reimagine itself, the unique qualities that the U.S. and its people possess, and its special place in the history of the world, in the process making a few unflattering remarks about China, and also, to my surprise, the U.K., France and Germany.

Curiously enough, I actually do agree with most of those remarks: in my opinion, it's quite fair to say that the U.S. holds a unique place in the world today, to talk about the very special traits and qualities that brought many people to the U.S. in the first place and then helped them succeed there and build the country as we know it, and to point to the exceptional ability of the country to reimagine itself and push forward.

Still, I feel that it's not enough for a statement to be simply correct to make for a compelling and, more importantly, right argument, and, in my opinion, that was exactly the case here. In the global world of today, there is more to be gained from focusing on how everyone might benefit from increased cooperation, predicated on every country acknowledging its strong and weak sides, as well as taking time to appreciate and learn to work with the strengths of its partners. It's not that the U.S. (or any other place) needs to suddenly lose its unique advantages or forget its history, of course. Rather, it's about the country seeing itself as an essential part of a larger world made of equals, and then promoting that kind of worldview among its citizens.

There is also another argument to be made here. The sense of uniqueness can be a source of pride, but it can also easily lead to feelings of superiority or entitlement. Yes, Vice President Biden did specifically mention that to him, this discussion isn't about entitlement, but that's the issue with the concept of uniqueness: what it actually means is open to everyone's interpretation. Coming from another country with a long history of viewing itself, and its people, as a unique and powerful force in the world (for those of you who don't know, I'm originally from Russia), I've seen firsthand some of the issues that often stem from such positioning. Yes, a sense of national pride can do a lot of good for any country and its people, but it can also be a dangerous force if taken too far, with the sentiments around it easily manipulated — which makes me convinced that now is not the time to appeal to it, as the dangers far outweigh any possible benefits.

So while I agree with the essence of the comments Vice President Biden made in his speech, I also strongly believe that in today's world, one that is becoming increasingly global yet is also riddled with xenophobia, civil unrest, and white supremacy movements gaining ground, the "identity of uniqueness", if you will, even when tied to a country rather than a race, ethnicity, or religion, should perhaps make way for the idea that everyone in the world is essentially the same, and that it is ever more important for all of us to work together. After all, whether we like it or not, the world we live in is already global, and nothing will ever reverse that, so the sooner we adjust our philosophies and rhetoric accordingly, the better off we'll all be.

Designing Accessible Products

On Thursday, Microsoft announced Soundscape, an app that aims to make it easier for people who are blind or visually impaired to navigate cities by enriching their perception of their surroundings through 3D audio cues.

According to Microsoft:

"Unlike step-by-step navigation apps, Soundscape uses 3D audio cues to enrich ambient awareness and provide a new way to relate to the environment. It allows you to build a mental map and make personal route choices while being more comfortable within unfamiliar spaces."

To me, this appears to be a wonderful idea: an app like this could eventually make a huge difference for people who are visually impaired, helping them navigate unfamiliar environments and make better use of everything cities have to offer.

While interning at Microsoft this summer, I was very impressed by the commitment the company has demonstrated to building more accessible tools. If you're interested in learning more about the work they are doing, there is a dedicated section on the company's website highlighting the principles Microsoft uses to think about inclusive design, along with specific examples of their work.

Of course, Microsoft isn't the only major tech company that has demonstrated a commitment to building products that are truly accessible. Apple has long been known for its attention to accessibility, and continues to work to make its products accessible. Google, while not necessarily doing a great job in the past, seems to be catching up. And Amazon finally made its Kindle e-readers accessible once again in 2016, after 5 years of producing devices that weren't suited for those who are visually impaired (the early versions of Kindle readers were actually accessible too, but Amazon later gave up on that functionality).

And yet there are a lot of areas where tech products' accessibility leaves much to be desired, and many companies simply don't pay enough attention to it. Those companies often come up with multiple reasons to justify it, too. Some state that designing with accessibility in mind is too hard or too expensive, or that it just makes their products look dull. Others believe that by ignoring accessibility issues, they're only foregoing a small percentage of the market (the figure typically mentioned is 5%, or less).

To be clear, none of those arguments should be viewed as acceptable. Moreover, designing with no regard for accessibility is today often classified as discrimination on the basis of disability, and over the last 25 years it has been made illegal in multiple countries (including the U.S. and the U.K.), with customers successfully suing companies that weren't providing accessible options.

But even if we put aside the legal aspect of the issue, do any of the excuses typically used by companies to avoid paying attention to accessibility actually have any merit? As it turns out, not really.

According to the U.S. Census Bureau, in 2010 nearly 1 in 5 people (19%) had a disability, with more than half of them reporting their disability as severe. About 8.1 million people had difficulty seeing, including 2.0 million who were blind or unable to see. About 7.6 million people experienced difficulty hearing, including 1.1 million whose difficulty was severe, and about 5.6 million used a hearing aid. Roughly 30.6 million had difficulty walking or climbing stairs, or used a wheelchair, cane, crutches or walker. About 19.9 million people had difficulty lifting and grasping — for instance, trouble lifting an object like a bag of groceries, or grasping a glass or a pencil.

Now, if you look at those numbers, the argument that by ignoring accessibility companies are foregoing only a small chunk of the market proves to be obviously incorrect. Even if you single out a particular disability, like difficulty seeing, it still affects millions of people.

What is perhaps even more important, those numbers don't necessarily include everyone who might benefit from products designed with accessibility in mind: a well-thought-out design might also benefit people who are temporarily disabled, or the youngest and the oldest users. So it's not just about ensuring that people with disabilities are able to use your products; it's also about creating better products in general.

Here is one great quote related to this discussion, from the Slate.com article "The Blind Deserve Tech Support, Too: Why don’t tech companies care more about customers with disabilities?":

"When you make a product that’s fully accessible to the blind, you are also making a product accessible to the elderly, to people with temporary vision problems, and even to those who might learn better when they listen to a text read aloud than when reading it themselves. This is the idea of universal design: that accessible design is just better design."

Is designing for accessibility time-consuming and expensive? Sometimes, but overall, it really doesn't have to be. A lot of it comes down to learning about and following the best practices related to accessibility, and ensuring that the products you build adhere to the industry standards. Getting started might require a certain amount of resources, but in most cases it's a one-time investment. Besides, some accessibility work requires very little effort on your part: adjusting your color scheme, for example, to make it easier for people who are color-blind to interact with your product. And in the process of making your products accessible, you are likely to materially improve the experience for your current users as well.
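To make the color-scheme point concrete, here is a minimal sketch of one such low-effort check: the contrast-ratio formula from the WCAG 2.0 guidelines, which accessibility standards use to decide whether text is readable against its background. The sample colors below are arbitrary, chosen only for illustration.

```python
# WCAG 2.0 contrast check: the AA guideline asks for a ratio of at least
# 4.5:1 for normal-size text. The sample colors below are arbitrary.

def relative_luminance(rgb):
    """Relative luminance of an sRGB color, per the WCAG 2.0 definition."""
    def channel(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors, ranging from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

print(contrast_ratio((119, 119, 119), (255, 255, 255)))  # ~4.48: gray on white just misses AA
print(contrast_ratio((0, 0, 0), (255, 255, 255)))        # 21.0: black on white, maximum contrast
```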

Finally, we are entering an era when new technology (AI, voice assistants, VR/AR, novel ways to input information, etc.) can contribute a great deal to making it easier for people with disabilities to interact with the products around them. Take, for example, this description of what can be achieved even with the current generation of voice assistants, from the "Brave In The Attempt" article on Microsoft's accessibility efforts:

"One of the best Windows tools for people with mobility challenges is Cortana. Just with their voice, users can open apps, find files, play music, check reminders, manage calendars, send emails, and play games like movie trivia or rock, paper, scissors, lizard, Spock. The speech recognition software takes this even further. You can turn all the objects on your screen into numbers to help you choose with your voice. You can vocally select or double-click, dictate, or specify key presses. You can see the full list of speech recognition commands to see all that it can do."

Isn't such a tremendous opportunity to empower people to live much richer lives worth working just a little bit harder for?

Remaking Education

To continue with the topic of education: today we increasingly hear complaints that our education systems are growing inadequate for the realities of the world around us. It's impossible not to see merit in some of those complaints, too. In a world that is rapidly moving towards a gig economy, characterized by a continuing decline in average job tenure, with a lot of jobs likely to disappear in the next 10-20 years, many aspects of the traditional education systems are questionable at best.

But in order to understand which parts of the system work well, and which are outdated and require revamping, it's useful to understand the history and context in which the current system came into existence in the first place, and the purposes it was set up to serve. Otherwise, proposing any changes would be akin to moving ahead in the dark: we might still stumble upon something useful, but it is just as likely that we would do more harm than good. This is particularly true for something as complex and intertwined with every aspect of our lives as education.

Our current education system as we know it was largely established in the second half of the 19th century and the first decades of the 20th, coinciding with the Second Industrial Revolution. In his (absolutely brilliant, in my opinion) book "The End of Average", Todd Rose argues that to a significant extent, the motivation behind it had less to do with the desire to create a truly meritocratic society — instead, it was largely driven by the ever-increasing demand for workers that the new businesses were experiencing. The key purpose of education was therefore not to provide everyone with the opportunity to discover their talents and use them in the best possible way, but rather to educate people to the minimum level sufficient for them to fill the new vacancies.

The Second Industrial Revolution has long since become history; today, we are in the middle of what is widely regarded as the Digital Revolution, or the Third Industrial Revolution. This new era has arguably brought tremendous change to societies throughout the world and to the global economy; it's hard to argue that the needs of both society and individuals today aren't very different from what they were during the Second Industrial Revolution more than a hundred years ago. And yet, to a significant extent, we still rely upon a system that was designed for a different age and different circumstances.

That raises several important questions. First, given how much the world has changed over the last 100 years, how suitable are our education approaches for the new circumstances? Yes, it remains possible that a lot could be achieved through gradual evolution of the existing offerings. But is it too far-fetched to imagine that at least for some aspects of the current system, disruption might make more sense than evolution?

Personally, I don't think so. The idea of providing personalized education in schools required changing pretty much every aspect of the traditional school experience — and yet, the early results seem very promising. The same goes for the notion that bootcamps, nanodegrees and other unconventional options for professional education might one day turn into a viable alternative to college education: while it might raise some eyebrows, there is a lot of promising work happening in the space right now. And the list goes on.

Second, if we want to bring positive change to the current education system, we need to focus on designing new solutions that can be successfully scaled. One reason the entire world still relies on a system put in place over a hundred years ago is that it was built to scale. Therefore, if the goal is to have a wide impact, it's important to consider, for whatever solutions we propose, whether there is a way to implement them throughout a single state, a country, or the entire globe, as was done with school and college education in the past.

To that point, it's also crucial to consider the implications the proposed solutions would have for the existing system: we no longer live in a world that is a blank canvas, and the implications of change can be unexpected and profound. The concept of personalized learning illustrates some of these issues well: while students might derive tremendous benefits from the new process, we need to consider what happens when the real world inevitably starts interfering with it. What happens when families move, and students find themselves in areas with no schools offering personalized learning? Would the introduction of personalized learning only deepen the gap between the schools that are already performing well, fully staffed, and well-funded, and the ones that are already struggling? Would it hamper job mobility for teachers? I'm sure it's not impossible to find answers to those questions, but in order to do that, we need to be asking them in the first place.

Finally, one day the context will change again, and we will need to rethink the education system once more. I believe we could do a great service to future generations by keeping that in mind, and focusing on designing solutions that can be adjusted as needed, and are made to be iterated upon.

The Challenge Of Attracting The Best Talent

In one of the classes I'm currently taking at Kellogg, we recently touched on the issue of top K-12 teachers working at the better-performing schools, with the schools that represent a more challenging case often facing significant difficulties in attracting and retaining top talent.

This problem, of course, isn't unique to the K-12 system. If you think about it, most of us would move to a job that offers higher pay and a better working environment, whenever the opportunity presents itself, without a second thought. And if we believe the new job would be just as meaningful as the old one, or more so, that typically seals the deal. And who could blame us?

And yet, once you start thinking about what that truly means, the answer becomes less clear. While it most certainly makes sense to look for greener pastures from an individual's perspective, we might wonder what kind of impact this has on the world around us. More importantly, are we even serving our own needs in the best possible way by following this line of thinking?

One particularly interesting example that immediately comes to mind is Google. For years now, it has been highlighted as one of the most desirable employers in the world. It has the resources to offer its employees extremely competitive pay, and it is also famous for its great work environment — hey, it even tries to assess people's "Googliness" before hiring them, to determine whether they'll fit well with the company's culture.

Google is undoubtedly a great place to work, so it isn't really surprising that people from all over the world aspire to work there. However, there is another side to that story. Almost every person I've talked to who has worked at Google has at some point brought up the issue of being surrounded by people who were overqualified for their jobs. Yes, Google's immense profitability has made it possible for the company to pay for the best available talent. But hiring the best people doesn't automatically mean you have meaningful problems for them to work on.

That, of course, doesn't mean Google shouldn't aim to hire people of the highest caliber — after all, as long as it has the resources and the appeal required to attract them, both the employees and Google seem to be better off when it does. And yet, one might wonder: what could many of those people have achieved otherwise? Would the companies they'd have worked for have had more challenging problems for them to work on? Would some of those people have started their own companies that would eventually change the world?

The same goes for the K-12 system. Nobody could blame teachers for wanting to work at schools that offer better environments — even setting aside compensation and surroundings, it can be much more fulfilling to work in such a place. The question, however, is what impact those teachers might have had at the lower-performing schools: those schools often have a much more pressing need for the best talent, but have trouble attracting such candidates.

So, what could be done to address this issue? I am afraid there are no easy answers here. The best talent is, and will always remain, a scarce commodity, and the best organizations often hold greater appeal (not to mention have more resources to offer) for those workers — that is not going to change, nor should anyone really want it to.

What we could do, however, is create additional incentives for people to take risks, whether that means going to work for a struggling school, or taking a leap of faith and starting a company. Some of those incentives might be financial in nature, but what seems even more crucial to me is for us as a society to promote the importance of rising to the challenge, especially when it doesn't bring immediate rewards, and to celebrate those who choose to do so. This, of course, might be easier said than done, but it's not impossible, and it is very much worth the effort.