Why We Need To Rethink The Existing Safety Nets

These days, it seems the discussion of how AI is going to disrupt the vast majority of industries in just a few short years rages everywhere. According to PitchBook, in 2017 VCs poured more than $10.8 billion into AI and machine learning companies, while incumbents spent over $20 billion on AI-related acquisitions; according to Bloomberg, mentions of AI and machine learning on public companies' earnings calls have soared sevenfold since 2015; and just this week, The Economist published a series of articles on the topic, framed as a Special Report.

In today's context, AI typically refers to machine learning, rather than any attempt to create general intelligence. That, however, doesn't change the fact that the current technology has clearly moved past the point where it was of limited use to non-tech companies, and is now beginning to disrupt a large number of industries, including ones that weren't particularly tech-savvy in the past. To quote McKinsey Global Institute's "Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation" report:

"We estimate that between 400 million and 800 million individuals could be displaced by automation and need to find new jobs by 2030 around the world, based on our midpoint and earliest (that is, the most rapid) automation adoption scenarios. New jobs will be available, based on our scenarios of future labor demand and the net impact of automation, as described in the next section. However, people will need to find their way into these jobs. Of the total displaced, 75 million to 375 million may need to switch occupational categories and learn new skills, under our midpoint and earliest automation adoption scenarios."

To be fair, McKinsey also states that less than 5% of all occupations consist entirely of activities that can be fully automated. Still, here's another valuable quote from the report:

"In about 60 percent of occupations, at least one-third of the constituent activities could be automated, implying substantial workplace transformations and changes for all workers."

Overall, there seems to be little doubt today that even with the current level of technology, the global workforce is about to enter a very volatile period, one that will require large numbers of people to learn new skills or be retrained altogether, or else risk losing their jobs and facing difficulty finding new employment.

The peculiar nature of disruptive technology adoption

I would also argue that while the tech industry, as well as broader society, has often been overly optimistic in forecasting how soon certain revolutionary advances would happen (heck, in their 1955 proposal the fathers of AI, including Marvin Minsky, John McCarthy and others, stated their belief that significant progress toward a machine with general intelligence could be made in a single summer), once the core technology actually became available, even the most daring forecasts for adoption rates often turned out to be too conservative.

This is especially true when the technology in question was impactful enough and its nature allowed an ecosystem to form around it: in such cases, within just a few years, hundreds of thousands of stakeholders were coming up with creative new ways to benefit from the advantages the new tech brought.

With AI, or rather, with machine learning (in this case, the distinction is quite important), while the underlying technology is still evolving and will continue to do so, it's already good enough for a wide variety of applications, which has prompted a rapid rise in the number of tech companies, startups, consultancies and independent developers involved in the space. Today we already have a vast ecosystem around AI, with an ever-growing number of stakeholders, and it can only be expected to grow larger in the next few years.

Rethinking the safety nets

What that means is that even the most daring forecasts produced by McKinsey or anyone else might still underestimate the change that's coming. And if that turns out to be true, figuring out how to help all the people who are going to be displaced becomes of utmost importance: society will need to support those people through periods of unemployment, provide them with training that actually brings them back into the workforce (the current government-run retraining programs, while costly to taxpayers, often turn out to be painfully ineffective, at least in the U.S.), and, ultimately, take care of those who for various reasons can't return to the workforce, all on an unprecedented scale.

This calls for the creation of robust safety nets, while also making sure they don't stifle economic growth: the safety nets of some European countries are great for their citizens, but they also place an undue burden on employers and incentivize both mature companies and startups to move their business elsewhere when possible (and in an increasingly global and interconnected world, that is becoming possible more and more often).

At first glance, there is a paradox here: the safety net is becoming increasingly important, but if a robust safety net hurts economic growth, there will be fewer jobs to go around, in turn making the safety net even more essential, and more costly to provide. This paradox raises the ultimate question: why are our safety nets designed around the assumption that the end goal for people is to hold a formal full-time job? Note that this is the case for most developed countries, including the U.S.: while it might be easier to fire people in the States than in many European countries, the system is still designed to incentivize people to seek full-time employment, in some ways even more so than in Europe.

If you think about it, it doesn't seem to make much sense to force people to look for full-time employment above everything else, or to force employers to make long-term commitments to their employees and bear most of the burden associated with their safety nets, in a world that is increasingly global and going through rapid change at an accelerating pace. Wouldn't it be better if most of the safety net came from the state, while employers were incentivized to optimize for efficiency and growth, bringing people in (and letting them go) as needed?

This added flexibility for employers doesn't need to come free, either: it's no secret that corporate taxation is dysfunctional, but it's hard to fix without offering companies a decent reason to play nice and comply (instead of moving their profit center to Ireland), and added flexibility in managing their workforce could be a powerful incentive (especially in HQ markets, where the workforce constitutes a significant expense and can't easily be moved elsewhere). For businesses, that would mean they are still asked to pay their fair share, but at least they wouldn't have to make upfront, long-term commitments that can have perilous consequences in changing markets. That is particularly true for smaller companies.

Would such a world be more volatile for regular people? Alas, it most likely would. But it also stands to reason that in a world where your health insurance isn't tied to your employer but is provided by the state no matter what, and where you have the opportunity to go back to school as needed without having to worry about the cost, people would be much more daring in pursuing the career options that are best for them long-term.

The final piece: UBI

There is still one component missing, of course. If there is nothing preventing your employer from firing you without much notice, the safety net has to include some mechanism to account for that, and, most likely, it has to be more robust than the currently available programs, which brings the conversation to the concept of UBI, or universal basic income.

Now, that's an incredibly broad topic, and one that has been discussed for decades, if not longer (for example, few people know that the U.S. actually ran a number of negative income tax experiments back in the 1960s, and even came close to implementing a form of basic income). Also, basic income doesn't stand for one particular idea; rather, it covers a range of concepts, from offering everyone the same lump sum regardless of their income or wealth, to negative income tax schemes that would create an income floor for everyone, to proposals that are more limited in scope but might still play a valuable role in helping to eliminate poverty and provide a safety net for people.

The most realistic concept I've seen so far, and the one I like the most, is described in the recently released book "Fair Shot: Rethinking Inequality and How We Earn", written by Facebook co-founder Chris Hughes. I'd highly recommend the book to anyone interested in the topic, but in short, the idea is to supplement the earnings of every household with an annual income of $50,000 or less with an additional $500/month per working adult (less if the income is close to $50,000), building on top of the existing EITC program, and to pay for it by eliminating the preferential tax treatment of capital gains and imposing additional taxes on those who earn $250,000 or more per year.
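
To make the arithmetic concrete, here is a minimal sketch, in Python, of how such a supplement might be computed. The $500/month figure, the $50,000 household income cutoff, and the idea that the amount shrinks as income approaches the cutoff come from the proposal as described above; the linear taper and the $40,000 income at which it begins are my own assumptions for illustration, not details from the book.

def monthly_supplement(household_income, working_adults,
                       full_amount=500.0,
                       phaseout_start=40_000.0,
                       cutoff=50_000.0):
    # Hypothetical per-working-adult supplement with a linear phase-out.
    # Households earning up to phaseout_start get the full amount per
    # working adult; the benefit then tapers linearly to zero at the cutoff.
    # The taper shape and its starting point are assumptions, not figures
    # from "Fair Shot".
    if household_income >= cutoff:
        return 0.0
    if household_income <= phaseout_start:
        share = 1.0
    else:
        share = (cutoff - household_income) / (cutoff - phaseout_start)
    return round(full_amount * share * working_adults, 2)

# Under these assumed parameters, a two-earner household making $45,000 a year
# would receive $500/month in total (half of the full $1,000).
print(monthly_supplement(45_000, 2))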

While this idea is less daring than some of the more sweeping concepts of UBI, it has several extremely interesting components. First, it's much less expensive than many other UBI proposals, which in theory means it could be implemented even today. Second, unlike the calls to provide basic income to everyone regardless of their wealth or whether they work, Chris proposes to provide this supplementary income to working adults with relatively low earnings, but to use a much broader definition of work than the one currently used for the EITC: the idea is to count any kind of paid gig (e.g. working for Uber, TaskRabbit and the like) as work, and to count homemaking and studying as work as well. That way, people would remain incentivized to engage in productive activities, but wouldn't be limited in what they could do as much as they are now (although, interestingly, the vast majority of UBI experiments actually show that people receiving it continue to work, and even work more, rather than withdrawing from the workforce, so this concern is largely artificial to begin with). Third, while $500/month won't be enough to support someone with no other income, its value shouldn't be underestimated: studies show that even small amounts of cash can help people get by during the hardest periods and optimize their careers for the longer term.

The path ahead

Even with UBI in some form, guaranteed health insurance and access to free education, people wouldn't exactly get to enjoy their lives without having to worry about work: the goal, at least for now, should be to provide a safety net for periods of turmoil and to incentivize people to pursue riskier, more rewarding career opportunities, rather than to eliminate the need to worry about finding employment altogether. Still, having this safety net would mean a great deal to someone whose job has been eliminated by automation and who now has trouble finding work, or who needs to go back to college to be retrained, or who simply wants to quit her less-than-inspiring job and try to launch a business.

The change brought by globalization and automation is inevitable, so most places will have to find a way to adapt to it one way or another. Right now, places like the Netherlands or the Nordic countries already have well-developed safety nets, but often represent a challenging environment for new businesses to grow in, while other places (e.g. the U.S.) can be much more business-friendly, but don't offer the protections necessary to support people who find themselves worse off than before. What remains to be seen is which path each of these countries chooses to pursue going forward, and how it plays out for them over the next 10-20 years.

Data Privacy And GDPR: Treading Carefully Is Still The Best Course

As the rage over the Facebook/Cambridge Analytica situation continues, calls for much more rigorous regulation of tech companies are becoming more and more common. On the surface, this seems reasonable: it's hard to deny that the handling of users' data by many companies remains messy, with users often left confused and frustrated, having no idea about the scope of the data they're sharing with those companies. And yet, I am going to argue that we, as users, customers and society as a whole, stand to lose a lot if we act purely on our instincts here: excessive regulation, if handled poorly, can harm the market immensely in the years to come, and ultimately leave us worse off, not better.

The current discussion around data privacy didn't actually start with the recent Facebook scandal. Over the last few weeks, you might have received notices from multiple tech companies about updated terms of service; those are driven by the companies' preparations for the General Data Protection Regulation, or GDPR, a new set of rules governing data privacy in the EU, set to kick in on May 25th this year. If you're interested, here are a couple of decent pieces providing an overview of GDPR, from TechCrunch and The Verge.

Now, it is still an EU regulatory framework, so naturally it only governs the handling of data belonging to users who reside in the European Union, which prompts the question: why should people in other geographies bother to learn about it? To answer that, here's a quote from the recent article in The Verge:

"The global nature of the internet means that nearly every online service is affected, and the regulation has already resulted in significant changes for US users as companies scramble to adapt."

And that's exactly right: while GDPR only applies to data belonging to EU residents, it's often hard, if not altogether impossible, to build a separate set of processes and products for a subset of your users, especially a subset as large, diverse and interconnected as European users. Therefore, quite a few companies have already announced an intention to use GDPR as the "gold standard" for their operations worldwide, rather than just in the EU.

Quite a few things about GDPR are great: the new terms of service are about to become significantly more readable; companies will be required to ask users to explicitly opt in to data-sharing arrangements, instead of opting them in by default and forcing them to hunt for buried "opt out" options; and the ability for users to request from any company a snapshot of all the data it holds on them is likely to prove extremely useful. Abuse like that in the Facebook/Cambridge Analytica case (irrespective of who's to blame there) is also about to become much harder, not to mention much costlier for the companies involved (under GDPR, maximum fines can reach 4% of a company's global turnover, or €20 million, whichever is larger).

So what's the problem then? First of all, GDPR compliance is going to be costly. Europe has already witnessed the rise of a large number of consultants helping companies satisfy all the requirements of GDPR before it kicks in in May. The issue is that large companies can typically afford to pay consultants and lawyers to optimize their processes; it's the smaller companies and emerging startups that often can't afford the costs of becoming fully compliant with the new regulations.

That, in turn, can mean one of two things: either the authorities choose not to enforce the new laws to their full extent for companies below a certain threshold of revenue or number of users, or GDPR threatens to seriously thwart competition, aiding the incumbents and harming emerging players. The second scenario is hardly something that regulators, not to mention ordinary citizens, can consider a satisfactory outcome, especially in light of the recent outcry over Facebook, Google and a few other big tech companies: most people have no desire to see these companies become even more powerful than they are today, and yet that's exactly what GDPR might end up accomplishing if it's enforced in the same fashion for all companies, irrespective of their size or influence.

The second problem is that while the first of GDPR's principles, "privacy by design", isn't really new to the market, the second, "privacy by default", is a significant departure from how many tech companies, in particular those in the marketing/advertising space, operate today. In short, GDPR puts significant restrictions on the data about users that companies are allowed to collect and on the situations in which they're allowed to share it with their partners (in most cases, they'd need to obtain explicit consent from the user before her data could be shared). That potentially puts the entire marketing industry at risk, as most of the current advertising networks employ various mechanisms to track users across the internet, and routinely acquire data from third parties on users' activities and preferences in order to enable more effective targeted advertising. Right now, this way of doing things seems to be under direct threat from GDPR.

Now, there are plenty of people who believe that the current advertising practices of many companies are shady at best and downright outrageous at worst, and that any regulation forcing the companies to rethink their business models should be welcomed. To that end, I want to make three points on why the situation isn't necessarily that simple:

1. Advertising is what makes many of the services we routinely use free. Therefore, if the current business model of the vast majority of those companies comes under threat, we need to accept that we'll be asked to pay for many more of the services we engage with than we do now. The problem, of course, is that most consumers, for better or worse, really hate to pay for the services they use online, which means that a lot of companies might find themselves without a viable business model to fall back on.

2. The incumbents are the ones who stand to win here. What comes to mind when you think about companies that don't need to rely on third-party data about their users to successfully advertise to them? Facebook, LinkedIn, Google. Those companies already possess huge amounts of information about their users, and therefore they'd actually be the ones least threatened by tightened regulations on data sharing, and they're likely to become even stronger if their competitors for advertising dollars are put out of business.

3. A "separate web" for the EU users. Right now, it looks like many companies are inclined to treat GDPR as the "gold standard". However, it's worth remembering that they still have another option to go with. If GDPR compliance proves to be too harmful for their businesses, instead of adopting it globally, they might choose to go into trouble of creating a separate set of products and processes for the EU users. That, of course, would most likely mean that those products would receive less attention that their counterparts used by the rest of the world, and would feature more limited functionality, harming the users who reside in the EU. It would also harm the competitiveness of the European companies, as well as their ability to scale globally, as, unlike their foreign-based peers, they would face more restrictive and expensive to comply with regulations from the start, while, say, their U.S. peers would have the luxury to scale in the more loosely regulated markets first, before expanding to Europe — at which point, they'd be more likely to have the resources necessary to successfully withstand the costs of compliance.

Once all of this is taken into consideration, I'd argue it becomes obvious that the benefits that come with stricter regulation, however significant, don't necessarily outweigh the costs and the long-term consequences. Data privacy is, of course, a hugely important issue, but there is little to be gained from pursuing it above everything else, and a lot to lose. With GDPR, the EU has chosen to put itself through a huge experiment, with its outcome far from certain; the rest of the world might benefit from watching how the situation around GDPR unfolds, waiting to see the first results, and learning from them, before rushing to introduce similar proposals at home.

Why We Should Focus On Our Similarities, Not Uniqueness

"Define America in one word... Possibilities. Americans always believe anything is possible."

Tonight, Joe Biden, the 47th Vice President of the U.S., came to Kellogg to deliver a talk on unequal economic growth. It was the first time I got to see such a high-profile politician speak in person, so, as you can imagine, I was fairly excited about it. And I definitely wasn't disappointed: overall, it was a very interesting and insightful talk. The unequal economic growth of the last decades remains a significant issue that should not be overlooked, and Vice President Biden touched on many of the key points in his speech.

In particular, his push for healthcare and education to be treated as basic rights, not privileges, felt appropriate and refreshing, and his comments about the unfair restrictions that companies today often impose on workers, limiting their job mobility and bargaining power, or about the unreasonably harsh licensing requirements for many jobs that end up stifling competition, were spot on while staying reasonable: he focused only on workers' right to compete for jobs and fair pay in an open marketplace, not on people being entitled to those jobs in the first place (an argument a certain person-who-must-not-be-named likes to appeal to so much).

Was Biden's speech mostly focused on the U.S.? Well, yes, but in a way that was to be expected. In business school, it's easy to grow accustomed to the idea of bringing a global perspective into every discussion, but one can't expect everyone to follow that approach, nor is it really necessary. After all, most of us probably didn't come to Kellogg today expecting Biden to deliver a lecture on global inequality; we can always look to Gates and others for that.

However, there was one thing in today's talk that rubbed me the wrong way. In his speech, the Vice President repeatedly emphasized the uncanny ability of the U.S. to reimagine itself, the unique qualities that the U.S. and its people possess, and its special place in the history of the world, in the process making a few unflattering remarks about China and also, to my surprise, the U.K., France and Germany.

Curiously enough, I actually do agree with most of those remarks: in my opinion, it's quite fair to say that the U.S. holds a unique place in the world today, and to talk about the very special traits and qualities that brought many people to the U.S. in the first place and then helped them succeed there and build the country as we know it, as well as the exceptional ability of the country to reimagine itself and push forward.

Still, I feel that it's not enough for a statement to be simply correct to make for a compelling, and, more importantly, right, argument, and in my opinion that was exactly the case here. In the global world of today, there is more to be gained from focusing on how everyone might benefit from increased cooperation, predicated on every country acknowledging its strengths and weaknesses, and taking the time to appreciate and learn to work with the strengths of its partners. It's not that the U.S. (or any other place) needs to suddenly lose its unique advantages or forget its history, of course. Rather, it's about seeing itself as an essential part of a larger world made of equals, and then promoting that kind of worldview among its citizens.

There is also another argument to be made here. The sense of uniqueness can be a source of pride, but it can also easily lead to feelings of superiority or entitlement. Yes, Vice President Biden did specifically mention that to him this discussion isn't about entitlement, but that's the issue with the concept of uniqueness: what it actually means is open to everyone's interpretation. Coming from another country that also has a long history of viewing itself, and its people, as a unique and powerful force in the world (for those of you who don't know, I'm originally from Russia), I've seen firsthand some of the issues that often stem from such positioning. Yes, a sense of national pride can do a lot of good for any country and its people, but it can also become a dangerous force if taken too far, with people's sentiments around it easily manipulated, which makes me convinced that now is not the time to appeal to it, as the dangers far outweigh any possible benefits.

So while I agree with the essence of the comments Vice President Biden made in his speech, I also strongly believe that in a world that is becoming increasingly global and yet is riddled with xenophobia, civil unrest, and white supremacy movements gaining ground, the "identity of uniqueness", if you will, even when tied to a country rather than race, ethnicity, or religion, should perhaps make way for the idea that everyone in the world is essentially the same, and for the ever-increasing importance of all of us working together. After all, whether we like it or not, the world we live in is already global, and nothing will ever reverse that, so the sooner we adjust our philosophies and rhetoric accordingly, the better off we'll all be.

Remaking Education

To continue with the topic of education: today we increasingly hear complaints about the growing inadequacy of our education systems to the realities of the world around us. It's impossible not to see merit in some of those complaints, too. In a world that is rapidly moving towards a gig economy, characterized by a continuing decline in average job tenure and with a lot of jobs likely to disappear in the next 10-20 years, many aspects of the traditional education systems look questionable at best.

But in order to understand which parts of the system work well, and which are outdated and require revamping, it's useful to understand the history and context in which the current system came into existence in the first place, and the purposes it was set up to serve. Otherwise, proposing changes would be akin to moving ahead in the dark: we might still stumble upon something useful, but it is just as likely that we would do more harm than good. This is particularly true for something as complex and intertwined with every aspect of our lives as education.

Our education system as we know it was largely established in the second half of the 19th century and the first decades of the 20th, coinciding with the Second Industrial Revolution. In his (absolutely brilliant, in my opinion) book "The End of Average", Todd Rose argues that to a significant extent the motivation behind it had less to do with the desire to create a truly meritocratic society than with the ever-increasing demand for workers that the new businesses were experiencing. The key purpose of education, therefore, was not to provide everyone with opportunities to discover their talents and use them in the best possible way, but rather to educate people to a minimum level sufficient for them to fill the new vacancies.

The Second Industrial Revolution has long since become history; today, we are in the middle of what is widely regarded as the Digital Revolution, or the Third Industrial Revolution. This new era has brought tremendous change to societies throughout the world and to the global economy; it's hard to deny that the needs of both society and individuals today are very different from what they were during the Second Industrial Revolution more than a hundred years ago. And yet, we still rely to a significant extent on a system that was designed for a different age and different circumstances.

That raises several important questions. First, given how much the world has changed over the last 100 years, how suitable are our education approaches for the new circumstances? Yes, it remains possible that a lot could be achieved through gradual evolution of the existing offerings. But is it too far-fetched to imagine that, at least for some aspects of the current system, disruption might make more sense than evolution?

Personally, I don't think so. The idea of providing personalized education in schools requires changing pretty much every aspect of the traditional school experience, and yet the early results seem very promising. The same goes for the notion that bootcamps, nanodegrees and other unconventional options for professional education might one day turn into a viable alternative to a college degree: while it might raise some eyebrows, there is a lot of promising work happening in the space right now. And the list goes on.

Second, if we want to bring positive change to the current education system, we need to focus on designing new solutions that can be successfully scaled. One reason the entire world still relies on a system put in place over a hundred years ago is that it was built to scale. Therefore, if the goal is to have a wide impact, it's important to consider, for whatever solutions we propose, whether there is a way to implement them throughout a single state, a country, or the entire globe, as was done with school and college education in the past.

To that point, it's also crucial to consider the implications the proposed solutions would have for the existing system: we no longer live in a world that is a blank canvas, so the implications of change can sometimes be unexpected and profound. The concept of personalized learning illustrates some of these issues well: while students might get tremendous benefits from the new process, we need to consider what happens when the real world inevitably starts interfering with it. What happens when families move, and students find themselves in areas where there are no schools with personalized learning options? Would the introduction of personalized learning only deepen the gap between well-performing schools that are well-staffed and have access to funding, and the ones that are already struggling? Would it hamper job mobility for teachers? I'm sure it's not impossible to find answers to those questions, but in order to do that, we need to be asking them in the first place.

Finally, one day the time will come when the context changes again, and we will need to rethink the education system once more. I believe we could do a great service to future generations if we keep that in mind and focus on designing solutions that can be adjusted as needed and are made to be iterated upon.

Fighting The Ivory Trade: The Lessons Learned

According to estimates, in 1979 there were at least 1.3 million African elephants. By the early 1990s, that number had dropped by more than half, to 600,000. Today, estimates stand at around 415,000, with roughly 100 more elephants being lost every day, mostly to poachers engaged in the ivory trade.

Recently, The Economist published a film describing the scope of the problem and the efforts African countries are making to reduce, and ultimately eliminate, poaching; I'd highly suggest watching it (it's only 6 minutes long).

The fight to stop poaching is a tough and complicated one, and as one can learn from the film, the best of intentions can sometimes lead to terrible consequences, undoing a lot of the good work done previously. This is what I wanted to focus on, as I believe it's helpful to learn about some of the strategies described in the video, and the reasoning behind them, since they can be widely applicable to a number of other issues as well.

The fight to end the ivory trade has been going on for decades now, and while it hasn't always been a success, some progress has been made. However, while killing elephants for ivory was made illegal, the trade itself wasn't completely banned: exceptions were made for some countries that made an effort to control poaching, and the ivory trade also remained legal, albeit with restrictions, in the countries that generated the majority of demand (China, Japan, the U.K.). That, in turn, created a surreal situation in which legal and illegal trade co-existed side by side.

The problem is, while one can announce that trading tusks carved before a certain date is legal and trading tusks carved after that date is not (this is exactly how the system was set up in the U.K., where trading in tusks carved before 1947 remained legal), there is no real way to separate demand into those artificial buckets. Moreover, as it turned out, the very fact that the ivory trade was still allowed, even with all the restrictions, legitimized the desire to own ivory in the eyes of those looking to purchase it.

This became particularly clear in 2008, when the decision was made to legally sell 102 tons of stockpiled tusks. As tusks were seized over the years, it was never clear what to do with them in the long run, and guarding them remained expensive and often unsafe. So the argument was made that a legal sell-off would help raise the money needed to continue conservation efforts, and would also help depress ivory prices, making poaching less economically attractive.

That decision, however, backfired terribly. Those involved in the illegal trade viewed it as a signal that the ivory trade, legal or illegal, was back. Moreover, the huge amount of legal ivory flooding the market created perfect cover for the expansion of illegal trade, as it was often impossible to trace the origin of the tusks. And as it turned out, the legal sell-off didn't even depress prices; instead, they continued rising. There were multiple theories for why that was the case, with the main explanation accepted today being that excess demand for ivory was there all along, and the legal sell-off certainly didn't help promote the idea that purchasing ivory might be wrong or immoral.

In 2016, Kenya, trying to decide what to do with a huge amount (105 tons) of stockpiled tusks, and mindful of the terrible outcome of the 2008 legal sell-off, took a different approach: it chose to burn them. It wasn't the first time Kenya had done so: it first burned 12 tons in 1989 in a widely publicized (and criticized) event, but it had never before aimed to destroy such an unbelievably huge amount of tusks.

At first glance, this idea might seem insane: those 105 tons were valued at hundreds of millions of dollars that could have been used to fund further conservation efforts. Moreover, burning so much ivory could have created a sense of scarcity, driving the price of ivory even higher. Finally, some argued that destroying tusks denigrated the dead animals and sent the message that they were of no value. And yet Kenya chose to proceed with its plan, widely publicizing the event.

The result: the price of ivory went from $2,000 per kilo in 2013 to around $700 today. That wasn't, of course, the result of Kenya's decision to burn the stockpiled tusks alone. Rather, it came as the result of a series of orchestrated efforts to raise awareness of the terrible consequences that the demand for ivory has had for African elephants, as well as the bans on legal trade gradually imposed throughout the world (in particular, in China and Hong Kong).

One might argue that the collapse of the legal trade should have just shifted the demand to the illegal market, creating scarcity and driving prices even higher. However, that didn't actually happen, and that's what made this strategy so valuable to learn from.

As it turned out, to a significant extent the demand for ivory was driven by the justification that the existence of a legal trade provided, and by buyers' general unawareness of the real source of most of the ivory they were buying and of the suffering their demand generated. The phase-out of the legal ivory trade that's happening right now, together with the public efforts of African governments to draw attention to the issue, stripped away those moral justifications, and as a result demand for ivory collapsed.

The laws of supply and demand were, of course, still in place, but the relationship between the two turned out to be much more complicated than many might have expected. This isn't unique to the ivory trade, either: there are other cases where the relationship between supply and demand is complex, and therefore requires very careful management to avoid disastrous consequences. I sincerely hope that the lessons of 2008 and 2016 will be further researched and publicized, as the price paid for these insights was surely too high to let them go to waste.

The Power Of Personification

The cover image for "The Best We Could Do" comes from ABRAMS, www.abramsbooks.com

I've recently finished reading The Best We Could Do, a graphic novel by Thi Bui. In it, Bui tells the story of her family, originally from Vietnam, who came to the U.S. after the fall of Saigon.

When I started it, my knowledge of the history of Vietnam and the Vietnamese people was, to my embarrassment, quite limited. And yet from the first pages this book felt personal and intimate. For the most part, Thi Bui focuses on her own experiences and those of her family. In doing so, however, she also manages to introduce readers to the complex history of Vietnam in the 20th century, and gives us a glimpse into how much it affected its people.

What also struck me is how similar the story Thi Bui tells is to the stories I grew up hearing and learning about from my own family and others around me: the complex and often sad history of the Jews in Eastern Europe, the rocky history of the Russians under communist rule, and so many others. Most of us probably have stories of our own that we grew up hearing and find easy to empathize with. One doesn't need to know anything about the history of Vietnam to see a reflection of one's own stories in the one Bui tells in The Best We Could Do. And once you recognize that, it becomes so much harder to remain blind to, and unmoved by, the struggles of others, no matter where they come from, what cultures they belong to, or what languages they speak.

I really wish we'd focus more on telling these personal stories: there is tremendous power in the personification of history, something that can never be achieved if we treat the history of living people as mere collections of facts and numbers.

The New Reality Of 2016

The first thing I did yesterday morning was type an “election” query into the Google search box. Even though I had been following the election results until around 2am, I still nurtured the hope that something had changed at the last minute. Of course, it didn’t happen. The new reality the world woke up to yesterday was that Donald Trump would be the next president of the United States, something that seemed unthinkable to many of us even on the morning of the election.

***

Before continuing, though, I want to make a disclaimer. I am not a U.S. citizen. Moreover, I hold the passport of one of the most anti-American countries in the world. Therefore, I feel uneasy expressing my opinions on the current election, and I contemplated whether to publish this post at all, even though I reside in the U.S. right now and intend to keep doing so in the future. I do have a vested interest in the U.S. election (largely aligned with Clinton supporters, or better yet with Bernie Sanders supporters), and I also believe that in today’s global economy, the results of the U.S. election influence the entire world. Still, if you feel that this election is an internal affair of the U.S. that should be of no concern to everyone else, that’s OK; just close this post and forget about it. I sincerely hope it hasn’t offended you.

Also, in this post I have no intention of arguing how terrible Trump might be (there is already enough written on the topic anyway), or of making predictions about how good or bad his presidency will be for the U.S. Instead, I wanted to dig into...