Designing Accessible Products

On Thursday, Microsoft announced Soundscape, an app that aims to make it easier for people who are blind or visually impaired to navigate cities by enriching their perception of their surroundings through 3D audio cues.

According to Microsoft:

"Unlike step-by-step navigation apps, Soundscape uses 3D audio cues to enrich ambient awareness and provide a new way to relate to the environment. It allows you to build a mental map and make personal route choices while being more comfortable within unfamiliar spaces."

To me, this appears to be a wonderful idea, and an app like this could eventually make a huge difference for people who are visually impaired, helping them navigate unfamiliar environments and make better use of everything cities have to offer.

While interning at Microsoft this summer, I was very impressed by the company's commitment to building more accessible tools. If you're interested in learning more about the work they are doing, there is a dedicated section on the company's website highlighting the principles Microsoft uses to think about inclusive design, and providing specific examples of their work.

Of course, Microsoft isn't the only major tech company that has demonstrated a commitment to building products that are truly accessible. Apple has long been known for its attention to accessibility, and continues to work to make its products accessible. Google, while not necessarily doing a great job in the past, seems to be catching up. And Amazon finally made its Kindle e-readers accessible once again in 2016, after 5 years of producing devices that weren't suited for those who are visually impaired (the early versions of Kindle readers were actually accessible too, but then Amazon gave up on this functionality).

And yet there are a lot of areas where tech products' accessibility leaves much to be desired, and many companies simply don't pay enough attention to it. These companies often come up with multiple reasons to justify it, too. Some state that designing with accessibility in mind is too hard or too expensive, or that it just makes their products look dull. Others believe that by ignoring accessibility issues, they're only forgoing a small percentage of the market (the figure typically cited is 5% or less).

To be clear, none of those arguments should be viewed as acceptable. Moreover, designing with no regard to accessibility is today often classified as discrimination based on disability, and over the last 25 years, it has been made illegal in multiple countries (including the U.S. and U.K.), with customers successfully suing companies that failed to provide accessible options.

But even if we put aside the legal aspect of the issue, do any of the excuses companies typically use to avoid paying attention to accessibility actually have any merit? As it turns out, not really.

According to the U.S. Census Bureau, in 2010 nearly 1 in 5 people (19%) had a disability, with more than half of them reporting their disability as severe. About 8.1 million people had difficulty seeing, including 2.0 million who were blind or unable to see. About 7.6 million people experienced difficulty hearing, including 1.1 million whose difficulty was severe, and about 5.6 million used a hearing aid. Roughly 30.6 million had difficulty walking or climbing stairs, or used a wheelchair, cane, crutches, or walker. About 19.9 million people had difficulty lifting and grasping, including, for instance, trouble lifting an object like a bag of groceries, or grasping a glass or a pencil.

Now, if you look at those numbers, the argument that companies ignoring accessibility forgo only a small chunk of the market proves obviously incorrect. Even if you single out a particular disability, like difficulty seeing, it still affects millions of people.

What is perhaps even more important, those numbers don't necessarily include everyone who might benefit from products designed with accessibility in mind: a well-thought-out design might also benefit people who are temporarily disabled, as well as the youngest and the oldest users. So it's not just about ensuring that people with disabilities are able to use your products, but also about creating better products in general.

Here is one great quote related to this discussion, from the Slate.com article "The Blind Deserve Tech Support, Too: Why don’t tech companies care more about customers with disabilities?":

"When you make a product that’s fully accessible to the blind, you are also making a product accessible to the elderly, to people with temporary vision problems, and even to those who might learn better when they listen to a text read aloud than when reading it themselves. This is the idea of universal design: that accessible design is just better design."

Is designing for accessibility time-consuming and expensive? Sometimes, but overall, it really doesn't have to be. A lot of it has to do with learning about and following the best practices related to accessibility, and ensuring that the products you build adhere to the industry standards. Starting to do that might require a certain amount of resources, but in most cases it would be a one-time investment. Besides that, some of the things related to accessibility require very little effort on your part, e.g. adjusting your color scheme to make it easier for people who are color-blind to interact with your product. And in the process of making your products accessible, you are likely to materially improve the experience for your current users as well.

Finally, we are entering an era when new technologies (AI, voice assistants, VR/AR, novel ways to input information, etc.) can contribute a great deal to making it easier for people with disabilities to interact with the products around them. Take, for example, this description of what can be achieved even with the current generation of voice assistants, from the "Brave In The Attempt" article on Microsoft's accessibility efforts:

"One of the best Windows tools for people with mobility challenges is Cortana. Just with their voice, users can open apps, find files, play music, check reminders, manage calendars, send emails, and play games like movie trivia or rock, paper, scissors, lizard, Spock. The speech recognition software takes this even further. You can turn all the objects on your screen into numbers to help you choose with your voice. You can vocally select or double-click, dictate, or specify key presses. You can see the full list of speech recognition commands to see all that it can do."

Isn't such a tremendous opportunity to empower people to live much richer lives worth working just a little bit harder for?

Remaking Education

To continue with the topic of education, today we increasingly hear complaints about the growing inadequacy of our education systems to the realities of the world around us. It's impossible not to see merit in some of those complaints, too. In a world that is rapidly moving towards a gig economy, characterized by a continuing decline in average job tenure, and with a lot of jobs likely to disappear in the next 10-20 years, many aspects of traditional education systems are questionable at best.

But in order to understand which parts of the system work well, and which are outdated and require revamping, it's useful to understand the history and context in which the current system came into existence in the first place, and the purposes it was set up to serve. Otherwise, proposing any changes would be akin to moving ahead in the dark: we might still stumble upon something useful, but it is just as likely that we would do more harm than good. This is particularly true for something as complex and intertwined with every aspect of our lives as education.

Our current education system as we know it was largely established in the second half of the 19th century and the first decades of the 20th, coinciding with the Second Industrial Revolution. In his (absolutely brilliant, in my opinion) book "The End of Average", Todd Rose argues that to a significant extent, the motivation behind it had less to do with the desire to create a truly meritocratic society, and more with the ever-increasing demand for workers that new businesses were experiencing. Therefore, the key purpose of education was not to provide everyone with the opportunity to discover their talents and use them in the best possible way, but rather to educate people to a minimum level that would be sufficient for them to fill the new vacancies.

The Second Industrial Revolution has long since become history; today, we are in the middle of what is widely regarded as the Digital Revolution, or the Third Industrial Revolution. This new era has brought tremendous change to societies throughout the world and to the global economy; it's hard to deny that the needs of both society and individuals today are very different from what they were during the Second Industrial Revolution more than a hundred years ago. And yet, we still to a significant extent rely upon a system that was designed for a different age and different circumstances.

That raises several important questions. First, given how much the world has changed over the last 100 years, how suitable are our education approaches for the new circumstances? Yes, it remains possible that a lot could be achieved through the gradual evolution of the existing offerings. But is it too far-fetched to imagine that at least for some aspects of the current system, disruption might make more sense than evolution?

Personally, I don't think so. The idea of providing personalized education in schools required changing pretty much every aspect of the traditional school experience - and yet, the early results seem to be very promising. Same goes for the notion that bootcamps, nanodegrees and other unconventional options for professional education might one day turn into a viable alternative to college education — while it might raise some eyebrows, there is a lot of promising work happening in the space right now. And the list goes on.

Second, if we want to bring positive change to the current education system, we need to focus on designing new solutions that can be successfully scaled. One reason why the entire world still relies on a system that was put in place over a hundred years ago is that it was built to scale. Therefore, if the goal is to have a wide impact, it's important to consider, for whatever solutions we propose, whether there is a way to implement them throughout a single state, a country, or the entire globe, as was done with school and college education in the past.

To that point, it's also crucial to consider the implications the proposed solutions would have on the existing system: we no longer live in a world that is a blank canvas, and therefore the implications of change can sometimes be unexpected and profound. The concept of personalized learning illustrates some of these issues well: while students might derive tremendous benefits from the new process, we need to consider what happens when the real world inevitably starts interfering with it. What would happen when families move, and students find themselves in areas where no schools offer personalized learning options? Would the introduction of personalized learning only deepen the gap between the well-performing schools that are well-staffed and have access to funding, and the ones that are already struggling? Would it hamper job mobility for teachers? I'm sure it's not impossible to find answers to those questions, but in order to do that, we need to be asking those questions in the first place.

Finally, one day a time will come when the context changes again, and we will need to rethink the education system once more. I believe we could do a great service to future generations if we keep that in mind, and focus on designing solutions that can be adjusted as needed, and are made to be iterated upon.

The Future Of Online Education: Udacity Nanodegrees

In its 20+ year history, the online education market has experienced quite a few ups and downs. From the launch of lynda.com way back in 1995 (back then, strictly speaking, it wasn't even an EdTech company; it only started offering courses online in 2002), to Udemy, with its marketplace for online courses on every conceivable topic, to the MOOC revolution, which promised to democratize higher education — I guess it would be fair to say that the EdTech space has tried a lot of things over the years, and has gone through quite a few attempts to re-imagine itself.

On the last point, while MOOCs (massive open online courses) might not have exactly lived up to the (overhyped) expectations so far, the industry lives on and continues to evolve, with players like Coursera, edX and Udacity expanding their libraries and experimenting with new approaches and programs.

Most recently, Udacity shared some metrics that allow us to get a sense of how the company has been doing so far. And, in short, we could describe it as "not bad at all". Apparently, in 2017 the company had 8 million users on the platform (a figure that includes users engaged with Udacity's free offerings), up from 5 million the year before. Udacity also doubled its revenue to $70 million, which constitutes an impressive growth rate for a company at this stage.

Now, the reason I believe those numbers are particularly interesting is the monetization approach Udacity took a few years ago, when it first introduced its Nanodegrees: 6-12 month programs developed in collaboration with industry partners, such as AT&T, IBM and Google, which should presumably allow students to build a deep enough skillset in a specific area to successfully find jobs.

While the idea itself isn't necessarily unique - other companies have also been trying to create similar programs, be it in the form of online bootcamps, as is the case for Bloc.io, or the Specializations offered by Coursera - I would argue that Udacity's Nanodegrees offer the most appealing approach. Nanodegrees are developed in close partnership with industry players (unlike Coursera's Specializations, which are university-driven), and require lower commitment (both financially and time-wise) compared to online bootcamps. Finally, Udacity's marketing has been vastly superior to that of its key competitors, especially when the Nanodegrees were first launched: announcing them in partnership with AT&T, with AT&T committing to provide internships for up to 100 of the best students, was a great move.

Some of the metrics Udacity shared this week were specifically related to Nanodegrees, and provided a glimpse into how those have been doing so far. In particular, Udacity reported that there are 50,000 students currently enrolled in Nanodegrees, and 27,000 have graduated since 2014.

The price per Nanodegree varies quite a bit, and it can also depend on whether the program consists of a single term or several, but with the current pricing, it seems reasonable to assume that the average program costs around $500-700. With 50,000 students enrolled, that should amount to $25-35 million in run-rate revenues (strictly speaking, that isn't exactly run-rate, but that's unimportant here). The actual number might be a bit different, depending on a number of factors (the actual average price per course, the pricing Udacity offers to its legacy users, etc.), but I'd assume it shouldn't be off by much.
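As a quick sanity check, here is that back-of-the-envelope calculation in code form; the enrollment figure is Udacity's, while the price range is my assumption:

    # Back-of-the-envelope estimate of Nanodegree run-rate revenues.
    # Enrollment is the reported figure; the price range is an assumption.
    enrolled_students = 50_000
    price_low, price_high = 500, 700   # assumed average price per program, USD

    revenue_low = enrolled_students * price_low    # $25,000,000
    revenue_high = enrolled_students * price_high  # $35,000,000

    print(f"Implied revenues: ${revenue_low / 1e6:.0f}M-${revenue_high / 1e6:.0f}M")
    # -> Implied revenues: $25M-$35M, against ~$70M in total 2017 revenue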

Those numbers ($25-35 million, give or take) are interesting, because they clearly show that Udacity must have other significant revenue streams. There are several possibilities here. In addition to offering learning opportunities to consumers, Udacity also works with businesses, which theoretically could account for a hefty chunk of the money it earned last year. Besides that, Udacity runs an online Master's in Computer Science program with Georgia Tech, which is a fairly large program today, and offers some other options to its users, such as the rather pricey Udacity Connect, which provides in-person learning opportunities, and a few Nanodegrees that still operate under the legacy monthly subscription pricing model, such as the Full Stack Web Developer Nanodegree. All of those could also contribute to the revenue numbers, of course.

And yet, if you look at Udacity's website today and compare it to how it looked a couple of years ago, everything seems to be focused around the Nanodegrees now, whereas in the past, Udacity felt much more like Coursera, with its focus on free courses, with users required to pay only for additional services such as certificates. The obvious conclusion is that Udacity apparently considers Nanodegrees to be a success, and believes there is significant potential to scale them further.

One last interesting thing to consider is the number of people who have completed at least one Nanodegree since their introduction in 2014. According to Udacity, only 27,000 people have graduated so far, which is curious, given that it reports 50,000 people currently enrolled in at least one program, and most programs are designed to be completed in 6 to 12 months.

This can only mean one of two things: either Udacity has recently experienced a very significant growth in the number of people enrolling in Nanodegrees (which would explain the existing discrepancy between those two numbers), or the completion rates for the Nanodegrees historically have been relatively low.

Now, completion rates were one of the key issues for MOOCs, where they proved to be quite dismal. However, the situation for Udacity is somewhat different: here, the users have already paid for the program, so in a way, completion rates are less of a concern (and under the legacy pricing model, where Udacity charged users a monthly subscription, longer times to completion could actually have benefitted the company). On the other hand, low completion rates might ultimately contribute to poor reviews, negatively affect user retention, and damage the company's brand, so this issue still needs to be managed very carefully.

Will Udacity's Nanodegrees prove to be a success in the long run? That remains to be seen, but so far, it looks like the company has been doing a pretty good job with them, so the future certainly looks promising.

The Challenge Of Attracting The Best Talent

In one of the classes I'm currently taking at Kellogg, we recently touched on the issue of top K-12 teachers gravitating to the better-performing schools, with the schools that represent a more challenging case often facing significant difficulties attracting and retaining top talent.

This problem, of course, isn't unique to the K-12 system. If you think about it, most of us would probably choose to move to a job that offers higher pay and a better working environment, whenever the opportunity presents itself, without a second thought. And if we believe that the new job would be just as meaningful as the old one, or more so, that typically seals the deal. And who could blame us?

And yet, once you start thinking about what that truly means, the answer becomes less clear. While it most certainly makes sense to look for greener pastures from an individual's perspective, we might wonder what kind of impact this has on the world around us. More importantly, are we even serving our own needs in the best possible way by following this line of thinking?

One particularly interesting example that immediately comes to mind is Google. For years now, it has been highlighted as one of the most desirable employers in the world. It has the resources required to offer its employees extremely competitive pay, and it is also famous for its great work environment - hey, it even tries to assess people's "Googliness" before hiring them, in order to determine whether they'll fit well with the company's culture.

Google is undoubtedly a great place to work, so it isn't really surprising that people from all over the world aspire to work there. However, there is also another side to that story. Almost every person I've talked to who's worked at Google has at some point brought up the issue of being surrounded by people who were overqualified for their jobs. Yes, Google's immense profitability has made it possible for the company to pay for the best available talent. But hiring the best people doesn't automatically mean that you have meaningful problems for them to work on. 

That, of course, doesn't mean that Google shouldn't aim to hire people of the highest caliber - after all, as long as it has the resources and the appeal required to attract them, both the employees and Google seem to be better off when it does. And yet, one might wonder: what could many of those people have achieved otherwise? Would the companies they'd work for have more challenging problems for them to work on? Or would some of those people actually start their own companies that'd eventually change the world?

The same goes for the K-12 system. Nobody could ever blame teachers for the desire to work at schools that offer better environments - even if one doesn't care about the compensation and surroundings, it can be much more fulfilling to work in such a place. The question, however, is what impact those teachers might have had at the lower-performing schools: many of those have a much more pressing need for the best talent, but have trouble attracting such candidates.

So, what could be done to address this issue? I am afraid there are no easy answers here. The best talent is, and will always remain, a scarce commodity, and the best organizations often have a higher appeal (not to mention more resources to offer) to those workers - that is not going to change, nor should anyone want it to, really.

What we could do, however, is create additional incentives for people to take risks, whether that means going to work for a struggling school, or taking a leap of faith and starting a company. Some of those incentives might be financial in nature, but what seems to me even more crucial is for us as a society to promote the importance of rising to the challenge, especially when it doesn't bring any immediate rewards, and to celebrate those who choose to do so. This, of course, might be easier said than done, but it's not impossible, and is very much worth the effort.

The Benefits Of Raising Less Money

A couple of weeks ago, TechCrunch published an essay called "Raise softly and deliver a big exit" by Jason Rowley. In it, he set out to explore the relationship between the amount of funding startups raise and the success of their exits, measured by the ratio of exit valuation to invested capital (VIC).

The analysis, unfortunately, doesn't provide a breakdown by the space the startups operate in, and is thus relatively high-level. It also raises some questions about the validity of correlating VIC with the amount of capital raised or with the valuation: as both of those are in fact used in the calculation of VIC, any inferences about the correlations between either of them and VIC aren't really meaningful.

Still, even if the conclusions aren't statistically meaningful, the analysis itself raises some interesting points, all of which can be summarized in a single phrase: "raising a lot of money makes getting high return on investment less likely".

One could argue that this is a fairly obvious conclusion that doesn't require looking at any specific data, and she'll be right about that: earning high returns (meaning a percentage of capital invested, not absolute numbers) at scale is often harder than doing so when you invest relatively small amounts of money.

For startups raising venture capital funding, that appears to be particularly true. Selling your company for $50 million is a success if it only raised $5 million in funding; it becomes much more complicated if it attracted $100 million - in this case, to deliver the same multiple, you'll need to sell it for at least $1 billion, which drastically limits the number of potential buyers (and also the chances that the company will be able to get to the stage where it could be sold for such an amount of money).
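Here is a quick sketch of that multiple math; the numbers are the illustrative ones from this paragraph, not data on any particular company:

    # Illustrative exit-multiple math, using the hypothetical figures above.
    def exit_multiple(exit_value, capital_raised):
        """Ratio of exit valuation to invested capital (Rowley's VIC)."""
        return exit_value / capital_raised

    # $50M exit on $5M raised: a healthy 10x
    print(exit_multiple(50e6, 5e6))  # 10.0

    # With $100M raised, matching that 10x requires a $1B exit
    required_exit = 100e6 * exit_multiple(50e6, 5e6)
    print(f"${required_exit / 1e9:.0f}B")  # $1B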

So why are we so focused on the huge rounds raised, "unicorn" startups and the outsized exits?

Part of the story is tied to the business model of the VC firms: most of them receive a fixed percentage of the assets under management (AuM) as a management fee (typically, 2% per year), plus carry (say, 20% of the overall proceeds from exits, once the investors in the fund are paid the principal back). Both of those pieces are directly tied to the AuM, creating the incentive to raise more money from the limited partners.

What that means is that there is a misalignment between the interests of limited partners (who care about returns as a percentage of capital invested) and those of general partners (whose compensation, and especially their salaries, is to a significant extent determined by AuM size, with absolute returns coming second).
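To make that misalignment concrete, here is a toy sketch of the "2 and 20" economics described above; the fund size, fund life, and proceeds are invented for illustration, and real fee schedules are more nuanced (fees are often charged on committed capital and step down over a fund's life), so treat this as a simplification:

    # A toy model of the "2 and 20" structure; all numbers are made up.
    def gp_compensation(fund_size, fund_life_years, exit_proceeds,
                        mgmt_fee=0.02, carry=0.20):
        """Fees accrue on AuM regardless of performance; carry applies only
        to gains above the principal returned to the limited partners."""
        fees = fund_size * mgmt_fee * fund_life_years
        carry_income = max(exit_proceeds - fund_size, 0) * carry
        return fees, carry_income

    # A $100M fund over 10 years that merely returns its capital still
    # generates $20M in fees, with zero carry...
    print(gp_compensation(100e6, 10, 100e6))  # (20000000.0, 0.0)

    # ...and doubling AuM doubles fee income even if returns don't improve
    print(gp_compensation(200e6, 10, 200e6))  # (40000000.0, 0.0)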

This compels the general partners to raise larger funds, which in turn means that they need to pour more money into each startup (or do more deals per fund, which brings the risk of spreading your resources too thin). And investing more money per startup creates the obvious pressure for larger exits.

While the VC piece is relatively straightforward, the situation for startup founders is more complicated. Unlike the general partners of VC firms, founders care almost exclusively about returns: their compensation isn't really tied to the amount of money they raise, only to the proceeds from selling their companies. Another interesting point to consider is that for the vast majority of individuals, the amount of money required to completely change their lives is much lower than the amounts that might be deemed satisfactory for VC firms, especially the larger ones.

To illustrate: for a firm with $1 billion under management, selling a company it invested $5 million in (at a $10 million pre-money valuation) for $50 million isn't really attractive. Even though it would make a decent return on this investment, the absolute gains are too small to make much of a difference.

For the founders of that same company, however, such a deal can be very attractive: if there were 3 of them, it would yield them more than $11 million apiece - a huge sum of money for any first-time entrepreneur. Accepting a deal like that would also leave them free to pursue their next ventures, knowing that they can now take bigger risks, with their financial security already established.
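For reference, here is the rough cap-table arithmetic behind those figures; it deliberately ignores option pools, liquidation preferences, and later-round dilution:

    # Rough cap-table math behind the figures above (simplified on purpose).
    invested = 5e6      # capital invested by the firm
    pre_money = 10e6    # pre-money valuation at the time of investment
    exit_value = 50e6   # sale price
    num_founders = 3

    post_money = pre_money + invested
    investor_stake = invested / post_money            # ~33.3%

    investor_proceeds = exit_value * investor_stake   # ~$16.7M, a ~3.3x return
    per_founder = (exit_value - investor_proceeds) / num_founders

    print(f"Per founder: ${per_founder / 1e6:.1f}M")  # -> Per founder: $11.1M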

So again, why does the entire industry pay so much attention to the largest deals and exits?

Well, for one, it's just more interesting for the public to follow those deals - they create a rock-star aura around the most prominent founders and VCs, something that is obviously lacking for smaller investments and exits. Next, some of the more exciting ventures do require outsized investments: that is often particularly true for some of the most well-known B2C startups (e.g. social networks, or on-demand marketplaces) - although that certainly isn't the case for a lot of companies out there. Finally, the VC agenda certainly plays a role here as well.

And yet, while all those reasons might be legitimate, it's worth remembering that for every $1 billion exit there could be dozens of $50-100 million sales, and while such deals don't always sound as cool, they surely do have the potential to change the lives of the entrepreneurs involved in them.

Bill & Melinda Gates Foundation Annual Letter: Things To Learn

Last week, the Bill & Melinda Gates Foundation published its 2018 annual letter. In their own words, the letter is structured as a series of answers to the "10 Toughest Questions We Get".

This is a brilliantly written document that does a great job providing insights into the work Bill and Melinda are doing with the foundation, as well as outlining the reasons for choosing certain areas to focus on, and the strategies they pursue in each of those.

If you think about it, philanthropy today can, and does, do a great deal of good, but it can also be dangerous if done irresponsibly, given the outsized influence it can often wield on the world, even inadvertently, not to mention that it can be used to deliberately promote a certain agenda.

In this context, it seems to me incredibly helpful to learn about the worldview and the motivations of the people who lead the largest foundation on the planet. Besides, the Bill & Melinda Gates Foundation has been operating for 18 years now, engaged in multiple areas across the globe. That means that even if you don't agree with their stance on certain topics, there might still be a lot to learn from the approach they are taking: over those 18 years, they've acquired a huge amount of experience by trying different things and figuring out what works and what doesn't.

For those of you who don't have time to go through the entire document, below are some select quotes from the letter that I personally found particularly interesting.

***

Why don’t you give more in the United States?

(Melinda) Our foundation spends about $500 million a year in the United States, most of it on education. That’s a lot, but it is less than the roughly $4 billion we spend to help developing countries.

We don’t compare different people’s suffering. All suffering is a terrible tragedy. We do, however, assess our ability to help prevent different kinds of suffering. When we studied the global health landscape, we realized that our resources could have a disproportionate impact. We knew we could help save literally millions of lives. So that’s what we’ve tried to do.

Why don’t you give money to fight climate change?

(Bill) In philanthropy, you look for problems that can’t be fixed by the market or governments.

Are you imposing your values on other cultures?

(Bill) On one level, I think the answer is obviously no. The idea that children shouldn’t die of malaria or be malnourished is not just our value. It’s a human value. Parents in every culture want their children to survive and thrive.

Sometimes, though, the person asking this question is raising a deeper issue. It’s not so much a question about what we do, but how we do it. Do we really understand people’s needs? Are we working with people on the ground?

How are President Trump’s policies affecting your foundation’s work?

(Bill) More broadly, the America First worldview concerns me. It’s not that the United States shouldn’t look out for its people. The question is how best to do that. My view is that engaging with the world has proven over time to benefit everyone, including Americans, more than withdrawing does. Even if we measured everything the government did only by how much it helped American citizens, global engagement would still be a smart investment.

Is it fair that you have so much influence?

(Melinda) No. It’s not fair that we have so much wealth when billions of others have so little. And it’s not fair that our wealth opens doors that are closed to most people. World leaders tend to take our phone calls and seriously consider what we have to say. Cash-strapped school districts are more likely to divert money and talent toward ideas they think we will fund.

(Bill) There’s another issue at the heart of this question. If we think it’s unfair that we have so much wealth, why don’t we give it all to the government? The answer is that we think there’s always going to be a unique role for foundations. They’re able to take a global view to find the greatest needs, take a long-term approach to solving problems, and manage high-risk projects that governments can’t take on and corporations won’t. If a government tries an idea that fails, someone wasn’t doing their job. Whereas if we don’t try some ideas that fail, we’re not doing our jobs.

Airbnb's Latest Announcements: Hassle-Free Travel And Luxury Properties

Yesterday, Airbnb hosted a large keynote presentation, announcing two important additions to its product: Airbnb Plus and Beyond, as well as a number of smaller additions and changes.

According to the company, "Airbnb Plus is a new selection of only the highest quality homes with hosts known for great reviews and attention to detail. Every Airbnb Plus home is one-of-a-kind, thoughtfully designed, and equipped with a standard set of amenities — whether you’re in a private room or have the entire place to yourself.” At launch, Airbnb Plus features 2,000 listings across 13 cities, with more to follow. To join Airbnb Plus, hosts need to submit an application, which requires paying a $149 fee, and then satisfy the company's 100-point quality checklist.

Another service announced yesterday was Beyond, although it won't be launched till late spring, and the amount of information available so far is limited. As Airbnb puts it, Beyond will bring "extraordinary homes with full service hospitality" to the platform.

Besides that, Airbnb is now formally recognizing boutique hotels for the first time: while some hotels have been represented on the platform for years, Airbnb never paid much attention to them. That is about to change, with the inventory now being separated into several categories, including vacation homes, unique spaces, bed & breakfasts, and boutique hotels.

***

In my opinion, those changes are extremely significant. They also provide us with a glimpse into the direction Airbnb wants to head in the future. While it was the idea of a marketplace for people to rent their apartments to other travelers that made Airbnb into the company it is today, at some point it had to find a way to transcend the limitations of this niche, while also utilizing its strengths to expand into additional areas.

One of the key challenges for Airbnb to solve at the beginning was convincing people to put their trust in the platform and allow strangers to stay in their homes. Once Airbnb managed to overcome this initial mistrust, the ratings system allowed it to quickly scale the platform, with untrustworthy guests and hosts alike being filtered out by the market.

With Airbnb Plus, it's now taking this further, using its already established ratings system for hosts (as well as the "superhost" status some of them hold) to identify the most promising rentals, and then working with their owners to ensure an even higher level of comfort for the guests. This seems very smart, as it fully utilizes the existing advantages that come with Airbnb's scale and its crowdsourced ratings, thus allowing the company to scale the program fast, while also providing guests with enhanced convenience.

The same goes for the idea of recognizing boutique hotels. In many ways, Airbnb is better positioned to serve this niche than the regular hotel booking systems, not to mention the fact that Airbnb charges hosts only 3%, collecting the rest of its fees from guests in a transparent way, while platforms like Booking.com charge hotels 15-20% of the booking value. However, until now, finding boutique hotels on the platform was slow and inconvenient, damaging the experience for users. The introduction of separate categories for different types of inventory should improve the user experience, and potentially help attract additional hotels to the platform.

It's harder to make any definitive conclusions about Airbnb Beyond at this point. On the one hand, judging from the way Airbnb positioned it in the announcement, it represents a long-awaited move for the company directly onto the hotels' turf, which significantly expands its total addressable market, and should also potentially allow it to better serve the entire spectrum of its clients' needs.

On the other hand, unlike with Plus and boutique hotels, the expansion into full-service hospitality doesn't necessarily utilize the existing strengths of the platform, and it's also not a space the company has much experience in. In order to leverage its scale, Airbnb would most likely need to find local partners in each geography, and then figure out a way to provide the consistent, high-quality experience guests are accustomed to with traditional luxury hotels. This can be a very difficult challenge to tackle, but at the same time, the sheer size of the hospitality industry makes the attempt worth the effort.

Fighting The Ivory Trade: The Lessons Learned

According to estimates, in 1979 there were at least 1.3 million African elephants. By the early 1990s, that number had dropped by more than half, to 600,000. Today, the estimates stand at around 415,000, with an additional 100 elephants being lost every day, mostly to poachers engaged in the ivory trade.

Recently, The Economist published a film describing the scope of the problem, and the efforts African countries are currently making to reduce, and ultimately eliminate, poaching - I'd highly suggest watching it (it's only 6 minutes long).

The fight to stop poaching is a tough and complicated one, and as one can learn from the film, the best of intentions can sometimes lead to terrible consequences, undoing a lot of the good work that had been done previously. This is something I wanted to focus on, as I believe it's helpful to learn about some of the strategies described in the video, and the reasoning behind them, as those can be widely applicable to a number of other issues as well.

The fight to end the ivory trade has been going on for decades now, and while it hasn't always been a success, some progress has been made. However, while killing elephants for ivory had been made illegal, the trade itself wasn't completely banned: exceptions were made for some countries that made an effort to control poaching, and the ivory trade also remained legal, albeit with restrictions, in the countries that generated the majority of demand (China, Japan, the U.K.). That, in turn, created a surreal situation in which the legal and illegal trade co-existed side by side.

The problem is, while one can announce that trading tusks carved before a certain date is legal, while trading in any tusks carved after that date is not (this is exactly how the system was set up in the U.K., where trading in tusks carved before 1947 remained legal), there is no real way to separate the demand into those artificial buckets. Moreover, as it turned out, the very fact that the ivory trade was still allowed, even with all the restrictions, legitimized the desire to own ivory in the eyes of those looking to purchase it.

This became particularly clear in 2008, when the decision was made to legally sell 102 tons of stockpiled tusks. As tusks had been seized over the years, it was never clear what to do with them in the long run, and guarding them remained expensive and often unsafe. So the argument was made that a legal sell-off would help raise the money needed to continue the conservation efforts, and would also help depress ivory prices, making poaching less economically attractive.

That decision, however, backfired terribly. Those involved in the illegal trade viewed it as a signal that the ivory trade, legal or illegal, was back. Moreover, the huge amount of legal ivory flooding the market created a perfect cover for the expansion of illegal trade, as it was often impossible to trace the origin of the tusks. And as it turned out, the legal sell-off didn't even depress prices; instead, they continued rising. There were multiple theories on why that was the case, with the main explanation accepted today being that the excess demand for ivory was there all along, and the legal sell-off certainly didn't help promote the idea that purchasing ivory might be wrong or immoral.

In 2016, Kenya, trying to decide what to do with a huge amount (105 tons) of stockpiled tusks, and given the terrible outcome of the legal sell-off in 2008, took a different approach: it chose to burn them. It wasn't the first time Kenya had done that: it first burned 12 tons in 1989 in a widely publicized (and criticized) event, but it had never before aimed to destroy such an unbelievably huge amount of tusks.

At first glance, this idea might seem insane: those 105 tons were valued in the hundreds of millions of dollars, money that could have been used to fund further conservation efforts. Moreover, burning so much ivory could have created a sense of scarcity, driving the price of ivory even higher. Finally, some argued that destroying the tusks denigrated the dead animals, and sent the message that they were of no value. And yet, Kenya chose to proceed with its plan, widely publicizing the event.

The result: the price of ivory went from $2,000 per kilo in 2013 to around $700 today. That wasn't, of course, the result of Kenya's burning of the stockpiled tusks alone. Rather, it came as the result of a series of orchestrated efforts to raise awareness of the terrible consequences that the demand for ivory had for African elephants, as well as of the bans on legal trade gradually imposed throughout the world (in particular, in China and Hong Kong).

One might argue that the collapse of the legal trade should have just shifted the demand to the illegal market, creating scarcity and driving prices even higher. However, that didn't actually happen, and that's what made this strategy so valuable to learn from.

As it turned out, to a significant extent the demand for ivory was driven by the justification that the existence of legal trade provided, and also by buyers' general unawareness of the real source of most of the ivory they were buying, and of the suffering their demand had generated. The phase-out of the legal ivory trade that's happening right now, together with the public efforts of African governments to draw attention to the issue, stripped away those moral justifications, and as a result, the demand for ivory collapsed.

The laws of supply and demand were, of course, still in place, but the relationship between the two turned out to be much more complicated than many might have expected. This isn't something unique to the ivory trade, either - there are other cases where the relationship between supply and demand is complex, and therefore requires very careful management to avoid disastrous consequences. I sincerely hope that the lessons of 2008 and 2016 will be further researched and publicized, as the price paid for these insights was surely too high to let them go to waste.

The Struggle Over Snapchat's Controversial Redesign

When a tech company rolls out a major update for a B2C product that's been in use for years and has an army of loyal followers, it is fairly reasonable for it to expect a certain amount of backlash, especially if the changes affect the way users interact with the app. After all, we tend to rely heavily on acquired habits when dealing with a lot of tech products, and when those habits are disrupted, even for good reasons, we become frustrated.

Still, when Snapchat introduced the redesigned version of its app back in November, I doubt that the company expected the backlash to turn out to be so severe. Since then, the complaints have never stopped, with an incredible number of users weighing in to demand the reversal of the redesign.

In the tech world today, reversing updates is not entirely unheard of, but it can be exceedingly tricky, especially for a major redesign like this one: nobody wants to cave in to public opinion and admit failure. What's more important, however, is that in the world of continuous deployment and constant A/B testing, the decision to introduce major changes is never made blindly. Chances are, Snap had some sound reasons to go forward with this redesign (which, as the company was most likely aware, wouldn't necessarily be taken kindly by users) - such as, for example, the expectation that the new design would help the company better monetize the app. The fact that the recently released earnings beat expectations, sending Snap's share price soaring, only supports this hypothesis, as many observers connected the improved performance to the redesign.

So, it didn't come as a surprise when earlier this month Evan Spiegel (Snap's CEO) defended the redesign, and announced that it's here to stay. That, however, wasn't the end of the story. As it turns out, a month ago a user from Australia started a petition on Change.org demanding that Snap reverse the update. Well, as of today, more than 1.2 million (!) people have signed it. As a result, yesterday Snap responded to the petition, promising some changes to the app that, according to the company, would help alleviate at least some of the issues users were complaining about.

This is an interesting development, and definitely not a very common one: even though Snap hasn't actually agreed to reverse the redesign (which, again, is totally unsurprising), the amount of backlash it received has ultimately forced it at least to try to communicate the upcoming changes to the user community in a more transparent way, and possibly to make some concessions along the way as well (we don't really know whether the announced upcoming changes were planned in advance).

And while I personally don't agree that using Change.org to make such a demand was justified - in order for it to remain an effective vehicle for driving change, users need to be cognizant of the social significance of the petitions they start - it certainly proved effective in this case, which might set an interesting precedent for future battles between tech companies and their users.

Why ICOs Probably Aren't The Future Of Early Stage Financing (At Least, Not Yet)

While catching up on the recent posts on avc.com, I came across this video from the Upfront Conference, in which a number of VCs and entrepreneurs discuss the pros and cons of ICOs and tokens in the context of early stage funding.

For those of you who don't have time to go over the entire video (although I'd still recommend watching it, as it lasts only 7 minutes and is highly educational), here are a few quotes from it that I found particularly insightful:

Adam Ludwin, Chain:

"It's not surprising that you see the number of ICOs you see, because of the temptation to raise capital that's not equity, so there is no dilution, and is not debt, meaning you don't have to pay anyone back, so people are just giving you money, and all you give them is the hope that the thing they have will appreciate in price. That's a very tempting deal for any entrepreneur to take."

Jim Robinson, RRE:

"What I have to get right to win is I have to have a company actually work. It has to build what it's supposed to build, it has to find an audience, it has to have sales and repeats. What I have to get right if I'm speculating or investing in tokens is not whether or not they'll actually ever work, it's whether or not I timed it correctly."

Fred Wilson, Union Square Ventures:

"We're gonna invest in the sector for the long term, you know, we're thinking about it as a 10-year or 15-year investment opportunity, and so we try really hard not to get caught up in near-term price speculation... There is not enough reporting and accountability, there's not enough governance, there's too much early liquidity, there's misalignment between potentially the investors in the platform and the developers of the platforms..."

Tom Loverro, IVP:

"This is sort of constitutional democracy in 1776, like nobody really knows how to make this stuff work."

I believe those opinions offer a great perspective on why ICOs should be viewed with caution, and also why they probably aren't going to replace the traditional ways to fund early-stage companies anytime soon.

To be fair, I'm not going to argue with the fact that cryptocurrencies and blockchain are creating some very exciting possibilities, allowing us to rethink the way some things have typically been done in the past. And, in the case of ICOs, the idea of bypassing the intermediaries, such as VC funds, and the costs associated with them, and investing directly in promising companies at the early stages (thus leaving potential for a huge upside, if the startup succeeds) is certainly alluring. However, the mechanisms governing such investments were put in place for a reason, and people willing to participate in ICOs need to have a very clear understanding of what they're getting themselves into.

Below is a summary of some of the good reasons to use ICOs to fund companies:

  1. For people who possess deep expertise in a certain field (thus allowing them to figure out what the most promising opportunities are), and at the same time are unwilling or unable to invest through traditional channels, ICOs might represent a sensible (and cost-effective) investment option
  2. Some might be less interested in the long-term prospects of the companies holding ICOs, and instead hope to earn high returns by trading tokens - for them, the speculative nature of ICOs, the lack of regulation around them, and the low transaction costs can make ICOs quite attractive
  3. Also, in theory, there is nothing preventing companies that have already gained significant traction from holding ICOs for some very valid reasons, the most famous example being Telegram with its huge $1.2 billion ICO - given Telegram's ambitions to create an ecosystem of decentralized apps that won't be subject to regulation by any government, an ICO appears to be exactly the right tool to raise funding; moreover, such later-stage ICOs are obviously less risky than investments made in very early-stage companies, and thus might represent a great niche for ICOs as an investment mechanism

At the same time, there are plenty of reasons for investors to be wary of ICOs (even after we exclude obvious scams, such as pump and dump schemes):

  1. The protections for the investors are often limited or non-existent: most ICOs today are much more similar to crowdfunding than to public offerings, thus leaving the investors vulnerable from the legal standpoint
  2. Pretty much anyone can invest in ICOs, which is not the case for most of the regulated investments that are typically considered high risk: the concept of accredited investor exists for a reason
  3. The governance structure of many companies holding ICOs is often questionable at best, leaving investors with limited say in the direction of the companies' strategy
  4. The fact that the tokens acquired in ICOs can be traded is great in the sense that it provides investors with liquidity; however, it also creates a conflict of interest between the founders and the backers
  5. To build on the previous point, given that the vast majority of companies doing ICOs these days are early-stage, there are often no objective ways to value them, which in turn means that the price of the tokens is subject to huge swings, often based on rumors or the quickly changing sentiments of the public

So, will ICOs evolve into a mature investment mechanism that'll revolutionize early-stage financing? To me, at this point the answer remains unclear: while the ICO as a mechanism most certainly has great potential, I think it'll most likely take years before it evolves into a more reasonable investment tool that the public can truly benefit from.