Views and opinions expressed on this blog are solely my own and do not reflect views of any organizations or employers with whom I am affiliated. Moreover, I am not compensated, monetarily or in any other way, by any persons or firms mentioned in the posts below.

Sunday, November 27, 2016

Ride-Sharing Apps

An Economist's Dream



Steve Levitt captured my sentiment precisely in his recent Freakonomics episode "Why Uber is an Economist's Dream" when he commented about the demand curve, "I've been dreaming of the day I could answer this question, and it probably says a lot about me."

A little background: Levitt, along with Peter Cohen, Robert Hahn, Jonathan Hall and Robert Metcalfe, used consumer reaction to Uber's surge pricing data to estimate a demand curve and the resulting consumer surplus. Uber's data makes this calculation easy to conduct since a consumer's willingness to pay can be tested for almost the exact same product (getting a ride) at many different price levels. They estimated that for every dollar that someone spends, he would have happily spent another $1.60, that $1.60 being the consumer surplus. The economists contend that "back-of-the-envelope calculations suggest that overall consumer surplus generated by the UberX service in the United States in 2015 was $6.8 billion"! Wow, people are willing to pay a LOT to get a ride!

Another dream come true would be figuring out the price at which drivers would have been willing to drive compared with what they were actually paid, or what is known in economic theory as the producer surplus. To know the producer surplus, the supply curve itself would have to be estimated by figuring out how many drivers would be willing to drive at every given price. The supply curve slopes upward because the number of drivers willing to drive increases at each higher price level. 
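To make both surplus ideas concrete, here's a toy sketch in Python. Every number below is hypothetical and invented for illustration (nothing comes from the Uber paper): consumer surplus is the area between the demand curve and the price actually paid, and producer surplus is the area between the price received and the supply curve.

```python
# Toy sketch: consumer and producer surplus with made-up linear curves.
# (All numbers are hypothetical; nothing here comes from the Uber paper.)

def demand(p):
    """Rides demanded at price p (demand slopes down)."""
    return max(0.0, 100 - 10 * p)

def supply(p):
    """Rides drivers are willing to give at price p (supply slopes up)."""
    return max(0.0, 20 * p - 20)

# Equilibrium: demand(p) == supply(p)  ->  100 - 10p = 20p - 20  ->  p = 4
p_eq = 4.0
q_eq = demand(p_eq)  # 60 rides

# With linear curves, each surplus is the area of a triangle.
p_choke = 10.0  # price at which demand falls to zero
p_floor = 1.0   # price below which no driver will drive
consumer_surplus = 0.5 * q_eq * (p_choke - p_eq)  # 0.5 * 60 * 6 = 180
producer_surplus = 0.5 * q_eq * (p_eq - p_floor)  # 0.5 * 60 * 3 = 90
```

The hard part in practice is exactly what the economists faced: you observe only a handful of (price, quantity) points, and the curves have to be estimated from them.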
Ride-sharing apps in reality do exhibit many virtues of a perfect economics experiment. They are two-sided marketplaces where each side requires different functionality from the platform: the driver requires a rider, and a rider requires a driver with a vehicle. Ride-sharing also has a merit that other two-sided marketplaces lack: near perfect competition. Although Uber clearly dominates the ride-sharing apps today, I think there is a necessity for multiple apps due to the unique dynamics of this two-sided marketplace and the propinquity of the industry to a perfectly competitive market.




Two-Sided Marketplaces: No study of two-sided marketplaces is complete without revisiting the war between VHS and Betamax. This famous incident has the makings of everything needed for a "winner take all" landscape: people weren't going to buy multiple players or videotapes in both formats before one became the clear standard, and then, manufacturers began choosing sides. In the end, JVC won with its VHS design. Will there be a single winner in the ride-sharing market the way there was in the videotape market? I don't think so, and here's why.

In their article "Strategies for Two-Sided Markets," Eisenmann, Parker and Van Alstyne posit that a networked market is likely to be served by a single platform when the following three conditions apply:
  1. Multi-homing costs are high for at least one user side. "Homing" costs comprise all the expenses network users incur, including adoption, operation, and the opportunity cost of time, in order to establish and maintain platform affiliation.
  2. Network effects are positive and strong, at least for the users on the side with high multi-homing costs.
  3. Neither side's users have a strong preference for special features.
Ride-sharing apps fail test #1. Multi-homing costs are what kept consumers from owning players in two different videotape formats, and those costs just don't exist with ride-sharing. It's easy for riders to have both Lyft and Uber on their phones and to check both for time-until-ride and price. There's no cost to having both apps on the phone and very little opportunity cost of time spent checking both. Multi-homing costs aren't an issue for drivers either, as many drivers drive for both companies.

Network effects are strong and positive for both sides, and neither side really has a preference for special features. But given that it's so easy to download and compare trips from both apps or to drive for both companies, I don't think ride-sharing is inherently a winner-take-all industry.



Near Perfect Competition: Another reason that ride-sharing apps are such a great case study is that the service of ridesharing is almost (but not quite) a perfectly competitive market. In economic theory, a perfectly competitive market has the following characteristics:
  • No barriers to entry: drivers can enter and exit the market as they please without much trouble. Because of this, the market has a very large number of drivers.
  • No single firm (in this case, driver) can influence the market price or conditions. Each driver is a price taker, and price is determined by supply and demand.
  • Homogenous, almost identical output: one ride may be better than another, but the main output is getting from point A to point B. (The slight difference in each experience actually does matter, as discussed below.)
  • There is perfect knowledge with no information failure or time lags. The ride-sharing app platforms make the transmission of information much easier than in the non-virtual world.
Ride-sharing ought to be a perfectly competitive market if all information is available. That is, absent regulation, no one entity ought to be able to set a price, given that there are lots of drivers who provide a seemingly indistinguishable service. One would think prices would naturally be set dynamically where supply and demand meet.

In practice, figuring out the demand curve is difficult, as discussed above in reference to the paper that Levitt co-authored. You may accept a ride at $5.00 but not at $5.50, but it's difficult for Uber to know that without asking you about both of those prices. And this is the reason the market will support at least one more competitor to Uber, to ensure that there is a price checking mechanism for a commodity service. Without Lyft or other competitors, would Uber have created the consumer surplus that it did in 2015?

Another aspect that contributes to the likelihood of multiple ride-sharing apps existing is that, unlike in theoretical perfect competition, each ride is a little bit different from the one before. The variance in each individual experience creates much more uncertainty about customer satisfaction with the output than if the output were a consistent product, like Campbell's soup. It only takes one bad ride for a rider to want another app as an option. Uber tries to solve for this with its driver ratings system to ensure a consistently pleasant experience. That's not fool-proof (the driver can just have a bad day), however, and it doesn't solve for negative experiences that are exogenous to the driver.

The price and treatment checking will happen for drivers too. Anecdotal data suggests that more drivers prefer Lyft to Uber for various reasons, and the normal ebb-and-flow of business policies will cause either one of the apps to be the darling at any given time. Regardless, I think the people on both sides of the ride-sharing equation like having the choice, and that choice serves as a bit of a price-check on the market.  

What's interesting is that Uber the platform itself is obviously in nothing close to a perfectly competitive market: barriers to entry are enormous, which is why the company is so successful. But it is facilitating transactions that, if done correctly, would buoy the perfect competition of the ride-sharing world. It could obviate the need for price-checking by competitors if it could figure out how to let the market set the price dynamically based on supply and demand; that is, if somehow, each ride could be priced by the rider instead of by the company itself. This still doesn't solve for riders simply wanting a second choice in case of disenchantment with one of the platforms.

Ben Thompson Disagrees: Since I get so much inspiration from Stratechery, I'd be remiss if I didn't mention that Ben Thompson somewhat disagrees with me. Thompson argues that since the number of riders is far greater than the number of drivers, and Lyft, as of now, has fewer riders (less demand), drivers will be too busy serving Uber customers, which will lead to a winner-take-all dynamic.

He explains further, "It doesn’t matter that drivers may work for both Uber and Lyft. If the majority of the ride requests are coming from Uber, they are going to be taking a significantly greater percentage of driver time, and every minute a driver spends on a rider job is a minute that driver is unavailable to the other service. Moreover, this monopolization of driver time accelerates as one platform becomes ever more popular with riders. Unless there is a massive supply of drivers, it is very difficult for the 2nd-place car service to ever get its liquidity to the same level as the market leader (much less the 3rd or 4th entrants in a market)."

Thompson also claims that people build allegiances to a brand and persist with that brand, unless they are given a reason to change; it's simply not worth the time and effort to constantly compare services at the moment of purchase, and that Uber and Lyft would ensure that their prices are pretty similar anyway.

To his first point about drivers, I think people act irrationally, and that explains why drivers drive for Lyft and other apps when liquidity might be the best at Uber. Just check out the reasons people give for enjoying Lyft; there are people who will drive with the less-busy app because they like it that way, so I don't think the liquidity of drivers, at least in large cities, will be an issue.

To the second point about riders having allegiances, again, I think people are fickle. One bad experience, which is easy to have in ride-sharing, and they will open up the "other" app for the next ride. Because unlike a can of Campbell's soup, no two Uber rides are exactly the same.

Also, people do gravitate towards their niches, especially in dense areas, so ride-sharing apps with a twist could also do well. A pet-friendly car service? Paw-fect! A neighborhood service for hauling kids by a certified parent? It takes a village to drive a child, right? And I'm sure there's already lots of cannabis-friendly ride-sharing schemes being conjured up in a few of the states.  

A Note on Financing: According to Crunchbase, Lyft has raised $2B to date while Uber has raised $8.7B. The thesis of having two successful ride-sharing apps is predicated upon having two apps with pretty good brand awareness and customer acquisition ability. And to achieve both, you need financing. Lyft, the #2 player, has been able to pay for its customer acquisition costs thus far, but if the financing market dries up and the company has challenges raising its next round, it will have trouble keeping up with its customer acquisition goals. A stalled financing market is bad for both companies, of course, but especially bad for Lyft, since it's trying to play catch-up. A prolonged tightness in the fundraising market, therefore, would end up aiding Uber in retaining and expanding its market dominance.
 
Update: We're seeing financing woes play out for Ola, India's Uber rival, as reports indicate that the Indian ride-sharing app may settle for equity financing at a 40% lower valuation. Still, the company would be valued at $3 billion and is planning on raising approximately $600MM, which would still give it enough ammo to acquire customers. Until funding dries up entirely, I don't think we'll see any of Uber's competitors shutting down.





Sunday, October 9, 2016

Bundling the Newspapers

Where's the Spotify or Netflix for Newspapers? 



As I either come up against paywalls for, or renew subscriptions at, some renowned bastions of journalism, I keep wondering: why isn't there a Spotify or a Netflix for Newspapers? You know, an all-you-can-eat subscription model where I pay a flat monthly fee and get access to the New York Times, Economist, Washington Post, Wall Street Journal, Financial Times and the New Yorker, just to name a few.

To clarify, we're not talking about news aggregators like Digg, Reddit or Google News, RSS feed readers, or curators that work above the paywall to show you free articles or snippets of otherwise for-fee articles. In that regard, no one beats Facebook; Pew Research Center shows that over 60% of users get their news about government and politics from Facebook. Rather, we're talking about what's referred to as bundling of paid content.


A Little Background: The term "bundling" was previously used in reference to cable companies selling you a package deal of phone, Internet and cable service. More recently, bundling refers to those companies selling you a subscription to a panoply of cable channels, including ones you'll never watch. The idea is that high revenue generating channels subsidize low revenue generating ones and everyone is happy. But viewers got tired of how expensive those cable bills were and moved to online bundles like Netflix, Hulu, HBO Go and Amazon Prime. The irony is that the bundle was dis-aggregated on TV and re-aggregated online, and can cost up to the same amount.

I bring up this background because newspapers are going through the same thing. Everyone knows that print circulation is down, and the bundle of content that a newspaper used to be is becoming dis-aggregated. You have your "new" news, which is what is happening right now. That kind of news has become a commodity, thanks to Twitter, Facebook, other social media, and anyone with a smartphone. Then, you have the "think pieces", investigative journalism, and local news, which could demand a monetary value from readers. According to an American Press Institute study published in February 2016, 78% of US newspapers with circulations over 50,000 are using some kind of a digital subscription model. More specifically, 63% of newspapers are using a metered model, 12% are using a freemium model and 3% are using a hard subscription model. With a panoply of newspapers and a multitude of subscription choices, it's a shame that no one has yet figured out a simple way to access all the finest journalism for one monthly fee.

To be sure, the idea has been tried and tested, but mainly for magazines. Next Issue Media, now Texture, launched in 2012 with an all-you-can-eat monthly subscription for magazines, and its competitor, Magzter, is now available too.

The Startups: Moving on to newspapers, Blendle and Inkl are the two visionaries trying to solve this problem, but they're still in their early stages. Inkl allows you to read newspaper articles by paying either by the article (10 cents) or by a monthly subscription ($15). Blendle, which is in beta, is a pay-per-article model. The pay-per-article model may make sense for journalism because, unlike music, TV shows or movies, it is unlikely that readers will consume an article repeatedly.

The pay-per-article model doesn't allow for higher cost articles to subsidize lower cost articles, which wouldn't matter as much if, like songs in an album, all articles in a publication were written by one author. With the prevalence of this type of model, for better or worse, we'd eventually lose the articles for which not enough readers wanted to pay. Both models tout ad-free reading zones without click-bait, but if your revenue stream is per article, headlines will have to be flashy enough to ensnare readers. Regardless, neither of these bundlers has been able to get all the good papers, and it may be impossible to make that happen (#analysis).

 

The reason goes back to simple economics (you knew this was coming). The market for a newspaper most closely resembles monopolistic competition, where firms have many competitors and barriers to entry are low, but each firm offers a slightly differentiated product. In this case, newspapers aren't perfect substitutes for one another and can set their own prices, albeit taking the industry price as a guideline. Granted, most newspapers aren't doing super well, but Pulitzer Prize winning papers can typically elicit a higher price than other papers. In the long run, papers price where Average Revenue = Average Total Cost, as shown below.
So as long as news organizations are recovering their average total costs and maybe a little more, they should sign up to be part of the bundling service, correct? Not so fast. In the short run, when differentiation between papers is sharp, some can charge above the Average Total Cost curve and make economic profits, as shown below, lowering their incentives to play well with others.
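Here's a quick numeric sketch of that short-run versus long-run distinction, with entirely made-up cost numbers for a hypothetical paper: economic profit is positive while differentiation lets the paper price above average total cost, and falls to zero once entry pushes price down to it.

```python
# Hypothetical numbers for one differentiated paper (nothing here is real data).

def average_total_cost(q):
    """U-shaped ATC: fixed newsroom costs spread over q subscribers,
    plus variable costs that slowly rise with scale."""
    fixed, variable = 1_000_000, 0.5
    return fixed / q + variable + 0.000001 * q

q = 200_000                      # subscribers
atc = average_total_cost(q)      # ~5.70 per subscriber per month

# Short run: sharp differentiation lets the paper price above ATC.
price_short_run = 7.0
profit_short_run = (price_short_run - atc) * q   # positive economic profit

# Long run: entry of close substitutes pushes price down to AR = ATC.
price_long_run = atc
profit_long_run = (price_long_run - atc) * q     # zero economic profit
```

The paper earning the short-run profit is exactly the one with the weakest incentive to join a bundle at a revenue share pegged near average total cost.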

Comparison to Music: There's a reason that blockbuster artists like Adele, BeyoncĂ©, and Taylor Swift have eschewed Spotify, and perhaps we can draw a parallel to the news world. Ben Thompson writes in Stratechery, "The problem with Spotify is that at a very fundamental level it treats music as a commodity. You can’t choose where your $10/month goes based on the emotional impact of a song."

 
He makes two great points:
  1. Money paid to an aggregator is NOT money paid to an artist, and an explicit purchase makes a fan more loyal, not less.
  2. Grossing $10 per customer in a single shot is far more lucrative than pennies from the exact same people when they access your songs in a streaming service.


So it comes down to compensation for those who can demand higher price tags, the Adeles of the newspapers, so to speak. In a way, pay-per-article solves problem #2, and the pennies per article really do go directly to the papers. It doesn't necessarily create loyal customers, however. So there will always be the Adeles who remain independent, unless you can somehow figure out how to solve the loyalty and price problems (some ideas below).

Twists to the Established Model: Here are some ideas about how to make the newspaper model more efficient, both for readers and publishers. 
  1. A Facebook add-on that allows you to instantly buy articles when you run up against a paywall.
  2. The pay-per-article platform should price discriminate, charging a few pennies more for articles from higher-priced newspapers or award-winning authors than for those from lower-priced newspapers. Click and purchase data will come in handy here, and prices can be adjusted as economics change.
  3. A choose-your-own newspaper bundle where the marginal cost of adding each new subscription declines. The papers get paid a % of ad fees (I don't mind having ads, and if that's how authors get paid, I'm for them) from the sites, and the bundler gets the rest, and the reader gets a great deal.
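Idea #3 above is easy to sketch. Here's one hypothetical pricing scheme (the $10 starting price and 20% step-down are invented purely for illustration) where each additional paper in the bundle costs less than the previous one:

```python
# Hypothetical pricing for a choose-your-own bundle: the $10.00 first paper
# and the 20% step-down per additional paper are invented for illustration.

def bundle_price(n_papers, first_price=10.0, decay=0.8):
    """Total monthly fee when each added subscription costs 20% less
    than the one before: 10.00, 8.00, 6.40, ..."""
    return sum(first_price * decay ** k for k in range(n_papers))

# Three papers cost ~24.40 in the bundle vs 30.00 bought separately,
# so the reader's deal improves with every paper she adds.
three_paper_bundle = bundle_price(3)
```

The platform's job would then be tuning the starting price and decay so that its share plus ad revenue covers what each paper gives up relative to a standalone subscription.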
Lastly, while there is a case for writing all news stories for Facebook, we know there have been plenty of fake stories on the social network. Facebook, not being a newspaper, doesn't have an obligation to take them down, so there has to be branded news to signal quality, whether that comes in a bundled format or not.


Sunday, June 5, 2016

Virtual Reality and Theory of Modularity

If you're anything like me and don't know much about mobile gaming, you haven't paid a ton of attention to virtual reality. It seems like a cool technology for sure, but useful primarily for playing computer games. That's what I thought, at least, until I heard Marc Andreessen on the A16Z podcast in August of 2015. 

After hearing Andreessen, I realized that VR can change lives for many people around the world. For many people, especially those in war-torn or developing countries, "real reality" is nothing to envy. VR can give people an experience that would be nearly impossible in actual reality. Imagine a student from anywhere in the world being able to sit in a Stanford classroom and interact with students and professors as if he were actually there. VR simulations are already being deployed outside of gaming; they can create 3D models of patients' anatomy, make history and science classes come to life, and allow auto manufacturers to test drive a car that doesn't yet exist. 
Despite all the promise of VR, the technology has mostly remained inaccessible to the masses, and Google intends to do something about that. The month of May brought us Google I/O and its most talked about announcement, the mobile virtual reality platform called Daydream. Daydream follows Google's initial foray into VR in 2014 through a cheap, disposable headset called Cardboard. But Cardboard came with a latency problem that could make users sick, a problem that was solved by higher-end VR headsets such as the Oculus Rift, Oculus+Samsung's Gear VR and HTC's Vive. 

Daydream's introduction fomented a debate about who is likely to win the VR headset race. Daydream, according to Gizmag, is more reactionary than innovative. Gizmag argues that the quality of Daydream will likely continue to lag behind the higher-end headsets, which have multiple controls and are already working on things like positional tracking. "Daydream shouldn't pose much of a threat anytime soon," claims the article. 

What makes Daydream more interesting is Google's announcement that the VR platform will be based on the next Android version, called Android N. Consequently, many phones that run Android will come optimized to run VR experiences and be Daydream ready. These phones will be certified by Google and will be required to have various VR-friendly components such as "high-quality sensors for head tracking or screens that can reduce blurring by showing images in extremely short bursts," according to The Verge.

In addition, Google will curate the Play Store to have content optimized for Daydream. In fact, "Google VR head Clay Bavor specifically mentioned Hulu, Netflix and Lionsgate as some of the companies bringing media content to Daydream," according to Variety. Moreover, Bavor mentioned that the company has already rebuilt YouTube from the ground up for VR. The accessibility of Daydream is expected to shift the relevance of VR from niche PC gamers to any mobile user who wants to experience various apps in a different way. 

Google's playbook for Daydream is straight out of Clayton Christensen's renowned Innovator's Solution. Christensen begins by defining interdependence and modularity. He says that "an architecture is interdependent at an interface if one part cannot be created independently of the other part- if the way one is designed and made depends on the way the other is designed and made." He goes on to say that "a modular architecture specifies the fit and function of all elements so completely that it doesn't matter who makes the components or subsystems, as long as they meet the specifications. Modular components can be developed in independent work groups or by different companies working at arm's length." 

Each product can have some components that are interdependent and some that are modular. An iPhone and iOS are interdependent, but the apps on the iPhone are modular.

What kind of architecture is best for VR today? Christensen argues that when a product's functionality is not yet good enough to address customer needs, firms that build their products around proprietary, interdependent architectures enjoy a competitive advantage, because standardization in modularity takes too many degrees of design freedom away from engineers and performance cannot be optimized. He goes on to say that "one reason why entrant companies rarely succeed in commercializing a radically new technology is that breakthrough sustaining technologies are rarely plug-compatible with existing systems of use." So if we think today's VR technology isn't good enough for mobile, then an interdependent structure, like the one Google hopes to create with Daydream, could win out because it will easily fit and function within all the Android phones.

Modularity, however, becomes the dominant design when products become good enough, and there is a performance surplus from the product. Once the requirements for functionality and reliability have been met, products begin competing on speed of upgrades or responsiveness to customers. With modular architectures, companies can introduce new products faster because they don't have to redesign everything. "Whereas in the interdependent world, you had to make all of the key elements of the system in order to make any of them, in a modular world you can prosper by outsourcing or by supplying just one element." If we think that mobile VR technology is good enough, then those companies that innovate faster and are more responsive to consumer needs, such as Oculus and HTC, would become the dominant players.

The trajectory of product architecture, as defined by Christensen, is depicted in the figure below:

The Oculus and HTC VR solutions are almost textbook examples of modular designs. Baldwin and Clark in "Managing in an Age of Modularity" note that modular designers "rapidly move in and out of joint ventures, technology alliances, subcontracts, employment agreements and financial arrangements as they compete in a relentless race to innovate." Baldwin and Clark also note that since designers achieve modularity by partitioning information into visible design rules and hidden design parameters, modularity is only beneficial if the partition is "precise, unambiguous, and complete."

When it comes to mobile VR, it will be important for the VR architecture to be designed to fit the phone's architecture well. Here, Google's Daydream presents a truly mobile experience, not one that was initially designed for PCs and later fitted to smartphones. In that context, it wins by providing accessibility of VR to a broad range of users. As John Nagle of Gyoza Games said, "With the launch of Daydream, Google is again further democratising VR, making it accessible to a vastly broader audience than was ever before possible."

As VR becomes more accessible, Google's platform will be able to provide developers with standards and design rules, allowing for modularity in applications that could be used in a VR setting. Nagle goes on to say, "From a development perspective, including a controller and providing a ‘Daydream Ready’ hardware spec is a great advantage, because it means that we can focus on building great content instead of spending time and money developing and testing on so many disparate hardware platforms."



Therefore, Google wins at first with its interdependent design, even if it doesn't have the best VR solution, just because it is able to make the technology accessible for both users and developers. The VR technology just isn't seamless enough yet with mobile phones to be useful to the majority of smart phone owners. Google's reference device that will allow manufacturers to bring their own headsets will create a much needed standardization in the market, which will allow developers to focus on content rather than on hardware.

As Daydream as a VR platform becomes more prevalent, however, the industry will move towards modularity. That is, phone makers can create their own headsets as long as they meet Google's specs. Developers can create content that will align seamlessly with Android. And the market will move towards a performance surplus as VR vendors innovate quickly to offer performance and the rich content to meet user demand. 

Saturday, February 27, 2016

Oil Prices, Tech Market, and the Economy, Part II

The Tech Bubble


The New Yorker
Asset bubbles, or the appearance of them, concern almost everyone due to their omnipresence in the media. Last year, Mark Cuban wrote a screed about how the 2015 tech bubble is worse than the 2000 bubble because retail investors can't participate (Cuban's a smart guy, but this makes no sense. Isn't that a good thing if we're in a bubble and retail investors aren't participating? It would mean fewer people hurt by the popping of a bubble if companies are private instead of public). Everyone, from TechCrunch to Vanity Fair, is talking about the tech bubble. I'm just waiting for Drake and Meek Mill to record diss tracks arguing about the existence of a tech bubble. 

I think it's smart to begin by defining an asset bubble, which, surprisingly, isn't easy, followed by some empirical evidence and economic theory.

Definitions: 
Some economists define an asset bubble as an upward price movement in an asset over an extended period of time that then suddenly implodes. For the most part, this definition is too ambiguous because it doesn't convey how high prices should move and why that movement is not justified. 

A more precise interpretation would define a bubble as a "situation where an asset's price exceeds the fundamental value of the asset," according to Gady Barlevy of the Federal Reserve Bank of Chicago. He goes on to note that an asset's value ought to be the present value of its future cash flows. If the price of an asset increases significantly, perhaps in a short period of time, without any expected change in future cash flows, then there may be an asset bubble. Again, there ought to be a difference between prices when expected cash flows grow by 10% versus 1000%. Thus, this definition also fails to quantify how large the movements have to be to constitute a bubble, but the idea of prices being divorced from fundamentals is key.
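Barlevy's definition can be sketched in a few lines of Python. The cash flows, 10% discount rate, and market price below are all invented for illustration; the point is only the comparison between the price and the discounted fundamentals.

```python
# Barlevy's definition in miniature. The cash flows, discount rate,
# and market price are all hypothetical; only the comparison matters.

def present_value(cash_flows, discount_rate=0.10):
    """Discount a list of expected annual cash flows back to today."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

expected_cash_flows = [100, 110, 121, 133, 146]   # modest ~10% growth
fundamental = present_value(expected_cash_flows)  # ~454

# A price far above fundamentals, with no change in expected cash flows,
# is what this definition would flag as a possible bubble.
market_price = 800.0
looks_bubbly = market_price > fundamental
```

Of course, the real difficulty is that "expected future cash flows" are themselves estimates, which is exactly why the definition resists quantification.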


I also think it's imperative to note what a bubble is NOT: 
  1. There is not necessarily a bubble just because prices in a certain asset class are higher than they used to be. Pricing must be divorced from fundamentals in order to constitute a bubble. 
  2. In the same vein, higher fundamental valuations don't always point to a bubble. Many reporters cite the growing number of unicorns as a sign of a bubble. If the present value of a company's future cash flows is over $1 billion, then the valuation could be justified. That's not to say high valuations are always justified, and indeed, in many cases they are not. But, in order to proclaim a bubble, one should dig several levels deeper to figure out if the entire asset class ought to be generalized as overvalued beyond fundamentals. 
  3. Lower stock prices do not necessarily mean that there was an asset bubble that is in the process of bursting. Day to day volatility in the stock market is a lot higher than day to day volatility in valuations of companies. If stock price compression is due to political reasons, over-reaction to market data, or other exogenous factors, it probably isn't a sign of a bubble popping. A correction from speculative levels of valuations to levels that more adequately reflect future cash flows, however, could be a sign of a bubble deflating. 
  4. Failure of companies does not mean there was a bubble that is imploding. Ben Thompson of Stratechery made a great point about the winner-take-all market: there will be failures in industries where there's only room for one major player due to network effects. Advertising is a zero-sum game and some apps/social media sites will lose out to others. That doesn't mean there was a bubble; that just means expected cash flows from one company were transferred to another.
So what could have caused the recent downturn in tech stocks? We can't ignore the impact of the Chinese stock market and recent tech earnings. Ben Thompson, however, makes a great argument:

"I think the recent chill in valuations and fundraising is about coming to terms with the fact that a lot of those unicorns are in the same boat as Facebook and Google’s advertising competitors: they have already missed out to the dominant player in their field (or, that their field was never viable to begin with). In some respects it is tech’s own inequality story: the average and median company and startup will increasingly bifurcate. It’s not a bubble, it’s a rebalancing, and the winners are poised to be bigger and richer than anything we have seen before." 




Empirical Evidence:  
How can we ascertain the existence of a bubble? When valuations reach unrealistic levels, we see more and more financial capital chasing companies in a particular industry. So, first, we can look at how much money has gone into the venture market now and back in 2000. In 2000, over $100 billion had been invested in VC, compared with $59 billion in 2015, according to the PWC MoneyTree Report. So there has been a lot of money spent in VC, just not as much as there was in 2000.


Moreover, during bubble times, investors put money into companies at the "idea" or very early stage. Speculators and non-venture investors enter the market at seed or angel rounds in hopes of landing the next unicorn. If investors are chasing newfangled investments in hopes of another gold rush the way they were in 2000, one would expect to see much more capital flowing into the seed and early stages of investment.

According to the PWC MoneyTree data (graph below), $21 billion of capital went into seed- and early-stage VC in 2015, compared with $29 billion in 2000. Approximately $8MM per venture went in at the seed or early stage in 2000, compared with $8.6MM in the same rounds in 2015.

It's interesting to note that the average dollar size per investment during seed and early stages is higher this time around, even though total capital deployed is lower, implying that there could be more of a winner-take-all strategy that Ben Thompson alluded to earlier.
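A quick back-of-the-envelope on the implied deal counts makes the concentration point concrete. This is only a rough sketch that treats the PWC MoneyTree totals and per-venture averages quoted above as round numbers applying uniformly across deals:

```python
# Implied seed/early-stage deal counts from the PWC MoneyTree figures above.
# Totals and per-venture averages are the rough figures quoted in the text.
seed_total_2000, per_deal_2000 = 29e9, 8.0e6   # $29B total, ~$8MM per venture
seed_total_2015, per_deal_2015 = 21e9, 8.6e6   # $21B total, ~$8.6MM per venture

deals_2000 = seed_total_2000 / per_deal_2000   # ~3,625 implied deals
deals_2015 = seed_total_2015 / per_deal_2015   # ~2,442 implied deals

# Fewer, larger checks in 2015: capital is more concentrated per company,
# consistent with a winner-take-all dynamic.
print(round(deals_2000), round(deals_2015))
```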

So are companies this time of higher quality than before, justifying the higher level of investment? We can look at the quality of tech companies that have raised public financing this time compared with those that did in 2000 to help determine that. While this isn't an apples-to-apples comparison, it gives us an idea of how mature the companies are when they go public and, ultimately, how far the investments could fall if their valuations aren't justified.

I looked at 2002 revenues of Nasdaq companies that IPO'ed between 1998 and 2002 and compared them with last-twelve-month (2015/2014) revenues of Nasdaq companies that IPO'ed between 2012 and 2015. The idea was to see how much more traction today's companies have before the public, non-VC investors jump in. Intuitively, we know that since companies have been waiting longer before going public, they should have greater revenue traction, which was corroborated by the data below.

The data above confirms the suspicion that companies are at later stages and have more control over their expenses (which revenue per employee is meant to indicate) now than they did in the early 2000s. This means that if there is a bubble, its speculative nature isn't nearly as bad as it was in the early 2000s. Perhaps, instead of acting with irrational exuberance, investors are merely unreasonably quixotic.

So it appears that there's not much of a bubble in the public markets or later-stage VC since, outside of a few companies, valuations are more or less close to fundamentals this time around, with higher revenue traction and lower expenses. In other words, we're seeing more "real companies" among later-stage VC investments and public companies than we did in 2000, when more investments were driven by speculation.

There could be a bubble among early-stage or angel-stage companies, since there is almost as much capital invested at those stages now as in 2000. The perilous impact from the bursting of that bubble, however, would be limited to investments at just those stages.




What would cause the implosion of high valuations in tech companies? Katie Benner of The New York Times and Jason Calacanis proffered in an engaging TWIST round-table that what we see this time around may be similar to what we saw during the 2009 crisis. That is, we may see a negative impact on the tech industry if other industries or consumers begin buying less technology. As mentioned in my previous blog post, if oil companies, for example, began spending less or had massive layoffs so that consumers couldn't spend on technology, we'd see lower tech revenues. Jason calls this a contagion: a downturn caused by exogenous factors rather than by the inherent overpricing and subsequent correction of tech valuations themselves. The magnitude of the contagion effect this time would depend on the balance between how much more investment there is in technology now versus in 2009 and how much of an impact a downturn would have on tech revenues now compared with 2009.

Economic Theory: 
For economic theory about asset bubbles, there's no better source than Carlota Perez's Technological Revolutions and Financial Capital. In it, she describes four phases of technological revolutions: Irruption, Frenzy, Synergy and Maturity.
The Irruption Phase is when "new revolutionary entrepreneurs outstrip profit making potential of all established production sectors, and there is a rush of financial capital towards them, readily deploying new appropriate instruments when necessary." In this period, there is idle money in search of profitable uses, and it leans towards investing in these new entrepreneurs to obtain high yields.

The Frenzy Phase is when there is a decoupling of financial capital and production of new innovation. Financial capital becomes arrogant from the highly profitable "bets" made by investors. Financial capital becomes a powerful magnet to attract investment into new areas, which become the "new economy". The entrepreneurs are forced to do whatever is necessary to attract the investors, in this case, the VCs. This is also when uncontrollable inflation sets in, debt mounts at a reckless rhythm, and a vast disproportion between paper wealth and real wealth becomes apparent.

After the Frenzy Phase, there is a turning point, which brings with it a collapse and a recession. Bubbles begin at the end of the Frenzy stage and burst during the turning point.

"There are three structural tensions that make it impossible to keep the frenzy profit going for an indefinite time. There are tensions between real and paper wealth, between the profile of existing demand and that of potential supply in the core products of the revolution, and between the socially excluded and those reaping the benefits of the bubble."

The Synergy Phase is where there is a re-coupling of financial capital and production. Innovation and growth can take place across the whole productive spectrum in this phase.

In Maturity, some disappointment comes from highly profitable sectors reaching their limits in both productivity and markets. Profits begin dwindling, and we begin to see "idle money" in the financial markets again.

So which phase are we in? According to Ms. Perez, we're likely in the midst of the turning point.

While it is possible that there will be several small tech valuation bubbles followed by corrections during the Turning Point, it seems from Perez's work that the big tech crash and related recession have already occurred. And it appears, from the empirical evidence, that we're facing less of a bursting of a tech bubble and more of a contagion effect on the tech industry at the moment. So, are we in a tech bubble? Probably not.

Sunday, February 14, 2016

Oil Prices, Tech Market and the Economy, Part I

IS-LM Model


Dow Jonesy enough for you?
The New Yorker

One of my colleagues and I were recently discussing the dizzying number of stimuli playing tug-of-war with the US economy in general and the financial markets (public and private) specifically. There's so much going on! Collapsing oil prices, prodding at the "tech bubble", the presidential elections, the burgeoning threat of terrorist activities, Kim Jong-un inching towards insanity... What does all of this mean for our economy and our financial markets? 

As an economist, I like to (over)simplify the world by thinking in terms of frameworks, and the one framework that spoke to me in my macro courses was the IS-LM model, which illustrates the Investment/Savings - Liquidity Preference/Money Supply equilibrium. Behind this almost nonsensical jargon is a simple concept that interest rates and GDP are a function of how much money is sloshing around and whether people would rather invest or save that money.  


For those of you who want the details, the IS and LM curves are derived from the aggregate demand equilibrium, where output (Y) = C (consumption) + I (investment) + G (government spending) + X (net exports).

The LM curve is a little harder to understand, but an easy way to think about it is that i (the interest rate) is the price of holding on to money. That is, we would all rather have money in our checking accounts, readily accessible (holding money), but if i (the price of money) is high enough, we'll let someone else (mutual funds, banks, etc.) hold our money and pay us for it. 
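To make the framework concrete, here is a minimal sketch of a textbook linear IS-LM system. All parameter values are illustrative, not calibrated to any real data: consumption is C = a + b(Y − T), investment is I = c − d·r (falling as rates rise), and money demand kY − h·r equals the real money supply M/P.

```python
# Minimal linear IS-LM sketch (illustrative parameters, not calibrated to data).
# IS: Y = a + b*(Y - T) + c - d*r + G   (goods-market equilibrium)
# LM: M/P = k*Y - h*r                   (money demand = real money supply)

def is_lm_equilibrium(a, b, c, d, G, T, M, P, k, h):
    """Solve the two linear curves jointly for output Y and the interest rate r."""
    # From LM: r = (k*Y - M/P) / h. Substitute into IS and solve for Y.
    Y = (a - b*T + c + G + d*(M/P)/h) / (1 - b + d*k/h)
    r = (k*Y - M/P) / h
    return Y, r

# A baseline economy.
Y0, r0 = is_lm_equilibrium(a=200, b=0.75, c=300, d=20, G=400, T=300,
                           M=1000, P=2, k=0.25, h=40)
# A drop in investment demand (lower c) shifts IS left: output and
# interest rates both fall, matching the oil-glut story in the text.
Y1, r1 = is_lm_equilibrium(a=200, b=0.75, c=250, d=20, G=400, T=300,
                           M=1000, P=2, k=0.25, h=40)
assert Y1 < Y0 and r1 < r0
```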

The IS curve is pretty relevant today given the collapsing oil prices. Exogenous variables (not just the price) have resulted in a glut of oil supply. Conventional wisdom would tell us that lower oil prices would boost production of items where oil is an ingredient since input prices would go down. As The Economist mentions, however, that doesn't seem to be the case at the moment. 

"Cheaper fuel should stimulate global economic growth. Industries that use oil as an input are more profitable. The benefits to consuming nations typically outweigh the costs to producing ones. But so far in 2016 a 28% lurch downwards in oil prices has coincided with turmoil in global stock markets. It is as if the markets are challenging long-held assumptions about the economic benefits of low energy prices, or asserting that global economic growth is so anemic that an oil glut will do little to help."

Let's go back to our IS-LM framework: oil prices are low, so investment in oil should decline, shifting the IS curve to the left (from IS to ISd). If investment in oil is lower, producers will be reluctant to produce oil (and many may not be able to, given their much lower income). 
As The Economist says above, it could have been that although the I part of Y = C + I + G + X would decline, an increase in production of goods would boost the C and X portions of the equation, leaving the IS curve unchanged or even shifting it higher (to the right, to ISi). That doesn't seem to be the case this time, however. It appears that C and X, for whatever reason, need more than lower input prices to boost them. As an example, you'd like to buy a Lucite table that cost $500 last year and now costs $400 (consumption), but you still don't think it's a good enough deal to purchase (perhaps wages haven't risen enough, prices of other goods have risen more than you expected, your taxes have increased, etc.). 

The impact of a lower IS curve is that GDP on the X axis (output, income, yield, or whichever other measure you'd like to use) is lower and the economy is producing less. What can be done to counteract that? 


Well, usually, the Fed could just shift the LM curve to the right by expanding the money supply. That is, the Fed could increase the money supply by buying bonds (fewer bonds, more money in the market), which is the mechanism used to achieve what everyone refers to as "reducing interest rates". With lower i (the interest rate, the price of money), instead of letting someone else hold your money (putting it in a savings account, a mutual fund, etc.), you hold it yourself as a store of value. 

As we know, however, interest rates are pretty close to the lowest they can be. Although rates can be reduced a little more, even into negative territory, and the Fed can keep easing through measures like quantitative easing, there's likely a limit to how low rates can go before the economy becomes topsy-turvy.

Paul Krugman at The New York Times says that at a rate of 0, the LM curve should be flat, like this:
When rates are at 0, people have no incentive to buy bonds or put money in a savings account; they'd rather hold cash. Changes in the money supply have no impact, which is known as a liquidity trap.

"And IS-LM makes some predictions about what happens in the liquidity trap. Budget deficits shift IS to the right; in the liquidity trap that has no effect on the interest rate. Increases in the money supply do nothing at all," Paul Krugman.

So if the LM curve can't be changed to counteract the lower IS curve and if lower oil prices aren't doing much to spur consumer demand, then we're left with the lower IS curve, meaning lower output and rates.

What does that mean for the rest of the economy? A lower IS curve is further away from full employment, so we'll likely see higher unemployment, primarily in the energy sector. But rates will also remain low.


Low interest rates lead many investors to continue searching for higher yields elsewhere. That is, investors will take on riskier endeavors because risk-free investments (Treasuries) or low-risk investments (investment-grade bonds) won't yield much. This impacts the technology market and Silicon Valley enterprises too. The lower the yields, the more money investors will be willing to put into startups and other technology ventures that have the possibility of an outsized return. And we've seen this phenomenon for a while now, since rates have been close to zero.

The demand side of the equation for technology and venture startups may be impacted negatively if their products were meant to be sold to firms tied to oil production. At this point, however, it doesn't seem likely that we'll see a major revenue slowdown for tech companies just because of lower oil prices. It also doesn't seem like lower oil prices are having a recessionary impact on the rest of the economy, since they haven't hit industrial production, real GDP, real income, or wholesale and retail sales.

What about the "tech bubble" then? And do the weakness in the public markets portend an impending disaster? Should we expect a deleterious impact from a repeat of the 2000's tech crash? Or is this time different? I will address these issues in the context of a technological revolution in Part II of this post.