My guest this episode is Jeff Yan, founder of Chameleon Trading.
Jeff began his career in high frequency trading at Hudson River Trading but soon moved over to the world of crypto where he built one of the largest market making firms in the space.
After Jeff gets me up to speed with the basics of high frequency market making, we dive into some of the more esoteric components, particularly with respect to centralized crypto exchanges. These include infrastructure quirks, adversarial algorithms, and why HFT P&L might actually be predictive of medium-term price movement.
In the back half of the conversation, Jeff explains the problems he sees with current decentralized exchanges and introduces Hyperliquid, a new decentralized trading platform built on its own blockchain to provide performant order book execution for perpetual futures.
Please enjoy my conversation with Jeff Yan.
Transcript
Corey Hoffstein 00:00
All right. 3, 2, 1. Let’s jam. Hello and welcome everyone. I’m Corey Hoffstein. And this is Flirting with Models, the podcast that pulls back the curtain to discover the human factor behind the quantitative strategy.
Narrator 00:20
Corey Hoffstein is the co-founder and Chief Investment Officer of Newfound Research. Due to industry regulations, he will not discuss any of Newfound Research’s funds on this podcast. All opinions expressed by podcast participants are solely their own opinion and do not reflect the opinion of Newfound Research. This podcast is for informational purposes only and should not be relied upon as a basis for investment decisions. Clients of Newfound Research may maintain positions in securities discussed in this podcast. For more information visit thinknewfound.com.
Corey Hoffstein 00:51
My guest this episode is Jeff Yan, founder of Chameleon Trading. Jeff began his career in high frequency trading at Hudson River Trading, but soon moved over to the world of crypto, where he built one of the largest market making firms in the space. After Jeff gets me up to speed with the basics of high frequency market making, we dive into some of the more esoteric components, particularly with respect to centralized crypto exchanges. These include infrastructure quirks, adversarial algorithms, and why HFT pnl might actually be predictive of medium term price movements. In the back half of the conversation, Jeff explains the problems he sees with current decentralized exchanges and introduces Hyperliquid, a new decentralized trading platform built on its own blockchain to provide performant order book execution for perpetual futures. Please enjoy my conversation with Jeff Yan.
Jeff, welcome to the show. Hitting it off early in the season with some crypto here. We’re gonna see how the audience responds. But I think this is gonna be a fun one, because we’re talking high frequency trading, maybe even digging a little into the secrets of high frequency trading. And then in the back half of the conversation, we’re going to be talking about protocol design, which is a whole different new area of quant thinking. So really excited to have you on. Thank you for joining me.
Jeff Yan 02:15
Great to be on, Corey. Thanks for having me.
Corey Hoffstein 02:17
Let’s start with the typical stuff for guests who maybe don’t know who you are, or haven’t caught on to your quickly growing Twitter stream. Let’s get into your background a bit.
Jeff Yan 02:27
My story probably sounds pretty similar for a lot of HFT folks out there. I graduated from Harvard, studied computer science and math. Went straight to Hudson River Trading, which is one of the bigger market makers in tradfi. I worked on US equities. I really had a great time there. It was the perfect environment. When I joined, it was about 150 people; I know now it’s a lot bigger. Can’t say enough positive things. Learned so much, got to work on the most interesting problems, perfect mix of engineering and math. This is like paradise for a quant. But 2018 came along and with it the crypto mania of building smart contracts on Ethereum. Read the yellow paper, and it just clicked. I knew that this was going to be the future.
I left to build sort of an L2 exchange protocol. It was in the format of a prediction market because back then, Augur had found a good product market fit. But we were interested in the exchange technology. We raised money, moved out to San Francisco to build this thing, built a team, but shut it down after a few months because we realized it was not the right time. A lot of regulatory uncertainty, and we really couldn’t find users. People barely knew how smart contracts worked, were interested in speculating on tokens and not really defi at the time. So, shut that down, did a little soul searching, traveled and ultimately decided that I wanted to go back into trading because the day to day was a lot more interesting than struggling to find product market fit. I was contemplating going back into the industry and joining some company but thought maybe I would, since I knew all this about crypto from building, try to trade crypto first.
It started as a bit of a side project, but I quickly saw the opportunity there and scaled it up. Really way faster than I thought was possible. I was surprised by how inefficient the markets were. Been heads down building that for maybe at this point almost three years. Seriously started in early 2020, which was great timing. Kind of got to grow with the market. So as the market 10x-ed even like 100x-ed in volume, we kind of grew with it. Ultimately, we ended up being one of the biggest centralized exchange market makers.
About a year ago we started looking at defi trading, and it was really reminiscent of when we started centralized exchange trading in that there were a ton of inefficiencies, but in this case, the protocols themselves were quite poorly designed. And we also saw this demand for a truly decentralized product. After the whole FTX thing, people were finally catching on to the not your keys, not your coins, counterparty risk, that kind of stuff. Basically, it clicked for us that this was now the right time to build a decentralized exchange. We’ve been sort of at that for maybe a quarter, a little more than that now. The HFT stuff is still running in the back, autopilot maintenance mode. But we’re really focused and excited about building this DEX at this point.
Corey Hoffstein 05:20
Well, a lot to unpack there and stuff. We’ll get into the conversation. I’m excited to talk about learnings you had in the high frequency space and how that ultimately has influenced your design of this DEX. But I want to start with the basics of high frequency. When I talk to people who are in high frequency trading, it seems like one of the biggest decisions you have to make is this concept of making versus taking. It seems to be a very clear line in the sand of very differentiated strategies, and what it takes to succeed with each of those types of strategies. Was hoping maybe you could explain the differences and how those differences have implications on the choice of strategy design, infrastructure need, and even the research process.
Jeff Yan 06:06
I do think this is the first big decision that you need to make when you’re starting HFT. High level, I will say there are more similarities than there are differences. At the end of the day, you are doing this very infrastructure intensive, latency sensitive trading, but in many regards, they are opposites as well. The first big difference is that I would say making is more infrastructure heavy, and taking is more stats / math model heavy. I think the best way to decide between the two is just what works for you, what sort of research you are inclined towards. Maybe as a concrete example, when you’re market making, you’re kind of at the whim of people coming in and picking you off. You can’t really afford to slip up. You often have large implicit exposure by being levered up and having open orders, over many instruments, many price levels. And if you screw it up, that heavy tail is really going to be painful. Whereas, you can have a strategy that takes once a day, and it can be a really good strategy. And it can be high frequency. It could be news based. It could be some sort of niche signal. But you have that luxury, and because of it, you can be much smarter. If your thing is slow or doesn’t trigger most of the time, that’s okay. As long as when you do trade, it’s good. But with making, if you’re doing well 99% of the time, and 1% of the time you’re a little slow and can’t keep up with the data, you’re going to lose enough money during that 1% of the time that it negates your pnl from the other 99%. That’s the fundamental infrastructure versus model difference between the two.
Corey Hoffstein 07:42
Is it too simplistic to say that with taking, you expect the market to move in the direction in which you’re trading, because you’re willing to cross the bid ask spread and so you expect the market to keep moving? Versus making, where someone’s crossing the spread to meet you, and you’re hoping the market doesn’t move so that you can then sell across the spread again? Is that a fair difference? Like one is hoping the market almost stays flat in the timeframe of the trade, and one is hoping there’s a directional move?
Jeff Yan 08:12
Yes, exactly. With HFT, we like to mark out to pretty short time horizons, but this is kind of true in general, no matter what frequency you’re trading at. The instant you take, you’re actually suffering a loss. So you’re marking to mid – you’re instantly suffering a loss, and you’re only going to be profitable on average, if like you said, over whatever predictive horizon you have, the price on average compensates for that immediate loss plus fees. Whereas making, the initial pnl is the highest it will ever be. You just made the spread. But you’re banking on that not being, on average, adverse selection. And so necessarily, when you make, if you sort of average out markouts of your trades, that pnl will decay over time. But your hope is that it doesn’t decay past zero.
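To make the markout idea concrete, here is a minimal sketch of averaging markouts over several horizons. It assumes a simple in-memory list of fills and mid prices; all names and numbers are hypothetical. A taker's average curve should start negative and recover, while a maker's starts positive and decays.

```python
# Hypothetical sketch of markout analysis: average pnl of fills marked
# against the mid price at increasing horizons after the trade.
from bisect import bisect_right

def markouts(fills, mids, horizons_ms):
    """fills: list of (ts_ms, side, price, size); side is +1 buy, -1 sell.
    mids: time-sorted list of (ts_ms, mid). Returns avg markout per horizon."""
    def mid_at(ts):
        i = bisect_right(mids, (ts, float("inf"))) - 1
        return mids[max(i, 0)][1]

    result = {}
    for h in horizons_ms:
        pnls = [side * (mid_at(ts + h) - price) * size
                for ts, side, price, size in fills]
        result[h] = sum(pnls) / len(pnls) if pnls else 0.0
    return result

# Example: a maker buy at 99 with mid 100 looks great at t=0,
# then decays if the market trends against the fill (adverse selection).
fills = [(0, +1, 99.0, 1.0)]
mids = [(0, 100.0), (1000, 99.5), (5000, 99.0)]
print(markouts(fills, mids, [0, 1000, 5000]))  # {0: 1.0, 1000: 0.5, 5000: 0.0}
```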
Corey Hoffstein 08:58
In our pre-call, you mentioned that one of the most difficult aspects of scaling up your business was actually not on the research side, but on the infrastructure side. I saw on Twitter you said something to the effect of knowing how to normalize data isn’t going to print you money. But without it, you definitely won’t. Was hoping you could talk about maybe some of the biggest lessons you learned in the infrastructure side of the equation and why you think it’s so important.
Jeff Yan 09:25
Your question sort of has two parts, and they’re pretty tied. There’s the trading infrastructure and then there’s the research infrastructure. Data cleaning falls under research; it’s more about statistical practices. Whereas trading infrastructure is pretty unique to high frequency trading. Both are super important. The stats stuff I guess is more well known, but probably worth emphasizing that the noise relative to signal in high frequency trading is orders of magnitude higher than in most things people study in academia. Filtering outliers is exponentially more important. If you don’t think about this stuff correctly, if you really just ignore all the outliers, then your model is going to be screwed over when the sort of Black Swan tail events do happen. But if you don’t filter them or normalize them correctly, then the outliers are going to basically determine your entire model. Concretely, I think depending on what you’re doing, using things like percentiles can be a lot more robust than using the actual values. If you are using actual values, then are you throwing out outliers? Are you clipping outliers? These kinds of things have big effects.
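As a rough illustration of the choices mentioned here, percentiles versus raw values and clipping versus dropping, a hypothetical sketch follows; the cutoffs and toy data are arbitrary assumptions.

```python
# Hypothetical sketch: three ways to tame a heavy-tailed feature before
# feeding it to a model. The cutoffs are arbitrary, for illustration only.
import numpy as np

def winsorize(x, lo_pct=1, hi_pct=99):
    """Clip values to chosen percentiles instead of dropping them."""
    lo, hi = np.percentile(x, [lo_pct, hi_pct])
    return np.clip(x, lo, hi)

def drop_outliers(x, lo_pct=1, hi_pct=99):
    """Throw outliers away entirely (loses the tail information)."""
    lo, hi = np.percentile(x, [lo_pct, hi_pct])
    return x[(x >= lo) & (x <= hi)]

def to_percentiles(x):
    """Replace raw values with their rank percentile, a more robust scale."""
    ranks = x.argsort().argsort()
    return ranks / (len(x) - 1)

returns = np.random.standard_t(df=2, size=10_000)  # fat-tailed toy data
print(returns.std(), winsorize(returns).std(), to_percentiles(returns).std())
```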
On the infrastructure side, I think the biggest lesson we learned sounds kind of silly, but you really need to learn this firsthand. You need to look at the data. You might think you’re super smart, that you have this great pipeline that will clean the data and give you the inputs you want to your models. But I’d say it’s impossible to spend too much time looking at data; you’re always gonna learn something new. When starting out, write down all the raw stuff you’re getting from the exchanges and just comb through it. Look for outliers, sanity check things. I think a pretty crazy example of this is that at some point, some exchange had a bug on their feed machines and flipped the price and size fields. I forget if it was the book stream or the trade stream. But regardless, it completely messed up our internal accounting code. Imagine Bitcoin’s price and size being flipped. So 20k 0.1 being recorded as 0.1 20k. Threw a wrench in everything. I think a lot of firms probably shut down immediately or quickly recovered and switched to an alternative data source. But things like that, you really want to be close to the raw data. Because no matter what logic you write, it’s not going to be perfectly robust.
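A hypothetical sanity check in the spirit of the flipped price/size story: validate each tick against rough plausibility bounds before it reaches accounting or strategy code. The bounds and field names here are made up.

```python
# Hypothetical sketch of a raw-feed sanity check. Bounds are made up; the
# point is to catch things like price and size fields being swapped.
def plausible(tick, last_good_price, max_jump=0.10, max_size=1_000.0):
    price, size = tick["price"], tick["size"]
    if price <= 0 or size <= 0:
        return False
    if abs(price / last_good_price - 1) > max_jump:  # implausible price move
        return False
    if size > max_size:                              # implausibly large size
        return False
    return True

# A BTC trade of 0.1 at 20k reported with the fields swapped fails both checks.
tick = {"price": 0.1, "size": 20_000.0}
print(plausible(tick, last_good_price=20_000.0))  # False: quarantine it
```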
I guess another tip is to really focus on timestamps. Exchanges will often give you a bunch of timestamps with their data. And it’s kind of up to you to figure out exactly what they mean with each timestamp. This is important for understanding the black box in terms of your latency – what are you measuring exactly? And seeing if you’re keeping up, for example, or if they’re sending you garbage. Timestamps are a great way to distinguish between these different cases.
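To illustrate the timestamp point, a minimal sketch that splits end-to-end delay into an exchange-side component and a network-plus-local component. Field names vary by venue and are assumptions here, and it assumes clocks are reasonably synced.

```python
# Hypothetical sketch: decompose latency using the timestamps an exchange
# attaches to its feed. Field names are assumptions; clock sync is assumed.
import time

def latency_breakdown(msg):
    recv_ms = time.time() * 1000.0                      # our local receive time
    exch_internal = msg["sent_ms"] - msg["event_ms"]    # exchange processing lag
    wire_and_stack = recv_ms - msg["sent_ms"]           # network + our own stack
    return exch_internal, wire_and_stack

# If exch_internal blows up, the exchange is struggling or sending stale data;
# if wire_and_stack blows up, our own pipeline isn't keeping up.
msg = {"event_ms": time.time() * 1000 - 12, "sent_ms": time.time() * 1000 - 7}
print(latency_breakdown(msg))
```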
Corey Hoffstein 12:19
One of the things that I see discussed a lot among high frequency traders is this concept of fair. I know it’s something you’ve written about a few times – talking about making sure someone’s trading around fair. What is fair? How do you measure it? Why is it an important concept?
Jeff Yan 12:37
I think fair means something slightly different for every trading firm. It kind of speaks to the style of trading they’re doing. But at a high level, what’s in common is that fair sort of incorporates your modeling into a predicted price. It’s a really useful abstraction because it splits this problem of writing a profitable strategy into two, I would say comparably difficult pieces, depending on your strategy. And that is the predicting the price piece and the executing your orders piece. I guess this kind of goes back to the making versus taking question you asked earlier, but making is heavier on the execution side, whereas taking is heavier on the modeling side. Basically for taking, you’re spending almost all of your time thinking about this fair price. And I think what goes into it is really up to you as a trader. What kinds of data do you think you have an edge processing over the market? Where are the markets inefficient? Because there doesn’t have to be one fair price; you might have multiple fairs as inputs to this more machine learning style trading. You might have like a one second prediction and a one day prediction, and your execution strategy may use these in different ways. The optimization problem can be different in pnl space, but I think when starting out, you can get very far by just doing a clean cut and saying, “All right, I’m going to put my work into first just coming up with a number, which is what I think I will trade around. I’ll quote around this; I’ll use this number to cross the spread. This will just be like my oracle, and then working around it. Like okay, I have this oracle price, it’s given to me, what’s the best way I can execute around it.”
Corey Hoffstein 14:19
And so, could that be something as simple as looking at one exchange, and you might say, just throwing this example out there, “Almost all the liquidity is at Binance, I’m just going to assume the price at Binance is fair.” And then if other exchanges are lagging that by milliseconds or seconds, you might be using Binance as fair and thinking, “Okay, I can cross the spread at OKX or something like that because you’re expecting this catchup across a different exchange.” And then there are other maybe statistical ways of estimating fair, where you’re not taking truth from one exchange, but you’re trying to use other market or book related signals to come up with fair. Is that a fair explanation or idea?
Jeff Yan 15:01
Yeah, that’s the right idea. I think using the most liquid venue as the fair is a really good first approximation. And I think before I started crypto, I think way back in the day, this was probably the best way to go about it. Because there were 10% arbitrages between the exchanges. The problem was like, how do you move money between them, not like how do you predict the price. And so this would work super well.
These days, there’s been an interesting trajectory, where there’s been splitting, splintering of liquidity, and then some sort of consolidation towards Binance, especially recently. The thing you mentioned is probably a very good place to start – just use Binance as fair. That being said, I think you need to be careful when just using an external source as a fair. Yeah, maybe OKX is lagging a couple milliseconds. And maybe it’s not, it’s not gonna be this simple these days. But let’s say there was just an opportunity to close the arb each time Binance moved, because nobody was lifting orders on OKX. So you do that, and it’ll work most of the time, but then it’s crypto, so OKX maybe goes into wallet maintenance. And it’s no longer possible to withdraw or deposit this coin, at least between Binance and OKX. And now suddenly, you’ll see the arb can’t be closed, and the markets diverge. And if your fair is just Binance price, then you might get screwed.
There’s always subtlety, even in this super simple example. It’s never going to be as simple as OK, here’s a number that I pulled from some feed, and that’s my fair, but it’s certainly a good first approximation.
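As a toy version of the "most liquid venue as fair" idea, with the caveat about markets that stop converging, here is a hypothetical sketch; none of the thresholds are real.

```python
# Hypothetical sketch: use the most liquid venue's mid as fair, but stop
# trusting it if the local market has diverged too far for too long
# (e.g. wallet maintenance breaks the arb that normally closes the gap).
def fair_price(binance_mid, local_mid, seconds_diverged,
               max_bps=30, max_seconds=60):
    gap_bps = abs(binance_mid / local_mid - 1) * 10_000
    if gap_bps > max_bps and seconds_diverged > max_seconds:
        return None  # arb looks broken; don't quote around a stale "fair"
    return binance_mid

print(fair_price(20_000.0, 20_010.0, seconds_diverged=0))  # 20000.0
```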
Corey Hoffstein 16:25
That leads nicely into where I wanted to go next, which was around the idiosyncrasies of crypto exchanges. Historically and reputationally, they are notoriously unreliable from a technology standpoint. You gave the example earlier of the dirty data, where price and volume got swapped. There are broken APIs and poor documentation; not all the API endpoints are always documented, and some of them are hidden. Sometimes you can have different parameters that no one actually knows about. I think you had a great Twitter example recently about being able to skip the risk engine or have a risk engine run in parallel. Stuff that is completely undocumented. Those are interesting examples of orthogonal alpha that doesn’t necessarily have to do with price prediction around fair. How much alpha is there in things like simply understanding the API better than your competitors or measuring the latency of endpoints correctly? Versus say, more traditional statistical alphas, where you’re trying to use the order book to guess pressure and direction?
Jeff Yan 17:32
The tweet you’re referring to, I think, was one of my more popular ones.
Corey Hoffstein 17:36
Which I still don’t know whether it was an April Fool’s joke, by the way.
Jeff Yan 17:39
I guess April Fool’s is past, so I’m allowed to say it was a joke, but it’s closer to reality than people think. I think the real joke is that it’s actually kind of true. I’ve been meaning to do a follow up on that. That’s a good reminder, I should go tweet that after this podcast.
But I think your intuition is good. I think when you work at a quant company, you start to develop preferences. Or maybe you come in with a preference of what you want to work on. Like, “Oh yeah, I studied math, so I’m just going to make cool machine learning models, find signals, and generate alpha. That’s what matters, because that’s the hardest thing to do.” And I think that kind of attitude maybe works at a big company, because people are so specialized. But if you’re trying to run HFT on your own, then you’re not going to get anywhere with that attitude.
The sort of dirty work that you’re mentioning – understanding the APIs well, seeing what’s missing in the documentation, measuring latencies – this kind of stuff is super important. My mental model for high frequency trading, really just like things in life, is that it’s a product of many numbers. As a quant, you still want to be quantitative about it. It’s not additive; your efforts into different bins are additive, and those bins might make your model a little bit better. Maybe you spend 10x the time and make 10x the delta there. But at the end of the day, it’s the product. So it’s infrastructure times model, for example.
As a concrete example, if the infrastructure is at one, and your modeling is at 10, then, where are you going to spend your unit of work? Obviously, you should always be working on the thing that is smallest. And the tough thing about HFT is that it’s kind of hard to know what these things are in the formula that you’re multiplying together. When we started, we thought it would be modeling work. But it’s important to have this meta analysis of like, “Wait, am I actually doing the most important things?” And you quickly realize that it’s not obvious, and there’s a lot of edge in just knowing what to work on. The dirty work is super important; it’s always about getting the lowest hanging fruit, the 80/20 principle. Especially when things are going well, it’s easy to fall into the trap of like, “All right, I got the basics down. Let me go do some cool machine learning research and do the innovative stuff.” We fell into this trap as well. Not that there isn’t any alpha out there, but it’s a lot of work for diminishing returns. When you’re on a small team and there are still a lot of opportunities and your strategy is doing well, it’s always good to actually ask yourself and be honest. Be convinced by what the data tells you.
Corey Hoffstein 20:12
For those who are keen on starting out in high frequency trading and crypto, you’ve recommended that they either just go make markets on Binance and focus on alpha generation, which I sort of interpreted as taking not making, or picking some long tail exchange and trying to figure out the infrastructure quirks around that long tail exchange and that’s a good source of edge. Can you elaborate on why you think these are the two best avenues and how the approaches differ?
Jeff Yan 20:45
It’s a bit like the bell curve meme, and you just don’t want to be that guy in the middle. In this case, if you view the bell curve as the exchanges, then the big problem is the middle exchanges, maybe say rank two through seven or something. You have a lot less volume than Binance, but about the same level of competitiveness and toxic flow. The flow can be worse than on Binance, because on Binance at least, as we know, the reason their volume is so high is that they have a complete stranglehold over retail volume. I don’t know how they do it, but they do. The numbers speak for themselves. On the middle exchanges you don’t get that padding, that nice mix of toxic and retail flow. The big HFT firms have all onboarded to the top, I don’t know how many, let’s say top 15. They’ve definitely onboarded. They’re going to be trading full capacity, and you’re not going to get much juice there. So if you’re willing to challenge yourself to do those super scalable, large centralized exchange trading strategies, then just start with Binance. And it will generalize as well as it does. And there’s no point in starting in the middle.
But the other thing you mentioned is, again, you can also be way out on the left of the bell curve. There’s no shame in finding a super small opportunity, something that is overlooked by the big players or just doesn’t have enough capacity for it to be worth their time. I think niche infrastructure is a super good example of this. Exchanges are written by people. Just like with DEXs, the protocol designs can be just outright dumb. You can see this to a lesser extent in how a lot of smaller centralized exchanges write their tech. And if you’re the only one who has this insight into how that works, then that can be a strategy.
Infrastructure is actually often a big source for alpha. And there’s not such a clean line between the two. And in this case, the problem – it’s not really a problem – but you might be concerned that this doesn’t generalize. Like, “OK, I understand how the tech on this random small exchange works, but that’s not gonna help me on Binance.” And yes, that’s true, but I think people undervalue just having something that works live. That should be everyone’s number one priority, and it really shouldn’t matter how small. I guess there’s sort of a floor on how small it can be, unless you’re looking at super weird things. If you’re trading some amount of volume, you’re gonna make some money. And if that is high sharpe and robust to tail events, then you’ve got something that 99% of people don’t. And yeah, maybe the exact strategy doesn’t generalize, but in my experience, you get the reps in for the full research loop. By putting things into production, you learned so much doing that, that then even just scrapping it and going for Binance at that point will be orders of magnitude easier. And also, often little things like maybe the tech isn’t exactly the same on other exchanges, but you start to notice these principles, and you start to get this fountain of, or this endless stream of ideas from other things that already work. And those types of ideas tend to be way better than things you pluck out of thin air. I think there’s a lot of value to both approaches. I’d say if you’re not sure, then start with the small stuff, often, and then start with the big stuff. Honestly, just try both.
Corey Hoffstein 23:55
You use this phrase, toxic flow. Can you define what toxic flow is for people who have never heard that phrase before?
Jeff Yan 24:03
It’s basically informed flow. A mental model for how I saw crypto grow up was, when I came in, I was already a little bit late, so I can only imagine projecting back in time what it looked like. But even when I came in, it was quite a lot of retail. And there were big players playing, but the balance was still that there was not enough liquidity for what retail was demanding. Retail flow is what you want to target. The super obvious things, like you just write generic maker strategies that post liquidity, like we talked about earlier with making versus taking. If retail comes in and trades against your making orders, you’re gonna keep most of that spread that they crossed. You just do that, and it makes money. That’s a strong sign that flow is by and large retail. But over time, people notice this, they put up their maker strategies, and when there’s more liquidity from the makers, it suddenly makes sense for people to run taker strategies. Spreads get smaller as people compete to capture this really good retail flow, and then the takers suddenly come in and start picking off bad maker orders. This is just how markets evolve.
There’s a lot of value that the takers provide as well. It’s not clear that the maker orders are all market makers and that taker orders are retail. It’s a bit of a mix. And so the best market is just, in my opinion, one where people are free to trade. But from the makers’ perspective, these takers are super annoying. They used to have this super easy strategy. You just put orders out and every time you got hit, you made a little bit of money. But all of a sudden, this like 1% tail of trades you’re getting, you’re losing 10 basis points on. And that outweighs the one basis point you’ve collected from all the retail, something like that – bit of a mental model. The toxic flow is basically these takers, and it kind of depends on who you’re asking. Whether the flow is toxic depends on the strategy you’re running. But there’s this general split between retail and sophisticated flow.
Corey Hoffstein 25:51
Well, talking about sophisticated flow, what I’m sure any high frequency trader would consider toxic is the idea of an adversarial algorithm that tricks your algorithm. So crypto was, and in many ways still is, just the Wild West, and there is a degree of explicit market manipulation that would likely be considered illegal in most traditional markets. And it will be used against you to trick and exploit any of your automated high frequency trading strategies. I would love to know how much you ran across this kind of adversarial behavior. Maybe you can share an example of an experience you had in the wild, and having run high frequency trading strategies, how you think about protecting yourself from it.
Jeff Yan 26:42
It is indeed the Wild West. I think the positive way to look at crypto is that it’s also an experiment. Your perspective matters a lot. Regulators will obviously latch on to this, “Oh, they don’t follow our carefully researched securities laws.” But defi proponents will say, “These securities laws really are probably the result of a lot of lobbying and human judgment, and maybe crypto’s an opportunity to look at a more libertarian experiment. What do we actually need to regulate?” I don’t know; the truth is probably somewhere in between those two. I’m not a regulator or a policymaker, but those are my philosophical thoughts on it.
Certainly, from a practical perspective, if you don’t pay attention to these manipulative extractive strategies, then you’re gonna have a hard time doing crypto HFT. It’s also not that the exchanges don’t want to regulate it, but it’s not clear which bodies regulate which exchanges. It’s not clear to me, and I think a lot of these laws are murky. Maybe that’s a bit of why this happened. And it’s just hard to run an exchange, so they’ve got other things to worry about.
For concrete examples, I think spoofing is a really big one. By spoofing, I don’t really know the technical definition. I think there are many terms in US Securities and Futures laws. But I mean, broadly, when I say spoofing, you see it very clearly on the order books, and the resulting price graphs. People place these massive orders that they clearly don’t want to get executed, in some sense. If they were executed, they would be unhappy. It’s hard to prove intent, but it’s very clear these orders are not to get filled. They are to give the impression that there is demand on that side of the book. And as a result, if there’s some algo that is looking at the liquidity on the order book as a signal for where the price will go, then the spoofer is hopefully tricking those algos, maybe into placing orders on the side that they want. And then depending on what trickery is accomplished, the spoofing algorithm can either place making orders that get aggressed into, or even aggress against passive orders that are placed mistakenly, I guess. That’s a super common example.
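To make the mechanism concrete, here is a toy book-imbalance signal of the kind a spoofer targets, plus a crude mitigation that discounts suspiciously large orders resting away from the touch. All thresholds are invented and this is only a sketch of the idea, not anyone's production logic.

```python
# Toy sketch: a naive book-imbalance signal and a crude spoof filter that
# down-weights unusually large orders resting away from the best price.
def imbalance(bids, asks, typical_size, mid, max_dist_bps=10):
    """bids/asks: lists of (price, size). Returns a value in [-1, 1]."""
    def weight(price, size):
        far = abs(price / mid - 1) * 10_000 > max_dist_bps
        suspicious = size > 20 * typical_size
        return min(size, typical_size) if (far and suspicious) else size

    b = sum(weight(p, s) for p, s in bids)
    a = sum(weight(p, s) for p, s in asks)
    return (b - a) / (b + a) if b + a else 0.0

bids = [(99.9, 1.0), (99.5, 500.0)]   # 500 lots parked below the touch
asks = [(100.1, 1.2)]
# Unfiltered, the 500-lot bid would scream "buy pressure"; filtered, it barely moves the signal.
print(imbalance(bids, asks, typical_size=1.0, mid=100.0))
```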
As for another example, I don’t know if this is really market manipulation, but there are certainly pump and dump circles. For fun, I joined a few, never participated, just was a lurker, and man, these things are quite something. I think they’ve been cleaned up a lot recently, which is cool. But back in the day, they would generate crazy volume. It was basically some insider announcing some coin. And then all this dumb retail, I have no idea where they find these people, but they manage to convince a lot of people to just buy at once. And then the insiders sell into that. And as an HFT, you might think that’s okay, but it’s actually surprisingly tricky to navigate because there’s such a strong reversion effect that you can kind of be tricked.
Those are two concrete examples. I guess in terms of dealing with them, it goes back to the earlier question you had about infrastructure versus model versus strategy, like what do you work on? And I view this as another category of miscellaneous, random stuff that comes up that you just need to do.
Corey Hoffstein 29:49
Risk management perhaps.
Jeff Yan 29:50
Risk management, yes, special scenarios. I don’t know. If you don’t do this, and you do everything else perfectly, depending on the regime we’re in and what you’re trading, this could make or break your average pnl. When we first saw this, it was pretty scary, because I guess we were lucky when we first started. Maybe the symbols we were trading on initially just were pretty hard to manipulate or people hadn’t gotten around to it yet. Anyway, we just completely did not foresee this problem and naively built in ignorance of it. It got to a point where things were going well, we had this pnl, and then once we fell for these tricks, it was very dramatic. You could lose a day’s worth of pnl in a minute. If you don’t tune your strategies, they will do dumb things. In some sense, automated trading is the dumbest trading, because it’s some simple state machine that has no human discretion. It will just do what it’s programmed to do.
Our approach was just to be pretty practical about it. You could sit back and analyze it or come up with models to predict whether there’s manipulation going on. But one of our big edges, at least, starting out was that we just moved super fast and didn’t really care for the proper way to do things. It was very much grounded in the data. So for us, it was like, “Okay, this is happening; it’s not happening that much. Let’s just shut it off when we lose money in a specific pattern.” And this is something you can code up in like an hour and put in production. That was the 80/20 back then. And yeah, you’re missing out on some opportunities, but it frees up your time to start scaling out and working on the things that actually are like 10x multipliers to your pnl and not really worry about this, maybe this 5% of the time when you’re shut off you’re losing money you could’ve been making or something.
So there’s a bit of a judgment call. There’s a constant trade off. What’s the best thing to be working on? Since then, we’ve had a lot more time to work on things. We do have more complicated models now of predicting these regimes and figuring out what’s going on. Instead of doing these very discrete actions, rather having a continuous adjustment to the strategies. At this point, I’d say we have a pretty good understanding of how these manipulators operate and detecting them. But again, I think for people starting out, the 80/20 principle is super important.
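The 80/20 version of that protection might look like the following hypothetical kill switch: shut the strategy off when pnl drops too fast in a rolling window, rather than trying to model the manipulation directly. The thresholds are placeholders.

```python
# Hypothetical sketch of the 80/20 protection: a rolling-drawdown kill
# switch that pauses the strategy instead of modeling the manipulation.
from collections import deque
import time

class KillSwitch:
    def __init__(self, window_s=60, max_loss=5_000.0):
        self.window_s, self.max_loss = window_s, max_loss
        self.samples = deque()  # (timestamp, cumulative_pnl)

    def update(self, cumulative_pnl, now=None):
        now = time.time() if now is None else now
        self.samples.append((now, cumulative_pnl))
        while self.samples and now - self.samples[0][0] > self.window_s:
            self.samples.popleft()
        peak = max(p for _, p in self.samples)
        return peak - cumulative_pnl > self.max_loss  # True => shut off

ks = KillSwitch()
print(ks.update(0.0, now=0), ks.update(-6_000.0, now=30))  # False True
```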
Corey Hoffstein 32:14
Do you find that that sort of market manipulation, spoofing is more prevalent in sort of the fatter tail of exchanges with the fatter tail of coins? Or is that something that you will still see, even on Binance with Bitcoin and Ethereum?
Jeff Yan 32:28
It’s pretty rare for Bitcoin and Ethereum on any exchange, because there’s just a lot more liquidity. I would say it’s more about the asset and less about the exchange. I’ve seen it on almost all exchanges. People do different things on different exchanges. You can’t tell if they’re different people, but they all follow the same patterns. There’s some sweet spot of liquidity. If there’s really no flow on the token, then maybe it’s not worth doing. But some sort of tail asset that has some volume going on, so you can kind of trick the algos. The algos expect some volume, some amount of trading, but you can kind of trick them to make some bad trades.
Corey Hoffstein 33:03
I’m a believer that the way we see the market is often influenced by the horizon over which we trade it. You as a high frequency trader, I think, probably have a different perspective of the way markets work given your intuition around microstructure versus say someone like me, who operates on a longer time horizon and might focus on more fundamental drivers of returns over the long run. You had a tweet where you said one mental model for markets is a viscous fluid. Shocks to the system play out as damped oscillations in the price discovery process. I thought that was a really interesting idea. I was hoping you could talk about that a little bit and expand upon what you meant with that quote.
Jeff Yan 33:44
I’m a big believer as well in the fundamental understanding of things. It was kind of like the math and physics upbringing I had – if I don’t understand it, then I find it hard to innovate on this sort of black box. I like to just come up with these mental analogies, sort of metaphors for how things work. If it’s a viscous fluid model, maybe the real question is like, “Why does HFT even make money?” And if you ask retail, often they view it as this predatory thing. “Ah, they’re frontrunning us or, I don’t know, hunting our stops or whatever.” But no, I’m not saying that HFT is doing God’s work or anything, but I think that it’s providing a needed service to these markets. In terms of these shocks to the system, a model is like, outside of market structure, you can abstract price moves as these like external factors that are essentially random for our purposes. Maybe somebody just needs to buy a lot and demands that liquidity now. Maybe there’s a news event moving the actual fair value of this token, and so some people are gonna trade that. But its demand kind of just comes out of nowhere and often violently hits the book. It’s a pretty PvP scenario, so there’s a lot of urgency for people to execute. It can be a cycle. Some people might be trading off momentum where trades can trigger other trades. There’s a lot of unstable equilibria. It’ll be like a big shock, and then people come in, and almost have this discussion about what the actual fair is. The first move will be the biggest and then often, maybe they’ll say, “Oh, we overshot.” Someone will come in and trade that mean reversion. Maybe it’s a medium frequency trader, maybe it’s a high frequency trader who just knows, “Oh, five seconds from now the fair price is on average going to revert.” And then someone else might say, “Oh, no, no, this is like a much bigger deal. And we’re gonna start TWAPing until the price hits this 20% increase or something. Elon adding DOGE to Twitter is a real thing. You guys are wrong.” They might go pick off the mean reversion traders. It’s like there’s this big discussion / battle going on between the different actors. But the key characteristic is that the moves get smaller and smaller, right? People are kind of voting with their money and more or less, people get into the positions that they want to get into. And then there’s sort of this dollar weighted averaging going on, and the price settles at the fair. That’s kind of how markets work.
Within all this chaos, in HFT the mandate is to buy low and sell high. You think about that, just like the squiggly line that’s moving up and down all over the place. If HFT on average buys when the squiggly line is low and sells when the squiggly line is high, then the market impact of HFT on average is to smooth out this squiggly line. And that is good for everyone. It makes the price snap to the fair price much faster. And it sort of ensures that it’s as close to fair as possible along its trajectory. The better the HFT is, the more liquidity there is on your market, the more viscous this fluid is. I don’t know how helpful this mental model is. But that was what the tweet was about.
Corey Hoffstein 37:01
If you asked me whether I thought HFT pnl was positively auto correlated, I could probably come up with some arguments as to why it would be. I could see it being regime dependent, I could see it, particularly on the left tail, once you start incurring losses, I could see it simply just being a case where that algo, for whatever reason, was no longer printing. And so once you start to lose money, you would continue to lose money.
You performed an interesting study where you looked at your pnl not as being autocorrelated to itself, but as an input to a predictive model on mid frequency prices of the things you were trading. And if you asked me whether I thought your HFT pnl would be predictive in any way of the prices of the things you were trading, I would say maybe not unless they were taker strategies all in the same direction. I wouldn’t expect it, particularly for a maker strategy, to be predictive. You found there actually was some signal there, that actually your own pnl on the HFT side was a meaningful predictor of mid frequency price movements. Explain that to me.
Jeff Yan 38:11
This was one of our crazy ideas. I think I mentioned earlier that it’s almost always better to work off of something that already works. Your hit rate is gonna be a lot higher. You have this base to scale off of. But we definitely leave room for the one-off crazy explorations, and sometimes they pay off. So this was one of our more successful hobby projects. We didn’t have strong priors going into the study. The motivation mainly was, “Hey, we have more capital than we can deploy to high frequency strategies. We’ve onboarded a ton of exchanges. Those are constant factor scaling. There are diminishing returns because the exchanges get smaller and smaller. And so maybe we can get into mid frequency. That’s the golden goose – sharpe three, sharpe four strategies that have 10x, 100x the capacity of HFT. Sounds great.”
So that was the initial motivation. But we’re generally pretty strong believers in efficient markets. Basically, “Yeah, we have all this edge in HFT. But give us some market data, I don’t know, daily returns, whatever, and ask us to predict daily returns, and we don’t know where to start.” So with that humility, this crazy idea was a way to kind of get a foothold in medium frequency trading. Often if you can just get some data source that is useful that people don’t have, that itself can be a trading strategy. And we’re not about to send satellites to go look at parking lots or whatever the classic examples are, but what data do we have? Well, we have our HFT pnl, and obviously, that’s private to us and it’s not random. You just look at graphs, and it’s very interesting. If you think about it, what is it correlated with, going back to the discussion about toxic versus retail flow? It’s pretty correlated with retail flow. I guess your priors in general are if you can segment some actors in the market and figure out what they’re doing, then that’s a very good signal. Priors are that that thing is predictive of something. The direction is less obvious. We kind of went in with, “Okay, we have this thing, it’s correlated to this other thing, the retail flow. Yeah, that’s probably correlated with the price. Why don’t we just work through it and analyze it?”
So that was the motivation. We did this analysis; we basically regressed various pnl-based features, delta of the pnl, the derivative of the pnl, against a wide range of mid frequency price movements. We were also just not sure how mid frequency work is done. So we cast a wide net, “Okay, like maybe it’s predictive of 5 minute returns,” and exponentially scaled it out to a few hours. That was the whole study. We happen to have this data because we have a dashboard, and it reports all the pnls of all of our strategies. We could also slice it on exchange, on strategy, on symbol. So we did all these things. It’s really noisy. I think there are techniques to deal with this. Obviously, we wouldn’t regress one coin’s pnl and try to predict that coin’s mid frequency movements. I think that’s just way too much noise. We basically just did an 80/20 on this. We did some bucketing, some binning, following our priors to not overfit too much.
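A stripped-down, hypothetical version of that kind of study: regress a pnl-based feature against forward returns over exponentially spaced horizons. The data here is synthetic and the column names are made up; it only shows the shape of the exercise.

```python
# Hypothetical sketch of the study: regress HFT pnl changes against
# forward returns at exponentially spaced horizons. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
pnl_delta = rng.normal(size=n)                       # stand-in pnl feature
prices = 100 * np.exp(np.cumsum(rng.normal(0, 1e-3, size=n)))

for horizon in [5, 15, 45, 135]:                     # in bars, e.g. minutes
    fwd_ret = prices[horizon:] / prices[:-horizon] - 1
    x = pnl_delta[:-horizon]
    beta = np.polyfit(x, fwd_ret, 1)[0]              # simple OLS slope
    corr = np.corrcoef(x, fwd_ret)[0, 1]
    print(horizon, round(beta, 6), round(corr, 4))
```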
By and large, we found this pretty interesting effect, which I think is counterintuitive to everyone I’ve talked to about this, which is that our HFT pnl, whether it’s maker or taker, it doesn’t actually matter, is negatively correlated with returns in crypto. Its effect is pretty strong, but if you zoom in on actually trying to capture it… We were super excited when we saw this, by the way, we’re like, “Holy shit, let’s just pivot. We’ll just run HFT at a loss; we’ll just trade mid-frequency. Things are gonna be great.” It was a very strong effect. I don’t remember the exact numbers, but like, 10s of basis points on maybe an hour, two hour horizon with very high capacity.
The problem is, if you actually look, the signal only triggers to tell you to short. There’s not a reverse effect. Maybe there would be, but we tune our strategies to not lose money. It’s like, “You make money; all right, short.” And what do you short, right? Like you short the futures. But if you actually look into doing it, there’s this one effect, which is like funding rates. When this happens, a lot of sophisticated people are shorting, and I don’t think everyone’s using the same signal. But just in general, alpha is super correlated with other alpha. People can be looking at totally different things, but at the end of the day, alpha is super correlated. People are smart, and they’re making the right trades. So there’s the funding rate, and then there was this other thing, where the symbols it performed the best on, the outliers with extreme success (we obviously look at those, as in any study), were the things that are very hard to short. The net effect is still interesting, because we accumulate inventory when we’re trading, and you can bias the inventory, you can sort of internalize between your strategies. Different firms think about this differently, but there is obviously something you can do. Even if you didn’t do that, you could just bias your strategies, when this signal is strongest, towards not holding inventory, and this will have a positive effect. But it’s not a surefire, obvious trade you can make in isolation. I think there’s something there with the futures, but I think it was not compelling enough to really look into and make a standalone strategy around, which is why I think this is the closest thing to alpha that’s shareable on Twitter, I guess. But I think depending on your set of strategies and what you’re running, this could actually be super actionable alpha.
Corey Hoffstein 43:50
I was gonna say, I love this idea that it might not be an actionable alpha in the sense that if you actually want to short the futures, it might actually be priced into the funding rate of the futures. But biasing your inventory is another way to actually implement that alpha in a way that can have a meaningful impact on your pnl. It reminds me of something at the frequency I trade. DFA, for example: they don’t trade momentum specifically, but when they go to buy value stocks, they’re going to screen out the ones with really low momentum. They’re not explicitly incorporating momentum as a factor, but they’re waiting for that negative momentum, which occurs at a totally different time horizon, to abate before they buy their value stocks. A totally different set of factors, but similar idea of taking a theoretically orthogonal alpha signal, not trading it explicitly, but incorporating it into the way you’re trading to add some marginal edge, marginal improvement to what you’re doing. I love that concept.
Jeff Yan 44:51
I was going to add on to that. I think that’s a really interesting example that I hadn’t heard of. I’ve heard of some crazy stories, like some manual traders who swing large size, so I assume they know what they’re doing, will say things like, “Oh yeah, in crypto when the 50-day moving average crosses whatever… I have a signal that’s not technical analysis… but when that happens, that’s when I trigger.” I haven’t looked at that in particular, but it reminded me a lot of that. It’s like waiting for some other thing that you think is predictive before triggering.
Corey Hoffstein 45:19
Some conditional signal to change. Yeah, fascinating stuff. One of the things you’ve mentioned, we’ve been talking a lot about centralized exchanges, but we haven’t really talked about onchain strategies or decentralized exchanges all that much. You mentioned, one of your favorite discontinued onchain strategies was trading RFQs. I was hoping you would explain what that was, why it was a strategy that you loved and that worked so well. And then why you discontinued it.
Jeff Yan 45:49
This was about half a year ago, I think, when we were in the middle of expanding to defi. We had heard a lot of the best opportunities were starting to move onchain and centralized exchange trading was kind of hitting diminishing returns. Volumes were pretty low. So we’re like, “Okay, let’s spend more time looking at defi.” I think back then RFQs were a bit of a fad. Doug from Crocswap has written some interesting threads about this lately. I tend to agree with Doug that it’s not a good design. I think it’s trying to take something that works in tradfi, but not really applying it well to defi.
For context for listeners, RFQ stands for request for quotation, I believe. The idea is good. It’s like, “Well, let’s try to filter out this toxic flow that market makers hate so much. Let’s try to have retail interact directly with makers.” Retail will come in and say, “Hey, I’m retail; give me a quote.” And then the maker will give them a quote, usually inside the BBO or certainly for the size that retail wants, maybe better than if the retail were to hit the book directly. And then the retail gets the quote. For defi, it’s like a signed payload that you broadcast to some smart contract, which then verifies it, and then does the fund transfer between them, between retail and the market maker. It’s just like OTC, but it’s a protocol built around it, I guess.
This may sound good, and it happens a lot in tradfi. I think Jane Street does a lot of this kind of stuff, and it’s really good. You want to be on the other side of retail flow. You’re providing retail a great service by giving them bigger size and not getting frontrun by HFT. Good in theory, but in defi, it’s obviously a dumb idea because how do you prove that you’re retail? Everything’s anonymous, and you’re not KYC-ed.
As a proof of concept for this, we basically spun up a simple Python script that just asked for quotes from these market makers. And they were quoting us, you know, like five basis points away, quotes valid for like 60 seconds, 90 seconds or something. Most of the time, it’s really good for the market maker to get that fill. They’re putting like 100k in size or something like that. And we’re just like, “Okay, we’ll just wait until the price moves.” And the price obviously moves; crypto is really volatile. And when it moves, we’re like, “OK, we’ll just process this transaction. What are you going to do about it?” And this thing is super high sharpe. You can do even better. You don’t even have to wait for the price to move, the quote is basically a free option, and the option has time value as well. You literally just wait until the option is about to expire. And then you just decide if you want to trade or not. So that makes it even more consistently good.
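As a toy illustration of why a long-lived firm quote is a free option: hold the quote, then decide at the last moment whether executing it beats fees. All numbers and names are invented.

```python
# Toy sketch of the RFQ "free option": hold the signed quote, then decide
# just before expiry whether executing it is profitable. Numbers invented.
def decide_at_expiry(quote_side, quote_price, current_fair, fee_bps=1.0):
    """quote_side is the maker's side: 'sell' means we may buy from them."""
    edge_bps = ((current_fair - quote_price) / quote_price * 10_000
                if quote_side == "sell"
                else (quote_price - current_fair) / quote_price * 10_000)
    return edge_bps > fee_bps  # execute only if the option expired in the money

# Maker quoted to sell at 1000.5, valid 60s; fair has since moved to 1003.
print(decide_at_expiry("sell", 1000.5, 1003.0))   # True: take the fill
print(decide_at_expiry("sell", 1000.5, 1000.6))   # False: let it expire
```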
We just did this. And then I guess we were not the first ones to do this, or maybe we were, but the market makers react. And they say, “Okay, we’re gonna stop quoting you because you are making us lose money. You’re clearly not retail.” They just start to give you super wide quotes or just not quote at all. Then you just switch accounts. You just fund a new wallet and do it again. Fundamentally, I think there’s nothing wrong with this strategy. I guess a concern I have is that the main value we’re adding when running this strategy is that we’re proving that this RFQ market structure is dumb. There should be an intellectual reallocation of capital towards working on something that makes more sense. And maybe we sort of accomplished that? I think now for RFQs, makers have the last look instead of retail. I mean, like you said, we stopped running the strategy. But I think there has been an evolution. I do think that once you do that though, the whole benefit of RFQs goes away. You can see the discussion on Twitter threads, but it’s a hard problem to improve upon a central limit order book. And I just don’t think RFQs work in defi.
I guess this is a good example of us trying things in defi and realizing how immature the space is. Some protocols have not thought things through. This is a nice segue into us deciding that maybe we’re actually the best people to build something that is actually going to service retail and create a platform for decentralized price discovery.
Corey Hoffstein 49:51
Let’s take that segue because that’s where I wanted to go next and talk about your newest project. So you’re continuing to run the high frequency book, but you’ve pivoted a lot of your intellectual horsepower towards this project Hyperliquid. What is it? Why are you building it?
Jeff Yan 50:08
That’s right. We’re basically building it because when trading on defi, we were perplexed. There’s a ton of retail flow, even in the defi winter of May 2022. There’s a ton of retail flow, and they’re using these absolutely horrendous protocols. They’re paying a ton of gas, because the L1s suck. And they’re using these protocols where the design also sucks, for example, RFQs. It was amazing to us that people actually want to use this stuff, and you can kind of see it in the data. The demand is there. And so, we started exploring this. I don’t remember exactly when FTX happened in this timeline, but it was certainly before FTX collapsed, but not that much before.
When FTX blew up, I think the narrative obviously shifted dramatically towards, “Oh, shit. There’s this whole counterparty risk thing like not your keys, not your coins.” This kind of stuff that used to be a meme was all of a sudden top of people’s minds. That just solidified our conviction that this was something we should build.
Re: what to build, I think we actually struggled with that a decent amount. We wanted to figure out what people actually wanted and what was not being serviced well in the market. There are a ton of Uniswap clones, innovations, or integrations, like aggregators, different curves, different formulas, different adjustments you can make to make the AMM thing work. We’re not strong believers in AMMs. I think there’s just a lot of dumb liquidity that is being provided due to this false, misleading narrative of impermanent loss and/or yield farming and remnants of that. We’re not really strong believers in that anyway, and even if that were the thing that the market was demanding, there are so many people trying to service that, what are we going to add by building one?
We can look toward centralized exchanges and say, “What do people want? Where does price discovery happen? Where’s liquidity?” It’s all in perps. Perps are actually this ingenious innovation. I think it was actually invented in tradfi, but popularized by crypto. Let’s see who’s doing that in a decentralized way. Basically, no one. I mean, dydx’s order book is centralized, and they’re the closest you can get. They have some traction. We basically thought, “Why don’t we build this?”
I think the pitch for traders is – You like Binance. You like Bybit. You like something that’s centralized, but you’d rather not have to trust it. There will be this thing, Hyperliquid. There is this thing, Hyperliquid. It recently launched in closed alpha. It gives you the same experience. Maybe liquidity is not quite there yet. But fundamentally nothing is barring the same liquidity, tight spreads, instant confirmations, epsilon gas, basically gas to the extent of preventing DDoS, but the chain itself can handle 10s of 1,000s of orders per second without an issue. Everything’s transparent. Everything’s onchain. Everything is a transaction. That is basically the vision.
We’re targeting defi to start because it’s hard to make that educational pitch. And I think a lot of people are trying to do it, educating people that, “Hey, there’s a new way to do things. You don’t need a custodian. A blockchain, a smart contract can be your custodian.” That is a hard thing to sell and not really our edge in doing. But there are these people who want to do it today. That’s what we’re targeting. You’re basically showing them, “Hey, out of all the different protocols, most of them are not serious. Most of them are just clones of something that sort of works, like a band aid solution. Maybe it’s based on oracle price, whatever. It’s good for degen gamblers, but not good for serious traders who want real liquidity.” But Hyperliquid stands out because it is built with that in mind.
We had to innovate a lot on the tech to make this happen. So we were heads down building for part of a quarter. We really wanted to make it work through some smart contracts. I think we were kind of sold on the dydx model of trusted offchain matching but trustless settlement. Upon further thinking, it’s pretty flawed. The system is only as decentralized as its weakest component – as its most centralized component. We basically decided this was not acceptable. This will not actually let us scale to the vision we actually want. We need to be fully decentralized and that leaves us very little choice. We need to build our own blockchain. That kind of just did it. We have very much a no nonsense, don’t take things for granted attitude. People say it’s hard to build L1s, but we kind of just said, “Okay, let’s find some consensus protocol.” Found Tendermint. Not great. Honestly surprised it works, but it’s been battle-tested. So we took it and built on top of it. It’s gotten us to where we are today.
Corey Hoffstein 54:29
Can you talk a little bit more about that? Because I know deciding to build your own L1 is a key differentiator between what you’re doing with Hyperliquid and other DEXs, and is a crucial component of your approach. First, let’s make the assumption that some folks listening have no idea what an L1 is. Can you explain what an L1 is? And then second, why was that such an important, critical decision for you?
Jeff Yan 54:56
L1, I think, was a whole narrative. A lot of the big investments, Solana, Avalanche, etc., were part of that whole L1 thesis. But really it’s quite simple: an L1 is just a blockchain. It’s usually contrasted with smart contract-based approaches, where you take another L1, whether that’s Ethereum or Solana, and build your exchange as a smart contract that the L1 executes. That’s what it is.
The reason it’s so important: there’s this weird incentive where people want to build on an existing L1, because you get the VC slush funds, the L1s have a lot of tokens, and you get that kind of backing and PR. It’s sort of a safer bet. Obviously the L1s are really trying to get people to build on them, because a general purpose smart contract L1 has no value unless people are building on it. So there’s a bias towards defaulting to smart contracts. Whereas, if you look at Cosmos chains, which are all built on Tendermint, no one is really incentivized to push those. No value actually accrues, at least for now; there’s no value that accrues to Atom, for example. I think they’re starting to come up with ways to do this, but it’s fundamentally a self-sovereign system. Keep that bias in mind when you hear the two approaches compared. My personal opinion, having tried both, is that it’s hard for me to imagine building a good exchange as a smart contract on someone else’s L1, certainly for derivatives, and certainly if you want to run an order book, which, as I talked about earlier, is a good model. Some validation for this idea is that dydx, which is probably the frontrunner, is pivoting to building their own blockchain five years later. For them, it was maybe some sort of legal pressure; I can only speculate there. But the current thing they’re running is obviously not decentralized, and everybody knows this. I guess they will sunset it when they’re ready.
But from our perspective, L1s are the way to build an exchange. Maybe as a concrete example: if you’re running a smart contract exchange, you’re constrained by how the smart contract platform works. On Ethereum, transactions must be triggered by a user action. So if you want to do these very basic operations on a perpetuals exchange, such as distributing funding every eight hours (the mechanism by which the perp price is pushed towards the spot price), that’s a super hard thing to design if you’re trying to build an order book as a smart contract. Let’s say you have 100,000 outstanding positions. The number of storage slot updates you need to make on Ethereum doesn’t fit into a block. You have to design a protocol around who does this. You need some auction to figure out who the privileged people allowed to trigger funding are, and who gets the credit. There’s got to be some fee paid to them, because they’re paying gas. It’s not atomic. There’d be this weird thing where you get funding approximately every eight hours, but depending on how many people there are, you might be three minutes late. How are you going to run a trading strategy around that? This is a super basic operation; all perp exchanges need it. But if you’re running your own L1, it’s trivial. You just bake it into the consensus protocol itself. You just say, “All right, when you’re producing new blocks, you’re going to execute arbitrary code. So if this block is at a new multiple of eight hours since time zero, trigger funding and do the updates.” It’s so much simpler. I think running an exchange is a lot closer to building an L1 than it is to writing some simple smart contract.
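To make that contrast concrete, here is a minimal sketch in Rust of what “baking funding into block execution” could look like on a custom L1. Everything in it is hypothetical: the struct names, the premium-based funding formula, and the scheduling logic are illustrative only and are not Hyperliquid’s actual design. The point is simply that the settlement loop runs inside the state transition itself, so it is atomic and on schedule by construction, with no external keeper transaction.

```rust
// Hypothetical sketch only: periodic funding settlement baked directly into a
// custom L1's block execution. Names and formulas are illustrative.

use std::collections::HashMap;

const FUNDING_INTERVAL_SECS: u64 = 8 * 60 * 60; // every eight hours

struct Position {
    user: String,
    size: f64, // signed contracts: positive = long, negative = short
}

struct PerpMarket {
    mark_px: f64,
    index_px: f64, // spot / oracle reference price
    positions: Vec<Position>,
    balances: HashMap<String, f64>,
    next_funding_time: u64,
}

impl PerpMarket {
    // Simple premium-based funding rate for one interval (illustrative only).
    fn funding_rate(&self) -> f64 {
        (self.mark_px - self.index_px) / self.index_px
    }

    // Settle funding across all open positions in one atomic state update.
    fn settle_funding(&mut self) {
        let rate = self.funding_rate();
        for p in &self.positions {
            // Longs pay shorts when mark > index, and vice versa.
            let payment = -p.size * self.mark_px * rate;
            *self.balances.entry(p.user.clone()).or_insert(0.0) += payment;
        }
    }
}

// Run by every validator for every block it executes. Because consensus fixes
// the block timestamp, all nodes trigger funding at the same block, atomically.
fn execute_block(market: &mut PerpMarket, block_time: u64) {
    if block_time >= market.next_funding_time {
        market.settle_funding();
        market.next_funding_time += FUNDING_INTERVAL_SECS;
    }
    // ... then apply the block's user transactions (orders, cancels, etc.)
}

fn main() {
    let mut market = PerpMarket {
        mark_px: 100.5,
        index_px: 100.0,
        positions: vec![Position { user: "alice".into(), size: 2.0 }],
        balances: HashMap::new(),
        next_funding_time: FUNDING_INTERVAL_SECS, // first funding 8h after genesis
    };
    execute_block(&mut market, FUNDING_INTERVAL_SECS + 3); // a block just past the 8h mark
    println!("balances after funding: {:?}", market.balances);
}
```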
Corey Hoffstein 58:38
Now you’re talking about perps. I want to go back a little bit to the traditional way in which decentralized exchanges currently operate, which is via liquidity pools at different fee tiers. So someone who wants to provide liquidity might put in, I don’t know, Ethereum and Bitcoin for the Ethereum and Bitcoin pair. And they might offer liquidity at a one basis point fee tier or a five basis point, or a 30, or a 100, I think, is how high they go. But there are these very specific buckets. I can’t, for example, offer liquidity at a 15 basis point tier.
This is a very, very different model than the order book model, one you are intimately familiar with from your high frequency trading days. Why do you think the order book model you’re adopting for Hyperliquid is inherently better than the fee tier structure that current decentralized exchanges operate on?
Jeff Yan 59:36
The fee tier thing is interesting. If you look at the AMMs, they’re slowly trying to progress towards being an order book. A lot of defi is a little frustrating; it’s like reinventing the wheel. Maybe there’ll be some innovations along the way. But fundamentally the liquidity pool model is both ingenious and a scam. It was born out of necessity. In 2018 or whenever Uniswap was built, it wasn’t feasible to do anything other than a few simple arithmetic operations and one or two storage updates per user transaction; there’s a limit to how much gas users are willing to pay. So it was born out of that computational constraint. They managed to get it to work by tricking people into providing liquidity to the pool. I think impermanent loss was a super good marketing ploy, borderline unethical, I feel. These people are smart; I find it hard to believe that they didn’t know what they were doing. Tricking people by saying, “Hey, you put your stuff here. You’re not trading. You’re not posting liquidity. You’re just depositing into this yield thing. And yeah, you might have some loss, but don’t worry, it’s impermanent.” It’s definitely questionable.
I think people are waking up to this now. Just model prices as a random walk: the pool mechanically sells the asset as it rallies and buys it as it falls, so the liquidity providers are systematically on the wrong side of every move and bleed to arbitrageurs. There used to be a lot of controversy around this; I don’t really know why. It’s super obvious as a trader. You just arb these pools and make a ton of money. It’s a super competitive trade now, but it’s a really good trade. Who’s providing this liquidity? It’s not professional market makers like in an order book. It’s a bunch of retail that put their funds there and maybe literally forgot about them. It’s just negative EV over time. You’re just suffering. Add the yield farming stuff to incentivize liquidity, and then maybe the yield farming dries up while retail forgets their liquidity is still there, I don’t really know. But it’s not a sustainable model. People might say, “Oh, the volumes are pretty high. Maybe it works.” But that’s because of this sort of ingenious marketing scheme. Over time, I expect the liquidity to trend downwards. When it actually hits equilibrium, the liquidity ends up so thin that the fees collected from retail flow just barely pay for the adverse selection. And that level of liquidity, if you do the math, is awful. That’s the fundamental argument for why these pool-based things don’t work.
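As a rough illustration of that adverse-selection argument, here is a minimal sketch using the textbook constant-product math (not any particular protocol’s code) comparing the value of an LP position with simply holding the same deposit after a price move. The gap is what fees would need to cover for the LP to break even.

```rust
// Minimal sketch: constant-product LP value versus holding, after a price move.
// All numbers are hypothetical; this is the standard x*y=k arithmetic only.

fn lp_vs_hold(p0: f64, p1: f64, deposit_usd: f64) -> (f64, f64) {
    // Deposit split 50/50 at the starting price p0 (USD per unit of the asset).
    let x0 = deposit_usd / 2.0 / p0; // asset units
    let y0 = deposit_usd / 2.0;      // USD units
    let k = x0 * y0;                 // constant-product invariant

    // After arbitrageurs move the pool to the new price p1, reserves satisfy
    // y1 / x1 = p1 and x1 * y1 = k.
    let x1 = (k / p1).sqrt();
    let y1 = (k * p1).sqrt();

    let lp_value = x1 * p1 + y1;   // value of the LP share at p1
    let hold_value = x0 * p1 + y0; // value of just holding the deposit
    (lp_value, hold_value)
}

fn main() {
    // Example: the asset doubles from $1,000 to $2,000 on a $10,000 deposit.
    let (lp, hold) = lp_vs_hold(1_000.0, 2_000.0, 10_000.0);
    println!("LP: {lp:.0}, hold: {hold:.0}, shortfall: {:.0}", hold - lp);
    // LP ~ $14,142 vs hold ~ $15,000: the ~$858 gap is what fees must cover,
    // and it grows with volatility regardless of direction.
}
```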
An evolution of that is GMX, or all the GMX clones, where instead of having this constant curve, they use an oracle price. They have all these tricks and sort of limits and things like that to get the oracle price to be relatively accurate when trades come in. But even then, you start to see these pretty famous cases of people manipulating the price on centralized exchanges, and then trading on GMX against the manipulated oracle price. I view all this stuff as Band Aid solutions.
I think the tech is finally at a place, in L1 consensus and that general area of research, where you just don’t have to make these sacrifices. You can have your cake and eat it too. You can be decentralized and run an order book, which, from what I can tell empirically, is the only way people have found to encourage real price discovery, real markets.
Corey Hoffstein 1:02:44
One of the potential problems with having your own L1 is it requires people to bridge money on and off the L1 from some sort of fiat on-ramp or another chain, which I could see potentially being a risk to price discovery. The price discovery on the platform might not be as efficient because money moving on and off the platform is inherently speed limited by this bridging component. I’m curious as to your thoughts there. Do you see that being a potential risk in operating this as your own L1? Or do you think that that’s a non-issue?
Jeff Yan 1:03:26
It is definitely an issue in crypto in general, not just in defi: even if you’re trading on centralized exchanges, when you’re doing arbs your withdrawals and deposits still move over the blockchains, and if things are congested you have the same problem. But like I said, we’re focusing on perps to start because that’s the 80/20 in terms of opportunity; almost all the volume is in perps. And the nice thing about perps is that you can start with another 80/20, which is to just margin everything in USDC and call it a day. It’s not that hard to add a couple more stables later to diversify stablecoin risk. By and large, people are pretty willing to get on board with this model: “All right, I deposit my USDC into this bridge, chain, contract, whatever, and that lets me express my opinion on a large class of crypto assets. That’s pretty cool.” So in times of high volatility and price discovery, as long as you have the collateral, you can express your opinion. The spot-perp arb is a statistical thing; you’re just trying to harvest the funding rate at a profitable spread between spot and perps. You can do that trade without constantly moving the spot or the USDC around, and at least the perp leg you can do on Hyperliquid.
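For a rough sense of the funding-harvest economics described here, a back-of-the-envelope sketch follows. The one-basis-point-per-period figure and the flat-funding assumption are purely hypothetical; actual funding rates vary continuously.

```rust
// Illustrative only: annualized carry for the classic basis trade
// (long spot, short perp, collect funding), assuming a flat funding rate.

fn annualized_funding_carry(rate_per_period: f64, periods_per_day: f64) -> f64 {
    rate_per_period * periods_per_day * 365.0
}

fn main() {
    // Funding paid every 8 hours => 3 periods per day; assume 1 bp per period.
    let carry = annualized_funding_carry(0.0001, 3.0);
    println!("annualized carry: {:.2}%", carry * 100.0); // ~10.95% before fees and slippage
    // The perp leg can live on the exchange while the spot hedge sits elsewhere;
    // collateral only needs to move when margin or the basis demands it.
}
```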
With that being said, I think it’s an interesting concern in general, and there’s a lot of interesting omnichain technology coming out these days. We’ve integrated with some, and we’re always looking for more. We’re happy to support the people pushing that frontier; we’ve got a lot on our plates, so we’re not really doing the multichain integration work ourselves. The ultimate goal, and at this point it’s really just a technological limitation that’s being solved in many ways, is that you have your assets on any source chain (we’re pretty agnostic), you send them to a trustless, decentralized bridge protocol, and they serve as collateral on Hyperliquid.
Corey Hoffstein 1:05:18
Now, you’ve been writing your own custom L1. My expectation is that you’d still be operating at speeds that are orders of magnitude slower than most of the major centralized crypto exchanges, as well as traditional finance exchanges. Do you think faster is always better? Or is there an ultimate limit to the benefit? Can you get 99.9% of the way there with an order book that’s still orders of magnitude slower than what you see at something like Binance nowadays?
Jeff Yan 1:05:51
I think you can. Even today, if you try Hyperliquid from the UI, or from the Python SDK, or the raw API, you’ll see the latency is maybe 100 milliseconds, maybe 300. It’s not super deterministic because of the blocks being produced. You might say, “Oh, that’s terrible. That’s like 10x the Binance order entry latencies.” But latency doesn’t work like that. It’s not like fees. It’s not linear. Certainly for a user, for retail, which is the most important segment that you have to cater to first, human reaction can’t really differentiate between 100 milliseconds and 10 milliseconds. Even if they can, they don’t care. They just want something that’s immediate. Prices don’t move that much in 100 milliseconds versus 10. For all intents and purposes, it’s zero. So the latency incurred by block times is basically solved by running a custom L1.
Now, if you look at Ethereum and other blockchains with block times of more than 10 seconds, that is obviously a huge hit to user experience. Prices move a lot in 10 seconds. For a user, like we said, there are diminishing returns to latency in that regard. In terms of order book speed, the thing you really care about is TPS, transactions per second; for a DEX, you care about orders, cancels, etc. per second. And yes, there’s going to be an order of magnitude difference between running a decentralized exchange and trading somewhere like Binance. That being said, I also think this doesn’t matter, because computers get exponentially better anyway, and even now they’re at a point where they’re good enough. I don’t know the exact numbers on Binance’s matching engine, but let’s say it does a million orders per second, and let’s say our L1 is really performant and does 100,000 orders per second. It’s not like Binance is 10x better because of that. It’s hard to evaluate, but you can very easily design a protocol that caps out at 100,000 transactions per second and is still a great protocol, sufficient for price discovery on the assets listed. Sure, during the insane volatility spikes, maybe your orders will be a couple of blocks, or even 10 blocks, delayed in hitting the chain. But it’s not like this stuff doesn’t happen on centralized exchanges either. Yes, it’s an order of magnitude, but it’s an order of magnitude in a dimension where you don’t really pay an order of magnitude in cost. Whereas if you look at some L1 chains that do maybe 10 transactions per second, then yeah, the difference between 10 and 100,000 is a huge deal.
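The volatility-spike point amounts to simple queueing arithmetic. Here is a toy sketch with made-up numbers, showing why a hard TPS cap turns a burst into a bounded delay rather than a failure.

```rust
// Toy arithmetic only (all numbers hypothetical): extra delay when order flow
// briefly exceeds a chain's transaction-per-second capacity.

fn backlog_delay_secs(burst_tps: f64, burst_secs: f64, capacity_tps: f64) -> f64 {
    let backlog = (burst_tps - capacity_tps).max(0.0) * burst_secs; // excess orders queued
    backlog / capacity_tps // time needed to drain the queue
}

fn main() {
    // Hypothetical: a chain processing 100,000 orders/s sees a 2-second
    // burst of 300,000 orders/s.
    let delay = backlog_delay_secs(300_000.0, 2.0, 100_000.0);
    println!("worst-case extra delay: {delay:.1} s"); // 4.0 s, i.e. a handful of blocks
}
```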
I think some people take different views on this. It’s not only a TPS thing; it’s also an engineering thing. Dydx is really into the idea of offchain order books. I think even with v4, the plan is that validators run their own order books but only settlement happens onchain. In theory you can get maybe an order of magnitude TPS boost there, but I think what you’re giving up is pretty expensive. It increases the opportunity for MEV, and there’s ambiguity about the source of truth, because order books, in my mind, are part of the state; having that be offchain is a little hard to reason about. You take the order of magnitude hits here and there, but the thing you’re building is so much more robust and resilient, and the transparency you get, I think, far outweighs the costs.
I will also say that we have looked a lot into the latest research on consensus, because we expect that, when it comes down to it, consensus is going to be the limiting factor. And there’s a lot of really cool stuff. Tendermint’s pretty old; I believe the idea is at least 10 years old, though I don’t actually know when it originated. People have thought a lot about this problem since then. The only issue is that the modern consensus protocols are not quite production ready. We ended up going with Tendermint for now, but building everything else from scratch: not relying on the Cosmos SDK, but writing everything in a very performant way in Rust. We’ve done the research, and we will continue to keep tabs on it. For us, it’s very easy to swap out Tendermint for any consensus protocol that is production ready and that we deem better. We expect at least a 10x improvement there when the time comes. We’re pretty optimistic about the tech stack we’re building on. The proof of concept is there, and the benchmarking looks good. We wouldn’t be doing the marketing and user acquisition push if we didn’t think the platform could support this exchange.
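A minimal sketch of what “easy to swap out Tendermint” might look like architecturally: keep the exchange’s state machine behind an interface that only ever sees an ordered stream of blocks, so the consensus engine underneath is replaceable. All names here are hypothetical; nothing below is taken from Hyperliquid’s actual codebase.

```rust
// Hypothetical design sketch: decoupling the exchange state machine from the
// consensus engine so one can be swapped without touching the other.

/// The application-side state machine: order book, margining, funding, etc.
trait StateMachine {
    fn apply_block(&mut self, txs: &[Vec<u8>], block_time: u64);
    fn state_hash(&self) -> [u8; 32];
}

/// The consensus engine's only job: agree on an ordered list of transactions.
trait ConsensusEngine {
    fn next_block(&mut self) -> (Vec<Vec<u8>>, u64); // (ordered txs, block timestamp)
}

/// Glue: the node feeds agreed-upon blocks into the state machine.
fn run_node<C: ConsensusEngine, S: StateMachine>(mut consensus: C, mut app: S, blocks: usize) {
    for _ in 0..blocks {
        let (txs, time) = consensus.next_block();
        app.apply_block(&txs, time);
        let _commitment = app.state_hash(); // e.g. checkpointed or gossiped
    }
}

// Toy stand-ins for Tendermint (or any future replacement) and the exchange.
struct ToyConsensus { clock: u64 }
impl ConsensusEngine for ToyConsensus {
    fn next_block(&mut self) -> (Vec<Vec<u8>>, u64) {
        self.clock += 1;
        (vec![b"order:BTC buy 1@30000".to_vec()], self.clock)
    }
}

struct ToyExchange { applied: u64 }
impl StateMachine for ToyExchange {
    fn apply_block(&mut self, txs: &[Vec<u8>], _t: u64) { self.applied += txs.len() as u64; }
    fn state_hash(&self) -> [u8; 32] { [self.applied as u8; 32] }
}

fn main() {
    run_node(ToyConsensus { clock: 0 }, ToyExchange { applied: 0 }, 3);
}
```

Under this kind of split, replacing Tendermint would mean providing a new implementation of the consensus side while the order book, margining, and funding logic stay untouched, which is consistent with the 10x-when-ready expectation described above.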
Corey Hoffstein 1:10:26
Well, Jeff, we’ve come to the end here. Longtime listeners of my podcast will know that one of the things I do is completely change my cover art every season, and this season’s cover art is inspired by tarot cards. I’m letting each guest pick the tarot card that speaks to them. You picked the Chariot, which will be the design of your cover art; you have yet to see it, but it will be available soon. The last question of the episode: why did you pick that card?
Jeff Yan 1:10:59
There are positive versions and negative versions: control, willpower, success, action, determination. All very positive things. I think it speaks strongly to how we go about doing things. I think it sets us apart from a lot of projects or teams in the space. We shoot for something that is pretty unreasonable. If you ask most people, “Can you build Binance in a fully decentralized way and not sacrifice anything?” They’ll probably say, “Maybe in five years or something.” But we don’t make assumptions. We push ourselves, do research from first principles, and ship things. It’s sort of like willpower, action, determination. That’s how we do things. It’s part of the game. For trading, you have to like winning just as much as you want to make money. If you just have one of those, you’re not going to succeed in trading. And now that we’re building something bigger, it’s even more important. We have this vision. People need this thing, and nobody’s building it. It’s partly because it’s just really hard to build. Our team is like the chariot. We’re just gonna go do it.
Corey Hoffstein 1:12:06
I love it. Well, Jeff, this has been fantastic. I really appreciate your time and best of luck with Hyperliquid.
Jeff Yan 1:12:13
Thanks, Corey. It was great talking to you. Appreciate it.