These numbers are the latest predictions from my ML model, with 0 meaning sell, 1 meaning hold (do nothing), and 2 meaning buy. One number is generated every 6 hours, so over the past day it has advised sell, then hold, then buy, then sell again. I’ve actually sold and bought ETH twice in the past couple of days, both times with close to no change in total balance, except for some transaction fees. I might have to rethink this strategy a bit. Fees on Binance are 0.15% per trade, which can add up over time.
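For reference, the mapping from the model’s integer classes to trade actions can be sketched like this (the example predictions are illustrative, not real model output):

```python
# Map the model's integer classes to trade actions:
# 0 = sell, 1 = hold (do nothing), 2 = buy.
ACTIONS = {0: "sell", 1: "hold", 2: "buy"}

def to_actions(signals):
    """Translate a stream of integer class predictions into actions."""
    return [ACTIONS[s] for s in signals]

# One prediction every 6 hours, so four of them cover a day:
day = to_actions([0, 1, 2, 0])  # -> ['sell', 'hold', 'buy', 'sell']
```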
I usually search Google for help with problems I run into while coding, but sometimes that doesn’t work. Usually that’s because I find it hard to frame a question that gives relevant results. Then, as a last resort, there’s StackOverflow. My history with them has been rather chequered. I was chided for asking one question (no idea why) and banned for a week. On another question I got a very helpful answer from one user just before a moderator stepped in and said the kind of question I had asked was not permitted. Luckily I already had an answer. What will happen this time?
The problem is I opened a Jupyter Notebook directly from Yves Hilpisch’s GitHub repo in Google Colab. Two cells in the notebook attempt to open another file: one runs a .py file, and the other loads a .csv file. Both cells gave a ‘file not found’ error even though the files exist in the correct places. Since nearly every Jupyter Notebook under the sun loads at least a CSV file, what is the point of Google Colab providing the option to open a notebook directly from a GitHub repo if it can’t load such a file? I’m clearly missing something.
Of course there’s a workaround. I can download the repo to my local machine and then upload it to my Google Drive. In fact I seem to remember that it’s possible to transfer a GitHub repo directly to Google Drive without the download. Then run the notebook from Drive, and that should have no problem accessing the data files. But still, what is the point of suggesting it’s even possible to run a notebook directly from GitHub? Just a source of frustration. As if I don’t have enough of that in my life.
AFTERTHOUGHT
As a former teacher of programming, it has occurred to me that Google Colab/GitHub is an ideal combination for teaching. A teacher could upload example code to GitHub, and students could open the files from Google Colab, all without the students or the institution having to concern themselves with installing and maintaining a Python environment. Also, data files could perhaps be accessed via HTTP request rather than from a drive. I must check that (the HTTP request) out, as it would answer my basic question of how this would be generally useful.
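The HTTP approach is worth sketching. GitHub serves raw file contents at raw.githubusercontent.com, and pandas can read a CSV straight from such a URL. The user, repo, and path below are placeholders, not the actual layout of Hilpisch’s repo:

```python
def raw_github_url(user, repo, branch, path):
    """Build the raw.githubusercontent.com URL for a file in a GitHub repo."""
    return f"https://raw.githubusercontent.com/{user}/{repo}/{branch}/{path}"

url = raw_github_url("someuser", "somerepo", "main", "data/prices.csv")

# In Colab, the fetch itself would then be:
# import pandas as pd
# df = pd.read_csv(url)  # reads the CSV over HTTP, no Drive needed
```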
Just made first trade under new strategy. I had some logic errors in my data preparation code, which I fixed, and then the predictions were that I should sell ETH for BTC. Previously all the recent signals were to do nothing, but after the fix the last dozen or so signals have been sell.
So I borrowed some ETH (not too much) on AAVE and transferred it to Binance, and traded it for BTC. So I guess the idea now is that I buy back that amount of ETH when the price falls and repay the debt. With a little BTC profit remaining, after transaction fees. Hopefully. Anyway, that’s the plan.
Here’s the latest plan. Trade ETH for BTC. I have some BTC on Binance, so there’s no need for shorting: if ETH goes up against BTC I can simply trade BTC for ETH on the spot market at a 0.15% fee, and when it goes down against BTC I can trade back the other way. Unfortunately ETH has been falling against BTC for quite a while now, and it might be some time before I get to put on my first trade using this strategy.
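The fee drag is easy to quantify: each trade keeps (1 − fee) of the position’s value, so round trips compound it down. A quick sketch using the 0.15% spot fee:

```python
FEE = 0.0015  # 0.15% per spot trade

def after_trades(balance, n, fee=FEE):
    """Balance remaining after n trades, ignoring price moves."""
    return balance * (1 - fee) ** n

# Ten round trips (20 trades) on a balance of 1.0:
remaining = after_trades(1.0, 20)  # roughly 0.97, i.e. ~3% lost to fees
```

So the strategy’s edge per round trip has to clear about 0.3% just to break even on fees.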
I’ve backtested this plan, and it gives a decent return, about 50% profit in the past year. Significantly better result using XGBoost with the MA crossover than when not using it. Of course those backtest results never work out in practice, often quite the opposite. I guess it’s a matter of how much I actually commit to trading this strategy. It definitely makes the whole business simpler though, and that’s a good thing.
UPDATE:
I’ve just backtested using the XGBoost alone, and that gives me the best result on this pair so far. And as for waiting for ETH to go up against BTC, I could just borrow some ETH on AAVE and transfer to Binance…
Facepalm moment. The MA cross is approaching, and I discover that I can’t borrow ADA on AAVE. I’m not actually looking to borrow at the moment, because the indicated trade is a long not a short, but I want to be able to borrow and I can’t. Why has it taken me so long to realize this? Actually XGBoost is giving me a long string of no-go signals, so perhaps I wouldn’t have traded the crossover anyway, but that still doesn’t solve my fundamental problem.
I’ve been looking for other liquidity pools that do offer ADA borrowing. There are a few, but they all seem to be fairly new projects. Security is important in crypto generally, and I was happy to go with AAVE because it is a big project that has been around for a while, has a good reputation, and was recommended to me by someone who knows a lot more about DeFi than I do. Not so these other pools. So what to do?
AAVE does offer ETH borrowing, of course. I think it’s time to investigate the long-term prospects of trading ETH. At least it will be doable. One reason I went with ADA is that it seems pretty undervalued at the moment, which is a fundamental consideration, but fundamentals really shouldn’t be an issue with this strategy anyway. I won’t be buying to hold, after all.
UPDATE
Testing the strategy with ETHUSDT instead of ADAUSDT shows a small profit, bigger with XGBoost than without, but still small. Lots of work to do. Tweaking the lookback periods might help, but perhaps I should use some base strategy other than Moving Average Crossover. There are plenty of indicators to try after all.
I’ve rejigged my XGBoost-enhanced ADAUSDT MA Crossover trading strategy for both long and short positions, and backtesting shows a better result than long only. The backtest return for the past year is about the same as BTC’s gain over the same period. So, do I leave all my capital in BTC, or put some of it into trading ADA?
My main reason for spending so much time exploring trading strategies is that buy and hold only works if prices go up. Trading can work in all markets, if done right, but doing it right is the hard part. I could lose the lot if not careful.
So what does the future hold, crypto-wise? One analyst has said it’s unlikely to go up anytime soon because no one has got any money to invest. Certainly the hope of greater adoption by regular people with the ‘new’ BTC ETFs has proven illusory. I’m sure it will go up eventually, but I might not live to see it! At least the trading keeps me occupied, so I’ll close my BTC margin position at least and put the collateral to use trading ADA on a DeFi exchange. And according to the chart there should be an MA Crossover in the next day or two. Hoping XGBoost gives me the green light on that. At the moment it’s No, No, No.
The first strategy one learns about, when using indicators, is the Moving Average Crossover. It’s generally not a very successful strategy but it’s easy to explain and implement. Very easy in TradingView, because one can just add the two moving averages to a chart and set the lookback periods to one’s liking. After years of trying far more complex strategies, such as statistical arbitrage, coming back to the Moving Average Crossover feels surreal. Sure, I’m ‘supervising’ it with an ML algorithm, in the sense that the algo gives the green light to trade (or not) based on its own assessment of the chances of profit, but that doesn’t make a big difference to the overall result. Picking the right values for the lookback periods has far greater impact. Or so it seems.
Here’s the chart I’m currently trading. ADAUSDT on Binance, 6 hourly candles, with 15 bar and 60 bar moving averages added to help spot a crossover as soon as it happens. When short crosses long from below a trade could be on, depending on the prediction of future return made by the XGBoost model. Exit strategy is 2% take profit or stop loss. Only about 1 trade per month with this setup, but backtesting did show significant profit over the past 18 months. However, previous backtesting looked good for my stat arb strategy too, and that didn’t work out so well in the long run. I’m not going to over commit again.
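For the record, here’s a minimal sketch of how the crossover signal is computed, in plain Python so the mechanics are explicit (real use would be on a pandas Series of the Binance 6-hour closes):

```python
def sma(prices, n):
    """Simple moving average; None until n bars are available."""
    return [None if i + 1 < n else sum(prices[i + 1 - n:i + 1]) / n
            for i in range(len(prices))]

def crossover_signals(prices, fast=15, slow=60):
    """1 on bars where the fast MA crosses above the slow MA, else 0."""
    f, s = sma(prices, fast), sma(prices, slow)
    out = [0] * len(prices)
    for i in range(1, len(prices)):
        if f[i - 1] is None or s[i - 1] is None:
            continue
        if f[i - 1] <= s[i - 1] and f[i] > s[i]:
            out[i] = 1
    return out
```

With 15/60 periods on 6-hour bars the slow MA spans 15 days, which fits the roughly one-trade-per-month frequency.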
I’ve done some testing using other values than the 15/60 bars that I’ve used here for the two moving averages, and the results are very different. Seems like my initial guesses are right on the sweet spot, at least for this trading pair. I wonder how brittle it is though. Perhaps it’s just chance that gives me good results now, but maybe in a couple of months that won’t be the case. It doesn’t take much of a change to wipe out the profit completely, even produce a loss. I don’t really trust backtesting much, given my past experience.
Backtest results for approx 3 years of ADA/USDT: I ended with 24% of initial capital, committing all of the current balance to each trade. Looks pretty bad, so I checked the actual prices over the period, and ADAUSDT went from 2.1820 at the start of the period to 0.3818 at the end, i.e. it ended at about 17.5% of its initial value. So I think I can call that a win for my strategy. I lost less money than a buy-and-hold strategy would have.
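Double-checking that comparison, the strategy’s final balance versus straight buy-and-hold over the same window:

```python
# ADAUSDT price at the start and end of the ~3-year backtest window.
start, end = 2.1820, 0.3818

hold_fraction = end / start      # buy-and-hold keeps ~17.5% of capital
strategy_fraction = 0.24         # the backtest ended with 24% of capital

# The strategy beat holding, though both lost heavily.
outperformance = strategy_fraction - hold_fraction  # ~6.5 points of capital
```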
I think I need to refine my signal a bit. Perhaps look for gain over a shorter timeframe, and maybe target a 2% gain rather than a 1% gain. Of course, with a more than 80% drop over the period of the backtest, perhaps a long-only strategy is not the way to go. I’m looking at that because it’s much easier to implement: just trade in a Binance Spot Wallet with OCO orders. Have I mentioned my dissatisfaction with the ban on margin trading for Australian citizens? What party do I vote for to get that reversed?
If the crypto market generally goes up over the next couple of years this isn’t a problem, but my optimism from the start of the year has significantly declined. The approval of BTC spot ETFs by the US SEC at the start of the year presaged a large inflow of capital to the crypto market, but that hasn’t happened. The ‘legitimizing’ of crypto as an asset class, the ease of onboarding due to ETFs rather than holding crypto, the increasing scarcity of BTC particularly, all these have made no difference. The fact that there is a wealth transfer from my generation to younger people as we die off, and younger people are more comfortable with digital assets, has also made no difference yet. Perhaps it will all happen after I’m dead.
UPDATE:
I’ve backtested only more recent data here, and added an extra condition before entering a trade – that the average return over the past 60 bars (15 days) be positive. Another parameter to optimize, but at least I’m getting some profit at last.
AFTERTHOUGHT
The pair of conditions (positive MA and positive ML prediction) give a better outcome than either one alone. Perhaps instead of just using a positive moving average as an entry condition I could use something a little more sophisticated such as a moving average crossover. In this case the XGBoost model is providing the meta-labelling to a traditional trading strategy. Not sure where RL fits into all this, but it might be a while before I’m confident with that anyway. However it does mean that I can roll some of the features that I was planning to use in the RL model into the XGB model, such as general crypto index, SPY, Fear and Greed, etc. No harm in trying I guess. This whole thing is definitely a Work in Progress.
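The meta-labelling structure itself is simple: the traditional rule proposes a trade, and the ML model approves or vetoes it. A minimal sketch, where the signal vectors are illustrative stand-ins for the MA condition and the XGBoost go/no-go output:

```python
def meta_signal(primary_long, ml_go):
    """Trade only on bars where the base strategy proposes a long
    AND the ML model predicts a profitable outcome (meta-labelling)."""
    return [int(p and m) for p, m in zip(primary_long, ml_go)]

# e.g. the base rule says long on bars 1 and 3, the model approves only bar 3:
combined = meta_signal([0, 1, 0, 1], [1, 0, 1, 1])  # -> [0, 0, 0, 1]
```

The AND of the two conditions is exactly why the pair outperforms either alone: each filter removes a different class of losing trade.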
I need to set some starting parameters for this exercise. First are the trading rules. So I open a trade on a positive prediction of increase over the coming seven days. Fixed position or cumulative? I think I’ll go for cumulative this time. Put in my entire account balance (as much as I’m committing to this exercise anyway) and just continue with whatever the account holds at the time. This is more doable if I’m setting stops at 2% loss as it should prevent large drawdowns.
Exit strategy? I think I’ll go with 2% take profit or 2% stop loss for the moment. I might need to optimize these numbers, but I need a starting point. What about exiting after a set number of days? I guess if the price isn’t moving much either way there’s not much to be gained by doing this.
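The bracket for a long entry is just two price levels around the fill. A sketch with the 2%/2% starting values:

```python
def bracket(entry, tp_pct=0.02, sl_pct=0.02):
    """Return (take_profit, stop_loss) price levels for a long position."""
    return entry * (1 + tp_pct), entry * (1 - sl_pct)

# Entry at 0.50 gives a take profit at 0.51 and a stop loss at 0.49,
# which is what an OCO order pair on Binance would be set to.
tp, sl = bracket(0.50)
```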
I need a trained model to produce signals, but can’t backtest on the data used to train the model. I’ve got about six years of data, so maybe train the model on four years of data, and then start backtesting from that point on. I could retrain the model before each trade to include all the data up ’til that point. I’ll see how that works out.
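The split scheme above is a walk-forward (expanding-window) backtest. A sketch of the fold generator, with toy sizes in the example rather than the real four-years-then-retrain setup:

```python
def walk_forward(data, train_bars, step):
    """Yield (train_slice, test_slice) pairs with an expanding train window:
    train on everything up to the current bar, test on the next `step` bars."""
    i = train_bars
    while i + step <= len(data):
        yield data[:i], data[i:i + step]
        i += step

# Toy run: 10 bars, first 6 for initial training, retrain every 2 bars.
splits = list(walk_forward(list(range(10)), train_bars=6, step=2))
# fold 1 trains on bars 0-5 and tests on 6-7; fold 2 trains on 0-7, tests 8-9
```

Retraining before each trade is the limit case of `step` equal to the trade interval; the trade-off is training time against freshness of the model.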
So what could I optimize here? Currently my go/no go signal is determined by whether or not the price increases by more than 1% at any time in the following seven days. I could change the percent, or the number of days. I’ve basically made it a binary classification problem. Maybe that’s not the best approach, but at the moment it seems pretty good to me. Of course the stop loss/take profit percentages might not produce the best results. I’ll play around with those numbers to see what works best on the historical data. Something that I could consider in the future is to increase the number of trading pairs I’m looking at. Or maybe to include some extra input features. For the moment I’ll go with what I’ve got.
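The labelling rule reduces to: 1 if the close rises more than the threshold at any point within the horizon, else 0. A sketch (7 days at 6-hour bars is a 28-bar horizon; the usage example shrinks it to 2 bars just to keep the toy series short):

```python
def label(closes, horizon=28, threshold=0.01):
    """Binary labels: 1 if price exceeds close * (1 + threshold)
    at any point in the next `horizon` bars, else 0."""
    out = []
    for i, c in enumerate(closes):
        window = closes[i + 1:i + 1 + horizon]
        out.append(int(any(p > c * (1 + threshold) for p in window)))
    return out

# Toy series with a 2-bar horizon: bars 0 and 1 see a >1% rise in time.
labels = label([100, 100.5, 102, 101], horizon=2)  # -> [1, 1, 0, 0]
```

Both the threshold and the horizon are the obvious optimization knobs, exactly as noted above.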
A week ago I entered a trade, shown by the bottom yellow line. Prediction was that the price would increase by more than 1% over the next seven days, and I set a limit order to take profit at the level of the top yellow line. However it was all downhill from there and I’m wondering whether to close the trade and take a substantial loss, or wait for it to recover.
My XGBoost model shows 87% precision using the last 10% of data as a test set, with the model trained on the first 90% (about 10,000 six-hour OHLCV bars). So 13% of positive predictions are false positives, as has happened here. But 87% precision is pretty good, a lot better than I expected. If I had implemented basic risk management, such as a stop loss at 2%, my current unrealized loss would have been a much smaller realized loss, and I’d now be in a position to buy back in at a much lower price and end up far better off than I will be.
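As a sanity check on what that 87% figure means, precision is just true positives over all positive predictions. Computed by hand on toy vectors (not my real test set):

```python
def precision(y_true, y_pred):
    """Fraction of positive predictions that were actually positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fp)

# 2 true positives and 1 false positive among 3 positive predictions:
p = precision([1, 0, 1, 1, 0], [1, 1, 1, 0, 0])  # -> 2/3
```

Note precision says nothing about the trades the model skips (that’s recall), which is fine here since skipped trades cost nothing but fees avoided.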
So I haven’t really fully implemented the RL model that I plan to use to ‘supervise’ the trading of the XGBoost model; indeed, I’m waiting for a book by Yves Hilpisch on the subject to be released in December. However, if I do some basic backtesting using just the XGBoost model to determine the best levels for take profit and stop loss, I think there’s an excellent chance I can produce a profitable strategy without such an additional input. 87% precision really is pretty good. I should make the most of it.
I recently upgraded my OS to Ubuntu 24 (from Ubuntu 22), and was immediately reminded why I always decide afterwards that it was a big mistake. It only happens once every few years, so I forget the pain that happened last time.
Minor issues are fairly easily fixed. Getting 640×480 res on your 4K monitor? No worries, just change the settings back to use the nVidia driver. All your Python packages gone AWOL because the system default version of Python has been upgraded? Try reinstalling them.
But now there are lots of warnings that there didn’t used to be: ‘This is an externally managed environment, don’t mess with it’. I could go through the new approach to installing packages system-wide, but I’m not sure if I should. Back to using Docker for all my environments. But now Jupyter notebooks no longer work. Servers don’t respond. I can’t open a notebook either inside PyCharm or outside it.
Of course I could go back to using Google Colab for notebooks, or just do everything in .py files. At least they still work. If the packages are installed. Lots of ‘invalid interpreter’ messages on my various PyCharm projects. I have been trying to avoid conflicts by using the system installation wherever possible, but now that that’s been overridden almost nothing works.
When I was doing computer graphics I read somewhere that when a studio starts on a new project they freeze all the software that they’re using, even if the project takes years. One doesn’t want an upgrade on some crucial tool to wreck years of work. I should adopt a policy of getting a new OS only when I get a new computer, which doesn’t happen very often.