When the Market Takes Weekends Off – StockSimPy Devlog
I recently took a long break from working on StockSimPy — school got busy and pulled me away — but I’m back. The project is getting closer to the finish line, yet somehow the closer I get, the further it feels. The core components — backtesting, portfolio management, and stock data handling — are semi-functional, but parts like importing data from yfinance and cleaning it still need fixes, along with a few unexpected bugs.
Why Add Performance Metrics?
When I first planned StockSimPy, I didn’t think about including a dedicated performance class. I figured total return and a price graph would be enough. That changed after someone commented on one of my previous devlogs, asking if I planned to add metrics like the Sortino ratio.
That question stuck with me. I thought: If I were someone testing a trading strategy, would I want performance metrics? The answer was obviously yes. So, I researched what to include. For the initial release, I wanted just enough metrics to make the analysis meaningful without overcomplicating it. I settled on three: volatility, Sharpe ratio, and maximum drawdown.
This was my first time implementing these metrics, so I kept things simple — but not everything went smoothly.
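To make the three metrics concrete, here is a minimal sketch of how they can be computed from a series of daily portfolio values with pandas. This is an illustration, not StockSimPy's actual implementation; the function name, the 252-trading-day annualization factor, and the `risk_free_rate` parameter are my assumptions.

```python
import numpy as np
import pandas as pd

def performance_metrics(values: pd.Series, risk_free_rate: float = 0.0) -> dict:
    """Sketch: volatility, Sharpe ratio, and maximum drawdown
    from a series of daily portfolio values."""
    daily_returns = values.pct_change().dropna()

    # Annualized volatility: daily std scaled by sqrt of ~252 trading days
    volatility = daily_returns.std() * np.sqrt(252)

    # Annualized Sharpe ratio: mean excess daily return over daily volatility
    daily_rf = risk_free_rate / 252
    sharpe = (daily_returns.mean() - daily_rf) / daily_returns.std() * np.sqrt(252)

    # Maximum drawdown: largest peak-to-trough drop in portfolio value
    running_peak = values.cummax()
    drawdown = values / running_peak - 1.0
    max_drawdown = drawdown.min()

    return {"volatility": volatility, "sharpe": sharpe, "max_drawdown": max_drawdown}
```

The drawdown trick is the part worth remembering: `cummax()` tracks the highest value seen so far, so dividing by it gives the percentage drop from the most recent peak at every point in time.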
Challenge #1: Counting Days
The stock market isn’t open every day. There are weekends, federal holidays, and random closures, which complicate calculating metrics like the Sharpe ratio and annualized return. I initially assumed that yfinance data only included trading days, but I needed to confirm how many actual trading days the backtest covered.
Then it hit me: my DataFrame already contained the portfolio value for each recorded date. By subtracting the first date from the last, I could find the total number of days in the simulation. From there, it was straightforward to plug that into the formula for annualized return. (I used an external resource for the math.)
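The date-subtraction trick can be sketched like this, assuming the portfolio values are indexed by a pandas `DatetimeIndex` (the function name and the 365.25-day year are my choices, not necessarily the library's):

```python
import pandas as pd

def annualized_return(values: pd.Series) -> float:
    """Sketch: annualize total return using the calendar days
    between the first and last recorded dates."""
    total_growth = values.iloc[-1] / values.iloc[0]
    # Subtracting the first date from the last yields a Timedelta
    days = (values.index[-1] - values.index[0]).days
    return total_growth ** (365.25 / days) - 1.0
```

Because the exponent uses calendar days rather than a count of trading rows, weekends and holidays in the gap between dates are handled for free.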
Challenge #2: AI Boundaries
I didn’t want StockSimPy to turn into a “vibe-coded” AI-assisted project. My rule was to only use AI for documentation, examples, and code formatting. At first, that worked great — it helped generate detailed documentation and catch small formatting inconsistencies. But after a while, I noticed my calculations were returning wrong results. Turns out, the AI reformatted some logic incorrectly. Luckily, I had staged my changes beforehand, so rolling back was painless.
Challenge #3: Confirming the Accuracy of the Results
To confirm accuracy, I chose a few stocks and looked up their published values for these performance metrics over a 1Y range. Then I ran my backtest over the same year, buying as many shares as possible on day 1 (emphasis on day 1). Most of the numbers were close enough, except for total return, which differed by about 10% in some extreme cases.
When I dug into it, I found my code was running the backtest over 364 days instead of a full year. The start date, when all the shares were bought, fell on a Monday, and the price had moved noticeably between the previous Friday's close and that Monday. That small gap in the entry price produced a surprisingly large difference in total return over the full year.
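To see why a one-day shift in the start date matters so much, here is a toy calculation with made-up prices (these numbers are purely illustrative, not from my actual backtest):

```python
# Hypothetical prices: the stock gapped up over the weekend.
friday_close = 95.00   # intended entry price (previous Friday)
monday_open = 100.00   # price the backtest actually bought at
final_price = 150.00   # price one year later

# Total return depends directly on the entry price, so a ~5% gap
# at entry shifts the final result by roughly the same factor.
return_from_friday = final_price / friday_close - 1   # ~57.9%
return_from_monday = final_price / monday_open - 1    # 50.0%
```

The entire discrepancy comes from the denominator: everything after day 1 is identical, but the ratio to the starting price is not.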
Conclusion
Overall, these challenges taught me that the accuracy of a finance project isn't just about the code; it's about the tiny assumptions buried in the data. With every print statement I use to debug and fix errors, stocksimpy gets closer to being a reliable tool. There is still a lot more to develop…
If there is anything you want me to add, or any suggestions, please reach out to me on my socials or comment below 👇. It really helps improve the library.
If you would like to support me:
⭐ Star stocksimpy on GitHub
📰 Read more of my posts on Medium
💬 Let’s connect on LinkedIn
📱 And on Instagram
Thanks for reading. Let me know what else you would suggest I add before the initial release.