I wear a WHOOP. I've coached people on movement and sleep for many years and I still can't answer that question for myself. The algorithm is locked. You get a number, you trust it, you stop there.
When we built Open Wearables, we decided the scoring layer should work differently. Sleep Score and Resilience Score shipped in v0.5 - every coefficient, every threshold, every weighting is in the repo, and you can fork them and tune them for endurance athletes, elder care, or clinical populations. You run them on your own infrastructure, and the same algorithms feed the MCP layer, so AI coaching can cite the actual data behind a recommendation instead of approximating.
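To make "every coefficient is in the repo" concrete, here's a rough sketch of what an open, tunable scoring layer looks like. The sub-scores, targets, and weights below are invented for illustration - they are not Open Wearables' actual formula:

```python
# Hypothetical sketch of an open scoring layer - weights and targets
# are illustrative, not the real Sleep Score coefficients.
from dataclasses import dataclass


@dataclass
class SleepScoreWeights:
    """Tunable weights - fork and recalibrate for your population."""
    duration: float = 0.40
    efficiency: float = 0.30
    rem_fraction: float = 0.15
    hrv_recovery: float = 0.15


def sleep_score(duration_h: float, efficiency: float, rem_fraction: float,
                hrv_recovery: float,
                w: SleepScoreWeights = SleepScoreWeights()) -> float:
    """Weighted sum of normalized sub-scores, each clamped to [0, 1]."""
    clamp = lambda x: max(0.0, min(1.0, x))
    parts = {
        "duration": clamp(duration_h / 8.0),        # 8 h target (assumed)
        "efficiency": clamp(efficiency),             # time asleep / time in bed
        "rem_fraction": clamp(rem_fraction / 0.25),  # ~25% REM target (assumed)
        "hrv_recovery": clamp(hrv_recovery),         # vs. personal baseline
    }
    total = sum(getattr(w, name) * value for name, value in parts.items())
    return round(100 * total, 1)
```

The point isn't these specific numbers - it's that when the formula is code you can read, disagreeing with a default means changing one field, not reverse-engineering a black box.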
The open health scoring piece is what I keep coming back to. The fact that Garmin and Oura calculate HRV differently and never explain why has always bothered me. Being able to see the actual formula matters more than most people realize.
Open Wearables
@dominik_cywinski Garmin and Oura don't just get different readings, they measure at different points during sleep and use different averaging methods. When you can't see the formula, you can't even tell if you're comparing the same thing.
@dominik_cywinski Garmin, Oura, Whoop, Apple... every single provider has its own algorithms. That can be frustrating.
Open Wearables
@dominik_cywinski being transparent about how the scores are calculated is one thing, but having an option to actually tweak them for yourself is even better - you should give it a try
Open Wearables
@dominik_cywinski exactly - same metric, different number, no explanation. you can't make decisions on data you can't trust
and the problem compounds when you switch devices. two years of Oura data vs six months of Garmin and you have no idea if the trend is real or just a formula change
open formulas fix that
Congrats, team @piotr_ratkowski @bartmichalak @piotr_sobusiak Hehehe, seems like Piotrs are dominating. How do you handle normalization across different wearable data formats?
Open Wearables
Thanks @mikhail_prasolov ! That's actually quite tricky - the data coming from different providers is surprisingly different, even though it's often the same pieces of information. As you mentioned, it arrives in different formats, sometimes aggregated or processed in some way. We've come up with what we call the Unified Health Data Model, and you can also check the Data Types and Coverage Matrix to see which data types we support and how they are normalized across providers.
Open Wearables
@piotr_ratkowski @bartmichalak @piotr_sobusiak @mikhail_prasolov
haha Piotr representation is strong on this team, can confirm
normalization is the boring core of the whole thing - every provider has different field names, different units, different nulls, different timestamps. we map everything to a unified schema at ingestion so whatever's above it (scoring, AI, your app) never has to care where the data came from
it's unglamorous work but it's what makes everything else possible
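For a feel of what that ingestion-time mapping looks like, here's a toy sketch. The provider payload shapes and field names are invented - real provider APIs differ - but the idea is the same: every source gets translated into one schema before anything downstream sees it:

```python
# Illustrative ingestion-time normalization - payload shapes and field
# names here are invented, not the real provider APIs.
from datetime import datetime, timezone


def normalize_provider_a(raw: dict) -> dict:
    """Hypothetical provider A: epoch seconds + 'heartRate'."""
    return {
        "timestamp": datetime.fromtimestamp(raw["ts"], tz=timezone.utc).isoformat(),
        "hr_bpm": raw["heartRate"],
        "source": "provider_a",
    }


def normalize_provider_b(raw: dict) -> dict:
    """Hypothetical provider B: ISO timestamp + 'bpm', possibly missing."""
    return {
        "timestamp": raw["time"],
        "hr_bpm": raw.get("bpm"),  # None stays None; consumers handle gaps
        "source": "provider_b",
    }
```

Different field names, different time encodings, different null behavior - but everything above the ingestion layer only ever sees `timestamp` / `hr_bpm` / `source`.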
@piotr_ratkowski @bartmichalak @piotr_sobusiak @mikhail_prasolov Our unified data model works through a Strategy class per provider. Each one inherits from an abstract base strategy, which enforces a consistent structure: predefined areas such as workouts, continuous data (sleep, HR, etc.), provider authorisation, and the method of data retrieval.

In each area there are either gaps to fill or defined paths to choose from. Workouts and continuous data require defining a function that normalises the provider's payload to our model - which takes knowledge of what the provider's API actually returns - while the rest of the processing is inherited from the base strategy. Authorisation means specifying the method imposed by the provider's API, and the data retrieval path determines whether we need to poll the API periodically ourselves or can connect once and receive data almost in real time.

Finally, the normalised data lands in the database, from which it is picked up by, for example, our webhooks, which notify clients' backends whenever new data becomes available.
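A minimal sketch of that per-provider Strategy pattern, with class and method names that are hypothetical rather than the actual repo's API:

```python
# Hypothetical sketch of the per-provider Strategy pattern described
# above - names are invented, not the actual Open Wearables classes.
from abc import ABC, abstractmethod


class BaseProviderStrategy(ABC):
    """Enforces one structure per provider: auth, retrieval, normalization."""

    auth_method: str = "oauth2"  # chosen from the paths the provider's API allows
    retrieval: str = "poll"      # "poll" the API ourselves vs. "push" (near real time)

    @abstractmethod
    def normalize_workout(self, payload: dict) -> dict:
        """The gap each provider must fill: map its payload to the unified model."""

    def ingest_workout(self, payload: dict) -> dict:
        # Inherited pipeline: normalize, then hand off to storage/webhooks.
        record = self.normalize_workout(payload)
        return self.store(record)

    def store(self, record: dict) -> dict:
        # Stand-in for the database write that later feeds client webhooks.
        record["stored"] = True
        return record


class ExampleProviderStrategy(BaseProviderStrategy):
    retrieval = "push"

    def normalize_workout(self, payload: dict) -> dict:
        # Only the provider-specific mapping lives here; the rest is inherited.
        return {"sport": payload["activityType"], "duration_s": payload["secs"]}
```

Adding a provider then means filling in the abstract gaps and picking the auth/retrieval path - the shared pipeline comes for free.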
I wear an Oura ring and have always been curious how the readiness score actually gets calculated. The fact that you can actually look at the algorithm here seems like it could change how much I trust the number. Do you publish explanations alongside the code?
Open Wearables
@michal_wlodarczyk1 yes we do - every Open Wearables algorithm is open and can be audited. Our Health Science Lead also runs her own Substack where she explains the details behind the algorithms.
https://thesciencebehindwearables.substack.com/
@michal_wlodarczyk1 But to clarify - we don't have access to Oura's algorithms; nobody does. Our algorithms, though, are accessible in our GitHub repo. They even have a separate directory, so they're easy to find and analyse.
@michal_wlodarczyk1 That’s a good instinct — “readiness” scores feel objective until you realize they’re really just weighted models over a handful of signals.
@michal_wlodarczyk1 As a running coach, this transparency is exactly what I've been waiting for. When athletes ask me "should I train hard today?", a black-box score doesn't help me make the call but being able to see what signals are weighted and why changes everything. Trust in the data = trust in the decision :)
Congrats on the launch!! Is there something like a web dashboard for regular users, or is this purely for developers building things on top?
Open Wearables
@agata_wieczorek Thanks Agata! OW is a developer infrastructure, not a consumer product. There's a developer portal for managing the deployment (users, OAuth credentials, debugging), but end users see whatever dashboard you build on top. Some teams ship their own UI, others embed widgets we provide.
@agata_wieczorek As Piotr said, there is only a dashboard for administrators. So a company using Open Wearables as a platform has to create its own UI for regular users.
Open Wearables
@agata_wieczorek it is mostly b2b / devs but you can also run your own instance with one click deployment on Railway and play around with your data through MCP server we provide 🙂
Open Wearables
@agata_wieczorek thank you!
there is a dashboard - you can connect your devices, see your data and scores. but the main value right now is for developers building health apps on top of it
if you're not a developer the most interesting part is probably the AI layer - asking questions about your own health data in plain language. that's where it gets useful for everyone
Open source health scoring algorithms is the part that got my attention. Most of these platforms treat the scoring logic as the crown jewels. Who decides what goes into the scoring models?
Open Wearables
@konrad_talaga1 You're right that most platforms guard scoring as the crown jewel. We went the opposite way on purpose: black-box scores are fine until a clinician or regulator asks why the number says what it says.
Who decides: our own R&D, led by Anna Zych (Health Science Lead), grounded in published research where it exists (sleep stages, HRV, training load all have public literature) and open discussion in PRs and GitHub issues where it doesn't. Momentum's engineering team owns direction, contributors shape it. Thresholds are tunable, so if you disagree with our defaults you fork and calibrate for your population.
@konrad_talaga1 As Piotr said. If you want to ask questions about the scoring models, you can find Anna on LinkedIn or our Discord :)
Open Wearables
@konrad_talaga1 that's the best part - everything is open (including the reasoning behind why the scores are built this way), so you can use the OW scores as a starting point and tweak from there
Interesting concept. Curious how it handles the cases where the same underlying metric (say, HRV) gets measured differently by different devices. Does it normalize across them or leave that to the developer?
Open Wearables
@michal_grela We normalize what's normalizable: units (ms), timestamps, structure. Same API call regardless of source. What we can't paper over is measurement methodology. Garmin and Oura measure HRV overnight, Apple during breathing exercises, Whoop computes its own daily baseline. Same name, different signal. We surface the source and context in the response so you decide whether to treat them as comparable or filter to one provider. Faking consistency would hide a real semantic difference.
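A toy sketch of what "surface the source and context" means in practice - the field names and values below are illustrative, not the actual response schema:

```python
# Illustrative only - field names and the response shape are invented.
# The idea: methodology metadata travels with every reading, so the
# caller decides what is comparable instead of the platform faking it.
readings = [
    {"metric": "hrv_rmssd_ms", "value": 62, "source": "garmin",
     "context": "overnight"},
    {"metric": "hrv_rmssd_ms", "value": 55, "source": "oura",
     "context": "overnight"},
    {"metric": "hrv_sdnn_ms", "value": 48, "source": "apple",
     "context": "breathing_session"},
]


def comparable(readings: list[dict], metric: str, context: str) -> list[dict]:
    """Keep only readings that share both the metric and the measurement context."""
    return [r for r in readings
            if r["metric"] == metric and r["context"] == context]


overnight_rmssd = comparable(readings, "hrv_rmssd_ms", "overnight")
```

Here the Apple reading simply never enters the overnight-RMSSD comparison - the semantic difference stays visible instead of being averaged away.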
@michal_grela
That's how ;)
Open Wearables
@michal_grela as Sebastian mentioned - you can set different priorities for all providers we already support and in the future we are planning to make it even more granular and set priorities for particular devices.
So cool for businesses!
Do you plan to release / or have something for regular users who don't know how to use open source? :D I mean, I would love to audit my Apple Health workouts and get some feedback out of it.
Open Wearables
@mwarcholinski Thanks! Open Wearables is primarily a B2B product, but with a little Claude support you should be able to run it locally - everything is dockerized and shouldn't take more than a few minutes - or use the one-click Railway deployment to set up your own instance in the cloud. Then you could also use the MCP server to talk to your Apple Health data and get the feedback you need 😄
@mwarcholinski As Piotr said, it's B2B, but you can still use cloud-based providers as a private user (by cloud-based I mean Apple, Samsung & Google).
Open Wearables
@mwarcholinski that's exactly where we're going with the AI layer - you ask a question in plain language, you get a real answer based on your actual data
not there yet for non-technical users out of the box, but it's the direction. watch this space