{"id":265,"date":"2026-04-13T23:56:49","date_gmt":"2026-04-14T03:56:49","guid":{"rendered":"https:\/\/wp.stgeorges.bc.ca\/michaelh\/?p=265"},"modified":"2026-04-16T02:22:11","modified_gmt":"2026-04-16T06:22:11","slug":"personal-project-update-2","status":"publish","type":"post","link":"https:\/\/wp.stgeorges.bc.ca\/michaelh\/uncategorized\/personal-project-update-2\/","title":{"rendered":"Personal Project Update #2"},"content":{"rendered":"\n<p>Back in my <a href=\"https:\/\/wp.stgeorges.bc.ca\/michaelh\/uncategorized\/personal-project-update-1\/\">first update<\/a>, I talked about the problem I&#8217;m solving: athletes don&#8217;t have an accessible, data-driven way to analyze their technique. You either hire an expensive biomechanics lab, or you film yourself and guess. I&#8217;m building <strong>Motion X<\/strong> to fix that. It is an AI-powered biomechanics platform that gives you the kind of precise, frame-by-frame feedback that used to require a sports science degree.<\/p>\n\n\n\n<p>Since the last post, Motion X went from an idea with some early research to a functional web application. In this post, I&#8217;ll walk you through what it can do now.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">The Core: Video Analysis<\/h2>\n\n\n\n<p>The flagship feature of Motion X is <strong>dual video comparison<\/strong>. You upload two videos: a reference (your coach, a pro athlete, or even a clip imported directly from a YouTube video) and your current attempt. 
The system then breaks down exactly where your form differs, joint by joint, frame by frame.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"668\" src=\"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-18.43.29-1024x668.png\" alt=\"\" class=\"wp-image-266\" srcset=\"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-18.43.29-1024x668.png 1024w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-18.43.29-300x196.png 300w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-18.43.29-768x501.png 768w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-18.43.29-1536x1002.png 1536w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-18.43.29-2048x1336.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p class=\"has-text-align-center has-medium-font-size\">Video Analysis Page<\/p>\n\n\n\n<p class=\"has-x-large-font-size\">Here&#8217;s what happens when you hit &#8220;Analyze&#8221;:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Pose Detection<\/strong> \u2014 Every frame of both videos gets processed through pose detection, which identifies 33 anatomical landmarks on the body: shoulders, elbows, wrists, hips, knees, ankles, the full skeleton. The system tracks all 33 points but focuses scoring on 8 key joint angles: both elbows, both shoulders, both hips, and both knees. 
These are the joints that matter most for athletic form across almost every sport.<\/li>\n\n\n\n<li><strong>Angle Calculation<\/strong> \u2014 For each of those 8 joints, the system calculates the precise angle using vector math (specifically <code>atan2<\/code>-based angle computation between three connected landmarks). So instead of &#8220;your arm looks a bit off,&#8221; you get &#8220;your right elbow is at 142\u00b0 when the reference is at 158\u00b0 \u2014 that&#8217;s a 16\u00b0 deviation.&#8221;<\/li>\n\n\n\n<li><strong>Temporal Alignment (STCF-DTW)<\/strong> \u2014 This is where it gets interesting. Two people never perform a movement at exactly the same speed. A squat might take you 3 seconds but the reference only 2, so naive frame-to-frame comparison would be completely wrong. To handle this, I implemented <strong>Spatiotemporal Coupling Feature Dynamic Time Warping<\/strong>: a modified DTW algorithm that uses a 32-dimensional feature vector per frame (8 angles + 8 angular velocities + 8 accelerations + 8 coupling ratios between adjacent joints) to intelligently align both videos by movement phase, not by time. This means the system knows that your frame 47 corresponds to the reference&#8217;s frame 32 because you&#8217;re both at the bottom of the squat, even if you got there at different speeds.<\/li>\n\n\n\n<li><strong>Scoring<\/strong> \u2014 Each joint gets a score from 0\u2013100 based on its angle deviation from the reference. A deviation under 5\u00b0 earns a perfect 100; under 10\u00b0 scores between 80 and 90. The further off you are, the lower the score. You get per-joint scores, per-frame scores, and an overall score for the entire movement.<\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\">Interactive Joint Inspector<\/h3>\n\n\n\n<p>On top of the video, you can toggle the Joint Inspector. 
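The angle and scoring steps above are compact enough to sketch. A minimal Python version follows; note that the exact interpolation inside each band and the falloff beyond 10 degrees are my assumptions, since the post only pins down the first two scoring bands.

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by landmarks a-b-c (each an (x, y) pair)."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0]) - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360.0 - ang if ang > 180.0 else ang  # fold reflex angles back to 0-180

def joint_score(user_deg, ref_deg):
    """Map angle deviation to a 0-100 score using the bands described above."""
    dev = abs(user_deg - ref_deg)
    if dev < 5:
        return 100.0                              # under 5 degrees: perfect score
    if dev < 10:
        return 90.0 - (dev - 5) * 2.0             # under 10 degrees: the 80-90 band
    return max(0.0, 80.0 - (dev - 10) * 2.0)      # assumed linear falloff beyond that

# The elbow example from the post: 142 degrees vs. a 158-degree reference.
print(joint_score(142, 158))  # 68.0
```

A bent-arm check: for landmarks shoulder (0, 1), elbow (0, 0), wrist (1, 0), `joint_angle` returns 90 degrees.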
This allows you to click on any joint in the skeleton overlay, and a popup appears showing your exact angle, the reference angle, the difference, and a score.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-4-3 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Video Analysis Demo\" width=\"500\" height=\"375\" src=\"https:\/\/www.youtube.com\/embed\/iy4Ph15VLvg?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p>(The two videos in this demo show the same person with slight movement deviations. In a real scenario, however, it would be two different people performing the same movement.)<\/p>\n\n\n\n<p>This video demos the video analysis feature. The user can turn on the skeletal overlay, which supports hovering. At the start, the two videos&#8217; movements are out of sync. Clicking the &#8220;sync&#8221; button automatically aligns the two videos by body movement using the STCF-DTW algorithm.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Solo Video Analysis<\/h3>\n\n\n\n<p>Users won&#8217;t always have a reference video; sometimes they just want to analyze form on its own. So the system also supports single-video analysis. 
You upload a single video and get full skeleton tracking, joint angles over time, and AI feedback, just without a comparison baseline.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"670\" height=\"1024\" src=\"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-21.03.16-670x1024.png\" alt=\"\" class=\"wp-image-279\" srcset=\"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-21.03.16-670x1024.png 670w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-21.03.16-196x300.png 196w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-21.03.16-768x1173.png 768w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-21.03.16-1006x1536.png 1006w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-21.03.16.png 1160w\" sizes=\"auto, (max-width: 670px) 100vw, 670px\" \/><\/figure>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">Image Analysis<\/h2>\n\n\n\n<p>Not everything needs to be a video. For quick form checks, a single picture is enough: your deadlift setup, your rowing catch position, your squat depth. Image Analysis lets you upload two images side-by-side and instantly compare joint angles.<\/p>\n\n\n\n<p>The foundation is essentially the same: same pose detection, same 8-joint angle model, and the same scoring system. You get a radar chart showing all 8 joints at once, a bar chart comparing your angles to the reference, and a detailed breakdown table. 
It takes about 2 seconds and gives you a full biomechanical comparison from a single photo.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"687\" src=\"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/4_1776133962_Screenshot_2026-04-13_at_19.31.23-1024x687.png\" alt=\"\" class=\"wp-image-267\" srcset=\"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/4_1776133962_Screenshot_2026-04-13_at_19.31.23-1024x687.png 1024w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/4_1776133962_Screenshot_2026-04-13_at_19.31.23-300x201.png 300w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/4_1776133962_Screenshot_2026-04-13_at_19.31.23-768x515.png 768w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/4_1776133962_Screenshot_2026-04-13_at_19.31.23-1536x1030.png 1536w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/4_1776133962_Screenshot_2026-04-13_at_19.31.23-2048x1374.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">Real-Time Analysis<\/h2>\n\n\n\n<p>This feature is still in development, but I expect it to be one of the best. It enables Real-Time Practice: the user turns on their camera and gets live skeleton tracking with instant feedback as they move.<\/p>\n\n\n\n<p>Camera frames stream to the server via WebSocket, get processed by the backend, and the results come back with your joint angles and scores; the goal is for the whole round trip to take under 100ms. You see your skeleton overlaid on the video feed in real time, with joints lighting up as you move. 
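A stripped-down version of that server loop might look like this. This is only a sketch: it assumes the third-party `websockets` package, and the port, message format, and `process_frame` stub are placeholders, not the actual Motion X backend.

```python
# Sketch of the real-time loop: one pose-analysis result sent back per camera frame.
import asyncio
import json

def process_frame(frame_bytes):
    # Placeholder for the real pose-detection + angle-scoring step on one
    # encoded camera frame.
    return {"angles": {}, "scores": {}, "bytes_received": len(frame_bytes)}

async def handle(ws):
    async for frame_bytes in ws:           # each message is one encoded camera frame
        result = process_frame(frame_bytes)
        await ws.send(json.dumps(result))  # browser draws the skeleton from this

async def main():
    import websockets                      # third-party; assumed installed
    async with websockets.serve(handle, "localhost", 8765):
        await asyncio.Future()             # serve until cancelled
```

Keeping the handler a thin loop like this matters for the sub-100ms goal: any heavy lifting belongs in `process_frame`, which can be moved off the event loop if it blocks too long.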
Load a reference video alongside it, and you can practice matching the form live.<\/p>\n\n\n\n<p>It&#8217;s like having a coach watching you in real time, except this coach can see every angle to the degree.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">AI \u2014 The Brain Behind the Data<\/h2>\n\n\n\n<p>Numbers and charts are great, but most athletes don&#8217;t want to interpret a radar chart. They want someone to tell them: <em>&#8220;Your right knee is a bit off at the bottom of the squat \u2014 focus on pushing your knees out over your toes.&#8221;<\/em> That&#8217;s what the AI layer does.<\/p>\n\n\n\n<p>I also discussed this with Mr. Crompton, and we both think it is a feature that greatly improves the user experience.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Contextual Analysis<\/h3>\n\n\n\n<p>After any analysis (video, image, or real-time), you can request an AI Professional Analysis. This isn&#8217;t generic ChatGPT-style advice. The AI receives:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Your full motion data with every joint angle, every frame<\/li>\n\n\n\n<li>A structured &#8220;motion brief&#8221; highlighting your worst frames, biggest deviations, and phase-by-phase breakdown<\/li>\n\n\n\n<li>Your sport (auto-detected from the video)<\/li>\n\n\n\n<li>Your complete session history, including what you&#8217;ve worked on before, what cues you were given, and whether you improved<\/li>\n<\/ul>\n\n\n\n<p>The result is a detailed, professional coaching analysis that references specific moments in your video. It&#8217;ll say things like &#8220;at frame 47, your elbow drops below parallel during the recovery phase&#8221;, which is a real frame you can jump to and verify. No hallucinated advice. 
<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Session Memory \u2014 &#8220;Am I Getting Better?&#8221;<\/h3>\n\n\n\n<p>This is the feature that really sets Motion X apart from every other AI fitness tool I&#8217;ve seen. Most AI coaching apps are stateless: they analyze your movements in isolation and forget you exist. Motion X remembers.<\/p>\n\n\n\n<p>Every session&#8217;s key metrics get saved to your profile. The next time you come back, the AI checks: did you follow the cues from last time? Did your angles actually improve? It tells you right on the dashboard and gives you the next cue to focus on.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"275\" src=\"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.34.55-1024x275.png\" alt=\"\" class=\"wp-image-268\" srcset=\"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.34.55-1024x275.png 1024w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.34.55-300x81.png 300w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.34.55-768x206.png 768w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.34.55-1536x412.png 1536w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.34.55-2048x550.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\">AI Chat<\/h3>\n\n\n\n<p>Beyond one-off analysis, there&#8217;s a full <strong>AI Assistant<\/strong> chat. You can ask follow-up questions about your technique, get explanations of biomechanical concepts, or dive deeper into specific joints. 
The conversation maintains context, so you can have a back-and-forth like you would with a real coach.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"960\" src=\"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.36.03-1024x960.png\" alt=\"\" class=\"wp-image-269\" srcset=\"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.36.03-1024x960.png 1024w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.36.03-300x281.png 300w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.36.03-768x720.png 768w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.36.03-1536x1440.png 1536w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.36.03.png 1698w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">Challenges<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Building the Algorithm<\/h3>\n\n\n\n<p>The STCF-DTW alignment system was by far the hardest technical challenge. Standard Dynamic Time Warping gets you 80% of the way there, but it doesn&#8217;t understand that a squat and a deadlift have fundamentally different phase structures. The spatiotemporal coupling features (velocity, acceleration, and joint coupling ratios) were my innovation to make the alignment phase-aware. 
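Stripped of the coupling features, the backbone of the alignment is classic dynamic time warping. A minimal sketch follows; the real STCF-DTW runs this over the 32-dimensional feature vectors and adds the phase-aware coupling terms, which are omitted here.

```python
import numpy as np

def dtw_align(ref, usr):
    """Align two sequences of per-frame feature vectors with plain DTW.

    ref: (n, d) array, usr: (m, d) array. Returns a list of (ref_idx, usr_idx)
    pairs mapping frames of the user video onto frames of the reference.
    """
    n, m = len(ref), len(usr)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(ref[i - 1] - usr[j - 1])   # per-frame distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack the optimal warping path from the corner.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```

On real data, `ref` and `usr` would be the per-frame feature matrices described above, and the returned index pairs are what a "sync" operation uses: even if the user's squat takes more frames than the reference, matching phases end up paired.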
Getting the 32-dimensional feature vector right was a balancing act: too few dimensions and the alignment is sloppy; too many and it overfits to noise.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"443\" src=\"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.46.13-1024x443.png\" alt=\"\" class=\"wp-image-271\" srcset=\"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.46.13-1024x443.png 1024w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.46.13-300x130.png 300w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.46.13-768x332.png 768w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.46.13-1536x665.png 1536w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.46.13-2048x886.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"842\" src=\"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.46.27-1024x842.png\" alt=\"\" class=\"wp-image-272\" srcset=\"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.46.27-1024x842.png 1024w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.46.27-300x247.png 300w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.46.27-768x632.png 768w, 
https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.46.27-1536x1264.png 1536w, https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-content\/uploads\/sites\/33\/2026\/04\/Screenshot-2026-04-13-at-20.46.27-2048x1685.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\">Making AI Feedback Actually Useful<\/h3>\n\n\n\n<p>One of my main focuses was the user experience, and the first version of the AI feedback was not great. It would say things like &#8220;your form could be improved&#8221;. Thanks, very helpful. The breakthrough was the <strong>motion brief<\/strong>: a structured JSON document that gives the AI model precise, frame-level data instead of vague summaries. Once the AI could see that your right elbow was at 142\u00b0 at frame 47 while the reference was at 158\u00b0, the feedback became specific and actionable. Grounding the AI in real data was the difference between a gimmick and a genuinely useful tool.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">Surprises<\/h2>\n\n\n\n<p><strong>Auto-detecting the sport changes the experience completely.<\/strong> When the AI knows you&#8217;re doing a squat vs. a rowing stroke, the feedback is dramatically more relevant. I added computer vision-based sport detection (upload a video and it figures out what you&#8217;re doing). This drove home how much context matters.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">Am I Going to Make the Deadline?<\/h2>\n\n\n\n<p>I believe it is an achievable goal, though it depends on what I define as the final product. Personally, I really want to push this to production grade and start beta testing with real users. 
Before that, though, there is still a lot to do, such as fixing bugs and polishing details.<\/p>\n\n\n\n<p>It is worth pointing out that the core product already works: frame-by-frame analysis with scores, AI coaching that references your specific movements, progress tracking over time, and professional PDF report export. That&#8217;s a functional product.<\/p>\n\n\n\n<p>One of the main features still to be added is the mobile experience: I want to optimize touch interactions for phone users and run testing with real users. There&#8217;s also production infrastructure work: rate limiting, error tracking, and background processing for longer videos.<\/p>\n\n\n\n<p>My strategy is to stay on track: keep working on the core flow until it&#8217;s bulletproof, then polish the other details. The foundation is solid, the architecture is clean, and the hardest algorithmic work is done. I&#8217;m feeling pretty good about it.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<p class=\"has-large-font-size\"><strong>Current Stage: Closed Beta<\/strong><\/p>\n\n\n\n<p>Next Stage: Beta User Testing (Target Date: Apr 23)<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Back in my first update, I talked about the problem I&#8217;m solving: athletes don&#8217;t have an accessible, data-driven way to analyze their technique. You either hire an expensive biomechanics lab, or you film yourself and guess. I&#8217;m building Motion X to fix that. 
It is an AI-powered biomechanics platform that gives you the kind of [&hellip;]<\/p>\n","protected":false},"author":31,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-265","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-json\/wp\/v2\/posts\/265","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-json\/wp\/v2\/users\/31"}],"replies":[{"embeddable":true,"href":"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-json\/wp\/v2\/comments?post=265"}],"version-history":[{"count":9,"href":"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-json\/wp\/v2\/posts\/265\/revisions"}],"predecessor-version":[{"id":282,"href":"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-json\/wp\/v2\/posts\/265\/revisions\/282"}],"wp:attachment":[{"href":"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-json\/wp\/v2\/media?parent=265"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-json\/wp\/v2\/categories?post=265"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wp.stgeorges.bc.ca\/michaelh\/wp-json\/wp\/v2\/tags?post=265"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}