Trying to get BlazePose TFJS working in Angular 12. I have an empty project and have installed the required packages (I think). My package.json looks like this:
{
  "name": "posetrackingtest",
  "version": "0.0.0",
  "scripts": {
    "ng": "ng",
    "start": "ng serve",
    "build": "ng build",
    "watch": "ng build --watch --configuration development",
    "test": "ng test"
  },
  "private": true,
  "dependencies": {
    "@angular/animations": "~12.0.3",
    "@angular/common": "~12.0.3",
    "@angular/compiler": "~12.0.3",
    "@angular/core": "~12.0.3",
    "@angular/forms": "~12.0.3",
    "@angular/platform-browser": "~12.0.3",
    "@angular/platform-browser-dynamic": "~12.0.3",
    "@angular/router": "~12.0.3",
    "@mediapipe/pose": "^0.3.1621277220",
    "@tensorflow-models/pose-detection": "^0.0.3",
    "@tensorflow/tfjs-backend-webgl": "^3.7.0",
    "@tensorflow/tfjs-converter": "^3.7.0",
    "@tensorflow/tfjs-core": "^3.7.0",
    "rxjs": "~6.6.0",
    "tslib": "^2.1.0",
    "zone.js": "~0.11.4"
  },
  "devDependencies": {
    "@angular-devkit/build-angular": "~12.0.3",
    "@angular/cli": "~12.0.3",
    "@angular/compiler-cli": "~12.0.3",
    "@types/jasmine": "~3.6.0",
    "@types/node": "^12.11.1",
    "jasmine-core": "~3.7.0",
    "karma": "~6.3.0",
    "karma-chrome-launcher": "~3.1.0",
    "karma-coverage": "~2.0.3",
    "karma-jasmine": "~4.0.0",
    "karma-jasmine-html-reporter": "^1.5.0",
    "typescript": "~4.2.3"
  }
}
I have a single component with the following HTML:
<video
  #videoplayer
  id="videoplayer"
  autoplay>
</video>
My TypeScript code for the component is:
import { AfterViewInit, Component, ElementRef, OnInit, ViewChild } from '@angular/core';
import '@tensorflow/tfjs-backend-webgl';
import * as poseDetection from '@tensorflow-models/pose-detection';

@Component({
  selector: 'app-pose',
  templateUrl: './pose.component.html',
  styleUrls: ['./pose.component.css']
})
export class PoseComponent implements OnInit, AfterViewInit {
  @ViewChild("videoplayer", { static: false }) videoplayer: ElementRef;

  public detector: any;
  public poses: any;
  public error: string;

  constructor() { }

  ngOnInit(): void {}

  ngAfterViewInit(): void {
    this.init();
  }

  async init() {
    if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
      try {
        const stream = await navigator.mediaDevices.getUserMedia({
          video: true
        });
        if (stream) {
          console.log(stream);
          this.videoplayer.nativeElement.srcObject = stream;
          console.log(this.videoplayer.nativeElement);
          console.log("About to load detector");
          let detectorConfig = {
            runtime: 'tfjs',
            enableSmoothing: true,
            modelType: 'full'
          };
          this.detector = await poseDetection.createDetector(poseDetection.SupportedModels.BlazePose, detectorConfig);
          console.log(this.detector);
          console.log("Detector loaded");
          let poses = await this.detector.estimatePoses(this.videoplayer.nativeElement);
          console.log(poses);
          this.error = null;
        } else {
          this.error = "You have no output video device";
        }
      } catch (e) {
        this.error = e;
      }
    }
  }
}
I don't get any errors and can see myself via the webcam on the HTML page when I run it, but the output of console.log(poses); is just an empty list [], i.e. no pose data.
Also, how do I get the let poses = await this.detector.estimatePoses(this.videoplayer.nativeElement); line to execute continuously? Does the this.poses variable get updated automatically, or do I need to loop somehow?
What am I doing wrong please? Thanks.
I was having a relatively similar issue (null BlazePose output, although I was using pure JavaScript). To fix it, I set up my camera following what Google did in their camera.js file (https://github.com/tensorflow/tfjs-models/blob/master/pose-detection/demos/live_video/src/camera.js).
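The key point in that setup is not running detection until the video element actually has frames and non-zero dimensions; otherwise estimatePoses can return an empty result. A minimal sketch of the same idea, adapted to the component above (inside init(), after getUserMedia resolves):

const video = this.videoplayer.nativeElement;
video.srcObject = stream;
// Wait for the metadata so the video has real dimensions before detecting.
await new Promise<void>((resolve) => {
  video.onloadedmetadata = () => resolve();
});
// Give the element explicit dimensions; a zero-sized video yields no poses.
video.width = video.videoWidth;
video.height = video.videoHeight;
video.play();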
To answer the part about getting that line to execute continuously: I used requestAnimationFrame.
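A minimal sketch of that loop as a method on the component (renderPrediction is just a name I'm using here, not something from the pose-detection API):

async renderPrediction() {
  // Estimate poses for the current video frame and keep the result on the component.
  this.poses = await this.detector.estimatePoses(this.videoplayer.nativeElement);
  // Schedule the next estimation for the next animation frame.
  requestAnimationFrame(() => this.renderPrediction());
}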
We can then start this loop once the video has loaded.
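In the component above, that could mean replacing the single estimatePoses call with the loop once the detector is ready and the metadata wait from the earlier sketch has resolved (again, renderPrediction is the assumed method from the previous snippet):

this.detector = await poseDetection.createDetector(
  poseDetection.SupportedModels.BlazePose,
  detectorConfig
);
// Kick off the continuous detection loop instead of a one-shot estimate.
this.renderPrediction();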
This can also be done with setInterval; however, requestAnimationFrame is preferable, as it will not cause a backlog if you end up falling behind the current frame: it simply skips ahead to the current frame.
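For comparison, a setInterval version might look something like this (the 33 ms interval is an arbitrary choice, roughly 30 polls per second):

// Poll at a fixed rate rather than syncing to the display's refresh.
setInterval(async () => {
  this.poses = await this.detector.estimatePoses(this.videoplayer.nativeElement);
}, 33);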