Nextcloud Face Recognition v0.9.30


Codename???? 🤔

(image: the release codename meme)

[0.9.30] - 2023-08-23

  • Implement the Chinese Whispers Clustering algorithm in native PHP.
  • Open the model before requesting information. Issue #679
  • If Imaginary is configured, check that it is accessible before using it.
  • If Memories is installed, show people's photos in this app.
  • Add face thumbnails when searching for persons.
  • Disable auto rotate for HEIF images in imaginary. Issue #662
  • Add the option to print progress in JSON format.

Why that meme as codename?

To the delight of many (Issues #690, #688, #687, #685, #649, #632, #627, #625, etc.), "Implement the Chinese Whispers Clustering algorithm in native PHP" simply means that we no longer depend on the pdlib extension, although it goes without saying that using it is still highly recommended.

So, the application can now be installed without the pdlib or bzip2 extensions. However, if you want to use models 1, 2, 3, or 4, you still need them.
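For the curious, here is a minimal sketch of how Chinese Whispers clustering works in plain PHP. It is not the code shipped in the app; the function names and the 0.4 distance threshold are illustrative assumptions (dlib's face recognition examples use a similar threshold for its 128-d descriptors).

```php
<?php
// Minimal sketch of Chinese Whispers clustering in plain PHP.
// NOT the app's actual implementation; names and threshold are assumptions.

/** Euclidean distance between two face descriptors (equal-length float arrays). */
function descriptorDistance(array $a, array $b): float {
    $sum = 0.0;
    foreach ($a as $i => $v) {
        $d = $v - $b[$i];
        $sum += $d * $d;
    }
    return sqrt($sum);
}

/**
 * Cluster face descriptors with Chinese Whispers.
 *
 * @param float[][] $descriptors one descriptor per detected face
 * @return int[] a cluster label for every descriptor index
 */
function chineseWhispers(array $descriptors, float $threshold = 0.4, int $iterations = 20): array {
    $n = count($descriptors);
    if ($n === 0) {
        return [];
    }

    // 1. Build the graph: connect every pair of faces closer than the threshold.
    $neighbours = array_fill(0, $n, []);
    for ($i = 0; $i < $n; $i++) {
        for ($j = $i + 1; $j < $n; $j++) {
            if (descriptorDistance($descriptors[$i], $descriptors[$j]) < $threshold) {
                $neighbours[$i][] = $j;
                $neighbours[$j][] = $i;
            }
        }
    }

    // 2. Every face starts in its own cluster.
    $labels = range(0, $n - 1);

    // 3. Whisper: each node adopts the label most common among its neighbours.
    $order = range(0, $n - 1);
    for ($iter = 0; $iter < $iterations; $iter++) {
        shuffle($order);
        $changed = false;
        foreach ($order as $node) {
            if (empty($neighbours[$node])) {
                continue;
            }
            $votes = [];
            foreach ($neighbours[$node] as $other) {
                $votes[$labels[$other]] = ($votes[$labels[$other]] ?? 0) + 1;
            }
            arsort($votes);
            $best = array_key_first($votes);
            if ($best !== $labels[$node]) {
                $labels[$node] = $best;
                $changed = true;
            }
        }
        if (!$changed) {
            break; // labels stabilised, no need for further rounds
        }
    }

    return $labels;
}
```

Treat this only as an outline of the algorithm; the port that actually ships in the app surely differs in its details.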

Do you insist on not installing these extensions?
Then you must configure the external model and select model 5 here, and thus free yourself from these extensions. 😬
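If you want to try that route, the setup from the command line might look roughly like the following. The occ face:setup --model option and occ face:background_job are documented by the project, but the external-model config key names below are assumptions from memory, so check the project's wiki before copying them.

```bash
# Hypothetical example: point the app at an external model service and select model 5.
# The config key names are assumptions; verify them in the facerecognition wiki.
sudo -u www-data php occ config:app:set facerecognition external_model_url --value "http://localhost:8080"
sudo -u www-data php occ config:app:set facerecognition external_model_api_key --value "some-shared-key"

# Select the external model (model 5) and analyze your photos as usual.
sudo -u www-data php occ face:setup --model 5
sudo -u www-data php occ face:background_job
```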

Well, you will understand that it is slower; however, I must admit that with JIT enabled it is quite acceptable, and that is the only reason I decided to publish it.
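For reference, a minimal php.ini sketch of what "JIT enabled" means here, assuming PHP 8.0 or later with OPcache; the buffer size is just an illustrative value.

```ini
; Enable OPcache and its tracing JIT (PHP 8.0+).
opcache.enable=1
opcache.enable_cli=1         ; occ jobs run from the CLI, so enable it there too
opcache.jit=tracing          ; equivalent to the numeric value 1255
opcache.jit_buffer_size=128M ; illustrative size; 0 would disable the JIT
```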

Some statistics

I simply added 2162 Big Bang Theory photos to my test server, resulting in 6059 faces, and clustered them with both implementations.

Dlib (reference):

  • User time (seconds): 10.53
  • Maximum resident set size (kbytes): 245412

PHP:

  • User time (seconds): 45.45
  • Maximum resident set size (kbytes): 266060

Time:
=> 45.45 / 10.53 = 4.316239316

Memory:
=> 266060 / 245412 = 1.084136065

PHP + JIT:

  • User time (seconds): 16.20
  • Maximum resident set size (kbytes): 283760

Time:
=> 16.20 / 10.53 = 1.538461538

Memory:
=> 283760 / 245412 = 1.156259678

So, as you can see, the PHP implementation takes about 4.3 times as long (roughly 3.3 times slower), but with JIT enabled it is only about 54 percent slower. I guess that's OK, and memory use didn't increase much. 😄

Note:

Once again I insist on recommending the use of the local models (with dlib), and I invite those who want to use the external model to give it a little love. 😬
