With "AI" services available in more places than ever, the inherent assumption from their makers is that the system is trustworthy: that you can set aside your worries and simply trust that the data it returns is correct and up to date.
Sadly, in reality this isn't always the case. In one recent instance, a lawyer is alleged to have used ChatGPT to generate cases that would prove his point. Sounds like a great idea in theory; the issue is that ChatGPT made them up. They were completely fictitious.
So how can we, as an AI-powered recruitment tool, prove that you should trust our AI? Just because we say a particular candidate matches your criteria doesn't mean they actually do, right?
Well, this week we've taken decisive steps to prove that they do. With the latest update of Syft, we're providing a breakdown score for every metric we match against.
As a recruiter or hiring manager, this not only shows you exactly why Syft is recommending a particular candidate, but it also makes the process fairer for every candidate on your list. We want to remove as much bias from recruitment as possible, and this feature moves the needle in the right direction.
We're also thrilled to show you everything else we've updated with our latest release!
- New candidate search: easily find a candidate by name to match against
- New candidate listing layout, detailing each candidate's top skills and location
- Ability to tag a candidate from the search/filter screen
- Find Similar Candidate can now be removed once applied
- The job description match is now precomputed, so Syft loads twice as fast
- A candidate's CV can now be found on the Candidate Detail page
- Controls so you can decide which roles to use Syft with
- A specific control to run Syft on more than 1,000 candidates for a particular role
- Fixes for lots of smaller layout issues
We hope all our users love the latest version of Syft!