Tell us a little about BeehiveID.
We built BeehiveID to solve the online identity problem. In the modern world, we are dealing more and more with strangers online. How do you know who you can trust? Typically, online identity is an email address. But an email address really tells you nothing about the person. BeehiveID is a set of lightweight technologies that can establish whether an online account is backed by a real person, while still preserving privacy.
What separates BeehiveID from other online identity protection systems?
Our technologies are designed to provide a relatively strong identification mechanism, while still preserving privacy and minimising customer friction. Most systems in use now are “back end” systems that examine the properties of the transaction — IP address, proxy detection, geolocation and browser fingerprinting. These are low friction because the customer isn’t even aware they are being used. However, they are easily circumvented and have a high rate of false positives — they often classify good customers as fraudulent, resulting in lost sales or engagement.
The other side of the coin is invasive systems like background checks or systems that require a customer to email a copy of a driver’s license, utility bill, bank account, or verify their physical address by receiving a special piece of mail. These are effective, but high friction and low privacy. Most customers will not go through these complex procedures if they have other choices.
BeehiveID is designed to provide strong identification in 1-3 clicks, while preserving maximum privacy. Our partner sites receive only an identification score; no other private customer information is shared with them.
What did you do before BeehiveID? What is your background?
My co-founder and I worked in very large-scale biometric systems for the US Department of Defense (DoD). I was the architect for Iraqi and Afghan national biometric identification systems, and went to the Middle East over a dozen times, deploying a wide variety of biometric systems. In those environments, identity is critical and the systems we worked on were saving lives.
However, “customer friction” is not a concept there — people will put up with a lot of complexity to ensure safety. We became interested in the online market because identity is such a huge problem there, just like it was in the Middle East. But friction has been a challenge to address — how to provide strong identity with low friction?
For you, what is the single most exciting piece of new technology, with regards to online security?
Public key cryptography is incredibly important. It allows people to be completely anonymous and still assert a bulletproof online identity, supporting both digital signatures to validate identity and easy encryption of content. The technology is not user-friendly yet, but I think that will change as more people get involved.
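The sign-and-verify idea behind digital signatures can be sketched with textbook RSA. This is purely a toy illustration of the concept (the key below is far too small to be secure; real systems use 2048-bit-plus keys via a vetted cryptography library):

```python
import hashlib

# Toy RSA parameters -- illustrative only, hopelessly insecure at this size.
p, q = 61, 53
n = p * q                    # public modulus
e = 17                       # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)          # private exponent (modular inverse, Python 3.8+)

def sign(message: bytes) -> int:
    """Hash the message, then exponentiate the digest with the private key."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Recover the digest with the public key and compare against the hash."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"I assert this identity"
sig = sign(msg)
print(verify(msg, sig))            # True
print(verify(b"tampered", sig))    # a modified message should fail verification
```

Only the holder of `d` can produce a valid signature, yet anyone with the public pair `(n, e)` can check it — which is how a pseudonymous identity can still be asserted verifiably.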
Do you think we will see the widespread use of biometric ID systems in tech (such as Apple’s TouchID) over the next few years?
I haven’t really been impressed with TouchID. As a fingerprint technology, it works quite well and the small sensor is amazing. However, I don’t think it is solving a useful problem. It is fairly easy to spoof, and the small payments it would be used for don’t have a huge fraud loss. A PIN serves the same purpose and has none of the issues associated with fingerprints. I think the sweet spot for TouchID is making the phone a little easier/faster to unlock than a PIN, but of all my friends who have iPhones, only one actually uses it.
Do you think the public have become more aware of the risks posed to their security and privacy, following Edward Snowden’s revelations? Considering your work with the military, do you think the revelations were a positive thing?
Although I do work with the military, I am still a huge privacy advocate. I think the publicity about Snowden was great for bringing attention to over-zealous government programs. However, I am not a fan of his methods. Simply mailing a large body of classified material to news agencies has the potential to do real damage to national security. The people I blame are the members of Congress who were allowed access to the information about what these agencies were doing. They are responsible for oversight and for representing the people. Why didn’t they speak up?
Do you feel like cyber criminals are a step ahead of intelligence agencies, with regards to techniques such as concealing their identity?
Absolutely. As the old saying goes, “The hound runs for his supper, the rabbit runs for his life”. Cyber criminals who aren’t good at concealing their identity don’t stay criminals for very long. There is an amazing wealth of technologies available to people who want to hide their identity — effectively unbreakable encryption, widespread proxy servers, The Onion Router (Tor), etc.
If you look at the recent Silk Road (online drug market) case, they weren’t able to find Ross Ulbricht by de-anonymising him. They essentially had to use old-school police techniques, infiltrating his organisation with undercover agents to expose him. Of course, even with all these technologies, criminals have to be careful. All it takes is one mistake to potentially lose your anonymity.
BeehiveID recently introduced two new types of image-matching systems, tell us a little about them.
Biometrics and image processing go hand-in-hand. Our two technologies are related to the work we have done in the past in face matching. The first technology is what we are calling “fuzzy image match”. It allows us to match photos that are “very similar” but not exactly the same. So if a photo has been resized, cropped, colour-corrected, etc., we can still tell it is the same photo. We use this with dating sites to detect duplicate profile photos that are used by scammers. This technology doesn’t use biometrics at all. It can recognise photos of anything.
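One common way to match “very similar” photos is a perceptual average hash plus Hamming distance. BeehiveID has not published its algorithm, so this is only an illustrative sketch of the near-duplicate-matching idea, using tiny hand-made pixel grids in place of real thumbnails:

```python
# Perceptual "average hash" sketch: each bit records whether a pixel is
# brighter than the image's mean, so resizing or colour shifts that preserve
# relative brightness leave the hash largely unchanged.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255), e.g. an 8x8 thumbnail."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits; a small distance means 'very similar'."""
    return sum(a != b for a, b in zip(h1, h2))

original   = [[10, 200], [220, 30]]    # tiny stand-in for a real thumbnail
recoloured = [[20, 180], [200, 40]]    # same scene, colour-corrected
different  = [[200, 10], [30, 220]]    # brightness pattern inverted

h0 = average_hash(original)
print(hamming(h0, average_hash(recoloured)))  # 0 -- still matches
print(hamming(h0, average_hash(different)))   # 4 -- no match
```

In practice a site would hash every profile photo and flag any pair whose distance falls below a threshold, regardless of cropping or recompression.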
The second technology is a lightweight verification system for people who don’t use (or don’t want to use) social network verification. We have people upload a selfie of themselves holding a piece of paper with some words we have given them written on it. The words ensure that they aren’t uploading a picture they found on the Internet. We use biometrics to ensure the same person isn’t creating multiple accounts. We also use manual human verification and image processing to detect Photoshop attempts. Selfies are so common these days that this becomes a simple method anyone can use.
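The challenge-phrase step works because the words are unpredictable, so a pre-existing photo can’t contain them. A minimal sketch of generating such a phrase — the word list and phrase length here are invented for illustration, not BeehiveID’s actual values:

```python
import secrets

# Hypothetical word pool; a real system would use a much larger list.
WORDS = ["harbor", "velvet", "comet", "lantern", "pepper",
         "drift", "mosaic", "tundra", "ember", "quartz"]

def make_challenge(n_words: int = 3) -> str:
    """Pick an unpredictable phrase for the user to write on paper.
    secrets (not random) gives cryptographically strong choices, so an
    attacker can't pre-compute a likely phrase."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

challenge = make_challenge()
print(challenge)   # e.g. "ember harbor quartz"
```

The uploaded selfie is then checked (by humans and image processing, per the answer above) for the exact phrase, and the face is biometrically matched against existing accounts.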
What has surprised you most about working in the online dating industry?
When I was dating in the late 80s, online dating didn’t exist. We had newspaper personals, but hardly anyone ever did that, and no one would admit to it. So online dating has been a huge eye-opener to me. The variety of dating options has blown me away — big sites, small sites, sites for every imaginable niche, hookup sites, sugar daddy sites, etc. I guess I am glad I am not dating now, because I would be paralysed by choice. I have definitely met a much wider variety of interesting personalities compared to working in defense.
What are some of the most sophisticated dating scams you’ve seen?
We don’t always see the details of the scams, but the ones I have seen seem to be very conventional. Scammers simply engage vulnerable people at an emotional level and play their heartstrings to get money. Confirmation bias is a powerful thing — people see and hear what they expect and ignore contrary evidence. Our brains have special processing areas just for faces. When you see a face of a trustworthy looking person in a profile, you start building up an internal set of expectations for that person, even when there is no evidence for it. Of course, charge-backs (real and friendly fraud) are always going to be a problem for a dating site.
Scams in non-dating sites are quite creative though. When people can steal products from online sites, they figure out all kinds of innovative strategies. We talked to a site that sold tires online, and the Russian mafia had a whole network of people creating fake accounts to steal tires. People were working for this network as shippers thinking it was a regular job, when they were really covering tracks for the thieves.
Which new scamming trends should dating sites watch out for?
I see a trend where a scam develops as a manual process, with lots of A/B testing. Once it becomes successful, it can be partially or fully automated. I have talked to a few dating sites where fake profiles are being created wholesale via what is clearly an automated process. A site can go from being mostly clean to overrun with scammers almost overnight. Sites have to be vigilant and constantly alert.
As the architect of BeehiveID’s Social Authentication Engine, could you describe how this program helps other companies to secure the online identities of their users?
Bank robbers wear masks. People who are honest are willing to assert a real identity online, as long as they can be assured of privacy and if it is low-friction. Scammers are not willing to provide anything that tracks back to their real identity. Once you have a real identity, a huge set of problems go away, for dating sites and any other peer-to-peer type sites. Our social authentication engine was created to allow people to assert a real identity as easily and safely as possible.
Do you think most dating providers do enough to protect their consumers?
Every large dating site we have talked to has a significant team dedicated solely to fraud prevention. I think the dating sites are doing as much as they can with manual processes. Hopefully our tools will allow them to do more with fewer people, and allow small sites to have strong anti-fraud mechanisms without a significant anti-fraud team. The sites can only do so much, though. People using the site have to be vigilant as well. In any online community, self-policing is going to be a critical factor.
With regards to security, what is the one thing all dating sites should be doing?
I actually think one of the simplest things a site can do is to implement a “this profile looks suspicious” button along with a free text field for someone to say why it looks suspicious. Some people have great judgement, and if they think a profile looks weird it should probably be examined. Maybe create an incentive — someone who finds 20 scammers gets a free month of service or something like that.
Of course, this can be used nefariously, so it needs to be managed properly. But a simple button along with a text field would go a long way towards scam prevention.
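The report-button idea above can be sketched as a simple tally that queues a profile for manual review once enough independent reports arrive. All names and the threshold here are illustrative, not a prescribed implementation:

```python
from collections import defaultdict

REVIEW_THRESHOLD = 3            # illustrative; tune per site and abuse rate
reports = defaultdict(list)     # profile_id -> list of (reporter, reason)

def report_profile(profile_id: str, reporter_id: str, reason: str) -> bool:
    """Record a 'this profile looks suspicious' report with its free-text
    reason; return True once the profile should go to human review."""
    reports[profile_id].append((reporter_id, reason))
    return len(reports[profile_id]) >= REVIEW_THRESHOLD

report_profile("p42", "u1", "photo looks like a stock image")
report_profile("p42", "u2", "asked me for money on day one")
flagged = report_profile("p42", "u3", "profile text copied from another site")
print(flagged)   # True -- three independent reports, escalate to the fraud team
```

A real version would also de-duplicate reporters and rate-limit reports, since, as noted above, the mechanism can itself be abused.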
What exciting new things are BeehiveID working on, and have planned for the next year?
One of the biggest objections we hear is that people are concerned about using Facebook for social verification. They don’t trust Facebook, or they don’t trust us. That’s understandable. We are working on a product that will support social-based trust verification without having to reveal *any* information. We will be launching this product later this year.
Read Alex’s blog The End of Social Network APIs here.