2FA is "something you have" (or "... you are", for biometrics): it is supposed to prove that you currently physically possess the single copy of a token. The textbook example is a TOTP secret stored in a YubiKey.
Granted, this has been watered down a lot by the way-too-common practice of storing TOTP secrets in password managers, but that's how it is supposed to work.
Does your mTOTP prove you own the single copy? No, you could trivially tell someone else the secret key. Does it prove that you currently own it? No, you can pre-calculate a verification token for future use.
I still think it is a very neat idea on paper, but I'm not quite seeing the added value. The obvious next step is to do all the math in client-side code and just have the user enter the secret - doing this kind of mental math every time you log in is something only the most hardcore nerds get excited about.
import base64
import hmac
import struct
import time

def totp(key, time_step=30, digits=6, digest='sha1'):
    # Pad the base32 secret to a multiple of 8 characters, then decode it.
    key = base64.b32decode(key.upper() + '=' * ((8 - len(key)) % 8))
    # The moving factor: number of time steps since the Unix epoch.
    counter = struct.pack('>Q', int(time.time() / time_step))
    mac = hmac.new(key, counter, digest).digest()
    # RFC 4226 dynamic truncation: the low nibble of the last byte picks a
    # 4-byte window, whose top bit is masked off before taking the digits.
    offset = mac[-1] & 0x0f
    binary = struct.unpack('>L', mac[offset:offset + 4])[0] & 0x7fffffff
    return str(binary)[-digits:].zfill(digits)
https://dev.to/yusadolat/understanding-totp-what-really-happ...

As I already mentioned, the fact that people often use it wrong undermines its security, but that doesn't change the intended outcome.
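Nothing in the function above ties the code to the present moment; the timestamp is just an input, which is why pre-calculating codes for future windows (the point made earlier) is trivial. A variant taking an explicit time, checked against the RFC 6238 test vector, makes this concrete (the helper name `totp_at` is mine, not part of any scheme):

```python
import base64
import hmac
import struct

def totp_at(key, unix_time, time_step=30, digits=6, digest='sha1'):
    """Same HOTP/TOTP math as above, but for an arbitrary timestamp."""
    key = base64.b32decode(key.upper() + '=' * ((8 - len(key)) % 8))
    counter = struct.pack('>Q', int(unix_time / time_step))
    mac = hmac.new(key, counter, digest).digest()
    offset = mac[-1] & 0x0f
    binary = struct.unpack('>L', mac[offset:offset + 4])[0] & 0x7fffffff
    return str(binary)[-digits:].zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59 must
# yield 94287082 with 8 digits and SHA-1.
secret = base64.b32encode(b'12345678901234567890').decode()
print(totp_at(secret, 59, digits=8))   # 94287082, per RFC 6238 Appendix B
print(totp_at(secret, 2_000_000_000))  # a code valid in ~2033, computed today
```

Anyone holding the secret can run exactly this to bank codes for later, which is the "does it prove you currently own it" objection.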
As long as you never enter the secret anywhere and only do the computation in your head, this is just using your brain as the second factor. I would not call this a password, since it is not used the same way: passwords are entered in plain text into fields that you trust, which also means they can be stolen. This proves that you are in possession of your brain.
The only difference here is that you are hashing the password in your head, instead of trusting the client to hash it for you before submitting it to the server.
Which makes the threat model here what, exactly? Keyloggers, or login pages that use outdated/insecure methods to authenticate with the server?
No, 2FA means authentication using 2 factors of the following 3 factors:
- What you know (eg password)
- What you have (eg physical token)
- What you are (eg biometrics)
You can "be the 2FA" without a token by combining a password (what you know) and biometrics (what you are). Eg, fingerprint reader + password, where you need both to login.
Combine that with the practical problems with biometrics when trying to auth to a remote system, and in practice that second factor is more often than not "something you have". And biometrics is usually more of a three-factor system, with the device you enrolled your fingerprints on being an essential part of the equation.
https://en.wikipedia.org/wiki/Password-authenticated_key_agr...
The idea of it was so neat to me, I just had to tinker with it.
> It explores the limits of time-based authentication under strict human constraints and makes no claims of cryptographic equivalence to standard TOTP.
I think they're just having fun.
And the main point (though I agree that it doesn't make it 2FA) is that the secret is never disclosed when you prove you have it. Standard TOTP achieves the same thing, and it makes phishing or sniffing a code significantly less valuable.
An ssh keyfile requires an attacker to break into the device but is likely fairly easy to snag with only user level access.
Bypassing a password manager that handles TOTP calculations (or your ssh key, or similar) likely requires gaining root, and even then could be fairly tricky depending on the precise configuration and implementation. Against an insufficiently sophisticated attacker, that should generally be enough to require both knowledge of the master password and theft of the device.
With TOTP or an ssh key managed exclusively by a hardware token, it becomes all but impossible for an attacker to avoid outright device theft. Still, even TPMs have occasionally had zero-day vulnerabilities exposed.
The non-disclosure is indeed neat, but the same can be achieved with a password. For example: generate a public/private keypair on account creation. Encrypt the private key with the user's password. Store both on the server. On auth, the client downloads the encrypted private key, decrypts it with the user-entered password, then signs a nonce and sends the signature to the server as proof of knowledge of the password.
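That flow can be sketched end to end. Everything below is illustrative: the group, the Schnorr-style signature, and the XOR key wrap are schoolbook stand-ins for real primitives (a production version would use something like Ed25519 plus an AEAD, and a proper PAKE avoids even this):

```python
import hashlib
import os
import secrets

P = 2**255 - 19   # a convenient large prime; plain modular arithmetic, not a curve
G = 2             # toy generator

def h_int(*parts):
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return int.from_bytes(h.digest(), 'big')

def wrap_key(x, password, salt):
    # "Encrypt" the private key with a password-derived stream (toy XOR wrap;
    # XOR is its own inverse, so the same call unwraps).
    stream = hashlib.pbkdf2_hmac('sha256', password, salt, 100_000, dklen=32)
    return x ^ int.from_bytes(stream, 'big')

# Account creation: client makes a keypair; server stores (salt, wrapped, y),
# never the password itself.
password = b'correct horse'
x = secrets.randbelow(P - 2) + 1          # private key
y = pow(G, x, P)                          # public key
salt = os.urandom(16)
wrapped = wrap_key(x, password, salt)

# Login: server sends a fresh nonce; client unwraps x and signs the nonce.
# (s is deliberately left unreduced so the toy is correct for any group order.)
nonce = os.urandom(16)
x2 = wrap_key(wrapped, password, salt)
k = secrets.randbelow(P - 2) + 1
r = pow(G, k, P)
e = h_int(r.to_bytes(32, 'big'), nonce)
s = k + e * x2

# Server checks the signature against the stored public key.
ok = pow(G, s, P) == (r * pow(y, e, P)) % P
print(ok)   # True: knowledge of the password is proven, password never sent
```

A wrong password unwraps to garbage, the signature fails to verify, and the server learns nothing it can replay.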
AFAIK the primary technical concerns are insecure storage by the server (bad hash or salt) or keylogging of the client device. But the real issue is the human factor - ie phishing. As long as the shared secret can't be phished it solves the vast majority of real world problems.
Point being, TOTP on a rooted phone handled by a FOSS password manager app whose secret store the end user retains full access to will successfully prevent the vast majority of real world attacks. You probably shouldn't use a FOSS password manager on a rooted device for your self hosted crypto wallet though.
Like, a banking site requiring the phone's 2FA (whether an authenticator app or SMS): okay, you have to know the password and have access to the device, or at least a SIM card, so two things need to be compromised. Computer vulnerable? No problem. Phone vulnerable? No problem. Both need to be vulnerable to defeat it.
...then someone decided to put banking on the second factor, and now the phone has both the password and the token (or access to SMS) needed to make a transaction, so the whole system is one exploit away from defeat.
Nonetheless I do not see what issues 2FA has that this solves. Having the electronic device is the security. Without it there is no security.
They are both too mutable (cuts and burns will alter them) and not mutable enough (you can't re-roll your fingerprints after a leak).
On top of that, you are also literally leaving them on everything you touch, making it trivial for anyone in your physical presence to steal them.
They are probably pretty decent for police use, but I don't believe they are a good replacement for current tech when it comes to remote auth.
My concern with them nearly always comes down to privacy. They are far too easy to abuse for collecting and selling user data. There are probably ways around that but how much will you ever be able to trust an opaque black box that pinky promises to irreversibly and uniquely hash your biometric data? It's an issue of trust and transparency.
It doesn't add any security, as it is trivially computable from the other digits already computed.
It appears to be a checksum, but I can't see why one would be needed.
This is an early POC, and sanity checks like this are exactly the kind of feedback I’m looking for.
It's definitely computable on a piece of paper and reasonably secure against replay attacks.
So given a single pass code and the login time, you can just compute all possible pass codes. Since more than one key could produce the same pass code, you would need 2 or 3 to narrow it down.
In fact, you don't even need to know the login time really, even just knowing roughly when would only increase the space to search by a bit.
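The mTOTP internals aren't spelled out here, so as a stand-in this sketch runs the same search against standard TOTP with a deliberately tiny 16-bit key, just to show the shape of the attack (key size and timestamps are illustrative; a real search scales the same way with keyspace size):

```python
import hmac
import struct

def code(key_int, t, step=30, digits=6):
    # Standard TOTP truncation over a toy 16-bit secret.
    key = struct.pack('>H', key_int)
    counter = struct.pack('>Q', int(t / step))
    mac = hmac.new(key, counter, 'sha1').digest()
    offset = mac[-1] & 0x0f
    binary = struct.unpack('>L', mac[offset:offset + 4])[0] & 0x7fffffff
    return str(binary)[-digits:].zfill(digits)

secret = 0xBEEF                                # the key we "don't know"
t1, t2 = 1_700_000_000, 1_700_000_300          # two observed login times
seen1, seen2 = code(secret, t1), code(secret, t2)

# One observed code: every key producing it is a candidate.
candidates = [k for k in range(2**16) if code(k, t1) == seen1]
# A second observation narrows the list down to (almost surely) the true key.
candidates = [k for k in candidates if code(k, t2) == seen2]
print(secret in candidates)   # True: the real key always survives the filters
```

Two or three observations suffice for exactly the reason stated: a 6-digit output collides across a large keyspace, but unrelated keys almost never collide twice in a row.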
So the key would have to be longer, and random (or, if not random, a lot longer still). Over 80 random bits is generally a good idea. That's roughly 24 decimal digits (random!). I guess about 16 random alphanumeric characters would do too. Or a very long passphrase.
So it's either remembering long, random strings or doing a lot more math. I think it's doable, but really not convenient.
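A quick sanity check of those figures (just entropy-per-symbol arithmetic):

```python
import math

# Bits of entropy per symbol: log2 of the alphabet size.
print(80 / math.log2(10))   # ~24.1: about 24 random decimal digits for 80 bits
print(80 / math.log2(62))   # ~13.4: random alphanumerics needed to hit 80 bits
print(16 * math.log2(62))   # ~95.3: so 16 alphanumerics leave a comfortable margin
```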
I think it is too simple to reduce the definition of second factor to how it is stored. It is rather a question of what you need to log in. For TOTP the client has the freedom to choose any of (not exhaustive):
1. Remember password, put TOTP in an app on smartphone => Client has to remember password and be in possession of smartphone.
2. Put password and TOTP in password manager => Client has to remember the master password to the password manager and be in possession of the device on which it runs. Technically, you have to be in possession of just the encrypted bits making up the password database, but it is still a second factor separate from the master password.
In the end it's all just hidden information. The question is the difficulty an attacker would face attempting to exfiltrate that information. Would he require physical access to the device? For how long? Etc.
If the threat model is a stranger on the other side of an ocean using a leaked password to log in to my bank account but I use TOTP with a password manager (or even, god forbid, SMS codes) then the attack will be thwarted. However both of those (TOTP and SMS) are vulnerable to a number of threat models that a hardware token isn't.
I think the defining characteristic is how it is used. I can use a password like a second factor, and I can use a TOTP code like a password. The service calls it a password or a second factor because that was the intention of the designer. But I can thwart those intentions if I so choose.
Recall the macabre observation that for some third factor implementations the "something you are" can quickly be turned into "something your attacker has".
And that makes it a password (i.e. the primary factor, not a second factor). The whole point of a second factor is that it's not trivially cloneable (hence why, for example, SMS is a poor form of 2FA in the presence of widespread SIM cloning attacks).
You are already part of the 2FA — you’re the first factor: “something you know”.
The second factor: “something you have” — often a personal device, or an object. This is ideally something no one else can be in possession of at the same time as you are.