# TeamItaly CTF 2023
## [crypto] Pwn Chall (0 solves)
## Understanding the challenge
From the provided Dockerfile we see that the challenge clones the GitHub repository `LearningToSQI/SQISign-SageMath` and applies a patch. With this patched code, the service generates and verifies two signatures, which are given to us, for the messages `"Good"` and `"luck"`, and finally it expects a signature for the message `"give me the flag"`. If that signature verifies, it outputs the flag.
For an overview of how SQISign works you can read the [blog](https://learningtosqi.github.io/posts/overview/) by Maria Corte-Real Santos and Giacomo Pope, which also describes many aspects of the implementation used for this challenge.
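Putting this together, the service presumably does something like the following minimal sketch. The `keygen`/`export_public_key`/`sign`/`verify` names come from the repository's example usage, while `FLAG` and `parse_signature` are placeholders for the server's own glue code:
```python
from SQISign import SQISign

prover, verifier = SQISign(), SQISign()

# Key generation: EA is the public curve
prover.keygen()
EA = prover.export_public_key()

# The two signatures handed to the player: each is a pair (E1, S)
sig_good = prover.sign(b"Good")
sig_luck = prover.sign(b"luck")
assert verifier.verify(EA, sig_good, b"Good")
assert verifier.verify(EA, sig_luck, b"luck")

# The player must now produce a valid signature for the target message
forged = parse_signature(input())  # placeholder: however the server reads (E1, S)
if verifier.verify(EA, forged, b"give me the flag"):
    print(FLAG)                    # placeholder
```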
## Analyzing the patch
The patch is very small and modifies two main things:
1. It removes the `@cached_function` decorator, which is used only on two functions in the code.
```python
# We don't like optimization
# @cached_function
def torsion_basis(E, D, canonical=False):
```
```python
# We don't like optimization
# @cached_function
def has_order_constants(D):
```
2. It allows the signature (response) isogeny to have degree up to `l^3000` (the degree still has to be a power of `l`).
```python
# Allowing isogenies of degree higher then 2^1000, but should be doable also with the original verification
```
As the comment suggests, this second change is not really part of the vulnerability, but it allows a faster signature forgery.
Analyzing the two functions affected by the removed cache, we see something interesting in `torsion_basis`:
```python
# This ensures all random points are deterministically
# generated
if canonical:
set_random_seed(0)
```
If the function is called with `canonical=True`, it sets Sage's random seed to 0! Looking for something similar in the rest of the code, we find that `generate_random_point` (which sets the seed to a custom value in the range `(0, 2000)`) and `generate_linearly_independent_point` behave the same way.
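To see why resetting the seed is fatal, here is a minimal Sage snippet (not part of the challenge code) showing that once the seed is known, every later "random" draw can be replayed:
```python
# Minimal Sage example: after set_random_seed, all "random" values are reproducible
set_random_seed(0)
a = [randint(1, 2**64) for _ in range(3)]

set_random_seed(0)          # an attacker who knows the seed replays it...
b = [randint(1, 2**64) for _ in range(3)]

assert a == b               # ...and gets exactly the same values
```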
## Exploiting the vulnerability
Now we can look for places in the code where one of these functions is called with `canonical=True` (or with `seed != None` in the case of `generate_random_point`). Analyzing the `decompression` function, used during signature verification, we see that `generate_random_point` is called with a given seed. Therefore, at the end of the first verification the random seed is set to a value that depends only on the output of the first signature (which we have!), and the commitment used for the second signature depends only on this seed. Indeed, the commitment is fully determined by `P, Q, x`, generated at
```python
# Generate a random kernel
# of order T_prime
P, Q = torsion_basis(E0, T_prime)
x = randint(1, T_prime)
```
It is important to note that this holds only because the `@cached_function` decorator on `torsion_basis` has been removed; otherwise the points `P` and `Q` generated in the commitment with
```python
P, Q = torsion_basis(E0, T_prime)
```
would be the same as in the first signature, and would therefore depend on Sage's initial random seed, which we don't know.
Now we can simulate the first signature verification and generate a new commitment which will be equal to the one used for the second signature.
```python
EA = EllipticCurve(Fp4, EA_coeffs)
E10 = EllipticCurve(Fp4, E10_coeffs)
E1 = EllipticCurve(Fp4, E11_coeffs)
verifier.verify(EA, (E10, S0), b"Good")
P, Q = torsion_basis(E0, T_prime)
x = randint(1, T_prime)
ψ_ker = P + x * Q
Iψ = kernel_to_ideal(ψ_ker, T_prime)
ψ = EllipticCurveIsogenyFactored(E0, ψ_ker, order=T_prime)
assert ψ.codomain() == E1
```
## Forging the signature
At this point we still don't have the private key for $E_A$, but we do have a path from $E_0$ to $E_A$ given by $\hat{\sigma}\circ \phi_1 \circ \psi_1$, where $\phi_1$ is the challenge isogeny generated by the message `"luck"` and $\psi_1$ is the commitment secret we just retrieved.
Since the verification allows isogenies of degree up to $2^{3000}$, we can forge the signature in a simpler way than recovering the private key.
First we compute the ideal corresponding to $\phi_1 \circ \psi_1$, as is done in the `response` function:
```python
ϕ1 = EllipticCurveIsogenyFactored(E1, ϕ1_ker, order=Dc)
E2 = ϕ1.codomain()
E2.set_order((p**2 - 1) ** 2)
ψ1_dual = dual_isogeny(ψ1, ψ1_ker, order=T_prime)
Iϕ1_pullback = kernel_to_ideal(ψ1_dual(ϕ1_ker), Dc)
Iψ1Iϕ1 = Iψ1.intersection(Iϕ1_pullback)
```
Since $I_{\psi_1}I_{\phi_1}$ is an ideal with left order $O_0$, we can call the original KLPT algorithm (the function `EquivalentSmoothIdealHeuristic` in the repo) on it to obtain an equivalent ideal whose norm is a power of 2, and from that ideal the corresponding isogeny $\iota_1 : E_0 \rightarrow E_2$.
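In code, this step looks roughly as follows; it is a condensed version of the `compute_phipsi_smooth` function from the full script at the end (the bound `l**800` and the trivial-ideal trick are taken from there, and `ι1` is just a name used here for exposition):
```python
# Reduce Iψ1Iϕ1, then ask KLPT for an equivalent ideal of l-power norm
Iψ1Iϕ1_prime = small_equivalent_ideal(Iψ1Iϕ1)
Iψ1Iϕ1_prime, _, _ = EquivalentPrimeIdealHeuristic(Iψ1Iϕ1_prime)
Iψ1Iϕ1_smooth = EquivalentSmoothIdealHeuristic(Iψ1Iϕ1_prime, l**800)

# Translate the smooth ideal into the corresponding isogeny ι1 : E0 -> E2
I_trivial = O0.unit_ideal()
ϕ_trivial = E0.isogeny(E0(0))
ι1 = IdealToIsogenyFromKLPT(Iψ1Iϕ1_smooth, I_trivial, ϕ_trivial, I_prime=Iψ1Iϕ1_prime)
```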
Using the same commitment $\psi_1$ we can do the exact same thing for the challenge isogeny $\phi_2 : E_1 \rightarrow E_2'$ generated by the message `"give me the flag"`, obtaining another isogeny $\iota_2 : E_0 \rightarrow E_2'$ whose degree is a power of 2.
This way we obtain the isogeny $\sigma' = \iota_2 \circ \hat{\iota_1} \circ \sigma : E_A \rightarrow E_2'$, whose degree is a power of 2.
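Schematically, using the `ι1`/`ι2` names from above (the full script below builds the same composition with `my_dual`, a small wrapper around `.dual()` that copes with degree-1 factors):
```python
# σ : EA -> E2 is the response isogeny of the "luck" signature,
# recovered from its compressed form S1
σ = decompression(EA, E2, S1, l, f_step_max, e)

# Glue the pieces together: EA --σ--> E2 --dual(ι1)--> E0 --ι2--> E2'
iso = σ.codomain().isomorphism_to(ι1.codomain())
σ_prime = ι2 * my_dual(ι1) * iso * σ
```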
There is still a problem with $\sigma'$: by construction it is a backtracking isogeny (it passes through the same curve multiple times). So, before calling the compression function, we have to remove the backtracking.
```python
def remove_backtracking(sigma):
    # Split sigma into its prime-degree (l-isogeny) blocks
    sigma_chain = isogeny_into_blocks(sigma, l)
    curves = [sigma.domain()]
    new_sigmas = []
    for j, fac in enumerate(sigma_chain):
        for i, curve in enumerate(curves):
            if fac.codomain().is_isomorphic(curve):
                # We are back on a curve already visited: drop the detour and
                # keep only the isomorphism onto the model expected by the
                # next block of the chain
                new_sigmas = new_sigmas[:i] + [curve.isomorphism_to(fac.codomain())]
                curves = curves[:i+1] + [new_sigmas[-1].codomain()]
                break
        else:
            # No backtracking at this step: keep the block
            curves.append(fac.codomain())
            new_sigmas.append(fac)
    new_sigma = EllipticCurveHom_composite.from_factors(new_sigmas)
    return new_sigma
```
Finally, we can call `compression` on the result and obtain the forged signature (`final_S` in the script below).
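In code, mirroring the end of the full script:
```python
# Compress σ' and check that it verifies against the target message
final_S = compression(EA, final_sigma, l, f_step_max)
deg_sigma = factor(final_sigma.degree())[0][1]  # e' such that deg(σ') = l^e'
ϕ_ker = prover.challenge_from_message(E1, b"give me the flag")
assert verifier.verify_response(EA, E1, final_S, ϕ_ker, deg_sigma=deg_sigma)
```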
## Notes
1. By obtaining the commitment secrets it is actually possible to compute an ideal with
left order $O_0$ and right order $O_1$, where $O_1$ is the maximal order corresponding to $E_A$, but I didn't find an efficient way to do it.
2. It should be possible to construct the final isogeny in a simpler way by finding ideals equivalent to $I_{\phi_1}$ and $I_{\phi_2}$ of 2-power norm and composing the corresponding isogenies with $\sigma$, without passing through $E_0$. I tried to do this by calling `SigningKLPT` on $I_{\phi_1}$ with an ideal equivalent to $I_{\psi_1}$ of prime norm, but it wasn't able to find a solution for the StrongApproximation and quickly ran out of new $\gamma$.
## Code
```python
from sage.schemes.elliptic_curves.hom_composite import EllipticCurveHom_composite
import os
import sys
sys.path.insert(1, "SQISign-SageMath")
from setup import Fp4, E0, p, T_prime, l, f_step_max, e, Dc, O0, T
from ideals import left_isomorphism, pushforward_ideal
from isogenies import torsion_basis, EllipticCurveIsogenyFactored, dual_isogeny
from deuring import kernel_to_ideal, IdealToIsogenyFromKLPT
from KLPT import EquivalentSmoothIdealHeuristic, small_equivalent_ideal, EquivalentPrimeIdealHeuristic, SigningKLPT
from compression import decompression, compression, isogeny_into_blocks
from SQISign import SQISign
z4 = Fp4.gens()[0]
### output ###
EA_coeffs = <EA_coeffs>
E10_coeffs = <E10_coeffs>
S0 = <S0>
E11_coeffs = <E11_coeffs>
S1 = <S1>
EA = EllipticCurve(Fp4, EA_coeffs)
E10 = EllipticCurve(Fp4, E10_coeffs)
E11 = EllipticCurve(Fp4, E11_coeffs)
E1 = E11
S = S1
target = "give me the flag"
prover, verifier = SQISign(), SQISign()
prover.pk = EA
def check_secrets(secrets, E1):
P, Q, x = secrets
P = E0(P)
Q = E0(Q)
ψ_ker = P + x * Q
Iψ = kernel_to_ideal(ψ_ker, T_prime)
assert Iψ.norm() == T_prime, "Iψ has the wrong norm"
ψ = EllipticCurveIsogenyFactored(E0, ψ_ker, order=T_prime)
if ψ.codomain() == E1:
print("FOUND")
print(f"{x = }")
return ψ_ker, ψ, Iψ
# print("Nope")
def compute_phipsi_smooth(msg, ψ_ker, ψ, Iψ, E1):
ϕ_ker = prover.challenge_from_message(E1, msg.encode())
ϕ = EllipticCurveIsogenyFactored(E1, ϕ_ker, order=Dc)
E2 = ϕ.codomain()
E2.set_order((p**2 - 1) ** 2)
# Computing IψIϕ
ψ_dual = dual_isogeny(ψ, ψ_ker, order=T_prime)
Iϕ_pullback = kernel_to_ideal(ψ_dual(ϕ_ker), Dc)
IψIϕ = Iψ.intersection(Iϕ_pullback)
assert IψIϕ.norm() == Iψ.norm() * Iϕ_pullback.norm()
# Reducing IψIϕ
IψIϕ_prime = small_equivalent_ideal(IψIϕ)
IψIϕ_prime, _, _ = EquivalentPrimeIdealHeuristic(IψIϕ_prime)
    # Computing an ideal equivalent to IψIϕ whose norm is a power of l
IψIϕ_smooth = EquivalentSmoothIdealHeuristic(IψIϕ_prime, l**800)
print(f"{factor(IψIϕ_smooth.norm()) = }")
I_trivial = O0.unit_ideal()
ϕ_trivial = E0.isogeny(E0(0))
ϕψ_smooth = IdealToIsogenyFromKLPT(
IψIϕ_smooth, I_trivial, ϕ_trivial, I_prime=IψIϕ_prime
)
return ϕψ_smooth, E2
def remove_backtracking(sigma):
sigma_chain = isogeny_into_blocks(sigma, l)
curves = [sigma.domain()]
new_sigmas = []
for j, fac in enumerate(sigma_chain):
if j < len(sigma_chain) - 1:
assert fac.codomain() == sigma_chain[j+1].domain()
for i, curve in enumerate(curves):
if fac.codomain().is_isomorphic(curve):
new_sigmas = new_sigmas[:i] + [curve.isomorphism_to(fac.codomain())]
curves = curves[:i+1] + [new_sigmas[-1].codomain()]
break
else:
curves.append(fac.codomain())
new_sigmas.append(fac)
new_sigma = EllipticCurveHom_composite.from_factors(new_sigmas)
return new_sigma
# Getting the secret commitment
verifier.verify(EA, (E10, S0), b"Good")
P, Q = torsion_basis(E0, T_prime)
x = randint(1, T_prime)
secrets = (P, Q, x)
hope = check_secrets(secrets, E1)
if hope:
ψ_ker, ψ, Iψ = hope
else:
print("Secrets not found :(")
exit()
print("\nComputing given_phipsi_smooth")
if not os.path.isfile("dumps/given_phipsi_smooth") or (len(sys.argv) > 1 and sys.argv[1] == "new"):
given_ϕψ_smooth, E2_given = compute_phipsi_smooth("luck", ψ_ker, ψ, Iψ, E1)
with open("dumps/given_phipsi_smooth", "wb") as f:
f.write(dumps((given_ϕψ_smooth, E2_given)))
else:
with open("dumps/given_phipsi_smooth", "rb") as f:
given_ϕψ_smooth, E2_given = loads(f.read())
print("Done")
print("\nComputing target_phipsi_smooth")
if not os.path.isfile("dumps/target_phipsi_smooth") or (len(sys.argv) > 1 and sys.argv[1] == "new"):
target_ϕψ_smooth, E2_target = compute_phipsi_smooth(target, ψ_ker, ψ, Iψ, E1)
with open("dumps/target_phipsi_smooth", "wb") as f:
f.write(dumps((target_ϕψ_smooth, E2_target)))
else:
with open("dumps/target_phipsi_smooth", "rb") as f:
target_ϕψ_smooth, E2_target = loads(f.read())
print("Done")
σ = decompression(EA, E2_given, S, l, f_step_max, e)
# redefining the dual function to deal with isogenies of degree 1
def my_dual(phi):
if is_prime(phi.degree()):
return phi.dual()
else:
return EllipticCurveHom_composite.from_factors([my_dual(x) if x.degree() > 1 else x.codomain().isomorphism_to(x.domain()) for x in phi.factors()[::-1]])
print()
print("Computing composition of isogenies")
if not os.path.isfile("dumps/phi_EA_E0"):
iso = σ.codomain().isomorphism_to(given_ϕψ_smooth.codomain())
phi_EA_E0 = my_dual(given_ϕψ_smooth) * iso * σ
with open("dumps/phi_EA_E0", "wb") as f:
f.write(dumps(phi_EA_E0))
else:
with open("dumps/phi_EA_E0", "rb") as f:
phi_EA_E0 = loads(f.read())
print("Done")
print()
print("Computing final isogeny")
if not os.path.isfile("dumps/final_sigma"):
final_sigma = target_ϕψ_smooth * phi_EA_E0
final_sigma = remove_backtracking(final_sigma)
with open("dumps/final_sigma", "wb") as f:
f.write(dumps(final_sigma))
else:
with open("dumps/final_sigma", "rb") as f:
final_sigma = loads(f.read())
print("Done")
print(f"{factor(final_sigma.degree()) = }")
deg_sigma = factor(final_sigma.degree())[0][1]
print(f"{deg_sigma = }")
if not os.path.isfile("dumps/final_S"):
final_S = compression(EA, final_sigma, l, f_step_max)
with open("dumps/final_S", "w") as f:
f.write(final_S)
else:
with open("dumps/final_S", "r") as f:
final_S = f.read()
print(f"{final_S = }")
ϕ_ker = prover.challenge_from_message(E1, target.encode())
assert verifier.verify_response(EA, E1, final_S, ϕ_ker, deg_sigma=deg_sigma)
```