Also, there is Arjen Bot's page of factorizations of 2^n+1:

http://www.euronet.nl/users/bota/medium-p-odd.txt

## Monday, 31 December 2007

## Sunday, 30 December 2007

### Home Primes Search

Let me introduce another factorization project, the Home Primes Search:

http://www.mersennewiki.org/index.php/Home_Prime

http://www.mersennewiki.org/index.php/Home_Primes_Search

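
In case the definition is unfamiliar: the home prime of n is reached by repeatedly replacing n with the decimal concatenation of its prime factors (in non-decreasing order, with multiplicity) until the result is prime. A minimal Java sketch of my own (naive trial division, so only suitable for tiny inputs; the class and method names are mine, not the project's):

```java
import java.math.BigInteger;

public class HomePrime {

    // Concatenate the prime factors of n (non-decreasing order,
    // with multiplicity) to form the next term of the chain.
    static BigInteger step(BigInteger n) {
        StringBuilder sb = new StringBuilder();
        BigInteger d = BigInteger.valueOf(2);
        while (n.compareTo(BigInteger.ONE) > 0) {
            if (n.mod(d).signum() == 0) {
                sb.append(d);
                n = n.divide(d);
            } else {
                d = d.add(BigInteger.ONE);
            }
        }
        return new BigInteger(sb.toString());
    }

    // Iterate until a (probable) prime is reached.
    static BigInteger homePrime(BigInteger n) {
        while (!n.isProbablePrime(50))
            n = step(n);
        return n;
    }

    public static void main(String[] args) {
        // 10 -> 25 -> 55 -> 511 -> 773 (prime), so HP(10) = 773
        System.out.println(homePrime(BigInteger.TEN));
    }
}
```
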
## Saturday, 29 December 2007

### NFSNET

Here is a link to the NFSNET project, a distributed attack on Cunningham Project targets using the Number Field Sieve:

http://www.nfsnet.org/

## Friday, 28 December 2007

### P49_101_87

I can't resist mentioning a fun result I had recently for the XYYXF project:

http://xyyxf.at.tut.by/records.html#ecm

[There I am currently in tenth place! :)]

I can certainly recommend the XYYXF ECMNet server as a slick and easy way to find interesting factors for that project.

## Thursday, 27 December 2007

### WE on Carmichaels

When WE is run on a Carmichael number, the algorithm will clearly never terminate.

http://en.wikipedia.org/wiki/Carmichael_numbers

However, there is a simple workaround: multiply the input number (if it is Carmichael) by a suitable largish prime, e.g. a Mersenne prime. This will normally eject the factors, the extra prime included, as required.
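
To see why this works, here is a minimal BigInteger sketch (this is just the underlying Fermat-style gcd idea, not we2tr2's random-base variant; the class name is mine). For a Carmichael number n, a^n ≡ a (mod n) for every base a, so gcd(a^n − a, n) is always the trivial n; multiply in a Mersenne prime and the very same gcd splits off a proper factor:

```java
import java.math.BigInteger;

public class CarmichaelDemo {

    // Fermat-style splitting attempt: T = gcd(a^n - a mod n, n).
    static BigInteger fermatGcd(BigInteger a, BigInteger n) {
        return a.modPow(n, n).subtract(a).gcd(n);
    }

    public static void main(String[] args) {
        BigInteger two = BigInteger.valueOf(2);
        BigInteger carmichael = BigInteger.valueOf(561);            // 3 * 11 * 17
        BigInteger mersenne = two.pow(13).subtract(BigInteger.ONE); // 8191, prime

        // Carmichael property: a^561 = a (mod 561) for EVERY a,
        // so the gcd is the trivial 561 and a loop on it would spin forever.
        System.out.println(fermatGcd(two, carmichael)); // 561

        // After multiplying in the Mersenne prime, factors are ejected:
        BigInteger n = carmichael.multiply(mersenne);
        System.out.println(fermatGcd(two, n));          // 33 = 3 * 11
    }
}
```
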

Here are some examples:

1)

```
tiggatoo:~/math/we james$ java we2tr2
f168003672409*(2^127-1)
[28584343649372241809274301558307881708368828786343]
base#72
elapsed=0s  factor=6073  A=13044086312830013543739988769
base#70
elapsed=0s  factor=3037  A=860763419187377590694240501658
base#230
elapsed=1s  factor=9109  A=771019052198021707484273222802
170141183460469231731687303715884105727
duration: 1 seconds
```

2)

```
tiggatoo:~/math/we james$ java we2tr2
f173032371289*(2^89-1)
[107101850255573582977951074701018631079]
base#43
elapsed=0s  factor=3067  A=1236330343540696294158885383112
base#149
elapsed=0s  factor=6133  A=559525871748815289923955370298
base#758
elapsed=1s  factor=9199  A=194689393018084379227390724801
618970019642690137449562111
duration: 1 seconds
```

3)

```
tiggatoo:~/math/we james$ java we2tr2
f973694665856161*(2^127-1)
[165665562777913375106491436985024163141370898298334047]
base#3
elapsed=0s  factor=6841  A=1245204978926759694403655173839
base#0
elapsed=0s  factor=2281  A=1024632434440788626941999868632
base#7
elapsed=0s  factor=4561  A=457244767350940464517786915322
base#12
elapsed=0s  factor=13681  A=85779304638188382542657979233
170141183460469231731687303715884105727
duration: 0 seconds
```

And here, for the record (and in case anyone wants to confirm the results above), is the Java source code to we2tr2 (Copyright JGW ~ 2000):

```java
import java.math.BigInteger;
import java.util.Random;
import java.util.Date;

public class we2tr2 {

    static BigInteger zero = new BigInteger("0");
    static BigInteger one = new BigInteger("1");
    static BigInteger two = new BigInteger("2");
    static BigInteger hundred = new BigInteger("100");
    static BigInteger thousand = new BigInteger("1000");
    static BigInteger n = new BigInteger("0");
    static BigInteger p = new BigInteger("0");
    static BigInteger known = new BigInteger("0");
    static BigInteger numtrials = new BigInteger("0");
    static String s;
    static int olds1 = 0;
    static int s1 = 0;
    static Date d;
    static long starttime;
    static long finishtime;
    static long duration;

    public static void main(String args[]) throws java.io.IOException {
        we2tr2 we2tr2inst = new we2tr2();
        char c;
        String sInput;
        StringBuffer sbInput = new StringBuffer("");
        while ((c = (char) System.in.read()) != '\n' && c != '\r')
            sbInput.append(c);
        System.in.read();
        sInput = sbInput.toString().trim();
        if (sInput.charAt(0) == 'f' || sInput.charAt(0) == 'F') {
            s = sInput.substring(1).trim();
            s1 = 0;
            olds1 = 0;
            p = we2tr2inst.eval(s);
            System.out.println('[' + p.toString() + ']');
        } else {
            p = new BigInteger(sInput);
        }
        n = p;
        d = new Date();
        starttime = d.getTime();
        we2tr2inst.factorize(n);
        d = new Date();
        finishtime = d.getTime();
        duration = (finishtime - starttime) / 1000;
        System.out.println("duration: " + duration + " seconds");
        System.out.println();
    }

    public boolean factorize(BigInteger n) {
        boolean prime = false;
        BigInteger numtested = new BigInteger("0");
        BigInteger T = new BigInteger("1");
        BigInteger b = new BigInteger("1");
        BigInteger A = new BigInteger("2");
        BigInteger wanless = new BigInteger("2");
        if (n.isProbablePrime(1000)) {
            prime = true;
            System.out.println(n);
            return prime;
        }
        // workaround - apparent java bug in modPow - JGW
        if (n.compareTo(two) < 0)
            return false;
        if (n.remainder(two).compareTo(zero) == 0) {
            System.out.println(two.toString());
            // added 2006-06-09
            return (factorize(n.divide(two)));
        }
        // end workaround
        while (wanless.compareTo(n) < 0)
            wanless = wanless.multiply(two);
        Random r = new Random();
        numtested = zero;
        while (T.compareTo(one) == 0 || T.compareTo(n) == 0) {
            // changed JW 2005-3-23
            A = new BigInteger(hundred.intValue(), r);
            // added JGW 2006-06-09
            System.out.print("base#" + numtested + '\r');
            // changed DT 2005-2-20
            b = A.modPow(wanless, n);
            T = n.gcd(b.modPow(n, n).subtract(b));
            numtested = numtested.add(one);
        }
        if (T.compareTo(one) > 0 && T.compareTo(n) < 0) {
            d = new Date();
            finishtime = d.getTime();
            duration = (finishtime - starttime) / 1000;
            System.out.println();
            System.out.println("elapsed=" + duration + "s" + '\t' + "factor=" + T.toString() + '\t' + "A=" + A.toString() + '\t');
            factorize(n.divide(T));
        }
        return prime;
    }

    public BigInteger evalRand(char op, BigInteger oldn) {
        BigInteger n = new BigInteger("1");
        switch (op) {
        case 'r':
        case 'R':
            Random r = new Random();
            n = new BigInteger(oldn.intValue(), r);
            break;
        default:
            n = oldn;
            break;
        }
        return n;
    }

    public BigInteger evalFact(BigInteger oldn, char op) {
        BigInteger n = new BigInteger("1");
        BigInteger i = new BigInteger("1");
        BigInteger j = new BigInteger("1");
        boolean prime = true;
        switch (op) {
        case '!':
            for (i = one; i.compareTo(oldn) <= 0; i = i.add(one))
                n = n.multiply(i);
            break;
        case '#':
            for (i = one; i.compareTo(oldn) <= 0; i = i.add(one)) {
                prime = true;
                for (j = two; (prime == true) && (j.multiply(j).compareTo(i) <= 0); j = j.add(one))
                    if (i.remainder(j).compareTo(zero) == 0)
                        prime = false;
                if (prime == true)
                    n = n.multiply(i);
            }
            break;
        default:
            n = oldn;
            break;
        }
        return n;
    }

    public BigInteger evalPower(BigInteger oldn, BigInteger n1, char op) {
        BigInteger n = new BigInteger("0");
        switch (op) {
        case '^':
            n = oldn.pow(n1.intValue());
            break;
        default:
            n = n1;
            break;
        }
        return n;
    }

    public BigInteger evalProduct(BigInteger oldn, BigInteger n1, char op) {
        BigInteger n = new BigInteger("0");
        switch (op) {
        case '*':
            n = oldn.multiply(n1);
            break;
        case '/':
            n = oldn.divide(n1);
            break;
        case '%':
            n = oldn.remainder(n1);
            break;
        default:
            n = n1;
            break;
        }
        return n;
    }

    public BigInteger evalSum(BigInteger oldn, BigInteger n1, char op) {
        BigInteger n = new BigInteger("0");
        switch (op) {
        case '+':
            n = oldn.add(n1);
            break;
        case '-':
            n = oldn.subtract(n1);
            break;
        default:
            n = n1;
            break;
        }
        return n;
    }

    public BigInteger eval(String s) {
        BigInteger oldn0 = new BigInteger("0");
        BigInteger oldn1 = new BigInteger("0");
        BigInteger oldn2 = new BigInteger("0");
        BigInteger n = new BigInteger("0");
        char oldop0 = 0;
        char oldop1 = 0;
        char oldop2 = 0;
        char op = 0;
        while (s1 < s.length()) {
            switch (s.charAt(s1)) {
            case '(':
            case '[':
            case '{':
                s1++;
                n = eval(s);
                break;
            case '0':
            case '1':
            case '2':
            case '3':
            case '4':
            case '5':
            case '6':
            case '7':
            case '8':
            case '9':
                n = readNum(s);
                break;
            default:
                break;
            }
            if (s1 < s.length()) {
                switch (s.charAt(s1)) {
                case ')':
                case ']':
                case '}':
                case '!':
                case '#':
                case 'r':
                case 'R':
                case '^':
                case '*':
                case '/':
                case '%':
                case '+':
                case '-':
                    op = s.charAt(s1);
                    s1++;
                    break;
                default:
                    break;
                }
            } else
                op = 0;
            switch (op) {
            case 0:
            case ')':
            case ']':
            case '}':
                n = evalPower(oldn2, n, oldop2);
                n = evalProduct(oldn1, n, oldop1);
                n = evalSum(oldn0, n, oldop0);
                return n;
            case '!':
            case '#':
                n = evalFact(n, op);
                break;
            case 'r':
            case 'R':
                n = readNum(s);
                n = evalRand(op, n);
                break;
            case '^':
                n = evalPower(oldn2, n, oldop2);
                oldn2 = n;
                oldop2 = op;
                break;
            case '*':
            case '/':
            case '%':
                n = evalPower(oldn2, n, oldop2);
                oldop2 = 0;
                n = evalProduct(oldn1, n, oldop1);
                oldn1 = n;
                oldop1 = op;
                break;
            case '+':
            case '-':
                n = evalPower(oldn2, n, oldop2);
                oldop2 = 0;
                n = evalProduct(oldn1, n, oldop1);
                oldop1 = 0;
                n = evalSum(oldn0, n, oldop0);
                oldn0 = n;
                oldop0 = op;
                break;
            default:
                break;
            }
        }
        return n;
    }

    public BigInteger readNum(String s) {
        BigInteger n = new BigInteger("0");
        olds1 = s1;
        while (s1 < s.length() && Character.isDigit(s.charAt(s1)))
            s1++;
        n = new BigInteger(s.substring(olds1, s1));
        return n;
    }
}
```

## Wednesday, 26 December 2007

### Charles Babbage

Here is a sketch (from Wikipedia) of Charles Babbage (b 26 December 1791).

He was the first to envisage a machine that could compute, though at the time this vision was of a purely mechanical device.

From http://en.wikipedia.org/wiki/Babbage

"He began in 1822 with what he called the difference engine, made to compute values of polynomial functions"

"The first difference engine was composed of around 25,000 parts, weighed fifteen tons (13,600 kg), and stood 8 ft (2.4 m) high. Although he received ample funding for the project, it was never completed"

"In 1991 a perfectly functioning difference engine was constructed from Babbage's original plans. Built to tolerances achievable in the 19th century, the success of the finished engine indicated that Babbage's machine would have worked"
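
The difference engine's method of finite differences is worth sketching: once the initial column of differences of a polynomial is loaded, every further value is produced by additions alone, which is exactly what made a purely mechanical implementation feasible. A speculative modern sketch (my own names, obviously not Babbage's design):

```java
public class DifferenceEngine {

    // Tabulate a polynomial of degree (diffs.length - 1) by repeated addition.
    // diffs[0] is p(0); diffs[k] is the k-th forward difference at 0.
    static long[] tabulate(long[] diffs, int count) {
        long[] d = diffs.clone();
        long[] out = new long[count];
        for (int i = 0; i < count; i++) {
            out[i] = d[0];
            for (int k = 0; k < d.length - 1; k++)
                d[k] += d[k + 1];   // the only operation used: addition
        }
        return out;
    }

    public static void main(String[] args) {
        // p(x) = x^2 + x + 1: p(0) = 1, first difference 2, second difference 2.
        long[] values = tabulate(new long[]{1, 2, 2}, 5);
        for (long v : values)
            System.out.println(v);  // 1, 3, 7, 13, 21
    }
}
```
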

## Tuesday, 25 December 2007

### P2007

To celebrate Christmas 2007, for the past few days I have been factorizing P2007.

Here are the results:

```
ibu:~/math james$ time java superfac9
f2^2007+1
[14696072899510457910180264975074329395485666586735298566113827031369808145822340017365241424851280254956108347379039523500123122699047108242251921358933160773008638610599971840088163730974725743542902654728126239332046779346737710585256579333179693308275839559444787047544912589519783891140629020412202583212053620350010688717104574055412999539319651392054912347738448106306817040926244005345442289064602444671410741520258787821875717396461207456197233847539467765831034596299478021012490490523728714592688694474716929987628644661687302977141155300336976022455747686505323874664699578081559660947075760129]
wanless...
brutep: 3
wanless...
brutep: 3
wanless...
brutep: 3
brute...
brutep: 19
wanless...
ecm...
aprtcle: 247531
ecm...
aprtcle: 219256122131
wanless...
aprtcle: 20493495920905043950407650450918171260318303154708405513
ecm...
aprtcle: 4340301546362831119363
aprtcle: 56377694445208154141927654613855613062927113955212040908548454699046039020893338370875013074480485757794923
ecm...
aprtcle: 73215361
ecm... ^C

real 3459m6.204s
user 3424m7.740s
sys 5m57.800s
```

## Monday, 24 December 2007

### Andrew Odlyzko

Andrew Odlyzko (picture reproduced with kind permission)

http://www.dtc.umn.edu/~odlyzko/

is probably mainly known for his contribution to the analysis of Riemann zeros, and the study of discrete logarithms. However, he also helped develop the Lanczos linear algebra stage of the QS (and other sieves), in his paper "Solving large sparse linear systems over finite fields" (1991),

http://citeseer.ist.psu.edu/140341.html

and has also authored a description of the state of factorization, looking ahead, "The future of integer factorization" (1995)

http://www.dtc.umn.edu/~odlyzko/doc/crypto.html

## Sunday, 23 December 2007

### Fibonacci# factorization

Fibonacci Number Factorization:

http://home.att.net/~blair.kelly/mathematics/fibonacci/index.html

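
Incidentally, one reason Fibonacci numbers are pleasant factorization targets is the identity gcd(F(m), F(n)) = F(gcd(m, n)), which hands over algebraic factors for free. A quick sketch (class name mine):

```java
import java.math.BigInteger;

public class FibFactor {

    // Iterative Fibonacci: returns F(n) with F(0) = 0, F(1) = 1.
    static BigInteger fib(int n) {
        BigInteger a = BigInteger.ZERO, b = BigInteger.ONE;
        for (int i = 0; i < n; i++) {
            BigInteger t = a.add(b);
            a = b;
            b = t;
        }
        return a;
    }

    public static void main(String[] args) {
        // gcd(F(12), F(18)) = F(gcd(12, 18)) = F(6) = 8,
        // so 8 divides both F(12) = 144 and F(18) = 2584.
        System.out.println(fib(12).gcd(fib(18))); // 8
    }
}
```
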
## Saturday, 22 December 2007

### Tom Flowers

Today marks the birthday of Dr Tommy Flowers, the builder (working in part with Alan Turing) of the first modern, i.e. digital, computer. It was constructed during WWII here in the UK to decrypt German operations messages, and christened 'Colossus'. It contained 2400 thermionic valves (picture from Wikipedia), each several centimetres long, so its name was suitably apt! These valves basically operated as switches (even though they were primarily designed as amplifiers), serving the same role that transistors on a chip do in modern microprocessors, though the latter contain many millions on a tiny wafer.

The Colossus represented a real advance in computing though, as it was the first time that a machine had been built using electronic rather than mechanical switches for greater speed and flexibility. It was this potential that Flowers recognised.

From http://www.ivorcatt.com/47c.htm

"At the time I had no thought or knowledge of computers in the modern sense and had never heard the term used except to describe somebody who did calculations on a desk machine."

"Colossus was useful in more than one way, and there were even demonstrations applying it to number theory. But these demonstrations were more notable for their ingenuity than for their effectiveness."

## Friday, 21 December 2007

### Partition# factorization

Factorization of Partition Numbers:

http://www.asahi-net.or.jp/~KC2H-MSM/mathland/part/

(see

http://en.wikipedia.org/wiki/Partition_%28number_theory%29

for a description/definition of a 'partition' number)

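
For reference, p(n) counts the ways of writing n as a sum of positive integers, order irrelevant, and a few lines of Java compute it via the standard one-part-size-at-a-time dynamic programme (values grow quickly, hence BigInteger; the class name is mine):

```java
import java.math.BigInteger;

public class Partitions {

    // p[i] = number of partitions of i; admit parts of size k = 1, 2, ...
    // one size at a time, the classic coin-counting recurrence.
    static BigInteger partition(int n) {
        BigInteger[] p = new BigInteger[n + 1];
        java.util.Arrays.fill(p, BigInteger.ZERO);
        p[0] = BigInteger.ONE;
        for (int k = 1; k <= n; k++)
            for (int i = k; i <= n; i++)
                p[i] = p[i].add(p[i - k]);
        return p[n];
    }

    public static void main(String[] args) {
        System.out.println(partition(5));   // 7
        System.out.println(partition(100)); // 190569292
    }
}
```
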
## Thursday, 20 December 2007

### Primorials(+/-1) Factorization

[The first in a trio of links, for this blog, of some types of numbers currently being factorized]

Factorization of Primorials(+/-1):

http://primorial.unit82.com/

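
For reference, the primorial p# is the product of all primes up to p, and the project factorizes the neighbours p# ± 1 (Euclid-style numbers). A quick sketch (class name mine):

```java
import java.math.BigInteger;

public class Primorial {

    // Naive primality test, fine for small trial values.
    static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int d = 2; (long) d * d <= n; d++)
            if (n % d == 0) return false;
        return true;
    }

    // p# = product of all primes <= p
    static BigInteger primorial(int p) {
        BigInteger result = BigInteger.ONE;
        for (int q = 2; q <= p; q++)
            if (isPrime(q))
                result = result.multiply(BigInteger.valueOf(q));
        return result;
    }

    public static void main(String[] args) {
        BigInteger n = primorial(13);                   // 2*3*5*7*11*13 = 30030
        System.out.println(n.add(BigInteger.ONE));      // 30031 = 59 * 509, composite
        System.out.println(n.subtract(BigInteger.ONE)); // 30029, prime
    }
}
```
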
## Wednesday, 19 December 2007

### Factor Announcements

Here is a nice page detailing historical announcements of factorization achievements:

http://www.crypto-world.com/FactorAnnouncements.html

## Tuesday, 18 December 2007

## Monday, 17 December 2007

### Msieve v1.32

New version of msieve available:

Announcement at:

http://mersenneforum.org/showpost.php?p=120869&postcount=344

Download from:

http://www.boo.net/~jasonp/qs.html

It would appear that the promised merge with GGNFS is underway...

## Sunday, 16 December 2007

### Number of University Departments Factorizing

How many University departments around the world are actively engaged in factorization? These spring to mind...

Well, there's Bruce Dodson's university, Lehigh in Pennsylvania, USA, for one - I understand Bruce manages several dozen PCs running ECM curves on the Cunningham Project targets.

Paul Zimmermann, at INRIA in Nancy, France, is in charge of developing the GMP-ECM software itself, which Bruce Dodson (amongst others :) is running. I believe Alex Kruppa has links there, as well as to many PCs in a French grid, GRID5000. I'm guessing the 5000 refers to the approximate, or at least the target, number of cores in the grid:

https://www.grid5000.fr/mediawiki/index.php/Special:G5KHardware

Greg Childers, at Cal State, Fullerton, California USA, also has several dozen machines available for factoring, which he uses on a variety of projects.

At Cambridge University, UK, Paul Leyland has contributed, and still is contributing, to a variety of projects, including the Cunningham Project (via NFSNET - more on that project at a later date, hopefully) and, more recently, the Homogeneous Cunninghams.

Then of course there is the (legendary?) CWI, or Centrum voor Wiskunde en Informatica in Amsterdam in the Netherlands...

There must be many more (maybe these are just the most vociferous!:), feel free to add to the list, you other folks out there [or let me know of specific omissions] - but that's 5 for starters, [I seem to have picked mainly computational rather than theoretical research] almost without thinking!

## Saturday, 15 December 2007

### Tesla

...or you can buy a dedicated device (one example pictured (c) NVIDIA), called a 'Tesla', with the NVIDIA CUDA GPUs in situ:

http://en.wikipedia.org/wiki/NVIDIA_Tesla

## Friday, 14 December 2007

### Msieve V1.31

New version of msieve (by Jason Papadopoulos) now available. It seems this is a significant upgrade, with lots of new stuff.

Download from:

http://www.boo.net/~jasonp/qs.html

Announcement here:

http://mersenneforum.org/showpost.php?p=120642&postcount=339

## Thursday, 13 December 2007

### CUDA

Here are some links to an interesting new technology, which might be very useful for any "embarrassingly parallel" (see an earlier blog post for a definition of that) factorization (or other) algorithms. It's basically a method of using all the processing power inherent in modern GPUs for CPU-type calculations, and something I for one will be keeping an eye on...

http://en.wikipedia.org/wiki/CUDA

http://developer.nvidia.com/object/cuda.html#downloads

http://courses.ece.uiuc.edu/ece498/al1/

http://courses.ece.uiuc.edu/ece498/al1/Syllabus.html

It's even coming to Mac! :)

http://forums.nvidia.com/index.php?showtopic=47884

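
By way of illustration, "embarrassingly parallel" just means the work splits into completely independent pieces needing no communication - independent ECM curves, say, or the disjoint trial divisors in this hypothetical CPU-side Java sketch (the names and test values are mine; a real CUDA version would map each divisor range to a GPU thread):

```java
import java.util.stream.LongStream;

public class ParallelTrialDivision {

    // Each candidate divisor is tested independently of all the others,
    // so the search parallelizes with no coordination at all.
    static long smallestFactor(long n, long limit) {
        return LongStream.rangeClosed(2, limit)
                .parallel()
                .filter(d -> n % d == 0)
                .min()
                .orElse(n); // no divisor below limit: treat n as prime
    }

    public static void main(String[] args) {
        long n = 600851475143L; // an arbitrary composite test value
        System.out.println(smallestFactor(n, 100000)); // 71
    }
}
```
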
## Wednesday, 12 December 2007

### GMP-M+2

I've recently noticed an interesting feature of GMP's implementation of mod_pow (i.e. modular exponentiation). I'm using this as part of random-base WEP on large M+2's. Anyway, I always imagined, and hoped, that it would use "Russian Peasant" (see an earlier post on this blog for a description). If that is the case,

http://gmplib.org/manual/Powering-Algorithms.html#Powering-Algorithms

there should theoretically be a speed advantage to testing numbers that are (very) close to a power of 2 (as M+2's obviously are). And indeed I am observing just that - a 40% (!) speed increase testing the _raw_ M+2, rather than dividing out any factors (even though dividing out factors obviously leaves a slightly smaller number).

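
For reference, "Russian Peasant" exponentiation is binary square-and-multiply. A sketch in Java (my own, not GMP's actual code): the point of the observation above is that every step ends with a reduction mod n, and that reduction is cheap when n sits just above a power of 2, as an M+2 does.

```java
import java.math.BigInteger;

public class RussianPeasant {

    // Left-to-right binary exponentiation: square for every exponent bit,
    // multiply in a when the bit is set, reducing mod n at each step.
    static BigInteger modPow(BigInteger a, BigInteger e, BigInteger n) {
        BigInteger result = BigInteger.ONE;
        for (int i = e.bitLength() - 1; i >= 0; i--) {
            result = result.multiply(result).mod(n);
            if (e.testBit(i))
                result = result.multiply(a).mod(n);
        }
        return result;
    }

    public static void main(String[] args) {
        BigInteger a = BigInteger.valueOf(3);
        BigInteger e = BigInteger.valueOf(1000);
        BigInteger n = BigInteger.valueOf(7919);
        // agrees with the library routine
        System.out.println(modPow(a, e, n).equals(a.modPow(e, n))); // true
    }
}
```
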
## Tuesday, 11 December 2007

### C138_101_75

I'm a total xyyxf junkie! :)

Recently started another GNFS factorization for them, this time of 138-digit input number.

ETA several months... [on 1 CPU] (assuming all goes well). Currently sieved 9502 of the 839184 relations required.

Hopefully the process will further increase my understanding of GNFS, which is kind of the point!

## Monday, 10 December 2007

### WIFC

This entry also is probably somewhat overdue:

Check out Hisanori Mishima's comprehensive page on factorization at:

http://www.asahi-net.or.jp/%7EKC2H-MSM/mathland/matha1/

Note that there are also download links to factoring software (by Satoshi Tomabechi) on this page (especially MPQS-type factoring)

## Sunday, 9 December 2007

### Cunningham Project

This is probably somewhat overdue, but here are a couple of links to the Cunningham Project, "one of the oldest continuously ongoing activities in computational number theory". [apparently it's been around since 1925]

http://en.wikipedia.org/wiki/Cunningham_project

http://homes.cerias.purdue.edu/~ssw/cun/index.html

## Saturday, 8 December 2007

### So just how big are these numbers?

So just how big are these numbers?

Well, mathematics is unique among the sciences in generating really SILLY (or impressive, depending on your point of view :) sized numbers. Even (standard) physics doesn't come close, and can't even stand comparison. Hence the other-worldly nature of math (and mathematicians! :)). Possibly the combinatorial aspects of holistic physics theories can generate big numbers, but in that case the theory is as much mathematical as physical anyway.

The way math generates these numbers is (usually) by a trick called 'exponentiation'. This allows one to write (and sometimes even calculate with and manipulate) a very long number in a very succinct form. Exponentiation is basically a shorthand notation for multiplying a number by itself many times. Thus 10^100 (also known as a googol) is the number you get by multiplying together one hundred 10's. If written out in full it is a '1' followed by 100 zeroes. And this isn't even that big by math (or even factorization) standards. Some of the larger M+2 numbers I've been testing would have millions of digits if written out in full. You can see why I called these numbers 'silly'-sized! And if you want to go really crazy in math, you can even iterate/stack the exponents... (notations exist for this, e.g. Knuth's up-arrow notation)
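For the record, it only takes a couple of lines of Java (the language of the superfac9 code elsewhere on this blog) to confirm the googol claim:

```java
import java.math.BigInteger;

public class Googol {
    static BigInteger googol() {
        return BigInteger.TEN.pow(100); // 10^100
    }

    public static void main(String[] args) {
        String s = googol().toString();
        System.out.println(s.length());                               // 101: a '1' followed by 100 zeroes
        System.out.println(s.startsWith("1") && s.substring(1).matches("0+")); // true
    }
}
```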

I expect many of you will be aware of the story of the Chinese inventor of chess? Apparently the emperor was so impressed with chess that he let the inventor (obviously a mathematician too!) name his own reward, based on the 64 squares of the chess-board - and was tricked into granting this:

1 grain of rice for the first square, 2 for the second, 4 for the third, 8 for the fourth, and so on, doubling each time. The total number of grains of rice is 2^64 - 1, or about 1.8*10^19 - far more than the whole production of China, and the emperor was never able to make good his promise...! This doubling process is another example of exponentiation, as the emperor clearly learned the hard way.
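The chess-board total is easy to verify by brute force in Java - summing square by square and comparing with the closed form 2^64 - 1:

```java
import java.math.BigInteger;

public class Chessboard {
    // Sum 1 + 2 + 4 + ... + 2^63 grains, square by square.
    static BigInteger totalGrains() {
        BigInteger total = BigInteger.ZERO;
        BigInteger grains = BigInteger.ONE;
        for (int square = 1; square <= 64; square++) {
            total = total.add(grains);
            grains = grains.shiftLeft(1); // double for the next square
        }
        return total;
    }

    public static void main(String[] args) {
        // Matches the closed form 2^64 - 1 = 18446744073709551615 (about 1.8*10^19)
        System.out.println(totalGrains().equals(
                BigInteger.valueOf(2).pow(64).subtract(BigInteger.ONE))); // true
    }
}
```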

For comparison:

1) Number of seconds elapsed since Genesis ~ 2*10^11 (just a measly 12-digit number)

2) Number of seconds elapsed since the Jurassic ~ 5*10^15 (just a measly 16-digit number)

3) Number of seconds elapsed since the start of the visible universe ~ 4*10^17 (just a measly 18-digit number)

Incidentally, the complete factorization of any of these (exact) numbers on a modern PC would only take a (very) split-second.

I'll repeat...

Some of the numbers under test at the Mersenneplustwo project, for example, have _millions_ of digits. See now why I used the adjective 'SILLY'? There really is almost no other word for numbers of this size.

Finally, and this is perhaps the most remarkable fact of all - mathematics also deals with the infinite. And compared to infinity (ie the _whole_ Universe) every single one of these numbers is actually infinitesimally SMALL! Now how do you get your head around that??? I know I for one, have problems with that...

Ahh, the mysteries of the finite and the infinite - or as Shakespeare famously put it:

"To be or not to be, that is the question"

## Friday, 7 December 2007

### M+2 record

An update from the Mersenneplustwo project.

Here is a record of the number, and size, of new factors found, as well as an indication of the total effort, for each year since 2005:

2005 - 10 (largest 38-digits) - 4GHz-yrs

2006 - 3 (largest 19-digits) - 12GHz-yrs

2007[so far] - 2 (largest 23-digits) - 100GHz-yrs

## Thursday, 6 December 2007

### M+2 progress

A bit of news from the Mersenneplustwo project:

As hoped for, getting Lenny the new MacBook going on those mid-range M+2's with mprime (ie by ECM) recently yielded a 21-digit factor from (M+2)1257787:

http://bearnol.is-a-geek.com/Mersenneplustwo/Mersenneplustwo.html

Thanks once again to George Woltman, for his cool software!

http://www.mersenne.org/freesoft.htm

This is the second new factor found this year (both by mprime)...

## Wednesday, 5 December 2007

## Tuesday, 4 December 2007

### FFTs-part5

The Cooley-Tukey algorithm is an FFT algorithm especially well suited, and widely adapted, to large integer multiplication. It was first published in 1965.

GIMPS' mprime uses "radix-4" Cooley-Tukey to achieve a computation time of order N log N (for multiplying two size-N inputs).

http://en.wikipedia.org/wiki/Cooley-Tukey_FFT_algorithm
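The radix-4 kernels in mprime are hand-tuned assembly, but the underlying divide-and-conquer idea shows up already in a toy radix-2 version - a Java sketch for illustration, not the GIMPS code:

```java
public class Fft {
    // Recursive radix-2 Cooley-Tukey: split into even- and odd-indexed halves,
    // transform each, then combine with the twiddle factors e^(-2*pi*i*k/n).
    // re and im hold the real and imaginary parts; n must be a power of 2.
    static void fft(double[] re, double[] im) {
        int n = re.length;
        if (n == 1) return;
        double[] er = new double[n / 2], ei = new double[n / 2];
        double[] or = new double[n / 2], oi = new double[n / 2];
        for (int k = 0; k < n / 2; k++) {
            er[k] = re[2 * k];     ei[k] = im[2 * k];     // even-indexed samples
            or[k] = re[2 * k + 1]; oi[k] = im[2 * k + 1]; // odd-indexed samples
        }
        fft(er, ei);
        fft(or, oi);
        for (int k = 0; k < n / 2; k++) {
            double ang = -2 * Math.PI * k / n;
            double wr = Math.cos(ang), wi = Math.sin(ang);
            double tr = wr * or[k] - wi * oi[k]; // twiddle * odd[k]
            double ti = wr * oi[k] + wi * or[k];
            re[k] = er[k] + tr;         im[k] = ei[k] + ti;
            re[k + n / 2] = er[k] - tr; im[k + n / 2] = ei[k] - ti;
        }
    }

    public static void main(String[] args) {
        double[] re = {1, 0, 0, 0}, im = {0, 0, 0, 0}; // unit impulse
        fft(re, im);
        // The DFT of an impulse is flat: every bin is 1 + 0i
        for (int k = 0; k < 4; k++) System.out.printf("%.1f %.1f%n", re[k], im[k]);
    }
}
```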

## Monday, 3 December 2007

### FFTs-part4

Parseval's Theorem

A special case of Plancherel's Theorem, when function x = function y. Then the scaling constant = 1/(2*pi) [FT] or 1/N [DFT]

http://en.wikipedia.org/wiki/Parseval%27s_theorem

"The interpretation of this form of the theorem is that the total energy contained in a waveform x(t) summed across all of time t is equal to the total energy of the waveform's Fourier Transform X(f) summed across all of its frequency components f."
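The DFT form of the identity is easy to check numerically with a naive O(N^2) transform - a Java sketch (the helper names are mine):

```java
public class Parseval {
    // Naive O(N^2) DFT, returning {re, im}; enough to check the identity numerically.
    static double[][] dft(double[] x) {
        int n = x.length;
        double[] re = new double[n], im = new double[n];
        for (int k = 0; k < n; k++)
            for (int t = 0; t < n; t++) {
                double ang = -2 * Math.PI * k * t / n;
                re[k] += x[t] * Math.cos(ang);
                im[k] += x[t] * Math.sin(ang);
            }
        return new double[][]{re, im};
    }

    // Energy in the time domain vs (1/N) times energy in the frequency domain.
    static double[] energies(double[] x) {
        double time = 0;
        for (double v : x) time += v * v;
        double[][] X = dft(x);
        double freq = 0;
        for (int k = 0; k < x.length; k++)
            freq += X[0][k] * X[0][k] + X[1][k] * X[1][k];
        return new double[]{time, freq / x.length};
    }

    public static void main(String[] args) {
        double[] e = energies(new double[]{1, 2, 3, 4});
        System.out.println(e[0] + " " + e[1]); // the two totals agree (both ~30), as Parseval promises
    }
}
```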

## Sunday, 2 December 2007

### FFTs-part3

Plancherel's Theorem

A (unitary) relationship between the DFTs of any two functions x and y under multiplication (subject to a constant scaling factor), which property can be of additional use in simplifying the problem under computation.

http://en.wikipedia.org/wiki/Plancherel_theorem

## Saturday, 1 December 2007

### FFTs-part2

The DFT (discrete Fourier transform) is basically a digital sampling (ie a discrete approximation) of the analog Fourier transform, and as such is defined on a finite domain.

http://en.wikipedia.org/wiki/Discrete_Fourier_transform

A fast Fourier transform (FFT) is simply an efficient algorithm for computing the discrete Fourier transform, or its inverse.

http://en.wikipedia.org/wiki/Fast_Fourier_transform

## Friday, 30 November 2007

### FFTs-part1

The Fourier Transform - what is it?

Well, suppose we have some sum or computation we wish to evaluate, for example to calculate a*b for some a and b, e.g. large integers (of several thousand digits or more).

Sometimes it can be easier to map each of the inputs to a new domain via some transform, perform the multiplication there (approximating suitably if necessary), and then apply the inverse transform to recover the answer. One such transform is the Fourier transform (and its inverse); because of the way it maps objects/events to wave functions (expressed in terms of sines and cosines) and back, its two domains of operation are termed 'time' and 'frequency'.

http://en.wikipedia.org/wiki/Fourier_transform

http://mathworld.wolfram.com/FourierTransform.html

http://mathworld.wolfram.com/FourierSeries.html

Obviously, for the process to work, the transform must be invertible. This is guaranteed by the orthogonality of the underlying individual waveforms (Sturm-Liouville theory).

http://en.wikipedia.org/wiki/Sturm-Liouville_theory

## Thursday, 29 November 2007

### Plancherel

## Wednesday, 28 November 2007

### C127_113_36

Well, I finally completed my factorization of C127_113_36 for the XYYXF project - and by the recommended method - ie sieving by GGNFS, followed by post-processing with msieve. Thanks to Greg Childers, Bob Backstrom and Hallstein Hansen, who helped me with this transfer.

Basically msieve needs three files:

1) worktodo.ini - just containing the input number

2) msieve.fb - with the details (roughly) transferred from the .poly file of GGNFS

3) msieve.dat - with all the actual relations from GGNFS, translated to msieve format by procrels w/ the following crucial command:

"procrels -fb C127_113_36.fb -prel rels.bin -dump"

After this it's just a case of running msieve w/

"msieve -nc -v"

In the end the sieving [Lintel] took me about 3 months(!), because of memory limitations and the GGNFS 'bounce', while the postprocessing in msieve [PPC-Tiger] finished on schedule in less than 2 days.

## Tuesday, 27 November 2007

### Sieving Records

In fact, here is a nice page detailing historical records of factorizations using sieving methods:

http://www.crypto-world.com/FactorRecords.html

## Monday, 26 November 2007

### Quadratic Sieve

Also, here is some info on the Quadratic Sieve (QS or MPQS, or, with slight variation, SIQS) method, invented in 1981 by Carl Pomerance, as an improvement to Dixon's Method:

http://en.wikipedia.org/wiki/Quadratic_sieve

http://mathworld.wolfram.com/QuadraticSieve.html

From the former link:

"On April 2, 1994, the factorization of RSA-129 was completed using QS. It was a 129-digit number, the product of two large primes, one of 64 digits and the other of 65. The factor base for this factorization contained 524339 primes. The data collection phase took 5000 MIPS-years, done in distributed fashion over the Internet. The data collected totaled 2GB. The data processing phase took 45 hours on Bellcore's MasPar (massively parallel) supercomputer. This was the largest published factorization by a general-purpose algorithm, until NFS was used to factor RSA-130, completed April 10, 1996."

### Dixon's Method

And here is some info about Dixon's method, which is related to CFRAC, and the precursor to most other modern sieving methods.

http://en.wikipedia.org/wiki/Dixon%27s_factorization_method

http://mathworld.wolfram.com/DixonsFactorizationMethod.html

This method was first published in 1981.

## Sunday, 25 November 2007

### CFRAC

Here are some links describing the CFRAC, or "continued fraction" factorization method:

http://en.wikipedia.org/wiki/Continued_fraction_factorization

http://mathworld.wolfram.com/ContinuedFractionFactorizationAlgorithm.html

Notable achievements of this method, first envisaged in 1931, include the factorization of F7, the seventh Fermat number, in 1970 by Morrison and Brillhart.

## Saturday, 24 November 2007

### Rho:x^2-2

Most polynomials work nicely with Pollard Rho.

However, f(x)=x^2 and f(x)=x^2-2 should be avoided.

Here's what John Pollard had to say by way of reason, in 1975 [BIT 15, P.333]:

"(i) that all polynomials x^2+b seem equally good in (1) except that x^2 and x^2-2 should not be used (whatever the starting value x0), the latter for reasons connected with its appearance in the Lucas-Lehmer test for primality of the Mersenne Numbers [3],"

while Knuth has this to say [TAOCP Vol.2 P.386]:

"In those rare cases where failure occurs for large N, we could try using f(x)=x^2+c for some c<>0 or 1. The value c=-2 should also be avoided, since the recurrence x_(m+1) = (x_m)^2-2 has solutions of the form x_m = r^(2^m) + r^-(2^m). Other values of c do not seem to lead to simple relationships mod p, and they should all be satisfactory when used with suitable starting values."
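Put together, a minimal Floyd-style Pollard rho in Java, with the safe default polynomial f(x)=x^2+1, looks like this (a sketch for illustration, not the superfac9 implementation):

```java
import java.math.BigInteger;

public class Rho {
    // Pollard's rho with f(x) = x^2 + c; per Pollard and Knuth above,
    // avoid c = 0 and c = -2 (and retry with a different c on failure).
    static BigInteger rho(BigInteger n, long c) {
        BigInteger cc = BigInteger.valueOf(c);
        BigInteger x = BigInteger.valueOf(2), y = x, d = BigInteger.ONE;
        while (d.equals(BigInteger.ONE)) {
            x = x.multiply(x).add(cc).mod(n); // tortoise: one step
            y = y.multiply(y).add(cc).mod(n); // hare: two steps
            y = y.multiply(y).add(cc).mod(n);
            d = x.subtract(y).gcd(n);         // gcd(|x - y|, n)
        }
        return d.equals(n) ? null : d; // null = failure; retry with another c
    }

    public static void main(String[] args) {
        System.out.println(rho(BigInteger.valueOf(8051), 1)); // 8051 = 97 * 83; prints 97
    }
}
```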

## Friday, 23 November 2007

### 2007 chips

I recently came across the following page, which has some interesting speed comparisons of the various types of modern chips...

http://www.tomshardware.com/2007/07/16/cpu_charts_2007/page36.html

## Thursday, 22 November 2007

### Desktop2

## Wednesday, 21 November 2007

### LIM (Part 4) - Schonhage-Strassen

The Schönhage-Strassen algorithm (SSA) is an asymptotically fast multiplication algorithm for large integers, developed in 1971.

http://en.wikipedia.org/wiki/Sch%C3%B6nhage-Strassen_algorithm

It uses Fast Fourier transforms (FFTs) (more on these at a later date hopefully) and its run-time complexity is of order n log n log log n. [Note that the FFT must be performed modulo 2^n+1 for a suitable n, but by choosing n large enough this equates to a regular multiplication]

This means that SSA outperforms Karatsuba or Toom-Cook for numbers with tens of thousands of digits or more. An example of its implementation is in GIMPS' Prime95/mprime software. A second example is the recent addition of SSA to the open-source math library GMP.

## Tuesday, 20 November 2007

### LIM (Part 3) - Toom-Cook

Another method of multiplication is called Toom-Cook, first described in 1963.

http://en.wikipedia.org/wiki/Toom-Cook_multiplication

This is basically a generalization of the Karatsuba method: it splits the input numbers into multiple parts at a time, rather than just two (as in Karatsuba) or one, ie no splitting at all (as in classical long multiplication).

Toom-3 (3-way Toom-Cook) reduces 9 multiplications to 5, and runs in order n^(log5/log3) time.

## Monday, 19 November 2007

### WEP-M+2 milestone

The WEP-M+2 Project has just announced that it has reached the milestone of 1000 instances of finding the 12-digit factor of (M+2)2203. Thanks to everyone who has participated so far.

http://bearnol.is-a-geek.com/wanless2/

I estimate (I'm the project admin :) that those 1000 instances equate to about 27 CPU-years (modern CPU cores).

### LIM (Part 2) - Karatsuba

A first improvement to long multiplication is the Karatsuba Algorithm.

http://en.wikipedia.org/wiki/Karatsuba_algorithm

This was invented in 1960, and has a time complexity of order n^(log_2(3)).

It relies on the observation that two-digit multiplication can be done with only 3 multiplications, rather than the 4 classically required. By "dividing and conquering" (ie recursively splitting) the numbers to be multiplied, this can be extended to larger numbers.
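The split can be sketched directly in Java on top of BigInteger (purely illustrative - BigInteger.multiply is of course already fast, and real implementations split on machine words rather than arbitrary bit positions):

```java
import java.math.BigInteger;

public class Karatsuba {
    // Karatsuba: write x = x1*2^m + x0 and y = y1*2^m + y0; then
    // x*y = z2*2^(2m) + z1*2^m + z0 using only THREE recursive products:
    //   z2 = x1*y1, z0 = x0*y0, z1 = (x1+x0)*(y1+y0) - z2 - z0.
    static BigInteger mul(BigInteger x, BigInteger y) {
        int n = Math.max(x.bitLength(), y.bitLength());
        if (n <= 64) return x.multiply(y); // base case: small enough for the hardware
        int m = n / 2;
        BigInteger x1 = x.shiftRight(m), x0 = x.subtract(x1.shiftLeft(m));
        BigInteger y1 = y.shiftRight(m), y0 = y.subtract(y1.shiftLeft(m));
        BigInteger z2 = mul(x1, y1);
        BigInteger z0 = mul(x0, y0);
        BigInteger z1 = mul(x1.add(x0), y1.add(y0)).subtract(z2).subtract(z0);
        return z2.shiftLeft(2 * m).add(z1.shiftLeft(m)).add(z0);
    }

    public static void main(String[] args) {
        BigInteger a = BigInteger.TEN.pow(50).add(BigInteger.valueOf(7));
        BigInteger b = BigInteger.TEN.pow(50).add(BigInteger.valueOf(13));
        System.out.println(mul(a, b).equals(a.multiply(b))); // agrees with the library: true
    }
}
```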

## Sunday, 18 November 2007

### Large Integer Multiplication (LIM) Part 1

Large Integer Multiplication covers the algorithms used to multiply two large integers together efficiently.

This is used constantly by factorization algorithms: for exponentiation, for example, where it is combined with the Russian Peasant method for extra speed (see earlier), and in fact often for division too (by multiplying by an approximate inverse of the divisor).

"Long Multiplication" is the obvious, and naive, method, but the time complexity of this is of order n^2 for two n-digit integers. So a number of improvements have been suggested. These will be examined in later parts of this series. For the moment read all about LIM on Wikipedia:

http://en.wikipedia.org/wiki/Multiplication_algorithm
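For concreteness, here is the schoolbook O(n^2) method over base-10 digit strings - the baseline that Karatsuba and friends improve on (a sketch; the names are mine):

```java
public class LongMult {
    // Schoolbook O(n^2) long multiplication of base-10 digit strings.
    static String mul(String a, String b) {
        int[] prod = new int[a.length() + b.length()];
        // cross-multiply every digit pair, accumulating into place-value slots
        for (int i = a.length() - 1; i >= 0; i--)
            for (int j = b.length() - 1; j >= 0; j--)
                prod[i + j + 1] += (a.charAt(i) - '0') * (b.charAt(j) - '0');
        for (int k = prod.length - 1; k > 0; k--) { // propagate carries
            prod[k - 1] += prod[k] / 10;
            prod[k] %= 10;
        }
        StringBuilder sb = new StringBuilder();
        for (int d : prod) sb.append(d);
        int i = 0;
        while (i < sb.length() - 1 && sb.charAt(i) == '0') i++; // strip leading zeros
        return sb.substring(i);
    }

    public static void main(String[] args) {
        System.out.println(mul("12345", "6789")); // 83810205
    }
}
```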

## Saturday, 17 November 2007

### Integer Factorization Records

"Integer factorization records" on Wikipedia has a summary of the current, and recent, state-of-play for the biggest non-trivial numbers yet factored:

http://en.wikipedia.org/wiki/Integer_factorization_records

Note that these have all been achieved with some form of Number Field Sieve, either General (for numbers of no especial form), or Special (for the two Mersenne numbers cited).

## Friday, 16 November 2007

### Msieve v1.30

New version [primarily a bugfix] of msieve (by Jason Papadopoulos) now available.

Download from:

http://www.boo.net/~jasonp/qs.html

### The Magic Words are Squeamish Ossifrage

Here is a rather magnificent looking ossifrage [picture from Wikipedia] (though I don't know whether he's especially squeamish! :)

The connection with factorization?

Well the first ever RSA challenge, posed by Martin Gardner in 1977, had this phrase as its encrypted solution - deciphered in 1994 for the $100 prize:

http://en.wikipedia.org/wiki/The_Magic_Words_are_Squeamish_Ossifrage

## Thursday, 15 November 2007

### SQUFOF

Shanks' square forms factorization was devised as an improvement on Fermat's method.

Here is its entry on Wikipedia:

http://en.wikipedia.org/wiki/SQUFOF

and here is its implementation in superfac9:

    BigInteger factorizeshanks(BigInteger n) {
        BigInteger k = sqrt(n);
        if (fastsquareQ(n)) return k; // n itself is a perfect square
        BigInteger a = k, h1 = k, h2 = ONE;
        BigInteger pp = ZERO, qq = ONE, qqq = n, r = ZERO;
        BigInteger p, q, te, f = ZERO, i = ZERO;
        for (BigInteger count = ONE; count.compareTo(TENTHOUSAND) < 0; count = count.add(ONE)) {
            // advance the continued-fraction expansion of sqrt(n)
            p = k.subtract(r);
            q = qqq.add(a.multiply(pp.subtract(p)));
            a = (p.add(k)).divide(q);
            r = (p.add(k)).remainder(q);
            te = (a.multiply(h1)).add(h2);
            h2 = h1;
            h1 = te;
            pp = p;
            qqq = qq;
            qq = q;
            te = sqrt(q);
            i = i.add(ONE);
            // look for a square form at an even index
            if ((i.remainder(TWO).compareTo(ZERO)) != 0 || !fastsquareQ(q)) continue;
            te = h2.subtract(te);
            f = n.gcd(te);
            if (f.compareTo(ONE) > 0 && f.compareTo(n) < 0)
                return f; // non-trivial factor found
        }
        return f;
    }

## Wednesday, 14 November 2007

### RSA-155

RSA-155 (a 155-digit semiprime) was factored on August 22, 1999, by GNFS.

RSA-155 = 102639592829741105772054196573991675900716567808038066803341933521790711307779

* 106603488380168454820927220360012878679207958575989291522270608237193062808643

Read more about it at Wikipedia:

http://en.wikipedia.org/wiki/RSA-155

and on the official announcement:

http://listserv.nodak.edu/cgi-bin/wa.exe?A2=ind9908&L=nmbrthry&P=1905

## Tuesday, 13 November 2007

### snfspoly

There is also 'snfspoly' - the equivalent of 'phi' for XYYXF composites, which can be difficult to find - search for it on the yahoo XYYXF mailing list, or email me...

## Monday, 12 November 2007

### Phi

Alex Kruppa has written a small (but growing) program, licensed under the GPL, called 'phi', for generating SNFS polynomials for use with GGNFS or msieve. It can currently produce correct polys for cyclotomic numbers (eg Cunningham Project) and 'Homogeneous' Cunninghams. Search for the source code (written in 'C', and using GMP) in the factoring section of the mersenneforum.

## Sunday, 11 November 2007

## Saturday, 10 November 2007

### What does the term "embarrassingly parallel" mean?

An algorithm that can be run (very) profitably on many threads simultaneously. For example, trial-division, with a different random seed in each thread as the potential factor, to avoid repeating work.
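A toy Java illustration of the idea (using deterministic ranges split up by the runtime, rather than random seeds - a sketch only):

```java
import java.util.stream.LongStream;

public class ParallelTrial {
    // Embarrassingly parallel trial division: candidate divisors are tested
    // independently, so the search splits across threads with no coordination.
    static long smallestFactor(long n) {
        return LongStream.rangeClosed(2, (long) Math.sqrt(n))
                .parallel()         // each worker thread tests its own slice of candidates
                .filter(d -> n % d == 0)
                .min()              // smallest divisor found across all threads
                .orElse(n);         // no divisor at all: n is prime
    }

    public static void main(String[] args) {
        System.out.println(smallestFactor(600851475143L)); // 71
    }
}
```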

## Friday, 9 November 2007

### FireStream

I imagine these new chips from AMD would be great for embarrassingly parallel apps like most factorization methods...

http://www.pcworld.com/article/id,139413-c,amd/article.html

Here is some more on this series of chips, from Wikipedia:

http://en.wikipedia.org/wiki/AMD_Stream_Processor

## Thursday, 8 November 2007

### Velocity Engine

Apple has produced a sample application, to demonstrate their "Velocity Engine", which uses the vector capability of PPC chips, G4 & G5.

http://en.wikipedia.org/wiki/AltiVec

It happens to be a factorization program! :)

http://developer.apple.com/samplecode/VelEng_Multiprecision/index.html

The program proceeds by trial-division, then rho, and finally ECM - and is quite fast...

## Wednesday, 7 November 2007

### Lenny

Well I've only been and gone and done it! Bought a new MacBook that is, complete with Leopard. It's called "Lenny"

I've already got it running George Woltman's mprime 25.5 for Mac OSX (beta), on medium-sized M+2 numbers, and have high expectations of finding a new factor or two :)

## Tuesday, 6 November 2007

### Richard Brent

Here is a picture of Richard Brent, from his homepage:

http://wwwmaths.anu.edu.au/~brent/

(image linked to in situ)

## Monday, 5 November 2007

### Brent's Method

Brent's improvement (in 1980) to Pollard's rho method replaces Floyd's two-pointer cycle detection: rather than advancing a second, double-speed iterate, it compares the current iterate against a value saved at the most recent power-of-two index, which saves a substantial fraction of the function evaluations.

Full details here, with due reference to Floyd -

http://web.comlab.ox.ac.uk/oucl/work/richard.brent/pd/rpb051i.pdf

The following from Wikipedia:

Input: n, the integer to be factored; x0, such that 0 ≤ x0 ≤ n; m such that m > 0; and f(x), a pseudo-random function modulo n.

Output: a non-trivial factor of n, or failure.

    1. y ← x0, r ← 1, q ← 1.
    2. Do:
       1. x ← y
       2. For i = 1 To r:
          1. y ← f(y)
       3. k ← 0
       4. Do:
          1. ys ← y
          2. For i = 1 To min(m, r − k):
             1. y ← f(y)
             2. q ← (q × |x − y|) mod n
          3. g ← GCD(q, n)
          4. k ← k + m
       5. Until (k ≥ r or g > 1)
       6. r ← 2r
    3. Until g > 1
    4. If g = n then
       1. Do:
          1. ys ← f(ys)
          2. g ← GCD(|x − ys|, n)
       2. Until g > 1
    5. If g = n then return failure, else return g

Also, the pseudocode immediately below is taken from a PD document by Connelly Barnes of Oregon State University

http://oregonstate.edu/~barnesc/documents/factoring.pdf

    function brentFactor(N)
        # Initial values x(i) and x(m) for i = 0.
        xi := 2
        xm := 2
        for i from 1 to infinity
            # Find x(i) from x(i-1).
            xi := (xi ^ 2 + 1) % N
            s := gcd(xi - xm, N)
            if s <> 1 and s <> N then
                return s, N/s
            end if
            if integralPowerOf2(i) then
                xm := xi
            end if
        end do
    end function

Here is its Java implementation from superfac9:

    BigInteger factorizebrent(BigInteger n) {
        BigInteger k, i, ys = ZERO;
        BigInteger m = TEN;          // batch size: differences accumulated per gcd
        BigInteger r = ONE;          // current power-of-two range
        BigInteger iter = ZERO;
        BigInteger z = ZERO;         // candidate gcd
        BigInteger y = ZERO;         // iterate of f(y) = y^2 + 3 mod n
        BigInteger x = ZERO;         // iterate saved at the start of each range
        BigInteger q = ONE;          // running product of differences mod n
        do {
            x = y;
            for (i = ONE; i.compareTo(r) <= 0; i = i.add(ONE))
                y = ((y.multiply(y)).add(THREE)).mod(n);
            k = ZERO;
            do {
                iter = iter.add(ONE);
                // System.out.print("iter=" + iter.toString() + '\r');
                ys = y;              // remember y in case backtracking is needed
                for (i = ONE; i.compareTo(mr_min(m, r.subtract(k))) <= 0; i = i.add(ONE)) {
                    y = ((y.multiply(y)).add(THREE)).mod(n);
                    q = ((y.subtract(x)).multiply(q)).mod(n);
                }
                z = n.gcd(q);        // one gcd per batch of m differences
                k = k.add(m);
            } while (k.compareTo(r) < 0 && z.compareTo(ONE) == 0);
            r = r.multiply(TWO);     // double the range, as in Brent's paper
        } while (z.compareTo(ONE) == 0 && iter.compareTo(TENTHOUSAND) < 0);
        if (z.compareTo(n) == 0)     // batching overshot: redo the steps one at a time
            do {
                ys = ((ys.multiply(ys)).add(THREE)).mod(n);
                z = n.gcd(ys.subtract(x));
            } while (z.compareTo(ONE) == 0);
        return z;
    }

Achievements of this method include the factorization, in 1980, of the eighth Fermat number:

http://wwwmaths.anu.edu.au/~brent/pub/pub061.html
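
For a self-contained illustration, here is a minimal Java sketch of the brentFactor pseudocode above (class name and iteration cap are my own choices); on the classic rho example 8051 = 83 × 97 it returns the factor 97:

```java
import java.math.BigInteger;

public class BrentDemo {
    // Compare x(i) against the iterate saved at the most recent
    // power-of-two index, instead of advancing a second iterate.
    static BigInteger brentFactor(BigInteger n) {
        BigInteger xi = BigInteger.valueOf(2);   // x(0)
        BigInteger xm = xi;                      // saved x(m)
        for (long i = 1; i < 1_000_000; i++) {
            xi = xi.multiply(xi).add(BigInteger.ONE).mod(n);  // x(i) = x(i-1)^2 + 1 mod n
            BigInteger s = xi.subtract(xm).gcd(n);
            if (s.compareTo(BigInteger.ONE) > 0 && s.compareTo(n) < 0)
                return s;
            if ((i & (i - 1)) == 0)              // i is a power of two
                xm = xi;
        }
        return n;  // give up (arbitrary iteration cap)
    }

    public static void main(String[] args) {
        System.out.println(brentFactor(BigInteger.valueOf(8051)));  // 97
    }
}
```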

## Sunday, 4 November 2007

### Pollard P-1 Method

There is also the Pollard "p-1" method, invented in 1974. It relies on Fermat's Little Theorem, and finds a prime factor p of n whenever p-1 is a product of only small primes (ie is "smooth").

Here is a Wikipedia article about this method:

http://en.wikipedia.org/wiki/Pollard%27s_p_-_1_algorithm

And here is its description on the Mersenne wiki:

http://mersennewiki.org/index.php/P-1_Factorization_Method

Also, the pseudocode immediately below is taken from a PD document by Connelly Barnes of Oregon State University

http://oregonstate.edu/~barnesc/documents/factoring.pdf

    function pollard_p1(N)
        # Initial value 2^(k!) for k = 0, i.e. 2^(0!) = 2.
        two_k_fact := 2
        for k from 1 to infinity
            # Calculate 2^(k!) (mod N) from 2^((k-1)!).
            two_k_fact := modPow(two_k_fact, k, N)
            rk := gcd(two_k_fact - 1, N)
            if rk <> 1 and rk <> N then
                return rk, N/rk
            end if
        end for
    end function
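
To make this concrete, here is a minimal BigInteger sketch of the same idea (class name and bound are my own choices): raising 2 to successive exponents k accumulates 2^(k!) mod n, with a gcd check at each step:

```java
import java.math.BigInteger;

public class P1Demo {
    // Pollard's p-1: a = 2^(k!) mod n, so a - 1 is divisible by any
    // prime p of n for which p-1 divides k! (i.e. p-1 is smooth).
    static BigInteger pollardP1(BigInteger n, int bound) {
        BigInteger a = BigInteger.valueOf(2);
        for (int k = 2; k <= bound; k++) {
            a = a.modPow(BigInteger.valueOf(k), n);        // a = 2^(k!) mod n
            BigInteger g = a.subtract(BigInteger.ONE).gcd(n);
            if (g.compareTo(BigInteger.ONE) > 0 && g.compareTo(n) < 0)
                return g;
        }
        return n;  // no factor found below this bound
    }

    public static void main(String[] args) {
        // 299 = 13 * 23, and 13 - 1 = 12 divides 4!, so 13 falls out quickly
        System.out.println(pollardP1(BigInteger.valueOf(299), 10));
    }
}
```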

## Saturday, 3 November 2007

### A Random Factorization

Below is the factorization of a random 100-digit number.

First in Pari/GP: (which uses Rho, ECM and MPQS)

[NB the time quoted is process-time, rather than real-time]

? #

timer = 1 (on)

? factor(905771525917281232131519213461223147373627632478259763073719184206592688398458994971036043749073482)

time = 12mn, 17,641 ms.

%1 =

[2 1]

[3 1]

[11 1]

[18701 1]

[111977 1]

[122016508135030794072521 1]

[3174449800530489735869567 1]

[16919752823495547077187437987066464785943 1]

...and then with superfac9:

tiggatoo:~/math james$ time java superfac9 < random100d.txt

[905771525917281232131519213461223147373627632478259763073719184206592688398458994971036043749073482]

wanless...

brutep: 2

wanless...

wanless...

brutep: 3

brutep: 11

ecm...

ecm...

aprtcle: 18701

aprtcle: 111977

ecm...

aprtcle: 122016508135030794072521

siqs...

aprtcle: 3174449800530489735869567

aprtcle: 16919752823495547077187437987066464785943

duration: 27490 seconds

Exception in thread "main" java.lang.StringIndexOutOfBoundsException: String index out of range: 0

at java.lang.String.charAt(String.java:558)

at superfac9.main(superfac9.java:192)

real 458m11.888s

user 59m52.469s

sys 0m29.617s

## Friday, 2 November 2007

### Trial Division

Perhaps I should have started with this, but...

Trial division is the simplest and most naive of factorization methods.

It _is_ however, the fastest method for easily eliminating very small factors from composite inputs.

Interestingly, it is also just about the only method capable of finding small (or indeed any) factors of really large numbers of specific algebraic forms, because such forms can be evaluated modulo the candidate factor, keeping all the required calculations small.

http://en.wikipedia.org/wiki/Trial_division

Also, the pseudocode immediately below is taken from a PD document by Connelly Barnes of Oregon State University

http://oregonstate.edu/~barnesc/documents/factoring.pdf

    function trialDivision(N)
        for s from 2 to floor(sqrt(N))
            if s divides N then
                return s, N/s
            end if
        end for
    end function

Here is its Java implementation (as 'brute' - standing for 'brute force') from superfac9:

    BigInteger factorizebrute(BigInteger n) {
        BigInteger a = new BigInteger("2");
        // only candidates below 10000 are tried - this pass is just for small factors
        while (a.compareTo(TENTHOUSAND) < 0 && a.multiply(a).compareTo(n) <= 0) {
            if (n.remainder(a).compareTo(ZERO) == 0)
                return a;
            else
                a = a.add(ONE);
        }
        return n;
    }
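
Run to completion, repeated trial division yields the complete factorization; here is a small standalone sketch (my own, using long rather than BigInteger, and without superfac9's 10000 cap):

```java
import java.util.ArrayList;
import java.util.List;

public class TrialDemo {
    // Repeatedly strip the smallest remaining factor by trial division.
    static List<Long> factor(long n) {
        List<Long> factors = new ArrayList<>();
        for (long d = 2; d * d <= n; d++)
            while (n % d == 0) {       // divide out d as often as it appears
                factors.add(d);
                n /= d;
            }
        if (n > 1)                     // any leftover cofactor is prime
            factors.add(n);
        return factors;
    }

    public static void main(String[] args) {
        System.out.println(factor(8051L));  // [83, 97]
    }
}
```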

## Thursday, 1 November 2007

### P+1 p60

This just in... Alex Kruppa is reporting a new, record-size factor, of 60 digits, found with the P+1 method.

Read all about it here:

http://mersenneforum.org/showthread.php?p=117442#post117442

### Wave-particle duality

Today I (also) have some musings on the wave-particle duality for you, and its effect on factorization.

Clearly there is a mapping (according to QT), from the wave-to-particle domains, or very equivalently, the time-to-space domains.

As integers get larger, the required precision to express that integer increases - leading to a more efficient representation/manipulation in the wave domain, rather than the particle. Hence FFT-methods for eg Large-integer-multiplication (more on that some other time hopefully).

Note that a so-called quantum computer, would also be operating naturally largely in the wave-domain.

This leads me to the suggestion, that maybe a QC would operate best on native representations of FFTs...

### Russian Peasant

'Russian Peasant' is a method for fast exponentiation. It is also known as 'exponentiation by squaring'. The basic idea is to exploit knowledge of the binary expansion of the exponent, by using selective repeated squaring. Depending on the factorization algorithm, and the size of the input number, this can lead to significant performance enhancement.

http://en.wikipedia.org/wiki/Exponentiation_by_squaring

Here is an on-line demo:

http://www.math.umbc.edu/~campbell/NumbThy/Class/BasicNumbThy.html#Modular-PowMod

The classic description is in Knuth (The Art of Computer Programming) 4.6.3

Java code as below:

    static BigInteger power(BigInteger x, BigInteger n, BigInteger mod) {
        // Knuth 4.6.3 - computes x^n modulo mod
        BigInteger N = n;
        BigInteger Y = one;
        BigInteger Z = x;
        while (true) {
            if (N.remainder(two).compareTo(zero) > 0) {
                N = N.divide(two);
                Y = Z.multiply(Y).remainder(mod);
                if (N.compareTo(zero) == 0)
                    return Y;
            } else {
                N = N.divide(two);
            }
            Z = Z.multiply(Z).remainder(mod);
        }
    }
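
Note that java.math.BigInteger's built-in modPow method performs modular exponentiation in essentially this square-and-multiply fashion, so in modern Java the loop above can be replaced by a one-liner:

```java
import java.math.BigInteger;

public class ModPowDemo {
    public static void main(String[] args) {
        BigInteger base = BigInteger.valueOf(2);
        // 2^10 mod 1000 = 1024 mod 1000 = 24
        System.out.println(base.modPow(BigInteger.valueOf(10), BigInteger.valueOf(1000)));
    }
}
```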

## Wednesday, 31 October 2007

### History of factorization records by sieving methods

This graph is by Francois Morain. http://algo.inria.fr/seminars/sem00-01/morain.html

(image linked to in situ)

The relevant figure is towards the bottom of the article, in Section 4.

## Tuesday, 30 October 2007

### Penryn

It would appear that Intel's next-generation processors (Penryn) draw up to 40% less power than current CPUs, under full load. :))) Quite welcome I imagine, in the current climate (pun intended :( ).

http://www.tomshardware.com/2007/10/29/intel_penryn_4ghz_with_air_cooling/page14.html

This is no doubt due to moving to the smaller 45nm die resolution. Paul Otellini (CEO) has said that Penryn will ship 12 November 2007...

## Monday, 29 October 2007

### Coincidence?

Is it coincidence that Fermat's method works best with factors close to the square root of the input number, while NFS is particularly good at factoring semiprimes?

## Sunday, 28 October 2007

### XYYXF Project milestone

Congratulations to the XYYXF project, who have just finished their original target of factoring all composites with x and y <=100, after 6 years! The search continues with x and y extended to 150...

http://xyyxf.at.tut.by/news.html#0

### Msieve v1.29

New version of msieve (by Jason Papadopoulos) now available.

Download from:

http://www.boo.net/~jasonp/qs.html

Announcement at:

http://mersenneforum.org/showthread.php?p=117235#post117235

### Fermat's Method

This 400-year-old method is the basis for many modern sieving factorization methods.

It relies on the discovery of an expression for the input number, as the difference between two (integral) squares.

http://en.wikipedia.org/wiki/Fermat%27s_factorization_method

Also, the pseudocode immediately below is taken from a PD document by Connelly Barnes of Oregon State University

http://oregonstate.edu/~barnesc/documents/factoring.pdf

    function fermatFactor(N)
        for x from ceil(sqrt(N)) to N
            ySquared := x * x - N
            if isSquare(ySquared) then
                y := sqrt(ySquared)
                s := (x - y)
                t := (x + y)
                if s <> 1 and s <> N then
                    return s, t
                end if
            end if
        end for
    end function

And here is its Java implementation (from superfac9) [with thanks to DT]:

    BigInteger factorizefermat(BigInteger n) {
        BigInteger a, bSq;
        long iterations = 1L;
        a = sqrt(n);
        if (n.mod(a).compareTo(ZERO) == 0) return a;
        if (a.multiply(a).compareTo(n) < 0) a = a.add(ONE); // start at ceil(sqrt(n)), so bSq >= 0
        while (iterations < 10000000) {
            bSq = a.multiply(a).subtract(n);
            if (fastsquareQ(bSq))
                return a.subtract(sqrt(bSq)); // bSq is a square, factorization found
            a = a.add(ONE);
            iterations++;
        }
        return n;
    }
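
As a small worked example (a standalone long sketch of my own, separate from the superfac9 code): for n = 5959 the search from ceil(sqrt(5959)) = 78 first hits a square at x = 80, since 80^2 - 5959 = 441 = 21^2, giving the factors 80 - 21 = 59 and 80 + 21 = 101:

```java
public class FermatDemo {
    // Return {s, t} with s * t == n, found as n = x^2 - y^2 = (x-y)(x+y).
    static long[] fermatFactor(long n) {
        long x = (long) Math.ceil(Math.sqrt((double) n));
        while (x <= n) {
            long y2 = x * x - n;
            long y = (long) Math.sqrt((double) y2);
            if (y * y == y2)                  // y2 is a perfect square
                return new long[]{x - y, x + y};
            x++;
        }
        return new long[]{1, n};              // only the trivial split exists
    }

    public static void main(String[] args) {
        long[] f = fermatFactor(5959L);
        System.out.println(f[0] + " * " + f[1]);  // 59 * 101
    }
}
```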

## Saturday, 27 October 2007

### Mark Manasse

Here is a picture of Mark Manasse (reproduced by kind permission), who is notable for his involvement in the factorizations of F9, RSA-110 and RSA-120 in the early 1990s. In fact he helped develop the Number Field Sieve, which counted these numbers among its early successes.

http://www.std.org/~msm/common/f9paper.pdf

[Incidentally this paper, above, I see, also explains why WE works on prime-powers...basically Fermat's Little Theorem in combination w/ the Binomial Theorem]

http://en.wikipedia.org/wiki/RSA-110

http://en.wikipedia.org/wiki/RSA-120

http://www.std.org/~msm/common/nfspaper.pdf

## Friday, 26 October 2007

### "factorization"

Here's a bit of a Meta-post:

Regarding the spelling/usage of the word "factorization".

First off note that there is a synonym for "factorization", namely "factoring" - however since this also has a completely unrelated meaning (financial of some sort IIRC), I generally tend to use the longer form, unless it would sound particularly ugly.

Similarly the verb can just be "to factor", instead of "to factorize", however in this case the correct (mathematical) meaning is usually clearer, since it will be associated with an object (eg C127_113_36).

In addition (just to add to the confusion :) there is an alternative _spelling_ of "factorization" as "factorisation" (in British English).

## Thursday, 25 October 2007

### Desktop

My desktop is pretty busy at the moment.

Here is a current screen grab.

The windows are (left-to-right):

1) Random-based WEP on (M+2)859433

2) WEP on F7

3) WEP on F8

4) WEP on (M+2)4253

5) msieve (NFS) on C127_113_36 (xyyxf)

6) ECM via ECMNet for xyyxf

7) GGNFS on rsa100

8) GGNFS on C127_113_36

All this on a 1.9GHz G5 (iMac called 'tiggatoo')!

## Wednesday, 24 October 2007

### GGNFS on PPC (update)

Basically, the problem of "not enough polynomials in mpqs" is fixed in the latest CVS code (0.77.1-20060722) available on Sourceforge (with a separate, preliminary, classical sieve stage).

This is the story of how I succeeded in obtaining, compiling and running it on my iMac G5 (with thanks to Mark Rodenkirch, who responded to my query on the yahoo GGNFS mailing list).

Step 1)

Obtain, compile and install a 64-bit GMP archive (PPC 970)

Step 2)

Download the source code of GGNFS via CVS from Sourceforge, thus:

"cvs -d:pserver:anonymous@ggnfs.cvs.sourceforge.net:/cvsroot/ggnfs checkout -R branch_0"

The source-tree is also available for browsing at:

http://ggnfs.cvs.sourceforge.net/ggnfs/branch_0/src/

Step 3)

Necessary specifically (atm) for PPC [(probably) not for other architectures]:

tweak/hack the code to include its own implementation [supplied, but disabled] of getline() in if.c by,

a) inserting a suitable declaration for the missing getline() function at the top of file if.c

b) remming out the ifdef (and matching endif) of GGNFS_GNU_MISSING lower down, to actually activate the missing implementation of getline()

(I also investigated other methods of achieving same eg trying to define GGNFS_GNU_MISSING elsewhere to achieve the same effect, but this was the first method that worked.)

Step 4)

Compile with "make ppc_970"

Step 5)

Note that the default implementation of factLat.pl, by which factorizations are actually run, already has the correct path to the binaries, so does not need adjusting, unlike some previous versions.

## Tuesday, 23 October 2007

### Fermat number factor search

For the very latest on the search for Fermat number factors, checkout George Woltman's page at:

http://www.mersenne.org/ecmf.htm

Prime95/mprime includes an ECM factoring capability, easily controlled by a single ini file, and is particularly well suited (and fast!) for finding factors of larger Fermat (and Mersenneplustwo ;-) numbers. George Woltman even supplies a file of already-known Fermat factors that mprime can read, to avoid time spent re-discovering them. (Alternatively, start from scratch without that file, as a double-check that your set-up finds factors successfully - this won't affect your ability to find new ones.)

## Monday, 22 October 2007

### Euclidean Algorithm

The Euclidean Algorithm is vital to many factorization algorithms.

It is a very fast method for finding the gcd (greatest common divisor) of two numbers, without actually performing the factorization of either.

This page has pretty much all you need to know about it:

http://en.wikipedia.org/wiki/Euclidean_algorithm
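
The whole algorithm fits in a few lines of Java; here is a sketch (class name my own) on the textbook pair 1071 and 462:

```java
public class GcdDemo {
    // Euclidean algorithm: replace (a, b) by (b, a mod b) until b is 0.
    static long gcd(long a, long b) {
        while (b != 0) {
            long r = a % b;
            a = b;
            b = r;
        }
        return a;
    }

    public static void main(String[] args) {
        // 1071 = 2*462 + 147; 462 = 3*147 + 21; 147 = 7*21, so gcd is 21
        System.out.println(gcd(1071, 462));  // 21
    }
}
```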

## Sunday, 21 October 2007

### "Monte Carlo" method(s)

What is a "Monte Carlo" (factorization) method?

Essentially one where luck can play a major part - hence the rather exciting-sounding connection with gambling! This is the case whenever the process is seeded by some (pseudo- or otherwise) random number generation (eg for the benefit of parallelization), or when there is inherent unpredictability in the algorithm itself.

Note also that for the Rho method, for example, this Monte Carlo nature results in performance gains from a statistical effect akin to the so-called "Birthday Paradox". This latter is the term given to the surprisingly high probability that at least two of a modest number of uniformly random values over the same range coincide (eg two or more people sharing a birthday).

The following Wikipedia link talks about Monte Carlo effect in general:

http://en.wikipedia.org/wiki/Monte_Carlo_method

and here is a link to the Birthday Paradox:

http://en.wikipedia.org/wiki/Birthday_Paradox
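
The surprisingly small group size is easy to compute directly; this little sketch finds the smallest group for which the probability of a shared birthday exceeds one half:

```java
public class BirthdayDemo {
    public static void main(String[] args) {
        double allDistinct = 1.0;  // P(no two of k people share a birthday)
        int k = 0;
        while (allDistinct > 0.5) {
            k++;
            // the k-th person must avoid the k-1 birthdays already taken
            allDistinct *= (365.0 - (k - 1)) / 365.0;
        }
        System.out.println(k);  // 23
    }
}
```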

## Saturday, 20 October 2007

### Rho method

Today I am going to be talking a little bit about Pollard's Rho algorithm, which, for such an apparently simple algorithm, is surprisingly good at finding moderate-size factors of general numbers.

Some reference links:

http://en.wikipedia.org/wiki/Pollard%27s_rho_algorithm

[as I mentioned briefly once before]

http://mersennewiki.org/index.php/Rho_Factorization_Method

Also, the pseudocode immediately below is taken from a PD document by Connelly Barnes of Oregon State University

http://oregonstate.edu/~barnesc/documents/factoring.pdf

    function pollardRho(N)
        # Initial values x(i) and x(2*i) for i = 0.
        xi := 2
        x2i := 2
        do
            # Find x(i+1) and x(2*(i+1))
            xiPrime := xi ^ 2 + 1
            x2iPrime := (x2i ^ 2 + 1) ^ 2 + 1
            # Increment i: change our running values for x(i), x(2*i).
            xi := xiPrime % N
            x2i := x2iPrime % N
            s := gcd(xi - x2i, N)
            if s <> 1 and s <> N then
                return s, N/s
            end if
        end do
    end function

The Rho method is also the main method used by GMP's demo factorization sample program (or was last time I looked) - immediately below:

    void
    factor_using_pollard_rho (mpz_t n, int a_int, unsigned long p)
    {
      mpz_t x, x1, y, P;
      mpz_t a;
      mpz_t g;
      mpz_t t1, t2;
      int k, l, c, i;

      if (flag_verbose)
        {
          printf ("[pollard-rho (%d)] ", a_int);
          fflush (stdout);
        }

      mpz_init (g);
      mpz_init (t1);
      mpz_init (t2);
      mpz_init_set_si (a, a_int);
      mpz_init_set_si (y, 2);
      mpz_init_set_si (x, 2);
      mpz_init_set_si (x1, 2);
      k = 1;
      l = 1;
      mpz_init_set_ui (P, 1);
      c = 0;

      while (mpz_cmp_ui (n, 1) != 0)
        {
        S2:
          if (p != 0)
            {
              mpz_powm_ui (x, x, p, n); mpz_add (x, x, a);
            }
          else
            {
              mpz_mul (x, x, x); mpz_add (x, x, a); mpz_mod (x, x, n);
            }
          mpz_sub (t1, x1, x); mpz_mul (t2, P, t1); mpz_mod (P, t2, n);
          c++;
          if (c == 20)
            {
              c = 0;
              mpz_gcd (g, P, n);
              if (mpz_cmp_ui (g, 1) != 0)
                goto S4;
              mpz_set (y, x);
            }
        S3:
          k--;
          if (k > 0)
            goto S2;
          mpz_gcd (g, P, n);
          if (mpz_cmp_ui (g, 1) != 0)
            goto S4;
          mpz_set (x1, x);
          k = l;
          l = 2 * l;
          for (i = 0; i < k; i++)
            {
              if (p != 0)
                {
                  mpz_powm_ui (x, x, p, n); mpz_add (x, x, a);
                }
              else
                {
                  mpz_mul (x, x, x); mpz_add (x, x, a); mpz_mod (x, x, n);
                }
            }
          mpz_set (y, x);
          c = 0;
          goto S2;
        S4:
          do
            {
              if (p != 0)
                {
                  mpz_powm_ui (y, y, p, n); mpz_add (y, y, a);
                }
              else
                {
                  mpz_mul (y, y, y); mpz_add (y, y, a); mpz_mod (y, y, n);
                }
              mpz_sub (t1, x1, y); mpz_gcd (g, t1, n);
            }
          while (mpz_cmp_ui (g, 1) == 0);

          if (!mpz_probab_prime_p (g, 3))
            {
              do
                {
                  mp_limb_t a_limb;
                  mpn_random (&a_limb, (mp_size_t) 1);
                  a_int = (int) a_limb;
                }
              while (a_int == -2 || a_int == 0);
              if (flag_verbose)
                {
                  printf ("[composite factor--restarting pollard-rho] ");
                  fflush (stdout);
                }
              factor_using_pollard_rho (g, a_int, p);
              break;
            }
          else
            {
              mpz_out_str (stdout, 10, g);
              fflush (stdout);
              fputc (' ', stdout);
            }
          mpz_div (n, n, g);
          mpz_mod (x, x, n);
          mpz_mod (x1, x1, n);
          mpz_mod (y, y, n);
          if (mpz_probab_prime_p (n, 3))
            {
              mpz_out_str (stdout, 10, n);
              fflush (stdout);
              fputc (' ', stdout);
              break;
            }
        }

      mpz_clear (g);
      mpz_clear (P);
      mpz_clear (t2);
      mpz_clear (t1);
      mpz_clear (a);
      mpz_clear (x1);
      mpz_clear (x);
      mpz_clear (y);
    }

This Java code is taken directly from superfac9:

BigInteger factorizerho(BigInteger n) {
    BigInteger loop = new BigInteger("1");
    BigInteger x = new BigInteger("5");
    BigInteger y = new BigInteger("2");
    while (n.gcd(x.subtract(y)).compareTo(ONE) == 0 && loop.compareTo(TENTHOUSAND) < 0) {
        x = x.multiply(x).add(ONE).mod(n);
        x = x.multiply(x).add(ONE).mod(n);
        y = y.multiply(y).add(ONE).mod(n);
        loop = loop.add(ONE);
    }
    return n.gcd(x.subtract(y));
}

Note that there are several variants (given different start-point values for x and y for example), but the basic "Monte-Carlo" principle remains the same.

Notable successes with the Rho method or its variants include Brent and Pollard's factorization of the eighth Fermat number in 1980. The 16-digit factor of F8 took about 2 hours to find on a UNIVAC 1100/42, apparently.
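To make the iteration concrete, here is a minimal self-contained Floyd-cycle sketch of the same idea (the class name and the 8051 example are my own, not from superfac9); on the textbook example 8051 = 83 * 97 it finds the factor 97:

```java
import java.math.BigInteger;

// Minimal sketch of Pollard Rho with Floyd cycle detection,
// f(x) = x^2 + 1, starting from x = y = 2.
public class RhoDemo {
    static final BigInteger ONE = BigInteger.ONE;

    static BigInteger rho(BigInteger n) {
        BigInteger x = BigInteger.valueOf(2); // tortoise
        BigInteger y = BigInteger.valueOf(2); // hare
        BigInteger d = ONE;
        while (d.equals(ONE)) {
            x = x.multiply(x).add(ONE).mod(n);   // tortoise: one step
            y = y.multiply(y).add(ONE).mod(n);   // hare: two steps
            y = y.multiply(y).add(ONE).mod(n);
            d = n.gcd(x.subtract(y).abs());
        }
        return d; // a non-trivial factor, or n itself on failure
    }

    public static void main(String[] args) {
        System.out.println(rho(new BigInteger("8051"))); // 8051 = 83 * 97
    }
}
```

A failed run (returning n) just means a retry with a different start value or polynomial, which is exactly what the variants mentioned above do.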

## Friday, 19 October 2007

### F12-WEP

And here is the WEP algorithm running (unsuccessfully) for an hour or so looking for new factors of F12:

tiggatoo:~/math/wec james$ time ./factorize3.gmp -P4096 -T1000 114689 26017793 63766529

P4096 T1000

real 59m16.287s

user 12m32.262s

sys 0m4.508s
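The three arguments on that command line are previously known prime factors of F12 = 2^4096 + 1, which the program divides out before searching. A quick sanity check that they really divide F12 (a sketch using Java's BigInteger; the class name is my own):

```java
import java.math.BigInteger;

// Verify that the known factors passed to the -P4096 run divide F12 = 2^4096 + 1.
public class F12Check {
    static final BigInteger F12 =
        BigInteger.valueOf(2).pow(4096).add(BigInteger.ONE);

    static boolean divides(String factor) {
        return F12.mod(new BigInteger(factor)).signum() == 0;
    }

    public static void main(String[] args) {
        for (String k : new String[] { "114689", "26017793", "63766529" })
            System.out.println(k + " divides F12: " + divides(k));
    }
}
```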

## Thursday, 18 October 2007

### M521^2 by WE

Here is a factorization illustrating the automatic algebraic-factoring capability of superfac9, due to the WE algorithm:

tiggatoo:~/math james$ java superfac9

f(2^521-1)^2

[47125446914534694131579097993419809976955095716785201420286055195012674566357244479460731079205201122720511132925006540350105785156086431086764996857554304847155991333706718342307167456986269662311038377104760933477381254100896222805785374204495333936040246318307567782851014765052850751581472024524956029996236801]

wanless...

aprtcle: 6864797660130609714981900799081393217269435300143305409394463459185543183397656052122559640661454554977296311391480858037121987999716643812574028291115057151

aprtcle: 6864797660130609714981900799081393217269435300143305409394463459185543183397656052122559640661454554977296311391480858037121987999716643812574028291115057151

duration: 115 seconds
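The two identical aprtcle outputs are simply M521 = 2^521 - 1, which is a Mersenne prime, so the algebraic factorization of the square is just M521 x M521. A quick check (hypothetical class name):

```java
import java.math.BigInteger;

// M521 = 2^521 - 1 is a Mersenne prime; its square factors as M521 * M521.
public class M521Check {
    static final BigInteger M521 =
        BigInteger.valueOf(2).pow(521).subtract(BigInteger.ONE);

    public static void main(String[] args) {
        System.out.println(M521.isProbablePrime(50));                  // true
        System.out.println(M521.multiply(M521).bitLength());           // 1042 bits
    }
}
```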

## Wednesday, 17 October 2007

### Near-repdigit factorization(s)

And speaking of repunits (or repdigits - the factorization of the latter reduces to the former), there is also Makoto Kamada's page of factorizations of near-repdigits.

http://homepage2.nifty.com/m_kamada/math/factorizations.htm

Clearly this enlarges the options for factoring candidates significantly.
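The reduction mentioned above is just ddd...d = d * R_n, where R_n = (10^n - 1)/9 is the n-digit repunit; and R_m divides R_n whenever m divides n. A tiny sketch (class name mine):

```java
import java.math.BigInteger;

// A repdigit ddd...d equals d times the repunit R_n = (10^n - 1) / 9,
// so factoring repdigits reduces to factoring repunits.
public class RepunitDemo {
    static BigInteger repunit(int n) {
        return BigInteger.TEN.pow(n).subtract(BigInteger.ONE)
                             .divide(BigInteger.valueOf(9));
    }

    static BigInteger repdigit(int d, int n) {
        return repunit(n).multiply(BigInteger.valueOf(d));
    }

    public static void main(String[] args) {
        System.out.println(repunit(5));                        // 11111
        System.out.println(repdigit(7, 5));                    // 77777
        System.out.println(repunit(6).mod(repunit(3)));        // 0: R_3 | R_6
    }
}
```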

## Tuesday, 16 October 2007

### Repunit factorization

Here is a link to Yousuke Koide's page on repunit factorization(s). Repunits are numbers whose decimal representation consists entirely of 1's.

http://www.h4.dion.ne.jp/~rep/

It would appear that the lowest repunit whose prime factorization is still unknown is R239 (see also Torbjorn Granlund's page at Swox).

http://swox.com/~tege/fac10m.txt

[Note that Swox is the organization behind the widely-used GMP math library

http://gmplib.org/]

## Monday, 15 October 2007

### GGNFS bounce!

GGNFS is cool! (and deservedly a standard reference implementation). But it would appear I have been a victim of the GGNFS "bounce", as Bob Backstrom would call it! I had almost collected enough relations to proceed to the linear algebra stage of my factorization attempt of C127_113_36 (for the xyyxf project), when this happened. Apparently, discarding heavy relations speeds up the later stage, but it's going to take me a while to replace those discarded relations. Bob assures me that this bounce will not happen again, and also suggests (as in fact I wondered) that it may well be possible to proceed to the LA immediately with msieve. However, since I'm fairly new at this (not having run even straight GGNFS before on a number this size), for now I think I've decided to wait, watch, and hopefully learn...

## Sunday, 14 October 2007

### Benchmarks: Java vs C (Addendum)

Of course assembler can be even faster than C. George Woltman's big-number library (gwnum) - part of Prime95/mprime - seems very fast, perhaps at least twice as fast as GMP, particularly for extra-big numbers.

Gwnum also uses advanced FFT techniques, and because it is partly written in assembler, not surprisingly, it is not portable - only being available for the most common platform, x86.

It would be interesting to rewrite my WEP application using gwnum...

http://www.mersenne.org/freesoft.htm

http://en.wikipedia.org/wiki/Fft

## Saturday, 13 October 2007

### Benchmarks: Java vs C (Part 4)

Here are the promised benchmarks, running random-based WEP on input numbers of, respectively, 100, 1000 and 10000 bits.

If anyone would care to (and has the patience to) extend the tables, feel free to send me results!

Result: Java is, very approximately, only an order of magnitude slower than C.

tiggatoo:~/math/wec james$ time java we2tpr2 < P109.in.2.txt

[109]

[1]

base#0

elapsed=0s factor=3 A=645264517869175453037177387327

base#10746

elapsed=43s factor=104124649 A=245093880973395622884386051056621

2077756847362348863128179

duration: 43 seconds

real 0m46.195s

user 0m5.185s

sys 0m0.510s

tiggatoo:~/math/wec james$ time ./factorize3.gmp -P109 -T10748 3

P109 T10748

real 0m7.176s

user 0m0.793s

sys 0m0.011s

tiggatoo:~/math/wec james$ time java we2tpr2 < P1279.in.2.txt

[1279]

[1]

base#0

elapsed=2s factor=3 A=530978962560898025744780713800683

base#11

elapsed=13s factor=706009 A=1243358263022091534365334367195221

^Cse#100

real 2m34.366s

user 0m17.542s

sys 0m0.222s

tiggatoo:~/math/wec james$ time ./factorize3.gmp -P1279 -T114 3 706009

P1279 T114

real 0m29.889s

user 0m3.344s

sys 0m0.022s

tiggatoo:~/math/wec james$ time java we2tpr2 < P11213.in.2.txt

[11213]

[1]

base#0

elapsed=1510s factor=3 A=93613194278166973511368466002199

^Cse#2

real 45m19.926s

user 5m14.923s

sys 0m2.640s

tiggatoo:~/math/wec james$ time ./factorize3.gmp -P11213 -T3 3

P11213 T3

real 5m44.733s

user 0m40.864s

sys 0m0.247s

## Friday, 12 October 2007

### Benchmarks: Java vs C (Part 3)

Here is the (nearly) same program (factorize3.mac.c) coded in C (also copyright JGW). It needs to be linked with -lgmp, the GMP math library.

/* Factoring with WEP method using random base(s).

*/

#include <stdio.h>

#include <stdlib.h>

#include <string.h>

#include <time.h>

#include "gmp.h"

int flag_verbose = 0;

int

factor_using_random_wep (mpz_t s, unsigned long p, unsigned long T)

{

mpz_t V, v1, v2, v3, b, bb, bbb, bbbb, wanless;

long numtrials;

int flag;

gmp_randstate_t rstate;

if (flag_verbose)

{

printf ("[wep ");

printf ("s=");

mpz_out_str (stdout, 10, s);

printf ("\tp=");

printf ("%ld", p);

printf ("\tT=");

printf ("%ld", T);

printf ("]\n\r");

fflush (stdout);

}

if (mpz_probab_prime_p (s, 3))

{

printf ("P%ld\tT0\t", p);

mpz_out_str (stdout, 10, s);

fflush (stdout);

flag=2;

return(flag);

}

gmp_randinit (rstate, GMP_RAND_ALG_LC, 128);

{

#if HAVE_GETTIMEOFDAY

struct timeval tv;

gettimeofday (&tv, NULL);

gmp_randseed_ui (rstate, tv.tv_sec + tv.tv_usec);

#else

time_t t;

time (&t);

gmp_randseed_ui (rstate, t);

#endif

}

numtrials=0;

mpz_init_set_si (b, 1);

mpz_init_set_si (bb, 1);

mpz_init_set_si (bbb, 1);

mpz_init_set_si (bbbb, 1);

mpz_init_set_si (V, 1);

mpz_init_set_si (v1, 1);

mpz_init_set_si (v2, 1);

mpz_init_set_si (v3, 1);

mpz_init_set_si (wanless, 2);

while (mpz_cmp (wanless, s) < 0)

mpz_mul_ui (wanless, wanless, 2);

while (numtrials < T && (mpz_cmp_ui (V, 1) == 0 || mpz_cmp (V, s) == 0))

{

mpz_urandomb (b, rstate, 100L);

mpz_mul_ui(bb, b, p);

mpz_mul_ui(bbb, bb, 2);

mpz_add_ui(bbbb, bbb, 1);

mpz_powm (v1, bbbb, wanless, s);

mpz_powm (v2, v1, s, s);

mpz_sub (v3, v2, v1);

mpz_gcd (V, s, v3);

if (flag_verbose)

if (numtrials%1000 == 0)

{

printf ("numtrials=%ld\tb=", numtrials);

mpz_out_str (stdout, 10, b);

printf ("\r");

fflush (stdout);

}

numtrials++;

}

if (flag_verbose)

printf("\n\r");

if (mpz_cmp_ui (V, 1) > 0 && mpz_cmp (V, s) < 0)

{

printf ("P%ld\tT%ld\t", p, numtrials);

mpz_out_str (stdout, 10, V);

printf("\tbase=");

mpz_out_str (stdout, 10, bbbb);

printf ("\n");

fflush (stdout);

flag=3;

}

else

{

printf ("P%ld\tT%ld\n", p, T);

flag=0;

}

mpz_clear (b);

mpz_clear (bb);

mpz_clear (bbb);

mpz_clear (bbbb);

mpz_clear (V);

mpz_clear (v1);

mpz_clear (v2);

mpz_clear (v3);

mpz_clear (wanless);

return (flag);

}

int main (int argc, char *argv[])

{

mpz_t r, s, t, F, f;

unsigned long p, T;

int i;

int flag;

if (argc > 1 && !strcmp (argv[1], "-v"))

{

flag_verbose = 1;

argv++;

argc--;

}

mpz_init (r);

mpz_init (s);

mpz_init (t);

mpz_init (F);

mpz_init (f);

mpz_set_ui (F, 1);

mpz_set_ui (f, 1);

if (argc > 1)

{

p = 0;

for (i = 1; i < argc; i++)

{

if (!strncmp (argv[i], "-M", 2))

{

p = atoi (argv[i] + 2);

mpz_set_ui (t, 1);

mpz_mul_2exp (t, t, p);

mpz_sub_ui (t, t, 1);

}

else if (!strncmp (argv[i], "-P", 2))

{

p = atoi (argv[i] + 2);

mpz_set_ui (t, 1);

mpz_mul_2exp (t, t, p);

mpz_add_ui (t, t, 1);

}

else if (!strncmp (argv[i], "-T", 2))

{

T = atoi (argv[i] + 2);

}

else

{

mpz_set_str (f, argv[i], 0);

mpz_mul (F, F, f);

}

}

mpz_mod (r, t, F);

if (mpz_cmp_si (r, 0) != 0) {

printf ("Wrong known factors!\n");

flag = 1;

}

else {

mpz_div (s, t, F);

flag = factor_using_random_wep (s, p, T);

}

}

mpz_clear(r);

mpz_clear(s);

mpz_clear(t);

mpz_clear(F);

mpz_clear(f);

exit (flag);

}

## Thursday, 11 October 2007

### Benchmarks: Java vs C (Part 2)

Here is the program, we2tpr2.java, coded in Java (copyright JGW, all rights reserved :) ). Note how elegant and convenient the BigInteger facility is - it has been a standard feature of Java since its inception. Java is almost unique in that respect - most other languages rely on add-on libraries to provide big-number capability.

import java.math.BigInteger;

import java.util.Random;

import java.util.Date;

public class we2tpr2 {

static BigInteger zero = new BigInteger("0");

static BigInteger one = new BigInteger("1");

static BigInteger two = new BigInteger("2");

static BigInteger hundred = new BigInteger("100");

static BigInteger thousand = new BigInteger("1000");

static BigInteger n = new BigInteger("0");

static BigInteger p = new BigInteger("0");

static BigInteger known = new BigInteger("0");

static BigInteger numtrials = new BigInteger("0");

static String s;

static int olds1 = 0;

static int s1 = 0;

static Date d;

static long starttime;

static long finishtime;

static long duration;

public static void main (String args[])

throws java.io.IOException {

we2tpr2 we2tpr2inst = new we2tpr2();

char c;

String sInput;

StringBuffer sbInput = new StringBuffer("");

while ((c = (char)System.in.read()) != '\n' && c != '\r')

sbInput.append(c);

System.in.read();

sInput = sbInput.toString().trim();

if (sInput.charAt(0) == 'f' || sInput.charAt(0) == 'F') {

s = sInput.substring(1).trim();

s1 = 0;

olds1 = 0;

p = we2tpr2inst.eval(s);

System.out.println('[' + p.toString() + ']');

}

else {

p = new BigInteger(sInput);

}

sbInput = new StringBuffer("");

while ((c = (char)System.in.read()) != '\n' && c != '\r')

sbInput.append(c);

System.in.read();

sInput = sbInput.toString().trim();

if (sInput.charAt(0) == 'f' || sInput.charAt(0) == 'F') {

s = sInput.substring(1).trim();

s1 = 0;

olds1 = 0;

known = we2tpr2inst.eval(s);

System.out.println('[' + known.toString() + ']');

}

else {

known = new BigInteger(sInput);

}

n = we2tpr2inst.mersenneplustwo(p);

if (n.remainder(known).compareTo(zero) > 0) {

System.out.println("Wrong known factors!");

return;

}

else

n = n.divide(known);

d = new Date();

starttime = d.getTime();

we2tpr2inst.factorize(n, p);

d = new Date();

finishtime = d.getTime();

duration = (finishtime-starttime)/1000;

System.out.println("duration: " + duration + " seconds");

System.out.println();

}

public BigInteger mersenneplustwo(BigInteger p) {

BigInteger i = new BigInteger("0");

n = two;

for (i = one; i.compareTo(p) < 0; i = i.add(one))

n = n.multiply(two);

n = n.add(one);

return n;

}

public boolean factorize(BigInteger n, BigInteger p) {

boolean prime = false;

BigInteger numtested = new BigInteger("0");

BigInteger T = new BigInteger("1");

BigInteger b = new BigInteger("1");

BigInteger A = new BigInteger("2");

BigInteger wanless = new BigInteger("2");

if (n.isProbablePrime(1000)) {

prime = true;

System.out.println(n);

return prime;

}

// workaround - apparent java bug in modPow - JGW

if (n.compareTo(two) < 0)

return false;

if (n.remainder(two).compareTo(zero) == 0) {

System.out.println(two.toString());

return true;

}

// end workaround

while (wanless.compareTo(n) < 0)

wanless = wanless.multiply(two);

Random r = new Random();

numtested = zero;

while (T.compareTo(one) == 0 || T.compareTo(n) == 0) {

// changed JW 2005-3-23

A = new BigInteger(hundred.intValue(), r);

A = (A.multiply(two).multiply(p)).add(one);

// added JGW 2006-06-09

System.out.print("base#" + numtested + '\r');

// changed DT 2005-2-20

b = A.modPow(wanless, n);

T = n.gcd(b.modPow(n, n).subtract(b));

numtested = numtested.add(one);

}

if (T.compareTo(one) > 0 && T.compareTo(n) < 0) {

d = new Date();

finishtime = d.getTime();

duration = (finishtime-starttime)/1000;

System.out.println();

System.out.println("elapsed=" + duration + "s" + '\t' + "factor=" + T.toString() + '\t' + "A=" + A.toString() + '\t');

factorize(n.divide(T), p);

}

return prime;

}

public BigInteger evalRand(char op, BigInteger oldn) {

BigInteger n = new BigInteger("1");

switch (op) {

case 'r':

case 'R':

Random r = new Random();

n = new BigInteger(oldn.intValue(), r);

break;

default:

n = oldn;

break;

}

return n;

}

public BigInteger evalFact(BigInteger oldn, char op) {

BigInteger n = new BigInteger("1");

BigInteger i = new BigInteger("1");

BigInteger j = new BigInteger("1");

boolean prime = true;

switch (op) {

case '!':

for (i = one; i.compareTo(oldn) <= 0; i = i.add(one))

n = n.multiply(i);

break;

case '#':

for (i = one; i.compareTo(oldn) <= 0; i = i.add(one)) {

prime = true;

for (j = two; (prime == true) && (j.multiply(j).compareTo(i) <= 0); j = j.add(one))

if (i.remainder(j).compareTo(zero) == 0)

prime = false;

if (prime == true)

n = n.multiply(i);

}

break;

default:

n = oldn;

break;

}

return n;

}

public BigInteger evalPower(BigInteger oldn, BigInteger n1, char op) {

BigInteger n = new BigInteger("0");

switch (op) {

case '^':

n = oldn.pow(n1.intValue());

break;

default:

n = n1;

break;

}

return n;

}

public BigInteger evalProduct(BigInteger oldn, BigInteger n1, char op) {

BigInteger n = new BigInteger("0");

switch (op) {

case '*':

n = oldn.multiply(n1);

break;

case '/':

n = oldn.divide(n1);

break;

case '%':

n = oldn.remainder(n1);

break;

default:

n = n1;

break;

}

return n;

}

public BigInteger evalSum(BigInteger oldn, BigInteger n1, char op) {

BigInteger n = new BigInteger("0");

switch (op) {

case '+':

n = oldn.add(n1);

break;

case '-':

n = oldn.subtract(n1);

break;

default:

n = n1;

break;

}

return n;

}

public BigInteger eval(String s) {

BigInteger oldn0 = new BigInteger("0");

BigInteger oldn1 = new BigInteger("0");

BigInteger oldn2 = new BigInteger("0");

BigInteger n = new BigInteger("0");

char oldop0 = 0;

char oldop1 = 0;

char oldop2 = 0;

char op = 0;

while (s1 < s.length()) {

switch (s.charAt(s1)) {

case '(':

case '[':

case '{':

s1++;

n = eval(s);

break;

case '0':

case '1':

case '2':

case '3':

case '4':

case '5':

case '6':

case '7':

case '8':

case '9':

n = readNum(s);

break;

default:

break;

}

if (s1 < s.length()) {

switch (s.charAt(s1)) {

case ')':

case ']':

case '}':

case '!':

case '#':

case 'r':

case 'R':

case '^':

case '*':

case '/':

case '%':

case '+':

case '-':

op = s.charAt(s1);

s1++;

break;

default:

break;

}

}

else

op = 0;

switch (op) {

case 0:

case ')':

case ']':

case '}':

n = evalPower(oldn2, n, oldop2);

n = evalProduct(oldn1, n, oldop1);

n = evalSum(oldn0, n, oldop0);

return n;

case '!':

case '#':

n = evalFact(n, op);

break;

case 'r':

case 'R':

n = readNum(s);

n = evalRand(op, n);

break;

case '^':

n = evalPower(oldn2, n, oldop2);

oldn2 = n;

oldop2 = op;

break;

case '*':

case '/':

case '%':

n = evalPower(oldn2, n, oldop2);

oldop2 = 0;

n = evalProduct(oldn1, n, oldop1);

oldn1 = n;

oldop1 = op;

break;

case '+':

case '-':

n = evalPower(oldn2, n, oldop2);

oldop2 = 0;

n = evalProduct(oldn1, n, oldop1);

oldop1 = 0;

n = evalSum(oldn0, n, oldop0);

oldn0 = n;

oldop0 = op;

break;

default:

break;

}

}

return n;

}

public BigInteger readNum(String s) {

BigInteger n = new BigInteger("0");

olds1 = s1;

while (s1 < s.length() && Character.isDigit(s.charAt(s1)))

s1++;

n = new BigInteger(s.substring(olds1, s1));

return n;

}

}

import java.math.BigInteger;

import java.util.Random;

import java.util.Date;

public class we2tpr2 {

static BigInteger zero = new BigInteger("0");

static BigInteger one = new BigInteger("1");

static BigInteger two = new BigInteger("2");

static BigInteger hundred = new BigInteger("100");

static BigInteger thousand = new BigInteger("1000");

static BigInteger n = new BigInteger("0");

static BigInteger p = new BigInteger("0");

static BigInteger known = new BigInteger("0");

static BigInteger numtrials = new BigInteger("0");

static String s;

static int olds1 = 0;

static int s1 = 0;

static Date d;

static long starttime;

static long finishtime;

static long duration;

public static void main (String args[])

throws java.io.IOException {

we2tpr2 we2tpr2inst = new we2tpr2();

char c;

String sInput;

StringBuffer sbInput = new StringBuffer("");

while ((c = (char)System.in.read()) != '\n' && c != '\r')

sbInput.append(c);

System.in.read();

sInput = sbInput.toString().trim();

if (sInput.charAt(0) == 'f' || sInput.charAt(0) == 'F') {

s = sInput.substring(1).trim();

s1 = 0;

olds1 = 0;

p = we2tpr2inst.eval(s);

System.out.println('[' + p.toString() + ']');

}

else {

p = new BigInteger(sInput);

}

sbInput = new StringBuffer("");

while ((c = (char)System.in.read()) != '\n' && c != '\r')

sbInput.append(c);

System.in.read();

sInput = sbInput.toString().trim();

if (sInput.charAt(0) == 'f' || sInput.charAt(0) == 'F') {

s = sInput.substring(1).trim();

s1 = 0;

olds1 = 0;

known = we2tpr2inst.eval(s);

System.out.println('[' + known.toString() + ']');

}

else {

known = new BigInteger(sInput);

}

n = we2tpr2inst.mersenneplustwo(p);

if (n.remainder(known).compareTo(zero) > 0) {

System.out.println("Wrong known factors!");

return;

}

else

n = n.divide(known);

d = new Date();

starttime = d.getTime();

we2tpr2inst.factorize(n, p);

d = new Date();

finishtime = d.getTime();

duration = (finishtime-starttime)/1000;

System.out.println("duration: " + duration + " seconds");

System.out.println();

}

public BigInteger mersenneplustwo(BigInteger p) {

BigInteger i = new BigInteger("0");

n = two;

for (i = one; i.compareTo(p) < 0; i = i.add(one))

n = n.multiply(two);

n = n.add(one);

return n;

}

public boolean factorize(BigInteger n, BigInteger p) {

boolean prime = false;

BigInteger numtested = new BigInteger("0");

BigInteger T = new BigInteger("1");

BigInteger b = new BigInteger("1");

BigInteger A = new BigInteger("2");

BigInteger wanless = new BigInteger("2");

if (n.isProbablePrime(1000)) {

prime = true;

System.out.println(n);

return prime;

}

// workaround - apparent java bug in modPow - JGW

if (n.compareTo(two) < 0)

return false;

if (n.remainder(two).compareTo(zero) == 0) {

System.out.println(two.toString());

return true;

}

// end workaround

while (wanless.compareTo(n) < 0)

wanless = wanless.multiply(two);

Random r = new Random();

numtested = zero;

while (T.compareTo(one) == 0 || T.compareTo(n) == 0) {

// changed JW 2005-3-23

A = new BigInteger(hundred.intValue(), r);

A = (A.multiply(two).multiply(p)).add(one);

// added JGW 2006-06-09

System.out.print("base#" + numtested + '\r');

// changed DT 2005-2-20

b = A.modPow(wanless, n);

T = n.gcd(b.modPow(n, n).subtract(b));

numtested = numtested.add(one);

}

if (T.compareTo(one) > 0 && T.compareTo(n) < 0) {

d = new Date();

finishtime = d.getTime();

duration = (finishtime-starttime)/1000;

System.out.println();

System.out.println("elapsed=" + duration + "s" + '\t' + "factor=" + T.toString() + '\t' + "A=" + A.toString() + '\t');

factorize(n.divide(T), p);

}

return prime;

}

public BigInteger evalRand(char op, BigInteger oldn) {

BigInteger n = new BigInteger("1");

switch (op) {

case 'r':

case 'R':

Random r = new Random();

n = new BigInteger(oldn.intValue(), r);

break;

default:

n = oldn;

break;

}

return n;

}

    // '!' is factorial; '#' is primorial (the product of all primes <= oldn,
    // each candidate being tested by trial division).
    public BigInteger evalFact(BigInteger oldn, char op) {
        BigInteger n = one;
        BigInteger i, j;
        boolean prime;
        switch (op) {
        case '!':
            for (i = one; i.compareTo(oldn) <= 0; i = i.add(one))
                n = n.multiply(i);
            break;
        case '#':
            for (i = one; i.compareTo(oldn) <= 0; i = i.add(one)) {
                prime = true;
                for (j = two; prime && j.multiply(j).compareTo(i) <= 0; j = j.add(one))
                    if (i.remainder(j).compareTo(zero) == 0)
                        prime = false;
                if (prime)
                    n = n.multiply(i);
            }
            break;
        default:
            n = oldn;
            break;
        }
        return n;
    }

    // '^' operator: oldn raised to the n1-th power.
    public BigInteger evalPower(BigInteger oldn, BigInteger n1, char op) {
        BigInteger n;
        switch (op) {
        case '^':
            n = oldn.pow(n1.intValue());
            break;
        default:
            n = n1;
            break;
        }
        return n;
    }

    // '*', '/' and '%' operators.
    public BigInteger evalProduct(BigInteger oldn, BigInteger n1, char op) {
        BigInteger n;
        switch (op) {
        case '*':
            n = oldn.multiply(n1);
            break;
        case '/':
            n = oldn.divide(n1);
            break;
        case '%':
            n = oldn.remainder(n1);
            break;
        default:
            n = n1;
            break;
        }
        return n;
    }

    // '+' and '-' operators.
    public BigInteger evalSum(BigInteger oldn, BigInteger n1, char op) {
        BigInteger n;
        switch (op) {
        case '+':
            n = oldn.add(n1);
            break;
        case '-':
            n = oldn.subtract(n1);
            break;
        default:
            n = n1;
            break;
        }
        return n;
    }

    // Recursive-descent evaluator for expressions such as "2^127-1": handles
    // brackets, factorial/primorial, random, power, product and sum, with the
    // usual operator precedence. The field s1 is the current parse position.
    public BigInteger eval(String s) {
        BigInteger oldn0 = zero, oldn1 = zero, oldn2 = zero;
        BigInteger n = zero;
        char oldop0 = 0, oldop1 = 0, oldop2 = 0;
        char op = 0;

        while (s1 < s.length()) {
            // operand: bracketed subexpression or numeric literal
            switch (s.charAt(s1)) {
            case '(':
            case '[':
            case '{':
                s1++;
                n = eval(s);
                break;
            case '0': case '1': case '2': case '3': case '4':
            case '5': case '6': case '7': case '8': case '9':
                n = readNum(s);
                break;
            default:
                break;
            }

            // operator, closing bracket, or end of string
            if (s1 < s.length()) {
                switch (s.charAt(s1)) {
                case ')': case ']': case '}':
                case '!': case '#':
                case 'r': case 'R':
                case '^':
                case '*': case '/': case '%':
                case '+': case '-':
                    op = s.charAt(s1);
                    s1++;
                    break;
                default:
                    break;
                }
            } else
                op = 0;

            switch (op) {
            case 0:
            case ')':
            case ']':
            case '}':
                // end of (sub)expression: flush the pending operators
                n = evalPower(oldn2, n, oldop2);
                n = evalProduct(oldn1, n, oldop1);
                n = evalSum(oldn0, n, oldop0);
                return n;
            case '!':
            case '#':
                n = evalFact(n, op);
                break;
            case 'r':
            case 'R':
                n = readNum(s);
                n = evalRand(op, n);
                break;
            case '^':
                n = evalPower(oldn2, n, oldop2);
                oldn2 = n;
                oldop2 = op;
                break;
            case '*':
            case '/':
            case '%':
                n = evalPower(oldn2, n, oldop2);
                oldop2 = 0;
                n = evalProduct(oldn1, n, oldop1);
                oldn1 = n;
                oldop1 = op;
                break;
            case '+':
            case '-':
                n = evalPower(oldn2, n, oldop2);
                oldop2 = 0;
                n = evalProduct(oldn1, n, oldop1);
                oldop1 = 0;
                n = evalSum(oldn0, n, oldop0);
                oldn0 = n;
                oldop0 = op;
                break;
            default:
                break;
            }
        }
        return n;
    }

    // Reads a decimal literal starting at position s1 and advances s1 past it.
    public BigInteger readNum(String s) {
        BigInteger n;
        olds1 = s1;
        while (s1 < s.length() && Character.isDigit(s.charAt(s1)))
            s1++;
        n = new BigInteger(s.substring(olds1, s1));
        return n;
    }
}

## Wednesday, 10 October 2007

### Benchmarks: Java vs C (Part 1)

Over the next few days I am going to be benchmarking a small factorization program (the random-based WEP algorithm) written near-identically in Java (using Java's built-in BigInteger class) and in C (using the GMP math library), to see how the two compare for various input sizes.

But first some background information about Java:

http://en.wikipedia.org/wiki/Java_%28programming_language%29

and a picture (from Wikipedia) of the Java mascot, Duke:

and (also from Wikipedia) of the creator of Java (at Sun Microsystems), James Gosling:
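For the curious, the kind of timing harness involved can be sketched roughly as below. The bit sizes, loop count, and class/method names here are illustrative choices of mine, not the actual benchmark setup:

```java
import java.math.BigInteger;
import java.util.Random;

// Rough sketch of a BigInteger multiplication benchmark: time how long it
// takes to multiply two random n-bit numbers 'reps' times over.
public class MulBench {

    // Returns the elapsed wall-clock time in milliseconds.
    public static long timeMultiply(int bits, int reps) {
        Random r = new Random(42);              // fixed seed for repeatability
        BigInteger a = new BigInteger(bits, r); // random number of up to 'bits' bits
        BigInteger b = new BigInteger(bits, r);
        long start = System.currentTimeMillis();
        BigInteger prod = BigInteger.ONE;
        for (int i = 0; i < reps; i++)
            prod = a.multiply(b);
        long elapsed = System.currentTimeMillis() - start;
        System.out.println(bits + "-bit multiply x" + reps + ": " + elapsed + " ms");
        return elapsed;
    }

    public static void main(String[] args) {
        timeMultiply(1024, 10000);
        timeMultiply(4096, 10000);
    }
}
```

The GMP version would do the same with mpz arithmetic, keeping the loop structure identical so that only the bignum library differs.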


## Tuesday, 9 October 2007

### Fermat number factorizers

Here is a nice page detailing historical (and recent) figures in the search for factors of Fermat numbers. Note that the first computer calculations are dated around 1953.

http://www.fermatsearch.org/history.php


## Monday, 8 October 2007

### What is a "trapdoor" function?

The term, first coined by Diffie and Hellman in 1976, refers to a function that is easy to compute forwards yet very difficult to invert, i.e. compute backwards. Large-integer multiplication serves this purpose for RSA: multiplying two large primes is easy, while factoring their product back into those primes is hard.

see link:

http://en.wikipedia.org/wiki/Trapdoor_function
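As a rough illustration of the asymmetry (with tiny made-up primes, and naive trial division standing in for a real factoring algorithm), in Java:

```java
import java.math.BigInteger;

// Trapdoor asymmetry: the forwards direction is a single multiplication,
// while the backwards direction (here, naive trial division) costs work
// proportional to the smaller prime factor.
public class Trapdoor {

    // The "easy" direction.
    public static BigInteger forwards(BigInteger p, BigInteger q) {
        return p.multiply(q);
    }

    // The "hard" direction: smallest nontrivial factor of n,
    // or n itself if n is prime.
    public static BigInteger backwards(BigInteger n) {
        BigInteger two = BigInteger.valueOf(2);
        for (BigInteger d = two; d.multiply(d).compareTo(n) <= 0; d = d.add(BigInteger.ONE))
            if (n.remainder(d).signum() == 0)
                return d;
        return n;
    }

    public static void main(String[] args) {
        BigInteger p = new BigInteger("104729");
        BigInteger q = new BigInteger("1299709");
        BigInteger n = forwards(p, q);          // instant
        BigInteger f = backwards(n);            // ~100,000 trial divisions
        System.out.println(n + " = " + f + " * " + n.divide(f));
    }
}
```

Even at this toy scale the backwards direction is visibly slower, and the gap grows explosively with the size of the primes.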


## Sunday, 7 October 2007

### Cryptography

I am currently listening (for free, on iTunes - search on "math 55") to Professor James Demmel's Discrete Mathematics series of lectures at University of California, Berkeley - highly recommended (so far). Anyway, the Lecture 18 audio is relevant to today's post, which is on public-key cryptography and, specifically, RSA. Cryptography comes into it because RSA relies on the difficulty of factoring large semiprimes to keep encoded messages secret. Some links:

http://en.wikipedia.org/wiki/Public-key_cryptography

http://en.wikipedia.org/wiki/Rsa

http://www.cs.berkeley.edu/~demmel/ma55_Fall07/LectureNotes/Lecture_15_Oct_03.txt
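As a toy illustration of why factoring matters to RSA, here is the standard textbook miniature (p=61, q=53, e=17) done with Java's BigInteger. Real keys use primes hundreds of digits long; with primes this tiny, anyone can factor n and recover the private exponent:

```java
import java.math.BigInteger;

// Toy RSA round-trip: key generation, encryption, decryption.
// Factoring n = p*q would reveal phi, and hence the private exponent d.
public class ToyRSA {

    // Both encryption and decryption are just modular exponentiation.
    public static BigInteger crypt(BigInteger m, BigInteger exp, BigInteger n) {
        return m.modPow(exp, n);
    }

    public static void main(String[] args) {
        BigInteger p = new BigInteger("61");
        BigInteger q = new BigInteger("53");
        BigInteger n = p.multiply(q);                  // public modulus, 3233
        BigInteger phi = p.subtract(BigInteger.ONE)
                          .multiply(q.subtract(BigInteger.ONE));
        BigInteger e = new BigInteger("17");           // public exponent
        BigInteger d = e.modInverse(phi);              // private exponent, 2753

        BigInteger msg = new BigInteger("65");         // the "message"
        BigInteger cipher = crypt(msg, e, n);          // encrypt: 2790
        BigInteger plain = crypt(cipher, d, n);        // decrypt: 65 again
        System.out.println("cipher=" + cipher + " plain=" + plain);
    }
}
```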


## Saturday, 6 October 2007

### History of factorization records by ECM

Original at:

http://www.loria.fr/~zimmerma/records/factor.html

(http://www.loria.fr/~zimmerma/records/ecmrecord.jpg)

Reproduced by kind permission of Paul Zimmermann

## Friday, 5 October 2007

### 222-digit SNFS completed with msieve

From the following link (by Greg Childers):

http://mersenneforum.org/showthread.php?p=115322#post115322

"The NFSNet factorization of 5^317-1 (SNFS difficulty of 222 digits) has been completed with msieve."

"...the runtime was just over 6 days"

Apparently this represents a real advance in processing speed over the established software suite for postprocessing large NFS sieve jobs. Thanks Jason (Papadopoulos)!


## Thursday, 4 October 2007

### Msieve 1.28

New version of msieve (by Jason Papadopoulos) now available.

Download from:

http://www.boo.net/~jasonp/qs.html


### Mersennewiki

...also, there is the Mersennewiki, which has a factorization section at:

http://mersennewiki.org/index.php/Factorization


## Wednesday, 3 October 2007

### mersenneforum

I'd just like to draw people's attention to the 'factoring' section of the mersenneforum (at the following link) - it has certainly made very interesting reading for me at times in the past...

http://mersenneforum.org/forumdisplay.php?f=19


## Tuesday, 2 October 2007

### Msieve v1.27

New version of msieve (by Jason Papadopoulos) now available.

Download from:

http://www.boo.net/~jasonp/qs.html

Announcement at:

http://mersenneforum.org/showthread.php?p=115491#post115491


## Monday, 1 October 2007

### Fermat number factorizations

Speaking of Fermat number factorization - this page:

http://www.prothsearch.net/fermat.html

lists the current state-of-play.

Note that only F5-F11 have been completely factored. The smallest Fermat number with no known factors is F14, and the smallest Fermat number whose compositeness (ie factorizability) has not been proven is F33.

[for additional/preparatory reference - Wikipedia article on Fermat numbers: http://en.wikipedia.org/wiki/Fermat_number]
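Since F_n = 2^(2^n) + 1, these numbers are easy to generate with BigInteger. The sketch below computes F5 and checks Euler's famous factor 641 (F5 = 4294967297 = 641 x 6700417, the first Fermat number shown to be composite):

```java
import java.math.BigInteger;

// Generate Fermat numbers F_n = 2^(2^n) + 1 and check a known factor of F5.
public class Fermat {

    public static BigInteger fermat(int n) {
        // 1 << n is 2^n, so this is 2^(2^n) + 1.
        return BigInteger.valueOf(2).pow(1 << n).add(BigInteger.ONE);
    }

    public static void main(String[] args) {
        System.out.println("F5 = " + fermat(5));
        // Euler's 1732 result: 641 divides F5.
        System.out.println("F5 mod 641 = " + fermat(5).mod(BigInteger.valueOf(641)));
    }
}
```

Note how fast the numbers grow: F14, the smallest with no known factor, already has 4,933 decimal digits, which is why exponents beyond F11 have resisted complete factorization.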


### Factorization of tenth Fermat number in 1995

From Richard Brent's page:

http://wwwmaths.anu.edu.au/~brent/pub/pub161.html

"We describe the complete factorization of the tenth Fermat number F10 by the elliptic curve method (ECM). The tenth Fermat number is a product of four prime factors with 8, 10, 40 and 252 decimal digits. The 40-digit factor was found after about 140 Mflop-years of computation"

