Also there is Arjen Bot's page of factorizations of 2^n+1

http://www.euronet.nl/users/bota/medium-p-odd.txt

## Monday, 31 December 2007

## Sunday, 30 December 2007

### Home Primes Search

Let me introduce another factorization project, the Home Primes Search:

http://www.mersennewiki.org/index.php/Home_Prime

http://www.mersennewiki.org/index.php/Home_Primes_Search

## Saturday, 29 December 2007

### NFSNET

Here is a link to the NFSNET project, a distributed attack on Cunningham Project targets using the Number Field Sieve:

http://www.nfsnet.org/

## Friday, 28 December 2007

### P49_101_87

I can't resist mentioning a fun result I had recently for the XYYXF project:

http://xyyxf.at.tut.by/records.html#ecm

[There I am currently in tenth place! :)]

I can certainly recommend the XYYXF ECMNet server as a slick and easy way to find interesting factors for that project.

## Thursday, 27 December 2007

### WE on Carmichaels

When running WE on Carmichael numbers, the algorithm clearly never terminates: for a Carmichael number n we have b^n ≡ b (mod n) for every base b, so the gcd step always returns n itself.

http://en.wikipedia.org/wiki/Carmichael_numbers

However, there is a simple workaround: multiply the input number to be factored (if Carmichael) by a suitable largish prime, e.g. a Mersenne prime. This will normally eject the factors, including the extra one, as required.
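
To see the problem and the fix concretely, here is a toy sketch (mine, separate from we2tr2 below) using the same gcd step: for the Carmichael number 561 every base returns the whole number, while 561 multiplied by the Mersenne prime 2^7-1 = 127 gives up a proper factor immediately:

```java
import java.math.BigInteger;

// Toy illustration (not part of we2tr2): why a Fermat-style gcd step
// stalls on Carmichael numbers, and how multiplying by a Mersenne
// prime unsticks it.
public class CarmichaelDemo {
    // gcd(b^n mod n - b, n): the core step of the WE loop.
    static BigInteger step(BigInteger n, BigInteger b) {
        return n.gcd(b.modPow(n, n).subtract(b));
    }

    public static void main(String[] args) {
        BigInteger n = BigInteger.valueOf(561);             // 3 * 11 * 17, Carmichael
        BigInteger m = n.multiply(BigInteger.valueOf(127)); // append 2^7 - 1
        for (int b = 2; b <= 10; b++) {
            // For the Carmichael number the gcd is always n itself: no progress.
            System.out.println("n: base " + b + " -> " + step(n, BigInteger.valueOf(b)));
        }
        // After multiplying by 127, base 2 already yields a proper factor
        // strictly between 1 and m.
        System.out.println("m: base 2 -> " + step(m, BigInteger.valueOf(2)));
    }
}
```

Every line of the first loop prints 561, while the final gcd is a nontrivial divisor of 71247.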

Here are some examples:

1)

```
tiggatoo:~/math/we james$ java we2tr2
f168003672409*(2^127-1)
[28584343649372241809274301558307881708368828786343]
base#72
elapsed=0s factor=6073 A=13044086312830013543739988769
base#70
elapsed=0s factor=3037 A=860763419187377590694240501658
base#230
elapsed=1s factor=9109 A=771019052198021707484273222802
170141183460469231731687303715884105727
duration: 1 seconds
```

2)

```
tiggatoo:~/math/we james$ java we2tr2
f173032371289*(2^89-1)
[107101850255573582977951074701018631079]
base#43
elapsed=0s factor=3067 A=1236330343540696294158885383112
base#149
elapsed=0s factor=6133 A=559525871748815289923955370298
base#758
elapsed=1s factor=9199 A=194689393018084379227390724801
618970019642690137449562111
duration: 1 seconds
```

3)

```
tiggatoo:~/math/we james$ java we2tr2
f973694665856161*(2^127-1)
[165665562777913375106491436985024163141370898298334047]
base#3
elapsed=0s factor=6841 A=1245204978926759694403655173839
base#0
elapsed=0s factor=2281 A=1024632434440788626941999868632
base#7
elapsed=0s factor=4561 A=457244767350940464517786915322
base#12
elapsed=0s factor=13681 A=85779304638188382542657979233
170141183460469231731687303715884105727
duration: 0 seconds
```

And here, for the record, and in case anyone wants to confirm the results above, is the Java source code to we2tr2 (Copyright JGW ~ 2000):

```java
import java.math.BigInteger;
import java.util.Random;
import java.util.Date;

public class we2tr2 {

    static BigInteger zero = new BigInteger("0");
    static BigInteger one = new BigInteger("1");
    static BigInteger two = new BigInteger("2");
    static BigInteger hundred = new BigInteger("100");
    static BigInteger thousand = new BigInteger("1000");
    static BigInteger n = new BigInteger("0");
    static BigInteger p = new BigInteger("0");
    static BigInteger known = new BigInteger("0");
    static BigInteger numtrials = new BigInteger("0");
    static String s;
    static int olds1 = 0;
    static int s1 = 0;
    static Date d;
    static long starttime;
    static long finishtime;
    static long duration;

    public static void main(String args[]) throws java.io.IOException {
        we2tr2 we2tr2inst = new we2tr2();
        char c;
        String sInput;
        StringBuffer sbInput = new StringBuffer("");
        while ((c = (char) System.in.read()) != '\n' && c != '\r')
            sbInput.append(c);
        System.in.read();
        sInput = sbInput.toString().trim();
        // An input starting with 'f' is an expression to be evaluated,
        // e.g. "f168003672409*(2^127-1)"; otherwise a plain integer.
        if (sInput.charAt(0) == 'f' || sInput.charAt(0) == 'F') {
            s = sInput.substring(1).trim();
            s1 = 0;
            olds1 = 0;
            p = we2tr2inst.eval(s);
            System.out.println('[' + p.toString() + ']');
        }
        else {
            p = new BigInteger(sInput);
        }
        n = p;
        d = new Date();
        starttime = d.getTime();
        we2tr2inst.factorize(n);
        d = new Date();
        finishtime = d.getTime();
        duration = (finishtime - starttime) / 1000;
        System.out.println("duration: " + duration + " seconds");
        System.out.println();
    }

    // Repeatedly picks random 100-bit bases A and looks for a factor
    // via gcd(b^n - b, n) with b = A^wanless mod n, recursing on the
    // cofactor until a (probable) prime remains.
    public boolean factorize(BigInteger n) {
        boolean prime = false;
        BigInteger numtested = new BigInteger("0");
        BigInteger T = new BigInteger("1");
        BigInteger b = new BigInteger("1");
        BigInteger A = new BigInteger("2");
        BigInteger wanless = new BigInteger("2");
        if (n.isProbablePrime(1000)) {
            prime = true;
            System.out.println(n);
            return prime;
        }
        // workaround - apparent java bug in modPow - JGW
        if (n.compareTo(two) < 0)
            return false;
        if (n.remainder(two).compareTo(zero) == 0) {
            System.out.println(two.toString());
            // added 2006-06-09
            return (factorize(n.divide(two)));
        }
        // end workaround
        // wanless = smallest power of 2 that is >= n
        while (wanless.compareTo(n) < 0)
            wanless = wanless.multiply(two);
        Random r = new Random();
        numtested = zero;
        while (T.compareTo(one) == 0 || T.compareTo(n) == 0) {
            // changed JW 2005-3-23
            A = new BigInteger(hundred.intValue(), r);
            // added JGW 2006-06-09
            System.out.print("base#" + numtested + '\r');
            // changed DT 2005-2-20
            b = A.modPow(wanless, n);
            T = n.gcd(b.modPow(n, n).subtract(b));
            numtested = numtested.add(one);
        }
        if (T.compareTo(one) > 0 && T.compareTo(n) < 0) {
            d = new Date();
            finishtime = d.getTime();
            duration = (finishtime - starttime) / 1000;
            System.out.println();
            System.out.println("elapsed=" + duration + "s" + '\t' + "factor=" + T.toString() + '\t' + "A=" + A.toString() + '\t');
            factorize(n.divide(T));
        }
        return prime;
    }

    // 'r'/'R': random number with the given number of bits.
    public BigInteger evalRand(char op, BigInteger oldn) {
        BigInteger n = new BigInteger("1");
        switch (op) {
        case 'r':
        case 'R':
            Random r = new Random();
            n = new BigInteger(oldn.intValue(), r);
            break;
        default:
            n = oldn;
            break;
        }
        return n;
    }

    // '!': factorial; '#': primorial (product of primes <= oldn).
    public BigInteger evalFact(BigInteger oldn, char op) {
        BigInteger n = new BigInteger("1");
        BigInteger i = new BigInteger("1");
        BigInteger j = new BigInteger("1");
        boolean prime = true;
        switch (op) {
        case '!':
            for (i = one; i.compareTo(oldn) <= 0; i = i.add(one))
                n = n.multiply(i);
            break;
        case '#':
            for (i = one; i.compareTo(oldn) <= 0; i = i.add(one)) {
                prime = true;
                for (j = two; (prime == true) && (j.multiply(j).compareTo(i) <= 0); j = j.add(one))
                    if (i.remainder(j).compareTo(zero) == 0)
                        prime = false;
                if (prime == true)
                    n = n.multiply(i);
            }
            break;
        default:
            n = oldn;
            break;
        }
        return n;
    }

    public BigInteger evalPower(BigInteger oldn, BigInteger n1, char op) {
        BigInteger n = new BigInteger("0");
        switch (op) {
        case '^':
            n = oldn.pow(n1.intValue());
            break;
        default:
            n = n1;
            break;
        }
        return n;
    }

    public BigInteger evalProduct(BigInteger oldn, BigInteger n1, char op) {
        BigInteger n = new BigInteger("0");
        switch (op) {
        case '*':
            n = oldn.multiply(n1);
            break;
        case '/':
            n = oldn.divide(n1);
            break;
        case '%':
            n = oldn.remainder(n1);
            break;
        default:
            n = n1;
            break;
        }
        return n;
    }

    public BigInteger evalSum(BigInteger oldn, BigInteger n1, char op) {
        BigInteger n = new BigInteger("0");
        switch (op) {
        case '+':
            n = oldn.add(n1);
            break;
        case '-':
            n = oldn.subtract(n1);
            break;
        default:
            n = n1;
            break;
        }
        return n;
    }

    // Recursive-descent evaluator for expressions such as "2^127-1",
    // honouring the usual precedence of ^ over * / % over + -.
    public BigInteger eval(String s) {
        BigInteger oldn0 = new BigInteger("0");
        BigInteger oldn1 = new BigInteger("0");
        BigInteger oldn2 = new BigInteger("0");
        BigInteger n = new BigInteger("0");
        char oldop0 = 0;
        char oldop1 = 0;
        char oldop2 = 0;
        char op = 0;
        while (s1 < s.length()) {
            // operand: parenthesised subexpression or number literal
            switch (s.charAt(s1)) {
            case '(':
            case '[':
            case '{':
                s1++;
                n = eval(s);
                break;
            case '0': case '1': case '2': case '3': case '4':
            case '5': case '6': case '7': case '8': case '9':
                n = readNum(s);
                break;
            default:
                break;
            }
            // operator (or closing bracket), if any
            if (s1 < s.length()) {
                switch (s.charAt(s1)) {
                case ')': case ']': case '}':
                case '!': case '#':
                case 'r': case 'R':
                case '^': case '*': case '/': case '%':
                case '+': case '-':
                    op = s.charAt(s1);
                    s1++;
                    break;
                default:
                    break;
                }
            }
            else
                op = 0;
            switch (op) {
            case 0:
            case ')': case ']': case '}':
                n = evalPower(oldn2, n, oldop2);
                n = evalProduct(oldn1, n, oldop1);
                n = evalSum(oldn0, n, oldop0);
                return n;
            case '!':
            case '#':
                n = evalFact(n, op);
                break;
            case 'r':
            case 'R':
                n = readNum(s);
                n = evalRand(op, n);
                break;
            case '^':
                n = evalPower(oldn2, n, oldop2);
                oldn2 = n;
                oldop2 = op;
                break;
            case '*': case '/': case '%':
                n = evalPower(oldn2, n, oldop2);
                oldop2 = 0;
                n = evalProduct(oldn1, n, oldop1);
                oldn1 = n;
                oldop1 = op;
                break;
            case '+': case '-':
                n = evalPower(oldn2, n, oldop2);
                oldop2 = 0;
                n = evalProduct(oldn1, n, oldop1);
                oldop1 = 0;
                n = evalSum(oldn0, n, oldop0);
                oldn0 = n;
                oldop0 = op;
                break;
            default:
                break;
            }
        }
        return n;
    }

    // Reads a decimal literal starting at position s1.
    public BigInteger readNum(String s) {
        BigInteger n = new BigInteger("0");
        olds1 = s1;
        while (s1 < s.length() && Character.isDigit(s.charAt(s1)))
            s1++;
        n = new BigInteger(s.substring(olds1, s1));
        return n;
    }
}
```

## Wednesday, 26 December 2007

### Charles Babbage

Here is a sketch (from Wikipedia) of Charles Babbage (b 26 December 1791).

He was the first to envisage a machine that could compute, though at the time this vision was of a purely mechanical device.

From http://en.wikipedia.org/wiki/Babbage

"He began in 1822 with what he called the difference engine, made to compute values of polynomial functions"

"The first difference engine was composed of around 25,000 parts, weighed fifteen tons (13,600 kg), and stood 8 ft (2.4 m) high. Although he received ample funding for the project, it was never completed"

"In 1991 a perfectly functioning difference engine was constructed from Babbage's original plans. Built to tolerances achievable in the 19th century, the success of the finished engine indicated that Babbage's machine would have worked"
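
The quoted passages can be illustrated with a short sketch (mine, not from Wikipedia): the difference engine tabulates a polynomial using only repeated addition, because the d-th finite differences of a degree-d polynomial are constant. Here p(x) = x^2 + x + 41, a polynomial often quoted in connection with difference-engine demonstrations, is tabulated from its starting differences:

```java
// Sketch of the finite-difference tabulation that Babbage's difference
// engine mechanised: once the leading differences of a polynomial are
// set up, every further value needs only additions, no multiplication.
public class DifferenceEngine {
    // Tabulate p(x) = x^2 + x + 41 for x = 0..count-1 using differences.
    static long[] tabulate(int count) {
        long[] out = new long[count];
        long value = 41;   // p(0)
        long d1 = 2;       // first difference: p(1) - p(0) = 43 - 41
        final long d2 = 2; // second difference of x^2 + x + 41 is constant
        for (int x = 0; x < count; x++) {
            out[x] = value;
            value += d1;   // advance by the first difference
            d1 += d2;      // advance the first difference by the second
        }
        return out;
    }

    public static void main(String[] args) {
        long[] t = tabulate(6);
        for (int x = 0; x < t.length; x++)
            System.out.println("p(" + x + ") = " + t[x]); // 41, 43, 47, 53, 61, 71
    }
}
```

The loop body contains no multiplication at all, which is exactly what made the design practical for a machine built from gears and figure wheels.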

## Tuesday, 25 December 2007

### P2007

To celebrate Christmas 2007, for the past few days I have been factorizing P2007.

Here are the results:

```
ibu:~/math james$ time java superfac9
f2^2007+1
[14696072899510457910180264975074329395485666586735298566113827031369808145822340017365241424851280254956108347379039523500123122699047108242251921358933160773008638610599971840088163730974725743542902654728126239332046779346737710585256579333179693308275839559444787047544912589519783891140629020412202583212053620350010688717104574055412999539319651392054912347738448106306817040926244005345442289064602444671410741520258787821875717396461207456197233847539467765831034596299478021012490490523728714592688694474716929987628644661687302977141155300336976022455747686505323874664699578081559660947075760129]
wanless...
brutep: 3
wanless...
brutep: 3
wanless...
brutep: 3
brute...
brutep: 19
wanless...
ecm...
aprtcle: 247531
ecm...
aprtcle: 219256122131
wanless...
aprtcle: 20493495920905043950407650450918171260318303154708405513
ecm...
aprtcle: 4340301546362831119363
aprtcle:
56377694445208154141927654613855613062927113955212040908548454699046039020893338370875013074480485757794923
ecm...
aprtcle: 73215361
ecm... ^C

real 3459m6.204s
user 3424m7.740s
sys 5m57.800s
```

## Monday, 24 December 2007

### Andrew Odlyzko

Andrew Odlyzko (picture reproduced with kind permission)

http://www.dtc.umn.edu/~odlyzko/

is probably best known for his contributions to the analysis of the zeros of the Riemann zeta function and to the study of discrete logarithms. However, he also helped develop the Lanczos linear algebra stage of QS (and other sieves) in his paper "Solving large sparse linear systems over finite fields" (1991),

http://citeseer.ist.psu.edu/140341.html

and has also authored a description of the state of factorization, looking ahead, "The future of integer factorization" (1995)

http://www.dtc.umn.edu/~odlyzko/doc/crypto.html

## Sunday, 23 December 2007

### Fibonacci# factorization

Fibonacci Number Factorization:

http://home.att.net/~blair.kelly/mathematics/fibonacci/index.html

## Saturday, 22 December 2007

### Tom Flowers

Today marks the birthday of Dr Tommy Flowers, builder (with some involvement from Alan Turing) of the first modern, i.e. digital, computer. It was constructed during WWII here in the UK to decrypt German operational messages, and was christened 'Colossus'. It contained 2400 thermionic valves (picture from Wikipedia), each several centimetres long, so its name was suitably apt! These valves were basically operating as switches (even though they were primarily designed as amplifiers), serving the same role that transistors on a chip do in modern microprocessors, though the latter contain many millions on a tiny wafer.

The Colossus represented a real advance in computing though, as it was the first time that a machine had been built using electronic rather than mechanical switches for greater speed and flexibility. It was this potential that Flowers recognised.

From http://www.ivorcatt.com/47c.htm

"At the time I had no thought or knowledge of computers in the modern sense and had never heard the term used except to describe somebody who did calculations on a desk machine."

"Colossus was useful in more than one way, and there were even demonstrations applying it to number theory. But these demonstrations were more notable for their ingenuity than for their effectiveness."

## Friday, 21 December 2007

### Partition# factorization

Factorization of Partition Numbers:

http://www.asahi-net.or.jp/~KC2H-MSM/mathland/part/

(see

http://en.wikipedia.org/wiki/Partition_%28number_theory%29

for a description/definition of a 'partition' number)
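
For concreteness, here is a minimal sketch (my own, not from the project's pages) of computing the partition numbers p(n) whose factorizations the project collects, by standard dynamic programming over allowed part sizes:

```java
public class PartitionCount {
    // p(n): number of ways to write n as an unordered sum of positive
    // integers, computed by coin-change-style dynamic programming.
    static long p(int n) {
        long[] ways = new long[n + 1];
        ways[0] = 1; // the empty sum
        for (int part = 1; part <= n; part++)          // allow parts 1..n in turn
            for (int total = part; total <= n; total++)
                ways[total] += ways[total - part];     // use one more copy of 'part'
        return ways[n];
    }

    public static void main(String[] args) {
        System.out.println(p(4));   // 5:  4, 3+1, 2+2, 2+1+1, 1+1+1+1
        System.out.println(p(5));   // 7
        System.out.println(p(100)); // 190569292
    }
}
```

The values grow quickly, which is what makes factorizing them a non-trivial project.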

## Thursday, 20 December 2007

### Primorials(+/-1) Factorization

[The first in a trio of links, for this blog, of some types of numbers currently being factorized]

Factorization of Primorials(+/-1):

http://primorial.unit82.com/
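
For anyone new to the term: a primorial p# is the product of all primes up to and including p, and the project factorizes its neighbours p# + 1 and p# - 1. A minimal sketch of the definition (mine, using BigInteger's isProbablePrime for brevity):

```java
import java.math.BigInteger;

public class Primorial {
    // p# : product of all primes up to and including p.
    static BigInteger primorial(int p) {
        BigInteger result = BigInteger.ONE;
        for (int k = 2; k <= p; k++)
            if (BigInteger.valueOf(k).isProbablePrime(50)) // exact for small k in practice
                result = result.multiply(BigInteger.valueOf(k));
        return result;
    }

    public static void main(String[] args) {
        BigInteger seven = primorial(7); // 2 * 3 * 5 * 7 = 210
        System.out.println(seven);
        // The project's targets are p# + 1 and p# - 1,
        // e.g. 211 (prime) and 209 = 11 * 19.
        System.out.println(seven.add(BigInteger.ONE));
        System.out.println(seven.subtract(BigInteger.ONE));
    }
}
```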

## Wednesday, 19 December 2007

### Factor Announcements

Here is a nice page detailing historical announcements of factorization achievements:

http://www.crypto-world.com/FactorAnnouncements.html

## Tuesday, 18 December 2007

## Monday, 17 December 2007

### Msieve v1.32

New version of msieve available:

Announcement at:

http://mersenneforum.org/showpost.php?p=120869&postcount=344

Download from:

http://www.boo.net/~jasonp/qs.html

It would appear that the promised merge w/ GGNFS is underway...

## Sunday, 16 December 2007

### Number of University Departments Factorizing

How many University departments around the world are actively engaged in factorization? These spring to mind...

Well, there's Bruce Dodson's university, Lehigh in Pennsylvania USA for one - I understand Bruce manages several dozen PC's running ECM curves on the Cunningham Project targets.

Paul Zimmermann, at INRIA in Nancy, France, is in charge of developing the GMP-ECM software itself, which Bruce Dodson (amongst others :) is running. I believe Alex Kruppa has links there, as well as access to many PCs in a French grid, GRID5000. I'm guessing the 5000 refers to the approximate, or at least target, number of cores in the grid:

https://www.grid5000.fr/mediawiki/index.php/Special:G5KHardware

Greg Childers, at Cal State, Fullerton, California USA, also has several dozen machines available for factoring, which he uses on a variety of projects.

At Cambridge University, UK, Paul Leyland has, and still is, contributing to a variety of projects, including Cunningham Project (via NFSNET - more on that project at a later date, hopefully) and more recently, Homogeneous Cunninghams.

Then of course there is the (legendary?) CWI, or Centrum voor Wiskunde en Informatica in Amsterdam in the Netherlands...

There must be many more (maybe these are just the most vociferous!:), feel free to add to the list, you other folks out there [or let me know of specific omissions] - but that's 5 for starters, [I seem to have picked mainly computational rather than theoretical research] almost without thinking!

## Saturday, 15 December 2007

### Tesla

...or you can buy a dedicated device (one example pictured (c) NVIDIA), called a 'Tesla', with the NVIDIA CUDA GPUs in situ:

http://en.wikipedia.org/wiki/NVIDIA_Tesla

## Friday, 14 December 2007

### Msieve V1.31

New version of msieve (by Jason Papadopoulos) now available. It seems this is a significant upgrade, with lots of new stuff.

Download from:

http://www.boo.net/~jasonp/qs.html

Announcement here:

http://mersenneforum.org/showpost.php?p=120642&postcount=339

## Thursday, 13 December 2007

### CUDA

Here are some links to an interesting new technology, which might be very useful for any "embarrassingly parallel" (see an earlier blog post for a definition of that) factorization (or other) algorithms. It's basically a method of using all the processing power inherent in modern GPUs for CPU-type calculations, and something I for one will be keeping an eye on...

http://en.wikipedia.org/wiki/CUDA

http://developer.nvidia.com/object/cuda.html#downloads

http://courses.ece.uiuc.edu/ece498/al1/

http://courses.ece.uiuc.edu/ece498/al1/Syllabus.html

It's even coming to Mac! :)

http://forums.nvidia.com/index.php?showtopic=47884

## Wednesday, 12 December 2007

### GMP-M+2

I've recently noticed an interesting feature of GMP's implementation of modular exponentiation (mpz_powm). I'm using this as part of random-base WEP on large M+2's. Anyway, I always imagined, and hoped, that it would use "Russian Peasant" binary exponentiation (see an earlier post on this blog for a description of that). If that is the case,

http://gmplib.org/manual/Powering-Algorithms.html#Powering-Algorithms

there should theoretically be a speed advantage to testing numbers that are (very) close to a power of 2 (as M+2's obviously are). And indeed I am observing just that - a 40% (!) speed increase testing the _raw_ M+2, rather than dividing out any factors (even though dividing out factors obviously leaves a slightly smaller number).
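
To see where the advantage would come from, here is a sketch (mine, not GMP's actual implementation) of left-to-right "Russian Peasant" square-and-multiply, counting the extra multiplies caused by 1-bits of the exponent. A number like 2^127 + 1 has only two 1-bits, so almost all the work is squarings:

```java
import java.math.BigInteger;

// Sketch of "Russian Peasant" (binary square-and-multiply) modular
// exponentiation, counting the general multiplies performed beyond
// the unavoidable squarings.
public class PeasantPow {
    static int extraMultiplies; // multiplies triggered by 1-bits of the exponent

    static BigInteger powMod(BigInteger base, BigInteger exp, BigInteger mod) {
        extraMultiplies = 0;
        BigInteger result = BigInteger.ONE;
        BigInteger b = base.mod(mod);
        for (int i = exp.bitLength() - 1; i >= 0; i--) {
            result = result.multiply(result).mod(mod); // square every step
            if (exp.testBit(i)) {                      // multiply only on 1-bits
                result = result.multiply(b).mod(mod);
                extraMultiplies++;
            }
        }
        return result;
    }

    public static void main(String[] args) {
        BigInteger m = BigInteger.valueOf(2).pow(127).add(BigInteger.ONE); // M+2-style number
        BigInteger b = BigInteger.valueOf(3);
        System.out.println(powMod(b, m, m).equals(b.modPow(m, m))); // true: agrees with the library
        // 2^127 + 1 has just two 1-bits, so only two extra multiplies:
        System.out.println(extraMultiplies); // 2
    }
}
```

A random 127-bit exponent would average around 63 such extra multiplies, which is roughly consistent with the speed difference observed above, assuming the library behaves like this sketch.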

## Tuesday, 11 December 2007

### C138_101_75

I'm a total xyyxf junkie! :)

Recently I started another GNFS factorization for them, this time of a 138-digit input number.

ETA: several months on one CPU (assuming all goes well). Currently sieved 9502 of the 839184 relations required.

Hopefully the process will further increase my understanding of GNFS, which is kind of the point!

Recently started another GNFS factorization for them, this time of 138-digit input number.

ETA several months... [on 1 CPU](assuming all goes well). Currently sieved 9502/839184 rels. required.

Hopefully the process will further increase my understanding of GNFS, which is kindof the point!

## Monday, 10 December 2007

### WIFC

This entry is also probably somewhat overdue:

Check out Hisanori Mishima's comprehensive page on factorization at:

http://www.asahi-net.or.jp/%7EKC2H-MSM/mathland/matha1/

Note that there are also download links to factoring software (by Satoshi Tomabechi) on this page, especially for MPQS-type factoring.

## Sunday, 9 December 2007

### Cunningham Project

This is probably somewhat overdue, but here are a couple of links to the Cunningham Project, "one of the oldest continuously ongoing activities in computational number theory". [apparently it's been around since 1925]

http://en.wikipedia.org/wiki/Cunningham_project

http://homes.cerias.purdue.edu/~ssw/cun/index.html

## Saturday, 8 December 2007

### So just how big are these numbers?

So just how big are these numbers?

Well, mathematics is unique among the sciences in generating really SILLY (or impressive, depending on your point of view :)) sized numbers. Even (standard) physics doesn't just fail to come close: it can't stand comparison at all. Hence the other-worldly nature of math (and mathematicians! :)). Now, possibly the combinatorial aspects of holistic physics theories can generate big numbers, but in that case the theory is as much mathematical as physical anyway.

The way math generates these numbers is (usually) by a trick called 'exponentiation'. This allows one to write (and sometimes even calculate with and manipulate) a very long number in a very succinct form. Exponentiation is basically a shorthand notation for multiplying a number by itself many times. Thus 10^100 (also known as a googol) is the number you get by multiplying together one hundred 10s; written out in full it is '1' followed by 100 zeroes. And this isn't even that big by math (or even factorization) standards. Some of the larger M+2 numbers I've been testing would have millions of digits if written out in full. You can see why I called these numbers 'silly'-sized! And if you want to go really crazy, you can even iterate/stack the exponents... (notations exist for this, such as Knuth's up-arrow notation)

I expect many of you will be aware of the story of the Chinese inventor of chess? Apparently the emperor was so impressed with chess that he promised the inventor any reward he cared to name, and the inventor (obviously a mathematician too) tricked him into granting this one, based on the 64 squares of the chess-board:

1 grain of rice for the first square, 2 for the second, 4 for the third, 8 for the fourth, and so on, doubling each time. The total number of grains of rice is 2^64 - 1, or about 1.8*10^19 - far more than the whole rice production of China, and the emperor was never able to make good his promise...! This doubling process is another example of exponentiation, as the emperor clearly learned the hard way.
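Both of these quantities are easy to check directly; a quick sketch in Python, whose integers are arbitrary-precision:

```python
# A googol: one hundred 10s multiplied together, i.e. '1' followed by 100 zeroes.
googol = 10 ** 100
assert len(str(googol)) == 101   # the leading '1' plus 100 zeroes

# The chessboard reward: 1 + 2 + 4 + ... + 2^63 grains of rice.
grains = sum(2 ** square for square in range(64))
assert grains == 2 ** 64 - 1     # the geometric series sums to 2^64 - 1
print(grains)                    # 18446744073709551615, i.e. about 1.8 * 10^19
```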

For comparison:

1) Number of seconds elapsed since Genesis ~ 2*10^11 (just a measly 12-digit number)

2) Number of seconds elapsed since the Jurassic ~ 2*10^15 (just a measly 16-digit number)

3) Number of seconds elapsed since the start of the visible universe ~ 4*10^17 (just a measly 18-digit number)

Incidentally, the complete factorization of any of these (exact) numbers on a modern PC would take only a split-second.
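For instance, a few lines of naive trial division (my own sketch - real factoring software is far more sophisticated) dispatch numbers of this size instantly, since they are products of tiny primes:

```python
def trial_factor(n):
    # Naive trial division - hopeless for large 'hard' composites,
    # but instant for small or very smooth inputs like these.
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# ~seconds since Genesis: 2 * 10^11 = 2^12 * 5^11
print(trial_factor(2 * 10 ** 11))
```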

I'll repeat...

Some of the numbers under test at the Mersenneplustwo project, for example, have _millions_ of digits. See now why I used the adjective 'SILLY'? There really is almost no other word for numbers of this size.

Finally, and this is perhaps the most remarkable fact of all: mathematics also deals with the infinite. And compared to infinity (ie the _whole_ Universe) every single one of these numbers is actually infinitesimally SMALL! Now how do you get your head around that??? I know I, for one, have problems with it...

Ahh, the mysteries of the finite and the infinite - or as Shakespeare famously put it:

"To be or not to be, that is the question"

## Friday, 7 December 2007

### M+2 record

Here is a record, for the Mersenneplustwo project, of the number and size of new factors found, as well as an indication of the total effort expended, for each year since 2005:

2005 - 10 (largest 38 digits) - 4 GHz-yrs

2006 - 3 (largest 19 digits) - 12 GHz-yrs

2007 [so far] - 2 (largest 23 digits) - 100 GHz-yrs

## Thursday, 6 December 2007

### M+2 progress

A bit of news from the Mersenneplustwo project:

As hoped for, getting Lenny the new MacBook going on those mid-range M+2's with mprime (ie by ECM) recently yielded a 21-digit factor from (M+2)1257787:

http://bearnol.is-a-geek.com/Mersenneplustwo/Mersenneplustwo.html

Thanks once again to George Woltman, for his cool software!

http://www.mersenne.org/freesoft.htm

This is the second new factor found this year (both by mprime)...

## Wednesday, 5 December 2007

## Tuesday, 4 December 2007

### FFTs-part5

The Cooley-Tukey algorithm is an FFT algorithm especially well suited, and easily adapted, to large-integer multiplication. It was first published in 1965.

GIMPS' mprime uses a "radix-4" Cooley-Tukey transform to achieve a computation time of the order of N log N (for multiplying two N-digit numbers).

http://en.wikipedia.org/wiki/Cooley-Tukey_FFT_algorithm
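For illustration, here is a minimal recursive radix-2 Cooley-Tukey transform in Python (my own sketch; mprime's actual radix-4, hand-tuned floating-point implementation is of course far more elaborate):

```python
import cmath

def fft(a):
    # Radix-2 Cooley-Tukey: split into even- and odd-indexed halves,
    # transform each recursively, then combine with "twiddle" factors.
    # len(a) must be a power of two.
    n = len(a)
    if n == 1:
        return list(a)
    even, odd = fft(a[0::2]), fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

Each level halves the transform length, which is exactly where the N log N operation count comes from.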

## Monday, 3 December 2007

### FFTs-part4

Parseval's Theorem

A special case of Plancherel's Theorem, with function x equal to function y; the scaling constant is then 1/(2*pi) [FT] or 1/N [DFT].

http://en.wikipedia.org/wiki/Parseval%27s_theorem

"The interpretation of this form of the theorem is that the total energy contained in a waveform x(t) summed across all of time t is equal to the total energy of the waveform's Fourier Transform X(f) summed across all of its frequency components f."
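In the DFT form the identity is easy to verify numerically; a small self-contained sketch (naive O(N^2) DFT, my own illustration):

```python
import cmath

def dft(x):
    # Naive O(N^2) discrete Fourier transform.
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

x = [1.0, 2.0, -1.0, 0.5]
X = dft(x)
energy_time = sum(abs(v) ** 2 for v in x)
energy_freq = sum(abs(v) ** 2 for v in X) / len(x)   # note the 1/N scaling
assert abs(energy_time - energy_freq) < 1e-9
```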

## Sunday, 2 December 2007

### FFTs-part3

Plancherel's Theorem

A (unitary) relationship between any two functions x and y and their (D)FTs: the inner product of x and y equals that of their transforms, subject to a constant scaling factor. This property can be of additional use in simplifying the problem under computation.

http://en.wikipedia.org/wiki/Plancherel_theorem
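In DFT form, that scaling factor is 1/N, and the identity is easy to check numerically (naive DFT, my own sketch):

```python
import cmath

def dft(x):
    # Naive O(N^2) discrete Fourier transform.
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

x = [1.0, -2.0, 0.5, 3.0]
y = [0.5, 1.0, 1.0, -1.0]
X, Y = dft(x), dft(y)
lhs = sum(a * b for a, b in zip(x, y))                       # x, y are real
rhs = sum(a * b.conjugate() for a, b in zip(X, Y)) / len(x)  # 1/N scaling
assert abs(lhs - rhs) < 1e-9
```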

## Saturday, 1 December 2007

### FFTs-part2

The DFT (discrete Fourier transform) is basically a digital sampling (ie approximation) of the analog Fourier transform: it operates on a finite sequence of samples, so all the quantities involved remain finite.

http://en.wikipedia.org/wiki/Discrete_Fourier_transform

A fast Fourier transform (FFT) is any efficient algorithm for computing the discrete Fourier transform (and its inverse) for a particular problem.

http://en.wikipedia.org/wiki/Fast_Fourier_transform
