tag:blogger.com,1999:blog-81723314207651247032018-05-28T21:09:30.019-07:00William Stein's Number Theory Research BlogWilliam Steinhttps://plus.google.com/115360165819500279592noreply@blogger.comBlogger15125tag:blogger.com,1999:blog-8172331420765124703.post-77951964329144783652011-01-31T11:28:00.000-08:002011-01-31T11:28:32.553-08:00Higher rank curves with nontrivial ShaI wonder if all of the problems are open. Here we consider pairs (E,p) where E is an elliptic curve over Q and p is a prime. <br /><br /><ol><li> Prove there are infinitely many pairs (E,p) such that: (a) E(Q) has rank >= 2, and (b) Sha(E/Q)(p) is finite.</li><li> Prove there are infinitely many pairs (E,p) such that: (a) E(Q) has rank >= 2, and (b) Sha(E/Q)(p) is nonzero.</li><li> Prove there are infinitely many pairs (E,p) such that: (a) E(Q) has rank >= 2, and (b) Sha(E/Q)(p) is nonzero and finite.</li><li>Prove 2 assuming the full BSD conjecture. (1 is trivially implied by BSD and the existence of families of curves with 2 marked points.)</li><li>Prove 1 assuming the BSD rank conjecture, but not the BSD formula.</li><li>Same questions but over a number field K.</li><li>Same questions but for triples (E,p,K), where E is over K and we vary over all K.</li></ol>William Steinhttps://plus.google.com/115360165819500279592noreply@blogger.com1tag:blogger.com,1999:blog-8172331420765124703.post-74428694487290112742010-11-18T11:45:00.000-08:002010-11-18T17:18:15.510-08:00Current Research Papers and BooksThis post is about the papers I'm working on these days. Each of the papers below is one that wouldn't be too difficult to finish and submit. As far as I know, most of the serious theoretical issues in all of the papers are resolved. I'm also putting links below to a snapshot of each of the papers as of this post (which might vanish or get updated in the future). 
<br /><br /><ol><li> <a href="http://wstein.org/papers/current/back_burner/nongorenstein/">Calegari-Stein: Non-Eisenstein descent</a></li><ul><li> Numerous small technical questions to answer, still</li><li> Most of it should now be do-able in Sage; patch up anything that isn't</li></ul><li> <a href="http://wstein.org/papers/current/modabvar">Stein: Computing End(A<sub>f</sub>) and minimal degree of A<sub>f</sub>^ --> A<sub>f</sub></a> </li><ul><li>The computations that I did with Tseno Tselkov a few years ago...</li><li>Polish and publish</li><li>More data should be computable these days</li><li>I'm not sure the algorithm is even implemented in Sage, but it shouldn't be too hard</li></ul><li> <a href="http://wstein.org/papers/current/kamienny/">Kamienny-Stein-Stoll: Quartic torsion points</a></li><ul><li>Easy approach: just let it depend on Magma. Lame. Or just put the Magma part in a separate paper not by me (use a pseudonym or just Stoll).</li><li>Harder: determine the cuspidal torsion subgroup using new ideas; this at least helps</li><li>Hard as hell: implement Hess's algorithm in Sage for this (this is what I want to do -- it's weak to apply Hess's algorithm to this problem if I don't truly understand it anyway).</li><li>Do a better writeup of the intro based on talks I've given</li></ul><li> <a href="http://wstein.org/papers/current/kolyconj/">Stein: Kolyvagin's conjecture</a></li><ul><li>First ever verification of it</li><li>Interesting theoretical results about the Kolyvagin derivative: highlight in intro</li><li>Just needs polish at this point. Maybe improve the algorithm and redo the tables.</li><li>Should add more about what is in Stein-Weinstein</li></ul><li> <a href="http://wstein.org/papers/current/kolydensity/">Stein-Weinstein: Kolyvagin densities</a></li><ul><li>Explains data from Kolyvagin conj</li><li>Need tex file</li><li>Just needs polish, I think. 
Jared wants to push the results too far, maybe?</li></ul><li> <a href="http://wstein.org/papers/current/wuthrich-stein/">Stein-Wuthrich: Applying <i>p</i>-adic methods to bound Shafarevich-Tate groups</a></li><ul><li>Need to apply it to get some big interesting tables/computational results, or what is the point!?</li><li>There is a weird merge issue regarding the reducible case</li></ul><li> <a href="http://wstein.org/papers/current/bradshaw-stein-motivic/">Bradshaw-Stein: Provable computation of motivic L-functions</a></li><ul><li>Put his thesis into a paper, and give it my spin and numerical data</li><li>Motivate with quote from Dokchitser's paper; once this paper is published, people will be able to reference it when claiming the existence of algorithms to do blah, blah, even if they don't actually <i>use</i> it as such.</li><li>Just a part of the thesis!</li></ul><li> <a href="http://wstein.org/papers/current/bradshaw-stein-heegner_bound/">Bradshaw-Stein: A conjectural Kolyvagin-style bound over ring<br />class fields for curves with analytic rank at least 2</a></li><ul><li>Based on Robert's calculations in his thesis</li><li>Very easy sell to publish this in a good journal</li><li>Can be viewed as a sort of follow-up to a Bertolini-Darmon paper</li></ul><li> <a href="http://wstein.org/rh/">Mazur-Stein: A popular book on the Riemann Hypothesis</a><br /><ul><li>Tricky because of theoretical issues with non-tempered distributions, which we're confronted with. 
Despite being a popular book, it's definitely interesting research, as is the case with anything Barry touches!</li></ul></li><li> <a href="http://wstein.org/books/ribet-stein/main.pdf">Ribet-Stein: Lectures on Modular Forms and Hecke operators</a><br /><ul><li>Just needs polish and more examples.</li><li>Did get some work done polishing this when I taught out of it in 2003</li></ul><br /><br /></ol>William Steinhttps://plus.google.com/115360165819500279592noreply@blogger.com0tag:blogger.com,1999:blog-8172331420765124703.post-4657141164675688272010-09-02T02:25:00.000-07:002010-09-02T02:25:14.307-07:00Purple SageIn order to support my computational research projects, I've decided to start a new project called <a href="http://purple.sagemath.org/">"Purple Sage" (PSAGE)</a>. This is based on the <a href="http://sagemath.org/">Sage</a> math software project that I started nearly 6 years ago, but has very different goals. The main goal of the Sage project is to <i><b>Create a viable free open source alternative to Magma, Maple, Mathematica, and Matlab,</b><span class="Apple-style-span" style="font-style: normal;"> whereas</span></i> the goal of the PSAGE project is to <i><b>Create viable free open source software for arithmetic geometry research</b></i>. These are very different goals. <br /><br />I intend to spend most of my mathematical programming effort during the next year on PSAGE, though I'll continue to support Sage by providing infrastructure, helping with releases, and sponsoring Sage Days workshops. During the last 2-3 years, the Sage project has become large, stable, and democratic, and has substantial momentum behind it. 
It is time for me to do work that much more directly supports my research.<br /><br />PSAGE will consist of (1) a stable subset of Sage, which includes only C/C++/Python/Cython code (no Fortran or Lisp), and has no Pexpect interfaces, (2) numerous research-oriented cutting-edge Python/Cython libraries, and (3) a large network-accessible database of mathematical objects, probably built using <a href="http://mongodb.com/">MongoDB</a>. The target audience for the documentation and code will be research mathematicians (e.g., like the members of <a href="http://mathoverflow.net/">http://mathoverflow.net</a>). I will focus on writing code for arithmetic geometry, though I'm open to including other code contributions in PSAGE.<br /><br />The development process will focus on making cutting-edge code for arithmetic geometry available quickly to other researchers. The requirements of 100% test coverage, documentation, and peer review will be removed, so that useful code, e.g., for computing with Maass forms, Siegel modular forms, Hilbert modular forms, etc., will get into researchers' hands quickly. <br /><br />PSAGE could in the long run lead to a more modular approach to Sage itself. For example, the PSAGE library will be a Python library like any other, and it should be possible to just install it into an existing Sage as an optional package. There could be similar projects, e.g., NSAGE = Numerical Sage, which would focus on numerical computation. 
And, having a smaller core will help with porting, maintenance, and flexibility.William Steinhttps://plus.google.com/115360165819500279592noreply@blogger.com4tag:blogger.com,1999:blog-8172331420765124703.post-38229929405598123032010-08-24T23:26:00.000-07:002010-08-25T17:56:07.237-07:00Computational Projects: Part 1<div class="separator" style="clear: both; text-align: left;"></div><div class="separator" style="clear: both; text-align: left;">I made a diagram that illustrates most of the algorithms I wish <a href="http://sagemath.org/">Sage</a> had to support my current mathematical research program. I am focusing most of my Sage development effort on ensuring that these get implemented as soon as possible. I will not <i>depend</i> on anybody else to help me with any of this, though if people would like to help in various ways that would be greatly appreciated. Over half of these are in the closed commercial <a href="http://magma.maths.usyd.edu.au/magma/">Magma software</a> already, which makes it much easier for me to implement them in Sage.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><span class="Apple-style-span" style="font-size: x-large;">Research Versus the Goals of the Sage Project</span></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><span class="Apple-style-span" style="font-size: x-large;"></span>A concern I have is that there might be disagreement in the Sage community about some of the design decisions that are necessary to support the implementation of my planned algorithms. 
For example, improving number field arithmetic to be dramatically faster requires making some sacrifices so that the implementation takes one week instead of two months (e.g., fundamentally depending on <a href="http://www.flintlib.org/">FLINT</a> for basic arithmetic, which at least one Sage developer will be quite unhappy about). Even though I'm the leader of the Sage project, I won't just add code to Sage that a lot of people don't like. However, implementing everything planned here is a lot of work, and given the time and resources I have available, it will be <b><u><i>impossible </i></u></b>to fully document and test everything up to the standard necessary for inclusion in Sage. If this gets to be a nontrivial problem, I will maintain my own alternative ``fork'' of Sage, which I'll distribute separately. It will have ``bleeding edge'' code that I'm developing, and I'll make full source releases available on a separate website, but I will not have to worry about constantly updating packages (like <a href="http://maxima.sourceforge.net/">Maxima</a> and <a href="http://scipy.org/">SciPy</a>) that are irrelevant to my research. The architecture of Sage makes creating such a fork easy, and of course I know how to manage this. In a few years, once I've built everything and completed the research I plan to do with it, I can spend the time merging back the good parts into mainline Sage. I know that several other Sage developers (e.g., Nick Alexander, David Roe, etc.) do something similar, since their personal research is so important to them, and getting code into Sage is overly difficult. It is very important that I acknowledge this tension, because currently the demands of "publishing" code in Sage -- and indeed of maintaining Sage as a general-purpose system -- are so tough that they make genuine development of Sage as a <b><i>research tool</i></b> too difficult. 
Solution: create something that isn't Sage.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><b><span class="Apple-style-span" style="font-size: x-large;">The Diagram</span></b></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Here is the diagram. It is a tiny picture, but if you click on it you should get a bigger PDF that you can zoom into (if you are using Linux, download <a href="http://wstein.org/blog/20100824/sagecodingprojects20100.png">this PNG</a> instead). </div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://wstein.org/blog/20100824/sage_coding_projects_20100824.pdf"><img border="0" height="197" src="http://4.bp.blogspot.com/_urfwuonbW3E/THSvT5tWtDI/AAAAAAAAbCc/am-k69NclzE/s320/all.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><br /><br />I will spend the rest of this post discussing the first few algorithms at the top of the diagram, and my motivation for implementing them. 
Future posts will address the rest of the algorithms.<br /><br /><div class="separator" style="clear: both; text-align: center;"><br /></div><a href="http://3.bp.blogspot.com/_urfwuonbW3E/THS0EvhmWSI/AAAAAAAAbCk/qW1HtM6zcl8/s1600/zoom.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="306" src="http://3.bp.blogspot.com/_urfwuonbW3E/THS0EvhmWSI/AAAAAAAAbCk/qW1HtM6zcl8/s400/zoom.png" width="400" /></a><br /><br /><br /><b>Number Fields</b><br />The top of the diagram lists number fields, and the two projects involve greatly speeding up basic <i>arithmetic</i> in Sage with elements of number fields. This is important to my research, since I intend to do explicit computations with elliptic curves and modular forms over number fields, including making large tables, so fast arithmetic is essential. Arithmetic in some cases in Sage is ridiculously slow, especially for relative fields. Fixing this involves switching from using <a href="http://www.shoup.net/ntl/">NTL</a> for all number field arithmetic to using FLINT (via Sebastian Pancratz's new code for polynomials over the rationals) for absolute fields and Singular for relative extensions. I spent some time on this project and the results are at <a href="http://trac.sagemath.org/sage_trac/ticket/9541">Trac 9541</a>, but the code there should not be used as is. Instead, I'll finish this project by extracting the best ideas from those patches, and adding more. The main idea for relative number fields is to create a C interface (written in Cython!) to Singular for computing with transcendence degree 1 function fields over the rational numbers. 
The other lesson I learned when working on <a href="http://trac.sagemath.org/sage_trac/ticket/9541">Trac 9541</a> is that it is orders of magnitude easier to delete all code related to using NTL for number fields than to try to simultaneously support several different models for number fields.<br /><br /><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;"><b>Function Fields</b></div>Continuing clockwise, the next cluster of algorithms involves function fields of transcendence degree 1 over the prime subfield, i.e., function fields of algebraic curves. Function fields are important for my research because an application of explicit Riemann-Roch spaces and divisor reduction is a key step in completing a paper on torsion points on elliptic curves over quartic fields that I'm writing with Kamienny and Stoll. Function field arithmetic, rings, ideals, polynomial factorization, norms, traces, ``Popov reduction'', etc., are the subject of <a href="http://trac.sagemath.org/sage_trac/ticket/9054">Trac 9054</a>, which should be relatively easy to finish. The next step is to implement <a href="http://www.math.tu-berlin.de/~hess/">Florian Hess</a>'s algorithm for Riemann-Roch spaces, by following the description in Florian's papers and talks on the topic. Next, I'll have to implement divisor reduction and computation of maximal orders (normalization of curves). I suspect that <a href="http://w3.uwyo.edu/~chall14/">Chris Hall</a> will help. 
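To make the representation concrete: an element of an absolute number field is just a polynomial over Q reduced modulo the (monic) minimal polynomial, and both the NTL- and FLINT-backed implementations are fast versions of the following pure-Python sketch. The function name is mine, and real implementations clear denominators and use asymptotically fast polynomial multiplication rather than this schoolbook loop.

```python
from fractions import Fraction

def poly_mul_mod(f, g, m):
    """Multiply field elements f, g, given as lists of Fractions
    (lowest degree first), and reduce modulo the monic minimal
    polynomial m.  This is multiplication in Q[x]/(m(x))."""
    prod = [Fraction(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            prod[i + j] += a * b
    # reduce: repeatedly replace the leading term using x^deg(m) = -(lower terms of m)
    deg = len(m) - 1
    while len(prod) > deg:
        lead = prod.pop()
        for k in range(deg):
            prod[len(prod) - deg + k] -= lead * m[k]
    return prod

# K = Q(sqrt(2)): minimal polynomial x^2 - 2, i.e. m = [-2, 0, 1]
m = [Fraction(-2), Fraction(0), Fraction(1)]
a = [Fraction(1), Fraction(1)]   # the element 1 + sqrt(2)
sq = poly_mul_mod(a, a, m)       # (1 + sqrt(2))^2 = 3 + 2*sqrt(2)
```

For relative extensions the same picture applies with the base field Q replaced by another number field, which is why the Singular function-field interface mentioned above does the job.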
Parallel to this, it is also a good idea to use the same code as in the number field case to make basic arithmetic in certain function fields much faster than the generic code of <a href="http://trac.sagemath.org/sage_trac/ticket/9054">Trac 9054</a>.<br /><br /><b>Elliptic Curves over Function Fields</b><br /><div class="separator" style="clear: both; text-align: left;">I am also interested in code for computing with elliptic curves that are defined over function fields (usually over finite fields). This is related to work of Chris Hall, Sal Baig, and others at recent Sage Days on function fields.</div>This code will make it easier to computationally explore function field analogues of new ideas related to the Gross-Zagier formula and the BSD conjecture. The main goals are to implement algorithms for computing every quantity appearing in the Birch and Swinnerton-Dyer conjecture, including torsion, Tamagawa numbers, <i>L</i>-functions, Mordell-Weil groups (2-descent), and regulators. Also, I need to implement code for doing computations with Drinfeld modules, since Drinfeld modules provide the analogue of modular curves and Heegner points in the function field setting.William Steinhttps://plus.google.com/115360165819500279592noreply@blogger.com13tag:blogger.com,1999:blog-8172331420765124703.post-84058762210102435842010-08-20T23:48:00.000-07:002010-08-21T09:08:06.928-07:00Kolyvagin's Conjecture: a status reportDuring the Summer of 2009, I read some papers of Cornut, Vatsal, and Jetchev-Kane that were motivated by Mazur's conjecture about Heegner points. Meanwhile, I was thinking about Kolyvagin's conjecture on nontriviality of his Euler system of Heegner points. This is a conjecture about any elliptic curve <i>E</i> over <b>Q</b> along with any prime p and a fixed choice of quadratic imaginary field K that satisfies the Heegner hypothesis (i.e., each prime dividing the conductor of <i>E</i> splits in <i>K</i>). 
The conjecture is simply that one of Kolyvagin's cohomology classes in H^1(<i>K</i>, <i>E</i>[p^n]) is nonzero, for some <i>n</i>. When <i>E</i> has analytic rank 1 over <i>K</i>, this conjecture holds for the class tau_1 for <i>n</i> sufficiently large, by the Gross-Zagier formula. <br /><br />Three years ago, I wrote <a href="http://wstein.org/papers/kolyconj/">a paper</a> with Dimitar Jetchev and Kristen Lauter where we gave *numerical evidence* that Kolyvagin's conjecture holds for the curve 389a with <i>p=3</i>, <i>K</i>=<b>Q</b>(sqrt(-7)), n=1, and tau_5!=0. But we didn't *prove* that tau_5 is nonzero. Moreover, our numerical approach was so inefficient that we couldn't have done many more examples. Nonetheless, this was good news -- at least there was evidence that Kolyvagin's construction doesn't completely degenerate for curves of rank > 1. <br /><br />Last summer, when reading Cornut, Jetchev-Kane, etc., I figured out how to make some of what they do explicit and machine computable, and I implemented code. This was several weeks of focused programming work in <a href="http://sagemath.org/">Sage</a>, starting from scratch, partly because I refuse to use the closed <a href="http://magma.maths.usyd.edu.au/magma/">Magma</a> math software system (which had maybe half of what I would have needed).<br /><br />Anyway, running all this code for a week on my (big, NSF-funded!) cluster, and also on Jon Hanke's at the Univ of Georgia, resulted in provable verification of Kolyvagin's conjecture for around 100 triples <i>(E,K,p)</i> with <i>E</i> of rank 2, and <i>K</i> of class number 1. I am right now working on <a href="http://wstein.org/papers/kolyconj2/">polishing the write-up</a> of this for publication (I plan to submit the paper sometime in the next month). 
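For concreteness, the primes ell entering Kolyvagin's construction for (E, K, p) = (389a, Q(sqrt(-7)), 3) with n=1 can be enumerated by brute force. The sketch below is mine, not code from the paper: it counts points naively (fine for tiny ell) on the standard model y^2 + y = x^3 + x^2 - 2x of 389a, and filters primes that are inert in K with p dividing gcd(ell + 1, a_ell). The smallest prime it finds is ell = 5, consistent with the class tau_5 above.

```python
def ap(ell):
    """a_ell = ell + 1 - #E(F_ell) for E = 389a1: y^2 + y = x^3 + x^2 - 2x,
    by brute-force point counting (fine for small ell)."""
    count = 1  # the point at infinity
    for x in range(ell):
        rhs = (x**3 + x**2 - 2*x) % ell
        for y in range(ell):
            if (y*y + y) % ell == rhs:
                count += 1
    return ell + 1 - count

def kolyvagin_primes(bound, p=3, N=389, D=-7):
    """Odd primes ell < bound, coprime to N*D*p, inert in K = Q(sqrt(D)),
    with p | gcd(ell + 1, a_ell).  We start at 3, since 2 splits in
    Q(sqrt(-7)) (because -7 = 1 mod 8)."""
    primes = [q for q in range(3, bound) if all(q % r for r in range(2, q))]
    out = []
    for ell in primes:
        if (N * D * p) % ell == 0:
            continue
        inert = pow(D % ell, (ell - 1) // 2, ell) == ell - 1  # Euler's criterion
        if inert and (ell + 1) % p == 0 and ap(ell) % p == 0:
            out.append(ell)
    return out
```

For example, `kolyvagin_primes(30)` begins with 5, since 5 is inert in Q(sqrt(-7)) and a_5 = -3 for 389a, so 3 divides both 5 + 1 and a_5.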
<br /><br />At <a href="http://wiki.sagemath.org/days17">a Sage Days</a> we held out on Lopez in late summer 2009, <a href="http://www.math.ucla.edu/%7Ejared/">Jared Weinstein</a> got interested in this project, and with some input from <a href="http://www.math.uci.edu/%7Ekrubin/">Karl Rubin</a> about how to apply Kolyvagin systems, we figured out how to compute the distribution of Kolyvagin classes, which massively clarified the numerical data I had obtained. He also found a completely different *conditional* algorithm to compute Kolyvagin classes: when I implemented it, the results exactly matched what I got with my algorithm the previous month: nice.<br /><br />In December 2009, at <a href="http://wiki.sagemath.org/dayscambridge2">another Sage Days</a> at the <a href="http://www.claymath.org/">Clay Math Institute</a> in Cambridge, MA, Jennifer Balakrishnan became interested in the project and ran my code (with some modifications) to verify Kolyvagin's conjecture for the rank 3 curve 5077a and <i>p=3</i>. The relevant quadratic imaginary field <i>K</i> had larger class number, so we had to improve the theory a bit. The computer calculations also took many hours. Meanwhile, Jared Weinstein came up with some preliminary rank 3 analogues of his results from the previous summer, which gave Jen and me confidence that our rank 3 verification of Kolyvagin's conjecture was correct. We also plan to write a paper about this soon, but still haven't really done much (Jen has though, actually).<br /><br />During the next few months (early 2010) Jared worked on refining and generalizing his density arguments, with some input from me. I also finally read the <a href="http://math.stanford.edu/%7Erubin/preprints/kolysys.pdf">Mazur-Rubin book on Kolyvagin systems</a>. 
<br /><br />In June, at <a href="http://wiki.sagemath.org/days22">Sage Days 22 at MSRI</a>, Jared ran a very coherent 2-week minicourse on Kolyvagin and had several students do projects making explicit some subtle cases of computation with Kolyvagin's Euler system when p=3. They were forced to use Magma, since they had to adapt some general 3-descent code, and there is only one implementation of 3-descent in the world: Michael Stoll's in Magma. (My student <a href="http://www.rlmiller.org/">Robert Miller</a> started along the path to a free open implementation last year, but we decided he should finish his thesis first. That said, it will be a great contribution to math when somebody finally implements a free 3-descent.) Anyway, I think their Magma computation with a modified 3-descent led to some great new data related to Kolyvagin's Euler system, in the case when Sha(<i>E/K</i>)[3] is nontrivial.<br /><br />There are a few other related directions that I haven't mentioned above. First, it is possible to use similar techniques to verify the natural analogue of Kolyvagin's conjecture for modular abelian varieties of larger rank, and I have done this for a few examples. It would also be interesting to extend the Stein-Weinstein density business to this case. There is probably an analogue of everything for modular motives (in the sense of Scholl) and maybe <a href="http://www.math.ucla.edu/%7Ekhopkins/">Kimberly Hopkins</a> would have some interest in this.<br /><br />Another interesting direction to pursue is totally real fields. Around 2001, <a href="http://www.math.columbia.edu/%7Eszhang/papers/Heegner01.pdf">Zhang generalized some of the Gross-Zagier formula</a> and also some of Kolyvagin's Euler system to this setting. A natural Ph.D. (?) project is to generalize enough to state an analogue of Kolyvagin's conjecture, then verify the conjecture in at least one case of a curve of rank 2. 
This will inevitably involve Hilbert modular forms, and certain explicit computations with quaternion algebras over a totally real field. Currently all existing code in this direction is in Magma, so step 1 is to understand enough of the recent work of Voight and Dembélé to implement open source code in Sage (it appears neither Dembélé nor Voight has any plans to do this, unfortunately). But for this project doing only one single verification of Kolyvagin's conjecture means that one doesn't have to write very general code, so this should be much easier than a general implementation (which would likely take at least a year of full-time work). Probably my Ph.D. student Aly Deines will take on this project.<br /><br />I will close this blog post with some remarks about the whole point of this line of investigation. I first learned about Kolyvagin's conjecture from Christophe Cornut back in 2001 when we were both B.P. Assistant Professors at Harvard together. At the time, I think Christophe was perhaps trying to prove the conjecture, maybe using similar tools (e.g., Ratner's theorem) to what he used to <a href="http://www.math.jussieu.fr/%7Ecornut/papers/mcinv_published.pdf">prove Mazur's conjecture</a> on nontriviality of Heegner points. I only wondered about verifying the conjecture in a few cases later, in 2007, since I started wondering about BSD for rank 2 curves, and it is natural to wonder whether Kolyvagin's approach just falls apart for rank 2. (Answer: it doesn't!) Anyway, I stumbled on a way of checking the conjecture computationally in specific cases, and this has led to some potentially interesting questions and results that explain the computations. Also, one gets a new algorithm to compute something about Selmer groups, which applies in some cases where <i>p</i>-adic techniques don't, e.g., at additive primes.<br /><br />I personally do <i>not</i> think anything mentioned above will prove Kolyvagin's conjecture in general. I.e. 
the above techniques are very unlikely to say anything much about the question that Cornut was attacking back in 2001. I published in IMRN <a href="http://wstein.org/papers/stein-ggz/">a (fairly naive and formal!) conjectural analogue of the Gross-Zagier formula for arbitrary rank</a>. This is in a totally different direction from the work of Zhang, Yang, etc. generalizing Gross-Zagier. A "suitable form" of this conjectural formula does imply Kolyvagin's conjecture (and the full Birch and Swinnerton-Dyer rank conjecture). I personally think the most likely way in which Kolyvagin's conjecture will be proved is that somebody will prove something nontrivial toward the above-mentioned generalization of Gross-Zagier.William Steinhttps://plus.google.com/115360165819500279592noreply@blogger.com0tag:blogger.com,1999:blog-8172331420765124703.post-76043081428204534452010-02-11T15:56:00.001-08:002010-02-11T15:56:57.663-08:00Tamagawa numbers and component groups at additive primesI had an idea while driving back from snowboarding today that could lead to a nice paper if worked out. The problem is to find an algorithm to compute Tamagawa numbers of optimal newform modular abelian varieties $A$ at primes~$p$ of additive reduction, say quotients of $J_0(N)$ with $p^2\mid N$.<br /><br />Idea. The composition $A^{\vee} \to J_0(N) \to A$ is a morphism with computable degree (basically the modular degree). We can compute the component group of $J_0(N)$ explicitly as a $\mathbf{T}$-module, I think, by work of Raynaud. Using functoriality of the component group, we can thus probably in many cases locate the prime-to-the-modular-degree part of the image of the component group of $A$ in $J_0(N)$. Since $p$ is an additive prime, there is a pretty tight bound on the primes that can divide the component group's order, so in some (many?) 
cases this may already be enough to nail down the component group.<br /><br />When it isn't we can also look for congruences with lower level, and if there aren't any, then conclude in many cases (when representation is irreducible) by Ribet's theorem that those primes don't divide the component group.<br /><br />Another thought is that we should be able to use the modular degree and computation of the component group of $J_0(N)$ at $p$ to give a bound on the powers of primes that can divide $c_{A,p}$, which is also new. <br /><br />I'm sure putting together everything above would (a) yield some very interesting tables, and (b) be easily publishable as a paper.William Steinhttps://plus.google.com/115360165819500279592noreply@blogger.com2tag:blogger.com,1999:blog-8172331420765124703.post-35089737625173733622008-12-17T20:29:00.000-08:002008-12-17T20:46:58.945-08:00Hermite forms and torsion pointsI spent most of the day polishing off a paper called <i>Heuristically Fast Computation of Hermite Normal Forms of Random Integer Matrices</i> with Clement Pernet and Robert Bradshaw. It's about a very fast algorithm Clement and I came up with (building on lots of existing work!) to compute Hermite forms. Some interesting things happened -- first, of course, we didn't completely finish the paper today. Second, Robert nailed down a very serious but "silly" efficiency bug in our current Sage implementation that Craig Citro had introduced, which made it VASTLY slower than it should be (I had stupidly positively reviewed that patch, too). Third, as with writing any paper, we had to carefully prove several key points in the algorithm that we had brushed aside before, and some of these were quite surprising to me. For example, in computing the Hermite form of a square nonsingular matrix with determinant d, one should in fact work modulo d, not 2*d as we had done before (we had blindly followed what Allan Steel says he does in Magma, which is actually overkill). 
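To make the Hermite form discussion above concrete, here is a naive pure-Python sketch of row Hermite normal form via unimodular row operations. This is *not* the fast heuristic algorithm of the paper (no modular arithmetic, no determinant trick); it is just the textbook baseline that working modulo the determinant d improves on, and the function names are mine.

```python
def xgcd(a, b):
    """Extended Euclid: return (g, x, y) with a*x + b*y = g."""
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

def hnf(A):
    """Row Hermite normal form of a square nonsingular integer matrix:
    upper triangular, positive pivots, entries above each pivot reduced."""
    A = [row[:] for row in A]
    n = len(A)
    for i in range(n):
        # zero out column i below the pivot with determinant-1 row operations
        for k in range(i + 1, n):
            if A[k][i] == 0:
                continue
            g, x, y = xgcd(A[i][i], A[k][i])
            u, v = A[i][i] // g, A[k][i] // g  # note x*u + y*v == 1
            A[i], A[k] = (
                [x * A[i][j] + y * A[k][j] for j in range(n)],
                [-v * A[i][j] + u * A[k][j] for j in range(n)],
            )
        if A[i][i] < 0:
            A[i] = [-a for a in A[i]]  # make the pivot positive
        for k in range(i):  # reduce entries above the pivot into [0, pivot)
            q = A[k][i] // A[i][i]
            A[k] = [A[k][j] - q * A[i][j] for j in range(n)]
    return A

H = hnf([[2, 3, 5], [4, 7, 9], [1, 1, 2]])  # det = -2, so the pivots multiply to 2
```

Since all row operations here have determinant ±1, the product of the pivots of H equals |det A|, which is the invariant the modular algorithm exploits.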
We're running a bunch of benchmarks of Sage versus Magma on one of my new Sun 24-core 128GB RAM boxes, which is very fun. If our paper is sufficiently well written, Allan will referee it and the next version of Magma will be at least as fast as our implementation, since we explain all of our tricks.<br /><br />I also talked with Ralph Greenberg for a while today about the field Q(E[ell^n]) where E is an elliptic curve over Q, and we assume that the ell-adic representation is surjective. We used group and Galois theory to compute all the abelian quotients of the Galois group G of Q(E[ell^n]). One deduces that the only abelian quotient is (Z/ell*Z)^*, hence the maximal abelian subfield of Q(E[ell^n]) is Q(zeta_ell). This can be used along with the Chebotarev density theorem to prove that if P is a point on E(Q) of infinite order and m is an integer divisible only by primes ell such that the ell-adic representation is surjective, then there are infinitely many primes p such that the reduction of P modulo p has order divisible by m. And that's interesting, since it's a step in a paper I'm writing about a generalization of the Gross-Zagier formula to higher rank. There are other ways to prove the above reduction statement, I think in more generality, which I thought through last night. But I was just very curious about the structure of G, which is why Ralph and I talked it through. 
The basic idea for understanding it is to use a few tricks, including that PSL_2 is simple, that there is always a square root of the matrix -1 in SL_2(F_ell), and that the kernel of reduction from SL_2(Z/ell^nZ) to SL_2(F_{ell}) is isomorphic to trace zero matrices with the conjugation action, hence a simple representation.William Steinhttps://plus.google.com/115360165819500279592noreply@blogger.com4tag:blogger.com,1999:blog-8172331420765124703.post-60731017168988630822008-11-21T14:29:00.000-08:002008-11-21T14:40:39.807-08:00Modular degrees, congruence moduli, and the Heegner-Kolyvagin subgroup of Mordell-WeilToday in the advanced number theory reading seminar, Robert Bradshaw talked about the algorithm of Delaunay-Watkins for computing modular degrees using L-functions. In particular, he talked about some of the background definitions involving the symmetric square L-function and the Petersson norm. Craig Citro (in the audience) kept bringing up the adjoint L-function, since it is very relevant to his Ph.D. thesis work. There is a formula for L(Sym^2(f),2) that involves the modular degree, and there is a formula that involves L(Ad(f),1) and the congruence modulus. There's a paper of Agashe-Ribet-Stein (me) that proves that the modular degree m divides the congruence modulus c for elliptic curves, and we conjecture that 2*ord_p(c/m) <= ord_p(N), where N is the conductor of the curve. Ken Ribet in fact proved this when ord_p(N) <= 1, but none of us have made any progress when ord_p(N) >= 2. I'm very curious if rephrasing these divisibilities as relations about L-functions yields any new insight either about L-functions or the conjecture? Or nothing? I have no idea yet. <br /><br />I also thought some about various ways of defining subgroups of Mordell-Weil groups of elliptic curves of arbitrary rank. 
The upshot is that I now have three <i>a priori</i> completely different definitions of a subgroup V of E(K), where K is a Heegner quadratic imaginary field, and I suspect that all three definitions give the same group. The first definition is the subgroup of E(K) that is in the kernel of all the maps to component groups and to the dual of all the groups Sha(E/K)(p^oo), where the dual is defined by lifting to the Selmer group and using the cup product. The second definition is the intersection of the sums of the inverse images of all subgroups of E(K_lambda)/M E(K_lambda) generated by Kolyvagin's points P_lambda. The third subgroup is the inverse image of the subgroups of J_0(N*lambda)(K)/M generated by the Heegner points x_K in J_0(N*lambda)(K).William Steinhttps://plus.google.com/115360165819500279592noreply@blogger.com2tag:blogger.com,1999:blog-8172331420765124703.post-49540179469554837282008-11-19T21:52:00.000-08:002008-11-19T22:15:27.397-08:00Gross Zagier, Hilbert Class Polynomials, and Rank 2 Curves with 3 ShaToday I gave a talk on the Gross-Zagier formula in <a target="_new" href="http://wiki.wstein.org/adv-nt-uw">this seminar</a>. I stated the main formula over the Hilbert class field, then explained exactly how to use it to deduce a formula over the quadratic imaginary field K. The proof involves using a summation trick and projecting onto eigencomponents of the Hecke algebra. I also talked about the relationship between L(f,chi,s), L_A(f,s), and the ranks of the eigencomponents of J(H)_C.<br /><br />Right after my talk, several of us went to Microsoft Research, where we had lunch, then Drew Sutherland gave a <i>superb</i> talk on his very impressive multi-modular algorithm for computing Hilbert class polynomials modulo a single cryptographic-sized prime P. 
The main advantages of his multi-modular method are that it is very easy to parallelize, that it is extremely memory efficient compared to all other known approaches, and that in several situations in the algorithm he can lower the constant in the time complexity by a large factor by using explicit models for X_1(N) (and some other tricks). Very nice.<br /><br />When I got home, Mark Watkins had sent me a table of 630 elliptic curves of prime conductor and rank 2 that have Shafarevich-Tate groups of order divisible by 9 (see <a target="_new" href="http://sagenb.org/home/pub/79">this Sage worksheet</a>). The smallest-conductor example E we know of is y^2 + x*y = x^3 - x^2 + 94*x + 9, which has conductor "only" 53295337. The field K=QQ(sqrt(-7)) satisfies the Heegner hypothesis, and E^K has rank 1. Also, an hour-long computation shows that Sha(E^K)_an = 1. I double-checked that indeed Sha(E/QQ)_an = 9. Of course the torsion subgroup and Tamagawa numbers are trivial (this follows automatically from the Neumann-Setzer classification). So one expects that the Heegner-Kolyvagin subgroup [which I've never "publicly defined"] of E(K)=Z x Z x Z is a subgroup of index 3 = sqrt(#Sha). I wonder whether it is possible to somehow compute -- at least conjecturally(!) -- which subgroup it is. Possibly, yes, by finding the subgroup of E(QQ)/3 E(QQ) that maps to a subgroup of Sel^(3)(E/QQ) that is orthogonal to the inverse image of Sha(E/Q)[3]. I wonder whether, complexity-wise, such a computation is impossibly hard or actually reasonable.William Steinhttps://plus.google.com/115360165819500279592noreply@blogger.com2tag:blogger.com,1999:blog-8172331420765124703.post-34568926951365779392008-11-12T17:35:00.000-08:002008-11-12T17:52:38.772-08:00L-functions: a seminar and the Gross-Zagier L-functionToday we had the first meeting of the <a href="http://wiki.wstein.org/adv-nt-uw">Advanced Grad Student Number Theory Research Reading Seminar at UW</a>. 
The participants are me, Robert Miller, Robert Bradshaw, and Craig Citro, and we intend to talk about papers of Dokchitser, Gross-Zagier, Watkins, Delaunay, Kartik, Bosman, Kolyvagin, Ghate, and Hida. The unifying theme is mostly explicit applications of L-functions and modular forms to very hard problems in algebraic number theory. We first had an organizational meeting and decided what we'll be talking about, then I gave a talk on Dokchitser's paper (basically following the talk I gave at UT Austin a few days ago, but greatly abbreviated due to lack of time). <br /><br />After that, I went out to lunch with Tom Boothby, Sourav San Gupta, Craig Citro, and Robert Bradshaw at Cedars, where we talked about a reading course Boothby and Gupta are doing on algebraic number theory (basically, exercises from Marcus's book). <br /><br />Next, Robert Bradshaw, Craig Citro, and I sat down in my office and figured out precisely how to use Dokchitser's algorithm to compute the Gross-Zagier L-function L_A(f,s). This involved using the Legendre duplication formula (page 217 of Ahlfors), explicitly computing the representation numbers r_A(n) using quadratic forms (Robert Bradshaw has a super-fast Cython implementation of this, using careful bounds on the ellipse one enumerates over, which is 20 times faster than Magma's), and explicitly computing the product of two Dirichlet series as a Dirichlet series, which again Robert coded up very quickly by noting that one of the series is very sparse. Then we got all this to work via the Sage/Dokchitser interface, and it computed the power series of L_A(f,s) at 1 for 37a and D=-40. Robert will be making this all much more systematic as a patch to Sage, and will extend it to work for any newform of any weight or level (for Gamma0(N)), so that we can do many systematic computations. 
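Both ingredients can be sketched in a few lines of plain Python (a toy version of my own, not Robert's Cython code; the diagonal forms below are the two reduced forms of discriminant -40): representation numbers by enumerating lattice points inside the ellipse, and the Dirichlet-series product exploiting sparsity of one factor.

```python
from math import isqrt

def rep_numbers(a, c, N):
    """r(n) = #{(x, y) in Z^2 : a*x^2 + c*y^2 = n} for n = 0..N,
    enumerating lattice points inside the ellipse a*x^2 + c*y^2 <= N."""
    r = [0] * (N + 1)
    for y in range(-isqrt(N // c), isqrt(N // c) + 1):
        rest = N - c * y * y
        for x in range(-isqrt(rest // a), isqrt(rest // a) + 1):
            r[a * x * x + c * y * y] += 1
    return r

def dirichlet_product(A, B):
    """Coefficients of the product Dirichlet series, iterating only over
    the nonzero coefficients of the (sparser) first series."""
    N = min(len(A), len(B)) - 1
    C = [0] * (N + 1)
    for n in range(1, N + 1):
        if A[n] == 0:
            continue  # exploit sparsity of the first factor
        for m in range(1, N // n + 1):
            C[n * m] += A[n] * B[m]
    return C

rA = rep_numbers(1, 10, 30)   # principal form x^2 + 10y^2, discriminant -40
rB = rep_numbers(2, 5, 30)    # the other class, 2x^2 + 5y^2
ones = [0] + [1] * 30         # coefficients of zeta(s)
print(rA[:15])
print(dirichlet_product(ones, ones)[12])   # d(12) = 6
```

For example r_A(11) = 4, counting (x,y) = (±1,±1), and zeta(s)^2 correctly produces the divisor function.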
In particular, I'm very interested in what happens when ord_{s=1} L_A(f,s) >= 3 (it's always odd), and also in what happens when f isn't attached to an elliptic curve.William Steinhttps://plus.google.com/115360165819500279592noreply@blogger.com0tag:blogger.com,1999:blog-8172331420765124703.post-86893457006282031182008-11-08T15:59:00.000-08:002008-11-08T16:49:00.106-08:00Dokchitser's Algorithm for Computing L-functionsOn the trip Thursday from Seattle to Austin, I read <a href="http://arxiv.org/abs/math/0207280">Tim Dokchitser's paper "Computing special values of motivic L-functions"</a>. This is Tim's oldest paper <a href="http://www.dpmms.cam.ac.uk/~td278/">listed here</a>, and some of his other papers build on it. It describes a very general algorithm for computing "anything" numerical about motivic L-series, which are Dirichlet series with meromorphic continuation, a functional equation, etc. We are implementing this algorithm in Sage right now, since it's potentially very important to many things I care about:<br /><br /><ol><br /> <li> The Birch and Swinnerton-Dyer, Bloch-Kato, and Stark conjectures<br /> <li> Heegner heights: Gross-Zagier + Zhang; computation of twisted L-series by characters of class groups, and also computing Rankin-Selberg convolutions. <br /> <li> Computing conductors of curves, and local factors at primes of bad reduction<br /> <li> Computing the Petersson pairing by computing special values of symmetric square L-functions, and getting explicit Poincaré series from this. Also, the related problem of computing modular degrees of modular abelian varieties and modular motives analytically.<br /></ol><br /><br />Dokchitser's algorithm is already implemented (by him) in GP/PARI and Magma, but a highly optimized implementation done directly in Sage would be extremely desirable. Nick Alexander, Jen Balakrishnan, and Sourav San Gupta have all worked on this, but their implementation is still rough and has bit rotted. 
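The simplest instance of this machinery, the one I keep in mind, is an elliptic curve L-function at s = 1, where the exponentially smoothed functional equation reduces to the classical formula L(E,1) = (1+w) * sum_{n>=1} (a_n/n) * exp(-2*pi*n/sqrt(N)). This is the standard special case (as in Cremona's book), not Dokchitser's general G_s(t) machinery; the code below is my own toy sketch for the curve 11a, y^2 + y = x^3 - x^2 - 10x - 20, with root number w = +1, computing a_p at good primes by naive point counting (a_11 = 1 is hard-coded; 11 is the bad prime, with split multiplicative reduction).

```python
from math import exp, pi, sqrt

N = 11  # conductor of 11a: y^2 + y = x^3 - x^2 - 10x - 20

def count_ap(p):
    """a_p = p + 1 - #E(F_p), counting affine points plus infinity (good p)."""
    pts = 1  # the point at infinity
    for x in range(p):
        for y in range(p):
            if (y*y + y - (x**3 - x**2 - 10*x - 20)) % p == 0:
                pts += 1
    return p + 1 - pts

def an(n, cache={1: 1, 11: 1}):   # a_11 = 1 hard-coded (split multiplicative)
    """Dirichlet coefficients a_n via multiplicativity and the Hecke recursion."""
    if n in cache:
        return cache[n]
    p = next(q for q in range(2, n + 1) if n % q == 0)  # smallest prime factor
    k, m = 0, n
    while m % p == 0:
        k, m = k + 1, m // p
    if m > 1:
        val = an(p**k) * an(m)                 # a_n is multiplicative
    elif p == N:
        val = an(p) ** k                       # bad prime: a_{p^k} = a_p^k
    elif k == 1:
        val = count_ap(p)
    else:
        val = count_ap(p) * an(p**(k-1)) - p * an(p**(k-2))  # Hecke at good p
    cache[n] = val
    return val

# Root number w = +1, so L(E,1) = 2 * sum a_n/n * exp(-2 pi n / sqrt(N)).
L1 = 2 * sum(an(n) / n * exp(-2 * pi * n / sqrt(N)) for n in range(1, 60))
print(L1)   # about 0.2538, matching L(11a, 1)
```

The exponential smoothing makes the sum converge extremely fast: sixty terms already give L(11a, 1) to many digits.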
We already use the GP/PARI version in Sage a lot, but the main motivation for doing a new implementation is that it will be more flexible with respect to the above problems, and we can potentially optimize it. <br /><br />Speaking of speed, I tested PARI versus Magma's Dokchitser implementation for the elliptic curve [1,2,3,4,5], and PARI was 2 to 6 times faster on everything I tried for that L-series. Somebody conjectured that this might be because Dokchitser knows PARI better than Magma, or because PARI has more relevant low-level data structures that he could use. <br /><br /><br />Today I gave a talk on Dokchitser's algorithm, focused on explaining the key "theoretical" ideas of the algorithm, without getting bogged down in details. In my talk, I discussed the applications of Dokchitser's algorithm, then wrote down his formula for L^*(s) in terms of G_s(t) and the a_n, and emphasized that the G_s(t) depend only on the gamma factor. Then I said some words about how to compute G_s(t), and finally I gave almost <i>all</i> the gory details of the derivation of the formula for L^*(s) in terms of G_s(t) and the a_n. I was surprised to end up giving all the details. Mike Rubinstein and Fernando Rodriguez-Villegas both made a lot of helpful comments during the talk.William Steinhttps://plus.google.com/115360165819500279592noreply@blogger.com1tag:blogger.com,1999:blog-8172331420765124703.post-71581929836937919592008-11-02T23:02:00.000-08:002008-11-03T16:18:48.398-08:00Gross-Zagier: Heegner Points and Derivatives of L-seriesToday, after answering at least a dozen emails from new users about basic design decisions in Sage, I read chapters II and V of Gross and Zagier's seminal paper "Heegner Points and Derivatives of L-series" (at the Vivace coffee shop by REI in downtown Seattle). Chapter II is a very technical derivation of a formula for local archimedean heights of pairings of certain Heegner points. 
It mainly involves a careful construction of a certain "resolvent kernel function" G by a limiting process, deriving relations between that function and the local archimedean contributions to the height, and dealing with various cases. The function G looks like it might be computable, though I've never heard of anybody computing it; there are some related examples in the last chapter of the paper. <br /><br />I only skimmed chapters III and IV, which I'm saving for later, since they're 50 pages of very technical arguments. <br /><br />Chapter V is extremely exciting, since it ties together the ideas from the previous chapters to finally relate heights of Heegner points to L-values. The basic idea is that, using Rankin's method, Gross and Zagier write down a "horrendous" formula for the coefficients of a certain modular form that encodes the linear functional sending a newform f to the value at 1 of the derivative of a certain L-series. They also use the function G mentioned above and explicit computation with local heights to write down another horrendous formula for height pairings of Heegner points. They then observe in Chapter V that the formulas are the same! (And, yes, they include an excited exclamation point at that point in the paper.) The rest of Chapter V involves deducing the statements that originally motivated Gross's interest in this problem (e.g., the conjectures of Birch related to the BSD conjecture), an application to weight 1, and some ruminations on higher weight analogues that involve Grothendieck motives attached to higher weight modular forms, though Gross and Zagier speak only of Deligne's cohomology theory, since this paper was written before Scholl's paper on motives attached to modular forms. Many of these later deductions will be very fun to generalize to higher derivatives, since they're basically formal applications of their main formula. 
<br /><br />Idea/Question: Something I've long wondered about is whether there is a way to compute the *conjectural* order of Sha and the regulator for modular abelian varieties A over the rational numbers with positive rank. Reading Chapter V of Gross-Zagier makes me think there is in the common case when rank(A) = dim(A), which would be <i>incredibly useful</i> for making a large table that generalizes Cremona's book to dimension bigger than 1. Here's the very roughly sketched idea. Given a newform f of degree > 1 over the rational numbers, there is a corresponding abelian variety A=A_f of dimension d > 1. It is fairly easy to compute L(A,s), and even the leading coefficient L^(d)(A,1) and the BSD volume Omega_A. Assuming A has rank d, according to the BSD conjecture, L^(d)(A,1)/Omega_A is a nonzero rational multiple alpha of the regulator. I wonder whether one can get any information at all -- even conjecturally -- about this multiple by computing something explicit involving the various constructions in the Gross-Zagier paper. Some sort of calculation using Gross-Zagier should give the regulator of the subgroup generated by the Heegner point and its Galois conjugates, which will be a square multiple of the true regulator. Comparing that with what BSD gives for Reg*Sha, throwing in knowledge of (or bounds on) the Tamagawa numbers and torsion, and using an analogue of the conjecture in Section 5 (which expresses the index of the Heegner subgroup in the full group of rational points as the square root of a quantity involving the torsion, Tamagawa numbers, Manin constant, and order of Sha), should then yield a conjectural formula for Sha. This would take working out things not given explicitly in Gross-Zagier in higher dimension, but I think it would be doable. It will definitely require explicitly computing something about the p-divisibility or indices of Heegner points in A(K), possibly using ideas from Johan Bosman's Ph.D. thesis. 
<br /><br />I talked more with Robert Bradshaw about all of the above, and our conclusion was that a first interesting problem is to figure out precisely what replaces the elliptic curve index [E(K):Z y_K] when E is replaced by A=A_f. Maybe [A_f(K) : T y_K], where T is the Hecke algebra? But then sqrt(#Sha(E))*prod c_p * Torsion(E) has to be replaced by something else, since #Sha(A) might not be a perfect square.William Steinhttps://plus.google.com/115360165819500279592noreply@blogger.com1tag:blogger.com,1999:blog-8172331420765124703.post-34096415774224618442008-10-31T22:02:00.000-07:002008-10-31T22:10:45.807-07:00Annals, Crypto, and Modular FormsThis morning I read Iwasawa's original paper where he introduces p-adic L-series of Dirichlet characters. I basically read this on a whim, since I saw it in a friend's collection of scans. <br /><br />I also skimmed through all the number theory related papers published by the Annals of Mathematics during the last few years. Probably the most interesting was Kartik Prasanna's paper on integrality of Petersson norms (vol. 163, 2006, 901-967), in which he remarks "I am grateful to my advisor Andrew Wiles for suggesting the problem mentioned above and for his guidance and encouragement. The idea that one could use Iwasawa theory to prove the integrality of the Rankin-Selberg L-value is due to him and after his oral explanation I merely had to work out the details." Very humble. Another very nice paper is the fourth in Manjul Bhargava's sequence of papers about parametrizing quintic rings. There's also a nice paper by H. Darmon about his p-adic version of Heegner points.<br /><br />After that I read through half of my paper on generalizing the Gross-Zagier formula, and marked it up with comments. <br /><br />Then I met with some "Dutch crypto guys" from Microsoft Research and had lunch. 
After that I went to Clement Pernet's class, where he explained Wiedemann's algorithm for computing minimal polynomials of pairs (A,v), where A is a sparse matrix and v is a vector. <br /><br />Next I went to the new Number Theory and Computation seminar that Dan Shumow and I started, and saw a nice talk by Dustin Moody on how pairings provide new techniques for algebraic crypto. Then Craig Citro gave an exciting and erudite talk on enumeration of mod p modular forms, which develops in great detail some ideas I discussed with him at Sage Days 7 at IPAM. I gave Craig a hard time during his talk though :-). <br /><br />Now I'm dressed up as a skateboarder for Halloween.William Steinhttps://plus.google.com/115360165819500279592noreply@blogger.com1tag:blogger.com,1999:blog-8172331420765124703.post-10358969480344534252008-10-30T08:44:00.000-07:002008-10-30T09:00:05.927-07:00Verification of the Birch and Swinnerton-Dyer Conjecture for Specific Elliptic CurvesI spent last night and this morning making changes to my paper <a target="_new" href="http://wstein.org/papers/bsdalg/">Verification of the Birch and Swinnerton-Dyer Conjecture for Specific Elliptic Curves</a>. I had been sitting on the referee report for several months, since there are four co-authors, and I naively hoped one of them would fix all the problems. <br /><br />This is a paper I wrote nearly four years ago, really as a summer student project: SAGE -- Summer of Arithmetic Geometry Experience -- which I ran at Harvard for a group of undergrads. Five undergrads there all independently requested to work with me on research over the summer, and this paper was one of the results of that summer. <br /><br />This paper was also the first application of <a href="http://sagemath.org" target="_new">the Sage Mathematics Software</a>. I tried to repeat some calculations from the paper when making improvements suggested by the referee, and was scared that I would find bugs that would prevent me from doing them. 
Indeed, within <i>a few seconds</i> I was disappointed to find <a href="http://trac.sagemath.org/sage_trac/ticket/4390">a showstopper bug</a>, which fortunately I was able to fix in a few minutes. <br /><br />The referee's remarks about the paper were almost all very useful; they definitely improved the quality of the paper a lot. <br /><br />The paper itself proves most of the BSD conjectural formula for elliptic curves of conductor up to 1000. With that paper done, the obvious next steps are:<br /><ol> <br /> <li> Mostly prove BSD for far more curves (say, up to conductor 130,000). This would be subtle and interesting, since all kinds of problems that don't quite happen for conductor up to 1000 would certainly happen here. <br /> <li> Completely prove BSD for all curves up to conductor 1,000, except the 18 curves of rank 2. This will rely on <a href="http://wstein.org/papers/shark/">my joint paper with Wuthrich</a>, which will make it almost completely straightforward, except perhaps in a handful of surprising cases. <br /> <li> Prove the prediction of BSD about Sha at a bunch of primes for curves of rank 2. This can also be done using my joint paper with Wuthrich.<br /></ol>William Steinhttps://plus.google.com/115360165819500279592noreply@blogger.com1tag:blogger.com,1999:blog-8172331420765124703.post-1903537714661869972008-10-29T18:54:00.000-07:002008-10-29T19:05:00.165-07:00A Reading Seminar, Schemes, and Petersson normsToday I had lunch with Craig Citro, Robert Miller, Robert Bradshaw, and Dan Shumow, and we mainly discussed starting an "advanced number theory reading seminar". Craig's the main impetus behind it, and it will mainly be about talking through papers like Gross-Zagier, etc.<br /><br />After that I met with Tom Boothby and Sourav San Gupta and defined schemes for them, and explained how to think of maps between rings of integers as morphisms of schemes, and factorization of primes in integer rings as computation of fibers. 
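A tiny illustration of the fibers-as-factorization picture (my own example, not from the meeting): for Z[i] = Z[x]/(x^2+1), the fiber of Spec Z[i] -> Spec Z over (p) is Spec F_p[x]/(x^2+1), so it is computed by factoring x^2 + 1 mod p.

```python
def fiber_over_p(p):
    """Describe the fiber of Spec Z[i] -> Spec Z over (p) by factoring
    x^2 + 1 in F_p[x]: two roots <-> p splits, a double root <-> p
    ramifies, no roots (irreducible) <-> p is inert."""
    roots = [x for x in range(p) if (x*x + 1) % p == 0]
    if len(roots) == 2:
        return "split"      # e.g. 5 = (2+i)(2-i): two points in the fiber
    if len(roots) == 1:
        return "ramified"   # only p = 2, where 2 = -i(1+i)^2: one fat point
    return "inert"          # (p) stays prime; one point, residue field F_{p^2}

for p in [2, 3, 5, 7, 11, 13]:
    print(p, fiber_over_p(p))
```

For odd p the answer depends only on p mod 4 (split if and only if p = 1 mod 4), which is the first nontrivial instance of how fibers encode splitting of primes.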
We also explicitly computed the morphism from y=x^2 to y=0 in terms of maps on affine rings.<br /><br />After that, I met with Robert Bradshaw (one of my Ph.D. students), and looked at a bunch of papers related to computing Petersson norms and special values of symmetric power L-functions. The problem we're looking at currently is: efficiently compute L(Sym^2(f), 2) for a general newform f, and compute the Petersson norm of f. This is "essentially done," as explained in (Delaunay, 2003), when the level of f is squarefree. When the level is not squarefree, things are more interesting, and we discussed at least two ideas for dealing with this more general case: (1) try all possibilities for the bad factors and see which leads to a consistent functional equation, or (2) compute the modular degree via modular symbols and linear algebra, and use a formula as in Cremona's or Zagier's papers on the modular degree. We also looked at an old paper of Rankin, which Kevin Buzzard sent us, that computes Petersson inner products for level 1 forms -- it is horrendous, handles only level 1, doesn't use symmetric square L-functions at all, and clearly has complexity at least O(N^3) (or worse). <br /><br />When reading through these papers, we ran into numerous cases where we couldn't get relevant references because scans are not available online (or cost $30 each, and our university subscription doesn't go back far enough), and I don't have a scan or photocopy, and of course the library is closed. It's extremely frustrating how bad the situation still is regarding access to the literature in number theory.William Steinhttps://plus.google.com/115360165819500279592noreply@blogger.com1
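As a concrete footnote to the symmetric square discussion (my own sketch, good primes only): if a_p = alpha + beta with alpha*beta = p, the local factor of L(Sym^2 f, s) at a good prime p has inverse roots alpha^2, alpha*beta = p, and beta^2, so its power-series coefficients satisfy a three-term recurrence governed by the elementary symmetric functions e1 = a_p^2 - p, e2 = p*a_p^2 - p^2, e3 = p^3. Using the curve 11a again to supply the a_p:

```python
def count_ap(p):
    """a_p for 11a: y^2 + y = x^3 - x^2 - 10x - 20, by point counting (good p)."""
    pts = 1  # point at infinity
    for x in range(p):
        for y in range(p):
            if (y*y + y - (x**3 - x**2 - 10*x - 20)) % p == 0:
                pts += 1
    return p + 1 - pts

def sym2_local(p, k):
    """Coefficient of T^k in the expansion of the Sym^2 Euler factor
    1 / ((1 - alpha^2 T)(1 - p T)(1 - beta^2 T)) at a good prime p,
    i.e. the complete homogeneous symmetric function h_k of the roots."""
    a = count_ap(p)
    e1, e2, e3 = a*a - p, p*a*a - p*p, p**3
    b = [1, e1, e1*e1 - e2]          # h_0, h_1, h_2
    while len(b) <= k:
        b.append(e1*b[-1] - e2*b[-2] + e3*b[-3])
    return b[k]

# The trace at p (coefficient of T^1) is a_p^2 - p:
print([(p, sym2_local(p, 1)) for p in (2, 3, 5, 7)])
```

Assembling the full Dirichlet series is then just multiplicativity over these local expansions, plus whatever the correct bad factors turn out to be, which is exactly the open issue for non-squarefree level.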