Still alive, projects

August 19, 2012

If you’ve wondered where I’ve been for the past year: I’ve mostly been on Google+ and somehow didn’t feel like posting anything useful here in the last couple of months.

However, I’ve got quite a few cool projects running:

  1. Got me a pair of InfiniBand adapters. The plan: brew up software which receives data via IP on multiple IB nodes and writes it to distributed shared memory via RDMA/IB. Then let one or more IB nodes read from that ring buffer and aggregate the data so that it can be shoved into an RDBMS (a minimal sketch of the ring-buffer idea follows this list).
  2. Brew up an AWS image for easy BOINC-crunching while preserving the workunits on a headnode. Why? Because instances on the AWS spot market are cheap. But spot machines do not retain any data, so I might put the workunit data into S3, along with a headnode directing which machine (machine IDs keep changing!) may work on which workunits.
  3. Become more confident in Erlang. Got quite a few projects which could benefit from easy protocol prototyping with ASN.1 in Erlang. Stay tuned.
  4. Got me a Spartan-3E FPGA board. Not sure what to do with it, but it’s awesome :)
  5. Was playing around with GNU Radio and my RTL DVB-receiver recently.
  6. Doing some serious OpenStreetMap work lately.
  7. Mrs. Janssen and I got fond of geocaching. Lots of outdoor stuff. Sweet!
  8. Been to this year’s Linuxbierwanderung. Was awesome!
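
Since item 1 is easiest to explain with code in hand, here’s a minimal sketch of the ring-buffer protocol – plain single-process Python, purely illustrative, with all names made up. In the real setup the buffer would live in RDMA-registered memory on the IB nodes and the head/tail indices would be advanced with remote atomic operations, not Python ints:

    # Single-writer/single-reader ring buffer; models only the head/tail
    # protocol the IB nodes would follow, not the RDMA transport itself.
    class RingBuffer:
        def __init__(self, slots):
            self.buf = [None] * slots
            self.slots = slots
            self.head = 0  # next slot the receiving node fills
            self.tail = 0  # next slot the aggregating node drains

        def put(self, item):
            if (self.head + 1) % self.slots == self.tail:
                return False  # buffer full: writer must back off and retry
            self.buf[self.head] = item
            self.head = (self.head + 1) % self.slots
            return True

        def get(self):
            if self.tail == self.head:
                return None  # buffer empty
            item = self.buf[self.tail]
            self.tail = (self.tail + 1) % self.slots
            return item

    rb = RingBuffer(4)
    for packet in ("a", "b", "c"):
        rb.put(packet)         # receiving nodes write incoming IP data
    while True:
        packet = rb.get()      # aggregating node drains towards the RDBMS
        if packet is None:
            break
        print("aggregate", packet)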

I ain’t dead yet.


A Better Rice For The World

May 19, 2010

The Nutritious Rice for the World (Rice) project, a World Community Grid BOINC project, ended a few weeks ago. BOINC (Berkeley Open Infrastructure for Network Computing) is a non-commercial program and infrastructure which allows volunteers to donate their computers’ spare resources to very interesting, computing-intensive scientific projects. Many people around the world contributed their CPU resources to help figure out the structure of the proteins of the most common strains of rice. In the end, about 25,761 years of CPU time were contributed to the project. IBM heavily supported this project through their World Community Grid (WCG) program, offering Rice a massive user base and community.

Rice is one of the most common foods in various parts of the world. It’s in the interest of us all to find varieties and breeds of rice which are most nutritious or resistant to pests; the project’s goal is to find out which varieties of rice should be interbred with others to give the best results, so that we’ll get new strains of rice which are harder, better, faster, stronger.

Dr. Ram Samudrala

A lot of BOINC users who contributed to the project (like myself) are now asking themselves a lot of questions. Who are the people behind the scenes? How much work is necessary to get a project like this into operation? What was IBM’s role? What will happen with the contributed results? And, in the end, who will benefit from the project?

I think no one can give better answers than Ram Samudrala, PhD and Principal Investigator of a computational genomics research group at the University of Washington. Rocker, scientist and Emacs admirer – he was kind enough to answer some of my questions about the project.

Tell us a little about yourself and how you got involved in the Rice-project.
Ram: I’m a professor researching computational biology at the University of Washington, Seattle. My overarching interest has been to understand and model how the genome of an organism (genotype) specifies its behaviour and characteristics (phenotype). We develop computational algorithms to this end that are applied to whole genomes, and we work on many organisms. Rice was specifically chosen since our collaborators at the Beijing Genomics Institute had just finished sequencing it (and we annotated the refined version), and I also got a $1.9 million grant from the US National Science Foundation (NSF) to predict the structure and functions of all proteins encoded by the rice genome. We developed algorithms to do this and we applied them to all rice proteins. Then IBM came along and offered us the means to redo some of our calculations on the most difficult proteins using the WCG, and then we ported our code over to work on the Grid.

When was the first time you considered using voluntary distributed computing for your project?
Ram: Since the days of SETI@home, and since we built our own local clusters to do structural computational biology – but porting our code to BOINC was always an inertial challenge.

Did you consider using other DC infrastructures besides BOINC, like distributed.net? If yes, why did you decide to use BOINC?
Ram: No, we used BOINC since it was what was supported by IBM WCG.

Have you considered asking the NCSA for computing resources?
Ram: Yep, but it’s a cumbersome process, like applying for a grant, and again, porting software to work on different architectures. The barrier is that we get grant money to do research and not develop software. I have used NIST supercomputing resources in the past.

You said you would need 200 years of computing time using your available resources. Besides voluntary distributed computing and the University of Washington, were there other universities or institutes directly contributing computing-resources to your project?
Ram: Not for this project, no.

Rice BOINC splash screen

You were using algorithms from the Protinfo website. Which one did you actually use, and how much effort did you put into customizing it for BOINC? Can you tell us whether those algorithms and their implementation are released under a free license?
Ram: It’s the Protinfo AB algorithm, which is our ab initio or de novo simulation protocol. IBM spent a fair amount of time porting the code to work with BOINC. The original algorithms/software are all freely available without any claim of copyright (i.e., in the public domain).

Could you explain “de novo” and “ab initio” for non-scientists, please?
Ram: “De novo” and “ab initio” are generally translated to mean “from first principles”. In the old days, this used to mean using pure physics energy potentials for protein folding. These days, to us, it means any set of general principles that is not biased towards a particular protein or organism.

If the algorithms you used are under a free license, did you already manage to publish the modifications, if there are any?
Ram: The modifications involving the porting are with IBM and they are unpublished.

(Ed. note: Since the software was released in the public domain there’s no requirement to publish the modifications.)

IBM helped you out in customizing the protein-prediction algorithms for various platforms. Can you tell us how much they contributed?
Ram: All the customisation was done by IBM engineers. We just gave them the original software and ran sanity checks on the output. I’m a strong free-software and anti-IP proponent, to the degree that I encourage commercial use without restrictions on the software (people can always use the public-domain versions if they want to).

Rice Terraces by Flickr user ~MVI~

How much time did you save by using the World Community Grid’s infrastructure, compared to setting it all up on your own like other projects do?
Ram: IBM took about six months or so to port our software, so I presume it would’ve required that kind of an investment. Keep in mind that they had a lot of prior experience with BOINC. IBM now maintains the code, does the PR and runs the predictions for us. I’d say this would be a full-time programmer/sysadmin type of position, and if I had that extra money, I’d rather spend it on someone doing the basic research.

If there are flaws about BOINC, which would you like to be addressed first?
Ram: I can’t think of any in the way we did it with IBM, but without IBM, the PR machine has to be powerful to get people on board. It’s more than just recruiting people, but also motivating them, as IBM does with badges, giving them a sense of community and providing a support infrastructure. This is hard for a research lab to do on its own (it can be done, but whether it’s really the best use of our talents is the question).

Programming and debugging is an iterative process. Looking at your source-code repository, how many releases of the software were necessary until you got the cow flying?
Ram: For this case, internally we probably had about 10 or so iterations in total, but the basic science part of the software is something that has evolved over 18 years.

How did you do beta testing? Did you use the publicly available beta projects at WCG, or were you actually just doing it in your lab?
Ram: It was mostly in our group. We just submitted sequences for which we knew the answers and we did a dry run initially with the same sequences.

I’m curious here – were these structures predicted by other algorithms, or was that done the hard way, using X-ray crystallography?
Ram: These were done the hard way, at the bench. These are our gold standard for when we know we’re right or wrong, so we benchmark our methods against all this. When we did the rice project, we did sequences with known answers to see how well things would work and that there was no chance of anything going wrong.

Dr. Ling-Hong Hung

What was it like getting in touch with the community? Was the feedback helpful? How many people from your team were actually dealing with the community?
Ram: At its peak, we had three people dealing with the community: our sysadmin and project lead Michal Guerquin, our programmer and scientist Ling-Hong Hung, and myself. Opening our software to the Grid and the community definitely presented some challenges, which I believe will be the focus of our first paper. An interesting tangent is that we’ve had to port some of our analysis software to work on GPUs so we could handle all this data. So there are some good technological developments here that we’ll be writing about shortly.

Michal Guerquin

A lot of people are concerned about “Frankenfood”. Your project’s website explicitly states that this is not about genetic engineering, but about finding the most nutritious rice-strains for interbreeding with other rice-crops. Is there anything you’d like to explain to people who are still concerned?
Ram: We’re simply extending what farmers have been doing for millennia in a more rational way, and also what has been going on in nature for billions of years. The problem to us is scientific, and all knowledge that is produced (which from our end will be completely free and transparent) can be used in various ways according to the will of the people. But we have governments and politicians to handle the deeper societal implications. What I mean by this is that people should petition their representatives, as they are doing successfully in many parts of the world, to decide where to go with genetically modified organisms – something I see as ultimately having a socioeconomic/political solution.

Your project is one of the very few with a fixed end; almost all other projects keep handing out workunits for new phases. How come you’re finished now? Is everything from the rice genome now analyzed from a computational point of view, with nothing else left to do?
Ram: Not at all. We obtained a huge amount of data and we’re now pressed to analyse it. I can honestly say that we were overwhelmed by this data. My goal as a scientist, though, is not just to develop technical tools and produce large tables and graphs, but to come up with something tangible that is prioritised and can be tested at the bench – something that really changes the makeup of rice in a desired manner. The computations and the Grid are the means by which we arrived at this step, but our job now is to figure out where the best low-hanging fruit is, in collaboration with rice researchers (which we are doing with researchers around the world, including IRRI, Philippines). [Ed. note: IRRI, International Rice Research Institute]

Focussing on the data: Now that you know what those proteins really look like, where do you draw the line and say “this protein is more nutritious than others”? My basic understanding is that the nutritious parts of rice are actually carbohydrates (starch), proteins and some fat. How should I imagine this analysis?
Ram: So the proteins we’re talking about are gene products, which carry out almost all the functions in rice (or any other organism). So we use “protein” to refer to a molecule that does this, rather than the nutritional use of the word “protein”, which refers to these biological molecules broken down and aggregated (see “Protein” and “Protein (nutrient)” in Wikipedia).

By nutrition we mean anything that leads to a higher range of bioavailable substances, like dietary minerals and vitamins. In rice, examples include elements like iron or organics like vitamin A. Incidentally, the “golden rice” GMO is a product of Monsanto that has higher beta-carotene, a precursor to vitamin A (“Golden Rice” at Wikipedia). We’d like to get to something like that by crossbreeding, without the use of genetic engineering, working on both micro- and macronutrients.

So in the end, we need to be able to create a rice strain that has enriched nutrients and is perhaps better than current strains in terms of yield and/or hardiness. Before we go off and start crossing rice, there are a number of molecular-biology bench experiments that can be done to say whether the predictions we make about the activity of certain proteins are correct, so we’d do those first.

Do you plan to publish all your results in an Open Access Journal?
Ram: Yep, that would be the ideal. Publishing in Open Access journals also sometimes costs money. I’m not a big fan of the “pay to publish” model – it’s not a lot of money, and some scientists have grants to cover it, but it’s not a good principle.

Thank you very much for this interview!
Ram: Thanks; I enjoyed the questions!

Dr. Ram Samudrala is a tenured professor at the University of Washington, Seattle. He heads the Nutritious Rice For The World project and is one of the inventors of its protein-prediction algorithms. He’s a prolific contributor of scientific papers and generally a very nice guy I’d like to buy a drink.

This work and its contents are licensed under a Creative Commons license.
The rice picture is copyrighted and licensed CC-BY-SA by Flickr user kadaoor.
The rice-paddy picture is copyrighted and licensed CC-BY by Flickr user ~MVI~.
The pictures of the team members were used with permission of the Rice team.
The BOINC splash screen is copyrighted by IBM and the World Community Grid and was used with permission.


Einstein@home gossip: Application ported to the PS3, but no SPE-support so far

January 24, 2007

Via the Einstein@home Cruncher’s Corner:

An Einstein@home cruncher ported the Einstein@home science application to Sony’s PlayStation 3, but only using the PowerPC core of the CPU. He has not used the SPEs of the Cell yet, which explains the floating-point performance shown on the Computer summary page for his PS3:

Operating System: Linux 2.6.16-20061110.ydl.2ps3
Measured floating point speed: 284.52 million ops/sec
Measured integer speed: 974.06 million ops/sec

Porting the science application to support the SPEs of the Cell will be hard without real compiler support. I can only guess what compiler he used and only speculate about his plans. Considering that the SPEs only do single-precision FLOPS, he’ll have to find a way to implement double precision in software (which isn’t a problem nowadays – algorithms exist; see the sketch below). Also, the Sony CBEs don’t have all SPEs enabled; it’s rumoured that only 6 out of 8 SPEs are usable – my guess is that Sony gets the CPUs which failed the initial burn-in tests, the ones where some SPEs are dead. Cheap enough for a consumer product.
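
To give an idea of what “double precision in software” means: the classic trick is double-single arithmetic, where one high-precision value is kept as a (hi, lo) pair of single-precision floats. Below is a tiny, purely illustrative sketch of the idea (Knuth’s TwoSum plus a pairwise add) in Python, with numpy’s float32 standing in for the SPE’s native type – on the Cell this would of course be C with SIMD intrinsics:

    import numpy as np

    f32 = np.float32

    def two_sum(a, b):
        # Knuth's TwoSum: s = fl(a + b), e = the exact rounding error
        s = f32(a + b)
        v = f32(s - a)
        e = f32(f32(a - f32(s - v)) + f32(b - v))
        return s, e

    def ds_add(x, y):
        # Add two "double-single" numbers given as (hi, lo) pairs
        s, e = two_sum(x[0], y[0])
        e = f32(e + f32(x[1] + y[1]))
        return two_sum(s, e)

    # 1 + 2**-30 is not representable in float32 (24-bit mantissa),
    # but the (hi, lo) pair keeps the small term around:
    hi, lo = ds_add((f32(1.0), f32(0.0)), (f32(2.0 ** -30), f32(0.0)))
    print(hi, lo)  # 1.0 9.313226e-10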

However, interesting times ahead. Go, Gaurav, go!

About Einstein@home:
Einstein@Home is a program that uses your computer’s idle time to search for spinning neutron stars (also called pulsars) using data from the LIGO and GEO gravitational wave detectors. Einstein@Home is a World Year of Physics 2005 project supported by the American Physical Society (APS) and by a number of international organizations.



BOINC: ‘Returning Results Immediately’ considered harmful

December 27, 2006

Via Romworld:

Rom Walton from the UCB BOINC team wrote an article about the impact of the “return results immediately” ("-return_results_immediately") setting of the BOINC client. His point is that this setting puts a high and unnecessary load onto the projects’ servers. He claims that leaving it up to BOINC when to send the results is about 70% more effective than sending them immediately, because fewer database queries are needed.

His figures make sense, so I agree with him. Please don’t use that feature in your BOINC client, especially considering the problems on some projects’ servers in the past.
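
To see where a number like 70% could come from, here’s a back-of-the-envelope sketch: every scheduler contact carries a fixed overhead (connection handling, database transaction) plus a small per-result cost, so batching many results into one contact amortizes the fixed part. The constants and names below are made up purely for illustration:

    REQUEST_OVERHEAD = 10.0  # fixed cost per scheduler contact (e.g. DB transaction)
    PER_RESULT_COST = 1.0    # incremental cost of one reported result

    def server_cost(n_results, batch_size):
        requests = -(-n_results // batch_size)  # ceiling division
        return requests * REQUEST_OVERHEAD + n_results * PER_RESULT_COST

    print(server_cost(100, 1))   # report immediately: 1100.0 cost units
    print(server_cost(100, 10))  # let BOINC batch:     200.0 cost units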

Thank you for your cooperation :-)



Rosetta: Article “Deciphering Protein Structures”

September 14, 2006

Via the NCSA:

The NCSA wrote a very easy-to-understand, yet quite complete article about David Baker’s Rosetta project, a theoretical approach to deducing a protein’s structure using computer simulations.

Things I learned from this article:

  1. The code does not start with a “flat” protein molecule and wiggle it around; it uses a “homologous known protein structure” as a starting point. I don’t know whether that’s good or bad, but it limits the permutations to be checked.
  2. David created a portal known as Robetta, where other biologists can submit their models to be crunched.
  3. The Rosetta project (not to be confused with Rosetta@home) uses a lot of CPU hours on NCSA’s clusters and supercomputers (Tungsten Linux Cluster, NCSA Condor Flock, and now possibly TeraGrid resources).

However, quite a nice read – go and grab it while it’s hot!



BOINC: Why you should care about the credit-system

September 6, 2006

Probably the heaviest request from users during the last BOINC user survey was “introduce a fairer credit system”. It’s still kind of frustrating that some projects hand out lots of credits per CPU hour while others are more close-fisted with their credits. And there’s also the issue that we have “calibrating” BOINC clients which sail around the known credit issues and manipulate the claimed credit for a workunit.

Some people consider this cheating; others claim that it is self-defence – their argument being “why should we get less credit in total even though we crunch more data per day?”

Both have a point, so some projects finally decided to move away from the naïve BOINC credit scheme (which is based on the client’s internal benchmarking) and create their own, CPU-hour-based schemes.
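
To illustrate why the benchmark-based scheme invites “calibration”, here’s a simplified sketch of the old formula – the numbers are approximate and this is not the actual BOINC code, but the principle holds: claimed credit scales with the client’s own benchmark results, so inflating the benchmarks inflates the claim even though the same work got done.

    # Roughly: 100 cobblestones per day of CPU on a 1 GFLOPS machine,
    # with the claim based on the client's self-reported benchmarks.
    def claimed_credit(cpu_seconds, whetstone_flops, dhrystone_ips):
        avg_benchmark = (whetstone_flops + dhrystone_ips) / 2.0
        return cpu_seconds / 86400.0 * avg_benchmark / 1e9 * 100.0

    honest = claimed_credit(3600, 1.2e9, 2.0e9)  # one hour, honest benchmarks
    tuned = claimed_credit(3600, 2.4e9, 4.0e9)   # same hour, doubled benchmarks
    print(honest, tuned)  # ~6.7 vs ~13.3 credits for identical work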

For instance, Einstein@home recently “tuned” their credit calibration (1) again to be fairer – silently – which caused an outcry from some people because fewer credits will now be issued in general. Rosetta@Home introduced a new credit mechanism as well, but is more transparent about it (transparency is the major pro of Rosetta@Home anyway).

Why would someone who’s into science care about the credit system anyway? There are several reasons: motivation and individual success are the absolute basis of public voluntary distributed computing, something some people out there still haven’t understood. If you want to build up and maintain a large user base, you need to give them incentives: credits, public blessings, and – importantly – constant reports about the project’s progress which show more than just what percentage of the project is already done, like RC5 does. (OK, I have to admit there isn’t much to report in the RC5 project, but you get my point, don’t you?)

And that’s the reason why you have to care about how many credits you issue and how you discuss the credit issue in public – never underestimate the so-dubbed “credit whores” – they’re your user base and might wander off to projects which hand out more credits. Once you’ve lost a user, you’ll most probably never get him back.

Be opportunistic and go for the high performers even if they’re just after the credits. Be nice to your users and give them real reports every couple of weeks. Participate in the forums and give your users feedback. If possible, organize parties to meet your users (no one ever said you have to pay). Optimize your science application and be as fair as possible with the credits. Take rants and criticism seriously. If people start optimizing your science application, embrace the changes and let them take part in the validation process.

And even if tuning your science application to be as efficient as possible takes a lot of effort, remember: your users will thank you because they can crunch more data, and you’ll push your project onto a new level.

Corrections:

(1)
Bernd Machenschalk from the Einstein@Home project correctly pointed out that they did not change the credit system, but only the calibration of the system they introduced with the S5 run.



Einstein@home S5 update

August 8, 2006

Ben Owen of the Einstein@home science team posted an update about the ongoing efforts; the post was made in the Science forum, not on the front page where one would expect it.

He reports that the National Science Board has officially certified the project as having “reached the initial design goal”, which means “we’re officially in business”. He also points out that the S5 raw data is twice as good as S4’s; they had some problems with the precision of their interferometers, mostly due to construction work outside the L1 site.

Check out the forum for more details.


