After having learned a little bit about Folding@Home (F@H) over the years, I found myself ready and willing to join a team and try it out. Folding@Home is a non-profit distributed computing project operated by the Pande lab at Stanford University. Essentially, the software simulates potential ways a protein can fold, and once a simulation is complete, it sends the "work unit" back to the database for further analysis. The software runs locally on your computer and harnesses your CPU's (or GPU's) raw computational power to solve these combinatoric puzzles. The proteins being investigated by scientists at Stanford are crucial to understanding diseases like Alzheimer's, Huntington's, and many forms of cancer. Short summaries of the research projects are available according to which cause you select. So if you were to download and install the client on your PC, you could say you are "donating your processing power to science"! There are even "teams" of folders, who participate under a registered team name. These teams are ranked based on the sum of the points earned by their "players".
But why are there teams, and how do they compete? Good question. Here is the page with a list of teams by rank. Not all work units are created equal. The F@H client attempts to assign work units appropriate for each worker. For instance, your old desktop from 2007 with a dual-core processor can theoretically accomplish any size of work unit, but it wouldn't always be practical. So the F@H client will detect the older CPU architecture and assign a smaller work unit that it can complete within a day or two, or even a few hours. On the other end of the spectrum, you could theoretically assign your GPU (these are folding workhorses) many smaller work units, but that would also be inefficient, since more time would be spent fetching and returning work while the actual folding could take as little as a few seconds.
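The sizing logic described above can be sketched as a toy model. To be clear, the real client uses benchmarking and assignment servers that are far more involved; the function name, thresholds, and categories below are all made up for illustration:

```python
# Toy sketch of sizing work units to device speed -- illustrative only,
# not the actual F@H assignment logic or its real thresholds.

def assign_work_unit(device_gflops: float) -> str:
    """Pick a work-unit size so the device finishes in a reasonable window."""
    if device_gflops < 10:       # e.g. an old dual-core desktop CPU
        return "small"           # finishes in hours or days, not weeks
    elif device_gflops < 200:    # a modern multi-core CPU
        return "medium"
    else:                        # a discrete GPU: a folding workhorse
        return "large"           # big units amortize the fetch/return overhead

print(assign_work_unit(5))      # old desktop gets a small unit
print(assign_work_unit(500))    # gaming GPU gets a large unit
```

The key trade-off this captures is the one in the paragraph above: small units keep slow machines useful, while large units keep fast machines from wasting time on network round trips.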
The client also detects how powerful your graphics card is and sends it a proportionally complex work unit. Work units that are completed quickly earn a bonus bounty, which makes folding a very fun way for PC enthusiast communities (5 of the top 6 teams) to test and benchmark their systems. Currently, the best way to earn points is with a high-end graphics card, as these components have more memory, offer far more parallelism, and are much better at floating-point operations than traditional CPUs.
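That "bonus bounty" for quick returns can be sketched with a simple formula: base points scaled up by a factor that grows the faster you return the unit. This is only a hedged approximation of the idea; F@H's actual bonus formula and constants may differ, and the numbers below are made up:

```python
import math

# Hedged sketch of a quick-return bonus: the faster a unit comes back,
# the bigger the multiplier. The constant k, the deadlines, and the base
# points are all illustrative, not F@H's real values.

def award_points(base: float, deadline_days: float, elapsed_days: float,
                 k: float = 0.75) -> float:
    """Base points times a bonus that rewards returning work early."""
    bonus = math.sqrt(k * deadline_days / elapsed_days)
    return base * max(1.0, bonus)  # never less than the base points

# A GPU finishing a 10-day-deadline unit in half a day earns a big multiplier:
print(round(award_points(1000, 10, 0.5)))   # 3873
# Finishing right at the deadline earns only the base points:
print(round(award_points(1000, 10, 10)))    # 1000
```

A scheme like this is why fast hardware racks up points so disproportionately: the bonus compounds the raw speed advantage.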
While each device alone may not have a lot of computing power, the use of networking and coordinated work units results in quite a staggering accomplishment: computing power that rivals some of the world's fastest supercomputers.
F@H has been around since the turn of the century. The graph is a few years out of date, but it clearly shows just how powerful distributed computing can be. At the time of writing, the F@H webpage reads "Today we are 105,387 computers strong outputting 15,876 teraflops of computing power," which suggests growth has plateaued since this graphic was made, even as newer and faster supercomputers have been constructed. F@H isn't the only distributed computing project, though. There is also BOINC, which has somewhere between 2 and 3 times as many volunteers but about two-thirds of the overall computing power (more laptops and older computers, most likely). BOINC is short for Berkeley Open Infrastructure for Network Computing (whew), and its client is open source, unlike F@H's, which was merely built from open-source tools. BOINC projects extend beyond the realm of proteins and lipids, with many universities leading teams attacking problems in physics, mathematics, astronomy, and even cognitive science (MindModeling@Home). Some projects seek to advance the search for extraterrestrial life while others focus solely on discovering new prime numbers. All of these projects can put donated computing power to use. Neat, huh?
The F@H team posted an update recently, addressing what is to come in 2016. One new area of development is mobile computing. With the mobile app just making its debut in the summer, the effectiveness of mobile devices for computing is still a work in progress. However, if the market demand for better graphics performance keeps up, the innovations in mobile graphics could be the foundation for a viable mobile F@H and/or BOINC client. With nearly everyone having a smartphone these days, this is certainly an exciting prospect for some.
- Cnet article that touches on companies invested in “grid” computing
- Recent discovery in astronomy & space
- A blogger’s skeptical take on distributed computing
Oftentimes, when a technology is first discovered, implementations and attempts to use it fall flat. That is because it takes time to refine the technology and incorporate it into existing infrastructure. One thing that holds distributed computing back (versus a supercomputer) is that the latency between nodes/workers is oftentimes inconsistent, making massively parallel computing much harder. Who knows how much distributed computing technology will be integrated into our society in the coming years? It may not see much more attention for 5, maybe even 10 more years. But as the internet and other networks become more connected, and as more fiber-optic cable is laid, it is only a matter of time before the potential of this technology is fully realized.
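The latency problem above is worth a quick numerical illustration. When parallel work is tightly coupled, every synchronization step waits on the slowest node, so one bad link drags down the whole batch; independent work units (like F@H's) sidestep this. The timings here are invented purely to show the effect:

```python
# Illustration of the straggler problem -- all timings below are made up.
# A tightly coupled parallel step finishes only when the slowest node does.

compute_time = 10.0                # seconds of actual work per node
latencies = [0.1, 0.2, 0.1, 8.0]   # one node has a slow, inconsistent link

# If nodes must synchronize each step, the step takes as long as the worst node:
step_time = compute_time + max(latencies)
print(step_time)                   # 18.0 -- one slow link nearly doubles it

# Independent work units have no global wait; each node just averages out:
avg_time = compute_time + sum(latencies) / len(latencies)
print(round(avg_time, 2))          # 12.1 per node on average
```

This is exactly why projects like F@H and BOINC carve problems into independent work units instead of attempting supercomputer-style tightly coupled simulation over the open internet.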