Utility Computing

Theoretically Measuring Performance of a Computer

As of today, there are many benchmarks that measure the performance of a computer and reduce it to a single number for later comparison against other models. I will not go into those here, since the right benchmark depends entirely on what you want to do. Generally, the performance of a computer depends directly on the speed of its floating-point operations, so it is possible to calculate an upper bound on its performance, the Theoretical Peak Performance, with a simple equation.

Rpeak = [CPUs] x [CPU clock rate (GHz)] x [CPU floating-point issue rate (operations per cycle)]

where

CPUs = [the number of processors] x [the number of cores per processor]
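The equation above can be sketched in a few lines of Python. Note that the machine configuration below (a dual-processor, dual-core box at 2.0 GHz issuing 4 floating-point operations per cycle) is an illustrative assumption, not a specific model:

```python
def theoretical_peak_gflops(processors, cores_per_processor,
                            clock_ghz, flops_per_cycle):
    """Rpeak = CPUs x clock rate (GHz) x floating-point issue rate."""
    cpus = processors * cores_per_processor
    return cpus * clock_ghz * flops_per_cycle

# Hypothetical example: 2 processors x 2 cores, 2.0 GHz, 4 FLOPs/cycle
print(theoretical_peak_gflops(2, 2, 2.0, 4))  # 32.0 (GFLOPS)
```

Remember that this is only an upper bound; real applications rarely sustain more than a fraction of Rpeak.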

Linux Kernel 2.6.x Local Root Exploit

As of today, Linux kernel 2.6.x has been hit by many local root exploits. How many there are does not really matter; what matters is that most of these exploits work on most Linux machines. One serious case is that they are also valid on cluster distributions such as NPACI Rocks. In other words, every server in a cluster may be exploited toward a bigger goal, e.g., password cracking. One may argue that this is not so dangerous because they are local root exploits, not remote root exploits. True. But you have to imagine the power of grid computing, where you can seamlessly run a job on remote clusters with automatic executable staging. That is enough: one could exploit the whole grid instantly.

Applying Grid in E-Military

The army also needs the power of the Grid. The army is usually the largest organization in any country, with many branches. A general typically wants to command all units under his control just in time and in a face-to-face manner, so Access Grid would fit this requirement perfectly. Many armies have their own high-speed private network channels, so they do not need to worry about quality and bandwidth. An army also has its own spies and agencies, which produce high volumes of information. That information must be mined to analyze its confidence, accuracy, and probability; in short, all information must be prioritized and rearranged as fast as possible. Historical data is essential here. A Data Grid could be used to store images captured by private satellites for later reference, and a Computational Grid also fits the purpose of simulating how to conduct a war correctly. War is just a kind of complex chess.

Applying Grid in E-Commerce

E-Commerce, including enterprise computing, needs Grid technology to fulfill its requirements. First, let us start from traditional high-performance computing needs in the business sector. Businesses really do need high-performance computing. Unfortunately, most businesses in Thailand have relied entirely on human expertise. What does that mean? I am talking about large-scale decision support systems. There are many decision support algorithms that predict future markets based on historical and statistical analysis. You might argue that the problem is not that large and a single high-end computer could do the job. Well, to predict the future accurately, you have to consider many variables. A system with only a few variables may be solved in a short time, but you must not underestimate realistic problem sizes.

Why do you need High-Performance Computing?

Have you ever heard about High-Performance Computing, or HPC for short? Generally, HPC is a technology that lets applications run faster. You might argue that today's processors are so fast that you do not need HPC. True and false: a single processor is never as fast as one wants it to be. Let me give you an example. Suppose you want to play a two-player game, e.g., chess, and you want to win. How do you win? In principle it is simple: make the move that leaves your opponent the lowest chance of winning. If you do not have a computer, you must predict the possible moves on your own. Actually, nobody really predicts during play; people rely on experience, and if you play a game long enough you remember certain patterns. A computer could help you a lot by simulating all possible lines of play to find the best move for the current position. However, a single processor, or even a few, is not enough in most cases. For example, in chess you have 16 pieces on the board and your opponent has another 16, so to look just 10 steps ahead you would have to simulate roughly 16^10 = 1,099,511,627,776 cases. Suppose your computer can simulate 10,000,000 cases per second; it would finish this simulation in about 30 hours. Your opponent will never wait that long. However, if you have 256 processors, it would take only about 7 minutes.
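The arithmetic above can be checked in a few lines of Python; the case count, simulation rate, and processor count are the illustrative figures from the text, assuming perfect parallel speedup:

```python
cases = 16 ** 10          # positions to simulate for 10 steps ahead
rate = 10_000_000         # cases simulated per second on one processor
procs = 256               # processors available in the parallel case

serial_hours = cases / rate / 3600
parallel_minutes = cases / (rate * procs) / 60

print(f"1 processor:    {serial_hours:.1f} hours")      # ~30.5 hours
print(f"{procs} processors: {parallel_minutes:.1f} minutes")  # ~7.2 minutes
```

Dividing a perfectly parallel workload across 256 processors shrinks the wall-clock time by a factor of 256, which is exactly the 30 hours versus 7 minutes contrast in the text.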

Parallel Approach for Bandwidth Minimization Problem

According to my previous post about the Bandwidth Minimization Problem and the Windows Supercomputing Contest 2006 at Kasetsart University, the committee has already received all the codes, including the one I mentioned earlier. Okay, let me describe what they look like. First, I tested them all, one by one, on an 8-processor node (dual-processor, dual-core, with Hyper-Threading) using an 8-node graph. As a result, the codes could be classified into 4 classes.