Tuesday, 30 August 2011

IBM Is Building the Largest Data Storage Array Ever, 120 Petabytes Big

Hard Drive Platter Closeup: Approximately 200,000 of these hard drives make up IBM's new array. Wikimedia Commons
Researchers at IBM's Almaden, California research lab are building what will be the world's largest data array--a monstrous repository of 200,000 interlaced hard drives. Altogether, it has a storage capacity of 120 petabytes, or 120 million gigabytes.
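As a quick back-of-the-envelope check (our arithmetic, not an IBM spec sheet), those figures imply roughly 600 GB per drive:

    # Rough sanity check of the array's headline numbers.
    total_petabytes = 120
    num_drives = 200_000

    total_gigabytes = total_petabytes * 1_000_000        # 120 PB = 120 million GB
    print(total_gigabytes / num_drives, "GB per drive")  # -> 600.0 GB per drive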
There are plenty of challenges inherent in building this kind of groundbreaking array, which, Technology Review writes, is destined for "an unnamed client that needs a new supercomputer for detailed simulations of real-world phenomena." For one thing, IBM had to rely on water-cooling units rather than traditional fans, as this many hard drives generate heat that can't be subdued in the normal manner. There's also a sophisticated backup system that senses the number of hard disk failures and adjusts the speed of rebuilding data accordingly--the more failures, the faster it rebuilds. According to IBM, that should allow the array to operate with the absolute minimum of data loss, perhaps even none.
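IBM hasn't published the rebuild algorithm, but the idea described above can be sketched in a few lines; the function name and rates here are purely hypothetical:

    # Hypothetical sketch of failure-aware rebuild throttling: the more
    # disks that have failed, the more bandwidth is devoted to rebuilding.
    def rebuild_rate_mb_s(failed_disks, base=50, ceiling=800):
        """Toy model: double the rebuild rate for each additional failure."""
        if failed_disks == 0:
            return 0
        return min(base * 2 ** (failed_disks - 1), ceiling)

    for n in range(4):
        print(n, "failures ->", rebuild_rate_mb_s(n), "MB/s")  # 0, 50, 100, 200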
IBM is also using a new filesystem, designed in-house, that writes individual files to multiple disks so that different parts of the file can be read and written at the same time.
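Striping of this sort is easy to illustrate. Below is a minimal sketch, assuming fixed-size blocks laid out round-robin; IBM's actual in-house filesystem is far more elaborate:

    # Minimal sketch of striping a file across disks in fixed-size blocks,
    # round-robin, so different parts can be read or written in parallel.
    def stripe(data, num_disks, block_size=4096):
        disks = [[] for _ in range(num_disks)]
        for block_index, offset in enumerate(range(0, len(data), block_size)):
            disks[block_index % num_disks].append(data[offset:offset + block_size])
        return disks

    layout = stripe(b"x" * 20_000, num_disks=4)
    print([len(d) for d in layout])  # blocks per disk: [2, 1, 1, 1]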
An array like this is bottlenecked pretty severely by the speed of the drives themselves, so IBM has to rely on software improvements like the new recovery system and filesystem to raise throughput and enable the use of so many drives at once.
Arrays like this could be used for all kinds of high-intensity work, especially data-heavy duties like weather and seismic monitoring (or people monitoring)--though of course we're curious as to what this particular array will be used for.

Read More at: http://www.popsci.com/technology/article/2011-08/ibm-building-worlds-largest-data-array-120-petabytes-worth

New Computer Chip Modeled on a Living Brain Can Learn and Remember

IBM, with help from DARPA, has built two working prototypes of a "neurosynaptic chip." Based on the neurons and synapses of the brain, these first-generation cognitive computing cores could represent a major leap in power, speed and efficiency     
IBM's "Neurosynaptic" chip prototypeIBM Research Zurich
A pair of brain-inspired cognitive computer chips unveiled today could be a new leap forward — or at least a major fork in the road — in the world of computer architecture and artificial intelligence.
About a year ago, we told you about IBM’s project to map the neural circuitry of a macaque, the most complex brain networking project of its kind. Big Blue wasn’t doing it just for the sake of science — the goal was to reverse-engineer neural networks, helping pave the way to cognitive computer systems that can think as efficiently as the brain. Now they’ve made just such a system — two, actually — and they’re calling them neurosynaptic chips.

Built on a 45-nanometer silicon/metal-oxide-semiconductor platform, both chips have 256 neurons. One chip has 262,144 programmable synapses and the other contains 65,536 learning synapses--which can remember and learn from their own actions. IBM researchers have used the compute cores for experiments in navigation, machine vision, pattern recognition, associative memory and classification, the company says. It's a step toward redefining computers as adaptable, holistic learning systems rather than yes-or-no calculators.
“This new architecture represents a critical shift away from today’s traditional von Neumann computers, to extremely power-efficient architecture,” Dharmendra Modha, project leader for IBM Research, said in an interview. “It integrates memory with processors, and it is fundamentally massively parallel and distributed as well as event-driven, so it begins to rival the brain’s function, power and space.”
Von Neumann architecture is essentially a system in which instructions and data share a single pathway between processor and memory. This creates a bottleneck that fundamentally limits the speed of memory transfer. IBM’s system eliminates that bottleneck by putting the circuits for data computation and storage together, allowing the system to compute information from multiple sources at the same time with greater efficiency. Also like the brain, the chips have synaptic plasticity, meaning certain regions can be reconfigured to perform tasks to which they were not initially assigned.
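To make the contrast concrete, here is a toy leaky integrate-and-fire neuron in the event-driven spirit Modha describes. It is a textbook model, not IBM's chip design, and the names and constants are illustrative: state ("memory") lives alongside the update rule ("compute"), and work happens only when a spike arrives.

    # Toy leaky integrate-and-fire neuron (illustrative only).
    class Neuron:
        def __init__(self, threshold=1.0, leak=0.9):
            self.potential = 0.0
            self.threshold = threshold
            self.leak = leak

        def receive(self, weight):
            """Process one incoming spike; return True if this neuron fires."""
            self.potential = self.potential * self.leak + weight
            if self.potential >= self.threshold:
                self.potential = 0.0  # reset after firing
                return True
            return False

    n = Neuron()
    print([n.receive(0.4) for _ in range(3)])  # [False, False, True]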
IBM’s long-term goal is to build a chip system with 10 billion neurons and 100 trillion synapses that consumes just one kilowatt of power and fits inside a shoebox, Modha said.
The project is funded by DARPA’s SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) initiative, and IBM just completed phases 0 and 1. IBM’s project, which involves collaborators from Columbia University, Cornell University, the University of California-Merced and the University of Wisconsin-Madison, just received another $21 million in funding for phase 2, the company said.
Computer scientists have been working for some time on systems that can emulate the brain’s massively parallel, low-power computing prowess, and they’ve made several breakthroughs. Last year, computer engineer Steve Furber described a synaptic computer network that consists of tens of thousands of cellphone chips.
The most notable computer-brain achievements have been in the field of memristors. As the name implies, a memory resistor can “remember” the last resistance it had when current was flowing through it--so when the current is turned back on, the resistance of the circuit is the same as before. We will not attempt to delve too deeply here, but this basically makes a system much more efficient.
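A crude software analogy of that behavior, with made-up constants, looks like this:

    # Toy memristor model: resistance depends on the history of charge that
    # has flowed through the device, and the state persists without power.
    class Memristor:
        def __init__(self, r_low=100.0, r_high=16_000.0):
            self.r_low, self.r_high = r_low, r_high
            self.state = 0.0  # 0 = fully high-resistance, 1 = fully low

        def apply_current(self, amps, seconds, k=0.05):
            charge = amps * seconds
            self.state = min(1.0, max(0.0, self.state + k * charge))

        @property
        def resistance(self):
            return self.r_high - self.state * (self.r_high - self.r_low)

    m = Memristor()
    print(m.resistance)         # 16000.0 ohms before any current
    m.apply_current(2.0, 3.0)   # drive current through the device
    print(m.resistance)         # 11230.0 ohms, "remembered" afterwards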

Read More at: http://www.popsci.com/technology/article/2011-08/first-generation-cognitive-chips-based-brain-architecture-will-revolutionize-computing-ibm-says

NASA Satellites Watch as Hurricane Irene Bears Down on East Coast


Hurricane Irene Swirls Northward: Hurricane Irene is seen swallowing nearly the entire eastern seaboard in this satellite image from Friday morning. The image of Earth's full disk was captured by the GOES-13 satellite at 10:45 a.m. EDT. NASA Goddard Space Flight Center
If you live in the eastern time zone, odds are you're battening down the hatches in advance of Hurricane Irene, a Category 2 monster threatening much of the eastern seaboard. Coastal communities are under mandatory evacuation orders in several states. This NASA Goddard image from Friday morning makes it clear this storm is no joke.
Evacuations are under way in several boroughs of New York, reportedly a first for the city, and the city’s subway system is shutting down at noon tomorrow. Multiple states have declared emergencies, Atlantic City is closing casinos, college students are not allowed to move into their dorms, and disaster planners are bracing for power outages from the Carolinas to New England.
Irene has weakened slightly, but the storm could strengthen again before it makes landfall, which is expected to happen in North Carolina on Saturday. From there, it will track north through the Chesapeake Bay and the Delmarva (Delaware-Maryland-Virginia) Peninsula, eventually moving over New York City, where it could still be a Category 2 storm. It will eventually pass through Quebec before petering out in the Labrador Sea.

Stay safe, everybody.

New Phase-Changing Gel Method Repairs Severed Blood Vessels Better than Stitches


Human Artery Cross-Section. Wikimedia Commons
A new heat-sensitive gel and glue combo is a major step forward for cardiovascular surgery, enabling blood vessels to be reconnected without puncturing them with a needle and thread. It represents the biggest change to vascular suturing in 100 years, according to Stanford University Medical Center researchers.

Sutures are an effective way to reconnect severed blood vessels, but they can introduce complications, for instance when cells are traumatized by the puncturing needle and clog up the vessel, which can lead to blood clots. What’s more, it’s difficult to suture blood vessels less than 1 millimeter wide, the Stanford team said. One of the authors on this study, Stanford microsurgeon Dr. Geoffrey Gurtner, was inspired to work on this problem a decade ago after a five-hour surgery in which he reattached the severed finger of a one-year-old infant, according to Stanford Medical School.
Sutures work by stitching together sides of a blood vessel and then tightening the stitch to pull open the lumen, or the inner part of the vessel, so the blood can flow through. Gluing a vessel together instead would require keeping the lumens open to their full diameter — think of trying to attach two deflated balloons. But dilating the lumen by inserting something inside introduces a wide range of problems, too.
Gurtner initially thought about using ice to fill up the lumen instead, but that meant making the vessels extremely cold, which would be too time-consuming and difficult on the operating table. He approached an engineering professor, Gerald Fuller, about using some kind of biocompatible phase change material, which could easily turn from a liquid to a solid and back again. It turned out Fuller knew of a thermo-reversible polymer, Poloxamer 407, that was already FDA approved for medical use.
Working with materials scientists, the team figured out how to modify the polymer so that it becomes solid and elastic when heated above body temperature, and dissolves into the bloodstream at body temperature. In a study on rat aortas, the team heated it with a halogen lamp and used the solidified polymer to fill up the lumen, opening it all the way. Then they used an existing bioadhesive to glue the blood vessels back together, a Stanford news release explains. The work was published in this week’s issue of Nature Medicine.
The polymer technique was five times faster than the traditional hand-sewing method, the researchers say. It even worked on superfine blood vessels, just 0.2 millimeters wide, which cannot be joined with a needle and thread. The team monitored the test rats for up to two years after the polymer-assisted suturing and found no complications.
“This new technology has potential for improving efficiency and outcomes in the surgical treatment of cardiovascular disease,” the authors say.

Saturday, 27 August 2011

Microsoft's Windows Phone 7 is the most stable mobile phone OS

Windows Phone 7 may not be flying off the shelves, but in my extensive experience the platform is the most stable and reliable out there. I recommend that you at least give the platform a try because it is NOTHING like Windows Mobile and the Microsoft name does not kill the experience.
As regular readers know, I have used nearly every mobile phone operating system over the years and bounce around between devices, operating systems, and carriers faster than most people I know. I have been using Windows Phone 7 since July 2010 and can say without a doubt that it has been the most stable and reliable mobile phone operating system I have ever used, even counting the early technical preview and the current beta versions of Mango that I have been running on my phones. People may have preconceived notions about Microsoft’s mobile platform given that Pocket PC and Windows Mobile had issues, but you can throw all of that out the window; give Windows Phone 7 an honest chance and I think you will reach the same conclusion.
My first data-enabled phone was the original T-Mobile Sidekick and then a Nokia Series 60 device. I then moved through various Palm Treos, Windows Mobile phones (smartphone and Pocket PC), a couple BlackBerrys, more Symbian phones, some iPhones, several Android phones, a couple webOS Palm phones, and a few Windows Phone 7 devices. I won’t go back to the “old days” when I had Palm OS phones and Windows Mobile phones since they are not applicable now and were quite unstable compared to today’s phones, but these are my experiences with each of the current modern platforms.
I will say that some platform instability likely comes from third-party software and sometimes even from defective hardware. But as you will see in my discussion of each platform, there are some issues even when the third-party software is well developed.

Android

Out of the six modern smartphone operating systems, Android is the least stable of them all. I have seen numerous low-memory issues, force-close warnings, random resets, freezes in apps and connections, and more. With so many different manufacturers, so many versions of the OS out in the wild, and thousands of apps that are clearly not built well enough to prevent issues, Android comes in last in the stability department.

Apple iOS

I was blown away by the sheer speed and fluidity of the first iPhone, especially when compared to my Windows Mobile phones. iOS is a very user-friendly platform and for the most part is fairly stable. However, I have experienced complete lockups and apps failing to launch on every iPhone, and even on my iPad 2. The iOS experience is funny: something gets goofed up in the OS, tapping an app gives you the impression it is starting up, and then you are taken right back to the home screen with no indication or explanation at all. You can keep doing this and always get returned to the home screen. I understand Apple doesn’t want to confuse the consumer, but some indication of what corrective action needs to be taken would be helpful.
It is also not always clear how to get an iOS device back up and running. There is no battery to remove, so you generally have to follow a button-press routine and may even have to connect to a desktop to restore your device.

BlackBerry

There have been some rock-solid BlackBerry devices in the past, but the models I have used over the last couple of years have disappointed me at times in regard to stability. The main issue I have experienced with BlackBerry devices is a lockup/freeze where the device gives the impression it is doing something in the background, but nothing ever changes and you have to perform a battery pull to get things going again. I still think this is one reason RIM keeps removable batteries on its devices.

Symbian

It’s been just about 10 years since I started using Nokia’s Symbian, and for a long time it was my most stable platform. Then Nokia started messing with the hardware too much and skimping on internal capacities; I think many of the issues I had on Symbian were related to low internal memory. Then again, both of the Nokia N8 devices in my house randomly lock up and won’t let the touchscreen activate anything, so a reset is required.

webOS

Palm’s webOS (HP didn’t keep it going long enough to earn a place in the name) has been quite stable, but still not perfect: I did experience a few random resets and freezes on my devices in the past. It does take a while for the initial sync when you set up lots of services, but I don’t count that as instability. There have been issues in the past with backup and restore failures, though, and when you are relying on the cloud for so much, that just cannot happen. I also experienced weird double-key-entry issues, but that may have been due to the crappy webOS hardware, which in and of itself was a failure of the platform.
Palm does have a major community built around hacking webOS and there are many hacks available that address issues with the platform so you can make your device more stable and reliable by following the guidance of the webOS Internals team.

Windows Phone 7

Zero! That is how many resets I have seen on ANY Windows Phone 7 device I have used in over a year. During that time I have used at least six WP7 devices on all four wireless carriers, from the early tech preview on the first WP7 device all the way through the latest RTM version of Mango on my HTC HD7. I even have a Dell Venue Pro, with a 32GB microSD card in it, that has been rock-solid stable even though I have read a number of reports of issues with that device. I don’t know if I have just been blessed with an uncanny knack for picking stable WP7 devices, but from what I read online there are many more customers just as pleased as I am with the stability of Windows Phone 7.
The only issue I have seen on Windows Phone 7 is an occasional temporary freeze when many things are downloading at once, but I have never had to perform a soft reset or a battery pull. Even that only happened a couple of times in the earlier version of WP7, prior to the NoDo and Mango updates.
I get comments from readers that Windows Phone 7 is junk, and I think these people have likely never used WP7 or just have a hatred for anything from Microsoft. I have been getting more comments from readers who have actually tried WP7, and the majority of them agree that it is a very good mobile operating system that continues to get better.
Most of these mobile platforms are getting more stable as we move forward, but we still have companies releasing what amount to beta devices for consumers to test, with updates expected later to address problems that should have been taken care of during initial development. I am not saying that everyone should get a Windows Phone 7 device; I am a firm believer in choice, and we all have different needs, wants, and desires, so there is no one device for everyone. I am just trying to share my experiences with you so that you can make informed choices, not choices based on fear of a name or on preconceived notions built from the feedback of just a few.
Have you used all the platforms and if so, what has been the most stable for you?

HP single-handedly destroys non-iPad tablet market

After less than two months on sale, HP has pulled the plug on the TouchPad tablet and is so desperate to get rid of them that it is having a firesale, selling the 16GB TouchPad for $99 and the 32GB model for $149. But not only has HP killed the TouchPad, it has also single-handedly destroyed the entire non-iPad tablet market.
So what went wrong with the TouchPad? I think that several factors contributed to the death of the TouchPad:
  • No app ecosystem
  • An OS that people didn’t care about
  • HP’s own lack of confidence in the product - I agree with John Gruber: HP’s new CEO, Léo Apotheker, has no interest in playing in the consumer market at all
  • The iPad effect - Probably the biggest reason that the TouchPad withered and died on the vine is the iPad
Let’s look at that ‘iPad effect’ in a little more detail.
Apple sells millions of iPads every quarter, and it seems that most tech companies have no idea why. In order to compete with the iPad, HP developed a tablet with a design and tech specs similar to the iPad’s, priced it like the iPad, spent a ton of money on commercials featuring celebrities, and pushed the tablet out to big retailers in huge quantities.
And still no one cared about the TouchPad.
The reason: People are buying the iPad not because it’s a tablet, but because it is an iPad. Apple has NOT carved out a market for tablets, Apple carved out a market for the iPad. Think about it: When Apple released the iPod back in 2001, did this create an enormous market for media players? No. It created an enormous market for the iPod.
And why should the iPad carve out a market for tablets? Apple doesn’t even refer to the iPad as a tablet! Sure, Apple refers to them as amazing, magical, even revolutionary, but not as tablets.
Price is another factor. When Apple unveiled the iPad, tech pundits were bowled over by the price: $499 was seen as cheap. And it was cheap - for an Apple product. But was $499 cheap for a tablet? Well, the TouchPad (which, remember, was a pretty decent tablet) didn’t sell at $499, and even a drop to $399 didn’t invigorate sales much. Once HP dropped the price to $99 as part of its firesale, however, demand became overwhelming for a product that was essentially dead and that HP would no longer update. The price drop was enough to push the TouchPad to the top of Amazon’s electronics chart, above the Kindle.

So there you have it. Unless you’re selling iPads, the stampede-inducing price point for a 16GB tablet is $99. OK, maybe that is a little on the low side, but the sweet spot definitely lies somewhere between $99 and $399, perhaps around the $250 mark. The trouble is that, according to iSuppli, the bill of materials and manufacture of the 16GB TouchPad comes in at $298.
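Put the article's price points against iSuppli's $298 cost estimate and the bind is obvious; a quick sketch (the $250 entry is the guessed "sweet spot" above, not a real price HP offered):

    # Per-unit margin on a 16GB TouchPad at each price point, versus
    # iSuppli's estimated $298 bill of materials and manufacturing cost.
    bom = 298
    for price in (499, 399, 250, 99):
        print(f"price ${price}: margin {price - bom:+d} dollars")
    # price $499: +201   $399: +101   $250: -48   $99: -199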
Which is why HP has destroyed tablets. The demise of the TouchPad has uncovered the dirty truth: the $500 tablet price point is too high … way too high. This applies to webOS tablets and Android tablets, and it will likely apply to Windows 8 tablets, although Windows might have a bit more oomph than webOS and Android and might be able to sustain that price for a while. Eventually, though, OEMs will engage in a race to the bottom and prices will fall.

Apple gives Tim Cook $384 million stock grant

Steve Jobs took home $1 a year for serving as Apple's CEO. The company's new leader, Tim Cook, is getting a richer deal.

Apple's board has given Cook a restricted stock grant of 1 million shares, Apple (AAPL, Fortune 500) reported late Friday in a regulatory filing. Those shares have a market value of $383.6 million, based on the stock's closing price on Friday.


But Cook will collect the shares only if he remains an Apple employee for the next decade. Half of his stock will vest in August 2016, and half will vest five years later, in 2021.
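The arithmetic behind the headline figure is simple; the per-share price below is implied by the reported total rather than quoted directly:

    # Value of the grant at Friday's close, and the two vesting tranches.
    shares = 1_000_000
    close = 383.60                 # implied by the reported $383.6M figure
    print(f"grant value: ${shares * close / 1e6:.1f} million")   # $383.6 million
    print(f"per tranche: {shares // 2:,} shares (Aug 2016 and Aug 2021)")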

As Apple's chief operating officer, Cook collected an annual salary last year of $800,000 and an additional bonus of $900,000. He also took home a special award from the board for his "outstanding performance" as acting CEO during Jobs' 2009 medical leave: A $5 million cash bonus and a grant of 75,000 shares.

That put Cook's total 2010 compensation at $59 million -- enough to make him one of the tech industry's highest-paid executives.

In contrast, Steve Jobs earned a $1 annual salary every year after he rejoined Apple in 1997. While many $1-a-year CEOs reap big back-end stock and options packages, Jobs was almost a financial ascetic: he collected no stock awards most years, no cash bonuses and no perks, even turning down a 401(k) match from Apple.

But in late 1999, Apple's board famously came through with a whopper of an executive bonus: The company spent $90 million to buy Jobs a Gulfstream V airplane. It also tossed in options on 10 million Apple shares.

"Apple's market cap has risen from less than $2 billion to over $16 billion under Steve's leadership," Apple board member Ed Woolard said at the time. "Steve has taken no compensation thus far, and we are therefore delighted to give him this airplane in appreciation of the great job he has done for our shareholders during this period."

Apple's market cap currently stands at $355.6 billion -- making it the most valuable publicly traded company in the world.

UK's atomic clock 'is world's most accurate'

An atomic clock at the UK's National Physical Laboratory (NPL) has the best long-term accuracy of any in the world, research has found. Studies of the clock's performance, to be published in the journal Metrologia, show it is nearly twice as accurate as previously thought. The clock would lose or gain less than a second in some 138 million years.

The UK is among the handful of nations providing a "standard second" that keeps the world on time. However, the international race for higher accuracy is always on, meaning the record may not stand for long.

The NPL's CsF2 clock is a "caesium fountain" atomic clock, in which the "ticking" is provided by the measurement of the energy required to change a property of caesium atoms known as "spin".
By international definition, it is the electromagnetic waves required to accomplish this "spin flip" that are measured; when 9,192,631,770 peaks and troughs of these waves go by, one standard second passes.
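In other words, the clock defines time by counting cycles; the arithmetic is exact by definition:

    # The SI second: 9,192,631,770 cycles of the caesium-133 hyperfine
    # transition radiation equal exactly one second.
    CS_HZ = 9_192_631_770
    cycles = 27_577_895_310           # three seconds' worth of cycles
    print(cycles / CS_HZ, "seconds")  # -> 3.0 seconds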
Matching colours

Inside the clock, caesium atoms are gathered into bunches of 100 million or so, and passed through a cavity where they are exposed to these electromagnetic waves.

The colour, or frequency, is adjusted until the spins are seen to flip - then the researchers know the waves are at the right frequency to define the second. The NPL-CsF2 clock provides an "atomic pendulum" against which the UK's and the world's clocks can be compared, ensuring they are all ticking at the same time. That correction is done at the International Bureau of Weights and Measures (BIPM) in the outskirts of Paris, which collates definitions of seconds from six "primary frequency standards" - CsF2 in the UK, two in France, and one each in the US, Germany and Japan. For those six high-precision atomic pendulums, absolute accuracy is a tireless pursuit. At the last count in 2010, the UK's atomic clock was on a par with the best of them in terms of long-term accuracy: to about one part in 2,500,000,000,000,000.
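That frequency adjustment is, at heart, a feedback loop. Here is a deliberately crude hill-climbing sketch of the idea, with an invented Lorentzian line shape and step size; it is not NPL's actual servo:

    # Crude sketch: scan the microwave frequency, watch the fraction of
    # spins that flip, and steer toward the strongest response.
    RESONANCE = 9_192_631_770.0

    def flip_fraction(freq_hz, width=50.0):
        """Invented Lorentzian response peaking at the caesium resonance."""
        return 1.0 / (1.0 + ((freq_hz - RESONANCE) / width) ** 2)

    freq = 9_192_631_850.0            # start 80 Hz off resonance
    for _ in range(100):
        freq += 1.0 if flip_fraction(freq + 1.0) > flip_fraction(freq) else -1.0
    print(round(freq))                # settles within 1 Hz of 9,192,631,770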

What time is it, exactly?

World clock
  • The international time standard is maintained by a network of over 300 clocks worldwide
  • Readings from these clocks are sent by satellite and averaged at BIPM, a measurement institute in France
  • But the "tick" of any one of them could drift out of accuracy, so BIPM corrects the average using six "primary frequency standards" in Europe, the US and Japan
  • Their corrected result, "International Atomic Time", is occasionally compared with the time-honoured measure of time by astronomical means
  • Occasionally a "leap second" is added or subtracted to correct any discrepancy
But the measurements carried out by the NPL's Krzysztof Szymaniec and colleagues at Pennsylvania State University in the US have nearly doubled the accuracy. The second's strictest definition requires measurements to be made in conditions that, Dr Szymaniec said, are actually impossible to achieve in the laboratory. "The frequency we measure is not necessarily the one prescribed by the definition of a second, which requires that all the external fields and 'perturbations' would be removed," he explained to BBC News.
"In many cases we can't remove these perturbations; but we can measure them precisely, we can assess them, and introduce corrections for them." The team's latest work addressed the errors in the measurement brought about by the "microwave cavity" that the atoms pass through (the waves used to flip spins are not so far in frequency from the ones that flip water molecules in food, heating them in a microwave oven).

A fuller understanding of how the waves are distributed within it boosted the measurement's accuracy, as did a more detailed treatment of what happens to the measurement when the millions of caesium atoms collide.
Without touching a thing, the team boosted the known accuracy of the machine to one part in 4,300,000,000,000,000. But as Dr Szymaniec said, the achievement is not just about international bragging rights; better standards lead to better technology. "Nowadays definitions for electrical units are based on accurate frequency measurements, so it's vital for the UK as an economy to maintain a set of standards, a set of procedures, that underpin technical development," he said. "The fact that we can develop the most accurate standard has quite measurable economic implications."
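That fractional accuracy converts directly into the "seconds per geological age" headline; a quick check gives ~136 million years with our rounding, consistent with the article's "some 138 million":

    # One part in 4.3e15 means ~4.3e15 elapsed seconds per second of drift.
    fractional = 1 / 4.3e15
    seconds_per_year = 365.25 * 24 * 3600        # ~3.156e7
    years = (1 / fractional) / seconds_per_year
    print(f"{years / 1e6:.0f} million years")    # -> 136 million years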

Source: http://www.bbc.co.uk/news/science-environment-14657002

Sunday, 21 August 2011

Jan Lokpal Bill & its importance

Source: http://indiaagainstcorruption.org/  

The Jan Lokpal Bill (Citizen's Ombudsman Bill) is a draft anti-corruption bill drawn up by prominent civil society activists. It seeks the appointment of a Jan Lokpal, an independent body that would investigate corruption cases, complete the investigation within a year, and see the resulting trial completed within one further year.

Drafted by Justice Santosh Hegde (former Supreme Court Judge and former Lokayukta of Karnataka), Prashant Bhushan (Supreme Court Lawyer) and Arvind Kejriwal (RTI activist), the draft Bill envisages a system in which a corrupt person found guilty would go to jail within two years of the complaint being made, with his ill-gotten wealth confiscated. It also seeks power for the Jan Lokpal to prosecute politicians and bureaucrats without government permission.

Retired IPS officer Kiran Bedi and other well-known figures such as Swami Agnivesh, Sri Sri Ravi Shankar, Anna Hazare and Mallika Sarabhai are also part of the movement, called India Against Corruption. Its website describes the movement as "an expression of collective anger of people of India against corruption. We have all come together to force/request/persuade/pressurize the Government to enact the Jan Lokpal Bill. We feel that if this Bill were enacted it would create an effective deterrence against corruption."

Anna Hazare, anti-corruption crusader, went on a fast-unto-death in April, demanding that this Bill, drafted by the civil society, be adopted. Four days into his fast, the government agreed to set up a joint committee with an equal number of members from the government and civil society side to draft the Lokpal Bill together. The two sides met several times but could not agree on fundamental elements like including the PM under the purview of the Lokpal. Eventually, both sides drafted their own version of the Bill.

The government has introduced its version in Parliament in this session. Team Anna is up in arms and calls the government version the "Joke Pal Bill." Anna Hazare declared that he would begin another fast in Delhi on August 16. Hours before he was to begin his hunger strike, the Delhi Police detained and later arrested him. There are widespread protests all over the country against his arrest.         

The website of the India Against Corruption movement calls the Lokpal Bill of the government an "eyewash" and has on it a critique of that government Bill. 

A look at the salient features of Jan Lokpal Bill:


1. An institution called LOKPAL at the centre and LOKAYUKTA in each state will be set up 

2. Like Supreme Court and Election Commission, they will be completely independent of the governments. No minister or bureaucrat will be able to influence their investigations.

3. Cases against corrupt people will not linger for years anymore: investigations in any case will have to be completed within one year, and the trial completed within the following year, so that a corrupt politician, officer or judge is sent to jail within two years.

4. The loss that a corrupt person caused to the government will be recovered at the time of conviction. 

5. How will it help the common citizen? If any work of any citizen is not done in the prescribed time in any government office, Lokpal will impose a financial penalty on the guilty officers, which will be given as compensation to the complainant.

6. So, you could approach the Lokpal if your ration card, passport or voter card is not being issued, if the police are not registering your case, or if any other work is not being done in the prescribed time. Lokpal will have to get it done within a month. You could also report any case of corruption to the Lokpal, such as rations being siphoned off, poor-quality roads being constructed, or panchayat funds being siphoned off. Lokpal will have to complete its investigation within a year, the trial will be over in the following year, and the guilty will go to jail within two years.

7. But won't the government appoint corrupt and weak people as Lokpal members? That won't be possible, because its members will be selected by judges, citizens and constitutional authorities, not by politicians, through a completely transparent and participatory process.

8. What if some officer in Lokpal becomes corrupt? The entire functioning of Lokpal/ Lokayukta will be completely transparent. Any complaint against any officer of Lokpal shall be investigated and the officer dismissed within two months.

9. What will happen to existing anti-corruption agencies? CVC, departmental vigilance and anti-corruption branch of CBI will be merged into Lokpal. Lokpal will have complete powers and machinery to independently investigate and prosecute any officer, judge or politician. 

10. It will be the duty of the Lokpal to provide protection to those who are being victimized for raising their voice against corruption.