You are right, a GPU is very much like a CPU -- both are processors. The difference comes from their application and specialization.
The first thing to note is that a typical laptop contains more processors than just a CPU or a GPU. Many processors act as controllers of various subsystems; your laptop may have more than 20 of them. The difference between them is their purpose. Several of these processors are microcontrollers, controlling the operation of hard drives or implementing memory and I/O interfaces. These are "embedded processors", whose functionality is not visible to the common user. To a computer system developer -- one who assembles these parts to make a laptop -- all these processors are important, as they are used to interface the components.
However, we only hear about the CPU. It is the most powerful and versatile of them all, and it can be programmed by a user, in the sense that you can write a program to run on it. The other processors are hidden -- they are used by the system; you do not control them directly.
GPUs used to be the processors that prepared images for display on the screen. As screens became larger, GPU performance and capabilities increased dramatically -- so much so that for some applications, they are much faster than the CPU itself. Noting this, some GPU companies, notably NVIDIA, started to provide a programming interface to GPUs. This trend is called GP-GPU (general-purpose GPU) computing, meaning that the GPU is exposed to the programmer, who can run applications on it. So now, in addition to "directly" using CPUs, you can use GPUs for your application too.
GPUs also execute instructions just like CPUs do, but they are designed a little differently. Rather than being built so that one program runs fast (the CPU approach), GPUs are good at running lots of threads at once -- even though each individual thread is slow.
The reason behind this is that since the refresh rate of a screen is typically ~60-80 frames per second, a GPU only needs to finish the computation for one pixel within 1/80 of a second. Even if the work per pixel is 10000 instructions, and we assume a 100 Hz refresh rate, a processor running at 1 MHz is enough. That is quite low performance, so GPUs use a lot of tiny, low-performance processors to do the screen-rendering work. Essentially, they are a completely different design point from CPUs: while CPUs are better if they run at higher frequencies, GPUs are better if they have more cores, even if those cores run at much lower frequencies.
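The per-pixel budget above is just arithmetic; a quick sketch of the calculation (using the same illustrative numbers, which are assumptions rather than measurements of any real GPU):

```python
# Per-pixel compute budget implied by the screen refresh rate.
# Both numbers below are the illustrative assumptions from the text.
instructions_per_pixel = 10_000   # assumed work to compute one pixel
refresh_rate_hz = 100             # assumed screen refresh rate

# To keep one pixel up to date, a processor must sustain this many
# instructions per second:
required_rate = instructions_per_pixel * refresh_rate_hz
print(required_rate)  # 1000000 -> a 1 MHz processor per pixel is enough
```

This is why each individual GPU core can afford to be slow: the work for any one pixel is tiny, and the real challenge is doing it for millions of pixels at once.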
Now that programmers have access to GPUs, whole new doors open up. Every application has some serial parts and some parallel parts. For example, if you are doing image processing, MATLAB simulations, etc., there is a lot of parallelism in them. Even though your application had parallel parts, previously you were limited by the parallelism and serial speed of the main processor. Now GPUs provide vast amounts of parallelism: users can map the parallel parts of their applications onto the GPU and accelerate them. Thus the main use of a GPU is as an accelerator.
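How much an accelerator helps depends on how large the parallel part is; this is captured by Amdahl's law. A minimal sketch, where the 90% parallel fraction and the 100x GPU speedup are purely illustrative assumptions:

```python
def amdahl_speedup(parallel_fraction, parallel_speedup):
    """Overall speedup when only the parallel fraction is accelerated.

    The serial fraction runs at its original speed, so it bounds the
    total benefit no matter how fast the accelerator is.
    """
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / parallel_speedup)

# Hypothetical example: 90% of the work is parallel, and the GPU runs
# that part 100x faster than the CPU.
print(round(amdahl_speedup(0.9, 100), 2))  # 9.17
```

Note that even with a 100x-faster accelerator, the overall speedup is capped near 10x here, because the 10% serial part still runs on the CPU at its original speed.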
To summarize, CPUs and GPUs are different processor design points. CPUs are very good at executing serial code, while GPUs are very good (in both power and performance) at parallel code. GPUs are the first readily available accelerator for speeding up the parallel parts of your applications.