My understanding is this:
1) Light comes into the camera
2) Light gets converted to digital values
3) Camera does a wee bit of processing
***If you are shooting JPG***
4) Additional processing occurs to adjust contrast, sharpness, noise reduction, saturation, white balance and probably a few other things I'm missing.
5) RAW data is compressed into JPG format
6) JPG data is written to the memory card
***If you are shooting RAW***
4) "RAW" data is compressed and written in your camera's RAW format along with metadata about the camera's settings for values like white balance, saturation, etc.
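To make the two branches concrete, here's a toy sketch in Python. Nothing here is a real camera API — every function name and the saturation-only "picture style" are made up for illustration:

```python
def basic_processing(data):
    """Step 3: the camera's 'wee bit' of processing (modeled as a copy)."""
    return list(data)

def apply_picture_style(image, settings):
    """JPG-branch processing, reduced to a single made-up saturation knob."""
    return [min(255, int(v * settings.get("saturation", 1.0))) for v in image]

def compress_jpg(image):
    """Stand-in for lossy JPG compression."""
    return bytes(image)

def develop(raw_sensor_data, settings, shoot_jpg):
    """Toy model of the in-camera pipeline described above."""
    image = basic_processing(raw_sensor_data)
    if shoot_jpg:
        image = apply_picture_style(image, settings)
        return ("JPG", compress_jpg(image))
    # RAW branch: keep the sensor data; settings ride along as metadata only
    return ("RAW", {"data": raw_sensor_data, "metadata": settings})
```

The point of the sketch is the fork: the JPG branch bakes the settings into the pixels, while the RAW branch merely records them alongside untouched sensor data.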
When viewing any file, RAW or JPG, your computer must convert it to a displayable set of colors. With JPG, that's pretty straightforward (ignoring the whole subject of colorspaces). With RAW, it's a little more complicated.
The RAW file doesn't map directly to colors you can display. Work needs to be done to demosaic the pixels, because each sensor pixel records only red, only blue, or only green (ignoring Foveon sensors). RAW files are also unadjusted for things like white balance, saturation, etc. The RAW processing software can do the conversion using the values for white balance, saturation, etc. stored with the image. Or it can examine the image and guess at values it thinks are good. Or it can use basic defaults and let you make adjustments. Every RAW conversion package is different in how it does the conversion.
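Here's roughly what "demosaicing" means, shrunk to a single 2x2 RGGB Bayer tile. This is a nearest-neighbor toy, not any real converter's algorithm:

```python
def demosaic_rggb_2x2(tile):
    """Fill in full RGB for one 2x2 RGGB Bayer tile.

    Tile layout (each sensor pixel records only ONE color):
        tile[0][0] = R,  tile[0][1] = G
        tile[1][0] = G,  tile[1][1] = B
    """
    r = tile[0][0]
    g = (tile[0][1] + tile[1][0]) / 2.0  # average the two green samples
    b = tile[1][1]
    # Every pixel in the tile gets the same interpolated triple here;
    # real demosaicers interpolate per-pixel from larger neighborhoods.
    return [[(r, g, b)] * 2 for _ in range(2)]
```

Real algorithms (bilinear, AHD, etc.) are far smarter about edges, but the core job is the same: invent the two missing color channels at every pixel.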
The RAW conversion software that I'm most familiar with (Lightroom and Adobe Camera Raw) starts with basic values for most settings, and you adjust them to suit your needs when you do the conversion. There really isn't a meaningful "default" mode. You could leave the values untouched, but that's a bit like not salting your fries. The cook assumed that you would use some salt, but he didn't know how much, so he left it up to you.
For the purposes of trying to compare a RAW file to a JPG, there really isn't a great apples-to-apples comparison. I suppose that if you used the camera maker's software, it used the same algorithms as the camera, and you set up your converter to respect the settings in the camera at the time, you could get pretty close. But that's just not the way RAW shooters shoot.
My local photography group has struggled with the issue for its photo contests. Their guiding philosophy is that you cannot perform any manipulation that you couldn't do in a darkroom without taking extreme measures. In other words, you can do basic things like crop and adjust exposure. Some people are agitating for the rules to be changed to allow any global processing (adjustments that affect the entire photo) and dust removal.
My own personal philosophy (and I'm not advocating for its use in contests here or at our local club) is that the art of photography includes everything that can be done to create the image that tells the story you want told. That means adjusting the scene - using flash, using diffusers, posing subjects, moving elements. That means using filters - polarizers, graduated neutral density filters, star filters. That means using every possible setting on your camera - second curtain sync, high fps, long shutter speeds. That means using all of the post processing tools at your disposal.
When comparing the work of various photographers, I can see the need to try to keep a relatively level playing field between film and digital shooters and between those that have extensive post-processing capabilities and those that do not. The problem is that film and digital aren't the same. RAW and JPG aren't the same. They require different techniques. The best you can do is come up with some guidelines and hope that people follow them.
I confess that with my contest postings, I use the basic adjustments in Lightroom (generally white balance, exposure, black level, vibrance). The problem is that I couldn't use many of my shots if I didn't. I shoot with the assumption that I will be performing those adjustments.
I purposely overexpose shots because brighter exposures capture more signal and show less noise. However, if I didn't adjust the exposure downward as part of development, my shots would look terrible and not the way I intended when I shot them.
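Pulling exposure back down in development is, to a first approximation, just a multiply on the linear sensor values — one stop is a factor of two. A tiny sketch (the helper is hypothetical, not a Lightroom function):

```python
def adjust_exposure(linear_values, stops):
    """Shift exposure by `stops` EV on linear sensor values.

    Negative stops pull an intentionally bright capture back down;
    each stop doubles (or halves) the recorded light.
    """
    factor = 2.0 ** stops
    return [v * factor for v in linear_values]
```

This is why overexposing and pulling back works so well on RAW: the extra captured signal survives the multiply, while shadows pushed *up* would drag their noise with them.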
I pay almost no attention to the white balance when I shoot. I like to adjust my white balance to what is "correct" for the lighting in post-production, and then I often adjust it a bit further to warm or cool the scene as befits it. If I were shooting JPG, I'd spend more time trying to get it right before I shot, but I'd rather spend that time at my computer at home than out in the field on my vacation.
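White balance on RAW data boils down to per-channel gains applied relative to green before the image is rendered. A toy sketch — the gain numbers below are made up, not any camera's actual multipliers:

```python
def apply_white_balance(rgb, r_gain, b_gain):
    """Scale red and blue relative to green (the reference channel).

    Raising r_gain warms the image; raising b_gain cools it.
    """
    r, g, b = rgb
    return (r * r_gain, g, b * b_gain)
```

Because it's just arithmetic on the stored values, choosing white balance at development time loses nothing versus choosing it in-camera — which is exactly why RAW shooters can defer the decision.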
I add vibrance because my camera (and I think all DSLRs) records RAW files with no additional saturation, making the pictures look relatively unsaturated.
I sharpen my pictures because the anti-aliasing filter in front of a digital camera's sensor purposely blurs the image to prevent moiré patterns (and demosaicing softens it further). It is expected that you will sharpen the image as part of your processing.
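The usual sharpening technique is an unsharp mask: blur a copy, then push each pixel away from the blurred version. Here's a 1-D toy version with a 3-tap box blur (the kernel and default amount are illustrative, not what any converter actually uses):

```python
def unsharp_mask_1d(signal, amount=1.0):
    """Sharpen a 1-D signal via unsharp masking.

    Computes a 3-tap box blur (edges clamped), then adds back
    `amount` times the difference between original and blur.
    """
    n = len(signal)
    blurred = []
    for i in range(n):
        left = signal[max(i - 1, 0)]
        right = signal[min(i + 1, n - 1)]
        blurred.append((left + signal[i] + right) / 3.0)
    # original + amount * (original - blur): edges get exaggerated
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]
```

Running it on a hard step like [0, 0, 9, 9] produces overshoot on both sides of the edge — that exaggerated transition is what our eyes read as "sharp."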
To sum up, RAW conversion isn't straightforward. There isn't a meaningful "default" conversion. For situations in which my shots are going to be compared with JPGs, I just try to stay roughly within the capabilities of what I could do with a JPG, although I'll freely admit that it's much easier for me to make the best decision with a RAW file and before/after views than it is to work it out before shooting in the field.