Parametric tests deal with what you can say about a variable when you know (or assume you know) that its distribution belongs to a known, parametrized family of probability distributions.
Now what the heck does that mean?
Suppose, for example, that the grade distribution in a certain course has historically followed a "bell curve", with the majority of the grades around a middle C, smaller (but roughly equal) numbers of B's and D's, and even smaller numbers of A's and F's.
Frequently, such "bell curves" can be approximated by a well-known probability distribution, called the Normal distribution.
Now pause that train of thought for a moment, and recall that the position and shape of the graph of a quadratic function of the following form depend only on the parameters $a$, $h$, and $k$.$$f(x) = a(x-h)^2+k$$
That is to say, knowing the values of these three parameters for any given quadratic function completely specifies the quadratic, telling us everything about the function (and its graph) that we wish to know (i.e., how to evaluate the function, where the vertex is, what the direction of opening is, what the vertical stretching factor is, etc.).
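To make that concrete, here is a small sketch (the parameter values are made up for illustration) showing that once $a$, $h$, and $k$ are fixed, the entire function is determined:

```python
# A quadratic in vertex form is completely specified by its
# three parameters a, h, and k.

def quadratic(a, h, k):
    """Return the function f(x) = a*(x - h)**2 + k."""
    return lambda x: a * (x - h) ** 2 + k

# a = 2, h = 3, k = -1: opens upward, vertical stretch of 2, vertex at (3, -1)
f = quadratic(2, 3, -1)

print(f(3))  # value at the vertex: -1
print(f(4))  # one unit to the right: 2*(1)**2 - 1 = 1
```

Knowing the three parameters tells us everything: the vertex, the direction of opening, the stretch, and how to evaluate the function anywhere.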
Now going back to the Normal distribution, recall that the function that generates a Normal curve is a bit more complicated -- but it behaves in a similar way. We can completely specify what a given Normal curve looks like by knowing just two parameters: $\mu$ (mu), the mean, and $\sigma$ (sigma), the standard deviation.
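Just as with the quadratic, once $\mu$ and $\sigma$ are known, the whole Normal curve is determined. A minimal sketch, using made-up grade numbers ($\mu = 75$, $\sigma = 10$) purely for illustration:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of the Normal distribution with mean mu and
    standard deviation sigma -- the two parameters fully
    determine the height of the curve at every x."""
    coeff = 1.0 / (sigma * math.sqrt(2 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# A "C-centered" grade curve: mean 75, standard deviation 10
peak = normal_pdf(75, 75, 10)   # the curve is tallest at x = mu
print(peak)
```

Changing $\mu$ slides the curve left or right; changing $\sigma$ makes it wider or narrower -- but nothing else about its shape can vary.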
When we assume that the distribution of some variable (like course grades) follows a well-known distribution (like the Normal distribution) that can be boiled down to knowledge of just a couple of parameters (like mu and sigma), and we then use that assumption in performing some statistical test, we are said to be using a parametric test.
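As one example of this, a one-sample $t$-test is a parametric test: it leans on the assumption that the data come from a Normal distribution. The sketch below computes the $t$ statistic by hand from made-up grade data (both the grades and the hypothesized mean of 75 are invented for illustration):

```python
import math
import statistics

# Made-up sample of course grades
grades = [71, 78, 74, 80, 69, 75, 77, 72, 76, 73]
mu0 = 75  # hypothesized population mean (the "historical" C average)

n = len(grades)
xbar = statistics.mean(grades)          # sample mean
s = statistics.stdev(grades)            # sample standard deviation
t = (xbar - mu0) / (s / math.sqrt(n))   # the t statistic

print(round(t, 3))
```

The test is "parametric" precisely because the statistic's sampling distribution is derived under the assumption that the grades are Normal with some $\mu$ and $\sigma$.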
When you can't make such an assumption about the underlying distribution of a variable before looking at the data, and must instead use more robust (but frequently less powerful) methods to answer the same kinds of questions, you are using a nonparametric test.
Because nonparametric tests don't require the typical assumptions about the nature of the underlying distributions that their parametric counterparts do, they are called "distribution-free".
There are advantages and disadvantages to using nonparametric tests. In addition to being distribution-free, they can often be used for nominal or ordinal data. That said, they are generally less sensitive and less efficient.
Frequently, performing these nonparametric tests requires special ranking and counting techniques.
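To give a flavor of those ranking techniques, the sketch below implements the ranking step that underlies tests like the Mann-Whitney U test, averaging ranks across ties. The two samples are made up for illustration:

```python
# The ranking step behind many nonparametric tests: replace the
# raw values with their ranks in the combined sample, giving tied
# values the average of the ranks they occupy.

def ranks(values):
    """Assign ranks 1..n to values, averaging ranks across ties."""
    indexed = sorted(range(len(values)), key=lambda i: values[i])
    result = [0.0] * len(values)
    i = 0
    while i < len(indexed):
        j = i
        # extend j to cover a run of tied values
        while j + 1 < len(indexed) and values[indexed[j + 1]] == values[indexed[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            result[indexed[k]] = avg
        i = j + 1
    return result

# Two made-up samples to compare
sample_a = [3, 4, 2, 6]
sample_b = [5, 4, 7]
r = ranks(sample_a + sample_b)

# Mann-Whitney U for sample_a: its rank sum minus the smallest
# rank sum it could possibly have
n1 = len(sample_a)
u = sum(r[:n1]) - n1 * (n1 + 1) / 2
print(u)
```

Because the test looks only at ranks, it never needs to know anything about the shape of the underlying distribution -- which is exactly what makes it distribution-free.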