As the last section concluded, we will be especially interested in solving equations -- particularly polynomial equations -- going forward. One powerful strategy for solving equations involves using inverse functions. Indeed, this strategy applies not only to polynomial equations, but also to equations involving a great many other functions.
As such, this section takes a deep look at several (simple) functions we might encounter in those aforementioned equations we might attempt to solve. Part of that "deep look" includes finding and examining the graphs for these functions -- and the graphs of their related inverse functions if they have them. As we work to understand more complicated functions, it will be especially important to understand how some "simple functions" affect the graphs of other functions when composed with them.
However, before getting into all that, let us clarify what type of inverses we are talking about here. Recall, we have seen a number of "inverses" at this point, so it will be good to be clear about which we currently intend.
All of the different types of inverses we have seen thus far have involved some operation that we sought to somehow "undo", returning things in some way to some appropriate "identity".
In the context of braids, the operation was concatenation and we sought an inverse braid that would undo the effects of a given braid, returning it to the "identity braid" of $n$ parallel strands.
In the context of permutations, the operation was composition and we sought a permutation that would undo a permutation, returning $n$ elements to their original order (just like the "identity permutation" that leaves all positions unchanged).
The additive inverse of a real number under normal addition is the value that can be added to it to produce the additive identity of zero (e.g., $-3$ is the additive inverse of $3$, as $3 + (-3) = 0$).
The multiplicative inverse of a non-zero real number is the value that can be multiplied by it to produce the multiplicative identity of one (e.g., $\frac{2}{3}$ is the multiplicative inverse of $\frac{3}{2}$, as $\frac{3}{2} \cdot \frac{2}{3} = 1$).
The additive (or multiplicative, when it exists) inverse of an integer in $0,1,2,\ldots,(n-1)$ under clock arithmetic is the integer, again in $0,1,2,\ldots,(n-1)$, that produces a sum (or product) whose remainder is $0$ (or $1$) upon division by $n$, respectively.
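For instance, working on a clock with $n = 12$: the additive inverse of $5$ is $7$, since $5 + 7 = 12$ leaves a remainder of $0$ upon division by $12$; and $5$ turns out to be its own multiplicative inverse, since $5 \cdot 5 = 25 = 2 \cdot 12 + 1$ leaves a remainder of $1$.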
Recalling that under composition and an appropriate domain, the function $I(x) = x$ serves as an identity (i.e., $f(I(x)) = x$ and $I(f(x)) = x$ for all $x$), we can say functions $f$ and $g$ are inverses when their composition is identical to $I$. That is to say, $f$ and $g$ are inverses when $f(g(x))=x$ and $g(f(x))=x$ for every $x$ in the respective domain.
In the context of solving equations, this last (compositional) inverse of a function will be most useful.
As a quick example, the functions $f(x) = x^3$ and $g(x) = \sqrt[3]{x}$ are inverses (again, in a compositional sense) as $$\begin{array}{rcccccl} f(g(x)) &=& f(\sqrt[3]{x}) &=& (\sqrt[3]{x})^3 &=& x \quad \textrm{and}\\ g(f(x)) &=& g(x^3) &=& \sqrt[3]{x^3} &=& x \end{array}$$ As a matter of verbiage, we say a function $f$ is invertible if there exists a function $g$ such that $f$ and $g$ are inverses of one another.
As we have done previously, we denote the inverse of an invertible function $f$ by $f^{-1}$. That said, a common source of confusion for students stems from how similar this notation looks to the multiplicative inverse of the value $f(x)$ for some given $x$.
As the multiplicative inverse so-described potentially depends on the value of $x$, let us adopt the following convention: When we write $f^{-1}(x)$, we will mean the (compositional) inverse of $f$, whereas when we write $[f(x)]^{-1}$, we will mean the multiplicative inverse of the value $f(x)$.
So as an example, suppose $f(x)=x^3$. Then, $$f^{-1}(x) = \sqrt[3]{x} \quad \quad \textrm{while,} \quad \quad [f(x)]^{-1} = \frac{1}{x^3}$$
There is more we can say about inverses, however -- especially with regard to their graphs.
We have already noted that $f(x)=x^3$ is invertible with $f^{-1}(x) = \sqrt[3]{x}$. Consider what happens when we draw the graphs of both of these functions on the same set of axes:
There is a striking symmetry between these two graphs -- one that is shared between any pair of inverse functions we might choose to draw. In every case, two functions that are inverses of one another will be symmetric about the identity function (whose graph is given by $y=x$).
To see why, note that $(x,y)$ is a point on the graph of some invertible function $f$ if and only if $(y,x)$ is a point on the graph of $y=f^{-1}(x)$.
As an example -- just consider the graphs of $f(x)=x^3$ (in red) and $f^{-1}(x)=\sqrt[3]{x}$ (in blue) shown above. The points $(-2,-8)$, $(-1,-1)$, $(0,0)$, $(1,1)$, and $(2,8)$ are all points on the graph of the former, while $(-8,-2)$, $(-1,-1)$, $(0,0)$, $(1,1)$, and $(8,2)$ are all on the graph of the latter.
To argue the more general case, suppose $(x_0,y_0)$ is a point on the graph of $y=f(x)$ and $f$ is invertible.
Consequently, $f(x_0) = y_0$, and thus, $f^{-1}(f(x_0)) = f^{-1}(y_0)$.
However, since $f^{-1}$ is the inverse of $f$, we also know $f^{-1}(f(x_0))=x_0$.
Hence, $f^{-1}(y_0) = x_0$, which tells us that $(y_0,x_0)$ is a point on the graph of $y=f^{-1}(x)$.
In our earlier discussion of functions, we noticed that a function must have two properties in order to have an inverse. It must be injective (i.e., one-to-one, so that no two different inputs share the same output) and surjective (i.e., onto, so that every value in the codomain is actually produced as an output).
Recall that "injectivity" played out visually in the horizontal line test, where we noted that a function would not have an inverse when we could find some horizontal line that intersected the graph of the function two or more times, as this would represent one output value $y$ associated with two different inputs $x$.
We can make a parallel argument for why the horizontal line test works by appealing to this new-found symmetry of a function with its inverse across the line $y=x$. Note that any function that fails the horizontal line test (like the red $f(x)=x^2$ below, given the green horizontal line) will have a reflection across the line $y=x$ (here, in blue), which must then intersect some vertical line (the reflection of the horizontal line, shown in orange for this example) at least twice. The reflected graph of course can't correspond to any function, as it has two $y$-values (outputs) associated with a single $x$-value (input)!
Let us not get ahead of our skis, however. Note that both applying the horizontal line test to determine if a function has an inverse and graphing the inverse (presuming it exists) by taking advantage of this symmetry over the line $y=x$ require us to graph the original function in question.
Developing all of the skills needed to graph an arbitrary function takes time, however -- indeed, much of calculus is spent addressing this question. That said, deducing the graphs of many simple functions is quite easy, and we do so next. Of course, with every graph of a function we determine, we immediately get the graph of its inverse function (if it exists), given the symmetry discussed above.
Consider the following types of functions -- in doing so, you may assume $c$ always represents some constant real value in all of the functions that follow:
A Constant Function : $f(x) = c$
With an output that remains the same for all input values in its domain (here, $\mathbb{R}$), constant functions are perhaps the simplest of functions. Because for any single constant function, the outputs are always the same value, the points on its graph must always be at the same height. This means the graph of the function will always be a horizontal line at that common height/output value.
Three example constant functions are shown below, $f(x)=5$, $g(x)=4$, and $h(x)=3$ (in red, magenta, and orange, respectively).
Notice that constant functions fail the horizontal line test spectacularly! We don't just get two points of intersection between the graph and some horizontal line, we get infinitely many! Consequently, constant functions are not invertible.
The Identity Function, $I$ : $I(x) = x$
Recall with respect to composition, the identity function (with domain $\mathbb{R}$) is simply the function that does nothing to its input, returning it again as output, unchanged.
As such, its graph consists of all points $(x,y)$ where $y=x$. Quick inspection reveals these points lie on the line that forms a $45^{\circ}$ angle with both the positive $x$ and $y$-axes, as shown below.
This graph easily passes the horizontal line test, and thus has an inverse. Whether we determine this inverse by reflecting its graph across itself (which leaves it unchanged), or by realizing that to get back to the original inputs from the outputs requires nothing be done -- the result is the same. The identity function is its own inverse.
A Vertical Translation : $f(x) = x + c$
Before getting to why we named this function what we did, let us consider what it does. Note that the output is always $c$ units more than the output of $I(x)$. As outputs are indicated by the heights/$y$-coordinates of points on the graph, the graph of $f(x) = x + c$ should then be identical to the graph of $I(x) = x$, but moved $c$ units up (or down if $c$ is negative).
As such, the functions $f(x)=x+5$ and $g(x)=x+4$ (shown below in orange and magenta) shift the graph of the identity function up by $5$ and $4$, respectively, while the graph of $h(x) = x-3$ (shown in red) shifts the identity function down by $3$ units.
To understand the curious name for functions of this form, note that "translation" stems from the Latin translatus which means "to carry over". In the familiar context of translating a passage of text written in one language to another, we are "carrying over" a set of ideas expressed in the first to the second. Here, we are literally (and rigidly) carrying/moving an entire graph (in this case, the graph of $I(x)=x$) to some other part of the coordinate plane. In this case, that movement is up or down -- hence a "vertical translation".
Here too, these functions obviously pass the horizontal line test and thus have an inverse. Computationally, undoing the addition of $c$ simply requires a subtraction of $c$, telling us that $f^{-1}(x) = x - c$. The inverses of the aforementioned example functions graphed above are shown in various shades of blue below. Observe that for these functions (as for all invertible functions) the graph of each is a reflection of the graph of its inverse across the line $y=x$.
Importantly, we can rewrite each of the inverse functions $f^{-1}(x) = x - c$ alternatively as $f^{-1}(x) = x + (-c)$. In this way, we can see that the inverse of a vertical translation is itself another vertical translation (but in the opposite direction).
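As a quick check with $c = 5$: if $f(x) = x + 5$, then $f^{-1}(x) = x - 5 = x + (-5)$, and indeed $$f(f^{-1}(x)) = (x - 5) + 5 = x \quad \textrm{ and } \quad f^{-1}(f(x)) = (x + 5) - 5 = x$$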
You may have noticed from the graphs above that shifting the identity function up by $c$ is indistinguishable from shifting it left by that same amount. This is a consequence of the particular function whose graph is being translated upwards or downwards (i.e., the identity function $I(x)=x$). We will sometimes want to consider vertical translations of other functions (i.e., a composition of a vertical translation and some other function $g(x)$). Such functions are easily shown to have the form $f(x) = g(x) + c$ and their graphs similarly found to be identical to the graph of the associated $g(x)$, but shifted up $c$ units (or down if $c$ is negative). For many (but not all) of the possible functions we might use as $g(x)$, there will be no confusion over the direction of translation -- it will be clearly vertical, as suggested by the following graphs:
A Reflection over the $x$-axis: $f(x) = -x$
Here, the outputs ($y$-values) are identical in magnitude to their respective inputs ($x$-values), but opposite in sign. As such, the graph of this function and the graph of the identity function (which leaves both magnitude and signs unchanged) are mirror images of one another -- with points on opposite sides of the $x$-axis (which explains the name we have used for this function).
Upon noting that undoing a multiplication by $-1$ requires only that we multiply by $-1$ a second time, we can quickly conclude this function too, is its own inverse. This is affirmed upon looking at the graph, as the reflection of $f(x)=-x$ across the $y=x$ line is itself.
Just like vertical translations, we often will want to consider compositions of a reflection over the $x$-axis with some other function $g(x)$. Such functions of course can be written in the form $f(x) = -g(x)$, with their graphs similarly forming a "mirror image" of the graph of $g(x)$ across the $x$-axis, as suggested by the two examples that follow:
An Important Side Comment: Note that we can use what we have learned above (and what we will soon learn below) to find graphs of certain compositions of different functions. Specifically, we can find the graphs of such compositions as long as we know the graph of the inner-most function and the effect each function subsequently applied will have. For example, suppose we wanted to find the graph of $f(x) = -x^2 + 2$. Notice this is the composition of three functions: a vertical translation, a reflection over the $x$-axis, and $g(x) = x^2$ (whose graph we know). Specifically, if $g(x) = x^2$, $r(x) = -x$, and $t(x) = x+2$, then $f(x) = t(r(g(x)))$. As such, to find the graph of $f(x)$, we start with the graph of $g(x)$, the inner-most function; apply $r(x)$ by reflecting $g(x)$ over the $x$-axis; and then finally apply $t(x)$ by translating the resulting graph up $2$ units, as shown below. (Note: the graph of $f(x)$ we seek is the one shown in bright red on the right.)
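To make the process just described concrete, we can track a single point through the composition. The point $(2,4)$ lies on the graph of $g(x) = x^2$, since $g(2) = 4$. Reflecting over the $x$-axis sends it to $(2,-4)$, as $r(g(2)) = -4$; translating up $2$ units then sends it to $(2,-2)$, as $t(r(g(2))) = -4 + 2 = -2$. Sure enough, $f(2) = -(2)^2 + 2 = -2$.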
A Scaling Function : $f(x) = cx$ (with $c \gt 0$)
Recall the idea of a scale model in architecture or model-building where linear measurements of some real-world object are all multiplied by the same amount (most often a fraction less than one, but not always). Scaling functions work similarly. For these functions, we multiply every input by some constant value $c \gt 0$. (Granted, if $c=1$, the result is indistinguishable from the identity function.)
Note that below, the functions $h(x) = 5x$ and $g(x) = 4x$ (shown in orange and magenta) represent scaling functions where $c \gt 1$. These are called (vertical) dilations. Just as when one's pupils dilate, they increase in size -- here, distances from the $x$-axis to the function are now larger than (and by the same factor) the corresponding distances from the $x$-axis to the identity function.
Similarly, the function $f(x) = \frac{1}{3}x$ (shown in red) is a scaling function where $0 \lt c \lt 1$. For such values of $c$ the scaling function is known instead as a (vertical) contraction, as the distances between the $x$-axis and the function are now smaller than (again, by the same factor) the corresponding distances between the $x$-axis and the identity function.
Note, the restriction that $c \gt 0$ ensures all of these functions are invertible. Interestingly, such inverse functions can always be written as scaling functions themselves, as $f^{-1}(x) = x/c = \frac{1}{c}x$. In this way, the inverse of any dilation is a contraction and vice versa.
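For example, if $f(x) = 4x$ (a dilation), then $f^{-1}(x) = \frac{1}{4}x$ (a contraction), and $$f(f^{-1}(x)) = 4 \cdot \tfrac{1}{4}x = x \quad \textrm{ and } \quad f^{-1}(f(x)) = \tfrac{1}{4} \cdot 4x = x$$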
Much like vertical translations and reflections, we will often find it useful to compose a scaling function with some other function $g(x)$. The effect on vertical distances from the $x$-axis to the points of $g(x)$ is similar -- they are all appropriately scaled by the same factor, as seen in the two examples below:
The Absolute Value Function : $f(x) = |x|$
Defined as the distance from zero, the absolute value must of course always be non-negative. For $x \ge 0$, this distance agrees with the value of $x$, and so $|x| = x$ when $x \ge 0$. However, for $x \lt 0$ this distance agrees in magnitude with $x$, but disagrees with it in sign. We can correct things by multiplying these negative $x$-values by $-1$ yielding $|x| = -x$ when $x \lt 0$.
It is possible to write a formula for $|x|$ consisting of a single expression, namely $|x| = \sqrt{x^2}$, but at best this is perhaps overly clever -- and at worst, it might hide the interpretation of this function as a distance from zero.
Instead, a clearer definition is one that is piecewise defined, where we use multiple functions that are each applied to a separate section of the domain, typically identified by some condition (e.g., $x=3$, $x>7$, $-4 \lt x \le 2$, etc.).
The piecewise defined version of $|x|$ can thus be written as $$|x| = \left\{\begin{array}{ccl} x &\textrm{ if }& x \ge 0\\ -x &\textrm{ if }& x \lt 0 \end{array}\right.$$
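Evaluating this piecewise definition at a couple of inputs shows how the two pieces work together: since $7 \ge 0$, the first piece applies and $|7| = 7$; since $-5 \lt 0$, the second piece applies and $|-5| = -(-5) = 5$.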
Given the above, the graph of $f(x) = |x|$ is easily obtained by putting together "pieces" of two other graphs, the right "half" (i.e., where $x \ge 0$) of the identity function and the left "half" (i.e., where $x \lt 0$) of a reflection over the $x$-axis, as shown below in red:
Notably, as evinced by the green line drawn above, the absolute value function fails the horizontal line test (as $|x| = |-x|$, for every $x \neq 0$), and thus has no inverse.
As we have done with most of the functions above, it will be useful to see how the graph of a more general function $g(x)$ is transformed when composed with the absolute value function (for clarity, when the absolute value is applied last). Again, the "effect" of the absolute value is to preserve values' magnitudes, but force them to be positive.
As such, any points on the graph of $g(x)$ on or above the $x$-axis (where $y \ge 0$) are unchanged by the absolute value and thus, also points on the graph of $f(x) = |g(x)|$. However, points on the graph of $g(x)$ below the $x$-axis (where $y \lt 0$) are reflected by the application of the absolute value to above the $x$-axis in the graph of $f(x)$, as shown in the example below:
Here's a second example, using a different function inside the absolute value:
The Reciprocal Function : $f(x) = \cfrac{1}{x}$ (or equivalently, $f(x) = x^{-1}$)
Understanding the nature of the reciprocal function can largely be distilled down to four cases: What happens when you reciprocate a large positive value, a small positive value, a large negative value, and a small negative value.
Of course, if the magnitude of an input is large the reciprocal will be small -- indeed, vanishingly small as larger and larger magnitude inputs are considered. That is to say, we can keep the magnitude of the reciprocal as small as we want by using inputs of sufficiently large magnitude.
Taking the sign of the inputs into account as well, we note that large positive inputs $x$ are associated with small positive heights/outputs $y$, while large negative inputs $x$ are associated with small negative heights/outputs $y$.
On the flip side, the smaller we make a non-zero input in magnitude, the larger $1/x$ becomes in magnitude. Indeed, this happens without bound! That means we can keep the magnitude of the output as large as we want by keeping the input values $x$ sufficiently close to $x=0$ (excluding $x=0$ itself).
Again taking signs into account, we note that small positive inputs $x$ are associated with large positive heights/outputs $y$, while small negative inputs $x$ are associated with large negative heights/outputs $y$.
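A few quick computations make these four cases concrete: $$\frac{1}{1000} = 0.001, \quad \frac{1}{0.001} = 1000, \quad \frac{1}{-1000} = -0.001, \quad \textrm{ and } \quad \frac{1}{-0.001} = -1000$$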
Couple all of the above with the observation that to "undo" a reciprocation, one must only reciprocate again. This means $1/x$ is its own inverse, and its graph must then be symmetric with respect to the line $y=x$, as can be seen in the graph of $f(x)=1/x$ shown below.
In an interesting connection to the above properties, the word reciprocate stems from the Latin reciprocare which means "rise and fall, move back and forth; reverse the motion of" and the related word reciprocus, which means "returning the same way, alternating".
The behavior of this function for inputs $x$ either close to zero, or large in magnitude, is perhaps most efficiently conveyed in the language of limits, one of the first topics tackled in calculus. We save a full rigorous treatment of limits for that course, but introduce the notation below so that similar behavior in other functions can be described as efficiently as possible (i.e., using as little ink as possible).
First, we introduce the notation for limits at infinity:
When one can keep the height/value of a function $f(x)$ as close as desired to some given value $y=L$ by using sufficiently large positive inputs, we say $f(x)$ has a limit at (positive) infinity, indicating this by writing $$\lim_{x \rightarrow \infty} f(x) = L$$
Alternatively, when one can keep the height/value of $f(x)$ as close as desired to some $y=L$ by using sufficiently large-in-magnitude negative inputs, we say $f(x)$ has a limit at negative infinity, writing instead $$\lim_{x \rightarrow -\infty} f(x) = L$$
Graphically, when one of these two things is true, there will be a horizontal line (called a horizontal asymptote) at $y=L$ to which we can keep the function as close as desired by looking either far enough left or far enough right in the graph.
As we can make the magnitude of $1/x$ as small as we wish (i.e., as close to $y=0$ as desired) by using inputs of sufficient magnitude (whether they be positive or negative), we thus write this more succinctly as $$\lim_{x \rightarrow \infty} \frac{1}{x} = 0 \quad \textrm{ and } \quad \lim_{x \rightarrow -\infty} \frac{1}{x} = 0$$
As such, the $x$-axis serves as a horizontal asymptote for the reciprocal function $1/x$ both on the far-left and the far-right.
Now let us also consider the notation for infinite limits:
When one can keep the height/value of a function $f(x)$ as large as desired by using inputs sufficiently close to but greater than some $x=c$, we say $f(x)$ has a limit of (positive) infinity to the right of $x=c$, writing this more tightly with the notation $$\lim_{x \rightarrow c^+} f(x) = +\infty$$
When everything is identical to the above, except we use inputs sufficiently close to but less than some $x=c$, we say we have a limit of (positive) infinity to the left of $x=c$ instead, writing $$\lim_{x \rightarrow c^-} f(x) = +\infty$$
Similar notations are used when one can keep the height/value of $f(x)$ as large-in-magnitude as desired -- but negative under these conditions. We call this either a limit of negative infinity to the right of $x=c$ when using inputs $x$ greater than $c$: $$\lim_{x \rightarrow c^+} f(x) = -\infty$$
...or a limit of negative infinity to the left of $x=c$ when using inputs $x$ less than $c$: $$\lim_{x \rightarrow c^-} f(x) = -\infty$$
Graphically, all four combinations allow us to draw a vertical line (called a vertical asymptote) at $x=c$ near which we say (rather loosely) the function "flies off to positive (or negative) infinity".
Turning again to the context of the reciprocal function, note we can keep $1/x$ as large as desired by using inputs sufficiently close to, but greater than, zero, and we can keep $1/x$ as large-in-magnitude as desired -- but negative -- by using inputs sufficiently close to, but less than, zero. As such, we can say: $$\lim_{x \rightarrow 0^+} \frac{1}{x} = +\infty \quad \textrm{ and } \quad \lim_{x \rightarrow 0^-} \frac{1}{x} = -\infty$$
Further, the $y$-axis serves as a vertical asymptote for the reciprocal function $1/x$.
We can even "mix-and-match" these ideas. For example, to describe a function $f$ whose outputs can be kept as large (and positive) as desired by keeping the inputs sufficiently large and positive, we use the first equation below. Considering the remaining combinations of signs on inputs and outputs gives us the next three variations. $$\lim_{x \rightarrow \infty} f(x) = \infty \quad \quad \lim_{x \rightarrow -\infty} f(x) = \infty \quad \quad \lim_{x \rightarrow \infty} f(x) = -\infty \quad \quad \lim_{x \rightarrow -\infty} f(x) = -\infty$$ The succinctness of the language and notation of limits pays off when we attempt to describe what the reciprocal function will do when composed with some other function $g(x)$.
As seen below, anywhere $g(x)$ crosses (or is tangent to) the $x$-axis will produce a vertical asymptote in the graph of the reciprocation of $g(x)$. Further -- immediately to the left or right of these locations, one will have an associated limit of either positive or negative infinity, depending on whether the graph of $g(x)$ is above or below the $x$-axis, respectively.
Also, anywhere the graph of $g(x)$ grows to infinity (or falls to negative infinity), the graph of the reciprocal of $g(x)$ will get arbitrarily close to the $x$-axis. If this happens on the far left or right in the graph of $g(x)$, the graph of the reciprocal of $g(x)$ will have a horizontal asymptote on that same side.
All of this may be difficult to digest without seeing some specific concrete examples. As such, consider the following:
Notice how the graph of $g(x)=-x^2+2$ and its reciprocal, as shown below, relate to one another:
There are several things of note in the graphs above: the zeros of $g(x)=-x^2+2$ at $x = \pm\sqrt{2}$ produce vertical asymptotes in the graph of its reciprocal; just inside those asymptotes (where $g(x)$ is positive) the reciprocal flies off to positive infinity, while just outside them (where $g(x)$ is negative) it flies off to negative infinity; and since $g(x)$ falls to negative infinity on both the far-left and the far-right, the reciprocal approaches the $x$-axis from below on both sides, making $y=0$ a horizontal asymptote.
Here's another example:
There are again several things to observe above:
As one final example, consider reciprocating just $g(x)=x^2$:
In this case, note that the following two things are true: since $g(x)=x^2$ touches the $x$-axis only at $x=0$ (and is positive on either side of it), its reciprocal has a vertical asymptote at $x=0$ and flies off to positive infinity on both sides of that line; and since $x^2$ grows without bound on both the far-left and the far-right, its reciprocal approaches the $x$-axis from above on both sides, giving a horizontal asymptote of $y=0$.
A Positive Odd Power Function : $f(x) = x^n$ (where $n \gt 1$ is an odd integer)
We've already seen the graph of $f(x)=x^3$ at the top of this section, but can now argue what we should see in the more general case for power functions when the exponent is an odd integer.
With output $y$-values that must span from large negative values to large positive ones (take a minute to convince yourself that's true), it should be clear that $y=x^n$ increases as $x$ increases -- after all, as long as the sign is preserved (which it is here, due to the odd exponent), products of larger numbers are larger.
Understanding the differences in the rates of that increase as $x$ takes on different values requires some understanding of calculus, but plotting a sufficient number of points (or graphing things on a graphing calculator) should be enough to convince us that the graphs of such functions generally follow a "steep-shallow-steep" path of ascent, as seen below in the red, magenta, and orange graphs:
Note, we exclude addressing the function $f(x)=x^1$ as an odd power, as this is indistinguishable from the identity function.
All of these graphs shown appear to pass the horizontal line test, suggesting each such function $f$ has an inverse. This is confirmed to be true in general when one realizes that raising a value to an odd integer power $n$ can be undone simply by taking the $n^{th}$ root. Thus, if $f(x) = x^n$ then $f^{-1}(x) = \sqrt[n]{x}$.
Reflecting the graphs above over $y=x$ reveals the graphs of the corresponding inverses (each being an odd root function), shown in blue, cyan, and slate, respectively. Of course the previous "steep-shallow-steep" path of ascent under reflection creates a "shallow-steep-shallow" path of ascent for these odd root functions.
Occasionally, a function exhibits a wonderful symmetry in that, when reflected over the $x$-axis, the result is indistinguishable from that same function reflected over the $y$-axis. Algebraically this manifests for a function $f$ when for all $x$ in the domain, $f(-x) = -f(x)$.
As $(-x)^n = -x^n$ when $n$ is a positive odd integer, this symmetry happens for all positive odd power functions. There are other functions we will later consider that exhibit the same type of symmetry. Regardless of which function is involved, we call any function that exhibits this type of symmetry an odd function.
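As a quick verification with $f(x) = x^3$: we have $f(-2) = (-2)^3 = -8$ while $-f(2) = -(2^3) = -8$, so $f(-2) = -f(2)$, just as this symmetry requires.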
A Positive Even Power Function : $f(x) = x^n$ (where $n \gt 0$ is an even integer)
We've already seen the graph of $f(x)=x^2$ near the top of this section, and observed that it is not invertible.
One can argue that the other functions of this form behave similarly. For positive $x$, raising larger inputs to the (even) power $n$ produces larger outputs, while for negative $x$ the reverse is true given the even exponent (the reader should convince themselves this is true). This, along with the fact that the minimum output for any of these functions must be $0^n = 0$ (no output can be negative given the even exponent), means that graphs of even power functions are all roughly "U"-shaped, as suggested by the three such functions plotted below.
Connected to this, we have a symmetry seen in these functions, whereby the graph of the function on the left side of the $y$-axis is a mirror image of the graph on the right side of the $y$-axis. Algebraically, this happens for any function where $f(-x) = f(x)$. As $(-x)^n = x^n$ for all $x$ when $n$ is a positive even value, all positive even power functions exhibit this kind of symmetry. Note, other functions' graphs can have this type of symmetry as well (for example, consider $|x|$). However, in a manner similar to the definition of an odd function, we call any function whose graph is symmetric about the $y$-axis in this manner an even function.
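As a quick verification with $f(x) = x^4$: we have $f(-2) = (-2)^4 = 16$ and $f(2) = 2^4 = 16$, so $f(-2) = f(2)$, in agreement with the symmetry just described.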
Also related to being "U-shaped", note that for positive even power functions, both of the following hold: $\lim_{x \rightarrow -\infty} x^n = \infty$ and $\lim_{x \rightarrow \infty} x^n = \infty$. We often say this more loosely as such functions "open upwards".
Importantly, none of these positive even power functions are invertible with a domain of all reals. One can easily see the functions below don't pass the horizontal line test (consider the green line drawn). More generally, note that for even $n$, we have $x^n = (-x)^n$ for any non-zero $x$. As such, we can always find a horizontal line that intersects the corresponding graph more than once.
That said, if we restrict the domain of the above functions to only non-negative reals (i.e., the right halves of the graphs above), their inverses exist! Note that for any function defined by $f(x) = x^n$ where $n$ is positive and even and whose domain has been restricted to the non-negative reals, the function $g$ (with the same domain) defined by $g(x) = \sqrt[n]{x}$ serves as an inverse. A few examples are shown below:
A Simple Exponential Function : $f(x) = c^x$ (with positive $c \neq 1$)
Graphs of simple exponentials behave qualitatively differently depending on the value of $c$.
If $c > 1$, then the additional multiplication by $c$ seen in $c^x$ as $x$ increases only makes the output larger. Consequently, for such values of $c$ the height of $f(x)=c^x$ will increase as one moves from left to right, as seen below for $c = 3.1$, $2$, and $5/2$ (in orange, magenta, and red, respectively).
Notice that $c^0 = 1$ regardless of the value of $c$, which suggests that all functions of the form $f(x)=c^x$ will pass through the point $(0,1)$. This too is easily observed above.
Additionally, when $c \gt 1$ note that $c^x$ for a large negative value of $x$ is the reciprocal of a large positive value, and thus very small. For example, $$2^{-100} = \frac{1}{2^{100}} = \textstyle{(\frac{1}{2})^{100}} \quad \textrm{(a very small value!)}$$ Knowing what we know about reciprocation, it should then come as no surprise that $$\lim_{x \rightarrow -\infty} c^x = 0 \quad \textrm{ when } c \gt 1$$ Graphically, this of course manifests with the $x$-axis serving as a horizontal asymptote on the far-left of the graph of $f(x)=c^x$ (as can be seen in the graphs above).
With regard to inverse functions, recall that the composition $c^{\log_c x} = x$ and $\log_c c^x = x$, so the inverse of $f(x) = c^x$ is $f^{-1}(x) = \log_c x$.
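As a concrete check with $c = 2$: if $f(x) = 2^x$, then $f^{-1}(x) = \log_2 x$, and (for example) $\log_2 2^5 = 5$ while $2^{\log_2 8} = 8$.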
When $c \gt 1$, we can thus determine the graph of $\log_c x$ by reflecting the above-described graphs (with a horizontal asymptote of $y=0$ on the far-left, a point at $(0,1)$, and unbounded growth on the far-right) to obtain a graph with a vertical asymptote of $x=0$ where the function "flies off to negative infinity", a point at $(1,0)$, and (very slow) unbounded ascent on the far-right. Note that this conclusion is consistent with how the three "blue-ish" graphs appear above.
If $0 \lt c < 1$ instead, the graph of $f(x) = c^x$ has some significant differences when compared to those described above for greater $c$ values.
For example, now the additional multiplications by $c$ seen when $x$ increases serve to make the output smaller instead. Thus, for these values of $c$ the height of $f(x)=c^x$ will decrease as one moves from left to right, as seen below for $c=0.322$, $0.5$, and $3/5$ (in orange, magenta, and red, respectively).
Following parallel arguments to those made previously for $c \gt 1$, we can deduce that for $0 \lt c \lt 1$ the heights of $f(x) = c^x$ grow in an unbounded way as we move left and $\lim_{x \rightarrow \infty} c^x = 0$. As such, the $x$-axis again serves as a horizontal asymptote for the graph, but this time on the far-right.
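For instance, with $c = \frac{1}{2}$ we have $\left(\frac{1}{2}\right)^{10} = \frac{1}{1024}$ (quite small), while $\left(\frac{1}{2}\right)^{-10} = 2^{10} = 1024$ (quite large), consistent with heights that shrink toward zero as we move right and grow without bound as we move left.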
We do still, however, have a point on the graph of $f(x) = c^x$ at $(0,1)$ regardless of whether $c \gt 1$ or $0 \lt c \lt 1$.
All of these differences between graphs of $f(x) = c^x$ for $c \gt 1$ versus $0 \lt c \lt 1$ create related differences between the graphs of their inverses (i.e., logarithmic functions) upon reflection over the line $y=x$.
Specifically, for $0 \lt c \lt 1$, the graph of $f(x) = \log_c x$ will still have a vertical asymptote at $x=0$ but now the function "flies off to positive infinity" (instead of negative infinity). Also, the unbounded growth in $c^x$ for these $c$ values as $x$ moves left turns into (very slow) unbounded descent in the graph of $\log_c x$ (instead of ascent).
As one point of constancy, however -- the graph of $\log_c x$ still always passes through $(1,0)$, regardless of the value of $c$, just as the graph of $c^x$ always passed through $(0,1)$.