A function with a non-zero derivative, with an inverse function that has no derivative.
While studying calculus, I encountered the following statement:
"Given a function $f(x)$ with $f'(x_0)neq 0$, such that $f$ has an inverse in some neighborhood of $x_0$, and such that $f$ is continuous on said neighborhood, then $f^{-1}$ has a derivative at $f(x_0)$ given by:
$${f^{-1}}'(x_0)=frac{1}{f'(x_0)}$$
My question is - why does $f$ have to be continuous on a whole neighborhood of $x_0$ and not just at $x_0$? Is there some known counter-example for that?
calculus derivatives proof-explanation inverse-function inverse-function-theorem
Comments:
– José Carlos Santos: Welcome to MSE. Nice first question!
– user21820: Example 1 in this post shows that continuity is not necessary (for pointwise differentiability of the inverse).
4 Answers
The suggestion in the title isn't how it'll work. Instead of having an inverse that doesn't have a derivative, we'll fail to have a continuous inverse. Also, the required condition for the theorem isn't just that $f$ is continuous on an interval - it's that $f'$ is continuous on an interval around the key point.
Example: $f(x)=\begin{cases}x+2x^2\sin\frac 1x&x\neq 0\\0&x = 0\end{cases}$.
This $f$ is differentiable everywhere, with derivative $1$ at zero, but it doesn't have an inverse in any neighborhood of zero. Why? Because it isn't monotone on any neighborhood of zero. We have $f'(x)=1+4x\sin\frac1x-2\cos\frac1x$ for $x\neq 0$, which is negative whenever $\frac1x\equiv 0\pmod{2\pi}$. We can find a one-sided inverse $g$ with $f(g(x))=x$, but this $g$ will necessarily have infinitely many jump discontinuities near zero.
The calculation of the derivative of $f^{-1}$ is just an application of the chain rule. The real meat of the inverse function theorem is the existence of a differentiable inverse.
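As a small illustration (my addition, not part of the original answer; it assumes NumPy), here is a numerical check that $f'$ takes negative values at points arbitrarily close to $0$, so $f$ cannot be monotone - and hence cannot be invertible - on any neighborhood of $0$:

    import numpy as np

    def f_prime(x):
        # f'(x) = 1 + 4 x sin(1/x) - 2 cos(1/x) for x != 0, and f'(0) = 1
        return 1 + 4 * x * np.sin(1 / x) - 2 * np.cos(1 / x) if x != 0 else 1.0

    # At x_n = 1/(2*pi*n) we have sin(1/x_n) = 0 and cos(1/x_n) = 1,
    # so f'(x_n) = 1 - 2 = -1 < 0, no matter how small x_n is.
    for n in (1, 10, 100, 1000):
        x_n = 1 / (2 * np.pi * n)
        print(f"x = {x_n:.2e}   f'(x) = {f_prime(x_n):+.4f}")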
– Ran Kiri: Yes, I can see it now. Thank you very much!
First off, any function has an inverse "at $x_0$" because we can just assign $f(x_0)$ the value $x_0$; it's really meaningless to talk about an inverse existing at a single point. We need a whole neighbourhood because the derivative is defined by a limit, so we must be able to "approach" $x_0$ arbitrarily closely.
The continuity condition is not necessary. It's enough that $f$ be injective on some neighborhood of $x_0$. That said, if your function has a sequence of jump discontinuities near $x_0$, it may happen that there is no open interval $U$ around $x_0$ for which $f(U)$ is also an interval. This means that $f^{-1}$ might be defined on a strange domain, though we can still technically differentiate it to get the desired result.
Formally, the statement you would need to prove is the following:
Let $A$ and $B$ be subsets of $\mathbb R$, and let $f:A\rightarrow B$ and $g:B\rightarrow \mathbb R$. Suppose that $x_0\in A$ is an accumulation point of $A$ and that $f(x_0)$ is an accumulation point of $B$. Then:
1. If two of the derivatives $f'(x_0)$, $g'(f(x_0))$, and $(g\circ f)'(x_0)$ exist and are non-zero, then the third exists as well.
2. If all of the derivatives exist, then $(g\circ f)'(x_0)=g'(f(x_0))\cdot f'(x_0).$
Once you have this statement, you can apply it with $g=f^{-1}$. Note that this works even if $f$ isn't defined on an interval around $x_0$ - it's okay as long as there are enough points to define the relevant limit towards $x_0$.
Granted, it is a bit unusual to talk about derivatives on sets that aren't open, but there are no technical limitations preventing it, though the proof of the suggested lemma is a pain.
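To make the application concrete (this worked step is my addition, not part of the original answer): take $B=f(A)$ and $g=f^{-1}:B\to A\subseteq\mathbb R$. Then $(g\circ f)(x)=x$ for all $x\in A$, so $(g\circ f)'(x_0)=1\neq 0$. Since $f'(x_0)$ also exists and is non-zero, part 1 gives the existence of $g'(f(x_0))$, and part 2 gives $1=(g\circ f)'(x_0)=g'(f(x_0))\cdot f'(x_0)$, i.e. $\left(f^{-1}\right)'(f(x_0))=\frac{1}{f'(x_0)}$.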
I think that as long as $f^{-1}$ is well-defined on a neighborhood of $f(x_0)$, and continuous at $f(x_0)$, there is no issue.
Indeed, $f(f^{-1}(f(x_0)+h))=f(x_0)+h$, so $h=f(f^{-1}(f(x_0)+h))-f(x_0) \sim f'(x_0)\,(f^{-1}(f(x_0)+h)-x_0)$, and the conclusion (of differentiability and value of the derivative) follows.
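Not part of the original answer - a quick numerical sketch of the conclusion, with my own choice of test function (the strictly increasing $f(x)=x^3+x$, so $f^{-1}$ exists everywhere) and a simple bisection-based inverse: the difference quotient of $f^{-1}$ at $f(x_0)$ approaches $1/f'(x_0)$.

    def f(x):
        return x**3 + x                     # strictly increasing; f'(x) = 3 x^2 + 1

    def f_inv(y, lo=-10.0, hi=10.0):
        # invert f by bisection (valid because f is strictly increasing on [lo, hi])
        for _ in range(200):
            mid = (lo + hi) / 2
            if f(mid) < y:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    x0 = 0.5
    y0 = f(x0)
    exact = 1 / (3 * x0**2 + 1)             # 1 / f'(x0)

    for h in (1e-1, 1e-2, 1e-3, 1e-4):
        # difference quotient of f^{-1} at f(x0)
        quotient = (f_inv(y0 + h) - f_inv(y0)) / h
        print(f"h = {h:.0e}   quotient = {quotient:.6f}   1/f'(x0) = {exact:.6f}")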