Random walking an ensemble of particles, best leveraging of NumPy?
I've implemented the example given to me in this answer in the script below.
\begin{align*}
x\big(\tfrac{1}{2}\Delta t\big) &= x(0) + \tfrac{1}{2}\Delta t\, p(0)/m \\
p(\Delta t) &= p(0)\exp(-\gamma \Delta t) + \sqrt{1-\exp(-2\gamma \Delta t)}\, \sqrt{m k_B T}\, R \\
x(\Delta t) &= x\big(\tfrac{1}{2}\Delta t\big) + \tfrac{1}{2}\Delta t\, p(\Delta t)/m
\end{align*}
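In the script below these factors are precomputed once as $A = e^{-\gamma\,\Delta t}$, $B = \sqrt{1-e^{-2\gamma\,\Delta t}}\,\sqrt{m k_B T}$ and $C = \tfrac{1}{2}\Delta t/m$; note that the $R$ in the momentum update is a fresh standard-normal draw per substep (`rans[j]` in the code), not the particle radius `R` defined in the script.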
Every `nsubsteps` substeps I save a snapshot of the positions of an ensemble of `nparticles` particles in 3D, then plot $r^2$ versus time; the envelope should increase roughly linearly with time.
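For reference (this is the standard Stokes–Einstein result, not something stated in the original post), in the diffusive limit one expects

$$\langle r^2(t)\rangle = 2\,d\,D\,t, \qquad D = \frac{k_B T}{6\pi\eta R} = \frac{k_B T}{m\gamma},$$

with $d = 3$ here; that is the roughly linear envelope referred to above.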
I will adjust the size of the group to maximize speed. There's overhead in instantiating NumPy arrays, which is a penalty when doing one particle at a time, but if the arrays get too big I am hit by something that might be cache swapping.
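To pick that group size empirically, a minimal sketch (my addition, not part of the original script; the helper name `run_substeps` and the placeholder constants are hypothetical) is to time just the inner substep loop with `timeit` for a range of ensemble sizes and look for where the cost per particle per substep bottoms out:

    # Hypothetical timing helper (not in the original script): measure how the
    # cost per particle per substep changes with the ensemble size.
    import timeit
    import numpy as np
    from numpy import random

    def run_substeps(nparticles, nsubsteps=100, ndims=3,
                     A=0.9998, B=1.0e-24, C=5.0e-7, dt=1.0e-6, m=8.2e-15):
        # Placeholder constants; in the real script A, B, C, dt and m come
        # from the physical parameters defined above.
        x = np.zeros((nparticles, ndims))
        p = np.zeros((nparticles, ndims))
        rans = random.normal(0, 1, (nsubsteps, nparticles, ndims))
        for j in range(nsubsteps):
            xh = x + 0.5 * dt * p / m
            p = p * A + B * rans[j]
            x = xh + C * p
        return x

    for n in (1, 10, 100, 1000, 10000):
        t = timeit.timeit(lambda: run_substeps(n), number=10)
        print("%6d particles: %.3g s per particle per substep" % (n, t / (10.0 * n * 100)))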
I would like to keep the feature of only saving results every 1/n of the iterations, so I don't build unnecessarily long lists.
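One way to keep that saving pattern without growing a Python list is to preallocate the snapshot array; a sketch of a possible alternative (not what the script below does), reusing the variable names and loop from that script:

    # Sketch only: preallocate snapshot storage instead of appending to a list,
    # still saving positions just once per nsubsteps inner steps.
    xs = np.empty((nsteps, nparticles, ndims))
    for i in range(nsteps):
        rans = random.normal(0, 1, (nsubsteps, nparticles, ndims))
        for j in range(nsubsteps):
            xh = x + 0.5 * dt * p / m
            p = p * A + B * rans[j]
            x = xh + C * p
        xs[i] = x   # one snapshot per outer step

The final `np.moveaxis` could then act on `xs` directly, without the `np.array(xs)` copy.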
**Question:** If I want to ramp this up to simulate for much longer times, or to simulate many more particles, am I missing a substantially faster way to do this calculation?
    import numpy as np
    import matplotlib.pyplot as plt
    from numpy import random

    pi = np.pi
    kB = 1.381E-23
    T = 293.
    eta_air = 18.5E-06    # viscosity of air Pa-s (kg m^-1 s^-1)
    eta_water = 1.0E-03   # viscosity of water Pa-s (kg m^-1 s^-1)
    rho = 1000.           # kg/m^3 porous carbon
    R = 0.5 * 2.5E-06     # meters, PM2.5 radius
    m = (4./3) * pi * rho * R**3   # kg
    mkBT = m * kB * T
    gamma = 6 * pi * eta_air * R / m

    nsteps = 10000
    nsubsteps = 100
    dt = 0.000001
    tsteps = dt * nsubsteps * np.arange(nsteps)
    nparticles = 10
    ndims = 3   # keep it real

    A = np.exp(-gamma * dt)
    B = np.sqrt(1 - np.exp(-2*gamma*dt)) * np.sqrt(mkBT)
    C = 0.5 * dt / m

    print "np.exp(-gamma * dt) should be of order unity: ", np.exp(-gamma * dt)

    x, p = np.zeros((2, nparticles, ndims))
    xs = []
    for i in range(nsteps):
        rans = random.normal(0, 1, (nsubsteps, nparticles, ndims))
        for j in range(nsubsteps):
            xh = x + 0.5 * dt * p / m
            p = p * A + B * rans[j]
            x = xh + C * p
        xs.append(x)  # save only every nsubstepth (is that a word?)

    x = np.moveaxis(np.array(xs), 0, 2)
    rsqs = (x**2).sum(axis=1)

    if True:
        plt.figure()
        plt.ylabel('r^2 (um^2)', fontsize=16)
        plt.xlabel('time (sec)', fontsize=16)
        for thing in rsqs:
            plt.plot(tsteps, 1E+12 * thing)
        plt.title('random walk of spherical PM2.5 particles in air', fontsize=18)
        plt.show()
Tags: python, python-2.x, numpy, simulation, physics
It's definitely worth making the switch regardless of whether it is faster, since Python 2.x EOL is less than a year away ;) pythonclock.org – Ludisposed, Mar 6 at 14:51

@Ludisposed yes I know (blush). I'm looking/hoping for a case that forces me to take the plunge. – uhoh, Mar 6 at 14:52