Running a Python script automatically when launching a Docker container











Is it possible to run a Python script automatically upon starting a Docker container?

My command to attach to an image is:

docker run -i -t --entrypoint /bin/bash myimage -s

Is there a way to add an additional command that runs a script when the container launches? I would prefer not to use a Dockerfile, as some of the Python modules I use come from private repos and have to be downloaded manually, so a Dockerfile alone would not completely build the image I want.










python docker

asked Nov 6 '15 at 10:52 – GreenGodot

  • docker run -i -t --entrypoint /bin/bash myimage -s python /path/to/python_file.py ?
    – Andrés Pérez-Albela H.
    Nov 6 '15 at 10:57












1 Answer
As a matter of fact there is. Just don't use --entrypoint. Instead:



docker run -it myimage /bin/bash -c /run.sh


Obviously, this assumes that the image itself contains a simple Bash script at the location /run.sh.



#!/bin/bash
command1
command2
command3
...
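
For the question as asked, a minimal sketch of such a /run.sh that simply starts the Python script could look like this (the script path is a placeholder, as in the comment above):

#!/bin/bash
# any manual setup steps go here, then start the script
python /path/to/python_file.py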


If you don't want that, you can mount the current folder inside the running container and run a local script:



docker run -it -v $(pwd):/mnt myimage /bin/bash -c /mnt/run.sh


ENTRYPOINT vs. CMD seems to be a common cause of confusion.



Think about it this way:





  • ENTRYPOINT hard-codes the executable the container always runs; it is only replaced when you override it explicitly with --entrypoint.


  • CMD supplies the default command (or the default arguments to ENTRYPOINT); anything you append to docker run replaces it (see the sketch below).
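
For instance, here is a rough sketch of how the two interact, assuming a hypothetical image built with ENTRYPOINT ["python"] and CMD ["/app/main.py"] (the image and file names are made up for illustration):

# no arguments: the default CMD is appended, so this runs `python /app/main.py`
docker run -it example-image

# appended arguments replace CMD but keep ENTRYPOINT: this runs `python /app/other.py`
docker run -it example-image /app/other.py

# only an explicit --entrypoint replaces the entrypoint itself
docker run -it --entrypoint /bin/bash example-image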


Docker containers can be set up to run as self-contained applications. If you're so inclined, you could create throwaway containers that accept command line arguments (a file for example), pull that in, work their magic and return you a processed file. Some people use this to set up build environments with different configurations and just run them on demand, not cluttering up their host machine.
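
As a rough example of that throwaway-container idea (the image and file names here are hypothetical):

# mount the current directory, let the container process one file, then discard the container
docker run --rm -v $(pwd):/data example-image /data/input.csv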



However, your usage scenario feels tedious, since you are apparently doing the setup by hand. It would be easier to set the download credentials as environment variables, like this:



docker run -d -e "DEEP=purple" -e "LED=zeppelin" myimage /bin/bash -c /run.sh


You can then reference those variables inside the script. This way, you get the best of both worlds. For added security, your run.sh should unset the environment variables once they have been used, like this:



#!/bin/bash
command1
command2
command3
...
unset DEEP
unset LED
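
As a concrete sketch of that idea, run.sh might use the variables to fetch the private modules before discarding them; the repository URL, package location and script path below are placeholders:

#!/bin/bash
# install the private module using the injected credentials (placeholder URL)
pip install "git+https://${DEEP}:${LED}@example.com/org/private-repo.git"

# drop the credentials from the environment before starting the application
unset DEEP
unset LED

python /path/to/python_file.py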





answered Nov 6 '15 at 12:07, edited Nov 6 '15 at 12:30 – herrbischoff
  • this is against docker best practices as it does not handle signal forwarding - docs.docker.com/engine/userguide/eng-image/…
    – Vincent De Smet
    Jul 26 '16 at 4:41






  • @VincentDeSmet: How about creating a new answer with the relevant info and an example script then? This would benefit everyone.
    – herrbischoff
    Jul 26 '16 at 8:09
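
Regarding the signal-forwarding concern in the first comment: a common mitigation (not part of the original answer) is to exec the long-running process at the end of run.sh, so that the final process, rather than an intermediate shell, receives signals such as the SIGTERM sent by docker stop. A minimal sketch:

#!/bin/bash
# setup steps ...

# exec replaces the shell process, so signals reach the Python process directly
exec python /path/to/python_file.py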












