Is it possible to decompose a neural network?












For some arbitrary configuration of a neural network, say a basic multilayer perceptron, is there a way to break the network down into a number of smaller networks, compute them individually, and recombine the results?



Matrix multiplication allows something like this: given two 16x16 matrices, you can break them into 8x8 blocks, multiply the individual blocks, and sum the partial products to get the same result as the full 16x16 multiplication, without doing any redundant computation.
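The block decomposition described here can be sketched in NumPy (the 16x16 size and 8x8 blocks are just the numbers from the question; any block size that divides the matrix dimension works):

```python
import numpy as np

def block_matmul(A, B, bs):
    """Multiply A @ B by splitting both matrices into bs x bs blocks
    and accumulating block products: C[i,j] = sum over k of A[i,k] @ B[k,j]."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, bs):
        for j in range(0, n, bs):
            for k in range(0, n, bs):
                C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((16, 16))
B = rng.standard_normal((16, 16))
assert np.allclose(block_matmul(A, B, 8), A @ B)
```

Each 8x8 block product could in principle be handed to a fixed-size multiplier; only the accumulation of partial products happens outside it.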



I am wondering whether a similar mechanism exists for someone who has a hardware multiplier capable of computing a network of a fixed size and wants to use it to compute a larger network.
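One way this could work (a sketch under the assumption that the hardware unit computes a fixed T x T matrix-vector product, not any real accelerator's interface): each MLP layer is f(Wx + b), and only the Wx part needs the multiplier, so a large weight matrix can be tiled into T x T pieces whose partial products are accumulated before the nonlinearity is applied once at the end. The nonlinearity itself cannot be split across tiles, because f acts elementwise on the full pre-activation sum:

```python
import numpy as np

def hw_tile_matvec(W_tile, x_tile):
    # Stand-in for the hypothetical fixed-size hardware multiplier
    # (a T x T matrix times a length-T vector).
    return W_tile @ x_tile

def tiled_layer(W, x, b, T, f=np.tanh):
    """Compute f(W @ x + b) using only T x T matrix-vector products."""
    n_out, n_in = W.shape
    pre = np.zeros(n_out)
    for i in range(0, n_out, T):
        for j in range(0, n_in, T):
            # Accumulate the partial product of one weight tile.
            pre[i:i+T] += hw_tile_matvec(W[i:i+T, j:j+T], x[j:j+T])
    return f(pre + b)

rng = np.random.default_rng(1)
W = rng.standard_normal((32, 32))
x = rng.standard_normal(32)
b = rng.standard_normal(32)
assert np.allclose(tiled_layer(W, x, b, T=8), np.tanh(W @ x + b))
```

Stacking such layers reproduces the full forward pass, so a fixed-size unit can evaluate an arbitrarily large MLP at the cost of more passes through the multiplier, not a different result.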



Thanks for any help you can offer!
































  • Maybe relevant: iphome.hhi.de/samek/pdf/MonICML16.pdf.
    – Martín-Blas Pérez Pinilla
    Dec 4 '18 at 8:03










  • if you have two 16x16 matrices, you can break them into 8x8 blocks and multiply the individual blocks together... True, but irrelevant: your naive algorithm requires the same number of multiplications. The Strassen algorithm (en.wikipedia.org/wiki/Strassen_algorithm) is a bit better, but more complex.
    – Martín-Blas Pérez Pinilla
    Dec 4 '18 at 8:03












  • That was kind of what I was getting at. I don't necessarily need a more efficient algorithm; if the best I can do is some operation of equal complexity, that's fine as well. I just want to know what the best that can be done is.
    – Zephyr
    Dec 4 '18 at 13:39


































































neural-networks
















asked Dec 3 '18 at 20:37









Zephyr

1012



























































