Producing a softmax on two channels in Tensorflow and Keras
My network's penultimate layer has shape (U, C), where C is the number of channels. I'd like to apply the softmax function to each channel separately. For example, if U=2 and C=3, and the layer produces [ [1 2 3], [10 20 30] ], I'd like the output to be softmax(1, 2, 3) for channel 0 and softmax(10, 20, 30) for channel 1.
Is there a way I can do this with Keras? I'm using TensorFlow as the backend.
UPDATE
Please also explain how to ensure that the loss is the sum of both cross entropies, and how I can verify that. (That is, I don't want the optimizer to train only on the loss of one of the softmaxes, but rather on the sum of each one's cross-entropy loss.) The model uses Keras's built-in categorical_crossentropy for loss.
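For concreteness, here is the behavior I'm after, sketched in plain numpy (the softmax helper below is hand-rolled for illustration, not a Keras function):

```python
import numpy as np

def softmax(v):
    """Plain softmax over a 1-D vector."""
    e = np.exp(v - v.max())  # subtract the max for numerical stability
    return e / e.sum()

layer_out = np.array([[1., 2., 3.], [10., 20., 30.]])  # shape (U, C) = (2, 3)

# Desired output: softmax applied to each row independently.
desired = np.stack([softmax(row) for row in layer_out])
```

Each row of `desired` sums to 1, and the two rows are normalized independently of one another.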
python tensorflow keras softmax
"U=2 and C=3" is inconsistent[ [1 10] [2 20] [3 30] ]
, since it is an array with shape(3, 2)
and not(2, 3)
. Please edit your post and make it consistent.
– today
Nov 23 '18 at 13:09
@today I've never mastered the way numpy displays arrays as strings. U=2 and C=3 is correct; if you can edit in the correct numpy string, I'd appreciate it; if not, tell me what it is and I'll edit it myself. – SRobertJames, Nov 23 '18 at 15:26
edited Nov 25 '18 at 20:31 · asked Nov 23 '18 at 4:09 · SRobertJames
"U=2 and C=3" is inconsistent[ [1 10] [2 20] [3 30] ]
, since it is an array with shape(3, 2)
and not(2, 3)
. Please edit your post and make it consistent.
– today
Nov 23 '18 at 13:09
@today I've never mastered the way numpy displays arrays as strings. U=2 and C=3 is correct; if you can edit to the correct numpy string, I'd appreciate it; if not, tell me what it is and I'll edit myself.
– SRobertJames
Nov 23 '18 at 15:26
add a comment |
"U=2 and C=3" is inconsistent[ [1 10] [2 20] [3 30] ]
, since it is an array with shape(3, 2)
and not(2, 3)
. Please edit your post and make it consistent.
– today
Nov 23 '18 at 13:09
@today I've never mastered the way numpy displays arrays as strings. U=2 and C=3 is correct; if you can edit to the correct numpy string, I'd appreciate it; if not, tell me what it is and I'll edit myself.
– SRobertJames
Nov 23 '18 at 15:26
"U=2 and C=3" is inconsistent
[ [1 10] [2 20] [3 30] ]
, since it is an array with shape (3, 2)
and not (2, 3)
. Please edit your post and make it consistent.– today
Nov 23 '18 at 13:09
"U=2 and C=3" is inconsistent
[ [1 10] [2 20] [3 30] ]
, since it is an array with shape (3, 2)
and not (2, 3)
. Please edit your post and make it consistent.– today
Nov 23 '18 at 13:09
@today I've never mastered the way numpy displays arrays as strings. U=2 and C=3 is correct; if you can edit to the correct numpy string, I'd appreciate it; if not, tell me what it is and I'll edit myself.
– SRobertJames
Nov 23 '18 at 15:26
@today I've never mastered the way numpy displays arrays as strings. U=2 and C=3 is correct; if you can edit to the correct numpy string, I'd appreciate it; if not, tell me what it is and I'll edit myself.
– SRobertJames
Nov 23 '18 at 15:26
add a comment |
2 Answers
Define a Lambda layer and use the softmax function from the backend with the desired axis to compute the softmax over that axis:
from keras import backend as K
from keras.layers import Lambda
soft_out = Lambda(lambda x: K.softmax(x, axis=my_desired_axis))(input_tensor)
Update: A numpy array with N dimensions has a shape of (d1, d2, d3, ..., dn). Each of these is called an axis: the first axis (i.e. axis=0) has dimension d1, the second axis (i.e. axis=1) has dimension d2, and so on. The most common case is a 2D array, or matrix, which has shape (m, n): m rows (i.e. axis=0) and n columns (i.e. axis=1). When we specify an axis for an operation, it means the operation should be computed over that axis. Some examples make this clearer:
>>> import numpy as np
>>> a = np.arange(12).reshape(3,4)
>>> a
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
>>> a.shape
(3, 4) # three rows and four columns
>>> np.sum(a, axis=0) # compute the sum over the rows (i.e. for each column)
array([12, 15, 18, 21])
>>> np.sum(a, axis=1) # compute the sum over the columns (i.e. for each row)
array([ 6, 22, 38])
>>> np.sum(a, axis=-1) # axis=-1 is equivalent to the last axis (i.e. columns)
array([ 6, 22, 38])
Now, in your example, the same thing holds for the softmax function: you must first determine over which axis you want to compute the softmax, and then specify it using the axis argument. Further, note that softmax is applied on the last axis (i.e. axis=-1) by default, so if you want to compute it over the last axis you don't need the Lambda layer above. Just use the Activation layer instead:
from keras.layers import Activation
soft_out = Activation('softmax')(input_tensor)
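To see which axis does what without building a model, the backend softmax can be mimicked in numpy (a sketch for illustration; K.softmax performs the equivalent reduction on tensors):

```python
import numpy as np

def softmax(x, axis=-1):
    # subtract the per-slice max for numerical stability
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

a = np.array([[1., 2., 3.], [10., 20., 30.]])
s_rows = softmax(a, axis=-1)  # normalizes each row
s_cols = softmax(a, axis=0)   # normalizes each column
```

With axis=-1 each row sums to 1; with axis=0 each column does.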
Update 2: There is also another way of doing this, using the Softmax layer:
from keras.layers import Softmax
soft_out = Softmax(axis=desired_axis)(input_tensor)
edited Nov 28 '18 at 16:24 · answered Nov 23 '18 at 13:07 · today
@SRobertJames I have updated my answer. Please take a look. – today, Nov 23 '18 at 19:27
Very helpful; Q updated accordingly. Can you please address the last remaining point: how do we ensure we are computing cross-entropy loss on each entry of axis -1 independently, with the loss to optimize being the sum of both cross entropies? – SRobertJames, Nov 25 '18 at 18:43
@SRobertJames It depends on what the loss function is. Do you use a custom loss function or the built-in categorical_crossentropy function? – today, Nov 25 '18 at 18:45
The built-in categorical_crossentropy – SRobertJames, Nov 25 '18 at 20:30
@SRobertJames categorical_crossentropy is applied on the last axis (i.e. axis=-1) by default. As for the errors, edit your question and include the code and the error you get. – today, Nov 26 '18 at 9:58
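What "applied on the last axis" means can be checked numerically: with the question's (U, C) logits, cross entropy along axis=-1 yields one value per row, and the total is simply their sum. A sketch (hand-rolled softmax; the one-hot targets are hypothetical, chosen only for illustration):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

logits = np.array([[1., 2., 3.], [10., 20., 30.]])
probs = softmax(logits, axis=-1)

# Hypothetical one-hot targets, one per row.
targets = np.array([[0., 0., 1.],
                    [0., 1., 0.]])

per_row_ce = -(targets * np.log(probs)).sum(axis=-1)  # one cross entropy per row
total_loss = per_row_ce.sum()                         # the row losses simply add
```

So neither row's loss is favored; the optimizer sees the sum of both cross entropies.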
Use the functional API for multiple outputs: https://keras.io/getting-started/functional-api-guide/
from keras.layers import Input, Softmax
from keras.models import Model
input = Input(...)
...
t = some_tensor          # shape (batch, U, C)
t0 = t[:, :, 0]          # channel 0
t1 = t[:, :, 1]          # channel 1
soft0 = Softmax()(t0)
soft1 = Softmax()(t1)
outputs = [soft0, soft1]
model = Model(inputs=input, outputs=outputs)
model.compile(...)
model.fit(x_train, [y_train0, y_train1], epochs=10, batch_size=32)
answered Nov 23 '18 at 6:50 · Mete Han Kahraman
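With two outputs, Keras computes one loss per output and minimizes their sum (optionally reweighted via the loss_weights argument of compile()), so this setup does train on the sum of both cross entropies. The additivity itself is easy to sketch in numpy (y0/y1 below are hypothetical one-hot targets):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def cross_entropy(y_true, y_pred):
    return -(y_true * np.log(y_pred)).sum()

t0, t1 = np.array([1., 2., 3.]), np.array([10., 20., 30.])  # the two slices
y0, y1 = np.array([0., 0., 1.]), np.array([1., 0., 0.])     # hypothetical targets

loss0 = cross_entropy(y0, softmax(t0))
loss1 = cross_entropy(y1, softmax(t1))
total = loss0 + loss1  # what the optimizer minimizes with two outputs
```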