
Freezing the graph and then reloading for inference #26

Open

camer314 opened this issue Mar 10, 2018 · 4 comments

@camer314

Hello,

I want to freeze your model into a .pb file and then later reload it to pass in my own sentences. I am familiar with image training, where operations are named (e.g. 'input' and 'final_result') and I feed the necessary tensors.

However, your model has no named operations, and the testing script does a lot of heavy lifting restoring the checkpoint and loading the word-vector files.

Is it possible to just load a .pb file and pass in a sentence? If so, which tensor do I feed the sentence into, and which tensor do I read back as the result?

Basically, I am trying to get this running on TensorFlow Mobile using only the Java/C# library. For all my other models I have simply frozen the graph to a .pb file, restored it, fed the correct input tensor, and read the result; I want to do the same here but am not sure what the absolute minimum requirement is.

@camer314 (Author)

I think I am on the right track here. At the very least it returns an answer, though given my very small number of training iterations I won't know whether it's the right answer until the model is fully trained.

To start, I gave the input_data placeholder a name:

```python
input_data = tf.placeholder(tf.int32, [batchSize, maxSeqLength], name="camtest_input_data")
```

Then, when I printed the tensor `prediction = (tf.matmul(last, weight) + bias)`, it came back with the name "add:0", which I am taking to be my output tensor.
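(Side note: relying on the auto-generated "add" name feels fragile. A minimal sketch of naming the output op explicitly instead, assuming the same `last`, `weight`, and `bias` tensors from the training script; `"prediction"` is a name I made up:)

```python
# Sketch (my own addition, not in the repo): give the output op a stable,
# explicit name so the frozen graph doesn't depend on the auto-generated
# "add" name. The output tensor could then be fetched as "prediction:0".
prediction = tf.add(tf.matmul(last, weight), bias, name="prediction")
```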

I can now write out the graph using "add" as my output name:

```python
from tensorflow.python.framework import graph_util
from tensorflow.python.platform import gfile

# Convert every variable in the session to a constant, keeping only the
# subgraph needed to compute the "add" output node.
output_graph_def = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), ["add"])

with gfile.FastGFile("frozen_graph.pb", 'wb') as f:
    f.write(output_graph_def.SerializeToString())
```
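(To sanity-check the frozen file, I can reload it and list the node names to confirm that "camtest_input_data" and "add" survived the freeze; a minimal sketch:)

```python
import tensorflow as tf

# Sanity check (my own addition): parse the frozen GraphDef and print the
# names of the nodes it contains.
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
print([node.name for node in graph_def.node])
```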

Now, when I want to pass new sentences in later, I can restore the frozen graph and feed them in like this. It at least doesn't crash, but I am not sure whether it is a complete replica of the testing script that uses checkpoint files. In particular, am I right about which tensors are the crucial input and output?

```python
inputText = FLAGS.sentence
inputMatrix = getSentenceMatrix(inputText)

tf.reset_default_graph()
graph = load_graph(os.path.join(local_path, "frozen_graph.pb"))

with tf.Session(graph=graph) as sess:
    # Feed the sentence matrix into the named placeholder and fetch the
    # logits from the "add" node identified above.
    prediction = sess.graph.get_tensor_by_name('add:0')
    predictedSentiment = sess.run(prediction, {'camtest_input_data:0': inputMatrix})[0]
```
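(`load_graph` above is a helper I haven't shown; a minimal sketch of the standard import_graph_def pattern it follows, assuming `tensorflow` is imported as `tf`:)

```python
def load_graph(frozen_graph_path):
    # Parse the frozen GraphDef and import it into a fresh graph.
    # name="" keeps the original node names (e.g. "camtest_input_data",
    # "add") without adding a prefix.
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(frozen_graph_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="")
    return graph
```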

@adeshpande3 (Owner)

Hmm, I'm actually not too familiar with saving to .pb files, but the general approach you have (naming the tensors, writing the frozen graph with f.write, then load_graph and sess.run) seems right to me. I'll let you know if I find any more info on this.

@camer314 (Author)

Thanks, that would be great.

@dreamibor

@camer314 Is it possible to share the frozen model (.pb)?
