Is the caffemodel file alone enough to recover the prototxt?
I'm asking because we need to encrypt a model. If encrypting only the prototxt were enough to keep it safe, then a simple encryption scheme would solve the problem.
Yes, the prototxt can be recovered from the caffemodel file. Code below:
#coding=utf-8
"""
@author: kangkai
"""
from caffe.proto import caffe_pb2


def toPrototxt(modelName, deployName):
    with open(modelName, "rb") as f:
        caffemodel = caffe_pb2.NetParameter()
        caffemodel.ParseFromString(f.read())

    # Handle both old-style (layers) and new-style (layer) definitions.
    # The blobs field of each LayerParameter holds the trained weights;
    # clearing it leaves only the network structure.
    for item in caffemodel.layers:
        item.ClearField("blobs")
    for item in caffemodel.layer:
        item.ClearField("blobs")

    # print(caffemodel)
    with open(deployName, "w") as f:
        f.write(str(caffemodel))


if __name__ == "__main__":
    modelName = "facenet_iter_14000.caffemodel"
    deployName = "facenet_deploy.prototxt"
    toPrototxt(modelName, deployName)
What the caffemodel stores are the Net parameters; it can be deserialized into a NetParameter message, and all the data is right there.
You can also implement your own serialization and deserialization of blob, layer, and net, and encrypt the model that way; a sketch of the idea follows.
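A minimal Python 3 sketch of that idea is below. The XOR transform, the key, and the helper names are illustrative placeholders rather than a real cipher (use something like AES in practice); the point is simply to transform the serialized NetParameter bytes before writing them out and to invert the transform before calling ParseFromString.

# Minimal sketch of custom (de)serialization with a trivial obfuscation step.
# The XOR "cipher" and KEY are placeholders for illustration only; a real
# deployment should use a proper cipher such as AES.
from caffe.proto import caffe_pb2

KEY = b"my-secret-key"  # hypothetical key, replace with your own secret


def xor_bytes(data, key=KEY):
    # XOR with a repeating key; applying the same transform twice restores the data.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


def encrypt_caffemodel(plain_path, enc_path):
    net = caffe_pb2.NetParameter()
    with open(plain_path, "rb") as f:
        net.ParseFromString(f.read())                # ordinary caffemodel in
    with open(enc_path, "wb") as f:
        f.write(xor_bytes(net.SerializeToString()))  # obfuscated bytes out


def load_encrypted_caffemodel(enc_path):
    net = caffe_pb2.NetParameter()
    with open(enc_path, "rb") as f:
        net.ParseFromString(xor_bytes(f.read()))     # undo the XOR, then parse
    return net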
In C++ the same thing can be done directly, as shown below. Besides the prototxt, you can also get the weights of every layer:
caffe::NetParameter proto;
caffe::ReadProtoFromBinaryFile("/XXXX.caffemodel", proto);
caffe::WriteProtoToTextFile(proto, "/XXX.txt");
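For example, here is a short Python sketch (reusing the caffe_pb2 parsing from the answer above; the file name is the one from that example) that reads the weight shapes and values of every layer straight out of the blobs field, without needing the Caffe runtime:

# Sketch: print the shape and first few weight values of every layer's blobs,
# read directly from the parsed NetParameter.
from caffe.proto import caffe_pb2

net = caffe_pb2.NetParameter()
with open("facenet_iter_14000.caffemodel", "rb") as f:
    net.ParseFromString(f.read())

for layer in net.layer:  # new-style layers; very old models use net.layers instead
    for i, blob in enumerate(layer.blobs):
        # Newer models store dimensions in blob.shape; legacy ones use num/channels/height/width.
        dims = list(blob.shape.dim) if len(blob.shape.dim) else [blob.num, blob.channels, blob.height, blob.width]
        print(layer.name, "blob", i, "shape", dims, "first values", list(blob.data[:5]))

A fuller, self-contained C++ version of the prototxt-recovery approach follows.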
#include <fcntl.h>
#include <io.h>        // _open/_close on Windows
#include <cstdio>
#include "caffe.pb.h"  // generated from caffe.proto by protoc
#include <google/protobuf/message.h>
#include <google/protobuf/text_format.h>
#include <google/protobuf/io/coded_stream.h>
#include <google/protobuf/io/zero_copy_stream.h>
#include <google/protobuf/io/zero_copy_stream_impl.h>
#include <google/protobuf/io/gzip_stream.h>
#include <iostream>
using namespace caffe;
using google::protobuf::io::FileInputStream;
using google::protobuf::io::FileOutputStream;
using google::protobuf::io::ZeroCopyInputStream;
using google::protobuf::io::CodedInputStream;
using google::protobuf::io::ZeroCopyOutputStream;
using google::protobuf::io::CodedOutputStream;
using google::protobuf::io::GzipOutputStream;
using google::protobuf::Message;
using namespace std;
bool ReadProtoFromBinaryFile(const char* file, Message* net) {
    int fd = _open(file, O_RDONLY | O_BINARY);
    if (fd == -1) return false;
    ZeroCopyInputStream* raw_input = new FileInputStream(fd);
    CodedInputStream* coded_input = new CodedInputStream(raw_input);
    bool success = net->ParseFromCodedStream(coded_input);
    delete coded_input;
    delete raw_input;
    _close(fd);
    return success;
}
void WriteProtoToTextFile(const Message& proto, const char* filename) {
    int fd = _open(filename, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    FileOutputStream* output = new FileOutputStream(fd);
    google::protobuf::TextFormat::Print(proto, output);
    delete output;
    _close(fd);
}
int main() {
    caffe::NetParameter net;
    bool success = ReadProtoFromBinaryFile("*.caffemodel", &net);
    if (!success) {
        printf("error: failed to parse the caffemodel file\n");
        return -1;
    }
    // The blobs of each layer hold the trained weights; drop them so that
    // only the network structure ends up in the text file.
    for (int i = 0; i < net.layer_size(); ++i) {
        net.mutable_layer(i)->clear_blobs();
    }
    WriteProtoToTextFile(net, "*.prototxt");
    return 0;
}
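This listing assumes that caffe.pb.h and caffe.pb.cc have been generated from caffe.proto with protoc and that the program is linked against libprotobuf. _open, _close, and O_BINARY are Windows CRT specifics; on Linux you would use the POSIX open/close from <fcntl.h> and <unistd.h> instead.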