Generating Anime Avatars with a GAN

Preface

This post draws on the article "GAN学习指南：从原理入门到制作生成Demo" (a GAN learning guide, from the underlying principles to a working generation demo).
I previously wrote an introductory piece: "GAN入门介绍".
There is also a video (see the end of this post): "干货 | 直观理解GAN背后的原理：以人脸图像生成为例" (an intuitive explanation of the principle behind GANs, using face-image generation as the example).
The principle behind GANs is simple, but there are many variants, such as DCGAN, CycleGAN, and DeblurGAN, each applied in different settings. This post uses DCGAN to generate anime avatars, with results good enough to pass for real ones.
Supplement: the project resources are available on GitHub.

Principle

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

  • The expression consists of two terms. x denotes a real image, z the noise fed into the G network, and G(z) the image that G generates from that noise.
  • D(x) is the probability D assigns to a real image being real (since x is real, D wants this value as close to 1 as possible). D(G(z)) is the probability D assigns to a G-generated image being real.
  • G's goal: as noted above, D(G(z)) is the probability that D judges G's output to be real, and G wants its images to look as real as possible. In other words, G wants D(G(z)) to be as large as possible, which makes V(D, G) smaller. That is why the outermost operator on the left of the expression is min_G.

So how do we bring image processing into a GAN? By combining a CNN (convolutional neural network) with the GAN; the paper is Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks.
Schematic of the generator (G) network in DCGAN (figure omitted).
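As a rough sketch of what that schematic shows (my own illustration, not code from the repo, assuming the usual 64x64 RGB output), the generator projects the 100-dimensional noise vector z to a small spatial feature map and then doubles its resolution through a chain of stride-2 fractionally-strided ("deconv") layers until it reaches the output size:

```python
# Minimal sketch of the DCGAN generator's size progression (assumed 64x64 output).
import math

def conv_out_size_same(size, stride):
    # output size of a stride-`stride` convolution with SAME padding
    # (the same helper that model.py below uses)
    return int(math.ceil(float(size) / float(stride)))

size = 64                          # target output height/width
sizes = [size]
for _ in range(4):                 # four stride-2 stages between z and the image
    sizes.append(conv_out_size_same(sizes[-1], 2))

print(list(reversed(sizes)))       # [4, 8, 16, 32, 64]
# z (100-dim) -> linear projection, reshape to 4x4x512
#   -> deconv to 8x8x256 -> 16x16x128 -> 32x32x64 -> 64x64x3, tanh output
```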

Implementation

Following "GAN学习指南：从原理入门到制作生成Demo",
I crawled the anime gallery site konachan.net - Konachan.com Anime Wallpapers.

  • Collecting the raw dataset

The crawler code is as follows:

```python
# Crawl images with the requests + BeautifulSoup libraries
import requests
from bs4 import BeautifulSoup
import os
import traceback  # for printing exception tracebacks


def download(url, filename):
    # Skip files that already exist
    if os.path.exists(filename):
        print('file exists!')
        return
    try:
        # stream=True requests the body in chunks, giving access to the raw
        # response from the server
        r = requests.get(url, stream=True, timeout=60)
        r.raise_for_status()
        with open(filename, 'wb') as f:
            # Write the stream to disk chunk by chunk
            for chunk in r.iter_content(chunk_size=1024):
                if chunk:  # filter out keep-alive new chunks
                    f.write(chunk)
                    f.flush()
        return filename
    except KeyboardInterrupt:
        if os.path.exists(filename):
            os.remove(filename)
        raise KeyboardInterrupt
    except Exception:
        traceback.print_exc()  # print the traceback to the console
        if os.path.exists(filename):
            os.remove(filename)


if os.path.exists('imgs') is False:
    os.makedirs('imgs')

start = 1
end = 8000
for i in range(start, end + 1):
    url = 'http://konachan.net/post?page=%d&tags=' % i  # listing page to crawl
    html = requests.get(url).text  # HTML content of the listing page
    soup = BeautifulSoup(html, 'html.parser')
    for img in soup.find_all('img', class_="preview"):
        target_url = 'http:' + img['src']
        filename = os.path.join('imgs', target_url.split('/')[-1])
        download(target_url, filename)
    print('%d / %d' % (i, end))
```

In the end, after roughly half a day, I had crawled 500+ MB of images.

These raw images are too complex for the network to train on, so we need to crop out just the characters' faces. This can be done with OpenCV, and a ready-made project already exists on GitHub:
nagadomi/lbpcascade_animeface

```python
import cv2  # requires the OpenCV package installed in your Python environment
import sys
import os.path
from glob import glob


def detect(filename, cascade_file="lbpcascade_animeface.xml"):
    # lbpcascade_animeface.xml can be found on GitHub; it is one enormous
    # blob of XML (frankly unreadable by hand)
    if not os.path.isfile(cascade_file):
        raise RuntimeError("%s: not found" % cascade_file)

    cascade = cv2.CascadeClassifier(cascade_file)
    image = cv2.imread(filename)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)

    faces = cascade.detectMultiScale(gray,
                                     # detector options
                                     scaleFactor=1.1,
                                     minNeighbors=5,
                                     minSize=(48, 48))
    for i, (x, y, w, h) in enumerate(faces):
        face = image[y: y + h, x: x + w, :]
        face = cv2.resize(face, (96, 96))
        save_filename = '%s-%d.jpg' % (os.path.basename(filename).split('.')[0], i)
        cv2.imwrite("faces/" + save_filename, face)  # write the cropped face


if __name__ == '__main__':
    if os.path.exists('faces') is False:
        os.makedirs('faces')
    file_list = glob('imgs/*.jpg')
    for filename in file_list:
        detect(filename)
```

The only tricky part above is the detectMultiScale function:

```cpp
// C++ signature (OpenCV)
void detectMultiScale(
    const Mat& image,              // input image; usually grayscale, to speed up detection
    CV_OUT vector<Rect>& objects,  // output vector of bounding rectangles for detected objects
    double scaleFactor = 1.1,      // scale factor between two successive scans of the search
                                   // window; 1.1 means the window grows by 10% each pass
    int minNeighbors = 3,          // minimum number of neighboring rectangles that make up a
                                   // detection; groups with fewer than minNeighbors - 1
                                   // rectangles are rejected, and 0 returns every candidate
                                   // rectangle (useful for custom grouping logic)
    int flags = 0,                 // either the default, or CV_HAAR_DO_CANNY_PRUNING, which uses
                                   // Canny edge detection to skip regions with too many or too
                                   // few edges, i.e. regions unlikely to contain a face
    Size minSize = Size(),         // minSize and maxSize bound the size of detected regions
    Size maxSize = Size());
```

The cropped face data (sample image omitted): of the 500+ MB of raw images, only about 60 MB of face crops remain.

  • Training the images
    The code is taken directly from the DCGAN GitHub repository: carpedm20/DCGAN-tensorflow

### main.py

```python
import os
import scipy.misc
import numpy as np

from model import DCGAN
from utils import pp, visualize, to_json, show_all_variables

import tensorflow as tf

flags = tf.app.flags
flags.DEFINE_integer("epoch", 25, "Epoch to train [25]")  # number of training epochs
flags.DEFINE_float("learning_rate", 0.0002, "Learning rate of for adam [0.0002]")  # Adam learning rate, default 0.0002
flags.DEFINE_float("beta1", 0.5, "Momentum term of adam [0.5]")
flags.DEFINE_integer("train_size", np.inf, "The size of train images [np.inf]")  # training-set size
flags.DEFINE_integer("batch_size", 64, "The size of batch images [64]")  # images per iteration
flags.DEFINE_integer("input_height", 108, "The size of image to use (will be center cropped). [108]")  # input image height
flags.DEFINE_integer("input_width", None, "The size of image to use (will be center cropped). If None, same value as input_height [None]")  # input image width
flags.DEFINE_integer("output_height", 64, "The size of the output images to produce [64]")
flags.DEFINE_integer("output_width", None, "The size of the output images to produce. If None, same value as output_height [None]")
flags.DEFINE_string("dataset", "celebA", "The name of dataset [celebA, mnist, lsun]")  # which dataset to process
flags.DEFINE_string("input_fname_pattern", "*.jpg", "Glob pattern of filename of input images [*]")  # input file format
flags.DEFINE_string("checkpoint_dir", "checkpoint", "Directory name to save the checkpoints [checkpoint]")
flags.DEFINE_string("sample_dir", "samples", "Directory name to save the image samples [samples]")  # directory for saved training samples
flags.DEFINE_boolean("train", False, "True for training, False for testing [False]")
flags.DEFINE_boolean("crop", False, "True for training, False for testing [False]")
flags.DEFINE_boolean("visualize", False, "True for visualizing, False for nothing [False]")
FLAGS = flags.FLAGS


def main(_):
    pp.pprint(flags.FLAGS.__flags)

    if FLAGS.input_width is None:
        FLAGS.input_width = FLAGS.input_height
    if FLAGS.output_width is None:
        FLAGS.output_width = FLAGS.output_height

    if not os.path.exists(FLAGS.checkpoint_dir):
        os.makedirs(FLAGS.checkpoint_dir)
    if not os.path.exists(FLAGS.sample_dir):
        os.makedirs(FLAGS.sample_dir)

    # Control GPU memory usage
    # gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
    run_config = tf.ConfigProto()
    run_config.gpu_options.allow_growth = True

    with tf.Session(config=run_config) as sess:
        if FLAGS.dataset == 'mnist':
            dcgan = DCGAN(
                sess,
                input_width=FLAGS.input_width,
                input_height=FLAGS.input_height,
                output_width=FLAGS.output_width,
                output_height=FLAGS.output_height,
                batch_size=FLAGS.batch_size,
                sample_num=FLAGS.batch_size,
                y_dim=10,
                dataset_name=FLAGS.dataset,
                input_fname_pattern=FLAGS.input_fname_pattern,
                crop=FLAGS.crop,
                checkpoint_dir=FLAGS.checkpoint_dir,
                sample_dir=FLAGS.sample_dir)
        else:
            dcgan = DCGAN(
                sess,
                input_width=FLAGS.input_width,
                input_height=FLAGS.input_height,
                output_width=FLAGS.output_width,
                output_height=FLAGS.output_height,
                batch_size=FLAGS.batch_size,
                sample_num=FLAGS.batch_size,
                dataset_name=FLAGS.dataset,
                input_fname_pattern=FLAGS.input_fname_pattern,
                crop=FLAGS.crop,
                checkpoint_dir=FLAGS.checkpoint_dir,
                sample_dir=FLAGS.sample_dir)

        show_all_variables()

        if FLAGS.train:
            dcgan.train(FLAGS)
        else:
            if not dcgan.load(FLAGS.checkpoint_dir)[0]:
                raise Exception("[!] Train a model first, then run test mode")

        # to_json("./web/js/layers.js", [dcgan.h0_w, dcgan.h0_b, dcgan.g_bn0],
        #         [dcgan.h1_w, dcgan.h1_b, dcgan.g_bn1],
        #         [dcgan.h2_w, dcgan.h2_b, dcgan.g_bn2],
        #         [dcgan.h3_w, dcgan.h3_b, dcgan.g_bn3],
        #         [dcgan.h4_w, dcgan.h4_b, None])

        # Below is code for visualization
        OPTION = 1
        visualize(sess, dcgan, FLAGS, OPTION)


if __name__ == '__main__':
    tf.app.run()
```

main.py is the program's entry point: it wires together the model and the image-processing utilities defined in the other files and runs training or testing.
### utils.py

```python
from __future__ import division
import math
import json
import random
import pprint  # pretty-print data structures
import scipy.misc
import numpy as np
from time import gmtime, strftime
from six.moves import xrange

import tensorflow as tf
import tensorflow.contrib.slim as slim

pp = pprint.PrettyPrinter()

# standard deviation used for random weight initialization
get_stddev = lambda x, k_h, k_w: 1 / math.sqrt(k_w * k_h * x.get_shape()[-1])


def show_all_variables():
    # print all trainable variables
    model_vars = tf.trainable_variables()
    slim.model_analyzer.analyze_vars(model_vars, print_info=True)


def get_image(image_path, input_height, input_width,
              resize_height=64, resize_width=64,
              crop=True, grayscale=False):
    # read an image (optionally as grayscale), then crop/resize it
    image = imread(image_path, grayscale)
    return transform(image, input_height, input_width,
                     resize_height, resize_width, crop)


def save_images(images, size, image_path):
    # save the inverse-transformed images
    return imsave(inverse_transform(images), size, image_path)


def imread(path, grayscale=False):
    if grayscale:
        # flatten=True converts the image to grayscale
        return scipy.misc.imread(path, flatten=True).astype(np.float)
    else:
        return scipy.misc.imread(path).astype(np.float)


def merge_images(images, size):
    return inverse_transform(images)


def merge(images, size):
    # tile a batch of images into a size[0] x size[1] grid (e.g. 8x8)
    h, w = images.shape[1], images.shape[2]
    if images.shape[3] in (3, 4):  # RGB(A) images
        c = images.shape[3]
        img = np.zeros((h * size[0], w * size[1], c))
        for idx, image in enumerate(images):
            i = idx % size[1]
            j = idx // size[1]
            img[j * h:j * h + h, i * w:i * w + w, :] = image
        return img
    elif images.shape[3] == 1:  # grayscale images
        img = np.zeros((h * size[0], w * size[1]))
        for idx, image in enumerate(images):
            i = idx % size[1]
            j = idx // size[1]
            img[j * h:j * h + h, i * w:i * w + w] = image[:, :, 0]
        return img
    else:
        raise ValueError('in merge(images,size) images parameter '
                         'must have dimensions: HxW or HxWx3 or HxWx4')


def imsave(images, size, path):
    # np.squeeze removes singleton dimensions from the merged grid
    image = np.squeeze(merge(images, size))
    return scipy.misc.imsave(path, image)


def center_crop(x, crop_h, crop_w, resize_h=64, resize_w=64):
    # crop the central crop_h x crop_w region, then resize
    if crop_w is None:
        crop_w = crop_h
    h, w = x.shape[:2]
    j = int(round((h - crop_h) / 2.))
    i = int(round((w - crop_w) / 2.))
    return scipy.misc.imresize(x[j:j + crop_h, i:i + crop_w], [resize_h, resize_w])


def transform(image, input_height, input_width,
              resize_height=64, resize_width=64, crop=True):
    if crop:
        cropped_image = center_crop(image, input_height, input_width,
                                    resize_height, resize_width)
    else:
        cropped_image = scipy.misc.imresize(image, [resize_height, resize_width])
    # scale pixel values to [-1, 1]
    return np.array(cropped_image) / 127.5 - 1.


def inverse_transform(images):
    # scale pixel values back to [0, 1]
    return (images + 1.) / 2.


def to_json(output_path, *layers):
    # export layer weights as JSON-like JS data (used by the web demo)
    with open(output_path, "w") as layer_f:
        lines = ""
        for w, b, bn in layers:
            layer_idx = w.name.split('/')[0].split('h')[1]

            B = b.eval()

            if "lin/" in w.name:
                W = w.eval()
                depth = W.shape[1]
            else:
                W = np.rollaxis(w.eval(), 2, 0)
                depth = W.shape[0]

            biases = {"sy": 1, "sx": 1, "depth": depth,
                      "w": ['%.2f' % elem for elem in list(B)]}
            if bn is not None:
                gamma = bn.gamma.eval()
                beta = bn.beta.eval()

                gamma = {"sy": 1, "sx": 1, "depth": depth,
                         "w": ['%.2f' % elem for elem in list(gamma)]}
                beta = {"sy": 1, "sx": 1, "depth": depth,
                        "w": ['%.2f' % elem for elem in list(beta)]}
            else:
                gamma = {"sy": 1, "sx": 1, "depth": 0, "w": []}
                beta = {"sy": 1, "sx": 1, "depth": 0, "w": []}

            if "lin/" in w.name:
                fs = []
                for w in W.T:
                    fs.append({"sy": 1, "sx": 1, "depth": W.shape[0],
                               "w": ['%.2f' % elem for elem in list(w)]})

                lines += """
                    var layer_%s = {
                        "layer_type": "fc",
                        "sy": 1, "sx": 1, "out_sx": 1, "out_sy": 1,
                        "stride": 1, "pad": 0,
                        "out_depth": %s, "in_depth": %s,
                        "biases": %s, "gamma": %s, "beta": %s,
                        "filters": %s
                    };""" % (layer_idx.split('_')[0], W.shape[1], W.shape[0],
                             biases, gamma, beta, fs)
            else:
                fs = []
                for w_ in W:
                    fs.append({"sy": 5, "sx": 5, "depth": W.shape[3],
                               "w": ['%.2f' % elem for elem in list(w_.flatten())]})

                lines += """
                    var layer_%s = {
                        "layer_type": "deconv",
                        "sy": 5, "sx": 5,
                        "out_sx": %s, "out_sy": %s,
                        "stride": 2, "pad": 1,
                        "out_depth": %s, "in_depth": %s,
                        "biases": %s, "gamma": %s, "beta": %s,
                        "filters": %s
                    };""" % (layer_idx, 2 ** (int(layer_idx) + 2), 2 ** (int(layer_idx) + 2),
                             W.shape[0], W.shape[3], biases, gamma, beta, fs)
        layer_f.write(" ".join(lines.replace("'", "").split()))


def make_gif(images, fname, duration=2, true_image=False):
    # use the moviepy.editor module to build an animated GIF of the samples,
    # purely for visualization
    import moviepy.editor as mpy

    def make_frame(t):
        # return the frame for time t; fall back to the last image at the end
        try:
            x = images[int(len(images) / duration * t)]
        except:
            x = images[-1]

        if true_image:
            return x.astype(np.uint8)
        else:
            return ((x + 1) / 2 * 255).astype(np.uint8)

    clip = mpy.VideoClip(make_frame, duration=duration)
    clip.write_gif(fname, fps=len(images) / duration)


def visualize(sess, dcgan, config, option):
    # option can be 0-4. option=0 directly samples and saves one grid of
    # generated images; option=1 sweeps each latent dimension (handling mnist
    # separately) and saves the sample grids with save_images() -- this is
    # the option main.py uses; options 2-4 additionally build GIFs.
    image_frame_dim = int(math.ceil(config.batch_size ** .5))
    if option == 0:
        z_sample = np.random.uniform(-0.5, 0.5, size=(config.batch_size, dcgan.z_dim))
        samples = sess.run(dcgan.sampler, feed_dict={dcgan.z: z_sample})
        save_images(samples, [image_frame_dim, image_frame_dim],
                    './samples/test_%s.png' % strftime("%Y%m%d%H%M%S", gmtime()))
    elif option == 1:
        values = np.arange(0, 1, 1. / config.batch_size)
        for idx in xrange(100):
            print(" [*] %d" % idx)
            z_sample = np.zeros([config.batch_size, dcgan.z_dim])
            for kdx, z in enumerate(z_sample):
                z[idx] = values[kdx]

            if config.dataset == "mnist":
                y = np.random.choice(10, config.batch_size)
                y_one_hot = np.zeros((config.batch_size, 10))
                y_one_hot[np.arange(config.batch_size), y] = 1

                samples = sess.run(dcgan.sampler,
                                   feed_dict={dcgan.z: z_sample, dcgan.y: y_one_hot})
            else:
                samples = sess.run(dcgan.sampler, feed_dict={dcgan.z: z_sample})

            save_images(samples, [image_frame_dim, image_frame_dim],
                        './samples/test_arange_%s.png' % (idx))
    elif option == 2:
        values = np.arange(0, 1, 1. / config.batch_size)
        for idx in [random.randint(0, 99) for _ in xrange(100)]:
            print(" [*] %d" % idx)
            z = np.random.uniform(-0.2, 0.2, size=(dcgan.z_dim))
            z_sample = np.tile(z, (config.batch_size, 1))
            # z_sample = np.zeros([config.batch_size, dcgan.z_dim])
            for kdx, z in enumerate(z_sample):
                z[idx] = values[kdx]

            if config.dataset == "mnist":
                y = np.random.choice(10, config.batch_size)
                y_one_hot = np.zeros((config.batch_size, 10))
                y_one_hot[np.arange(config.batch_size), y] = 1

                samples = sess.run(dcgan.sampler,
                                   feed_dict={dcgan.z: z_sample, dcgan.y: y_one_hot})
            else:
                samples = sess.run(dcgan.sampler, feed_dict={dcgan.z: z_sample})

            try:
                make_gif(samples, './samples/test_gif_%s.gif' % (idx))
            except:
                save_images(samples, [image_frame_dim, image_frame_dim],
                            './samples/test_%s.png' % strftime("%Y%m%d%H%M%S", gmtime()))
    elif option == 3:
        values = np.arange(0, 1, 1. / config.batch_size)
        for idx in xrange(100):
            print(" [*] %d" % idx)
            z_sample = np.zeros([config.batch_size, dcgan.z_dim])
            for kdx, z in enumerate(z_sample):
                z[idx] = values[kdx]

            samples = sess.run(dcgan.sampler, feed_dict={dcgan.z: z_sample})
            make_gif(samples, './samples/test_gif_%s.gif' % (idx))
    elif option == 4:
        image_set = []
        values = np.arange(0, 1, 1. / config.batch_size)

        for idx in xrange(100):
            print(" [*] %d" % idx)
            z_sample = np.zeros([config.batch_size, dcgan.z_dim])
            for kdx, z in enumerate(z_sample):
                z[idx] = values[kdx]

            image_set.append(sess.run(dcgan.sampler, feed_dict={dcgan.z: z_sample}))
            make_gif(image_set[-1], './samples/test_gif_%s.gif' % (idx))

        # list() so this also works on Python 3
        new_image_set = [merge(np.array([images[idx] for images in image_set]), [10, 10])
                         for idx in list(range(64)) + list(range(63, -1, -1))]
        make_gif(new_image_set, './samples/test_gif_merged.gif', duration=8)


def image_manifold_size(num_images):
    # grid dimensions (floor and ceiling of sqrt) for laying out num_images samples
    manifold_h = int(np.floor(np.sqrt(num_images)))
    manifold_w = int(np.ceil(np.sqrt(num_images)))
    assert manifold_h * manifold_w == num_images
    return manifold_h, manifold_w
```

That is all of utils.py. It handles the basic image operations: loading and saving images, the forward and inverse pixel transforms, and visualizing the training process with the moviepy module.
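As a quick sanity check of the pixel scaling above (my own example, not from the repo): transform maps pixel values from [0, 255] into the generator's tanh range [-1, 1], and inverse_transform maps samples back to [0, 1] before saving:

```python
import numpy as np

pixels = np.array([0., 127.5, 255.])
scaled = pixels / 127.5 - 1.    # what transform() does after cropping/resizing
print(scaled)                   # [-1.  0.  1.]
print((scaled + 1.) / 2.)       # inverse_transform(): [0.   0.5  1. ]
```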

### ops.py

```python
import math
import numpy as np
import tensorflow as tf

from tensorflow.python.framework import ops

from utils import *

# Compatibility aliases for the summary ops across TensorFlow versions
try:
    image_summary = tf.image_summary
    scalar_summary = tf.scalar_summary
    histogram_summary = tf.histogram_summary
    merge_summary = tf.merge_summary
    SummaryWriter = tf.train.SummaryWriter
except:
    image_summary = tf.summary.image
    scalar_summary = tf.summary.scalar
    histogram_summary = tf.summary.histogram
    merge_summary = tf.summary.merge
    SummaryWriter = tf.summary.FileWriter

if "concat_v2" in dir(tf):
    # concatenate tensors along the given axis
    def concat(tensors, axis, *args, **kwargs):
        return tf.concat_v2(tensors, axis, *args, **kwargs)
else:
    def concat(tensors, axis, *args, **kwargs):
        return tf.concat(tensors, axis, *args, **kwargs)


class batch_norm(object):
    # thin wrapper around the prepackaged tf.contrib.layers.batch_norm
    def __init__(self, epsilon=1e-5, momentum=0.9, name="batch_norm"):
        with tf.variable_scope(name):
            self.epsilon = epsilon
            self.momentum = momentum
            self.name = name

    def __call__(self, x, train=True):
        return tf.contrib.layers.batch_norm(x,
                                            decay=self.momentum,   # moving-average decay
                                            updates_collections=None,
                                            epsilon=self.epsilon,  # guards against division by zero
                                            scale=True,
                                            is_training=train,
                                            scope=self.name)


def conv_cond_concat(x, y):
    """Concatenate conditioning vector on feature map axis."""
    # broadcast y over x's spatial dimensions via a product with an int32-shaped
    # [x_shapes[0], x_shapes[1], x_shapes[2], y_shapes[3]] tensor of ones,
    # then concatenate on the channel axis
    x_shapes = x.get_shape()
    y_shapes = y.get_shape()
    return concat([x, y * tf.ones([x_shapes[0], x_shapes[1], x_shapes[2], y_shapes[3]])], 3)


def conv2d(input_, output_dim, k_h=5, k_w=5, d_h=2, d_w=2, stddev=0.02,
           name="conv2d"):
    # convolution: create truncated-normal weights, convolve (SAME means
    # zero padding), create zero-initialized biases, add them, and return
    with tf.variable_scope(name):
        w = tf.get_variable('w', [k_h, k_w, input_.get_shape()[-1], output_dim],
                            initializer=tf.truncated_normal_initializer(stddev=stddev))
        conv = tf.nn.conv2d(input_, w, strides=[1, d_h, d_w, 1], padding='SAME')

        biases = tf.get_variable('biases', [output_dim],
                                 initializer=tf.constant_initializer(0.0))
        conv = tf.reshape(tf.nn.bias_add(conv, biases), conv.get_shape())

        return conv


def deconv2d(input_, output_shape,
             k_h=5, k_w=5, d_h=2, d_w=2, stddev=0.02,
             name="deconv2d", with_w=False):
    # transposed convolution ("deconvolution"): create random-normal weights,
    # deconvolve, add a bias; if with_w is True return the deconvolution along
    # with the weights and biases, otherwise return just the output
    with tf.variable_scope(name):
        # filter: [height, width, output_channels, in_channels]
        w = tf.get_variable('w', [k_h, k_w, output_shape[-1], input_.get_shape()[-1]],
                            initializer=tf.random_normal_initializer(stddev=stddev))

        try:
            deconv = tf.nn.conv2d_transpose(input_, w, output_shape=output_shape,
                                            strides=[1, d_h, d_w, 1])
        # Support for versions of TensorFlow before 0.7.0
        except AttributeError:
            deconv = tf.nn.deconv2d(input_, w, output_shape=output_shape,
                                    strides=[1, d_h, d_w, 1])

        biases = tf.get_variable('biases', [output_shape[-1]],
                                 initializer=tf.constant_initializer(0.0))
        deconv = tf.reshape(tf.nn.bias_add(deconv, biases), deconv.get_shape())

        if with_w:
            return deconv, w, biases
        else:
            return deconv


def lrelu(x, leak=0.2, name="lrelu"):
    # activation function: element-wise maximum of x and leak*x
    return tf.maximum(x, leak * x)


def linear(input_, output_size, scope=None, stddev=0.02, bias_start=0.0, with_w=False):
    # linear (fully connected) layer: random-normal weight matrix plus bias;
    # if with_w is True return xW + b together with W and b, otherwise xW + b
    shape = input_.get_shape().as_list()

    with tf.variable_scope(scope or "Linear"):
        matrix = tf.get_variable("Matrix", [shape[1], output_size], tf.float32,
                                 tf.random_normal_initializer(stddev=stddev))
        bias = tf.get_variable("bias", [output_size],
                               initializer=tf.constant_initializer(bias_start))
        if with_w:
            return tf.matmul(input_, matrix) + bias, matrix, bias
        else:
            return tf.matmul(input_, matrix) + bias
```

The batch_norm class above is a thin wrapper around the operations bundled in tf.contrib.layers.batch_norm (the screenshot documenting them is omitted here).
Overall, this file defines the helpers used by the model: tensor concatenation, batch normalization, convolution, deconvolution, the activation function, and the linear (fully connected) layer.
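For instance, here is a minimal NumPy sketch (my own, mirroring lrelu in ops.py) showing that the leaky ReLU lets a small fraction of any negative input through instead of zeroing it, which keeps gradients flowing in the discriminator:

```python
import numpy as np

def lrelu(x, leak=0.2):
    # max(x, leak*x) equals leaky ReLU as long as 0 < leak < 1
    return np.maximum(x, leak * x)

print(lrelu(np.array([-2.0, -0.5, 0.0, 3.0])))  # [-0.4 -0.1  0.   3. ]
```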
### model.py

```python
from __future__ import division
import os
import time
import math
from glob import glob  # file path search
import tensorflow as tf
import numpy as np
from six.moves import xrange

from ops import *
from utils import *


def conv_out_size_same(size, stride):
    # output size of a stride-`stride` convolution with SAME padding
    return int(math.ceil(float(size) / float(stride)))


class DCGAN(object):
    def __init__(self, sess, input_height=108, input_width=108, crop=True,
                 batch_size=64, sample_num=64, output_height=64, output_width=64,
                 y_dim=None, z_dim=100, gf_dim=64, df_dim=64,
                 gfc_dim=1024, dfc_dim=1024, c_dim=3, dataset_name='default',
                 input_fname_pattern='*.jpg', checkpoint_dir=None, sample_dir=None):
        """Initialize the default parameters: session, crop, batch_size,
        sample_num, input/output heights and widths, the various dimensions,
        batch normalization for generator and discriminator, dataset name,
        and the grayscale flag, then build the model. Note: if the dataset
        name is 'mnist' the data is loaded directly with load_mnist();
        otherwise it is read from the local data folder, as grayscale when
        it has a single channel.

        Args:
          sess: TensorFlow session
          batch_size: The size of batch. Should be specified before training.
          y_dim: (optional) Dimension of dim for y. [None]
          z_dim: (optional) Dimension of dim for Z. [100]
          gf_dim: (optional) Dimension of gen filters in first conv layer. [64]
          df_dim: (optional) Dimension of discrim filters in first conv layer. [64]
          gfc_dim: (optional) Dimension of gen units for fully connected layer. [1024]
          dfc_dim: (optional) Dimension of discrim units for fully connected layer. [1024]
          c_dim: (optional) Dimension of image color. For grayscale input, set to 1. [3]
        """
        self.sess = sess
        self.crop = crop

        self.batch_size = batch_size
        self.sample_num = sample_num

        self.input_height = input_height
        self.input_width = input_width
        self.output_height = output_height
        self.output_width = output_width

        self.y_dim = y_dim
        self.z_dim = z_dim

        self.gf_dim = gf_dim
        self.df_dim = df_dim

        self.gfc_dim = gfc_dim
        self.dfc_dim = dfc_dim

        # batch normalization: deals with poor initialization, helps gradient flow
        self.d_bn1 = batch_norm(name='d_bn1')
        self.d_bn2 = batch_norm(name='d_bn2')

        if not self.y_dim:
            self.d_bn3 = batch_norm(name='d_bn3')

        self.g_bn0 = batch_norm(name='g_bn0')
        self.g_bn1 = batch_norm(name='g_bn1')
        self.g_bn2 = batch_norm(name='g_bn2')

        if not self.y_dim:
            self.g_bn3 = batch_norm(name='g_bn3')

        self.dataset_name = dataset_name
        self.input_fname_pattern = input_fname_pattern
        self.checkpoint_dir = checkpoint_dir

        if self.dataset_name == 'mnist':
            self.data_X, self.data_y = self.load_mnist()
            self.c_dim = self.data_X[0].shape[-1]
        else:
            self.data = glob(os.path.join("./data", self.dataset_name, self.input_fname_pattern))
            imreadImg = imread(self.data[0])
            # check whether the image is non-grayscale by checking the channel number
            if len(imreadImg.shape) >= 3:
                self.c_dim = imread(self.data[0]).shape[-1]
            else:
                self.c_dim = 1

        self.grayscale = (self.c_dim == 1)

        self.build_model()

    def build_model(self):
        if self.y_dim:
            self.y = tf.placeholder(tf.float32, [self.batch_size, self.y_dim], name='y')

        if self.crop:
            image_dims = [self.output_height, self.output_width, self.c_dim]
        else:
            image_dims = [self.input_height, self.input_width, self.c_dim]

        # placeholder for the real input images
        self.inputs = tf.placeholder(
            tf.float32, [self.batch_size] + image_dims, name='real_images')

        inputs = self.inputs

        # placeholder for the noise tensor z
        self.z = tf.placeholder(tf.float32, [None, self.z_dim], name='z')
        # histogram_summary lets you inspect the weight/bias distributions per layer
        self.z_sum = histogram_summary("z", self.z)

        if self.y_dim:
            self.G = self.generator(self.z, self.y)                   # build the generator G
            self.D, self.D_logits = \
                self.discriminator(inputs, self.y, reuse=False)       # D initialized on real inputs
            self.sampler = self.sampler(self.z, self.y)
            self.D_, self.D_logits_ = \
                self.discriminator(self.G, self.y, reuse=True)        # D evaluated on generated images
        else:
            self.G = self.generator(self.z)
            self.D, self.D_logits = self.discriminator(inputs)        # probability a real image is real
            self.sampler = self.sampler(self.z)
            self.D_, self.D_logits_ = self.discriminator(self.G, reuse=True)  # probability a generated image is real

        self.d_sum = histogram_summary("d", self.D)
        self.d__sum = histogram_summary("d_", self.D_)
        self.G_sum = image_summary("G", self.G)

        def sigmoid_cross_entropy_with_logits(x, y):
            # cross-entropy loss; the keyword name changed across TF versions
            try:
                return tf.nn.sigmoid_cross_entropy_with_logits(logits=x, labels=y)
            except:
                return tf.nn.sigmoid_cross_entropy_with_logits(logits=x, targets=y)

        # D's average loss on real input images (against label 1)
        self.d_loss_real = tf.reduce_mean(
            sigmoid_cross_entropy_with_logits(self.D_logits, tf.ones_like(self.D)))
        # D's average loss on generated images against 0 (the "fake" label)
        self.d_loss_fake = tf.reduce_mean(
            sigmoid_cross_entropy_with_logits(self.D_logits_, tf.zeros_like(self.D_)))
        # G's average loss: generated images against label 1
        self.g_loss = tf.reduce_mean(
            sigmoid_cross_entropy_with_logits(self.D_logits_, tf.ones_like(self.D_)))

        self.d_loss_real_sum = scalar_summary("d_loss_real", self.d_loss_real)
        self.d_loss_fake_sum = scalar_summary("d_loss_fake", self.d_loss_fake)

        # total discriminator loss
        self.d_loss = self.d_loss_real + self.d_loss_fake

        self.g_loss_sum = scalar_summary("g_loss", self.g_loss)
        self.d_loss_sum = scalar_summary("d_loss", self.d_loss)

        t_vars = tf.trainable_variables()  # all trainable variables

        self.d_vars = [var for var in t_vars if 'd_' in var.name]
        self.g_vars = [var for var in t_vars if 'g_' in var.name]

        self.saver = tf.train.Saver()  # saver for the trained variables

    def train(self, config):
        # define the discriminator optimizer d_optim and generator optimizer g_optim
        d_optim = tf.train.AdamOptimizer(config.learning_rate, beta1=config.beta1) \
            .minimize(self.d_loss, var_list=self.d_vars)
        g_optim = tf.train.AdamOptimizer(config.learning_rate, beta1=config.beta1) \
            .minimize(self.g_loss, var_list=self.g_vars)
        try:
            tf.global_variables_initializer().run()
        except:
            tf.initialize_all_variables().run()

        # merge the generator-related and discriminator-related summaries into
        # one op each, and write them to the event file
        self.g_sum = merge_summary([self.z_sum, self.d__sum,
                                    self.G_sum, self.d_loss_fake_sum, self.g_loss_sum])
        self.d_sum = merge_summary(
            [self.z_sum, self.d_sum, self.d_loss_real_sum, self.d_loss_sum])
        self.writer = SummaryWriter("./logs", self.sess.graph)

        # initialize the fixed sample noise z
        sample_z = np.random.uniform(-1, 1, size=(self.sample_num, self.z_dim))

        if config.dataset == 'mnist':
            sample_inputs = self.data_X[0:self.sample_num]
            sample_labels = self.data_y[0:self.sample_num]
        else:
            sample_files = self.data[0:self.sample_num]
            sample = [
                get_image(sample_file,
                          input_height=self.input_height,
                          input_width=self.input_width,
                          resize_height=self.output_height,
                          resize_width=self.output_width,
                          crop=self.crop,
                          grayscale=self.grayscale) for sample_file in sample_files]
            if self.grayscale:
                sample_inputs = np.array(sample).astype(np.float32)[:, :, :, None]
            else:
                sample_inputs = np.array(sample).astype(np.float32)

        counter = 1
        start_time = time.time()  # record the start time
        # load the checkpoint, if any, and check whether loading succeeded
        could_load, checkpoint_counter = self.load(self.checkpoint_dir)
        if could_load:
            counter = checkpoint_counter
            print(" [*] Load SUCCESS")
        else:
            print(" [!] Load failed...")

        for epoch in xrange(config.epoch):
            if config.dataset == 'mnist':
                batch_idxs = min(len(self.data_X), config.train_size) // config.batch_size
            else:
                self.data = glob(os.path.join(
                    "./data", config.dataset, self.input_fname_pattern))
                batch_idxs = min(len(self.data), config.train_size) // config.batch_size

            for idx in xrange(0, batch_idxs):
                if config.dataset == 'mnist':
                    batch_images = self.data_X[idx*config.batch_size:(idx+1)*config.batch_size]
                    batch_labels = self.data_y[idx*config.batch_size:(idx+1)*config.batch_size]
                else:
                    batch_files = self.data[idx*config.batch_size:(idx+1)*config.batch_size]
                    batch = [
                        get_image(batch_file,
                                  input_height=self.input_height,
                                  input_width=self.input_width,
                                  resize_height=self.output_height,
                                  resize_width=self.output_width,
                                  crop=self.crop,
                                  grayscale=self.grayscale) for batch_file in batch_files]
                    if self.grayscale:
                        batch_images = np.array(batch).astype(np.float32)[:, :, :, None]
                    else:
                        batch_images = np.array(batch).astype(np.float32)

                batch_z = np.random.uniform(-1, 1, [config.batch_size, self.z_dim]) \
                    .astype(np.float32)

                if config.dataset == 'mnist':
                    # Update D network
                    _, summary_str = self.sess.run(
                        [d_optim, self.d_sum],
                        feed_dict={
                            self.inputs: batch_images,
                            self.z: batch_z,
                            self.y: batch_labels,
                        })
                    self.writer.add_summary(summary_str, counter)

                    # Update G network
                    _, summary_str = self.sess.run(
                        [g_optim, self.g_sum],
                        feed_dict={self.z: batch_z, self.y: batch_labels})
                    self.writer.add_summary(summary_str, counter)

                    # Run g_optim twice to make sure that d_loss does not go to zero (different from paper)
                    _, summary_str = self.sess.run(
                        [g_optim, self.g_sum],
                        feed_dict={self.z: batch_z, self.y: batch_labels})
                    self.writer.add_summary(summary_str, counter)

                    errD_fake = self.d_loss_fake.eval({self.z: batch_z, self.y: batch_labels})
                    errD_real = self.d_loss_real.eval({self.inputs: batch_images, self.y: batch_labels})
                    errG = self.g_loss.eval({self.z: batch_z, self.y: batch_labels})
                else:
                    # Update D network
                    _, summary_str = self.sess.run(
                        [d_optim, self.d_sum],
                        feed_dict={self.inputs: batch_images, self.z: batch_z})
                    self.writer.add_summary(summary_str, counter)

                    # Update G network
                    _, summary_str = self.sess.run(
                        [g_optim, self.g_sum], feed_dict={self.z: batch_z})
                    self.writer.add_summary(summary_str, counter)

                    # Run g_optim twice to make sure that d_loss does not go to zero (different from paper)
                    _, summary_str = self.sess.run(
                        [g_optim, self.g_sum], feed_dict={self.z: batch_z})
                    self.writer.add_summary(summary_str, counter)

                    errD_fake = self.d_loss_fake.eval({self.z: batch_z})
                    errD_real = self.d_loss_real.eval({self.inputs: batch_images})
                    errG = self.g_loss.eval({self.z: batch_z})

                counter += 1
                print("Epoch: [%2d] [%4d/%4d] time: %4.4f, d_loss: %.8f, g_loss: %.8f"
                      % (epoch, idx, batch_idxs,
                         time.time() - start_time, errD_fake + errD_real, errG))

                # Every 100 batches, fetch samples and the discriminator/generator
                # losses (handling mnist separately), save the samples with
                # utils.save_images() under a name built from the epoch and batch
                # indices, and print both loss values.
                if np.mod(counter, 100) == 1:
                    if config.dataset == 'mnist':
                        samples, d_loss, g_loss = self.sess.run(
                            [self.sampler, self.d_loss, self.g_loss],
                            feed_dict={
                                self.z: sample_z,
                                self.inputs: sample_inputs,
                                self.y: sample_labels,
                            })
                        save_images(samples, image_manifold_size(samples.shape[0]),
                                    './{}/train_{:02d}_{:04d}.png'.format(config.sample_dir, epoch, idx))
                        print("[Sample] d_loss: %.8f, g_loss: %.8f" % (d_loss, g_loss))
                    else:
                        try:
                            samples, d_loss, g_loss = self.sess.run(
                                [self.sampler, self.d_loss, self.g_loss],
                                feed_dict={
                                    self.z: sample_z,
                                    self.inputs: sample_inputs,
                                })
                            save_images(samples, image_manifold_size(samples.shape[0]),
                                        './{}/train_{:02d}_{:04d}.png'.format(config.sample_dir, epoch, idx))
                            print("[Sample] d_loss: %.8f, g_loss: %.8f" % (d_loss, g_loss))
                        except:
                            print("one pic error!...")

                if np.mod(counter, 500) == 2:
                    self.save(config.checkpoint_dir, counter)

    def discriminator(self, image, y=None, reuse=False):
        with tf.variable_scope("discriminator") as scope:
            if reuse:
                scope.reuse_variables()

            if not self.y_dim:
                # Unconditional: five layers in total -- four convolutional
                # layers with lrelu activations, then a final linear layer.
                # Return sigmoid(h4) together with the raw logits h4.
                h0 = lrelu(conv2d(image, self.df_dim, name='d_h0_conv'))
                h1 = lrelu(self.d_bn1(conv2d(h0, self.df_dim*2, name='d_h1_conv')))
                h2 = lrelu(self.d_bn2(conv2d(h1, self.df_dim*4, name='d_h2_conv')))
                h3 = lrelu(self.d_bn3(conv2d(h2, self.df_dim*8, name='d_h3_conv')))
                h4 = linear(tf.reshape(h3, [self.batch_size, -1]), 1, 'd_h4_lin')

                return tf.nn.sigmoid(h4), h4
            else:
                yb = tf.reshape(y, [self.batch_size, 1, 1, self.y_dim])
                x = conv_cond_concat(image, yb)

                h0 = lrelu(conv2d(x, self.c_dim + self.y_dim, name='d_h0_conv'))
                h0 = conv_cond_concat(h0, yb)

                h1 = lrelu(self.d_bn1(conv2d(h0, self.df_dim + self.y_dim, name='d_h1_conv')))
                h1 = tf.reshape(h1, [self.batch_size, -1])
                h1 = concat([h1, y], 1)

                h2 = lrelu(self.d_bn2(linear(h1, self.dfc_dim, 'd_h2_lin')))
                h2 = concat([h2, y], 1)

                h3 = linear(h2, 1, 'd_h3_lin')

                return tf.nn.sigmoid(h3), h3

    def generator(self, z, y=None):
        with tf.variable_scope("generator") as scope:
            if not self.y_dim:
                # Unconditional: first compute the output height/width and the
                # successively halved sizes for each layer. h0 projects and
                # reshapes the noise z (weights w, biases b) and applies relu;
                # h1 deconvolves h0 and applies relu; h2 and h3 do the same;
                # h4 deconvolves h3 and the result is returned through tanh.
                s_h, s_w = self.output_height, self.output_width
                s_h2, s_w2 = conv_out_size_same(s_h, 2), conv_out_size_same(s_w, 2)
                s_h4, s_w4 = conv_out_size_same(s_h2, 2), conv_out_size_same(s_w2, 2)
                s_h8, s_w8 = conv_out_size_same(s_h4, 2), conv_out_size_same(s_w4, 2)
                s_h16, s_w16 = conv_out_size_same(s_h8, 2), conv_out_size_same(s_w8, 2)

                # project `z` and reshape
                self.z_, self.h0_w, self.h0_b = linear(
                    z, self.gf_dim*8*s_h16*s_w16, 'g_h0_lin', with_w=True)

                self.h0 = tf.reshape(self.z_, [-1, s_h16, s_w16, self.gf_dim * 8])
                h0 = tf.nn.relu(self.g_bn0(self.h0))

                self.h1, self.h1_w, self.h1_b = deconv2d(
                    h0, [self.batch_size, s_h8, s_w8, self.gf_dim*4], name='g_h1', with_w=True)
                h1 = tf.nn.relu(self.g_bn1(self.h1))

                h2, self.h2_w, self.h2_b = deconv2d(
                    h1, [self.batch_size, s_h4, s_w4, self.gf_dim*2], name='g_h2', with_w=True)
                h2 = tf.nn.relu(self.g_bn2(h2))

                h3, self.h3_w, self.h3_b = deconv2d(
                    h2, [self.batch_size, s_h2, s_w2, self.gf_dim*1], name='g_h3', with_w=True)
                h3 = tf.nn.relu(self.g_bn3(h3))

                h4, self.h4_w, self.h4_b = deconv2d(
                    h3, [self.batch_size, s_h, s_w, self.c_dim], name='g_h4', with_w=True)

                return tf.nn.tanh(h4)
            else:
                s_h, s_w = self.output_height, self.output_width
                s_h2, s_h4 = int(s_h/2), int(s_h/4)
                s_w2, s_w4 = int(s_w/2), int(s_w/4)

                # yb = tf.expand_dims(tf.expand_dims(y, 1), 2)
                yb = tf.reshape(y, [self.batch_size, 1, 1, self.y_dim])
                z = concat([z, y], 1)

                h0 = tf.nn.relu(self.g_bn0(linear(z, self.gfc_dim, 'g_h0_lin')))
                h0 = concat([h0, y], 1)

                h1 = tf.nn.relu(self.g_bn1(linear(h0, self.gf_dim*2*s_h4*s_w4, 'g_h1_lin')))
                h1 = tf.reshape(h1, [self.batch_size, s_h4, s_w4, self.gf_dim * 2])
                h1 = conv_cond_concat(h1, yb)

                h2 = tf.nn.relu(self.g_bn2(deconv2d(
                    h1, [self.batch_size, s_h2, s_w2, self.gf_dim * 2], name='g_h2')))
                h2 = conv_cond_concat(h2, yb)

                return tf.nn.sigmoid(deconv2d(
                    h2, [self.batch_size, s_h, s_w, self.c_dim], name='g_h3'))

    def sampler(self, z, y=None):
        # same architecture as the generator, with reused variables and
        # batch normalization running in inference mode (train=False)
        with tf.variable_scope("generator") as scope:
            scope.reuse_variables()

            if not self.y_dim:
                s_h, s_w = self.output_height, self.output_width
                s_h2, s_w2 = conv_out_size_same(s_h, 2), conv_out_size_same(s_w, 2)
                s_h4, s_w4 = conv_out_size_same(s_h2, 2), conv_out_size_same(s_w2, 2)
                s_h8, s_w8 = conv_out_size_same(s_h4, 2), conv_out_size_same(s_w4, 2)
                s_h16, s_w16 = conv_out_size_same(s_h8, 2), conv_out_size_same(s_w8, 2)

                # project `z` and reshape
                h0 = tf.reshape(
                    linear(z, self.gf_dim*8*s_h16*s_w16, 'g_h0_lin'),
                    [-1, s_h16, s_w16, self.gf_dim * 8])
                h0 = tf.nn.relu(self.g_bn0(h0, train=False))

                h1 = deconv2d(h0, [self.batch_size, s_h8, s_w8, self.gf_dim*4], name='g_h1')
                h1 = tf.nn.relu(self.g_bn1(h1, train=False))

                h2 = deconv2d(h1, [self.batch_size, s_h4, s_w4, self.gf_dim*2], name='g_h2')
                h2 = tf.nn.relu(self.g_bn2(h2, train=False))

                h3 = deconv2d(h2, [self.batch_size, s_h2, s_w2, self.gf_dim*1], name='g_h3')
                h3 = tf.nn.relu(self.g_bn3(h3, train=False))

                h4 = deconv2d(h3, [self.batch_size, s_h, s_w, self.c_dim], name='g_h4')

                return tf.nn.tanh(h4)
            else:
                s_h, s_w = self.output_height, self.output_width
                s_h2, s_h4 = int(s_h/2), int(s_h/4)
                s_w2, s_w4 = int(s_w/2), int(s_w/4)

                # yb = tf.reshape(y, [-1, 1, 1, self.y_dim])
                yb = tf.reshape(y, [self.batch_size, 1, 1, self.y_dim])
                z = concat([z, y], 1)

                h0 = tf.nn.relu(self.g_bn0(linear(z, self.gfc_dim, 'g_h0_lin'), train=False))
                h0 = concat([h0, y], 1)

                h1 = tf.nn.relu(self.g_bn1(
                    linear(h0, self.gf_dim*2*s_h4*s_w4, 'g_h1_lin'), train=False))
                h1 = tf.reshape(h1, [self.batch_size, s_h4, s_w4, self.gf_dim * 2])
                h1 = conv_cond_concat(h1, yb)

                h2 = tf.nn.relu(self.g_bn2(
                    deconv2d(h1, [self.batch_size, s_h2, s_w2, self.gf_dim * 2], name='g_h2'),
                    train=False))
                h2 = conv_cond_concat(h2, yb)

                return tf.nn.sigmoid(deconv2d(
                    h2, [self.batch_size, s_h, s_w, self.c_dim], name='g_h3'))

    def load_mnist(self):
        data_dir = os.path.join("./data", self.dataset_name)

        fd = open(os.path.join(data_dir, 'train-images-idx3-ubyte'))
        loaded = np.fromfile(file=fd, dtype=np.uint8)
        trX = loaded[16:].reshape((60000, 28, 28, 1)).astype(np.float)

        fd = open(os.path.join(data_dir, 'train-labels-idx1-ubyte'))
        loaded = np.fromfile(file=fd, dtype=np.uint8)
        trY = loaded[8:].reshape((60000)).astype(np.float)

        fd = open(os.path.join(data_dir, 't10k-images-idx3-ubyte'))
        loaded = np.fromfile(file=fd, dtype=np.uint8)
        teX = loaded[16:].reshape((10000, 28, 28, 1)).astype(np.float)

        fd = open(os.path.join(data_dir, 't10k-labels-idx1-ubyte'))
        loaded = np.fromfile(file=fd, dtype=np.uint8)
        teY = loaded[8:].reshape((10000)).astype(np.float)

        trY = np.asarray(trY)
        teY = np.asarray(teY)

        X = np.concatenate((trX, teX), axis=0)
        y = np.concatenate((trY, teY), axis=0).astype(np.int)

        seed = 547
        np.random.seed(seed)
        np.random.shuffle(X)
        np.random.seed(seed)
        np.random.shuffle(y)

        y_vec = np.zeros((len(y), self.y_dim), dtype=np.float)
        for i, label in enumerate(y):
            y_vec[i, y[i]] = 1.0

        return X / 255., y_vec

    @property
    def model_dir(self):
        return "{}_{}_{}_{}".format(
            self.dataset_name, self.batch_size,
            self.output_height, self.output_width)

    def save(self, checkpoint_dir, step):
        model_name = "DCGAN.model"
        checkpoint_dir = os.path.join(checkpoint_dir, self.model_dir)

        if not os.path.exists(checkpoint_dir):
            os.makedirs(checkpoint_dir)

        self.saver.save(self.sess,
                        os.path.join(checkpoint_dir, model_name),
                        global_step=step)

    def load(self, checkpoint_dir):
        import re
        print(" [*] Reading checkpoints...")
        checkpoint_dir = os.path.join(checkpoint_dir, self.model_dir)

        ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
        if ckpt and ckpt.model_checkpoint_path:
            ckpt_name = os.path.basename(ckpt.model_checkpoint_path)
            self.saver.restore(self.sess, os.path.join(checkpoint_dir, ckpt_name))
            counter = int(next(re.finditer("(\d+)(?!.*\d)", ckpt_name)).group(0))
            print(" [*] Success to read {}".format(ckpt_name))
            return True, counter
        else:
            print(" [*] Failed to find a checkpoint")
            return False, 0
```

That is all of model.py: it defines the DCGAN class and implements both the generator and the discriminator networks. To be honest, the optimizers confused me at first: why *minimize* g_loss, the error between the generated images and the "real" label? Shouldn't we maximize it to strengthen the discriminator? The resolution is that g_loss belongs to the generator, not the discriminator: it measures how far D(G(z)) is from 1, so minimizing it trains G to fool D, while D is strengthened separately by minimizing its own d_loss. The loss terms are spelled out below.
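To make that concrete (this is standard GAN loss algebra, not anything specific to this repo), the three cross-entropy terms in build_model correspond to:

```latex
% Discriminator: push D(x) toward 1 on real images and D(G(z)) toward 0 on fakes
\mathcal{L}_D = -\,\mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)]
                -\,\mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

% Generator (the non-saturating form used in the code):
% minimizing this pushes D(G(z)) toward 1, i.e. it trains G to fool D
\mathcal{L}_G = -\,\mathbb{E}_{z \sim p_z}[\log D(G(z))]
```

d_loss_real and d_loss_fake are the two terms of L_D, and g_loss is L_G. Each optimizer minimizes only its own loss over its own variable list (d_vars or g_vars), so minimizing g_loss can only make G stronger.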
Finally, you can see the change as training iterates: sample grids saved at iterations 1, 50, 100, 150, and 200 (images omitted) show the progression.

Summary

There is still a lot about GANs that I don't understand; my ability is limited and I need to keep studying.
Partway through, execution failed with an error (screenshot omitted). The problem went away after I changed the run command to the following, presumably because the default input size (108) did not match the 96x96 face crops:

```bash
python main.py --input_height 96 --input_width 96 --output_height 48 --output_width 48 --dataset faces --crop --train --epoch 300 --input_fname_pattern "*.jpg"
```
