
Since Electron 12, context isolation is enabled by default:

webPreferences: {
  contextIsolation: true,
}
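
For context, this option lives on the BrowserWindow that loads your page. A minimal sketch, assuming the preload script is called preload.js as in the snippets below:

// main.js - minimal window with context isolation and a preload script
const { app, BrowserWindow } = require('electron');
const path = require('path');

app.whenReady().then(() => {
  const win = new BrowserWindow({
    webPreferences: {
      contextIsolation: true,                      // the default since Electron 12
      preload: path.join(__dirname, 'preload.js')  // where the bridge code below lives
    }
  });
  win.loadFile('index.html');
});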

With context isolation on, the preload script and the page run in separate worlds: everything passed across with the bridge function contextBridge.exposeInMainWorld is a copy. A bridged object is copied too, so modifying it in the renderer does not affect the preload side. You cannot even bridge ipcRenderer itself directly; its .on method gets lost on the way.
Yet IPC messaging is exactly what the renderer needs most. With ipcRenderer.on unusable, how do you listen for messages coming from ipcMain?
Surprisingly, the official Electron docs never show how to bridge .on; they only cover .send and .invoke.
See this answer: https://stackoverflow.com/questions/59993468/electron-contextbridge
Two answerers give two approaches, both living in preload.js.
First approach:

const {
    contextBridge,
    ipcRenderer
} = require("electron");

// Expose protected methods that allow the renderer process to use
// the ipcRenderer without exposing the entire object
contextBridge.exposeInMainWorld(
    "api", {
        send: (channel, data) => {
            // whitelist channels
            let validChannels = ["toMain"];
            if (validChannels.includes(channel)) {
                ipcRenderer.send(channel, data);
            }
        },
        receive: (channel, func) => {
            let validChannels = ["fromMain"];
            if (validChannels.includes(channel)) {
                // Deliberately strip event as it includes `sender` 
                ipcRenderer.on(channel, (event, ...args) => func(...args));
            }
        }
    }
);
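
With that bridge in place, the renderer only ever touches window.api. A rough usage sketch (the channel names toMain/fromMain come from the whitelist above; the main-process side is my own addition, not from the answer):

// renderer.js - no Node APIs here, only what the preload exposed
window.api.send("toMain", { msg: "hello from the renderer" });
window.api.receive("fromMain", (data) => {
  console.log("got from main:", data);
});

// main.js - answer on the whitelisted "fromMain" channel
const { ipcMain } = require("electron");
ipcMain.on("toMain", (event, data) => {
  // event.sender is the webContents that sent the message
  event.sender.send("fromMain", "pong: " + JSON.stringify(data));
});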

Second approach:

const { ipcRenderer, contextBridge } = require('electron')

const validChannels = ["toMain", "myRenderChannel"];

contextBridge.exposeInMainWorld(
  "api", {
    send: (channel, data) => {
        if (validChannels.includes(channel)) {
            ipcRenderer.send(channel, data);
        }
    },
    on: (channel, callback) => {
      if (validChannels.includes(channel)) {
        // Filtering the event param from ipcRenderer
        const newCallback = (_, data) => callback(data);
        ipcRenderer.on(channel, newCallback);
      }
    },
    once: (channel, callback) => { 
      if (validChannels.includes(channel)) {
        const newCallback = (_, data) => callback(data);
        ipcRenderer.once(channel, newCallback);
      }
    },
    removeListener: (channel, callback) => {
      if (validChannels.includes(channel)) {
        ipcRenderer.removeListener(channel, callback);
      }
    },
    removeAllListeners: (channel) => {
      if (validChannels.includes(channel)) {
        ipcRenderer.removeAllListeners(channel)
      }
    },
  }
);
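
One caveat I noticed with the second approach (not mentioned in the answer): on wraps callback in newCallback before registering it, so removeListener(channel, callback) later receives the original, never-registered function and removes nothing. A minimal sketch of a workaround is to have on return its own unsubscribe function, as a drop-in replacement for the on property above:

// preload.js - "on" variant that hands back its own remover
on: (channel, callback) => {
  if (!validChannels.includes(channel)) return () => {};
  const wrapped = (_, data) => callback(data);
  ipcRenderer.on(channel, wrapped);
  // the renderer keeps this and calls it to detach exactly this listener
  return () => ipcRenderer.removeListener(channel, wrapped);
},

In the renderer this becomes const off = window.api.on("myRenderChannel", handler); calling off() later removes the listener. Functions returned across the context bridge are proxied, so this should work on Electron 12 and later, though I have only sketched it here.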

Pick the global build of Vue (either the dev or the minified file):

<script src="https://cdn.bootcdn.net/ajax/libs/vue/3.2.0-beta.7/vue.global.js"></script>
<script src="https://cdn.bootcdn.net/ajax/libs/vue/3.2.0-beta.7/vue.global.min.js"></script>

Run the script only after the document is ready

var ready = function ( fn ) {

  // Sanity check
  if ( typeof fn !== 'function' ) return;

  // If document is already loaded, run method
  if ( document.readyState === 'complete'  ) {
      return fn();
  }

  // Otherwise, wait until document is loaded
  document.addEventListener( 'DOMContentLoaded', fn, false );

};

Prevent the raw curly braces from flashing before Vue mounts

    <div id="app" class="app" v-cloak>{{counter}}</div>

[v-cloak] {
  display: none !important;
}
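
Putting the three snippets together, a minimal sketch of the page script (the counter name matches the {{counter}} binding above; the one-second ticker is just my example):

// assumes vue.global.js from the CDN above is already loaded (global `Vue`)
// and the ready() helper from the previous section is defined
ready(function () {
  Vue.createApp({
    data() {
      return { counter: 0 };
    },
    mounted() {
      // once mounted, v-cloak is removed and the value updates every second
      setInterval(() => { this.counter++; }, 1000);
    }
  }).mount('#app');
});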

https://www.bilibili.com/read/cv6183523/

Steps:

  1. Switch to the TechDraw workbench.
  2. Click "Insert Default Page" (the leftmost button) to add a new 2D drawing page; a Page item also appears in the view tabs and under Model in the combo view.
  3. Go back to the 3D view, select the part you want on the 2D drawing, and rotate the view to the angle you want to show.
  4. Click "Insert View" to place a 2D view onto the drawing page.
  5. The freshly inserted view may look rough. Select it and, in the data properties of the combo view on the left, set Coarse View to True, adjust the angle with Rotation, and set Line Width to 0.1 mm.
  6. Right-click the 2D view, choose "Export as SVG", and save.

After installing TortoiseGit on Windows, committed files get a check-mark overlay, files not under version control get a ?, modified files or folders get a !, and files just added but not yet committed get a +.
Sometimes, when you try to commit, TortoiseGit finds no new changes at all, yet the project folder still carries a !. Following the ! down the tree leads to a few files marked ?, even though they have long been committed and have no modifications. This usually means the case of those file names has changed.
Open the Git server's web interface, compare those file names with the local ones, and rename the local files so the case matches the server; the overlay icons then go back to normal.

Jekyll pre-renders everything into static pages, so it cannot do this kind of per-request filtering on the server. If all you have is a static file server, the only place to make it work is in JS.

Getting GET parameters in JS

location.search gives you the GET parameters, e.g. ?b=qq&c=dd. I turn that into a JSON object with string replacement plus JSON.parse:

let urlParams = JSON.parse('{"'+location.search.substring(1).replace(/=/g,'":"').replace(/&/g,'","')+'"}')
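
The replace-and-JSON.parse trick is compact, but it breaks as soon as a value is URL-encoded or contains quotes. A sketch of the same result with the built-in URLSearchParams (my own alternative, not part of the original post):

// e.g. location.search === "?b=qq&c=dd"  ->  { b: "qq", c: "dd" }
var urlParams = {};
new URLSearchParams(location.search).forEach(function (value, key) {
  urlParams[key] = value; // values arrive already URL-decoded
});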

Modifying a single DOM element from JS

Give the DOM element an id, then change it from JS:

  document.getElementById('name').innerText = someNewName

Showing different posts depending on the GET parameter

Define CSS classes with display:none to hide elements, tag each post with a different class, then walk the post elements in JS and strip the class from the ones that should be shown.
CSS:

.qq, .tr {
  display: none;
}

Liquid code before pre-rendering:

<li class="{{post.b}}"> ....</li>

JS code:

    var all = document.getElementsByClassName(urlParams.b);
    console.log(all.length)
    // No i++ here: getElementsByClassName returns a live collection, so every
    // classList.remove drops the current all[i] out of it and the collection
    // keeps shrinking until it is empty.
    for (var i = 0; i < all.length;) {
      all[i].classList.remove(urlParams.b)
    }
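
An alternative that avoids reasoning about the shrinking live collection (my own variant, not from the original post) is to copy it into a plain array first, after which an ordinary i++ loop is safe:

    // Array.from gives a static array, so removing classes does not shrink it
    var shown = Array.from(document.getElementsByClassName(urlParams.b));
    for (var i = 0; i < shown.length; i++) {
      shown[i].classList.remove(urlParams.b);
    }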

Changing the color of a class in bulk

This can be done with a CSS variable.
Reference: https://stackoverflow.com/questions/9436123/javascript-changing-a-class-style/65471649

:root {
    --some-color: red;
}

.someClass {
    color: var(--some-color);
}

Then you can change the variable's value in JavaScript with:

document.documentElement.style.setProperty('--some-color', '(random color)');

Motif and meaning
缠枝纹 (twining-branch pattern): commonly called "缠枝花" ("twining flowers"), also known as "万寿藤" ("vine of ten-thousand longevities"). Because the design runs on without a break it also carries the sense of 生生不息, endless renewal, and is considered auspicious.
莲瓣纹 (lotus-petal pattern): a motif favored by Buddhism; the lotus stands for delivering all beings. Abstracted, it looks rather like a curly brace.
云纹 (cloud pattern): the most common traditional motif, symbolizing promotion and wishes fulfilled. By shape it can be divided into single-, double- and triple-branched clouds, hook clouds, cloud clusters, cloud heads, cloud-and-water, drifting clouds, three-pronged clouds and so on.
云雷纹 (cloud-and-thunder pattern): in the Shang and Zhou periods cloud patterns were usually discussed together with thunder patterns and are regarded as the early form of the cloud pattern.
回纹 (fret pattern): a geometric motif derived from the thunder pattern on pottery and bronzes. It connotes good fortune and wealth, so the continuous fret is popularly called "富贵不断头", wealth without end.
弦纹 (bowstring pattern): the simplest traditional decoration on ancient vessels, appearing on bronzes as raised horizontal lines. A thin bowstring line looks like a long narrow band bound flat around the pot, slender and raised; a wide one is broader with a groove down the middle, much like a roof tile, hence also called "瓦纹" (tile pattern). A herringbone variant is called "人字纹" or "人字弦纹".
席纹 (mat pattern): A History of Chinese Ceramics (《中国陶瓷史》) explains it as the imprint left by the mat on which the clay body rested while the pot was being formed.
连珠纹 (linked-pearl pattern): also written "联珠纹", "连珠", "圈带纹" or "花蕊纹"; a string of connected circles or ellipses.
漩涡纹 (whirlpool pattern): the mainstream view traces it to water vortices in nature, or to worship of a snake totem. It symbolizes harmony with nature and alludes to the rising and setting of the sun and the division of the four seasons.
乳钉纹 (boss pattern): first seen on ritual vessels dedicated to female ancestors, recalling the origin of life and expressing reverence for the mother. Since 钉 is homophonous with 丁 (offspring), it also carries a wish for many descendants. Jade bi discs of the Warring States and Han periods are most often decorated with bosses; because the bi symbolizes Heaven, the bosses usually stand for the stars.
条纹 (stripe pattern): also called "条形纹" or "线纹", the simplest and most practical traditional decoration.
曲折纹 (zigzag pattern): also called "曲尺纹", "波折纹" or "三角折线纹"; like a water ripple, and possibly also a pictograph of mountains. Laid out regularly it gives a lively yet vigorous sense of ordered beauty.

Other references
传统窗格图案几何纹饰及其艺术特征 (Geometric ornament in traditional window-lattice patterns and its artistic characteristics)

Based on Qichacha (企查查) data, 2021-08-18

Floor 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 Total Count
4-7 1 1 1
8 1 2 2 1 1 1 1 1 10 8
10 1 1 1 1 1 1 1 2 2 7 2 20 11
12 1 1 1 1 1 1 4 2 12 8
14 2 1 2 1 1 7 5
16 2 1 1 2 2 1 1 10 7
18 1 2 2 7 1 1 1 15 7
20 1 2 2 5 3
22 2 2 1 1 2 3 11 6
26 2 1 3 1 1 8 5
28 1 8 1 3 3 1 1 18 7
30 1 2 3 2
Total 3 5 16 1 3 2 1 3 4 12 14 12 6 10 17 11 120
Count 3 4 6 1 2 2 1 2 2 8 7 7 5 6 7 7

An answer on Zhihu
Official TensorFlow certification page
The process of obtaining the Google TensorFlow certificate
Another TensorFlow exam write-up (from a non-Chinese author)

TensorFlow exam preparation

Official exam handbook
Setting up the exam environment
Official tutorials

Study tutorials

Notes from Andrew Ng's courses

AI technical articles

Capacity, overfitting and underfitting
Overfitting and underfitting in deep learning, and how to deal with them
Entropy, cross-entropy, binary cross-entropy
Regularization
Using regularization to eliminate overfitting

Thoughts on the AI frontier

Chen Dewang on what made DeepMind successful

Training on a laptop

PS: training can be accelerated with an Nvidia GPU, but in a test on an i7-8550U + MX150 the MX150 actually seems slower than the CPU.

Using the MX150:

2021-08-23 11:01:05.658828: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cufft64_10.dll
2021-08-23 11:01:05.661719: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library curand64_10.dll
2021-08-23 11:01:05.667080: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cusolver64_11.dll
2021-08-23 11:01:05.672298: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cusparse64_11.dll
2021-08-23 11:01:05.681747: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudnn64_8.dll
2021-08-23 11:01:05.681986: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-08-23 11:01:05.682349: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-08-23 11:01:05.683329: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: 
pciBusID: 0000:01:00.0 name: NVIDIA GeForce MX150 computeCapability: 6.1
coreClock: 1.341GHz coreCount: 3 deviceMemorySize: 2.00GiB deviceMemoryBandwidth: 37.33GiB/s
2021-08-23 11:01:05.683674: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-08-23 11:01:06.781669: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-08-23 11:01:06.781862: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264]      0 
2021-08-23 11:01:06.781974: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0:   N 
2021-08-23 11:01:06.784405: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1332 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce MX150, pci bus id: 0000:01:00.0, compute capability: 6.1)
2021-08-23 11:01:07.432975: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:176] None of the MLIR Optimization Passes are enabled (registered 2)
Epoch 1/5
2021-08-23 11:01:07.782500: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublas64_11.dll
2021-08-23 11:01:08.920483: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublasLt64_11.dll
1875/1875 [==============================] - 6s 2ms/step - loss: 0.2974 - accuracy: 0.9155
Epoch 2/5
1875/1875 [==============================] - 5s 2ms/step - loss: 0.1430 - accuracy: 0.9569
Epoch 3/5
1875/1875 [==============================] - 5s 2ms/step - loss: 0.1057 - accuracy: 0.9685
Epoch 4/5
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0867 - accuracy: 0.9742
Epoch 5/5
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0741 - accuracy: 0.9764
313/313 - 1s - loss: 0.0704 - accuracy: 0.9784

Process finished with exit code 0

With the MX150 each epoch above takes 5-6 seconds, while the CPU needs only 1-2 seconds:


D:\r\pyproj\TFproj1\venv\Scripts\python.exe D:/r/pyproj/TFproj1/main.py
2021-08-23 11:04:35.810645: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll
2.5.0
2021-08-23 11:04:39.140247: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library nvcuda.dll
2021-08-23 11:04:39.791839: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: 
pciBusID: 0000:01:00.0 name: NVIDIA GeForce MX150 computeCapability: 6.1
coreClock: 1.341GHz coreCount: 3 deviceMemorySize: 2.00GiB deviceMemoryBandwidth: 37.33GiB/s
2021-08-23 11:04:39.792157: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll
2021-08-23 11:04:39.808505: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublas64_11.dll
2021-08-23 11:04:39.808676: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublasLt64_11.dll
2021-08-23 11:04:39.813804: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cufft64_10.dll
2021-08-23 11:04:39.816603: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library curand64_10.dll
2021-08-23 11:04:39.822109: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cusolver64_11.dll
2021-08-23 11:04:39.827115: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cusparse64_11.dll
2021-08-23 11:04:39.832314: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudnn64_8.dll'; dlerror: cudnn64_8.dll not found
2021-08-23 11:04:39.832637: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1766] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2021-08-23 11:04:39.833777: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-08-23 11:04:39.834660: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-08-23 11:04:39.834922: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264]      
2021-08-23 11:04:40.297728: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:176] None of the MLIR Optimization Passes are enabled (registered 2)
Epoch 1/5
1875/1875 [==============================] - 2s 840us/step - loss: 0.2934 - accuracy: 0.9141
Epoch 2/5
1875/1875 [==============================] - 1s 764us/step - loss: 0.1421 - accuracy: 0.9578
Epoch 3/5
1875/1875 [==============================] - 2s 853us/step - loss: 0.1091 - accuracy: 0.9674
Epoch 4/5
1875/1875 [==============================] - 2s 1ms/step - loss: 0.0881 - accuracy: 0.9730
Epoch 5/5
1875/1875 [==============================] - 2s 1ms/step - loss: 0.0763 - accuracy: 0.9759
313/313 - 0s - loss: 0.0729 - accuracy: 0.9780

Process finished with exit code 0

I used to think there were at least two requirements:
1. Predict accurately;
2. Be good to people;
Now a third has to be added:
3. Strong execution;
Strong to the point of being a creed.