A while ago I started building a website using Django as its backend and Vue.js as its frontend. Unlike most such apps, however, there is a twist: it is multi-page. The website should behave like a normal site built natively with Django, with multiple URLs and the ability to land directly on any page (rather than faking it with vue-router), while still keeping Vue's reactivity. After digging through various tutorials and guides, none of which fully answered the question, I would like to share my piece of the puzzle.
Introduction and TLDR
This post starts with the basics of using Vue.js with Django. It then moves on to more advanced tooling, such as webpack, to serve the Vue.js frontend from Django. Finally, we will modify the setup a bit to serve multiple Vue.js pages.
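As a preview, the multi-page approach boils down to giving each page its own Django URL and template, with each template pulling in its own webpack entry bundle. A minimal sketch of the routing side (the template names and bundle layout here are placeholder assumptions, not the final setup from this post):

```python
# urls.py -- one Django route per page, so every URL is directly
# reachable without vue-router. Each template includes a <script>
# tag for its own webpack-built Vue bundle (e.g. index.bundle.js).
from django.urls import path
from django.views.generic import TemplateView

urlpatterns = [
    path("", TemplateView.as_view(template_name="index.html"), name="index"),
    path("about/", TemplateView.as_view(template_name="about.html"), name="about"),
]
```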
Since the Chinese "hack" allowing this mining GPU to run DirectX, the P106 has gained enormous attention within the international hardware community. Before the hack, prices on Taobao hovered around 200 RMB (about 30 USD). After the hack, prices nearly doubled to 400 RMB (60 USD). So, then, what is the P106?
Hacking the P106
The P106 was originally meant to be a mining GPU. It has no display outputs, no video-encoding support, and no DirectX support. The hack, however, changed that.
By modifying NVIDIA’s drivers, the P106 can be made to appear as a GTX 1060. And with Windows’ ability to route a “high-performance” GPU’s output through an integrated GPU’s display, as is often done in laptops, we can use the P106 to run games and other GPU-demanding applications.
This tutorial was originally intended for Electron. However, I soon found that it applies to all platforms, not just Electron. So feel free to continue even if you’re using something else: everything should apply as long as it is HTML- and JS-based.
Using Live2D in live2d-widget.js
Over the past few months, I’ve been working on an RNN chatbot. However, I soon ran into a weird issue. In short, the network repeatedly output the same tokens (often <EOS> or <GO>). The longer version is on Stack Overflow.
After months of digging around, I finally found the issue. When training an RNN (with TrainingHelper and BasicDecoder), TensorFlow expects the decoder’s ground-truth inputs to start with a <GO> token, but the target outputs to omit it. Basically,
Encoder input: <GO> foo foo foo <EOS>
Decoder input/ground truth: <GO> bar bar bar <EOS>
Decoder output: bar bar bar <EOS> <EOS/PAD>
Since I had included <GO> in both the decoder inputs and targets, the model simply learned to repeat each input token (<GO> -> <GO>, bar -> bar).
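The alignment above amounts to shifting the decoder input left by one step to get the training target. A small sketch in plain Python (token strings and the helper name are illustrative, not from my actual preprocessing code):

```python
# Illustrative token strings for a seq2seq vocabulary.
GO, EOS, PAD = "<GO>", "<EOS>", "<PAD>"

def make_decoder_pair(tokens):
    """Build aligned decoder input/target sequences for teacher forcing.

    The decoder input starts with <GO>; the target is the same sequence
    shifted left by one step, so <GO> never appears in the targets.
    Feeding identical sequences as both input and target (the bug
    described above) just teaches the model the identity mapping.
    """
    decoder_input = [GO] + tokens + [EOS]
    decoder_target = decoder_input[1:] + [PAD]
    return decoder_input, decoder_target

inp, tgt = make_decoder_pair(["bar", "bar", "bar"])
print(inp)  # ['<GO>', 'bar', 'bar', 'bar', '<EOS>']
print(tgt)  # ['bar', 'bar', 'bar', '<EOS>', '<PAD>']
```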
After fixing this and a few other small issues, the chatbot started to produce acceptable results. I will be posting an update on the chatbot soon; this is only a reminder to myself and a tip for anyone hitting the same issue.