RNN Repeating Tokens (Tensorflow)

Over the past few months, I’ve been working on an RNN chatbot. However, I soon ran into a weird issue. In short, the network repeatedly output the same tokens (often <EOS> or <GO>). The longer version is on Stack Overflow.

After months of digging around, I’ve finally found the issue. When training an RNN decoder (with TrainingHelper and BasicDecoder), TensorFlow expects the ground-truth decoder inputs to begin with a <GO> token, but the target outputs it is trained against should not contain <GO>. Basically,

Encoder input: <GO> foo foo foo <EOS>
Decoder input/ground truth: <GO> bar bar bar <EOS>
Decoder output: bar bar bar <EOS> <EOS/PAD>

Since I had used <GO> in both the decoder inputs and the targets, the two sequences lined up one-to-one (<GO> -> <GO>, bar -> bar), so the model simply learned to repeat itself.
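As a minimal sketch of the fix (with made-up token ids), the target sequence is just the decoder input shifted left by one, so the loss teaches each token to predict the next token instead of itself:

```python
import numpy as np

# Hypothetical token ids, for illustration only
PAD, GO, EOS, BAR = 0, 1, 2, 5

# Decoder input: ground truth prefixed with <GO> (what TrainingHelper is fed)
decoder_input = np.array([GO, BAR, BAR, BAR, EOS])

# Decoder target: the same sequence shifted left by one and padded at the end,
# so the model learns token -> next token rather than token -> same token
decoder_target = np.append(decoder_input[1:], PAD)

print(decoder_input)   # [1 5 5 5 2]  ->  <GO> bar bar bar <EOS>
print(decoder_target)  # [5 5 5 2 0]  ->  bar bar bar <EOS> <PAD>
```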

After fixing this and a few other small issues, the chatbot started producing acceptable results. I will post an update on the chatbot soon; this is only a reminder to myself and a tip for anyone running into the same issue.

Waifu GUI

Waifu GUI – A WPF GUI for Project Waifu

Waifu GUI

Project Waifu’s speaker verification was great, but it was difficult to use (you even had to manually add the paths inside the scripts). So I wrote Waifu GUI, a C# WPF user interface that writes all of Project Waifu’s complex arguments for you.

As of now, Waifu GUI can handle pretty much everything Project Waifu offers, from extracting MFCC data to tuning hyperparameters. It will continue to grow as Project Waifu expands.

Continue reading →

Project Waifu: Speaker Verification

Project Waifu

Project Waifu is a long-term machine learning/deep learning project I will be working on. I will not reveal too much about it, but here’s the first part of the pipeline: speaker verification.

Text-Independent Speaker Verification

Speaker verification is the task of confirming a speaker’s identity, which in this case is a binary decision: 1 (the target speaker) or 0 (not the target speaker). A lot of algorithms online use GMMs and/or create profiles for speakers. For this project, an MLP (multi-layer perceptron, a regular feed-forward neural network) is used, and because of the way it is structured, the algorithm performs pretty well.
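As a rough illustration only (not Project Waifu’s actual architecture or hyperparameters), a feed-forward binary classifier over MFCC frames could look like this:

```python
import tensorflow as tf

N_MFCC = 13  # assumed number of MFCC coefficients per frame

# Simple feed-forward network: MFCC frame in, probability of "target speaker" out
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(N_MFCC,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # 1 = target speaker, 0 = someone else
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(mfcc_frames, labels, epochs=10)  # mfcc_frames: (n_frames, N_MFCC), labels: 0/1
```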

Continue reading →

Predicting Website Credibility Using a DNN

Over the last few weeks, I’ve been working on a deep neural net to predict website credibility (i.e. how “reliable” it is). The features consist of basic website attributes, such as its domain, plus a bag-of-words model of the page text.
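As a sketch of what that feature set might look like (the feature names and sizes here are assumptions, not the model’s actual inputs), the page text can be vectorized into bag-of-words counts and stacked next to simple site features:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

pages = ["first example article text", "second example page text"]
domain_is_org = np.array([[1], [0]])  # e.g. 1 if the domain ends in .org (illustrative feature)

# Bag-of-words counts over the page text
vectorizer = CountVectorizer(max_features=1000)
bow = vectorizer.fit_transform(pages).toarray()

# Final feature matrix fed to the DNN: word counts plus hand-crafted site features
features = np.hstack([bow, domain_is_org])
```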

Website Credibility

Website credibility is determined by a lot of factors, and much of the time there isn’t a clear right or wrong answer. Wikipedia, for example, is notorious because it can be edited by anyone; it contains plenty of correct information, yet it is still widely considered unreliable.

Although there is no exact answer, we can often predict credibility from features such as the author, the “purpose” of the text, and even the date. (More can be found here.)

Continue reading →