Polishing up the project
Compared to the previous weeks, this week we focused mainly on polishing up the project. Our co-intern focused primarily on A/B testing, so she polished up the two variants: an A/B test with captions and another A/B test with transcripts. I focused on merging the code from everywhere we've worked: ASR switching, local/remote captions, local/remote transcripts, and little things in between. Merging the code is not as easy as it sounds; after the merge there were bugs to hunt down and fix.
Before we decided to drop auto start in favor of an ASR toggle, I spent two days trying to troubleshoot the AudioContext issue that came with auto starting. It's difficult to troubleshoot because the browser suspends the AudioContext until a user action resumes it. I have a button click event where, after the "video preview" of myself, I click "yes" to enter the room. As soon as I click "yes", the audio context should resume, because that's a user action. That's the logical perspective, anyway. For some reason it does not work like that at all: the console kept throwing errors saying the audio context cannot be started without a user action.

Around the same time, we were discussing Speech-to-Text billing. After a couple of weeks of forgetting to check our cloud services regularly, the bill turned out higher than we initially thought. For the majority of the project's development timeline, auto start enabled the ASR as soon as you entered the room, so that may be partially why the billing costs were high. We have enough research funds to cover it, but it was still motivation to switch. Given the billing, and given that the AudioContext wasn't cooperating, we decided to move to an ASR toggle: the ASR won't be enabled upon entering the room; nothing starts unless the user takes an action. Get this? A user action, so the AudioContext can be resumed at the same time. We're hitting two birds with one stone, and I'm more than fortunate to have solved the problem by doing it a different way.
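The fix above hinges on calling `resume()` synchronously inside the click handler itself, since browser autoplay policy only honors the call while the user gesture is still "live". A minimal sketch of that idea (the handler and function names here are hypothetical, not our actual code):

```javascript
// Sketch, assuming a browser AudioContext-like object: resume the
// suspended context *inside* the click handler, before any async work
// like joining the room, so the resume happens within the user gesture.
function makeEnterRoomHandler(audioCtx, joinRoom) {
  return async function onEnterClick() {
    // Resume first, while still inside the user gesture.
    if (audioCtx.state === "suspended") {
      await audioCtx.resume();
    }
    // Only then do the (possibly async) room join.
    return joinRoom();
  };
}
```

Wiring it up would look like `enterButton.addEventListener("click", makeEnterRoomHandler(ctx, join))`; calling `resume()` later, from code that merely *followed* the click, is what triggers the "requires a user action" console errors we saw.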
In the mid-to-late week, I primarily focused on getting the ASR toggle up and running. I had a couple of bugs to work through, but it is working properly now: the Azure and Web Speech ASR services can be switched. I am almost done polishing the project up to be ready to use as a proof of concept for our research. On the captions side, it's working as expected with the code merged in from the other smaller projects we worked on. The next step is to merge the transcript side; I am halfway done with it, and another half and it will be complete. I'm hoping to finish by the first day of next week, and then I can shift my focus to research information distribution: assisting Emelia with the research paper, developing a research project slideshow (Google Slides or PowerPoint, to be determined), and distributing any additional information/research/stuff that I may have forgotten to mention this week. To be continued!
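The toggle logic can be sketched roughly like this. This is a hedged illustration, not our actual implementation: the `AsrToggle` class and the engine objects are hypothetical stand-ins for the real Azure and Web Speech wrappers. The point is that nothing starts on room entry, and the same user click that starts the (billable) ASR also provides the gesture the browser needs:

```javascript
// Sketch of an ASR toggle with switchable engines (names hypothetical).
// Each engine object is assumed to expose start() and stop().
class AsrToggle {
  constructor(engines) {
    this.engines = engines;      // e.g. { azure: {...}, webspeech: {...} }
    this.current = "webspeech";  // default engine; nothing running yet
    this.running = false;
  }

  // Called from the toggle button's click handler (a user action).
  toggle() {
    if (this.running) {
      this.engines[this.current].stop();
    } else {
      this.engines[this.current].start();
    }
    this.running = !this.running;
    return this.running;
  }

  // Switch between Azure and Web Speech; restart if ASR is live.
  switchEngine(name) {
    if (name === this.current) return;
    if (this.running) {
      this.engines[this.current].stop();
      this.engines[name].start();
    }
    this.current = name;
  }
}
```

One nice property of this shape is that the billing concern and the AudioContext concern are handled in the same place: the engine only ever starts from `toggle()`, which only ever runs from a click.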