I watched the LinkedIn training “DevOps Foundations” with instructor Carlos Nunez. In section 3 he shows the use of RSpec and Capybara. I’ll expand on it a little, since that section assumes a lot of prior knowledge on the instructor’s side.
Step 1 launches a Docker container with the Nginx web server. That is fine. Then he starts a second container, which runs Selenium.
Both containers are managed with docker-compose. Nginx is built from a Dockerfile; Selenium is not, a prebuilt image is simply pulled.
The instructor launches a third container, which contains RSpec and Capybara. This container is again specified with a Dockerfile.
What is omitted is an explanation of Selenium, Capybara and RSpec. So let’s start.
Selenium is a browser-automation server: the Docker image used here bundles a real browser and exposes the WebDriver API so the browser can be driven remotely from another container. The image also includes a VNC server, which is useful for watching what the browser is doing inside the container.
Capybara is a Ruby library that provides a high-level DSL for interacting with web pages through drivers such as Selenium.
RSpec is a Ruby BDD testing framework; the Capybara steps are written inside RSpec examples.
Why we are using this combo is not explained in the course: RSpec structures and runs the tests, Capybara describes the page interactions, and Selenium actually drives a real browser against the Nginx site.
Let’s see my containers launched with docker-compose:
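A minimal docker-compose.yml along these lines wires the three containers together. This is my own sketch, not the exact file from the course; the service names, build paths and the Selenium image tag are assumptions:

```yaml
version: "3"
services:
  website:
    build: ./website            # Nginx, built from a Dockerfile
    ports:
      - "80:80"
  selenium:
    image: selenium/standalone-chrome   # prebuilt image, no Dockerfile
    ports:
      - "4444:4444"             # WebDriver API
      - "5900:5900"             # VNC server
  tests:
    build: ./tests              # RSpec + Capybara, built from a Dockerfile
    depends_on:
      - website
      - selenium
```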
There is no doubt that coding is getting more complex nowadays. I have to code in Python, C and C++, and I have usually used vi for small edits; for production work I have used PyCharm for Python and Visual Studio for C and C++.
Visual Studio Code is a great editor with extensions supporting many languages. Still, I am used to vim, and I have always tried to set up IntelliSense-style completion in it.
I like using Visual Studio Code; however, I feel more productive with the vi editor. In this post I am setting up NeoVim with Coc to code in my favorite languages. In the past I tried vim with different plugins to get code completion.
In the end, I think NeoVim plus Coc is a good way to keep using vim.
Coc requires Node.js and yarn, so the first step is to install those tools:
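On my Fedora host, something like this installs both (the package manager and package names may differ on your distribution):

```shell
# Install Node.js from the distribution repos, then yarn globally via npm
sudo dnf install -y nodejs
sudo npm install -g yarn
```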
I was used to setting up VirtualBox guests with a network bridge in order to have bidirectional access between host and guest. In the simpler NAT mode I was only able to ssh from the guest to the host, but not from the host to the guest.
Most of the time I used bridged networking in VirtualBox; then I switched from VirtualBox to KVM/QEMU/libvirt. Thanks to virt-manager and GNOME Boxes it was relatively easy to use those tools instead of VirtualBox.
When using virt-manager you can set the network mode from the GUI. Unfortunately, the drop-down has no option to create a network bridge, so we have to create the bridge from the command line.
Use the NetworkManager client (nmcli) to create the “br0” bridge interface. In my case the physical Ethernet interface (on a ThinkPad dock) is enp0s20f0u2u1i5.
sudo nmcli con add ifname br0 type bridge con-name br0
sudo nmcli con add type bridge-slave ifname enp0s20f0u2u1i5 master br0
Bring down the physical Ethernet connection and bring up bridge br0:
sudo nmcli con down "Wired connection 1"
sudo nmcli connection up br0
Set up the XML file to be used by virsh:
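For a simple bridged network, kvm_br0.xml can be as small as this: the standard libvirt definition that maps a libvirt network onto the existing host bridge:

```xml
<network>
  <name>br0</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
```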
sudo virsh net-define ./kvm_br0.xml
sudo virsh net-start br0
sudo virsh net-autostart br0
You will now see bridge br0 in the virt-manager drop-down.
My host now has IP address 192.168.0.104.
My KVM guest has IP address 192.168.0.105. Thanks to the bridge, I can ssh from the Ubuntu guest to my Fedora host!
You can also set the bridge IP address manually. For instance, use:
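A sketch with nmcli, assuming the same 192.168.0.0/24 network as above (the addresses are examples, not a recommendation):

```shell
# Give br0 a static address instead of DHCP, then reactivate it
sudo nmcli con mod br0 ipv4.addresses 192.168.0.104/24 \
     ipv4.gateway 192.168.0.1 ipv4.dns 192.168.0.1 ipv4.method manual
sudo nmcli con up br0
```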
I was looking for a setup to test my OAuth2 knowledge and I found excellent videos and articles on the internet. Specifically, I followed this one:
Then, to create my own test, I went to the Google Cloud Platform console and created my OAuth2 client ID and consent screen. Please note that GCP offers different application types when adding a client ID; I selected the one for a web application client.
OAuth2 needs a consent screen.
Now, let’s test with an OAuth2 debug tool. The state parameter is not visible in the screenshot, but I set it to “anti-forgery”. See the success screen.
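For reference, what the debug tool sends is the standard authorization-code request. This sketch just assembles the URL by hand; the client ID and redirect URI below are placeholders, not my real values:

```shell
CLIENT_ID="1234567890-example.apps.googleusercontent.com"  # placeholder
REDIRECT_URI="https://oauthdebugger.com/debug"             # placeholder

# Google's OAuth2 authorization endpoint
AUTH_URL="https://accounts.google.com/o/oauth2/v2/auth"
AUTH_URL="${AUTH_URL}?client_id=${CLIENT_ID}"
AUTH_URL="${AUTH_URL}&redirect_uri=${REDIRECT_URI}"
AUTH_URL="${AUTH_URL}&response_type=code"
AUTH_URL="${AUTH_URL}&scope=openid%20email"
AUTH_URL="${AUTH_URL}&state=anti-forgery"  # echoed back to detect CSRF

echo "$AUTH_URL"
```

Opening that URL in a browser shows the consent screen, and after approval the debug tool receives the authorization code together with the same state value.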
Then you clone the SuperTux repository and open the project with Visual Studio Code.
Keep in mind that you need to clone with the --recurse-submodules option (or initialize the submodules afterwards). Check the SuperTux wiki for how to clone the repo correctly.
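Assuming the upstream GitHub repository, the clone can look like this:

```shell
# Clone SuperTux together with its submodules (squirrel, physfs, ...)
git clone --recurse-submodules https://github.com/SuperTux/supertux.git

# If you already cloned without them, fetch the submodules afterwards:
git submodule update --init --recursive
```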
At this point the video skips a lot of explanation.
Julia explains how to set the active kit to Visual Studio Tools 2019 – amd64.
She also explains how to set the CMake build variant to Debug (the status bar shows CMake: [Debug]: Ready).
At this point the build configuration is done, but if you try to follow this sequence you will see this kind of error:
This error means that the third-party libraries used by SuperTux are not installed on your system. In Julia’s case she had already installed those dependencies with vcpkg, so at this point it is important to mention that we need to install vcpkg!
Next, point CMake to vcpkg: open the CMake extension settings from Visual Studio Code and edit settings.json. Notice that the path is the same as the one displayed by vcpkg integrate install.
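The relevant settings.json entry ends up looking like this (the vcpkg path here is an example; use whatever path vcpkg integrate install printed on your machine):

```json
{
    "cmake.configureSettings": {
        "CMAKE_TOOLCHAIN_FILE": "C:/dev/vcpkg/scripts/buildsystems/vcpkg.cmake"
    }
}
```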
vcpkg installs 32-bit packages by default. By using the --triplet parameter you install the 64-bit versions of the packages instead.
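For example, to install one of SuperTux’s dependencies as 64-bit, run this from the vcpkg folder (the package name is illustrative):

```shell
./vcpkg install sdl2 --triplet x64-windows
```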
Every time you install a package, click the build (gear) button to regenerate the CMake build files. You will get a new error mentioning the next missing library/package. Once the build files generate successfully, the tool will try to build SuperTux, and then you will get errors saying that .lib files are missing.
Those libs are provided by the SuperTux project itself; build them manually one by one. Select the target:
Then build every single target; the .lib files will be produced.
Finally, select the SuperTux2 target. This builds the .exe file.
In my case, nothing happened when trying to run or debug. I manually launched the .exe generated in the build folder, and an error message showed that two DLLs were not found:
squirrel.dll and sqstdlib.dll
Copy the two DLLs into the build Debug folder.
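Something like this, run from the repository root in Git Bash, does the copy. The source paths below are guesses based on where the squirrel targets usually land in the build tree; check where your build actually placed the DLLs:

```shell
cp build/squirrel/Debug/squirrel.dll build/Debug/
cp build/squirrel/Debug/sqstdlib.dll build/Debug/
```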
Finally, I wanted to try vcpkg.json manifest file support. I only found that feature mentioned in the latest roadmap:
I downloaded the latest version 0.6.2 from git and then followed my own steps. This time I got a different error when running the CMake configure step, specifically with the physfs_lib package.
This issue was harder to troubleshoot. In the end I commented out this line (940) of CMakeLists.txt:
This is not elegant, but in the end it allowed me to run the CMake configure step.
After that, the Build All option worked correctly and I did not have to build every submodule one by one.