OAuth2 with Google Cloud Platform

I was looking for a setup to test my OAuth2 knowledge and found excellent videos and articles on the internet. Specifically, I followed this one:

OAuth2 Plain…

Now, to create my own test, I went to my Google Cloud Platform Console and created my OAuth2 Client ID and consent screen. Please note that GCP offers different options to add a Client ID. I selected the one for a Web Application client.

OAuth2 needs consent screen

Now, let's test with an OAuth2 debug tool. The state is not in the screenshot, but I set it to “anti-forgery”. See the success screen.

Continuation …

And it worked!

look at
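For context, the authorization request behind that success screen looks roughly like the sketch below, using Google's standard authorization endpoint; YOUR_CLIENT_ID and YOUR_REDIRECT_URI are placeholders for your own client and for the debug tool's callback URL, and state carries the anti-forgery value.

https://accounts.google.com/o/oauth2/v2/auth?client_id=YOUR_CLIENT_ID&redirect_uri=YOUR_REDIRECT_URI&response_type=code&scope=openid%20email&state=anti-forgery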

Then use the code to get a token. I used Postman:

JWT bearer …
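Outside Postman, the same token exchange can be sketched with curl against Google's token endpoint; the client ID, client secret, code and redirect URI below are placeholders:

# exchange the authorization code for tokens (placeholder values)
curl -X POST https://oauth2.googleapis.com/token \
  -d client_id=YOUR_CLIENT_ID \
  -d client_secret=YOUR_CLIENT_SECRET \
  -d code=AUTHORIZATION_CODE \
  -d redirect_uri=YOUR_REDIRECT_URI \
  -d grant_type=authorization_code

The JSON response contains the access token and, when the openid scope was requested, the id_token, which is the JWT shown above.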

Alienware Command Center (AWCC) not loading thermal and overclock icons

After I upgraded my Alienware to the Windows Insider Preview, I noticed AWCC was not loading the thermal and overclock options.

Icons not loading …

The issue is narrowed down to XTUOCDriverService. It is not able to start, and it is linked to the Intel Extreme Tuning Utility. The driver should be used by Alienware OC Controls for overclocking.
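To confirm that diagnosis you can query the service and try to start it manually from an elevated command prompt; this is just the standard Windows sc tool, nothing AWCC-specific:

:: check the current state of the overclocking driver service
sc query XTUOCDriverService
:: try to start it manually; the returned error code hints at why it fails
sc start XTUOCDriverService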

However, reinstalling the Intel Extreme Tuning Utility did not fix the issue. I temporarily moved these files:

I reinstalled everything, even XTUOCDriverService. However, the issue is not fixed.

XTUOCDriverService not able to load because of a weird error…

I noticed this issue after I moved my laptop to the Windows Insider Preview Program.

I think I need to report this to Microsoft.

Create Visual Studio Code Extension

I was wondering how extensions for Visual Studio Code are created. Googling around, I found this.

I followed the instructions there and created my first demo extension.

Step number 1 is to install the generator with npm:

npm install -g yo generator-code

Then, generate the boilerplate code:

yo code

You will see:

What to generate?

Check the dialog:

Answering the setup questions

Then, check the generated TypeScript code and the setup in package.json.
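For reference, the relevant part of the generated package.json looks roughly like the snippet below; the exact command id and title are assumptions that depend on the name you gave the extension and on the generator version:

{
  "activationEvents": [
    "onCommand:helloworld.helloWorld"
  ],
  "contributes": {
    "commands": [
      {
        "command": "helloworld.helloWorld",
        "title": "Hello World"
      }
    ]
  }
}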

Press F5 to launch the extension. A new instance of Visual Studio Code will open.

Check the simple message at the bottom:

Hello World typescript demo

You can even debug the TypeScript code.

Build SuperTux with vcpkg and Visual Studio Code

I watched a recent YouTube video: C++ Development with Visual Studio Code, with Julia Reid.

https://www.youtube.com/watch?v=WqXrYfSKJXk

In the video, Julia shows how to use Visual Studio Code to build and even debug the SuperTux2 video game:

SuperTux2 https://www.supertux.org/

It was interesting to see how to build this nice game with Visual Studio Code. I have only used it for demo samples, while I have used Visual Studio Community Edition to build more complex projects.

I followed the video and was surprised how easy it was to build and run the game:

This is my own build:

SuperTux built with Visual Studio Code

However, it was not that clear in Julia's video how to sort out all the issues faced when trying to build SuperTux from scratch.

So, before I managed to get SuperTux running as in the screenshot, I had to do some troubleshooting.

This post details what I did to get the same results as Julia.

It is assumed that you have already installed Visual Studio Code:

https://code.visualstudio.com/download

Visual Studio Build Tools:

https://visualstudio.microsoft.com/visual-cpp-build-tools/

The CMake extension for Visual Studio Code:

CMake extension

Then clone the SuperTux repository and open the project with Visual Studio Code.

Keep in mind that you need to use the --recurse-submodules option (or initialize the submodules afterwards). Look at the SuperTux wiki to clone the repo correctly.
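A minimal sketch of the clone step, assuming the upstream GitHub repository:

# clone SuperTux together with its submodules (the bundled libraries live under external/)
git clone --recurse-submodules https://github.com/SuperTux/supertux.git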

At this point the video skips a lot of explanation.

Julia explains how to set the active kit to Visual Studio Build Tools 2019 – amd64.

She also explains how to set the CMake build variant to Debug.

At this point the build configuration is done, but if you try to follow this sequence you will see this kind of error:

Missing dependencies

This error means that the third-party libraries used by SuperTux are not installed on your system. In Julia's case, she had already installed those dependencies with vcpkg. So at this point it is important to mention that we need to install vcpkg!

Clone the vcpkg repo from:

https://github.com/microsoft/vcpkg.git

Go to the cloned folder and run:

bootstrap-vcpkg.bat 

to build vcpkg.exe.

Then run:

vcpkg integrate install

Note the message displayed:

Path to add to settings.json

At this point, set up the CMake configuration to point CMake to vcpkg. Open the CMake extension settings from Visual Studio Code and set up settings.json. Notice that the path is the same as the one displayed by vcpkg integrate install.

{
    "cmake.configureSettings": {
        "CMAKE_TOOLCHAIN_FILE": "C:/GitHub/vcpkg/scripts/buildsystems/vcpkg.cmake",
        "VCPKG_BUILD": "ON"
    },
    "cmake.ctestPath": "",
    "cmake.copyCompileCommands": "",
    "cmake.configureOnOpen": false,
    "cmake.generator": ""
}

Then, to enable tab completion:

vcpkg integrate powershell

Then install the dependencies manually with vcpkg. These are mentioned in the video and also listed in the INSTALL.md file:

“sdl2”, “sdl2-image”, “openal-soft”, “curl”, “libogg”, “libvorbis”, “freetype”, “glew”, “boost-date-time”, “boost-filesystem”, “boost-format”, “boost-locale”, “boost-system”, “physfs”

vcpkg install --triplet x64-windows sdl2

By default, vcpkg installs 32-bit (x86-windows) packages. By using the --triplet parameter you install the 64-bit version of the packages instead.
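You can also install the whole list from INSTALL.md in one go instead of one package at a time; this assumes the x64-windows triplet as above:

# install all SuperTux third-party dependencies as 64-bit packages
vcpkg install --triplet x64-windows sdl2 sdl2-image openal-soft curl libogg libvorbis freetype glew boost-date-time boost-filesystem boost-format boost-locale boost-system physfs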

Every time you install a package, click on the build (gear) button to regenerate the CMake build files. You will get a new error mentioning the next missing library/package. Once the configure step succeeds, the tool will try to build SuperTux. Then you will get errors that .lib files are missing.

Those libs are provided by the same project. Build them manually one by one. Select the target:

Select subproject or target

Then build every single target. The .lib files will be built.

Finally, select the SuperTux2 target. This will build the .exe file.

In my case, when trying to run or debug, nothing happened. I manually launched the .exe file generated in the build folder, and an error message showed that 2 DLLs were not found:

squirrel.dll and sqstdlib.dll

Copy the 2 DLLs into the build debug folder.

Finally, I wanted to try vcpkg.json manifest file support. I only found that feature included in the latest roadmap:

UPDATE:

I downloaded the latest version, 0.6.2, from git and then followed my own steps. I got a different error when running the CMake configure step, specifically with the physfs_lib package.

This time it was more difficult to troubleshoot the issue. In the end, I commented out this line (940) from CMakeLists.txt:


This was not elegant, but in the end it allowed me to run the CMake configure step.

After that, the BUILD ALL option worked correctly and I did not have to build every target one by one.

Hope this helps.

Install OpenShift on Fedora 31 with Container Development Kit (CDK) 3.11

CDK is the Container Development Kit from Red Hat. It allows you to set up a RHEL-based OpenShift.

The guide I followed is at:

https://developers.redhat.com/products/cdk/overview

The guide does not mention that you can install CDK on Fedora; it specifically mentions support for installation on RHEL, macOS and Windows.

For this you will need to meet the following prerequisites:

  • KVM with libvirtd service or
  • VirtualBox (In my case, libvirtd was broken so I tested with Oracle VirtualBox 6.1.4)
  • RedHat Developer Subscription

The first step is to download the minishift version for OpenShift: cdk-3.11.0-1-minishift-linux-amd64

Rename this file to minishift and then run:

minishift setup-cdk 
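A minimal sketch of that preparation on Linux, assuming the binary was downloaded to the current directory:

# rename the CDK binary, make it executable and prepare the CDK environment
mv cdk-3.11.0-1-minishift-linux-amd64 minishift
chmod +x minishift
./minishift setup-cdk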

Then run it with the start option:

minishift start --memory 12G
minishift start with 12GB
minishift start. Continuation…
minishift start. Continuation
minishift start. Note instructions to log into console.
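From a terminal you can also log in with the oc client bundled by minishift; the URL below is a placeholder for the address printed by minishift start, and with this setup the developer user typically accepts any password:

# put the bundled oc client on the PATH
eval $(minishift oc-env)
# log in to the cluster; replace the address with the one printed by minishift start
oc login -u developer -p developer https://192.168.99.100:8443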

Launch console

minishift console
Launching console…

Then you will be able to log into the cluster.

minishift console after logging with developer user

The catalog looks impressive to me.

Please, note that:

CDK installs a single-node OpenShift cluster. The version deployed is 3.11.157; this version uses Kubernetes 1.11.

The latest version of OpenShift is 4.3 and the latest version of Kubernetes is 1.17 (as of March 2020). OpenShift 4.3, however, uses Kubernetes 1.16. Apparently there is no CDK for OpenShift 4.x. It looks to me that the way to install a development environment is by using Red Hat CodeReady Containers.

https://www.openshift.com/try

OpenShift vs Kubernetes

I want to describe the challenges one has to face when learning Kubernetes. It turns out that it is not that easy to describe what Kubernetes is.

After digging around the internet a little bit, I wanted to try Kubernetes. The number of options available is outrageous. By chance, I opted to use the Red Hat Container Development Kit (CDK). I learned this is a streamlined version of OpenShift, which is the enterprise version of Kubernetes from Red Hat.

CDK is also delivered as a downstream, Red Hat build of minishift.

minishift will create a VM with Kubernetes and Docker set up for you. You have two options to set up minishift with virtualization: either the VirtualBox driver or kvm/libvirt. I wanted to use kvm/libvirt on my laptop with Fedora 31, but I realized libvirtd is broken in my Fedora 31 setup, so I was forced to use the VirtualBox driver.

I also noticed this version of CDK only supports OpenShift 3, while Red Hat is already on OpenShift 4.

What I found confusing is the use of Docker in CDK, while Red Hat is pushing the use of CRI-O and podman instead of Docker. Add to the confusion the OKD project.

Anyway, I think the use of CDK is a little bit outdated, but let's give it a try as it looks like a good start to learn about Kubernetes.

I found this link which helped me to understand better:

https://cloudowski.com/articles/10-differences-between-openshift-and-kubernetes/

So the first step is to download cdk-3.11.0-1-minishift-linux-amd64.

openshift console command launching web console after logging with developer user

Protocol Buffers in Go

If you want to get started quickly with Google Protocol Buffers in the Go programming language, let's follow the guide from:

https://developers.google.com/protocol-buffers/docs/gotutorial

First we download the protobuf project from GitHub:

git clone https://github.com/protocolbuffers/protobuf.git

Then we set up our Go project:

mkdir -p ~/protobuf/src/github.com/protocolbuffers/protobuf/examples/

From the git repo, locate the examples folder:

cd ~/GIT/protobuf/examples

Then copy the source code to our Go project:

cp Makefile ~/protobuf/src/github.com/protocolbuffers/protobuf/examples
cp *go ~/protobuf/src/github.com/protocolbuffers/protobuf/examples
cp *proto ~/protobuf/src/github.com/protocolbuffers/protobuf/examples

Set up the GOPATH variable:

export GOPATH=~/protobuf

Then we install the protobuf protoc compiler. Download protoc from the following location, then unzip it into your preferred folder:

https://github.com/protocolbuffers/protobuf/releases

For instance, I unzipped it into my protoc folder:

Unzipping protoc compiler

Update your PATH to locate protoc:

export PATH=~/protoc/bin:$PATH

Then we install the Go plugin:

go get -u github.com/golang/protobuf/protoc-gen-go

Update your PATH to locate protoc-gen-go (installed in the previous step by go get):

export PATH=~/protobuf/bin:$PATH

That completes the Go project setup; now let's build the tutorial.

Go to the examples folder:

cd ~/protobuf/src/github.com/protocolbuffers/protobuf/examples
make clean
make go
Building Go Tutorial
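Under the hood, the go target of the Makefile essentially runs the protoc compiler with the Go plugin, along the lines of:

# generate addressbook.pb.go from the .proto definition
protoc --go_out=. addressbook.proto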

Running the sample

Adding records
Listing records
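To reproduce what the screenshots show, the Makefile builds two small example binaries; the binary names below may vary slightly between protobuf versions, and the data file name is just an example:

# add a person to the address book, then list its contents
./add_person_go addressbook.data
./list_people_go addressbook.data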

Create a GCP YouTube API key for mpsyt

After running mpsyt for a while, I started to face an issue with the latest branch:

Looking for “Tiesto” media

The following error was coming up when trying to search:

Error fetching data. Possible network issue.
Youtube Error 403: The request cannot be completed because you have exceeded your quota.


No results from search command

Then, looking into the mpsyt GitHub repo, you can find the solution there:

https://github.com/mps-youtube/mps-youtube/wiki/Troubleshooting

You will need to create a key from Google Cloud Platform to access the YouTube v3 API.

Go to the GCP console and, from your project, create an API key for the YouTube Data API v3:

GCP console. YouTube Data API v3

Once you generate the API v3 key (I selected one for CLI use), update the mpsyt client with the new key:

set api_key command inside mpsyt
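For reference, this is what that step looks like typed at the mpsyt prompt; the key value is a placeholder for the key generated above:

set api_key YOUR_YOUTUBE_API_V3_KEY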

Voilà, now I’m able to use mpsyt again to access my favorite stuff:

Migrating my docker containers to podman

In older posts, I showed how to create a WordPress personal blog with Docker. In particular, this website is hosted on my laptop using Docker containers running on Fedora. Everything was good until I updated Fedora to version 31. Long story short, I was not able to start my containers on Fedora 31. The reason is that Docker does not support cgroups v2, a new feature enabled by default in Fedora 31. In order to start my containers, the only available solution at this time is to revert the kernel to use cgroups v1.

I’ll talk about control groups, or cgroups, in the next post. Now, I’ll describe what I had to do to migrate this website from Docker to podman.

The simple idea of removing the Docker daemon from container management sounds good to me; for that reason, and because I like new developments, I would prefer to stay with the original kernel setup in Fedora 31.

I have to be honest: to make this migration work I relied on 2 laptops, both with Fedora 31, but the one hosting Docker with the kernel changed to use cgroups v1. I needed a running Docker version of my WordPress installation in order to migrate the images.

So, I decided to migrate my WordPress website from Docker on Fedora 31 with cgroups v1 to a new computer with Fedora 31, cgroups v2 and podman.

The theory is simple, as shown in the samples from the podman guides; a sketch of the full sequence follows the list below. In practice, I did face some issues:

From the machine running docker:

  • Save the mysql Docker container as a tar file.
  • Save the wordpress Docker container as a tar file.
  • Save the volumes from the Linux filesystem used by the mysql and wordpress databases.

From the machine with podman:

  • Copy the tar files for the containers and volumes to the machine.
  • Load the images with podman.
  • Create new containers from the images.
  • Start the containers.
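Here is the sketch of the whole sequence mentioned above; the wordpress and mariadb image names are assumptions based on this setup, the destination host is a placeholder, and the volume backups travel the same way:

# on the machine running docker: save the images as tar files
docker save -o /tmp/wordpress.img wordpress
docker save -o /tmp/mariadb.img mariadb

# copy the images (and the volume backups) to the podman machine
scp /tmp/wordpress.img /tmp/mariadb.img user@podman-host:/tmp/

# on the machine with podman: load the images, then create and start containers from them
podman load -i /tmp/wordpress.img
podman load -i /tmp/mariadb.img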

The list of steps is really simple. However, the one big issue I faced was starting the WordPress image in podman. The container will not start under podman because the HTTP server inside the Docker image is using port 80, and rootless podman won't be able to start the container because of this restriction: an unprivileged process cannot bind ports below 1024.

What I had to do was go back to the Docker image for WordPress, edit the Apache config files to change the port from 80 to 8080, save the container as a new image and then import this image into podman.
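A sketch of that change, assuming the stock wordpress image layout (Debian Apache configuration under /etc/apache2) and a running container named wordpress:

# change Apache from port 80 to 8080 inside the running wordpress container
docker exec wordpress sed -i 's/Listen 80/Listen 8080/' /etc/apache2/ports.conf
docker exec wordpress sed -i 's/:80>/:8080>/' /etc/apache2/sites-enabled/000-default.conf

# save the modified container as a new image and export it for podman
docker commit wordpress wordpress-8080
docker save -o /tmp/wordpress-8080.img wordpress-8080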

There are 2 ways to export a Docker container: as a container or as an image. I selected the image method and then created a container starting from the image.

Look at the current state of my Docker containers in Fedora 31 after I changed the kernel setup to use cgroups v1:

docker ps
Current containers

These are my old containers hosting this WordPress blog. Beware here: look at the ports section and notice how host port 80 is mapped to container port 80. If we export this container as is, it won't be handled correctly by podman. Actually, podman will complain that the container is using privileged port 80, and as podman is not run with root privileges, you will not be able to launch the WordPress container hosting Apache httpd with podman.

Let me show this scenario with images. Before that, let's migrate the wordpressdb container. You take your container and then, by using the docker save or docker export commands, you create a tar file. STOP here: you need to decide whether you want to export a container or an image. Notice that if you export an image, then you need to import the image in podman and create a container from the image. If you export a container from Docker, then you will import a container in podman as well.

For my mariadb container I will use both options, but for the WordPress container, which also hosts Apache, I will need to use the image export/import option.

docker save -o /tmp/wordpress.img wordpress

Note that we are saving the wordpress image as a tar file. The wordpress parameter is the name of the image.

Creating tar of image

Now let's move to the computer with podman; on this machine we will use the tar file and the backup of the volume to migrate the image.