It’d be wise to request a KVM before you actually give it a try. Just in case. Learn from my mistakes.
This is business as usual: go to the Hetzner Robot, select your server, and boot into the rescue system. You'll get an email with the root password, or you can use your public keys.
First, we need to grab an image of the OS we want to install. I’ll use RHEL 9 as an example, but you can use any OS you want.
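For example, assuming the ISO is reachable from the rescue system (the URL and filename here are illustrative; RHEL images normally require a Red Hat account, so you may have to push the ISO up from your workstation instead):

    wget https://example.com/isos/rhel-9.4-x86_64-dvd.iso
    # or copy it over from your own machine:
    scp rhel-9.4-x86_64-dvd.iso root@<server-ip>:/root/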
Then acquire the network configuration from the rescue system; you might need it later. For RHEL it's not strictly necessary: you can set the network configuration in the installer to automatic, and DHCP will handle the rest. It was necessary for Proxmox, though.
The INTERFACE_NAME might need some tweaking; ID_NET_NAME_ONBOARD may not exist.
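A few commands that should capture everything worth noting (a sketch; the interface name eth0 is an assumption):

    # current addresses and routes
    ip addr show
    ip route show
    # the predictable interface name, if the rescue system calls it eth0
    udevadm info -q property -p /sys/class/net/eth0 | grep ID_NET_NAME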
Kick off the installation with QEMU. You can use the following command as a template; just replace the image and the disks with your own.
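A sketch of what that template can look like; CPU count, memory size, disks, and ISO name are all placeholders to adjust:

    qemu-system-x86_64 -enable-kvm \
      -smp 4 -m 4096 \
      -hda /dev/sda -hdb /dev/sdb \
      -cdrom rhel-9.4-x86_64-dvd.iso \
      -boot d \
      -vnc :0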
Alternatively, if you have more than 2 disks, you can use -hda /dev/sda -hdb /dev/sdb -hdc /dev/sdc -drive file=proxmox-ve.iso,index=3,media=cdrom or similar, because -cdrom conflicts with -hdc.
Now you can connect to the VNC server with your favorite VNC client over port 5900.
If your server uses UEFI boot, you need to add -bios /usr/share/qemu/OVMF.fd (or wherever your OVMF.fd is) to the QEMU command. (Hint: updatedb + locate.)
After your OS is installed, you can shut down the QEMU session:
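If QEMU is running in the foreground of your rescue shell, Ctrl+C does the trick; a cleaner way is the QEMU monitor (Ctrl+Alt+2 in the VNC session):

    (qemu) quit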
After the installation is finished, reboot the guest OS, then restart the QEMU session:
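Same sketch as before, just without the installation medium, so the guest boots from disk:

    qemu-system-x86_64 -enable-kvm \
      -smp 4 -m 4096 \
      -hda /dev/sda -hdb /dev/sdb \
      -vnc :0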
Connect to the guest via VNC again, apply the network settings from step 1, and make sure you can log in remotely from the server.
This is the time to grab your public keys and add them to the guest OS.
Pro tip: curl https://github.com/username.keys >> ~/.ssh/authorized_keys
Shut down the guest OS gracefully:
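Either from a shell inside the guest, or by sending an ACPI power-down event from the QEMU monitor:

    # inside the guest:
    shutdown -h now
    # or from the QEMU monitor (Ctrl+Alt+2 in the VNC session):
    (qemu) system_powerdown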
Then reboot the rescue system.
Get a list of RSAT components:
Get-WindowsCapability -Name RSAT* -Online | Select-Object -Property Name, State
Install what you need, e.g.:
Add-WindowsCapability -Name "Rsat.GroupPolicy.Management.Tools~~~~0.0.1.0" -Online
Azure offers a free tier of one parallel job (or 10 for open-source projects) with 1,800 minutes per month. This might be enough for small projects, but if you have a larger project, you'll need to pay for more parallel jobs. The pricing is based both on the number of parallel jobs you need and the number of minutes you need per month.
The pricing is per month, and you pay for the maximum number of parallel jobs you need at any one time; this is calculated daily. In other words, for each and every day you pay for the maximum number of parallel jobs that were in effect on that day. If you only use pipelines from 9 to 5, you still pay the whole daily fee ($40 / 30 days) for all of them.
But if you remove all (paid) parallel jobs for the weekends, for example, you won't pay for those days at all.
Since this isn’t documented anywhere, I’ve asked Azure support to confirm it:
Our commerce service sends this value to Azure billing once every day. To determine what amount to send, we do the following:
Figure out the maximum concurrency that has been set by the customer during the day.
It does not matter whether they reset it to 0, since we look at the max value.
If on the next day they set it to 2 for a few minutes, then the bill for that day will be 2 x 1.33 = 2.66.
So, you can set it to 0 before non-working days to manage cost, since it's billed daily.
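To make the arithmetic concrete, assuming the $40/month list price: one paid parallel job costs $40 / 30 ≈ $1.33 per day, so zeroing it out for the roughly 8 weekend days in a month saves about 8 × $1.33 ≈ $10.67 per job.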
This is different from your regular Azure AD / Entra directory. Log in to the Azure Portal, click “Create a resource”, and find Azure Active Directory B2C.
Here, click “Create new tenant”, and fill in its details.
Next, open the newly created tenant and go to user flows. Create 3 new flows: sign up/sign in, edit profile, and reset password. When asked for a version, select "Recommended".
Name: we will reference this (along with the domain entered above) in our app settings. If you enter SignUpSignIn, the final name will be B2C_1_SignUpSignIn.
Identity Providers: you can set up different identity providers, with email being the default one. Let’s keep it for now.
MFA: pick whatever you'd like, but keep in mind that some of these settings generate extra charges.
User attributes: here you can specify what pieces of data you’d like the user to provide, and what would be passed to your app in the token responses.
On the B2C tenant portal, create a new app registration. Provide the name and the redirect URI, which in our case would be something like https://localhost:5001/signin-oidc.
Make sure you enable the implicit grant (ID tokens).
Create your MVC app like this:
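Presumably the stock template; the project name is a placeholder:

    dotnet new mvc -n B2CDemo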
Add the following Nuget packages:
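For B2C in ASP.NET Core these would typically be Microsoft.Identity.Web and its UI companion:

    dotnet add package Microsoft.Identity.Web
    dotnet add package Microsoft.Identity.Web.UI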
Now open Program.cs and add the authentication middleware: app.UseAuthentication();
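A minimal sketch of the full wiring, assuming the Microsoft.Identity.Web packages above and an AzureAdB2C section in configuration:

    var builder = WebApplication.CreateBuilder(args);

    // reads the AzureAdB2C section from appsettings.json
    builder.Services.AddMicrosoftIdentityWebAppAuthentication(
        builder.Configuration, "AzureAdB2C");
    builder.Services.AddControllersWithViews().AddMicrosoftIdentityUI();

    var app = builder.Build();

    app.UseRouting();
    app.UseAuthentication();
    app.UseAuthorization();
    app.MapDefaultControllerRoute();
    app.Run();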
After that, open your appsettings.json and make the following changes:
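Something along these lines, assuming the Microsoft.Identity.Web conventions; the tenant name, client ID, and policy names are placeholders matching the user flows created earlier:

    "AzureAdB2C": {
      "Instance": "https://yourtenant.b2clogin.com",
      "Domain": "yourtenant.onmicrosoft.com",
      "ClientId": "<application (client) id>",
      "CallbackPath": "/signin-oidc",
      "SignUpSignInPolicyId": "B2C_1_SignUpSignIn",
      "ResetPasswordPolicyId": "B2C_1_ResetPassword",
      "EditProfilePolicyId": "B2C_1_EditProfile"
    }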
The general idea is to have a virtual network (this time at Hetzner) with a reverse proxy VM in front of a couple of other VMs that have no public IPs. Sometimes these worker VMs also need Internet access, so let's set up a NAT gateway.
Add a route to the vnet, where the destination is 0.0.0.0/0, and the gateway is the IP of the… well, the gateway.
On the server, enable forwarding with echo 1 > /proc/sys/net/ipv4/ip_forward, then iptables -t nat -A POSTROUTING -s '10.0.0.0/16' -o eth0 -j MASQUERADE; of course, replace any IP addresses (and the interface name) as needed.
On the clients, ip route add default via 10.0.0.1, then edit /etc/resolv.conf and add nameservers, each on their own line, like nameserver 1.1.1.1
Update all machines. I don't know why the original guide says this, but hey: yum update -y && yum upgrade -y
Lastly, to make everything persistent, on the server edit /etc/NetworkManager/dispatcher.d/ifup-local and add:
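A sketch, assuming it simply re-applies the commands from above (subnet and interface as before):

    #!/bin/sh
    # re-enable forwarding and NAT whenever an interface comes up
    echo 1 > /proc/sys/net/ipv4/ip_forward
    iptables -t nat -A POSTROUTING -s '10.0.0.0/16' -o eth0 -j MASQUERADE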
Finally, chmod +x /etc/NetworkManager/dispatcher.d/ifup-local
On the client, first do yum remove hc-utils -y, then edit /etc/NetworkManager/dispatcher.d/ifup-local and add:
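Again a sketch, assuming it restores the default route from above:

    #!/bin/sh
    # route outbound traffic through the NAT gateway
    ip route add default via 10.0.0.1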
And again, chmod +x /etc/NetworkManager/dispatcher.d/ifup-local
Once it’s done, I don’t want to touch it ever again.
Now, I probably should have been familiar with this already, but it can be achieved easily by using the HasQueryFilter method in your OnModelCreating.
Basically, you can do something like this:
modelBuilder.Entity<Post>().HasQueryFilter(p => !p.IsDeleted);
Yup, that's about it; now you don't have to hard-code filters in your EF queries or repositories. This is especially useful when you have a lot of queries, or when you have queries that are generated by EF, such as when you use Include.
You might be wondering how you can disable the filter, for example when you want to retrieve all records, including the soft-deleted ones. This can be done by using IgnoreQueryFilters:
    var posts = db.Posts
        .IgnoreQueryFilters()
        .ToList();
Pretty cool, if you ask me.
To do that, I use the manual mode of Certbot with DNS verification, which requires you to create a DNS TXT record with a specific value. This is a bit cumbersome, but it works (or there might be a DNS plugin available for your provider). However, the generated certificate cannot be directly imported into Azure Key Vault, which I usually use for my projects.
The DNS challenge itself isn't essential here, but in some scenarios it's the more feasible option: for example, I once had to generate a certificate for a domain that pointed to an Azure CDN endpoint, which in turn pointed to an HTTPS-only Azure Storage account.
This post is more of a note to self, but maybe it helps someone else as well. Also, the commands were for macOS, so your mileage may vary.
Generating the certificate is pretty straightforward, but make sure you pick RSA keys, as Azure Key Vault does not support ECDSA keys yet.
sudo certbot certonly -d example.com,www.example.com --manual --preferred-challenges dns --key-type rsa
The next step is to convert the certificate to a PFX file, which can be imported to Azure Key Vault. This can be done with the following command:
sudo openssl pkcs12 -export -in /etc/letsencrypt/live/example.com/fullchain.pem -inkey /etc/letsencrypt/live/example.com/privkey.pem -out ./export.pfx
The resulting PFX file can be imported to Azure Key Vault, thus making it usable for Azure CDN and other Azure services.
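If you prefer scripting the import over clicking through the portal, a sketch with the Azure CLI (vault and certificate names are placeholders):

    az keyvault certificate import \
      --vault-name my-vault \
      --name example-com \
      --file ./export.pfx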
The original problem is that in EF Core I didn't find a trivial way to map a list of integers to a single column in a database table. Most of the solutions I found online used a custom type mapper, which is a bit of overkill for such a simple problem. Considering these are actual primitive types, and not foreign keys for an entity type, I also didn't want to create a separate table for them.
My favorite solution was to create a custom converter and comparer for the List<int> type that serializes the list of numbers to and from JSON. This is a very simple solution, and it works with EF Core 6.0.
Now the caveat here is that while we save some complexity by not using a separate table, it also makes it harder to do any kind of queries against the numbers – in our situation it’s not a big deal as these are arbitrary numbers, but you may want to be mindful of this.
First, I’ve created a static class for these utils to not pollute the original DbContext.
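A sketch of what that utility class can look like, using System.Text.Json; the class and member names here are my own:

    using System.Text.Json;
    using Microsoft.EntityFrameworkCore.ChangeTracking;
    using Microsoft.EntityFrameworkCore.Storage.ValueConversion;

    public static class IntListConversions
    {
        // serialize the list to a JSON string on the way to the database
        public static readonly ValueConverter<List<int>, string> Converter =
            new(v => JsonSerializer.Serialize(v, (JsonSerializerOptions?)null),
                v => JsonSerializer.Deserialize<List<int>>(v, (JsonSerializerOptions?)null) ?? new List<int>());

        // EF Core needs a comparer to detect changes inside the list
        public static readonly ValueComparer<List<int>> Comparer =
            new((a, b) => a != null && b != null && a.SequenceEqual(b),
                v => v.Aggregate(0, (hash, i) => HashCode.Combine(hash, i)),
                v => v.ToList());
    }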
Now on to the DbContext. You have to add the converter and comparer in the OnModelCreating method:
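Roughly like this, assuming a hypothetical Post entity with a List<int> Numbers property:

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // persist Post.Numbers as JSON in a single column
        modelBuilder.Entity<Post>()
            .Property(p => p.Numbers)
            .HasConversion(IntListConversions.Converter, IntListConversions.Comparer);
    }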
That's it. Now your list of integers will be persisted as a single column in the database table.
Everything looked fine until I noticed I couldn't sign in, even though I was absolutely certain that my username and password were correct. After some not-so-quick googling, here's the workaround:
sudo nano /usr/NX/etc/server.cfg
Set EnablePasswordDB 1, then add your user to the NX password database:
cd /usr/NX/bin && sudo ./nxserver --useradd $USER