So I set up a Kubernetes cluster on my local bare metal server.
I set it up with conjure-up canonical-kubernetes, which worked really well and gave me a Kubernetes cluster with everything I needed (it did take a really long time on my nine-year-old server).
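For reference, the setup boils down to something like this (a minimal sketch; I installed conjure-up as a snap, but your install method may differ):

```shell
# Install conjure-up and run the canonical-kubernetes spell.
# The interactive wizard asks where to deploy (e.g. localhost via LXD).
sudo snap install conjure-up --classic
conjure-up canonical-kubernetes
```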
So I started a small service on the cluster and wanted to access it from my laptop.
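Just to illustrate the kind of thing I mean, here is a hypothetical small service (the nginx image and the "hello" names are placeholders, not what I actually ran):

```shell
# Deploy a throwaway test service and expose it.
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --port=80 --type=LoadBalancer

# Note the IP you want to reach from the laptop.
kubectl get svc hello
```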
To do this I found I needed to route everything from my laptop destined for the cluster's IP range via the bare metal server. I did this with sudo ip route add 192.168.122.0/24 via 192.168.0.27 dev enp0s25 on my laptop (this is BTW not persistent, so when I reboot I have to run it again). What this basically does is tell my laptop “whenever I try to access anything in this CIDR (192.168.122.0/24), ask the bare metal server (192.168.0.27)”.
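As a sketch (the interface name enp0s25 and the addresses are specific to my setup, yours will almost certainly differ):

```shell
# Send traffic for the cluster's CIDR via the bare metal server.
sudo ip route add 192.168.122.0/24 via 192.168.0.27 dev enp0s25

# Verify which route the laptop would pick for a cluster IP.
ip route get 192.168.122.10
```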
After doing this I could not understand why it would not work: if I tried to ping the load balancer it would just refuse to connect.
After some thinking and searching on the interwebs I finally found out that the server also needed to know that it should forward requests to this IP range.
This is IP forwarding, done with an iptables rule: sudo iptables -I FORWARD -p all -d 192.168.122.0/24 --out-interface virbr0 -j ACCEPT
This line tells iptables “accept forwarded traffic destined for this CIDR going out the virbr0 interface (the one used by lxc)”. After adding this rule to iptables on the server it worked, and I could reach my ingress controller and use kubectl.
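Putting the server side together, it was roughly this (whether net.ipv4.ip_forward was already enabled depends on your setup; the virtualization stack had probably turned it on for me already):

```shell
# Make sure the kernel is willing to forward packets between interfaces.
sudo sysctl -w net.ipv4.ip_forward=1

# Accept forwarded traffic destined for the cluster CIDR going out virbr0.
sudo iptables -I FORWARD -p all -d 192.168.122.0/24 --out-interface virbr0 -j ACCEPT
```

Like the route on the laptop, this iptables rule is not persistent across reboots unless you save it (for example with iptables-save or the iptables-persistent package).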

As always, reading something from Julia Evans was useful.