Cannot access a Kubernetes pod from within the cluster

I tried to install Kubernetes with kubeadm on 3 virtual machines running Debian on my laptop, one as the master node and the other two as worker nodes. I followed the tutorials on kubernetes.io exactly. I initialized the cluster with kubeadm init --pod-network-cidr=10.244.0.0/16 and joined the workers with the corresponding kubeadm join command. I installed Flannel as the overlay network with kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml.
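
For reference, the full sequence was roughly the following (reconstructed from the kubeadm tutorial; the join token and CA hash are placeholders):

# On the master node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# On each worker node, with the token and hash printed by kubeadm init
sudo kubeadm join 192.168.1.100:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>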

The response of kubectl get nodes -o wide looks fine:

NAME        STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION   CONTAINER-RUNTIME
k8smaster   Ready    master   20h   v1.18.3   192.168.1.100   <none>        Debian GNU/Linux 10 (buster)   4.19.0-9-amd64   docker://19.3.9
k8snode1    Ready    <none>   20h   v1.18.3   192.168.1.101   <none>        Debian GNU/Linux 10 (buster)   4.19.0-9-amd64   docker://19.3.9
k8snode2    Ready    <none>   20h   v1.18.3   192.168.1.102   <none>        Debian GNU/Linux 10 (buster)   4.19.0-9-amd64   docker://19.3.9

The response of kubectl get pods --all-namespaces -o wide doesn't show any errors either:

NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE    IP              NODE        NOMINATED NODE   READINESS GATES
kube-system   coredns-66bff467f8-7hlnp             1/1     Running   9          20h    10.244.0.22     k8smaster   <none>           <none>
kube-system   coredns-66bff467f8-wmvx4             1/1     Running   11         20h    10.244.0.23     k8smaster   <none>           <none>
kube-system   etcd-k8smaster                      1/1     Running   11         20h    192.168.1.100   k8smaster   <none>           <none>
kube-system   kube-apiserver-k8smaster            1/1     Running   9          20h    192.168.1.100   k8smaster   <none>           <none>
kube-system   kube-controller-manager-k8smaster   1/1     Running   11         20h    192.168.1.100   k8smaster   <none>           <none>
kube-system   kube-flannel-ds-amd64-9c5rr          1/1     Running   17         20h    192.168.1.102   k8snode2    <none>           <none>
kube-system   kube-flannel-ds-amd64-klw2p          1/1     Running   21         20h    192.168.1.101   k8snode1    <none>           <none>
kube-system   kube-flannel-ds-amd64-x7vm7          1/1     Running   11         20h    192.168.1.100   k8smaster   <none>           <none>
kube-system   kube-proxy-jdfzg                    1/1     Running   11         19h    192.168.1.101   k8snode1    <none>           <none>
kube-system   kube-proxy-lcdvb                    1/1     Running   6          19h    192.168.1.102   k8snode2    <none>           <none>
kube-system   kube-proxy-w6jmf                    1/1     Running   11         20h    192.168.1.100   k8smaster   <none>           <none>
kube-system   kube-scheduler-k8smaster            1/1     Running   10         20h    192.168.1.100   k8smaster   <none>           <none>

Then I tried to create a pod with kubectl apply -f podexample.yml, where podexample.yml has the following content:

apiVersion: v1
kind: Pod
metadata:
  name: example 
spec:
  containers:
  - name: nginx 
    image: nginx

kubectl get pods -o wide shows that the pod was created on worker node 1 and is in the Running state.

NAME      READY   STATUS    RESTARTS   AGE    IP            NODE       NOMINATED NODE   READINESS GATES
example   1/1     Running   0          135m   10.244.1.14   k8snode1   <none>           <none>

The thing is, when I try to connect to the pod from the master node with curl -I 10.244.1.14, I get the following response:

curl: (7) Failed to connect to 10.244.1.14 port 80: No route to host

But the same command succeeds on worker node 1:

HTTP/1.1 200 OK
Server: nginx/1.17.10
Date: Sat, 23 May 2020 19:45:05 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 14 Apr 2020 14:19:26 GMT
Connection: keep-alive
ETag: "5e95c66e-264"
Accept-Ranges: bytes
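
To narrow down whether only the master host is affected, the same request can also be made from another pod rather than from a node; a quick throwaway pod (busybox image used just as an example) would look something like this:

kubectl run tmp --rm -it --image=busybox:1.28 --restart=Never -- wget -qO- http://10.244.1.14

Here --restart=Never makes kubectl run create a plain pod, and --rm removes it again once wget exits.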

I thought maybe that was because kube-proxy was somehow not running on the master node, but ps aux | grep kube-proxy shows that it is running:

root     16747  0.0  1.6 140412 33024 ?        Ssl  13:18   0:04 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=k8smaster
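
For completeness, the same thing can be checked through the API as well (the pod name is taken from the kube-system listing above); I'm mostly including this in case the kube-proxy logs turn out to contain something relevant:

kubectl -n kube-system get pods -o wide | grep kube-proxy
kubectl -n kube-system logs kube-proxy-w6jmf --tail=20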

Then I checked the kernel routing table with ip route, and it shows that packets destined for 10.244.1.0/24 are routed to flannel.1.

default via 192.168.1.1 dev enp0s3 onlink 
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1 
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 
169.254.0.0/16 dev enp0s3 scope link metric 1000 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.1.0/24 dev enp0s3 proto kernel scope link src 192.168.1.100 
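
Since 10.244.1.0/24 goes over flannel.1, the only further idea I have is to look at the VXLAN device itself and at the traffic while the curl from the master is running (interface names as in the route table above; 8472/udp should be the default flannel VXLAN port):

# On the master node
ip -d link show flannel.1
bridge fdb show dev flannel.1

# In a second terminal on the master, while curl -I 10.244.1.14 is running
sudo tcpdump -ni enp0s3 udp port 8472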

Other than that, everything looks fine to me, and I don't know what else to check to find the problem. Am I missing something?

Update:

If I start an NGINX container on worker node 1 and map its port 80 to port 80 of the worker node 1 host, then I can connect to it from the master node with curl -I 192.168.1.101. Also, I didn't add any iptables rules myself and there is no firewall daemon such as UFW installed on the machines, so I don't think it's a firewall issue.
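
To back that up, these are the usual suspects I would still check (IP forwarding, bridge netfilter and the iptables FORWARD policy); the second command assumes the br_netfilter module is loaded:

sysctl net.ipv4.ip_forward
sysctl net.bridge.bridge-nf-call-iptables
sudo iptables -S FORWARD | head -n 5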