
Allow tinydns service through iptables firewall

If you use iptables and want to allow the tinydns service to answer iterative queries, add the following rules:

Note: In this example tinydns serves from 192.168.1.10; change the address to match your own setup. The INPUT rule positions (7 and 8) may also differ depending on your existing rule list.
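If you are unsure which position to insert at, you can first list your current INPUT rules with their numbers (the positions 7 and 8 used below only apply to a rule set like the one shown at the end of this post):

# iptables -L INPUT --line-numbers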

# iptables -I INPUT 7 -p udp -s 0/0 --sport 1024:65535 -d 192.168.1.10 --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT

# iptables -A OUTPUT -p udp -s 192.168.1.10 --sport 53 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT

# iptables -I INPUT 8 -p udp -s 0/0 --sport 53 -d 192.168.1.10 --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT

# iptables -A OUTPUT -p udp -s 192.168.1.10 --sport 53 -d 0/0 --dport 53 -m state --state ESTABLISHED -j ACCEPT

# service iptables save
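The service iptables save command relies on the classic iptables init script found on RHEL/CentOS. If that script is not available on your system, a rough equivalent (assuming the default rules file location on CentOS) is:

# iptables-save > /etc/sysconfig/iptables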

After saving the rules, the iptables --list command should show a rule list like this:

# iptables --list
Chain INPUT (policy ACCEPT)
target     prot opt source        destination
ACCEPT     all  --  anywhere      anywhere      state RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere      anywhere
ACCEPT     all  --  anywhere      anywhere
ACCEPT     tcp  --  anywhere      anywhere      state NEW tcp dpt:ssh
ACCEPT     tcp  --  anywhere      anywhere      state NEW tcp dpt:vnc-server
ACCEPT     udp  --  anywhere      anywhere      state NEW udp dpt:vnc-server
ACCEPT     udp  --  anywhere      192.168.1.10  udp spts:1024:65535 dpt:domain state NEW,ESTABLISHED
ACCEPT     udp  --  anywhere      192.168.1.10  udp spt:domain dpt:domain state NEW,ESTABLISHED
REJECT     all  --  anywhere      anywhere      reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source        destination
REJECT     all  --  anywhere      anywhere      reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source        destination
ACCEPT     udp  --  192.168.1.10  anywhere      udp spt:domain dpts:1024:65535 state ESTABLISHED
ACCEPT     udp  --  192.168.1.10  anywhere      udp spt:domain dpt:domain state ESTABLISHED
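
To verify that queries now get through the firewall, you can query the server from another host with dig. The domain name here is only a placeholder; use a zone your tinydns instance actually serves:

$ dig @192.168.1.10 example.com A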

