Saturday, January 29, 2011

How do I install 'repeat' on Ubuntu?

This StackOverflow question mentions a Unix command called 'repeat'. It sounds like it does exactly what I want. From reading the question and answers, I think the user is on Mac OS X.

However, that command is not installed by default on Ubuntu, and I can't find the package to install to get it. What should I install?

  • From the prompt, I'd guess it's a csh builtin.

    And from reading "man csh", that appears to be the case:

      repeat count command
               The specified command, which is subject to the same restrictions
               as the command in the one line if statement above, is executed
               count times.  I/O redirections occur exactly once, even if count
               is 0.
    

    So in order to use it, either type "csh" and issue it from the command line, or write your script so that it uses #!/bin/csh as the interpreter at the top. Here are some csh basics to get you started.
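
    For example, to try the builtin without changing your login shell, a minimal sketch (assuming csh or tcsh is installed):

    csh -c 'repeat 3 echo "hello from csh"'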

    kmarsh : Better yet, don't get started on an obsolete, incompatible shell. Learn real shell programming and write yourself a repeat alias or function in Bash, a (mostly) POSIX-standard shell.
    Matt Simmons : Eh. I'm a bash guy, but csh doesn't bother me. I know a *lot* of people that would say the exact same thing as you, except change csh to bash and bash to korn. There's a lot of truth to the fact that korn is more advanced than bash. It's all what you're comfortable with and what gets the job done. csh is going to be around for a long, long time
    Joseph Kern : Or you could change csh to bash and bash to zsh.
    Raphink : While quite a few people have their favorite shell, I find that most sysadmins know bash, while there are few who know csh, zsh and others, and since Ubuntu comes with bash by default (for the users at least, it has dash for root), it's still nicer to play with bash when possible. That's just my opinion though.
  • I can't find this command on Ubuntu. It doesn't seem to exist. I even find it very weird that the post on StackOverflow says it's a builtin command when I can't find it on Ubuntu.

    Edit: As Matt noted, it is a builtin csh command. The following are tips to do much the same with bash.

    If what you want is to repeat a command n times, you can do that with a loop (n has to be a literal number here, since brace expansion doesn't expand variables):

    for i in {1..n}; do yourcommand; done
    

    For example, to print "It works" 100 times, use:

    for i in {1..100}; do echo "It works"; done
    

    If you want to have a repeat function, you could add something like this to your ~/.bashrc:

    repeat() {
        local times="$1"
        shift
        local cmd="$*"

        for (( i = 1; i <= times; i++ )); do
            eval "$cmd"
        done
    }
    

    Source your ~/.bashrc again with . ~/.bashrc and you can call it:

     $ repeat 2 date
    Mon Dec 21 14:25:50 CET 2009
    Mon Dec 21 14:25:50 CET 2009
    
     $ repeat 3 echo "my name is $USER"
    my name is raphink
    my name is raphink
    my name is raphink
    
    Matt Simmons : It's a "shell builtin", which means it's sort of like "echo" in that although there is a /bin/echo, if you just type "echo", it doesn't get executed. bash (or whatever your shell is) has an "echo" command that it runs instead, which prevents the system from having to launch another process.
    Matt Simmons : Although your way works as well
    Raphink : Yes Matt, I read your comment, thanks. However, it's not a builtin in bash; the command doesn't exist when I use bash.
    Dennis Williamson : You can avoid calling the external `seq` by using `for ((i = 1; i <= $times; i++ ))`
    Raphink : Yes, that's probably more efficient Dennis, although I find the `seq` syntax more readable somehow.
    Dennis Williamson : You should note that in your "my name is" example, `$USER` is evaluated before the function is called and because of that, something that changes over time wouldn't be reflected during the repeated runs. In order to fix that, you'd have to do `eval "$cmd"` in your function instead of just `$cmd` and use single quotes around the argument to `repeat` to prevent early evaluation. From there, quoting issues just get hairier.
    Raphink : Nice suggestion Dennis. I'll fix my piece of code with this.
    From Raphink
  • You could use watch, which is a standard command available from any shell. For example:

    watch -n 5 date
    
    From Tobu

How to filter http traffic in Wireshark?

I suspect my server has a huge load of http requests from its clients. I want to measure the volume of http traffic. How can I do it with Wireshark? Or probably there is an alternative solution using another tool?

This is how a single HTTP request/response looks in Wireshark. The ping is generated by the WinAPI function ::InternetCheckConnection(). [screenshot]

Thanks!

  • Ping packets should use an ICMP type of 8 (echo) or 0 (echo reply), so you could use a capture filter of:

    icmp
    

    and a display filter of:

    icmp.type == 8 || icmp.type == 0
    

    For HTTP, you can use a capture filter of:

    tcp port 80
    

    or a display filter of:

    tcp.port == 80
    

    or:

    http
    

    Note that a filter of http is not equivalent to the other two, which will include handshake and termination packets.

    If you want to measure the number of connections rather than the amount of data, you can limit the capture or display filters to one side of the communication. For example, to capture only packets sent to port 80, use:

    tcp dst port 80
    

    Couple that with an http display filter, or use:

    tcp.dstport == 80 && http
    

    For more on capture filters, read "Filtering while capturing" from the Wireshark user guide, the capture filters page on the Wireshark wiki, or the pcap-filter(7) man page. For display filters, try the display filters page on the Wireshark wiki. The "Filter Expression" dialog box can help you build display filters.
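
    If the goal is to measure the volume of HTTP traffic rather than inspect individual packets, tshark's I/O statistics are one option. A minimal sketch, assuming the capture interface is eth0:

    tshark -i eth0 -a duration:60 -f "tcp port 80" -q -z io,stat,10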

    par : Sorry, I forgot to mention the details of the "ping" request. This is the Windows way of pinging. It seems ICMP has no relation to my case.
    par : See the screenshot of the ping in Wireshark I just attached.
    Simeon Pilgrim : I changed the question from 'ping' to 'http' so your answer will not make sense in context, but I +1 because it's a good ping answer.
    From outis
  • It's not a ping. A ping, as already said by outis, is an ICMP echo request. Your trace displays the establishment and immediate termination of an HTTP connection, and that's what InternetCheckConnection() does. The IP in question, 77.222.43.228, resolves to http://repkasoft.com/, which, I guess, is the URL you pass to InternetCheckConnection().

    You can filter traffic to or from this IP by using the capture filter host 77.222.43.228 or the display filter ip.addr == 77.222.43.228.

  • Using Wireshark 1.2+, I would run this batch file:

    :: Script to save a wireshark trace
    :: tshark -D to get interface id
    @echo off
    C:
    cd C:\Temp\NetTracing
    set PATH=%PATH%;C:\Program Files\Wireshark
    echo Tracing host 127.1 or 172.1.1.1 or 10.0.0.1
    
    tshark.exe -i 4 -a duration:900 -S -f "tcp port 80" -w trace.cap
    
    From djangofan

best RAID configuration for postgres

I'm purchasing a server with 8 SAS disks to perform database-intensive procedures. Currently the main bottleneck is large index scans in Postgres.

I'm currently deciding between 8x300GB 10k disks and 8x146GB 15k disks, as it would be more convenient to have 200GB+ of logical space.

The spec sheet for the RAID controller states: "Integrated Hardware RAID-0, -1, -1E, optional RAID-5, -6, -10, -50, -60"

What would be the best RAID configuration, and what choice in disks would be most suitable?

I'm new to configuring RAID and postgres and appreciate the advice.

  • Go for the 8x146GB disks in a big RAID10 array (4 mirrored pairs striped together). This should provide you the best speed in terms of IO access.

    pstanton : does that mean with 4 mirrored pairs the logical disk space would be 146x2=292Gb?
    womble : No, it would be 146GB*4 since you've got four mirrored pairs of 146GB drives (so 584GB, less HDD manufacturer lie factor, filesystem and LVM overhead, etc).
    From rodjek
  • Integrated Hardware RAID-0, -1, -1E, optional RAID-5, -6, -10, -50, -60

    This sounds a little worrisome to me, it sounds like a low-end RAID controller. You want a good RAID controller that can keep up with 8 fast HDDs (that's actually not a given). If you have a fair amount of writes to your DB, then you really want a Battery Backup Unit, and to enable battery-protected write caching on the RAID controller.

    As for RAID disk layout, there are 2 common schools of thought:

    1. 2 disks in mirror for OS, 2 disks in mirror for DB transaction log, 4 disks in RAID 10 for main DB files.
    2. One big RAID 10 array using all disks, and all OS + log + datastore files on this array (see reasoning here, mirrored by BAARF).

    I would rather not take sides on the RAID volume design, it tends to become a bit of a fact-light discussion. Ideally you should experiment with different storage layouts and benchmark them for your specific workload. My gut feel is that all disks in RAID10 is faster and more robust over multiple workloads.

    One last thing: make sure that OS partitions and RAID stripe boundaries are aligned (see here; Windows-centric, but the principle is general). You can do this when you create the partitions.

    pstanton : Thanks. I'm assuming the 'optional' part is an upgraded unit, which we'll probably opt for. Does that still sound low-end?
    Jesper Mortensen : @pstanton: Yes, it still sounds low-end, because it could be a license key upgrade, not a new RAID controller. But there is no way for me to tell, you'll have to talk to your vendor about the controller performance, and perhaps battery backup capabilities.
    Chopper3 : I couldn't agree with you more Jesper, this sounds very worrying to me also.
    pstanton : the upgrade for a RAID 10 controller is an IBM ServeRAID M5015, is that worrying?
    pstanton : ... and is that better/worse than the HP Smart Array P410 ?
    Jesper Mortensen : @pstanton: Why don't you talk with your vendors? The IBM M5015 is a recent model, a midrange LSI logic unit, it should be fine for plain RAID10 which isn't so hard on the RAID controller. See http://www.redbooks.ibm.com/abstracts/tips0738.html
  • You should read the information at BAARF the Battle Against Any RAID Five (Four, ...err..., Free). Therefore, the suggestion to go with RAID 10 is good.

    And for database performance, use more, faster disks (even if they're smaller) rather than fewer, slower disks (even if they're bigger).

  • Don't forget to align your ext3/4 to your RAID stripe/stride size. (man mkfs.ext3/4 -> stride)

    By the way, is there any Postgres setting to make its writes match the stripe size?

    (And google for RAID5 write hole)
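
    As a rough, hypothetical alignment example for this setup (8 disks in RAID10 give 4 data-bearing disks; the 256 KiB chunk size and device name are assumptions):

    # stride = chunk / block = 256 KiB / 4 KiB = 64; stripe-width = stride * data disks = 64 * 4 = 256
    mkfs.ext4 -b 4096 -E stride=64,stripe-width=256 /dev/sdb1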

    Magnus Hagander : Postgres will always do writes in 8Kb blocks. There is a compile time switch to change it, but usually you don't want to be touching that.
    From Benoît

Why does my ntpd not work?

Edit

I've tried all your suggestions, but it seems that ntpd just refuses to synchronize to the server.

[vivs@peter-centos ~]$ /usr/sbin/ntpq -np
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================
 192.168.0.30    .LOCL.           1 u   11   64    3    0.984  232732. 20083.2

Does this jitter of "20083.2" indicate that the time was changed manually?

I've turned off vmware's time synchronization.

Original Question

Here is the status of ntp

[root@peter-centos gw]# /usr/sbin/ntpq -pn
 remote           refid      st t when poll reach   delay   offset  jitter
=============================================
 192.168.0.30    .LOCL.           1 u  153 1024  377    0.950  1905553 274023.
*127.127.1.0     .LOCL.          10 l    9   64  377    0.000    0.000   0.001

You can see that it only synchronizes to '127.127.1.0', which is the local clock.

Is it because the offset is too large?

But even after I manually set the date with the date command, it still refuses to synchronize to 192.168.0.30.

This is my ntp.conf:

# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery

# Permit all access over the loopback interface.  This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1
restrict -6 ::1

# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org
#server 1.centos.pool.ntp.org
#server 2.centos.pool.ntp.org
server 192.168.0.30 #blf
#broadcast 192.168.1.255 key 42         # broadcast server
#broadcastclient                        # broadcast client
#broadcast 224.0.1.1 key 42             # multicast server
#multicastclient 224.0.1.1              # multicast client
#manycastserver 239.255.254.254         # manycast server
#manycastclient 239.255.254.254 key 42  # manycast client

# Undisciplined Local Clock. This is a fake driver intended for backup
# and when no outside source of synchronized time is available.
#server 127.127.1.0     # local clock
#fudge  127.127.1.0 stratum 10

# Drift file.  Put this in a directory which the daemon can write to.
# No symbolic links allowed, either, since the daemon updates the file
# by creating a temporary in the same directory and then rename()'ing
# it to the file.
driftfile /var/lib/ntp/drift

# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography.
keys /etc/ntp/keys

# Specify the key identifiers which are trusted.
#trustedkey 4 8 42

# Specify the key identifier to use with the ntpdc utility.
#requestkey 8

# Specify the key identifier to use with the ntpq utility.
#controlkey 8

  • That's a really large jitter value (274023). That indicates that you may have tried to change the time manually while ntpd was running. What you should do is stop ntpd, set the time to the correct time, and then restart ntpd.

  • First off, stop ntpd, and try to set the date using ntpdate {server}:

    /etc/init.d/ntp stop
    /usr/sbin/ntpdate 192.168.0.30
    

    Does this set your time correctly? Or does it time out?

    If it times out, try another NTP server:

    /usr/sbin/ntpdate pool.ntp.org
    

    From the high jitter, I would expect the ntpdate to work - once it has, reboot if possible (just restart ntpd if you can't reboot - though many services will get confused by such a time jump), and check ntpq -p again.

    ablmf : From all the answers, it seems that I should use 'ntpdate' to change the date and time when ntpd is not running. I tried, and I found I could update to 192.168.0.30 with ntpdate. BUT, after I started ntpd again, I saw that ntpd still does not synchronize to 192.168.0.30. I've also removed the local clock from ntp.conf. From ntpq -p, I can see that the jitter is quite large. But I am sure I didn't change the date manually while ntpd was running.
  • Ah -- now it becomes clear:

    My machine is installed in VMware Workstation. So, from all the answers, I guess maybe the jitter becomes so large because VMware adjusts the time. I will see if I am right.

    Don't run ntp in a VM. The host computer doesn't guarantee CPU slices, so the VM's clock isn't accurate. As you see, ntp is trying to keep up with what looks to it like a wildly varying external clock and eventually gives up.

    The general answer to this problem is not to run ntp, but to install the VMware Tools and lock the VM's clock to the host's clock.

    The specific answer depends on the version of Linux you are running. I have some notes on CentOS (probably generally applicable to other RedHat family distributions) here.
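
    With the VMware Tools installed in the guest, host time sync can be checked and toggled from inside the VM; a hedged sketch (exact commands depend on the Tools version):

    vmware-toolbox-cmd timesync status
    vmware-toolbox-cmd timesync enable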

    ablmf : There is also a workaround here: http://wiki.centos.org/TipsAndTricks/VMWare_Server I've tried it; it works!
  • VMware has best practices for Linux timekeeping:

    http://kb.vmware.com/selfservice/microsites/search.do?language=en%5FUS&cmd=displayKC&externalId=1006427

    From Tom Kyle

How do I connect a 2008 server to a 2003 server active directory?

Our DC is running Windows Server 2003.

I've just set up Windows Server 2008 and have terminal server running on it. When setting the terminal server permissions, it was able to allow a group name that was read from the domain. In the DC the new terminal server shows up as a computer in the domain.

I can also log in as a user within the domain even though that user doesn't exist locally on the new server.

However, when I go to set sharing permissions on the new machine it doesn't show my domain as a location. Instead it is only looking at location "machinename" and not allowing domain to be seen or added. Is there something I'm missing?

Ok, lots of errors in the event log.

We have this:

The winlogon notification subscriber is taking long time to handle the notification event (Logon).

Followed by this:

The winlogon notification subscriber took 121 second(s) to handle the notification event (Logon).

Followed by:

The processing of Group Policy failed because of lack of network connectivity to a domain controller. This may be a transient condition. A success message would be generated once the machine gets connected to the domain controller and Group Policy has successfully processed. If you do not see a success message for several hours, then contact your administrator.

I think this might be the same problem I'm having http://serverfault.com/questions/24420/primary-domain-controller-slow

Solved. The issue was that I had changed from DHCP to static and put the wrong DNS server IP in, i.e. the firewall instead of the DC/DNS server.

  • Make sure you're logged in to the terminal server as a domain account that has administrator rights on the terminal server. Otherwise, the permissions dialogs will default to the local security database.

    Matt : Yes I am logged on as an administrator who has full control to the domain and the terminal server.
    Matt : Having said that, I rebooted the server and now I am seeing hosting.local. But it's not able to search it.
    Matt : No it's gone again, no longer showing hosting.local, just machinename as a location. Very flaky!
  • The issue was with DNS. I had changed the 2008 server from DHCP to static IP and put the wrong DNS server IP address in.

    The DNS server needs to be our domain controller which is also running the DNS server for the Domain in our case. Once I changed that, the terminal server was able to locate the domain controller correctly.
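
    A quick way to confirm that the server can actually locate the DC through DNS (hosting.local being the domain mentioned in this thread):

    nslookup -type=SRV _ldap._tcp.dc._msdcs.hosting.local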

    From Matt

DFS Replication on Lan (with multiple subnets)

Target: replicate a folder between 2 machines.

Situation: we have 2 Windows 2003 servers (for this purpose) in a domain. Each server has 2 network cards and 2 IPs, one on the LAN and one on the internet, like:

20.20.0.100, 192.168.0.100

20.20.0.101, 192.168.0.101

Problem: when I use the DFS Management tool to create a replication between the 2 servers, it chooses the public IPs instead of the LAN IPs, and our LAN is much, much faster. How can I tell DFS Replication to use the LAN IPs?

  • I'm not aware of any configuration parameters for DFS-R to control the interfaces that it binds to, or to influence how it selects the partner interface to route traffic to. A quick search turns up this dirty hack from the Microsoft Storage Team blog (albeit from 2006), which indicates that you should use a HOSTS file on each replication set member to influence their name resolution such that you effectively "force" them to use the private IP addresses.

    This is an ugly hack, and I'm typically violently opposed to using HOSTS files. In this case, though, it may well be the only way to accomplish what you're trying to do.

    Rather than doing the HOSTS file hack (which, if you do, you should document so that the next guy who works on it knows why it was done), I have one other idea you might try.

    Try putting a host route for the other host on each of the DFS-R replication set computers. If it works, make the route persistent. I'm about 80/20 in thinking this won't work versus that it will, but it's worth a shot:

    Member 1: route add 20.20.0.101 mask 255.255.255.255 192.168.0.100

    Member 2: route add 20.20.0.100 mask 255.255.255.255 192.168.0.101

    That might just work to get that traffic flowing over the private network. (If I wasn't under orders from The Wife(tm) to get some house work done this morning I'd give it a try myself and tell you if it works... If she catches me writing on Server Fault this morning it will be bad... >smile<)
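
    If you do go the HOSTS file route, the entries would look something like this (the host names below are placeholders):

    # On member 1, in C:\WINDOWS\system32\drivers\etc\hosts
    192.168.0.101   server2.example.local server2

    # On member 2
    192.168.0.100   server1.example.local server1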

  • This is an assumption which will require some investigation on your part, but here's my thought:

    DFS Root and Link targets are identified by host name (although it is possible to create link targets based on IP address instead of host name), and those host names are resolved using DNS. If you configure the NIC with the public IP address not to register in DNS (which is how it really should be configured, anyway), then only the internal IP address of the NIC on the LAN will be resolved for each server's host name, which should force the replication to use the LAN NIC.

    This seems to be pretty close to the solution Evan referenced but without the need to use hosts files or static routes on each host.

    Evan Anderson : It's certainly possible that he wants the public NIC's address registered in DNS for other reasons. If not, though, then this is totally viable. I'd definitely prefer something simple to something ugly like HOSTS files or static routes.
    joeqwerty : I think you misunderstood me. What I mean is that ultimately DFS replication is going to occur between the ip addresses of the hosts that hold the DFS Root and Link targets. If you make sure that only the host names associated to the internal NIC are registered in DNS then when those host names are resolved to their ip address, they'll be resolved to the internal ip address and this may force the replication to use the LAN NIC.
    From joeqwerty
  • I found a way to achieve this!

    I am synching two servers, S1 (Windows Server 2008) and S2 (Windows Server 2008 R2), using DFSR.

    The way I did it was to change the dNSHostName attribute for each server in Active Directory Users and Computers to a name on the second network card, e.g. s1.system.int and s2.system.int.

    It seems DFSR looks at this attribute first before synching, and voila!

    However, I just rebooted and had to set the setting again and don't know yet how to make it stick in AD.

    I also don't know what else uses this dNSHostName attribute, so be warned!

Friday, January 28, 2011

Sharepoint - linked web parts?

I have a page with 2 web parts.

One of them is a list, where the users can add an item (their personal info).

Once they have entered their personal info, it should show up in the 2nd list (with the option to edit)

So in the end, users should be able to see the info of all the users in the 1st list, but only their own in the 2nd list.

Which approach should I use for this? Should both web parts be lists? On the first list I have a "Created By" column that is automatically set to the user's full name when they enter the info, if that helps.

  • Why 2 lists? Use 1 list with 2 web parts showing 2 different views of that list. They could even be on different web part pages.

    The 1st web part shows all items; for the 2nd web part, filter the view thusly: Created By = [Me]

    So, in the second list, all you see are items created by you (or whoever is logged in).

connecting to sql server express 2008 remotely

I have a beginner's question, and I apologize if it is stupid.

I am a beginner at SQL Server. I can write SQL pretty well, but I don't know much about connecting.

I have:

Microsoft SQL Server Management Studio and SQL EXPRESS

What is the process of allowing remote connections to it?

I would like to leave my laptop online at home with Management Studio running and would like to access my home SQL server through a remote connection.

I would like to know:

How do I allow one of my databases to accept remote connections? What would the connection string be? Just my laptop's IP address or what? Is it dangerous to accept remote connections? I have done the following BTW:

http://blogs.msdn.com/b/sqlexpress/archive/2005/05/05/415084.aspx

and when I do this it works:

SQLCMD -e -s localhost\sqlexpress,2301

however, when I try to do this:

sqlcmd -e -s my.ip.add.ress\sqlexpress,2301

it does NOT work.

Anyway, after I do get this to work, how would I connect to a specific DB?

  • As a first note, you aren't enabling remote access to the database, but rather the instance that the DB(s) are in. It isn't dangerous to enable remote connections as long as you have strong security in the credentials, whether they are windows or SQL auth. To use features like replication, and make an express instance a subscriber, you need to enable remote connections.

    1. Open the SQL Server Surface Area Configuration, since this is Express, remote connections are disabled by default.
    2. Once you are in the SA Config window, click on Surface Area Config for Services and Connections.
    3. Click on Remote Connections under Database Engine
    4. Select Local and Remote Connections, and choose your type. (TCP, named pipes, or both)

    If you have done all that, make sure you have also started the SQL browser service on the machine.
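
    Once remote connections work, sqlcmd's -d switch selects a specific database; a sketch with placeholder credentials, reusing the server address from the question:

    sqlcmd -S my.ip.add.ress\SQLEXPRESS,2301 -U myuser -P mypassword -d MyDatabase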

    From Dan

How to update a package using puppet and a .deb file

I am trying to figure out the proper way to update/upgrade a deb package using puppet from a local source deb file. My current config looks like this...

class adobe-air-2-0-4 {

  file { "/opt/air-debs":
    ensure => directory
  }

  file { "/opt/air-debs/adobeair-2.0.4.deb":
    owner   => root,
    group   => root,
    mode    => 644,
    ensure  => present,
    source  => "puppet://puppet/adobe-air-2-0-4/adobeair-2.0.4.deb"
  }

  package { "adobeair":
    provider => dpkg,
    ensure => installed,
    source => "/opt/air-debs/adobeair-2.0.4.deb"
  }

}

I first copy the deb file down to the client machine and then use 'package' with the provider set to 'dpkg'. This works and I get the correct version installed.

My question is what is the proper way to update this package in the future. Can I simply change out the source file and puppet will know that it's a different version and update this package? How does puppet determine what version of a package it has installed versus the version of the source deb file?

I am pretty new to puppet, so if you have any suggestions for improvements to my existing config, they are very much appreciated.

  • I also posted this question on the puppet users group and this was a response that I got back.

    If you add ensure => latest, it will check the source file against the currently installed package and install the new one if it is newer. I'm still not sure how you would roll back to an older version, but this seems to solve my problem for now.

    package { "puppet-dashboard":
     provider => dpkg,
     ensure   => latest,
     source   => "/tmp/puppet-dashboard_1.0.4rc2-1_all.deb"
    }
    

    Here is a link to the puppet user group post... http://groups.google.com/group/puppet-users/browse_thread/thread/53f5e7119012fb3e/59e8596701fcaf3f

    From delux247

What netmask should be used on an aliased address in same subnet as the primary IP?

I have an interface with an IP in a class B subnet. I want to add another IP in the same class B as an alias on the same interface. What netmask should I use? Some people say to use 255.255.255.255, while others say to use the regular netmask of the network, i.e. 255.255.0.0 in my case. Which is correct, and more importantly why?

In case it matters, I'm using Linux (CentOS 5)

  • It should be the same netmask as the regular network connection. It's just another IP sitting on the same wire, so it needs to have a matching netmask. If you did /32, it wouldn't be able to talk to anything and everything would be a foreign host to it.

    troyengel : Sorry, this is just untrue. Not being a hater, but /32 on an aliased IP works just fine.
    From Zypher
  • I've seen it done both ways on a lot of servers, either way works just fine in practice. As long as your normal routing is correct and the network is going out the right gateway and device, a /32 will work just as well as a /24 or /16 on an aliased IP.
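
    For illustration, both conventions look like this on CentOS 5 (interface name and addresses are placeholders):

    ifconfig eth0:0 172.16.1.20 netmask 255.255.0.0 up        # matching the primary /16 netmask
    ifconfig eth0:1 172.16.1.21 netmask 255.255.255.255 up    # host-only /32 alias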

    Antoine Benkemoun : I agree even though it's not the cleanest solution :)
    troyengel : I agree - I *prefer* to match the netmask of the primary IP myself, but technically if they're in the same subnet a /32 will work. Not everyone cares about crossing their Ts and dotting their Is like we do.
    From troyengel
  • Since both IPs are on the same interface, I don't see how there would be any practical differences between using /16 and /32.

    Care to elaborate on what you're trying to achieve?

    From aix

How to create a share folder for multiple domains on Dreamhost

I have a Virtual Private server with Dreamhost. I'm trying to create a shared folder that all of my domains can access. In the folder I'd like to put PHP classes, and even static files like javascripts.

I've created a directory on the same level as my domain folders. I'd like to call a file via something like this... /home/username/shared/file.php. This isn't working for static files, however, and I'm hoping some magic (like .htaccess maybe) will make this work.

This works:

<?php include('/home/username/shared/file.php'); ?>

This doesn't work:

<link rel="stylesheet" type="text/css" href="/home/username/shared/reset.css" media="screen" />

Alternatively, I realize that I could just place my static files inside of a domain, and simply point to them, but I'd like to know how to make this other configuration work.

  • You will never be able to include a server path-based file in an HTML file like that because the path is outside of the scope of public_html.

    I know what you're trying to do, but I honestly don't think it's easily accomplished. Why not just set up a subdomain on a 'master' domain from which all domains source the files? Less desirable perhaps if you're going for perfect SEO (because I believe it's technically classed as cross-domain referencing), but having certain globally-required files (images etc.) on files.primarydomain.tld works very nicely for me.

    Also given the way Dreamhost handles different users' files, it can be a real pain. Probably best with their setup to just set up one username with one subdomain and have it handle just the global files, particularly as if you have one username per site, that username won't have filesystem permissions to access files held in another username's directory. (and .htaccess-mod-rewrite based rules will just be digging a hole for yourself, even if it is possible!)

    mikemick : Also, I guess if I put them on a service like Amazon S3 / Cloudfront it would essentially accomplish the same thing plus give me the benefits of a CDN. The main reason why I asked was because I am coming from a LAMP environment with Coldfusion, and I was able to perform the above methods to get to static files. Then again, these were Private Servers (not through Dreamhost), not Virtual Private Servers. Thanks for the input.
    Christopher : Yeah, S3/Cloudfront is probably more desirable for what you're trying to achieve - having gone through many hours of pain trying to wrestle this setup out of Dreamhost, and failing dismally, I hope nobody else has to put themselves through the same pain! Dreamhost really is bloody annoying, you can have files in /public_html/subfolder/ uploaded by two usernames - and of course, in true Unix style, one username can't see the other files (nor can it serve them in public). UNIXy behaviour is great except when it gets in the way ¬_¬ It's impossible to reassign ownership too, really frustrating.
  • Use a Symbolic Link (symlink).

    I originally asked this question on StackOverflow (I didn't know about this site yet), and it has now been answered. The answer and explanation can be found HERE.
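
    A rough sketch of that approach, using the paths from the question (the domain directory name is a placeholder, and it assumes the web server is allowed to follow symlinks):

    ln -s /home/username/shared /home/username/example.com/shared
    # then reference the files relative to the web root, e.g. href="/shared/reset.css"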

    From mikemick

Keeping a persistent 3G connection on Windows

I'm in charge of managing an array of Windows Embedded 7 Standard based PCs (they act just like plain old Windows 7).

The computers have 3G cards as their only means of communication, and are on buses. Right now, the 3G cards are configured in NDIS mode which in theory will maintain the connection automatically and persistently. However, sometimes the 3G link fails and never comes back up.

Can someone help me out here? My basic requirements are

  • Internet is available always
  • If the connection fails, it is detected and retried

Is RAS/DUN more reliable for this sort of thing than NDIS?

  • I have run into this situation before. When you have scarce resources, the only possible solution is to make a script so that you can reconnect as soon as you detect a disconnection. 3G will disconnect if the back-haul network is not very well implemented. This is the main problem. RAS/DUN could be as bad as NDIS. But it may be possible to get that always-on connection with a MicroCell or any of those *cells.
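
    A hypothetical watchdog along those lines, as a batch script run from Task Scheduler (the interface name and probe address are placeholders):

    @echo off
    ping -n 2 8.8.8.8 >nul
    if errorlevel 1 (
        netsh interface set interface "Mobile Broadband Connection" admin=disabled
        timeout /t 10 /nobreak >nul
        netsh interface set interface "Mobile Broadband Connection" admin=enabled
    )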

    David Pfeffer : I actually have plenty of resources -- I'm only using about 20% of 1 GB of RAM. But how would I restart the NDIS connection, or do I have to switch to RAS/DUN for that?
    fmysky : If you have a blurry 3G connection, RAS/DUN won't help either. You may have to implement some command-line scripts, or maybe Java, to re-connect in case it just disconnects randomly for a few seconds even with full network coverage.
    David Pfeffer : How would one "reconnect" an NDIS connection, though?
    fmysky : Just disable/re-enable the interface. Take a look at entens' answer.
    From fmysky
  • I ran into a similar problem while working on a communications substation and ended up writing a program to keep the connection alive. You can find my post on the situation here. Basically, in addition to keeping the connection live, the interface ID assigned by Windows changes each time you disable and re-enable the interface.

    David Pfeffer : I guess as a weird follow-up, is there an easy way to determine programmatically which connection is the 3G vs the built-in Ethernet?
    entens : Do you mean which interface is the 3G modem? You can test the MAC address or you can identify the interface using any of the information made available in the `NetworkInterface` class (assuming .Net framework). More information can be found at http://msdn.microsoft.com/en-us/library/system.net.networkinformation.networkinterface.aspx
    David Pfeffer : Unfortunately the MAC address is unknown -- I'm going to be deploying to a bunch of these boxes. I don't think there's anything differentiating a LAN device from a WWAN device.
    entens : I meant looking for the vendor ID in the MAC. The first 6 characters of the 3G modem MAC should always be the same if you're using the same model.
    From entens

How to force a re-install from deb repository after local package screw-up?

Hi,

I mistakenly screwed up my tex-live installation on lenny, by trying to locally install some squeeze packages.

I've got a list of packages that are in state 'pU', and I'd like to replace them all with a clean, known-to-be-working repository version.

How do I do that?

  • apt-get --reinstall install mypackage
    

    Option --reinstall will tell apt-get to install the given package, even if it believes that the same version is already installed.
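
    If the locally installed squeeze versions are newer than the lenny ones, --reinstall alone keeps the same version; in that case you can explicitly pull the lenny version back (a sketch, with the package name and version as placeholders):

    apt-get install mypackage/lenny
    # or pin an exact version:
    apt-get install mypackage=<lenny-version>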

    From sleske

Apache bandwidth throttling per client, by subnet

We're interested in restricting the number of requests per second and/or available bandwidth to HTTP clients, to stop accidental DoS. We provide free scientific data and web services, and sadly some users' scripts aren't well behaved.

I know there are lots of Apache mods that allow you to throttle per client IP address, but the problem is, sometimes we see people doing distributed crawling from their clusters (today this caused a load average > 200 incident!).

What I'd really like to do is throttle per /24 subnet, but without having to specify which subnets in advance.

Ideally, I'd also like to be able to do this as a proportion of a maximum cap, so if we're only seeing requests from one subnet, they get to use all the server's resources, but if two subnets are competing, they get to use half each.

Is this possible with either:

  • Apache mods
  • Traffic control
  • Proxy server
  • Something else?

Thanks!

EDIT: Couple of further things... If anything needs to be done at the network infrastructure level (e.g. routers) that's out of our responsibility and becomes an instant PITA. So I'm hoping to find a solution that only requires changes at the server level. Also please don't be offended if I take a while to pick a winner, this is a new topic to me so I want to read around the suggestions a bit :-)

  • If you are using HAProxy, or can use it, check to see if this blog post helps </end_shameless_promotion_of_a_fellow_admin_and_company :)>

    Andrew Clegg : Thanks for the pointer, I'll leave a comment there to see if you can group sources by network.
    From Zypher
  • Be very careful. Simply slowing the network down means that you will be compounding any DOS attack - you need to limit connections before they arrive at the webserver.

    Consider - disks are very slow, and only handle one request at a time. One of the most important factors in determining webserver performance is the amount of I/O caching the OS can do - and this is limited by the amount of free memory on the system. Whenever a request comes in, an Apache process (or thread) is scheduled to handle it. That process will sit and hog memory and CPU for the whole time it needs to compose the response and send it across the internet to the client, denying this memory to the I/O cache. One way to minimise the impact of this is to use a suitable reverse proxy in front of the webserver - e.g. squid, which runs as a single-threaded server.

    Assuming you can avoid the problem of gumming up your webserver, then you might want to have a look at running a traffic shaper at the perimeter of your network. Linux now comes with tc as standard.

    (/me just googled 'linux tc' and got a picture of a girl in a bikini ;)

    In terms of identifying crawlers / real DDoS, the answer is a lot more tricky. Certainly there's no off-the-shelf solution which works reliably for HTTP that I am aware of. However, it should be possible to amend the detector in fail2ban to trigger lockout or throttling where you can detect an aberrant pattern. And the basic package can interpret high volumes of requests from a particular endpoint as such a pattern.
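
    On the "something else" front, netfilter's hashlimit match can group counters by /24 without listing subnets in advance; a hedged sketch (the rates are placeholders, and it needs an iptables recent enough to support --hashlimit-srcmask):

    iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
      -m hashlimit --hashlimit-name http24 --hashlimit-mode srcip --hashlimit-srcmask 24 \
      --hashlimit-above 10/second --hashlimit-burst 50 -j DROP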

    symcbean : But mod_security is a good idea too
    Andrew Clegg : Following on from your first bit -- I guess one way to avoid gumming up the webserver would be to just reject connections with a 503 when a particular client range reaches its quota?
    symcbean : It would help - but the cost of calculating the quota could still be sufficiently high that you're not preventing a DoS attack - you might want to combine this with a tarpit in the firewall
    From symcbean

How to Host Multiple Domains / Web Sites on one IIS6 Server

I currently have an IIS6 server that hosts one web site/domain. I am developing another web site (completely separate) that I want to host on this same server. Both domains were purchased from GoDaddy.

I believe I will need a server-side ISAPI rewrite filter to internally route the incoming requests based on the domain name. I plan to use Ionic's ISAPI Rewrite Filter to do this because it is free. I know how to install the ISAPI filter and apply it to a web site in IIS, but I have no clue how I am going to route the incoming requests correctly (based on the domain).

Also, I don't know if it is wise to set up multiple "Web Sites" or "Virtual Directories". I am thinking that this will depend on how they are configured.

How should I go about getting this accomplished?

  • You don't want rewrite rules at all; you do want to set up a new website configuration. IIS 6 can differentiate between websites either by using a new IP address (so the server has multiple IP addresses) or by using a host header to link a domain to a website configuration.

    Try starting here: Hosting Multiple Web Sites on a Single Server (IIS 6.0) and Using Host Headers to host multiple websites on IIS 6.0

    Josh Stodola : Thank you! I can't believe I have never seen this host header before. Is there an easy way I can see the host header that is currently coming in for the existing domain? I just want to see it.
    Josh Stodola : I was able to view it here: http://web-sniffer.net So can you think of any caveats with using the HOST header approach? Thanks again!
    Moo : Host headers are pretty much the standard way to host multiple websites on one server. The only real caveat I can think of is if you want a https website - those generally require one IP address per website. If you are just after a standard website, you should have no issues with host headers.
    Josh Stodola : Problems. The host can be www.whatever.com or whatever.com, right? So how do I resolve this issue?
    Moo : What issue? Add both hostnames to the host header list and the same website will be used for both of them. Or you can setup a second website for whatever.com and use an IIS redirect to redirect your users to www.whatever.com. Both solutions work - the first is easiest, the second is cleaner as you end up with one single hostname being used in your web apps.
    Josh Stodola : Ok, I got it figured out. The "Advanced" button allows you to add multiple identities - awesome! I was able to add one for localhost as well for some quick local testing. Thank you very much for your help, I am well on my way now.
    Josh Stodola : I thought I would come back and mention that I am still going to use the rewrite filter, to remove "www" and the default documents (like index.htm and default.aspx) from the incoming URLs.
    From Moo
  • All you're looking for is Host Headers. As long as the host headers are different, multiple sites can share the same port. Go to the properties of the site, and under the "Advanced" button next to the IP Address binding dropdown, you can edit the port and host header(s) for the site.

    No more drama than that. ;)

  • I see no problem here. IIS 6 can host hundreds of websites - even on the same IP address (except for HTTPS, where you'll need a dedicated IP address) - distinguishing them by the host header (domain name).

    Read this Microsoft Support article: HOW TO: Use Host Header Names to Configure Multiple Web Sites in Internet Information Services 6.0:

    Microsoft Internet Information Services (IIS) permits you to map multiple Web sites with the same port number to a single IP address by using a feature called Host Header Names. By assigning a unique host header name to each Web site, this feature permits you to map more than one Web site to an IP address.

    I think Ionic's ISAPIRewrite filter is applied per web site, each web site having its own definition file. I use a similar filter (ISAPI Rewrite) on my IIS servers, with many web sites on the server, without any problem.

    From splattne
  • Splattne and others are correct - IIS6 alone can host multiple web sites.

    A rewriter like IIRF is useful for rewriting requests as they come in. For example, you could rewrite a request on the server side, which arrives for host1.domain.com, to be served by the vdir "normally" associated to host2.

    URL Rewriting is not necessary in order to host multiple websites.

    From Cheeso
  • Adding to everyone's answer: I was having some issues configuring IIS to do this for a simple local testing environment. The thing is, it doesn't do it on its own, as we would like.

    So, gathering what others said here, I think the best way to have multiple IIS sites is using Host Header Names, and that is quite easy to set up (thanks Moo for the links). On IIS 6, if you have a site already created, that's just a matter of going to "Bindings". It's easy to find references on Google for that.

    older IIS: [screenshot]

    IIS 6: [screenshot]

    Then I needed to set up the hosts file. That is simple enough for 1 site, but it's kind of harsh for multiple sites, and that's why we'd expect to be able to do it in IIS. No such luck. Just do it yourself.

    HOSTS: [screenshot]
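
    For reference, the hosts entries for local testing would look something like this (the host names are placeholders and must match the host headers configured on the sites):

    127.0.0.1   site1.local
    127.0.0.1   site2.local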

    Of course, that's all what everyone said already, except it's certainly better to do it with DNS and all... But this is enough for local testing, and I think IIS should have a built-in option for it.

    From Cawas

How to see on Linux what network interface and source IP address is used for a route to a specific destination host?

I have multiple network interfaces (here: 2) on a Linux machine (here: Debian Lenny). How do I see which network interface (NIC) a route to a specific destination host will use, and what source IP address is used by default?

I have thought of using

ping -I nic1 desthost.example.com
ping -I nic2 desthost.example.com

to see if both ways are possible. (Here, both ways are possible.)

I looked up the routing table

ip route show

But it's quite complex, so I thought there must be a small, simple tool to just tell me:

"To destination host desthost.example.com it takes interface nicX and source IP address 10.0.0.1"

What is the simplest way of getting this information?

(And I'd rather not use tcpdump and set the interfaces in promiscuous mode.)

Thanks.

  • What about route -C?

    From Jure1873
  • Use ip route get <ip>.
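
    For example (addresses and interface below are hypothetical), the output shows the chosen device and source address directly:

    $ ip route get 192.0.2.10
    192.0.2.10 via 10.0.0.254 dev eth0  src 10.0.0.1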

    sandoz : Thanks. That is what I was looking for.
    From weeheavy
  • I use netstat -Wcatnp

    From fmysky