Saturday, January 29, 2011

Bandwidth planning for TCP/IP and Ethernet overhead

During typical capacity planning I see a perhaps simplistic approach of estimating only the page or data sizes that are expected to pass through the network pipes. That is, if the average HTTP page response is 30KB, or the XML web service payload averages 100KB, such figures are taken raw and multiplied by the forecast volume to get the expected bandwidth requirement.

It seems application developers and architects tend to forget, or are simply unaware, that application data is encapsulated in lower-layer protocols for delivery, so they seldom consider the bandwidth eaten up by the header overhead of TCP/IP packets and Ethernet frames.

I feel these overheads should be considered during bandwidth planning, but I do not know how to calculate them accurately. Is there a basic formula or process that factors in these overhead percentages? So far I have not seen any article that discusses the matter.

  • There isn't a single formula, because the amount of encapsulation overhead depends on the size of the data payload in the packets -- there's a lot more overhead in serving 10Mbps of DNS traffic than in serving 10Mbps of ISOs. The overheads are usually negligible, however, and swamped by the inaccuracies inherent in estimating request volume and response sizes. The safety margins built into your capacity planning should more than cover the overheads. (A rough per-packet sketch follows this answer.)

    icelava : Yes, different traffic types would result in different overhead percentages, so it would be nice to see if anybody has done a survey estimating the ratio by traffic/application type. Also, is there any simple process to measure this for one's own apps?
    womble : To measure for your own apps, just sniff the traffic over a representative period and do some simple calculations. I doubt that anyone's really done a decent survey of traffic types because it really doesn't matter -- I can guarantee you that there are about 100 more important things to do at the moment than worry about the protocol overhead of your traffic.
    From womble
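
    To make "some simple calculations" concrete, here is a rough per-packet sketch. It assumes minimum header sizes (20-byte IPv4, 20-byte TCP, 18-byte Ethernet header plus FCS, and 20 bytes of preamble and inter-frame gap); TCP options, VLAN tags or IPv6 push the percentages higher.

    for payload in 100 512 1460; do
        awk -v p="$payload" 'BEGIN {
            oh   = 20 + 20 + 18 + 20   # IPv4 + TCP + Ethernet + framing, in bytes
            wire = p + oh
            printf "%4d-byte payload: %4.1f%% of wire bandwidth is overhead\n", p, 100 * oh / wire
        }'
    done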

SQL Server Partial Backups

We currently distribute snapshots of our database to our stores by using rdiff on compressed backups of the entire database.

In order to reduce how long it takes both to create these backups and for each store to download them, we're looking to leave out some of the less important tables/rows.

The two largest tables are already in their own filegroups, but they aren't read-only since they are still used.

I've looked at horizontal partitioning to separate the older data out into a read-only filegroup, which I can then exclude from a partial backup; however, I'm unclear on how to horizontally partition a table that already contains data, as most of what I find is about creating a new partitioned table.

Is partitioning the recommended way of restricting what gets included in a full backup, or is there another way that I'm unaware of?

We are running SQL Server 2005 Enterprise.

  • Assuming that you still want to change your 'less important rows', put that data in a different (but still writeable) filegroup. Then leave that filegroup out of your backup set (a command-line sketch follows this answer).

    ...and you can use ALTER TABLE to put a table into a particular partition scheme.

    ALTER TABLE: http://msdn.microsoft.com/en-us/library/ms190273.aspx

    Matt : The database in question is in Simple mode, as we don't need/want log backups or to have to deal with truncating them. Simple mode won't allow you to specify the primary filegroup for a file-only backup; it wants us to use READ_WRITE_FILEGROUPS, which won't let us exclude the tables we want to.
    Rob Farley : Oh, that changes things somewhat. Have you considered using the full recovery model instead?
    Matt : Yes, but we don't use or have plans to use log backups, and it seems wasteful to run in full recovery mode if we just have to truncate the logs every so often to keep them from getting too large.
    From Rob Farley
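
    For illustration, a rough command-line sketch of "leaving a filegroup out": back up only the filegroups you name. The database and filegroup names here (StoreDB, Important) are made up, and backing up read-write filegroups this way requires the full recovery model, which is the limitation Matt runs into above.

    sqlcmd -S myserver -E -Q "BACKUP DATABASE StoreDB FILEGROUP = 'PRIMARY', FILEGROUP = 'Important' TO DISK = 'D:\backup\StoreDB_fg.bak' WITH INIT"
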
  • Have you considered replication or log shipping? It sounds like a good case for it.

    Also, I don't know your specific environment, but if your only reason NOT to run the database in full recovery mode is that you don't want to deal with t-log backups, then I would check with the business users whether being able to recover only to the last backup is "good enough" for them.

    Matt : Our daily transactions are stored in a separate database that is in full recovery mode. The database in question stores everything else, where we don't mind losing the current day's changes. We are trying to get a local copy of the database to each location to help with queries against that data, and to provide a source for that data in the event their WAN link goes down for any period of time. The only replication models that fit what we need are snapshot or merge, and those don't seem scalable to the number of remote databases we will have. We're looking at about 200-300 locations ATM.
    Frank Kalis : Ah, okay. Thanks for explanation!

"netcat -e" / "nc -e" on Mac OS X

netcat [...] -e program
nc [...] -e program

This is supposed to run program as a server (as far as its stdin and stdout are concerned). netcat on Mac OS X does not seem to have this -e option.

Why is that and is there a way to enable it?

  • It looks like you found one of many netcat rewrites floating around the net. The One True Netcat uses -e to specify IPSEC parameters. Chances are you found a version that someone modified, or wrote from scratch, which is different from what most others call netcat.

    You'll have to find the source for that version and compile it. If you don't want to do that, it may be possible to do some tricks with redirection and/or a wrapper to set up the FDs properly before exec()ing a program (one such trick is sketched below).

    frou : What's the OG netcat? GNU version?
    Gerald Combs : Ncat (part of nmap 5) attempts to collect the best features of the various netcat variants in a modern, standardized reimplementation.
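
    One such redirection trick, for reference: a rough equivalent of the missing "-e /bin/sh" behaviour built with a FIFO, assuming a netcat without -e (listen-mode flags vary between netcat variants; the port and FIFO path here are arbitrary). Treat it with the same caution as -e itself.

    mkfifo /tmp/ncpipe
    cat /tmp/ncpipe | /bin/sh -i 2>&1 | nc -l 12345 > /tmp/ncpipe
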
  • I don't have the -e flag on any version of netcat I checked (SLES 11, Debian Lenny, Ubuntu Karmic, OS X 10.5.8, 10.6.2). Don't you just need to set netcat to listening mode and use proper redirection of stdin/stdout?

    edit: Karmic has the -e flag.

    Dennis Williamson : Ubuntu 9.10 `man netcat`: "-e filename specify filename to exec after connect (use with caution). See the -c option for enhanced functionality."
    pfo : Seems I overlooked it, you are ofc right!
    From pfo

HTTP request does not reach the server occasionally. Why?

We host our web service on a dedicated server. Sometimes (I'd say 1 out of 20 times) a response is not received from the server, which makes the browser fail with a time-out error.

An important detail: the request is not logged by Apache in this case. The server is not loaded; there is plenty of free memory and CPU power left.

I have profiled the problem case with the tcpdump utility. Below are the "good" and "bad" sessions traced by tcpdump. The request is the same in both experiments. Good: the server returns a response. Bad: no response, time-out error.

Do you see why the problem happens from these data? How can I move further to get closer to the source of the error?

I've replaced my real ip address with 123.45.67.890

---- Bad ----
12:23:36.366292 IP 123.45.67.890.61749 > myserver.superbservers.com.www: S 2125316338:2125316338(0) win 8192 <mss 1460,nop,wscale 2,nop,nop,sackOK>
12:23:39.362394 IP 123.45.67.890.61749 > myserver.superbservers.com.www: S 2125316338:2125316338(0) win 8192 <mss 1460,nop,wscale 2,nop,nop,sackOK>
12:23:45.365567 IP 123.45.67.890.61749 > myserver.superbservers.com.www: S 2125316338:2125316338(0) win 8192 <mss 1460,nop,nop,sackOK>
--------

---- Good ----
12:27:07.632229 IP 123.45.67.890.63914 > myserver.superbservers.com.www: S 3581365570:3581365570(0) win 8192 <mss 1460,nop,wscale 2,nop,nop,sackOK>
12:27:10.620946 IP 123.45.67.890.63914 > myserver.superbservers.com.www: S 3581365570:3581365570(0) win 8192 <mss 1460,nop,wscale 2,nop,nop,sackOK>
12:27:10.620969 IP myserver.superbservers.com.www > 123.45.67.890.63914: S 2654770980:2654770980(0) ack 3581365571 win 5840 <mss 1460,nop,nop,sackOK,nop,wscale 6>
12:27:10.838747 IP 123.45.67.890.63914 > myserver.superbservers.com.www: . ack 1 win 4380
12:27:10.957143 IP 123.45.67.890.63914 > myserver.superbservers.com.www: P 1:213(212) ack 1 win 4380
12:27:10.957152 IP myserver.superbservers.com.www > 123.45.67.890.63914: . ack 213 win 108
12:27:10.965543 IP myserver.superbservers.com.www > 123.45.67.890.63914: P 1:630(629) ack 213 win 108
12:27:10.965621 IP myserver.superbservers.com.www > 123.45.67.890.63914: F 630:630(0) ack 213 win 108
12:27:11.183540 IP 123.45.67.890.63914 > myserver.superbservers.com.www: . ack 631 win 4222
12:27:11.185657 IP 123.45.67.890.63914 > myserver.superbservers.com.www: F 213:213(0) ack 631 win 4222
12:27:11.185663 IP myserver.superbservers.com.www > 123.45.67.890.63914: . ack 214 win 108
--------

Details on the service.

This is a weather reporting service. It is written in Perl, backed by MySQL. The script uses several modules (from CPAN and our own).

The code is relatively simple. The script downloads the weather from another server, converts the data format, and returns an XML response. The weather is cached in a MyISAM database. There is also a world locations database (InnoDB) that can be queried via the script.

Hosting: SuperbHosting OS: Ubuntu

  • Try using tcpdump or wireshark to monitor the network traffic. That way at least you will know if there's a networking issue. I.e. check if the request hits the machine at all.

    Also, by default most browsers have a limited number (2) of connections that can be made to one and the same server. If your page has some JavaScript objects which "forget" to close a connection, etc., it might be that the browser never actually sends the request.

    par : Thanks, I will try tcpdump. I test my requests also with a perl client program, so the browser issues may be excluded.
    par : I have profiled server network traffic with tcpdump. See the update to the question. Can you tell from the dumps why the server does not respond?
    Sunny : From the dumps it appears that the request is actually received on the machine. Why Apache does not pick it up is another story, and I have no answer for this. Can you try to run Apache in the most verbose debug mode and look for something strange in the logs? What are you actually serving? Is it some script? Can you try to reproduce the problem with a very simple static HTML page (no pics, just some text)?
    par : Hi! I have added the details on our service. Yes the error repeats on a simple HTML file. The problem repeats from my home and from an internet-cafe.
    From Sunny
  • Can you try your request using only IP addresses? If so, this may help narrow down the problem.

    Are all the requests coming from the same location, which have the problem? If so, try another location, perhaps a laptop in a Starbucks or something. If it happens from more than one location, using different browsers, on a very simple page without AJAX or complicated Javascript, that is valuable information.

    If using the IP address works reliably, then it is likely DNS. Knowing the domain name in use may help narrow it down.

    par : Thank you for the valuable info. This is definitely not DNS. I have tested with a plain IP address and the response still goes missing occasionally.
    Michael Graff : Then it is either routing, the server (hosting company), or your client. I suspect Apache only logs the connection when it completes, as it typically reports status, bytes sent, etc. Perhaps if it gets a connection where the client sends no data or is otherwise broken, it simply never logs it. Try `tcpdump` or `wireshark` to see what is happening at the network level. Ideally, run this from the client as well, as it may be an IP fragmentation issue.
    par : I have profiled server network traffic with tcpdump. See the update to the question. Can you tell from the dumps why the server does not respond?
  • I'd go with Michael Graff and then put some money on the hosting company - these kinds of traffic problems very easily occur with failing patch panels, NICs, NIC driver issues or bad cabling, amongst a thousand other infrastructure things.

    I'm counting on you having tried this from different locations (or having reports from other places with the same problems) and seeing the same problem regardless, so we can rule out a problem at your end, correct?

    I'm a hardware freak, so I tend to lean towards hardware failures as the cause for weird software and network issues and mass destruction in general.

    par : Still have not tried to test from another location besides my home. Thank you Oskar, I will try to go a hardware way.
  • The problem was a large number of open TCP connections; new connections were occasionally dropped because of this (a quick check for that condition is sketched below).

    From par
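
    For anyone hitting the same symptom, a quick way to spot the "too many open TCP connections" condition on Linux is to count sockets by state and check for listen-queue overflows:

    netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c | sort -rn
    netstat -s | grep -i -E 'listen|overflow'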

How to apply CHMOD command to any file with a specific name recursively in Linux?

I need to know how to apply a chmod command recursively, but only to files that have a specific name.

chmod 755 -R filename

Something like the above, but it should apply to any file with that name in any subfolder.

  • This will find all files named filename, recursively from the current directory, and pass them to chmod.

    find -name filename | xargs chmod 755
    

    See the manpages for find and xargs for more advanced options. In particular, look at the -print0 flag and the corresponding -0 flag to xargs if you get errors about files with spaces in the name (example below).

    From calmh
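
    A minimal example of the safer variant mentioned above, for trees that may contain filenames with spaces or newlines (GNU find/xargs):

    find . -name filename -print0 | xargs -0 chmod 755
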
  • There's also an -exec option to find that will do the same:

    find . -name filename -exec chmod 755 {} \;
    
    packs : The trade-off here is execution efficiency. xargs will call chmod approximately the minimum number of times necessary for the number of files, whereas the -exec option to find will call it once for each file found.
    Dennis Williamson : However, if your version of `find` supports it, you can use a `+` instead of the `\;` to get `xargs`-like functionality.
    From Dave Core
  • You can get the best of both worlds if you use the + terminator with find's -exec, which makes it run chmod on many files at once.

    find . -name filename -exec chmod 755 '{}' +
    

    You should always put quotes around your {}, otherwise a filename with a space in it could mess things up. In this case it's not a problem, but I'm in the habit of doing it.

Services not starting on boot (but will start manually) on Ubuntu 9.10

After rebooting my machine, I have found that many services no longer start automatically. If I start them with their init scripts once the machine is booted, they start without any problems. Strangely enough, if I run sudo init 5 once logged in, most services will start (although some still seem to refuse without manually running their init scripts).

I can't think of anything that's changed since I last rebooted my machine and system logs don't seem to show anything useful. Any help is always appreciated!

Update: I have verified both by using chkconfig and examining scripts in /etc/rc*.d folders that the services are configured to start and that the scripts run correctly.

  • Have you verified the services are activated for your default runlevel? You can get your current runlevel with runlevel. You can verify the activated runlevels with chkconfig --list <servicename>.

    womble : `chkconfig` isn't installed by default in Ubuntu; you need to install the `chkconfig` package.
    Michael Mior : This isn't an answer and should be posted as a comment. But yes, I have verified that the services are activated.
    From Christian
  • How did you add those services? Usually with Ubuntu you add and remove links to the rc services with the update-rc.d command:

    root@hostname:~# /usr/sbin/update-rc.d
    usage: update-rc.d [-n] [-f] <basename> remove
           update-rc.d [-n] <basename> defaults [NN | SS KK]
           update-rc.d [-n] <basename> start|stop NN runlvl [runlvl] [...] .
           update-rc.d [-n] <basename> disable|enable [S|2|3|4|5]
                    -n: not really
                    -f: force
    

    I'd try to re-register the init scripts in question with that command to see if that addresses the problem (a sketch follows this answer).

    Now, if the services were not added via apt-get and you installed them manually, that's a different story. Which services in question did not start?

    Michael Mior : All the services were installed automatically with apt-get and were previously working. Services that didn't start include ssh, webmin, monit, and several others. I don't have a complete list. (As an aside, I installed monit AFTER I had this problem and it also does not start)
    Rilindo : I could be wrong, but it sounds like it's at init 1, since most multi-user services start at around runlevel 2 (for Ubuntu, at least): root@host:~# who -r run-level 2 2009-12-29 05:47. I thought it might have been set to level 1 by accident, but as it turns out, there has been no inittab for some time. :( That said, you may want to check anyway, and if you see that one has been installed, change it to either level 2 or 3.
    From Rilindo
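
    As a concrete example of re-registering a service's rc symlinks with update-rc.d, assuming ssh is one of the affected services and still uses a SysV init script (remove the old links, then recreate them at the default runlevels):

    sudo update-rc.d -f ssh remove
    sudo update-rc.d ssh defaults
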
  • This is a confirmed bug in 9.10. See https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/497299

    I changed line 9 of /etc/init/rc-sysinit.conf from "start on (filesystem and net-device-up IFACE=lo)" to "start on filesystem and net-device-up". It worked for me.

    Michael Mior : I'll check the system when I get into the office today, but this sounds exactly like what I'm experiencing.
    Michael Mior : Thanks! I tried updating `/etc/init/rc-sysinit.conf` as you suggested, which didn't help. However, updating `/etc/network/interfaces` as suggested in the bug report did the trick!
    From wolfg

PHP/mySQL: how to debug a 'too many connections' error?

Hello!

I am currently working on a PHP website on my client's dedicated server, and I periodically get a "too many connections" error even though only ~5 users are browsing the page.

In phpMyAdmin I can see that the max_connections variable is set to 400.

On our local MySQL server at my company we have max_connections set to 200, and my colleagues use that server at the same time.

I heard that on their server some other website with heavy traffic is using the same MySQL instance.

Here are my questions:

  • How to debug this problem?
  • Can I look at a list of the SQL queries that were executed in the last 10 minutes, to check whether the problem is ours or theirs?
  • The symptoms you describe point to persistent connections. In some setups PHP does not handle persistent connections very well: it tends to open new connections rather than reuse the existing ones. I suggest you disable this feature and see if the issue persists:
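
    (The snippet referenced here appears to be missing; judging from the comments below, it was presumably the php.ini directive that disables persistent MySQL connections.)

    mysql.allow_persistent = Off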

    If you notice an improvement, you can try to fine tune persistent calls or get rid of them permanently.

    hsz : I also suspected that. However, I cannot modify their web/server configuration myself to make those changes (my site does not use `persistent connections` anywhere). Tomorrow I am going to meet their admin and I will ask them to do it.
    Michael Mior : This doesn't require modifying any server configuration. You would just need to update the PHP scripts for the application.
    hsz : Sure, but I have no access to the second website that is also using the same MySQL server, so I cannot update their PHP scripts that may use `mysql_pconnect`. `phpinfo()` shows me that `mysql.allow_persistent` is `On`. Changing that flag to `Off` on my side will not give any results at all; it has to be changed on their machine.
    hsz : They changed `mysql.allow_persistent` to `Off`, our CMS sped up, and now there are no problems with connections. Thanks! But I am still wondering how to check which scripts were connecting to the MySQL server?
    Álvaro G. Vicario : SHOW PROCESSLIST tells you the user, host, DB and (if you're lucky) query.
    hsz : But it will show me only current processes, right? Isn't there something like a query history?
    Álvaro G. Vicario : http://dev.mysql.com/doc/refman/5.1/en/query-log.html
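
    Building on the pointers above, a minimal sketch for checking current connections and temporarily enabling the general query log (MySQL 5.1+, settable at runtime; it is verbose, so switch it off again afterwards):

    mysql -u root -p -e "SHOW FULL PROCESSLIST"
    mysql -u root -p -e "SET GLOBAL log_output='TABLE'; SET GLOBAL general_log='ON'"
    mysql -u root -p -e "SELECT event_time, user_host, argument FROM mysql.general_log ORDER BY event_time DESC LIMIT 20"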

Failing to send email on Ubuntu box (Karmic Koala)

I have a home network with an XP and an Ubuntu (9.10) box. I have created a small test PHP script to check that I can send emails from my machine. I am using the same php.ini file with the same [mail settings], yet the script works on my XP box and fails on the Ubuntu box. I have included the script here; hopefully someone can spot what's going wrong:

<?php

// send e-mail to ...
$to="myemail@hotmail.com";

// Your subject
$subject="Test Email";

// From
//$header="from: test script";
$header='From: host-email-username@hostdomain_here' . "\r\n";

// Your message
$message="Hello \r\n";
$message.="This is test\r\n";
$message.="Test again ";

// send email
$sentmail = mail($to,$subject,$message,$header);

// if your email succesfully sent
if($sentmail){
echo "Email Has Been Sent .";
}
else {
echo "Cannot Send Email ";
}
?>

The emails have been spoofed for obvious reasons, but otherwise, the script is exactly as the one I tested

[Edit] I have since installed the mailutils package on my Ubuntu box; now the script runs and returns 'Email Has Been Sent'. However, the mail never arrives in my inbox (I've waited 1 day so far). Is there something else I need to be looking at?

  • Can you send mail directly from the command line on that machine? If not, then that's the problem -- the machine is not properly set up.

    I think the mail() function can use many different methods to send mail, with the default using sendmail or its replacement. Perhaps this part is where you need to look.

    : tx Michael, but could you be a bit more specific? For instance, how do I send mail from the command line (I'm relatively new to *nix)?
    Michael Graff : I commonly cheat when I send test mail. Find a file (I use `/etc/motd` on my machines) and run: mail -s "test email" your-address@example.com < /etc/motd This will send you the contents of `/etc/motd`. If you don't like that, make a sample file with some junk in it and use that.
    Dennis Williamson : You can just pipe `echo "test message"` into the mail command for the message body.
    : tx for the tip Michael, I'll remember that. In the meantime I did a search and found that I had to install the mailutils package. The mail appears to have been sent (no error in the script), although I am still waiting for the mail to arrive in my inbox. I'll check again later.
    : Hmm, 1 day later the message still has not arrived in my mailbox - that's not right. I think sending mail is still not working as it should on my Ubuntu box. Does anyone know why the mail() command returns a success code (referring to the code snippet above), and yet the message fails to be delivered?
  • Try the following commands and check if they show anything that can be used to fix the problem

    more /var/log/maillog
    mailq

    : I don't have that file. The only files I have in the /var/log/ folder are mail.err, mail.info, mail.warn and mail.log. All of these files are 0 bytes (i.e. empty).
    Srikrishnan Chitoor : Do you get anything when you run "mailq" command?
    : Sri: I ran the command, but got no output.
  • Every application needs to connect to an SMTP server on port 25 (default) before it can send out a message. In your PHP application you have to specify which SMTP server to connect to; I think if you do not specify any, it uses the SMTP server installed on your local system (I am not a PHP developer myself, so I'm not sure). Also, your Ubuntu machine might not have its SMTP server (e.g. postfix) configured to properly send mail out to the internet. There is a setup called "local only" in which it delivers mail only to the local system. The best way to troubleshoot is to type "tail -f /var/log/mail.log" in one terminal; this shows updates to that log file in real time. Then execute your PHP script, and you should be able to see what messages your SMTP server gives you. If you post those messages here we can try to solve your problem further.

    : proy: I ran the command as you suggested, then ran my script. No messages were issued by the SMTP server (the tail command showed no output)??? What's going on!?
    proy : That's strange. Can you do "netstat -pant | grep 25" and see if there is any SMTP server running on the machine?
    : proy: I ran the command you suggested (using sudo) and got the following output: tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1666/exim4 Hope that makes more sense to you than it does to me (BTW, they are two separate lines - the formatting is likely to get lost). The second line starts with 1666/exim4. HTH
    From proy
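
    The netstat output above shows exim4 listening on 127.0.0.1 only, which typically means the Debian/Ubuntu "local delivery only" configuration: mail to remote addresses is accepted but never leaves the box. A rough sketch of the usual fix (choose "internet site" or a smarthost in the wizard) and of watching the exim log while re-running the PHP script:

    sudo dpkg-reconfigure exim4-config
    sudo tail -f /var/log/exim4/mainlog
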
  • You probably need to set up sending mail through an SMTP server. For example, if you use Gmail, here is a good article with step by step instructions on how to send through Gmail's SMTP server: http://www.marksanborn.net/linux/send-mail-postfix-through-gmails-smtp-on-a-ubuntu-lts-server/

    Many of the listed commands require superuser access, or prefixing with 'sudo'.

    It worked for me on the first run through. If it doesn't for you, be sure to check the comments on that page, which provide some extra hints.

    From Trung
  • I have just tried your code with the sendmail_path uncommented in the php.ini:

    ; For Unix only.  You may supply arguments as well (default: "sendmail -t -i").
    sendmail_path = sendmail -t -i
    

    Also check your spam folder. I put my gmail account in the 'To' field and I found it in the Spam folder :)

    From ccheneson

IBM x345 raid-1 with 2 defunct drive, how do I get it back?

I have an IBM x345 server running RAID-1 with 2 drives that went defunct after a reboot. ServeRAID has an x by both drives (36GB SCSI), marked defunct. The option to rebuild the defunct drives under ServeRAID is not there. Is there any way I can get my data back from this? Any help appreciated. Thank you.

  • What you can do is try to re-initialise the RAID:

    1. power down host, remove both drives
    2. power host up, go into raid controller setup and reset the config (the drives should be out, remember?)
    3. power host down again, insert the two drives
    4. power host up, go into raid controller bios, and create a new raid1 on the two drives, make sure you don't mix up their order, and very important - DO NOT LET THE NEW RAID INITIALISE - it will erase the disks
    5. if you know which drive failed first, force it offline, if not, try the procedure, forcing offline each of the drives and trying to boot using the other in a degraded raid array

    This should bring you to a host with one disk online, and the other offline, making the raid1 degraded, but usable enough to boot and back up the data

    : dyasny, thank you for your tip. I cannot boot because the MBR is messed up. The initial problem was a system hang; on reboot I got a corrupted MBR, and on rebooting to try a system repair I got "no disk found". That's when I found out both drives were defunct, when I got into RAID management. Given that, do you think I can still repair the RAID-1 (mirror) config even if both drives are defunct? Thanks again, any help is greatly appreciated!
    : OK, I just tried to reset the RAID-1 as you said, but after shutting down, removing the drives, rebooting, resetting the controller config, shutting down, reinstalling the drives and rebooting, I still get defunct drives in the RAID config. It's not the controller, because I am able to put 2 new drives in and configure RAID-1 on them just fine. Just my luck - how can both drives of a mirror go down at the same time? There's got to be something else wrong here that I'm not seeing. I can hear the drives spin up and the RAID controller lists them, but it can't access the drives...
    dyasny : controller or drive firmware might be to blame
    From dyasny
  • Time for a data recovery service like OnTrack. You definitely won't have any luck putting drives that were in a hardware RAID-1 into a computer and trying to software-RAID them; that will not work, period.

    From mfinni
  • Try using the advanced options and "read configuration from drive to controller" in the controller BIOS.

  • If you think it is just a bad MBR that is stopping the boot then try a CD boot using a Linux Live CD or a BartPE CD (yes, it's a lot of work to build it but it has helped me out multiple times since I built the first one). It will boot from CD and run from a RAM disk. If your hard drives are available but unbootable you will be able to browse them and start recovering your data.

    Or can you install the controller card and two disks into another server that already boots?

    From james

apache-user & root access

I want to develop a few PHP scripts that will invoke the following commands using the exec() function:

service network restart

crontab -u root /xyz/abc/fjs/crontab

etc.

The issue is that Apache executes the script as the apache user (I am on CentOS 5), and regardless of adding apache to wheel or trying the good, the bad and the ugly of group assignments, it does not run the commands mentioned above.

Following are my configurations;

My /etc/sudoers

root    ALL=(ALL)       ALL
apache  ALL=(ALL)       NOPASSWD: ALL
%wheel  ALL=(ALL)       ALL
%wheel  ALL=(ALL)       NOPASSWD: ALL

I've tried a couple of combinations of sudoers and httpd.conf; the most recent httpd.conf looks something like this:

my httpd.conf

User apache
Group wheel

my PHP script

exec("service network start", $a);
print_r($a);

exec("sudo -u root service network start", $a);
print_r($a);

Output

Array
(
    [0] => Bringing up loopback interface:  [FAILED]
    [1] => Bringing up interface eth0:  [FAILED]
    [2] => Bringing up interface eth0_1:  [FAILED]
    [3] => Bringing up interface eth1:  [FAILED]
)
Array
(
    [0] => Bringing up loopback interface:  [FAILED]
    [1] => Bringing up interface eth0:  [FAILED]
    [2] => Bringing up interface eth0_1:  [FAILED]
    [3] => Bringing up interface eth1:  [FAILED]
)

Unsurprisingly, when I restart the network services via SSH using a similar user to apache, the command executes successfully. So it's all about invoking such commands via the HTTP path. I am sure cPanel/Plesk-type software uses sudoers or something similar, so what I am trying to do should be possible. But I need your help to understand which piece I am missing.

Thanks a lot!

  • You'll have to write those as PHP command-line tools (make sure the php-cli package is installed via yum), then run them from the root crontab directly, not as a normal user. You probably want to write them as basic bash (shell) scripts instead; using PHP would be overkill if you're just running commands via exec().

    womble : I think you've got that backwards.
    troyengel : Not really, or if I do I don't understand what he's trying to accomplish. I'd never let the network service be restarted by any sudoer; that action would require a direct root crontab setup to keep my server safe. *shrug* I'm having a hard time imagining why you want apache/php to restart your interfaces.
    ahmedshaikhm : Thanks. In fact, at the moment I am not able to run the commands at all, no matter whether I add apache to sudoers (%apache ALL=(ALL) NOPASSWD: ALL), restart Apache, or anything else.
    ahmedshaikhm : The script will switch the proxy IP addresses; the way they are set up is that they are bound to eth0, and eth0:1 is the main interface.
    ahmedshaikhm : Thanks for taking the time to answer. I was trying to choose two answers, as yours and Matt Simmons's both seemed to fit my level of expertise, but I found that I can only select one. I chose Matt Simmons's answer because it may be clearer to people who face a similar issue. Thanks once again!
    From troyengel
  • You should invoke the commands you need to run as a different user using sudo -- then set up the sudoers configuration file appropriately so that the user that runs the PHP script is allowed to run those commands as the other user (a least-privilege sketch follows this answer). See sudoers(5) and about a dozen previous questions regarding the use of sudo.

    ahmedshaikhm : I did follow them, but still no success. I know that Apache won't let itself be configured to run as root (unless a special compile-time flag is set and the server is recompiled). But by any chance, do you think it could be due to the fact that I am giving the apache user full root rights? apache ALL=(ALL) NOPASSWD: ALL
    womble : That should work OK. Update your question with these details and the errors that you get.
    ahmedshaikhm : Thanks for suggestion of adding more information to the question, it really brought good answers. 1+ point!
    From womble
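
    A hedged sketch of what that sudoers entry could look like, limiting the apache user to exactly the two commands from the question instead of NOPASSWD: ALL (edit with visudo; the !requiretty line matters on CentOS 5, whose default requiretty setting silently blocks sudo from non-interactive callers such as Apache):

    Defaults:apache !requiretty
    Cmnd_Alias WEBCMDS = /sbin/service network restart, /usr/bin/crontab -u root /xyz/abc/fjs/crontab
    apache ALL=(root) NOPASSWD: WEBCMDS
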
  • If you take troyengel's solution of throwing an entry in the root cron table, then modify it a little, you might have a workable solution.

    If you absolutely, positively need to use apache to signal an interface restart, why not have PHP create a file that acts as a flag, and then have your root cronjob that runs every minute (or whatever) check for the existence of that flag. If it exists, restart the interfaces. If it doesn't, die. (remember to have the cronjob remove the flag after it successfully restarts the interface).

    This accomplishes the goal (restarting the interface, triggered from apache/php) and sidesteps all of the possible problems involved with granting a web/scripting service root-level access (a minimal sketch follows below).

    ahmedshaikhm : Thanks. It seems that, being a PHP developer, this is my only option, because the other good answers require sysadmin knowledge to implement.
    troyengel : Enhance this idea to help avoid a race condition -- the PHP that drops the flag actually creates a bash-friendly file semaphore containing a date/time stamp (YYYYMMDDHHMMSS) that the cron job checks; if the time is outside a given range of acceptability then the cron job can delete it as stale and not act, and the PHP likewise checks whether a semaphore has already been dropped. Basically you can add a tiny bit of code at either end to keep from whacking yourself in the knees with stale drop files/semaphores on the filesystem.
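
    A minimal sketch of that approach, with made-up paths: the PHP side simply touch()es the flag file, and the root crontab runs something like this every minute.

    #!/bin/bash
    # runs from root's crontab: restart networking only if the web app dropped a flag
    FLAG=/var/run/restart-network.flag
    if [ -f "$FLAG" ]; then
        rm -f "$FLAG"    # remove first so a failed restart cannot loop forever
        /sbin/service network restart >> /var/log/net-restart.log 2>&1
    fi
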
  • Does this answer your question?

    ahmedshaikhm : Thanks for the pointer. It looks like a sysadmin job to set up audit2allow; it seems difficult to me as a PHP developer. Regards!
  • You could use the setuid bit to make a script run as root; however, doing that is a little more involved.

    However, you can't just setuid a shell script. You need to make a regular executable (in C/C++, say - something that doesn't invoke an interpreter the way shell scripts do) and have that execute the shell script.

    ahmedshaikhm : Hi, thanks for the answer, but the thing is that I am a PHP developer and cannot invest the time right now to write a C/C++ program to achieve my goal. Anyway, thanks for taking the time to answer. Regards!
    From spatel
  • Hi All

    I'm still unable to get this working. I have added the apache user/group to sudoers, but it's still unable to execute a few commands; in my case I'm trying to execute the "mount" command to mount an ISO.

    Please suggest

    TIA Anish Sneh

    From Anish Sneh
  • Thanks all, got it

    -- Anish Sneh

    From Anish Sneh

Scheduled Tasks - Win 2003 - Issues and further explanation of MS/KB

This page, http://support.microsoft.com/kb/308558, claims the following:

Note Administrators or users with administrator permissions can configure the Task Scheduler to send a notification when a scheduled task does not run as you set it to run. To do so, click Notify Me of Missed Tasks on the Advanced menu.

It also says:

You can view the log from the Scheduled Tasks window by clicking View Log on the Advanced menu.

I can't find what screen they are talking about. From control panel, I can click "Schedule Tasks" then "Add a scheduled task" or can click any of the tasks that I have already scheduled.

I'm running a Win2003-SP2 hosted/dedicated server.

I just found the handy command "schtasks" from the command prompt. It's showing me the status of "could not start" on three of my tasks. I just reset the password on one of them to see if that was the problem. I haven't really been monitoring them because I didn't believe any changes were made on this server.

Thanks,

Neal Walters

Update:

found the schedlgu.txt file:

"SEORank.job" (cscript) 1/9/2010 5:46:00 AM ** ERROR ** The attempt to retrieve account information for the specified task failed; therefore, the task did not run. Either an error occurred, or no account information existed for the task. The specific error is: 0x8004130f: No account information could be found in the Task Scheduler security database for the task indicated.

We'll see if this task runs tomorrow, but I would still like more info about what the Microsoft KB was talking about. Is there a nicer interface that I'm missing?

Update 2 - Like I said twice already:

  • In the Scheduled Tasks window, select the Advanced menu from the menu bar. You should see the items referenced in the article.

    NealWalters : Apparently Win 2003 is different. I don't see what this guy shows here: http://stackoverflow.com/questions/1015702/alternative-for-windows-task-scheduler That was perhaps my question: what "Scheduled Tasks window"? Like I said, from Control Panel I can click "Schedule a new task" or click on one of the already scheduled tasks.
    joeqwerty : You're missing it. Go into Control Panel and open up Scheduled Tasks (double-click the Scheduled Tasks item in Control Panel). Once you've got the Scheduled Tasks window open you can select the Advanced menu.
    NealWalters : See update 2 in original post - what the heck are you talking about? Different version of Windows maybe? At home I have Win 7 - and I have a tool in Admin-Tools. But Win2003 is what you see above. (By the way - double click on "Scheduled Tasks" doesn't do anything).
    joeqwerty : @Neal: The problem is in the way you're trying to access the scheduled tasks from the Control Panel. You've got your Start Menu set to expand the Control Panel menu item so it winds up looking like your screen shot. With your mouse on the Control Panel menu item, double click Control Panel so it opens up as a window. Alternately, you can right click the Control Panel menu item and select Open from the context menu.
    NealWalters : Thanks, a picture is worth a 1000 words. I went to "start menu", rt-click properties, advanced tab, and under control panel changed "display as menu" to "display as link". Or - "right click explore" gets me there (instead of double-click). I was expecting some special utility to pop-up, but instead, it looks like a normal win explorer screen - but it now has an "Advanced" menu item.
    NealWalters : When I click "notify me of missed tasks" - what will happen? How does it notify me? This is remote computer. Next time I log on to remote desktop?
    joeqwerty : Glad you found it and hopefully you'll get to the bottom of the scheduled task issue.
    joeqwerty : From the W2K3 help system: If you enable the Notify Me of Missed Tasks setting, the missed tasks dialog box displays at the foreground of the computer display, in front of the logon screen, when there are missed tasks to report. Anyone who sits at the computer can view the list of missed tasks.
    From joeqwerty

Zabbix. Math operations with items -> graph

I am using Zabbix 1.8 for network management reporting.

I have an item, for example net.if.in[<if><,mode>]

I need to build a graph using the value (net.if.in[<if><,mode>])/n, where "n" is any integer.

The only way I know is to put a "multiplier" of "1/n" in the item's properties, but I think that is not the right way :)

Please help me find the right way to do math operations on items and build a graph from the result.

  • Your only real option here, as you say, is to use the multiplier value. But bear in mind that using a multiplier will pre-process the data with your multiplier and then store the result in the database, so make sure this is what you want before changing it.

    If you want to keep the data as-is and transform it in graphs only, then you'll likely need to graph the data externally. Alternatively use another item to store the transformed data but this potentially doubles your disk space requirements.

    From rmyates
  • Also, the Zabbix forum is a good place to ask such questions. They are pretty helpful there.

    From Vitaliy
    1. if you don't need the data elsewhere, just use the multiplier

    2. If you do need the original data elsewhere, then since Zabbix 1.8.1 you can create another item of type "calculated" and graph that (a sketch follows) - for more details see http://www.zabbix.com/documentation/1.8/manual/config/items#calculated_items

    From Richlv
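
    For option 2, a rough sketch of such a calculated item (configured through the web UI rather than a file; the interface name and divisor are just examples):

    Type:    Calculated
    Key:     net.if.in.scaled[eth0]
    Formula: last("net.if.in[eth0]") / 8
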

Subdomain doesn't work without www in front of it

I had a subdomain set up in cPanel (sub.domain.com redirecting to domain.com/sub/), and today I noticed that it was rewriting the URL in the browser to www.domain.com/sub/ instead of just leaving it as sub.domain.com. I wasn't sure what the problem was, so I deleted the subdomain and recreated it in cPanel. Now I can only access it by going to www.sub.domain.com (sub.domain.com yields a "server not found" error).

EDIT: The problem seems to lie with the .htaccess file. After deleting it, things went back to normal. However now my new problem is how to configure the .htaccess file properly. I had tweaked it originally to try and accomplish the following:

I have three folders, live, staging, and dev:

1) Requests for domain.com or www.domain.com are directed to domain.com/live (and the url the user sees is always rewritten as www.domain.com).

2) Requests for staging.domain.com and dev.domain.com are directed to domain.com/staging and domain.com/dev, respectively, and the user sees staging.domain.com or dev.domain.com in the browser.

3) In general, any subdomain I add in cPanel should not have its URL rewritten in the browser.

My current .htaccess file accomplishes number 1 just fine, but rewrites staging.domain.com to www.domain.com/staging and so forth.

RewriteCond %{HTTP_HOST} !^www\.domain\.com$
RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]
RewriteRule ^$ /index.html

RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_URI} !\..+$
RewriteCond %{REQUEST_URI} !(.*)/$
RewriteRule ^(.*)$ http://www.domain.com/$1/ [R=301,L]

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} !^/live/
RewriteRule ^(.*)$ /live/$1
  • I tweaked the logic of my original code and seem to have a viable working solution. Every test I've done has been successful; if anyone sees any flaws in the code, or ways it might be optimized to fit the conditions posed in the original question, please chime in. I have little experience working with rewrites. Thanks.

    Working Code:

    # REDIRECTS
    #
    # Add www in front of all URLs except subdomains
    #
    RewriteCond %{HTTP_HOST} ^domain\.com$
    RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]
    RewriteRule ^$ /index.html
    #
    # Add forward slash at end of www URLs
    #
    RewriteBase /
    RewriteCond %{HTTP_HOST} ^www\.domain\.com$
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_URI} !\..+$
    RewriteCond %{REQUEST_URI} !(.*)/$
    RewriteRule ^(.*)$ http://www.domain.com/$1/ [R=301,L]
    #
    # Forward requests to live folder
    #
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_URI} !^/live/
    RewriteRule ^(.*)$ /live/$1
    

    *Note: I'm not sure why the last part of the code does not redirect subdomain URLs to the live folder. It works the way I want it to, I'm just not sure why ^_^; (a possible tweak for the staging/dev hosts is sketched below).

    From George
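
    For requirement 2 (serving staging.domain.com and dev.domain.com from their folders without changing the host the visitor sees), a possible addition is an internal rewrite keyed on the Host header, placed before the /live block. This is only a sketch and assumes both subdomains point at the same document root:

    RewriteCond %{REQUEST_URI} !^/(staging|dev)/
    RewriteCond %{HTTP_HOST} ^(staging|dev)\.domain\.com$ [NC]
    RewriteRule ^(.*)$ /%1/$1 [L]
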

Linux distro recommendations for servers

I have limited Linux server experience. I am looking for recommendations on distros. What's your favorite for servers and why?

EDIT: Clarification: I'm hoping for one, general purpose distro, not specific to a particular function. Something for use on file servers, web servers, anything really. Community support, timely patching, admin friendly tools, good track record, etc. are more the angle I'm going for here.

  • Ubuntu

    Reason: The community - there is a big community and if you need to do something, you can nearly always find someone that has done it first and posted a great set of instructions.

    bruceatk : I started out with Red Hat and then Fedora, but after trying Ubuntu I can't go back. I have an Ubuntu server running in a VM on my windows server, I run Ubuntu desktop in a vm on my Vista desktop, and I dual boot my Mac mini to Ubuntu. I've been installing the desktop version and then installing server stuff as needed even on my server. I did it the other way once the first time and I just find it easier to start with the desktop version. Compatibility with my various hardware has been good and updates and upgrades have gone smoothly.
    From Guy C
  • Debian or Ubuntu (server) due to dist-upgrade. Any distro can be installed from scratch - but how many can be upgraded in place? I've got a debian machine that hasn't been reinstalled from scratch since approximately 1995.

    There are good communities around both, and also good documentation on making your own packages so you can make the package system work for you personally. It's Very Cool to be able to easily roll a custom package to deploy to your internal machines.

    Getting the most up-to-date versions of software can be a challenge, but backports have filled that gap pretty well.

    Brent : I have been using Debian on about 30 servers for the past 5 years and found it very good, stable and secure - except if you need really up-to-date stuff - for instance, if you are hosting Ruby on Rails websites you won't find the most recent versions in the repos. Ubuntu is great, if you're prepared to upgrade every 6 months - or stick to the LTS versions. Debian and Ubuntu are similar enough that you use both, without much of a learning curve.
    Mihai Limbăşan : @Brett Nesbitt: Agree 100%, but I have a minor nitpick: it's arguably much better from a stability point of view *not* to use the latest and greatest on a server.
    pjz : @Mihai Limbason: Sadly Debian's release schedule has historically been so slow that their packaged software can be *years* out of date, which is okay for some software but doesn't really suffice for fast-moving targets like python, ruby, django, svn, etc.
    warren : with using Ubuntu, just make sure to stick with the LTS releases
    From pjz
  • Debian is my server distribution of choice: it's very stable, available for almost all server platforms, and security is a key point of that distro!

    From paulgreg
  • Red Hat Enterprise Linux or CentOS. They are stable, you can buy support for them if you care to, and every major release has a guaranteed 5-year support cycle, which is invaluable (don't even get me started on the Fedora server a semi-rogue admin installed here that we now can't upgrade).

    Also, it tends to be the supported solution when installing commercial software on linux (Oracle comes to mind, but we have other commercial products that state they'll install on anything, but they will only guarantee interop with Red Hat/CentOS)

    dagorym : Another flavor of this, similar to CentOS is Scientific Linux. It is rebranded RHEL just like CentOS but supported by the national labs and large universities. That's the distro I'm using for my sites
    wzzrd : The support cycle is *7* years of RHEL releases for security updates.
    Joseph Kern : When looking for a linux distro, look at the community first. What is their focus? In the case of RHEL, Debian, CentOS, SLES, and Scientific Linux, they focus on creating stable, well maintained platforms. Some people will swear up and down that the best Linux for job X "Is whatever you feel comfortable with." This avoids the question, and is more confusing than helpful to someone who doesn't feel comfortable with ANY of them.
    From jj33
  • I run all my servers at work on Gentoo. While a lot of fellow Linux sysadmins will consider me crazy, I must say I've had little to no trouble. One server runs as a build host for the others (they're all the same hardware) and serves the binaries to them. I like Gentoo a lot because it gives you loads of control over what is installed and what is not. The community is great, and the forums are the best online. If you stay on the stable branch, it's smooth sailing. Currently I'm leaning a bit more towards Arch Linux because of the lack of compiling, but I haven't installed a server with it yet. Source-based/meta distros can make great servers!

    Oleksandr Bolotov : Gentoo also has hardened profile (http://www.gentoo.org/proj/en/hardened/) for even higher security
    From Gert M
  • I just voted up the "function" comment, as it asks a relevant question .. the primary function matters, to some extent.

    The other key decision you have is whether you want access to professional support. If the answer is yes, get Red Hat. If the answer is no, then I would start with Ubuntu.

    Additional note .. Gert's (implied) point about using the same distro for multiple servers is also a great one!

    Hamish Downer : You can get professional support for Ubuntu from Canonical, but you can try ubuntu without paying anything.
    From tomjedrz
  • Slackware

    Since its first beta release in April of 1993, the Slackware Linux Project has aimed at producing the most "UNIX-like" Linux distribution out there. Slackware complies with the published Linux standards, such as the Linux File System Standard. We have always considered simplicity and stability paramount, and as a result Slackware has become one of the most popular, stable, and friendly distributions available.

    I learned a lot working with this distro, but you will need plenty of time and patience. Slackware is user-friendly, it's just very selective about who its friends are.

    jassuncao : You got that right :-)
    Lee B : Slackware is interesting to learn stuff on, and I've fond memories of using it as one of my first linux distros, but a system good for learning stuff is just that -- not a general purpose tool. For general use, I think ubuntu is probably the way to go. It's hard to beat have good, well maintained and integrated packages for every purpose available at the touch of a button, and being able to upgrade/remove/reconfigure the entire server at any time.
    alexandrul : YMMV, but I found Slackware to be rock-solid and as good as any other alternatives (paid or free). As a bonus, it forces you to learn some extra things along the way.
    From alexandrul
  • I recommend using CentOS.

    • It's Red Hat Enterprise Linux recompiled from Red Hat's sources
    • You can upgrade the distro without a support contract
    • Red Hat is usually used with commercial software, so prior exposure to it will help in your journey
    • Once you pick up Red Hat you can move to other distributions and know what's important
    • Lots of information on the internet
  • Ubuntu if your server is for learning about Linux and servers, CentOS if it's for production. I find the Ubuntu community to be more active than the CentOS one, which may be helpful if you're just getting started. CentOS doesn't have a new release every six months, which is nice for a server OS. It also uses older but more tested versions of software, which is important on a server.

    From Jared
  • Whatever you choose, if it will be exposed to the public internet, you want to ensure that it supports selinux. This has saved my butt on at least one occasion where otherwise I surely would have gotten hacked due to a security flaw in software I had installed. Not only did selinux totally block the attack, but via setroubleshoot it raised a pop-up window to warn me about it.

    From Eddie
  • I use Arch Linux. I like it for its great package manager, "rolling" release model, nice community package repository (ABS), nice configuration, and great wiki. For me it is much like Gentoo, but with binary packages.

    You can find comparison with other distro on: Arch vs Others.

    From rkj
  • I use either CentOS or Ubuntu LTS releases. Both are stable, well-known and well-supported.

    From Jon Topper
  • I recommend Ubuntu server (LTS preferable for production) as it is the only free of charge Enterprise class linux server distribution.

    Peter Boughton : Huh? Are you claiming that CentOS is not a zero-cost Enterprise class server distribution?
    Kz. : No. It is built by the community, and no big enterprise will ever support its software on CentOS.
    From Kz.
  • Personally, OpenSuSE for me. Mainly because of YaST and autoYaST; great package management, upgrades/updates, etc.

    From RainyRat
  • I think there are two approaches:

    1. Rolling distribution
    2. Release distribution

    ad.1 I personally prefer rolling distros, since there is no such thing as an upgrade to a newer version (which can be risky either way). That's why I prefer Gentoo for my servers. Unfortunately, setting up Gentoo requires a bit more effort, yet thanks to Portage, system maintenance (if you use the stable branch) is really easy and painless. A very cool feature is slots and the ability to downgrade to literally any old version of a package with one simple command. On the other hand, Gentoo packages, even in the stable branch, tend to be much newer than Debian's (see point 2) - if you need a machine with recent software, Gentoo is for you.

    ad.2 Release distro - I'd choose Debian here, mainly for its popularity with hosting companies and its huge package base. Although I don't use it myself, it seems to be the most reasonable distribution, with a really high security level. Deployment is fast, and there's no need to compile packages (unless you really need to).

    Lee B : Debian excels at both rolling and release upgrades, but you're right that some distros fail at one or the other, and so it's an important distinction.
    From pielgrzym
  • Debian for its stability and popularity