Unix Administrators' Journal
[Most Recent Entries]
Below are the 20 most recent journal entries recorded in
Unix Administrators' LiveJournal:
[ << Previous 20 ]
|Thursday, June 23rd, 2011|
text editor tricks
wow, nobody's posted here for 6 months. anyone still here?
if so, i need some fun help! i'm running a "text editor wars" competition tomorrow at a geek conference and i need more tasks for the warriors to complete and see who can do it fastest in their editor of choice. i'm thinking stuff like "regexp search and replace", "find the missing brace", "rot13 this document" (i know, it's trivial in vi/emacs, but only if you know the tricks!), "indent this code", "reformat this paragraph".
my plan is to make each event standalone and do first/second/third place and see who ends up with the best average placement to see who wins. but this would be more fun with a whole lot more challenges.
help! (and, especially, example text with example solution text would be appreciated since my timeline is pretty compressed. i'm doing this as a volunteer effort, you're not doing my homework, no warranty express/implied, etc)
thanks in advance =)
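One ready-made event to borrow, with example text and a reference answer generated outside any editor so the judges can check contestants' output (filenames here are made up):

```shell
# "rot13 this document": reference solution via tr, for verifying entries.
printf 'Attack at dawn\n' > challenge.txt
tr 'A-Za-z' 'N-ZA-Mn-za-m' < challenge.txt > solution.txt
cat solution.txt    # Nggnpx ng qnja
# In-editor tricks contestants might use: vim has the g? operator
# (ggg?G from the top); emacs has rot13-region / toggle-rot13-mode.
```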
|Sunday, December 26th, 2010|
Logically or physically consolidating directories of similar files?
Consider the common occurrence of a directory having files that all
have the same security permissions and that are usually accessed by
the same programs, such that several of the files are likely to be
read when one is read. Is there any filesystem technology or literature
regarding merging multiple files into a single security domain such
that having access to one fast-tracks your access to others (assuming
there is a significant overhead to security lookups) and/or merging the
files into a single I/O operation that loads all the files into memory
when a request for one is made?
I've seen applications that zip up their data files to improve the
read time, which incidentally puts all the files in one security domain,
but I would like to know if there is anything which has the same effect
and is transparent to applications and user space. Imagine something
along the lines of mounting a zipped directory as a pseudo-filesystem,
or a system that presents a directory in the filesystem but keeps a
tarfile on disk, with something in the filesystem telling the OS to
treat the tarfile as a directory. Is there a name for this kind of
arrangement?
Also, am I correct in assuming that solid-state drives and their ability
to read multiple files at once would minimize the I/O gain from merging the
several files into one resource on the hardware?
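For what it's worth, the nearest off-the-shelf thing to "treat a tarfile as a directory" is a FUSE filesystem such as archivemount (or fuse-zip for zip archives). A rough sketch of the consolidation effect itself using a plain tarfile; the file names are illustrative:

```shell
# One open() and one permission check on bundle.tar covers every member,
# and one sequential read pulls them all in together.
mkdir -p docs
printf 'alpha\n' > docs/a.txt
printf 'beta\n'  > docs/b.txt
tar -cf bundle.tar docs            # consolidate the directory into one file
rm -r docs
tar -xOf bundle.tar docs/a.txt     # read one member without unpacking
# archivemount bundle.tar /mnt/docs   # a FUSE tool would make this transparent
```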
|Monday, September 21st, 2009|
|Thursday, May 14th, 2009|
Suggestions for a better way to solve the problem?
So, I'm out here looking for suggestions. My current situation involves file permissions.
I've got a solution, but it's not a good solution, IMHO, and I'm looking for better ideas.
Here's the situation:
User A and User B are in the same group
A file is created by User A
User B facilitates the transfer of said file off to a 3rd party
After the transfer is complete, I want User B to remove said file
User B is prohibited from having write access to said file or directory.
I started out by trying some file permissions stuff.
R-X doesn't seem to do the trick at all.
My current solution, IMHO, sucks.
I have User B create a file in /tmp
I have a cron job that runs every 5 minutes and checks for the presence of said file.
If the file exists, it removes both the files created by User A and User B
So, anyone got any ideas?
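One detail worth stating, since it explains why the chmod experiments keep failing: unlink() checks write permission on the directory, not on the file. A minimal demonstration (paths illustrative):

```shell
# A user with no write access to a file can still delete it as long as
# the directory is writable to them; rm -f skips the "override mode?"
# confirmation that rm would otherwise print.
mkdir -p drop
printf 'payload\n' > drop/outgoing.dat
chmod 444 drop/outgoing.dat    # nobody can write the file itself
rm -f drop/outgoing.dat        # still succeeds: the directory permits it
```

So if User B genuinely may not have write access on the directory either, some privileged go-between has to do the unlink — whether that's the cron job above, or something like a sudo rule limited to a single script that deletes only that path.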
|Monday, March 2nd, 2009|
Vim is my favorite text editor. It keeps me fresh on vi in case I end up on an unfamiliar box, but it still has a lot of features.
One thing vim offers is some pretty convenient file encryption. Unfortunately there's a problem with it.
Vim has swap files it creates to allow for data recovery. This is where the problem is. If you open an encrypted file, anyone else can "recover" that file with an arbitrary pass phrase.
Here's how this breaks down:
- I'm logged into the box, and I decide to open my super-secret text file that I encrypted with vim.
- Someone else is on the box. They decide they want to open my super-secret text file too, so they open it with vim and are asked what they want to do with the swap file. They choose to recover it.
- Vim asks for the passphrase, but they can type in anything. They are now able to view the file.
- I have tried this with both vim 6.3 on Solaris 9 and vim 7.1.314 on Ubuntu 8.10, with the same results.
This problem can most likely be avoided by proper use of file permissions. It's also a reminder that it is best to use a layered approach to computer security.
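For anyone wanting concrete mitigations along those layered lines, two sketches, assuming a reasonably stock vim (the flag meanings are per vim's own documentation; paths are illustrative):

```shell
# First layer: keep other users away from the swapfile entirely by
# editing inside a directory only you can enter.
mkdir -p ~/private && chmod 700 ~/private
mv secret.txt ~/private/ 2>/dev/null || true
# Second layer: skip the swapfile for the sensitive session:
#   vim -x -n ~/private/secret.txt    # -x: encrypted edit, -n: no swapfile
# or add "set noswapfile" to ~/.vimrc (at the cost of crash recovery).
```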
|Sunday, December 7th, 2008|
Dead Tree Solaris 10 Manual
Does anyone know where I can find a decent dead-tree text (or free Kindle with included ebook :-P) for Solaris 10? The only halfway useful texts I can find are the voluminous PDFs on Sun's website. Surely there is a better way, but I haven't been able to find anything on Amazon or eBay.
Thanks much for your help, schpydurx
|Thursday, September 18th, 2008|
multivolume tar strategies
Just encountered my first gtar "Path name too long for GNU multivolume header" error.
Can anybody point me to a suitable tape writing mechanism that will automagically span files across multiple volumes without such a limitation? What would something like amanda do in this case, for example?
Current Mood: clueless
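For comparison, GNU tar's own multivolume mode with preset volume files at least sidesteps the operator prompt, though not the header's name-length limit; amanda avoids that limit differently, by chunking the backup image itself into fixed-size pieces rather than asking tar to span members. A sketch with illustrative sizes:

```shell
# When several -f options are given, GNU tar uses them in order as
# successive volumes without prompting; -L is the volume size in KiB.
dd if=/dev/urandom of=big.bin bs=1024 count=30 2>/dev/null
tar -c -M -L 20 -f vol1.tar -f vol2.tar -f vol3.tar big.bin
ls -l vol*.tar                                    # the file spans the volumes
tar -x -M -f vol1.tar -f vol2.tar -f vol3.tar     # reassembles big.bin
```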
|Thursday, August 28th, 2008|
Homegrown surveillance system
I am interested in building a 4 camera CCTV home surveillance system. I like the idea of using ZoneMinder
on Debian Etch so far, but I haven't actually gotten started yet. I've got beefier experience with Solaris, but the software really was developed for Linux, so I don't want to go the Solaris route unless I have to.
I also don't have my video capture card yet so recommendations are welcome. The interfaces either need to be standard coax or I need some sort of converter. I know one can send ethernet over coax, but can one practically send digital video over ethernet? (I do have Cat5E available.) After all this will be closed circuit - this traffic isn't going to hit anything else.
So, anyone got any helpful hints while this is still in the planning stages?
Current Mood: mellow
|Wednesday, August 20th, 2008|
solaris 9 or 10?
I've got a 333 MHz Ultra 5 with 256 MiB. I've installed Solaris 10 on it to learn more about Solaris, but after speaking with another Unix admin I'm thinking that maybe 9 is a better option for me on this hardware. He says that 10 uses a lot more memory, and the biggest updates are optimizations for Java. Since I don't really have an interest in Java (at least in regards to this box), perhaps I'd get more out of it by downgrading to 9. Another perk would be the ability to make use of the SunPCi card in it. I won't be running a desktop...normally headless and ssh in only. I'd like to get more opinions on the subject. Should I go with 9 or 10 given my specs and needs?
Update: OK, so you've convinced me to "downgrade" to 9. How painful is that going to be? Will the installer balk at the presence of 10, or force me to reformat?
|Monday, July 21st, 2008|
|Wednesday, July 16th, 2008|
Looking for a place to host a server (St. Petersburg)
Colleagues, can anyone suggest where I could colocate a server? Somewhere inexpensive, with a good uplink, IP addresses provided, and other such perks. Ideally somewhere that would let me install it on a weekend or in the evening, since I'm at work the rest of the time.
UPD: the server is an ordinary PC, in a suitable case..
|Thursday, July 3rd, 2008|
the state of backup technology
it's been...a long time since i was in charge of backups. well over a decade. using tar and hand-changed tapes that held less than a gigabyte are my history.
suddenly our managed backup solution is ditching us. i need to backup 200GB (uncompressed, much source code and some database dumps and graphics), most of which changes not-very-often, and i need to do it in a data center that is not at our office, and it would be nice if the solutions didn't take up much room in the rack at the data center and didn't cost an arm and a leg. we definitely don't need a robot if we can get decent-capacity tapes. i don't want to visit the data center every day, but every week is definitely acceptable. i do want to backup every day (incremental, obviously).
what's hip in scsi or net backup these days? and by hip i mean in the sweet spot price/sizewise. i'm willing to consider either NAS or something scsi-tape based (we don't have any FC infrastructure and i don't want to add it.)
i'm also interested to know if anyone has experience doing offsite, online backups with some reputable third party. sure, i can rsync to J Random Box, but i'd like some organization with accountability that runs in the $1-2k/mo price slot for our quantities. i've gotten quasi-quotes from a few like evault and amerivault in that price range, but personal experience with companies whose backup clients don't suck would be awesome. i've used networker with distaste in the past, and getting tivoli running on our 64-bit ubuntu was a challenge i'd like not to repeat--probably wouldn't have been too bad if we controlled the server as well. sigh.
thanks in advance for your help!
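On the roll-your-own end of the spectrum, GNU tar's snapshot files give exactly the weekly-full-plus-daily-incremental cycle described above; a minimal sketch with illustrative paths:

```shell
# Level 0 on the weekly data-center visit, level 1 the rest of the week.
# The snapshot file records what has been seen; copying it before the
# incremental run keeps the level-0 snapshot reusable.
mkdir -p data && printf 'v1\n' > data/app.conf
tar -c -g snapshot.snar -f full.tar data          # level 0 (full)
printf 'v2\n' > data/app.conf                     # something changes
cp snapshot.snar snapshot.l1.snar
tar -c -g snapshot.l1.snar -f incr.tar data       # level 1: changed files only
tar -tf incr.tar                                  # should list data/ and data/app.conf
```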
|Thursday, June 26th, 2008|
|Wednesday, May 14th, 2008|
Just a friendly reminder that BOTH
Debian and Ubuntu have SSH woes
and need to be patched.
|Monday, February 11th, 2008|
Name resolution on Solaris 8 + Sendmail
How often do people here ask for help after
fixing the problem? This solution has me curious, so here is the story:
We had a problem with Sendmail on Solaris 8 where it would list the From: address in outgoing mail as "user@foo" instead of "user@foo.org". Fixing this was a pain because nothing appeared to be wrong. We ran find|grep searches to be sure every file in /etc with the host name in it was using the full "foo.org", we upgraded libresolver and sendmail to the latest patches available, we added "Dj", "DM" and "Dm" rules specifying "foo.org" to every .cf file in /etc/mail, we tried copying sendmail.cf and submit.cf files from another working system, none of it helped. All the while check-hostname, uname -a, uname -X, and the basic hostname command all reported the host name as the full "foo.org".
What eventually fixed the problem was adding the wrong
hostname to the /etc/hosts file. Where it had said:
22.214.171.124 foo.org sun450
We changed it to say:
22.214.171.124 foo.org foo sun450
That seemed to do it. Sendmail is now sending mail From: user@foo.org instead of @foo. This leaves the question of why.
I will pose two sides to the question because there might be two different answers. Why was Sendmail using plain "foo" before when there is nothing on the system to specify a "foo" without a ".org"? And why did sendmail not pick up "foo.org" as the name matched to its external interface until I added just plain "foo" to the hosts file?
Thanks to anybody who could clear this up.
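A partial answer to the first half of the question: in a hosts(4) entry the first name after the address is the canonical name and the later names are aliases, so the added "foo" gives a lookup of the bare name something to match, and that match canonifies to "foo.org". A sketch of that canonicalization step (the address below is a documentation placeholder, not the real one):

```shell
# Emulate the canonical-name step of a hosts-file lookup for the alias
# "foo": find the line containing it, report the canonical name ($2).
cat > hosts.demo <<'EOF'
192.0.2.10 foo.org foo sun450
EOF
awk '$0 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == "foo") print $2 }' hosts.demo
# prints: foo.org   (getent hosts foo shows the live equivalent; the
# lookup order itself comes from the hosts: line in /etc/nsswitch.conf)
```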
|Sunday, February 10th, 2008|
GRE tunnel with Solaris
Long-shot here: does anyone know if I can create GRE tunnel interface with Solaris 10? My interest is in using Squid 2.7 (yes, not stable yet) + WCCPv2, but would rather not get into a situation where MAC addresses have to be rewritten.
Obviously Linux can do this without issue, and I'm happy to throw together a Debian system for this purpose. I should mention that I have not yet tested WCCP on either OS and am still in the information-gathering phase of this project. My knowledge of Solaris is not as good as Linux, but there are a few reasons why Sol 10 would be a potentially good fit for this.
|Friday, February 1st, 2008|
escaping colons in filenames
I have this weird problem where no matter how I try to escape the colon in a file name, I cannot get rsync or scp to copy it, because it parses the colon and thinks that the string preceding the colon in the filename is a hostname (and complains that it cannot do remote-to-remote transfers, which is, uh, sort of common sense :)
So far, I've been working around this by sticking the files that have colons in them in a folder and then using -r to copy the contents of that folder, but I feel so ashamed having to resort to that (worse than the unnecessary use of cat(1)).
Edit: Here is the specific problem I am having:
% scp myfile\:foo somehost:/somedir/
error: unknown host myfile
% scp "myfile:foo" somehost:/somedir/
error: unknown host myfile
% rsync "myfile:foo" somehost:/somedir/
error: cannot transfer files between remote hosts
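The standard workaround: scp and rsync only treat the text before the first colon as a hostname when no slash precedes the colon, so a leading ./ disambiguates. Sketched as a local-to-local copy so it is self-contained:

```shell
# A path with a slash before the first colon is always taken as local,
# so the ./ prefix stops "myfile" being read as a hostname.
mkdir -p out
touch 'myfile:foo'
scp ./myfile:foo out/
# For the real transfer:
#   scp ./myfile:foo somehost:/somedir/
#   rsync ./myfile:foo somehost:/somedir/    # same rule applies to rsync
```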
|Wednesday, January 30th, 2008|
Why, pray tell - you denizens of the Solaris
world, must I have a DHCP server to jumpstart my X4200? I never had to have one using SPARC architecture, and it's beyond my present understanding why one is required to jumpstart an x86 box with Solaris 10.
Here's one answer, but it doesn't make my question any less valid:
Jumpstart on SPARC machines traditionally made use of RARP, BOOTP etc to get the required information for building over the network. This method does not work on x86 based machines, and required that there be a boot server on every subnet.
DHCP is the method used by x86 based machines to start a non-attended network boot, and it also has the advantage of being able to traverse subnets through the use of DHCP forwarders.
However, it's often a little more difficult to set up, configure and manage. To accommodate the x86 based servers, and the ability to jumpstart over multiple subnets, we have added DHCP support to the core toolkit.
There are also some new x86 specific configuration variables in the base module.
Possible DHCP configurations
* Local DHCP server hosted on the Jumpstart server.
* Remote DHCP server.
* Multiple remote DHCP servers.
If all you need to do is install x86 based servers, then the first option is the easiest to configure and set up. Additionally, if you configure the routers as DHCP forwarders, you can also jumpstart on any subnet that can "see" this DHCP server. Unfortunately this scenario is slightly unrealistic, as it's likely that there's a real DHCP server lurking somewhere.
This is where option 2 becomes useful. If there is already an enterprise-wide DHCP server, and it's running Sun's DHCP, then you're in luck! All you need to do is set up an ssh relationship between your server and it, and JET will do the rest for you.
Finally, there may in fact be multiple DHCP servers, possibly one per subnet. JET allows you to specify which subnets are served by which dhcp server.
That's not an explanation of why it doesn't work. All my research points to x86 being able to support both RARP and TFTP/BOOTP. Sun calls DHCP boot PXE's default configuration, which suggests there are alternate methods that do not require DHCP.
Taking advantage of this new twist, however, I told my boss my jumpstart project was now complete...in theory.
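For anyone stuck at the same point, this is roughly the shape of entry a local ISC dhcpd needs for a PXE network boot; every name and address below is a placeholder, and on Sun's own DHCP server the dhtadm/dhcpmgr tools would add the SUNW vendor options instead:

```
# Illustrative ISC dhcpd.conf fragment for a jumpstart subnet.
subnet 192.0.2.0 netmask 255.255.255.0 {
  range 192.0.2.100 192.0.2.150;
  next-server 192.0.2.10;        # the jumpstart/TFTP server
  filename "nbp.client1";        # x86 network boot program (placeholder name)
}
```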
|Thursday, January 24th, 2008|
X.org upgrade fucked up my gamma
I upgraded from X.org 7.2 to X.org 7.3 after upgrading to FreeBSD 6.3. I even did a clean X.org reinstall by killing every X-related port there was and rebuilding it.
Whereas before, I didn't have to do gamma correction on my BenQ FP731 + Radeon X300, now, in order to not be blinded when I'm in X, I had to:
[peter@cowbert]:~ % xgamma
-> Red 0.450, Green 0.450, Blue 0.400
But it's still not right. I've reset the monitor to more or less its defaults (sRGB temp, 40% contrast, 30% brightness). But at these funny xgamma settings (isn't gamma supposed to be > 1 if it really needs adjusting?), my whites are still a bit too bright, but the other colors are really too dark. And if I bump them up, the colors never really fix themselves, but then everything gets all washed out and I get a headache :( It's as if the whites are too white and the blacks are too black and there's nothing I can do to get them to move from their respective extremes (because trying to lighten the blacks makes the whites worse).
Clearly something is borked, and it only happened after I upgraded X.
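One thing worth doing while it's broken: pin the correction in xorg.conf so it at least survives server restarts instead of re-running xgamma per session. An illustrative fragment, using the same values as the xgamma output above (the Identifier must match your own Screen/ServerLayout):

```
Section "Monitor"
    Identifier "BenQ FP731"
    Gamma      0.45 0.45 0.40    # same correction the xgamma call applies
EndSection
```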
iodbc < unixODBC
Anybody wanna tell me why libiodbc doesn't ship with an isql-like tool (sorry, but iodbctest is just lame)?
Plus unixODBC also has DataManager (which isn't as cool as the MSSQL Enterprise viewer, but for quick-and-dirty schema lookups it's great). But the big thing about unixODBC's isql is that I can run a quick query by basically doing: echo "SELECT col FROM tbl WHERE expr" | isql DSN UID PWD. I can then redirect it to a file or pipe it to mail or lpr (just like in the '80s. :) Not to mention if I'm in interactive mode, it actually has query editing and query history functionality (probably using readline(3) or something)!
(I would use unixODBC, except for the fact that RedHate ships with iodbc and not with unixODBC :)
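For anyone who does build unixODBC alongside the distro's iodbc: both read DSN definitions in the same INI style, so one odbc.ini can serve both. An illustrative sketch, where the driver path and all names are placeholders:

```
; Illustrative ~/.odbc.ini entry; unixODBC's isql would then be invoked as:
;   echo "SELECT 1" | isql -b MyDSN user pass
[MyDSN]
Driver      = /usr/lib/libmyodbcdriver.so
Description = example DSN
Server      = db.example.com
Database    = mydb
```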