From | ARP |
---|---|
Subject | Re: What popular, large commercial websites run PostgreSQL? |
Date | April 29, 2002 |
Msg-id | 002101c1ef8a$b822d3a0$0100a8c0@arp.homelinux.org |
In response to | What popular, large commercial websites run PostgreSQL? (rich@annexia.org) |
List | pgsql-general |
Hi,

I think this may also be interesting for the mailing list, then: this is a copy of a message found on this list a few days or weeks ago. I kept it because I thought it might be useful some day :-)

Arnaud

It started so innocently...

Good morning everyone,

I have a general question about who is using PostgreSQL. This is not a marketing survey, and any information I collect will only be used by me. Here's the background.

I have a user who has developed a Visual Basic application that uses MS Access files for its data storage. Currently, this data file is about fifty megs in size. There are about fifteen users who use these files in the application; needless to say, this is having a severe impact on our network. After much heartache and pain, I was able to convince him that we need to look at an RDBMS to put the data on. Of course, I suggested Postgres as an alternative to MS SQL Server for many reasons. Linux runs on all of my servers, and I'm happy with its performance and reliability. I'm currently running Postgres as my web server's backend. Open-source software does not scare me. However, his side of the camp comes from the Windows world. "It has to be MS SQL Server. It'll be easier to program to than any other server." "Open-source software isn't going anywhere." "Can we depend on it?" are common questions and statements I have heard.

I am not trying to start a ruckus or a flamewar, but I would like to know who's using Postgres out there. What's the application? How big are your databases? Are you using Visual Basic or C to connect to it through ODBC, or are you using a web interface?

Any information you can provide will be greatly appreciated.

Thank you,
Corey W. Gibbs

Brian Heaton wrote:

Corey,

My firm is currently using Postgres as the back-end of a military network monitoring app. This will end up being deployed in tactical vehicles. Our databases tend to have 1 huge table (5-10M rows), 2-3 medium tables (50-100K rows), and 2 smaller tables (5-10K rows). Our UI is currently in Java using JDBC (of course). We also interface directly in C from a couple of utility and reporting apps.

THX/BDH

Brian Hirt wrote:

For what it's worth: our company runs MobyGames (http://www.mobygames.com), a project similar to IMDB, but for video and computer games. We exclusively use Postgres. We've been using it since December of 1998 (pg 6.5.3) and have been very happy with it. The database is relatively small, around 1.5GB in about 200 tables. All of our pages are dynamically created, and we serve up about 1,000,000 pages a day (each page usually causes at least 20-30 queries against the database). Most of the database activity is select queries; there is only about 0.5MB - 1.0MB of additional content added a day. The database runs on a single box and has performed well. When there have been problems with Postgres, the developers have been very proactive about finding a solution, and the problems have always been resolved within a day or two. From extensive past experience with both Oracle and Sybase, I can say that's great.

-- brian hirt

Kym Farnik wrote:

Hi - we use various SQL DBMSs, including Postgres. The choice of DBMS depends on customer needs. RD are an online application development company. We have positioned Postgres for the 'entry level' customer.
This is a little misleading, as some of those customers have quite large databases. By comparison, our government accounts use Oracle (it's the DBMS of choice for the South Australian Govt). Some of our larger customers also use Oracle. One customer in the advertising/image processing industry has a projected storage requirement of 6 petabytes. They are using Oracle on Solaris. :-) On the other hand, our CMS product uses Postgres, as do companies like Balloon Aloft (www.balloonaloft.com.au).

To quote our marketing stuff...

Introduction
------------
Recall Design use PostgreSQL (http://www.postgresql.org) as a Database Management System (DBMS) for web application projects. PostgreSQL is a free, open source DBMS product. This article discusses the advantages of using PostgreSQL over commercial databases such as Oracle and Microsoft SQL Server.

Recall Design use Oracle for large and/or multi-server applications. Commercial DBMSs, such as Oracle, are used where specific features, such as Spatial, are required. We design our systems so that customers have the option of migrating their DBMS from PostgreSQL to a commercial DBMS such as Oracle. This allows customers to start low-cost with the option to expand as required.

... More stuff from the Postgres site follows (with GNU legals)

Jeff Fitzmyers wrote:

One thing that has not been mentioned is the ability to start companies with a very small budget. I am developing the webified office backend on an oldish Mac OS X laptop with Postgres / PHP / Apache. I am Mr Mom, and the laptop allows me to work instantly with partners, clients and the main website. The flexibility is fantastic.

Ever tried to put Oracle on a laptop? A coworker has, and for some reason 5 high-end laptops could keep him busy for a few days with Oracle, Java, configuration, etc. I think it took much longer than an hour just to load Oracle. The first time I set up the Mac, it took 30 minutes to get everything going with no problems.

I met a few of the developers at a past Linux expo. They seemed very nice and very capable. I am very pleased with the development pace and focus of Postgres. Each new release is like Christmas :-) Plus, the Postgres lists are great sources of education!

Thanks, Jeff Fitzmyers

Jeff Self wrote:

I understand where you are coming from. I worked for a city government up until a year ago. I built our intranet using Linux on a discarded server with Apache and PostgreSQL. But they didn't care about the fact that it was free. They wanted all data to be stored on the mainframe. I got tired of the scene and I left to join Great Bridge. We know the rest of this story.

I'm now back in city government, although with a different city. They are much more open to creativity here and are allowing me to develop on Linux running PostgreSQL. I'm in the process of developing a Job Information System for our Personnel department, whom I work directly for, that will use Apache, PostgreSQL, JSPs, and some Perl. So I'm a happy camper now.

Put together a proposal for them. In one column, list the costs for installing PostgreSQL on your existing Linux servers. In the other column, list the cost of a server running Windows XP/2000 with MS SQL Server. Don't forget to include the cost of licenses for all 15 users. Also throw in Visual Studio .NET, which was just announced the other day; I believe it's around $1000 per user. Let them decide.

Steve Wolfe wrote:

Since I've posted a number of times to this list, it's no big secret that www.iboats.com is powered by Postgres. It's been rock-solid for us, and served us very well.
Our data directory is about 1.5 gigs in size, spread out over a few hundred tables, some very small, some very large. We do all of our programming in Perl. Investors have never heard of Postgres, and sometimes mention getting Oracle, so we tell them "Terrific, if you want us to get Oracle, we can do that. We'll just need an extra half-million dollars to do it with." Reality then slaps them in the face.....

Tony wrote:

The regional foundation for contemporary art in Pays de la Loire, France.
Contact database of about 30K people - mailing.
Works database with about 700 works of art - conservation, expo planning...
Library database with about 6000 books.

Clients are all Macs. The reason for leaving the world of closed source was the cost per seat for client licences. There are 10 people using the database. The interface is www, JDBC, JSP. The public bit of the works database will be linked to the web site, as will all of the library database.

Leif Jensen wrote:

Hi, I think we have a technically interesting product.

The application: logging time and attendance for employees, production time (including machinery) for invoicing customers and efficiency reports, project times also for customer invoicing, salary calculations including all kinds of weird employee-contract specifics, and of course a lot of reports.

The system: a little over 80 tables with an awful lot of foreign keys, originally with referential integrity. Time-stamp input (logging events) ranges from a few hundred a day to several thousand a day (not that much ;-). Access is rather heavy when generating reports, though, since there is a lot of cross-referencing of tables. In house this is running on PostgreSQL 7.1.2/3 on Linux (Slackware 8.0), AMD K7 500MHz, 512MB RAM. The database is only around 50MB, with one table having ~20MB. The data collection (time events like job start, job stop, break start, break stop) is done on a small 'terminal' specially designed for the purpose. These terminals are connected on a two-wire network to a special controller, which communicates with a computer over RS232. The interface program (called the OnLine program) is written in C++ and can run on both Windows and Linux. In the in-house system the OnLine program runs directly on the database server and connects to the database using ODBC, even on Linux.

A little history: our project started in the early days of M$ Access (Access 2.0), when everyone thought this was the way to go :-(, at least in my surroundings, my company and our customers. The first project didn't go too well; the system was certainly too complex for Access 2.0 and Windows 3.11. Only with the transition to Access 97 did the system start to be usable. However, it was still not performing very well and could only be used by small companies. At this time we started using Informix as the backend, running on Linux. This was certainly early days for Informix on Linux. It worked, but it was difficult to administer and hard for 'novices' like us to get working well. The main problem was the ODBC driver on Windows, and we tried 3 different brands (including Informix's own), in several different versions. All of them needed a lot of modification in the Access frontend. Access is certainly not SQL 'clean', and it is very hard to figure out what the Jet engine is doing. However, we got it working, but performance was poor: some reports could take a couple of days (yes, more than 24 hours!!!), and when does a Windows machine run for that long?
;-)

I had been playing with PostgreSQL on my own for some years, and finally last spring we decided to make the move and transfer all data to PostgreSQL 7.1.2. As you all know, installing and getting Postgres running is VERY easy, and everything, including transferring data (I needed to write a few scripts to do it and do a lot of testing), took only a few days. Having everything in PG now, the interesting part was to test performance. First, of course, the Postgres ODBC driver was easy to set up and worked at the first shot. And now the performance: reports formerly taking those days were now done in a few hours, and with a bit of tweaking we got it down to about half an hour, and we really didn't optimize it (no stored procedures or such). Some simpler reports (with almost the same results as the heavy ones) that I did for our intranet show up in a split second. The system has now been running in-house for almost a year: no break-down, no downtime on the database. No NT restart every now and then. (We have another in-house application running on WinNT/M$ SQL Server that needs to be restarted every second week, even with 1.5GB RAM.)

Additional: have a look at OpenACS (http://www.openacs.org). This is the ACS system moved to PostgreSQL!! A very interesting project. There are also references to sites/people using PG.

Greetings, Leif

Andrew Gould wrote:

My office performs financial and clinical data analysis to find opportunities to improve operations and the quality of patient care. We used PostgreSQL 7.1.3 on FreeBSD to create a relational data model version of most of our Decision Support System and integrated data from additional data sources. We also have data for all inpatients discharged from nonrural hospitals in Texas during 1999 and 2000. We use the state data to derive benchmarks and apply the benchmarks to internal data. The database for internal data is currently 3GB. The database for the state data is 14GB.

I am currently preparing to move the data from several MS Access database applications to PostgreSQL databases. The users will never know anything changed. Since the hospital is mostly a Windows shop, we use MS Access 97 and 2000 as front-ends via ODBC drivers. I have set up phpPgAdmin (Apache web server with PHP4) so that I can answer simple questions from any executive's office in the system.

I have a Python script that obtains a current list of PostgreSQL databases. It renames existing .gz dump files to .gz.old. It then vacuums all databases and uses pg_dump and gzip to back them up into individual .gz files. The script is run by cron to ensure that even new databases are backed up automatically on a weekly basis.

Andrew Gould
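[A rough sketch of the kind of weekly backup job Andrew describes - not his actual script. It assumes the standard psql, vacuumdb, pg_dump and gzip command-line tools are on the PATH and that the job runs as a user who can connect to the server without a password; the backup directory and naming are illustrative only.]

    #!/usr/bin/env python
    # Hypothetical sketch of the backup job described above: list the databases,
    # keep the previous dump as .gz.old, vacuum each database, and dump it
    # through gzip into its own .gz file. Paths and flags are assumptions.
    import os
    import subprocess

    BACKUP_DIR = "/var/backups/pgsql"   # assumed location

    def list_databases():
        # Ask the server for all non-template databases.
        out = subprocess.run(
            ["psql", "-At", "-d", "postgres",
             "-c", "SELECT datname FROM pg_database WHERE NOT datistemplate;"],
            capture_output=True, text=True, check=True).stdout
        return [name for name in out.splitlines() if name]

    def backup(db):
        dump = os.path.join(BACKUP_DIR, db + ".gz")
        if os.path.exists(dump):
            os.rename(dump, dump + ".old")          # keep the previous dump
        subprocess.run(["vacuumdb", db], check=True)
        with open(dump, "wb") as out_file:
            pg_dump = subprocess.Popen(["pg_dump", db], stdout=subprocess.PIPE)
            subprocess.run(["gzip", "-c"], stdin=pg_dump.stdout,
                           stdout=out_file, check=True)
            pg_dump.stdout.close()
            pg_dump.wait()

    if __name__ == "__main__":
        for db in list_databases():
            backup(db)

[Run weekly from cron, e.g. with a crontab entry along the lines of "0 3 * * 0 /usr/local/bin/pg_backup.py", this matches the schedule Andrew mentions.]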
Nick Frankhauser wrote:

We're not in production yet, but our application needs to scale up by about 70MB per year for each customer we add. All of our customers have about 10 years' worth of history to start with, so I figured them to be roughly 1GB each initially. Since our "short list" is for about 35 customers, I mocked up a test database with 35GB of test data and had some family members pound the web site with queries for a few hours. The response time was very reasonable. Our demo site has a much smaller database behind it, but the data was generated by the same random routines that created the large database, so it shows roughly what sort of application we're running (http://www.doxpop.com). Performance seems to be about on par with SQL Server and Oracle, and I've never crashed the database unless I'm abusing root privilege while stupid.

Performance and reliability are just not a problem, and you can find them in many products. I think the more important issue is support, and that's where the open-source community leaves the commercial sector in the dust. Here is my support experience:

When I used MS SQL Server and Oracle in my last job, if I logged a support call, I'd be lucky to get a response within a day. Of course, there is no support outside of normal office hours unless you pony up big money. If I had an interesting problem, it could take days to get escalated up to the people who understood and enjoyed challenging problems. And of course, even "standard" support was pretty pricey.

When I was just starting out with PostgreSQL, I *really* screwed up my database with some dumb last-minute changes at 11:30 PM the night before a sales demo. I compounded the problem by moving my WAL files and generally doing many of the things you shouldn't do. I posted frantic requests for help and received the help I needed at about 2AM. By 3AM I had received clarification after a second round of questions, and by 5AM I was ready for the demo. Around 6AM, two of the developer/guru people had lent their expertise as well. Not only did I get good support in the middle of the night, I also got the personal attention of two developers during the time that most support folks are still stumbling around in search of caffeine. I don't think you can buy that kind of support anywhere.

PostgreSQL is a part of our competitive advantage. Of course we try to give back to the community by spending a little time each day being a part of that unusual 365 x 24 support staff on lists like this, but the time spent is minor compared to the savings, and our participation makes us better administrators.
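[For anyone who wants to repeat the kind of mock-up Nick describes, here is a rough, hypothetical sketch of generating random test rows as a CSV file that PostgreSQL can bulk-load with COPY. It is not Nick's generator; the table layout, column names and row count are invented purely for illustration.]

    #!/usr/bin/env python
    # Hypothetical test-data generator for load testing. It writes a CSV that
    # could then be loaded with something like:
    #   psql testdb -c "\copy case_history FROM 'cases.csv' WITH (FORMAT csv)"
    import csv
    import random
    import string
    from datetime import date, timedelta

    ROWS = 1_000_000   # scale up to approximate a multi-gigabyte table

    def random_name(length=12):
        return "".join(random.choice(string.ascii_lowercase) for _ in range(length))

    with open("cases.csv", "w", newline="") as f:
        writer = csv.writer(f)
        start = date(1992, 1, 1)
        for case_id in range(ROWS):
            filed = start + timedelta(days=random.randint(0, 3650))  # ~10 years of history
            writer.writerow([case_id,
                             random_name(),                      # invented party name
                             random.choice("ABCDEF"),            # invented case-type code
                             filed.isoformat(),
                             round(random.uniform(0, 50000), 2)])  # invented amount

[Generating the data outside the database and loading it with COPY is usually much faster than row-by-row INSERTs, which matters when the target is tens of gigabytes.]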
Holger Marzen wrote:

On Fri, 15 Feb 2002, Corey W. Gibbs wrote:

> any other server." "Opensource software isn't going any where." "Can we
> depend on it?" are common questions and statements I have heard.

Can we depend on it? That is the silliest question ever, but hardly anyone seems to know why. The important thing about software "in production" is not the price. There is nothing wrong with paying good money for good software. But software that comes without source code is no good software. Why? Because the manufacturer drops support for every version within a few years, and then you have software running that no one can support.

You could say: "OK, so we spend a lot of money every year again and upgrade to the latest version. We accept even the downtime." Yes, if you are lucky. But the manufacturer will finally merge with a competitor or simply vanish. Bang!

> I am not trying to start a ruckus or a flamewar, but I would like to know
> who's using Postgres out there. What's the application? How big are your
> databases? Are you using Visual Basic or C to connect to it through ODBC
> or are you using a Web interface?

We use PostgreSQL as a database for web servers: raw data to generate network statistics from (about 160,000 rows, growing) and user databases for access privileges. I am very happy that I found mod_auth_pgsql, so PostgreSQL tables can be used with .htaccess. Great!

Many people use MySQL for these purposes (and it's OK for simple applications). But why use a lightweight database when I can enjoy transactions, triggers and so on with the full-function PostgreSQL?

Neal Lindsay wrote:

[snip]
> I am not trying to start a ruckus or a flamewar, but I would like to
> know who's using Postgres out there.
[snap]

<ruckus>
We use it at the small consulting company I work for to track time billed to jobs. The current front end is in Access 97, with the backend in PG 7.1.3 (7 tables). I developed it partway in 100% Access and transferred my tables to a PG backend before I deployed it. Tastes great, less filling. Never had a stability problem. I am currently working on a more feature-full version with PG 7.2 on the back and PHP web forms on the front (25+ tables). Access (+ VBA) is like a lot of Microsoft products: they make easy things easy and slightly hard things darn near impossible. I like a lot of abstraction on top of my DB, so Access wasn't cutting it. If the way you store it is very similar to the way you see it, though (and you don't mind the licensing), Access is pretty nice. Not for the backend, though. You (and probably everybody else here) already know, but it bears repeating: Access is not a good multi-user database backend.
</ruckus>

Neal Lindsay

Raymond O'Donnell wrote:

[...] Ireland, for whom I've developed a number of web applications of varying scale and complexity. The web server is a Windows machine (we've just upgraded from NT4 to 2000) from which COM objects and ASP script talk via ODBC to a Linux machine running PostgreSQL. I'm also currently developing an application for a language school; this is written in Delphi and runs on Windows client machines from which, again, it talks via ODBC to a Linux server running PostgreSQL.

Andrew Sullivan wrote:

We're running the first gTLD since .com, .org, and .net, and we're doing it with Postgres. Not Oracle. Not DB2. Not Sybase. And not MS SQL Server. And you know what? The Oracle developers can't believe how fast it is. Plus, we're saving thousands in license fees. It does everything we want and more, and it does it fast. It's stable, and a breeze to administer.

> Are you using Visual Basic or C to connect to it through ODBC
> or are you using a Web interface?

We're using JDBC.

Andy Samuel wrote:

I use PostgreSQL with Kylix + ZeosDBO for a point-of-sale application for my client. It has been great! But the size of the database is not big. I'm currently developing a Hotel Information System with Delphi + ZeosDBO + PostgreSQL (on Linux). If you search the email archive, you'll find some people using it with HUGE amounts of data.

Shane Dawalt wrote:

I'm a network engineer at Wright State University (free software is good :-). I have used PostgreSQL for production work since version 6.2.3 (1996/7). I use it for two primary operations. I use Perl most of the time, with the DBI and DBD::Pg modules; they work well. I have also used PHP within my Apache web server to access the database, which also works very well.

1) We have a large modem bank of around 253 modems. All modems log RADIUS authentication messages as well as activity logs. Each night I process the RADIUS logs from the modem bank servers for the previous day. A Perl script summarizes the info and stuffs it into the database. Modem sessions are stored for 1 year. I currently have about 1.9 million records in the database, taking about 500 megabytes. (I'm re-coding the thing to reduce this space ... I was stupid when I first wrote it.) I have other Perl apps that do a second-by-second accounting of all modems. They generate graphs or text output depending on which manager reads it.

2) We have a large number of network ports on our campus, with over 5,000 active network ports for faculty/staff alone. We needed a way to enforce our network policies, which disallow users from setting up their own repeater/switched/wireless devices (security issues). I have written a database and several Perl apps that use SNMP to interrogate all of our Cisco switching devices for Ethernet addresses, which are updated in a large database.
Queries are then run against the database to find people who are potentially violating our policies, and reports are generated. The software has the ability to shut down the associated network ports automatically, though this feature has not been enabled just yet. (I'm still in bug-squash mode.) These apps and the database are hosted on a 433 MHz Digital personal workstation with a single processor. It works well, but would work better if not for its measly 128 Mbytes of RAM. They are rather database-communication intensive. If I wrote some additional PL/pgSQL functions within the database server itself, a lot of the communication would vanish, since most of the work would be performed on the server side rather than at the client side. That is for a rainy day, though.

Bill Gribble wrote:

My company is using Postgres in several related applications in retail point of sale and inventory management.

Our point of sale system, OpenCheckout, uses Postgres as its backend. The size of the databases varies according to the retail install, but for a recent trade show demo we loaded up a craft and hobby industry database of UPC codes and item information that contained about 800,000 items. With a database of that size, random lookups on an indexed field (the UPC code) were reasonably quick. We haven't extensively tested with large numbers of users, but our early results are positive.

We are also using Postgres as the server for a fixed asset tracking system we are working on. Inventory management and computer service people with wireless handhelds (Compaq iPAQs running Linux) connect to a Postgres server to get network configuration, service history, and hardware information for computers, switches, and even network jack plates, keyed on a barcoded property tag. The user just scans the tag with the integrated barcode scanner and can view or edit lots of different kinds of information.

And we use the same handheld system to interface to our point of sale inventory database, for receiving people in the warehouse to scan incoming items into the database, or for reordering people wandering the aisles of the store. Postgres lets us tie all this together pretty easily.

Sad to say :) we use SQLite when we have to go off the network and operate disconnected with the handheld units. The iPAQ just doesn't have enough horsepower and storage space (32M of non-volatile storage, 64M RAM) to run Postgres locally plus all our software. We keep an audit trail table and replay it when we can get wireless access to the Postgres server again.

We access the database in a variety of ways. Most of our tools are written in Scheme and use a Scheme wrapper for the libpq libraries. For the accounting components we use a middleware layer based on the 'gnucash' accounting engine, which provides a uniform financial transaction API. The actual POS front end is written in Java (so it can use the JavaPOS point-of-sale hardware driver standard) and gets many of its configuration parameters from the same database using JDBC.
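[The offline audit-trail idea Bill mentions might look roughly like the sketch below: while disconnected, the handheld appends changes to a local SQLite table; when the wireless link is back, each pending entry is replayed against the central PostgreSQL server and marked as sent. This is a guess at the pattern, not his code; the psycopg2 driver, the file path, the DSN and the table layout are all assumptions.]

    #!/usr/bin/env python
    # Hypothetical sketch of a "record locally, replay later" audit trail.
    import sqlite3
    import psycopg2   # assumption: any PostgreSQL driver would do

    local = sqlite3.connect("/var/lib/handheld/audit.db")   # assumed local store
    local.execute("""CREATE TABLE IF NOT EXISTS audit_trail (
                         id   INTEGER PRIMARY KEY AUTOINCREMENT,
                         stmt TEXT NOT NULL,
                         sent INTEGER NOT NULL DEFAULT 0)""")

    def record(stmt):
        # Called while disconnected: remember the change for later.
        local.execute("INSERT INTO audit_trail (stmt) VALUES (?)", (stmt,))
        local.commit()

    def replay(dsn="host=server dbname=inventory"):
        # Called when the network is reachable again: push pending changes.
        pg = psycopg2.connect(dsn)
        cur = pg.cursor()
        pending = local.execute(
            "SELECT id, stmt FROM audit_trail WHERE sent = 0 ORDER BY id").fetchall()
        for row_id, stmt in pending:
            cur.execute(stmt)
            local.execute("UPDATE audit_trail SET sent = 1 WHERE id = ?", (row_id,))
        pg.commit()
        local.commit()
        pg.close()

[A real implementation would more likely record structured events (table, key, new values) rather than raw SQL text, and would have to handle conflicts, but the replay loop is the same idea.]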
Fernando San Martin Woerner wrote:

Corey,

I was in your shoes 3 years ago. Right now I'm using Postgres in place of MS Access, from VB, with no problems; in fact, there are a lot of things that work better than with Access. I work in a medium-size construction company in Chile; we have a business of US$ 20 million per year. Our database is used over the Internet via 56k dial-up connections, all client software is programmed in VB using ODBC drivers, and everything is OK. If you need a GUI for Windows, you have pgExplorer or pgAdmin, and they are very good.

So MS SQL Server is not a good option: first, the price is expensive; second, you need some MS OS to run it, and there you lose your reliability, performance and security. Postgres is easy to program, and there's a lot of documentation and information, plus you can get help from the pgsql mailing lists, which is better than some technical support.

Try it. I did, and it was a very good experience.

Regards

---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

http://archives.postgresql.org