We are having many problems with Great Plains: there is no data integrity in the program, there are many duplicate transactions, and it is very slow. They say it can be used by 1,000 users with 2 million transactions, but that is not accurate; with just 5% of that load the system hangs. I think it is very bad for Microsoft to have this program. ---------------- This post is a suggestion for Microsoft, and Microsoft responds to the suggestions with the most votes. To vote for this suggestion, click the "I Agree" button in the message pane. If you do not see the button, follow this link to open the suggestion in the Microsoft Web-based Newsreader and then click "I Agree" in the message pane. http://www.microsoft.com/Businesssolutions/Community/NewsGroups/dgbrowser/en-us/default.mspx?mid=0f768e6a-73bb-4c65-8d2a-f536f7d55e8a&dg=microsoft.public.greatplains
Naser, Obviously you're having problems, but frankly, it's not the application. Too many folks, myself included, make it through Sarbanes-Oxley requirements, IRS audits, and all kinds of other audits. If data integrity were as bad as you're experiencing, none of us would survive. At a minimum, it sounds like you have severe hardware and possibly network problems. Speed is a function of hardware. Data integrity and duplication may very well be network related. Look really hard at those things first. Then look at your implementation and how people are using the system. I realize you're probably feeling a lot of pain, but your basic premise that GP has no data integrity is incorrect. Mark
None of our customers are having these issues. Perhaps you need to work with your reseller to identify the causes. You will need to look at your hardware and networking regarding the speed issues. Memory is a big bottleneck, so make sure you have lots of it. The system hanging does sound like an issue of insufficient memory on the server or workstations. Where are the duplicate entries occurring? Did your reseller assist with the installation and implementation?
Just to add to what Mark and Jamrock have said: if GP had serious integrity and speed problems, there wouldn't be 25,000-30,000 installs worldwide over the last 20 years. It sounds like you need to have your Value Added Reseller (VAR) analyze your hardware environment and system setup. If they can't do it, find one that can. Good luck, Frank Hamelly MCP-GP, MCT East Coast Dynamics www.eastcoast-dynamics.com
Dear Naser, The implementation of any ERP system in the world makes the difference! Most Great Plains customers are satisfied with what they have. The thing that makes the difference is experience in this field. Please specify what issues you are having in Great Plains and I am sure I can fix them all and satisfy your requirements. Regards, Mohammad Daoud | Technical Development Manager | Mobile +962 79 999 65 85 Tel. +962 6 554 3721 | daoudm@greatpbs.com http://www.greatpbs.com | http://www.facebook.com/group.php?gid=18895609248/
I would have to agree with Mark's, Jamrock's, and Frank's comments regarding Great Plains. I recommend you review the system requirements to ensure your environment is capable of running Great Plains. Great Plains 9.0 System Requirements https://mbs.microsoft.com/customersource/support/documentation/systemrequirements/compatibility_gp9.0_hardware.htm?printpage=false Great Plains 10.0 System Requirements https://mbs.microsoft.com/customersource/support/documentation/systemrequirements/system_requirements_gp10.htm?printpage=false Hope this helps, rc
I can pile on. We run GP with users on 3 continents, 24x7 uptime, 150GB company database, with 100+ users. We are on version 9, planning to upgrade to 10 this summer, and don't foresee changing ERP systems even as we approach $1B in revenue. We are a consumer products manufacturer and distributor with direct sales and warehousing on 3 continents. Don't be hate'n on GP. Each GP installation is a living thing. The resources you engage to implement, enhance, and administer the application over the life of the software make or break the value you derive from it. The same goes for any ERP application out there.
Hey, hey, let's be honest here... ...any system that comes standard with Check Links and Reconcile routines is, by definition, expecting data integrity issues to occur. :-) Admittedly, GP works 99.99% of the time, but when it doesn't (for whatever reason), you can end up with corrupt data files - sometimes they can be easily fixed and sometimes they can't. You can argue all day long about why the errors occurred (program bugs, telecommunications issues, user error, whatever), but the bottom line is that corruption CAN occur - maybe not often, but it does. My biggest beef with the system is that it doesn't seem to use commits on the transactions. In this day and age, with SQL databases on a mission-critical system, in my opinion there's no excuse for not using commits on logical units of work to ensure data integrity. If I'm wrong about this, someone please tell me. Naser may have exaggerated the extent of his problems and would benefit from a review of his environment, but let's not mislead people and imply that it's totally their fault if they're having issues with data corruption. The above was not meant to flame anyone - just to provide a bit of sympathy for someone who is obviously dealing with some frustrating issues. -- Bud Cool, Accounting System Manager HDA, Inc. Hazelwood, MO GP 9.0, SP2
Noted. I think everyone will agree GP is not perfect. Not even close. I have my own beefs with it too. You also can't assume it doesn't use commits on transactions. I would argue with you on that. GP typically does a good job of rolling back transactions when there are connectivity problems. If it didn't use commits, or something comparable, then that wouldn't be possible. But, that is a far cry from " Great Plains has no data integrity , PLS do not sell it".
Yeah, I agree Naser was overreacting...but I know how frustrating it can be when your users ask why a transaction got messed up and you don't have a good explanation for it. As far as the commit processing goes, I'm really skeptical that it's prevalent throughout the system. I know G/L posting tends to recover pretty well, but other things don't. I could list several examples of things that have happened in our system to support my hypothesis, but here's one that happened just today: a user posting a sales batch this morning got a message that the glpBatchCleanup stored procedure had a problem (why - who knows?). Upon further inspection, I discovered that the data had, in fact, been posted to the SOP30200 and SOP30300 tables, but part (not all) of the original batch data still existed in the SY00500, SOP10100, and SOP10200 tables. So the system experienced an error of some sort and left data floating around that should have been deleted. If that's not crying out for some sort of commit validation, I don't know what is. The only way to fix this was to use SQL to delete the orphaned records from the tables. My point is: with the proper use of commit events, this should never have happened, no matter what caused it in the first place. -- Bud Cool, Accounting System Manager HDA, Inc. Hazelwood, MO GP 9.0, SP2
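[Editor's note] The failure mode Bud describes - rows posted to history while stale copies linger in the work tables - is exactly what a single atomic transaction around "copy to history + delete from work" prevents. Here is a minimal sketch of that point, using Python's sqlite3 as a stand-in for SQL Server; the table names sop_work/sop_history are simplified placeholders inspired by SOP10100/SOP30200, not GP's real schema, and glpBatchCleanup's real behavior is not reproduced.

```python
# Sketch: if moving a batch to history and deleting the work rows share
# ONE transaction, a mid-posting failure cannot leave rows in both places.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sop_work (sopnumbe TEXT PRIMARY KEY, amount REAL)")
conn.execute("CREATE TABLE sop_history (sopnumbe TEXT PRIMARY KEY, amount REAL)")
conn.execute("INSERT INTO sop_work VALUES ('ORD001', 150.0)")
conn.commit()

def post_batch(conn, fail_after_insert=False):
    try:
        with conn:  # one transaction: both statements commit, or neither does
            conn.execute("INSERT INTO sop_history SELECT * FROM sop_work")
            if fail_after_insert:
                raise RuntimeError("simulated cleanup failure")
            conn.execute("DELETE FROM sop_work")
    except RuntimeError:
        pass  # rollback: work table untouched, history still empty

post_batch(conn, fail_after_insert=True)
work = conn.execute("SELECT COUNT(*) FROM sop_work").fetchone()[0]
hist = conn.execute("SELECT COUNT(*) FROM sop_history").fetchone()[0]
print(work, hist)  # 1 0 -- no orphaned half-posted state
```

After the simulated failure, nothing is half-posted; rerunning post_batch without the failure completes the move cleanly.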
Hi Bud, How often do you do database maintenance? Do you run Check Links on a regular basis as preventive maintenance?
Bud, No flame detected when I read your post. Are there potential problems with GP? Yep. It wasn't handed down by God or space aliens. But I've worked in everything from a QuickBooks environment to an Oracle Financials environment, and everything has issues. In a QB environment everything is so hidden that if the program screws up, you may be completely hosed. On the Oracle side, I've never seen the old Great Plains or Microsoft actively try to hire good administrators or developers away from their clients. I've seen it twice with Oracle, and several others I know have had the same experience. I've also seen what a GP upgrade looks like compared to an SAP upgrade. (Thank God I was on the GP side.) The point is, I don't believe that GP works even 99.99% of the time, but I'm OK with that. I can't afford triple-redundant systems voting on whether my AP entry is being applied correctly (a la the space shuttle). It is not, however, the complete dog that Naser implies it is. If he's anywhere close in his description of his issues, it's an environment problem. He got the reaction he did because of the content of the post. I've felt his pain, but a little niceness goes a lot farther in the newsgroup. An awful lot of folks here owe at least part of their paycheck to GP, and taking a whack at that won't win you any friends. Mark
While I can't speak for the 3rd party apps like Field Service, PA, etc., I know the whole core GP application does use transactional posting with transaction start/commit/rollback when posting all of the transaction types. However, it doesn't do a beginning-to-end transaction; instead it does it bit by bit to avoid locking down the entire system for everyone when posting your 10,000-line SOP transaction. So for SOP posting, for instance, we'll maybe start a transaction, update the customer records and data, then commit/rollback. Then, assuming we're OK, do something else, commit/rollback, and again and again. That's why batch recovery picks up where it left off. Your post has me a little puzzled, because I was thinking that the GL posting only happens after the SOP trx stuff is cleaned up already. So I'm a bit surprised you'd have this issue. I won't proclaim a photographic memory, and I'm not digging through source code to prove I'm right or wrong, so it sure is possible that isn't the case, especially based on the event you describe. patrick developer support -- This posting is provided "AS IS" with no warranties, and confers no rights.
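[Editor's note] The "bit by bit" posting with resumable batch recovery that Patrick describes can be sketched as checkpointed steps, each committed in its own short transaction so a rerun resumes at the failed step rather than replaying everything. This is an illustrative pattern only - the step names and batch_checkpoint table are invented for the sketch, not GP internals - again using sqlite3 as a stand-in.

```python
# Sketch: per-step commits plus a checkpoint row, so recovery picks up
# where posting left off instead of restarting from scratch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE batch_checkpoint (batch TEXT PRIMARY KEY, last_step INTEGER)")
conn.commit()

STEPS = ["update_customer", "post_distributions", "move_to_history", "cleanup_work"]

def run_step(conn, batch, step_no):
    # placeholder for the real table updates each posting step would do
    print(f"{batch}: running {STEPS[step_no]}")

def post_batch(conn, batch):
    row = conn.execute(
        "SELECT last_step FROM batch_checkpoint WHERE batch = ?", (batch,)
    ).fetchone()
    start = (row[0] + 1) if row else 0  # resume after the last committed step
    for step_no in range(start, len(STEPS)):
        with conn:  # each step is its own short transaction
            run_step(conn, batch, step_no)
            conn.execute(
                "INSERT OR REPLACE INTO batch_checkpoint VALUES (?, ?)",
                (batch, step_no),
            )

post_batch(conn, "SALES001")
# A second call finds last_step = 3 and runs nothing: recovery is a no-op
# once the batch has finished, and a crash mid-way resumes at the next step.
```

The trade-off Patrick names is visible here: each step's locks are held only briefly, at the cost of intermediate states being durable, which is why a crash between steps can leave work for recovery (or, in the worst case, for manual cleanup).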
For what it's worth, I have seen what Bud mentioned a few times. Something goes wrong during posting and the transaction ends up in both the posted and unposted tables. Without knowing the details of exactly what order everything is done in during posting, it 'feels' like the removal of the transactions from the work tables is the last step and if that's where the interruption occurs, that final 'clean up' is not done. The posted transactions and the related GL transactions have been absolutely fine in every instance of this that I have seen. The 2 modules I know I have seen this in are SOP and PM. Possibly POP, but I am not certain about that. I've seen this at different clients and different versions of GP, although it seems that it happens less frequently in the newer versions. Typically neither Check Links nor Reconcile fixes this and the duplicate records have to be removed from the work tables manually in SQL. There is always an error in the user interface related to this. In payables, when you go to Payables Transaction Inquiry - Vendor and bring up one of the vendors in the batch, you get a duplicate key error. In SOP, trying to look up one of the transactions this happened to gives an error. The only other thing I can say is that I have only seen this happen when there is either a system or application crash or a network interruption. In other words, it does not happen if GP is running smoothly, and there will always be something to indicate an issue. What I have found is that many users when presented with an error message or a crash, simply click OK, or restart without ever letting anyone know. This is completely understandable - they're busy, they don't want to seem like pests, they don't want to wait for hours with their system unavailable while someone investigates the issue. So users have to be asked and taught to report any and all errors or issues. 
And when they do, they need to be treated not like they are 'bothering' anyone, but like they are providing valuable input. That's the only way you're going to find out what's really going on. Any and every error message in GP means something. When it's running smoothly, there should be very few, if any, errors or crashes. And at the end of the day... I believe GP is a fantastic product. It may not be perfect, but I don't really think there is such a thing as perfect for this kind of application. -- Victoria Yudin Dynamics GP MVP Flexible Solutions, Inc.
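[Editor's note] Before the manual SQL cleanup Victoria describes, it helps to first identify which documents actually exist in both the work and history tables (the "posted but not cleaned up" state) rather than deleting blind. A minimal sketch of that check, with simplified stand-in tables (not GP's real SOP10100/SOP30200 columns), using sqlite3:

```python
# Sketch: list documents present in BOTH work and history tables; only
# those rows are candidates for manual removal from the work table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sop_work (sopnumbe TEXT);
CREATE TABLE sop_history (sopnumbe TEXT);
INSERT INTO sop_work VALUES ('ORD001'), ('ORD002');
INSERT INTO sop_history VALUES ('ORD001'), ('ORD003');
""")

dupes = [r[0] for r in conn.execute(
    "SELECT w.sopnumbe FROM sop_work w "
    "JOIN sop_history h ON h.sopnumbe = w.sopnumbe"
)]
print(dupes)  # ['ORD001'] -- posted but never cleaned out of the work table
```

ORD002 (genuinely unposted) and ORD003 (cleanly posted) are untouched; only the overlap is flagged, which matches the advice in the thread to investigate before deleting anything in SQL.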
Hey Bud, Just some comments. Be glad you weren't here for the Btrieve days. I have seen rollback issues like you had, but I believe the latest versions are much better. I think the reason the commits don't totally work is that GP is a Dexterity product that does a great deal of work in the temp tables. I think SQL has a difficult time with this, as it is not a pure transaction. Also, watch your reporting guys and don't let them INSERT INTO tables and do other fun stuff like that. Jim Hummer Senior Consultant XL Reporter