Wednesday, December 12, 2007

Step-by-Step Installation Guide for SQL Server 2008

Here is probably the first step-by-step installation guide for the new version of SQL Server.

The version used was SQL Server 2008 Developer Edition, November CTP.


The document is available here:
http://homepage.mac.com/r1card0/FileSharing1.html

Tuesday, October 23, 2007

Storage Top 10 Best Practices

Proper configuration of IO subsystems is critical to the optimal performance and operation of SQL Server systems. Below are some of the most common best practices that the SQL Server team recommends with respect to storage configuration for SQL Server.


Understand the IO characteristics of SQL Server and the specific IO requirements / characteristics of your application.
In order to be successful in designing and deploying storage for your SQL Server application, you need to have an understanding of your application’s IO characteristics and a basic understanding of SQL Server IO patterns. Performance Monitor is the best place to capture this information for an existing application. Some of the questions you should ask yourself here are:
•What is the read vs. write ratio of the application?
•What are the typical IO rates (IO per second, MB/s & size of the IOs)? Monitor the perfmon counters:
1.Average read bytes/sec, average write bytes/sec
2.Reads/sec, writes/sec
3.Disk read bytes/sec, disk write bytes/sec
4.Average disk sec/read, average disk sec/write
5.Average disk queue length
•How much IO is sequential in nature, and how much IO is random in nature? Is this primarily an OLTP application or a Relational Data Warehouse application?

To understand the core characteristics of SQL Server IO, refer to SQL Server 2000 I/O Basics.
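On SQL Server 2005 and later, the per-file IO statistics exposed by the `sys.dm_io_virtual_file_stats` DMV are a convenient complement to the Performance Monitor counters above. A minimal sketch (cumulative figures since the instance last started):

```sql
-- Per-file IO statistics for all databases and files (NULL, NULL = all).
-- The read/write counts answer the first question above directly.
SELECT
    DB_NAME(vfs.database_id) AS database_name,
    vfs.file_id,
    vfs.num_of_reads,
    vfs.num_of_writes,
    vfs.num_of_bytes_read,
    vfs.num_of_bytes_written,
    vfs.io_stall_read_ms,    -- cumulative read latency
    vfs.io_stall_write_ms    -- cumulative write latency
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;
```

Because the figures are cumulative, sample twice and diff the results to get rates over an interval.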

More / faster spindles are better for performance
•Ensure that you have an adequate number of spindles to support your IO requirements with an acceptable latency.
•Use filegroups for administration requirements such as backup / restore, partial database availability, etc.
•Use data files to “stripe” the database across your specific IO configuration (physical disks, LUNs, etc.).
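As a sketch of the last point, a database can be “striped” across several LUNs simply by placing multiple equally sized data files in one filegroup (the database name, drive letters, and sizes below are illustrative):

```sql
-- Stripe the PRIMARY filegroup across three LUNs with equal-size files;
-- SQL Server's proportional fill algorithm spreads allocations across them.
CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_Data1, FILENAME = 'E:\SQLData\Sales_Data1.mdf', SIZE = 10GB),
    (NAME = Sales_Data2, FILENAME = 'F:\SQLData\Sales_Data2.ndf', SIZE = 10GB),
    (NAME = Sales_Data3, FILENAME = 'G:\SQLData\Sales_Data3.ndf', SIZE = 10GB)
LOG ON
    (NAME = Sales_Log, FILENAME = 'H:\SQLLog\Sales_Log.ldf', SIZE = 4GB);
```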


Try not to “over” optimize the design of the storage; simpler designs generally offer good performance and more flexibility.
•Unless you understand the application very well, avoid trying to over-optimize the IO by selectively placing objects on separate spindles.
•Make sure to give thought to the growth strategy up front. As your data size grows, how will you manage growth of data files / LUNs / RAID groups? It is much better to design for this up front than to rebalance data files or LUN(s) later in a production deployment.

Validate configurations prior to deployment
•Do basic throughput testing of the IO subsystem prior to deploying SQL Server. Make sure these tests can achieve your IO requirements with an acceptable latency. SQLIO is one such tool that can be used for this. A document describing the basics of testing an IO subsystem is included with the tool. Download the
SQLIO Disk Subsystem Benchmark Tool.
•Understand that the purpose of running the SQLIO tests is not to simulate SQL Server’s exact IO characteristics but rather to test the maximum throughput achievable by the IO subsystem for common SQL Server IO types.
•IOMETER can be used as an alternative to SQLIO.

Always place log files on RAID 1+0 (or RAID 1) disks. This provides:
•better protection from hardware failure, and
•better write performance. Note: In general, RAID 1+0 will provide better throughput for write-intensive applications. The amount of performance gained will vary based on the hardware vendor’s RAID implementation. The most common alternative to RAID 1+0 is RAID 5. Generally, RAID 1+0 provides better write performance than any other RAID level that provides data protection, including RAID 5.

Isolate log from data at the physical disk level
•When this is not possible (e.g., consolidated SQL environments), consider IO characteristics and group workloads with similar IO characteristics (e.g., all logs) on common spindles.
•Combining heterogeneous workloads (workloads with very different IO and latency characteristics) can have negative effects on overall performance (e.g., placing Exchange and SQL data on the same physical spindles).

Consider configuration of TEMPDB database
•Make sure to move TEMPDB to adequate storage and pre-size it after installing SQL Server.
•Performance may benefit if TEMPDB is placed on RAID 1+0 (dependent on TEMPDB usage).
•For the TEMPDB database, create 1 data file per CPU, as described in #8 below.

Lining up the number of data files with CPUs has scalability advantages for allocation-intensive workloads.
•It is recommended to have 0.25 to 1 data files (per filegroup) for each CPU on the host server.
•This is especially true for TEMPDB where the recommendation is 1 data file per CPU.
•Dual core counts as 2 CPUs; logical procs (hyperthreading) do not.
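For example, on a hypothetical 4-CPU server, TEMPDB could be given four equally sized data files (`tempdev` is the default logical name of the first tempdb data file; the path and sizes are illustrative):

```sql
-- One tempdb data file per CPU, all the same size (4-CPU example).
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 2GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 2GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 2GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf', SIZE = 2GB, FILEGROWTH = 512MB);
```

Keeping every file the same size matters because of the proportional fill algorithm mentioned below.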

Don’t overlook some of SQL Server basics
•Data files should be of equal size – SQL Server uses a proportional fill algorithm that favors allocations in files with more free space.
•Pre-size data and log files.
•Do not rely on AUTOGROW; instead, manage the growth of these files manually. You may leave AUTOGROW on for safety reasons, but you should proactively manage the growth of the data files.
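A sketch of pre-sizing with AUTOGROW kept only as a safety net (database and logical file names are hypothetical):

```sql
-- Pre-size data and log files to their expected size, and set a fixed
-- growth increment as a safety net rather than a growth strategy.
ALTER DATABASE Sales MODIFY FILE (NAME = Sales_Data1, SIZE = 50GB);
ALTER DATABASE Sales MODIFY FILE (NAME = Sales_Log, SIZE = 8GB, FILEGROWTH = 1GB);
```

A fixed increment (rather than a percentage) keeps each growth event predictable in duration.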

Don’t overlook storage configuration basics
•Use up-to-date HBA drivers recommended by the storage vendor
•Use storage-vendor-specific drivers from the HBA manufacturer’s website
•Tune HBA driver settings as needed for your IO volumes. In general, driver-specific settings should come from the storage vendor. However, we have found that Queue Depth defaults are usually not deep enough to support SQL Server IO volumes.
•Ensure that the storage array firmware is at the latest recommended level.
•Use multipathing software to achieve balancing across HBAs and LUNs, and ensure it is functioning properly. This simplifies configuration and offers availability advantages.
•Microsoft Multipath I/O (MPIO): vendors build Device Specific Modules (DSMs) on top of the Microsoft driver

Thursday, September 20, 2007

Mapping SQL Server 2000 System Tables to SQL Server 2005 System Views

This topic shows the mapping between the SQL Server 2000 system tables and functions and the SQL Server 2005 system views and functions.

The following link shows how to map the system tables in the master database in SQL Server 2000 to their corresponding system views or functions in SQL Server 2005.

This information is provided by Microsoft here.
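For example, where SQL Server 2000 code read from the `sysobjects` system table, the SQL Server 2005 equivalent is the `sys.objects` catalog view:

```sql
-- SQL Server 2000: list user tables via the system table
SELECT name FROM sysobjects WHERE xtype = 'U';

-- SQL Server 2005: the same list via the catalog view
SELECT name FROM sys.objects WHERE type = 'U';
-- or, more specifically:
SELECT name FROM sys.tables;
```

The 2000-style names still work in 2005 as compatibility views, but new code should use the catalog views.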

SQL Server 2005 System Views Map

The Microsoft SQL Server 2005 System Views Map shows the key system views included in SQL Server 2005, and the relationships between them.

You can download the .PDF poster here.




Tuesday, September 11, 2007

BCP IN / OUT (à la Sybase) vs. the SQL Server 2005 Data Export/Import Wizard

Two example scripts that generate the commands needed to export/import data from a source database to a destination database.
Very useful with SQL Server 2005, since the Export Data wizard is a real DISASTER!!!

To export the data:

USE [DBName]
GO


Select 'BCP "DBName.dbo.' + name + '" OUT "PathExportFiles\BCPOUT\' + name + '.OUT" -w -t"{{[[" -r"}}]]" -S %ServerName% -U%Username% -P%Password% -e "PathExportFiles\Error.txt"'
from sysobjects where xtype = 'u'

Save the results to a .bat file and run it from a command prompt...

To import the data:

USE [DBName]
GO


Select 'BULK INSERT ' + name + ' FROM ''IMPORTFILEPATH\BCPIN\' + name + '.OUT '' WITH ( DATAFILETYPE = ''widechar'', FIELDTERMINATOR = ''{{[['', ROWTERMINATOR = ''}}]]'', CODEPAGE=''850'')'
from sysobjects where xtype = 'u'

Monday, April 02, 2007

Understanding "login failed" (Error 18456) error messages in SQL Server 2005

If the server encounters an error that prevents a login from succeeding, the client will display the following error message.

Msg 18456, Level 14, State 1, Server , Line 1
Login failed for user ''

Note that the message is kept fairly nondescript to prevent information disclosure to unauthenticated clients. In particular, the 'State' will always be shown to be '1' regardless of the nature of the problem. To determine the true reason for the failure, the administrator can look in the server's error log where a corresponding entry will be written. An example of an entry is:

2006-02-27 00:02:00.34 Logon Error: 18456, Severity: 14, State: 8.
2006-02-27 00:02:00.34 Logon Login failed for user ''. [CLIENT: ]

The key to the message is the 'State' which the server will accurately set to reflect the source of the problem. In the example above, State 8 indicates that the authentication failed because the user provided an incorrect password. The common error states and their descriptions are provided in the following table:

ERROR STATE    ERROR DESCRIPTION

2 and 5        Invalid userid
6              Attempt to use a Windows login name with SQL Authentication
7              Login disabled and password mismatch
8              Password mismatch
9              Invalid password
11 and 12      Valid login but server access failure
13             SQL Server service paused
18             Change password required

Other error states indicate an internal error and may require assistance from CSS.
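To find the real state behind a failed login, the error log can be searched directly from T-SQL. `sp_readerrorlog` is an undocumented but widely used procedure (parameters: log number, log type, search string), so treat this as a convenience sketch:

```sql
-- Search the current SQL Server error log (log 0, type 1 = SQL Server)
-- for login failures; the State in each matching entry gives the true cause.
EXEC sp_readerrorlog 0, 1, 'Login failed';
```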

How to identify your SQL Server version and edition

Run the extended stored procedure XP_MSVER on the server to check which version of SQL Server is installed.

EXEC XP_MSVER
GO

For example:
ProductName = SQL Server
ProductVersion = 9.00.2047.00

Check here — http://support.microsoft.com/kb/321185 — to see which version / service pack it corresponds to.
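On SQL Server 2005, SERVERPROPERTY returns the same information in a single query:

```sql
-- Version, service-pack level, and edition of the current instance
SELECT
    SERVERPROPERTY('ProductVersion') AS ProductVersion,  -- e.g. 9.00.2047.00
    SERVERPROPERTY('ProductLevel')   AS ProductLevel,    -- e.g. SP1
    SERVERPROPERTY('Edition')        AS Edition;
```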

Thursday, March 29, 2007

The TPC-C database benchmark -- What does it really mean?


We explain how TPC-C works and what exactly it reports so you can interpret results

Vendors compete for database business largely on the basis of published benchmarks such as TPC-C. Yet users often do not understand very much about what goes into these benchmarks and what they mean. This article describes what the TPC-C is and how it can relate to your work. (2,000 words)

The TPC-C benchmark models the essence of a "typical" online transaction processing (OLTP) environment. One of the keys to understanding TPC-C results is the word "essence." Although the Transaction Processing Council (TPC) has done a good job of defining a benchmark that emulates the fundamental components of transaction processing environments, the nature of a generalized benchmark is such that it cannot feasibly represent very many actual environments.

The TPC-C benchmark simulates a large wholesale outlet's inventory management system. The operation consists of a number of warehouses, each with about ten terminals representing point-of-sale or point-of-inquiry stations. Transactions are defined to handle a new order entry, inquire about order status, and settle payment. These three user interaction transactions are straightforward.

Two other transactions model behind-the-scenes activity at warehouses. These are the stocking level inquiry and the delivery transactions. The stocking level inquiry scans a warehouse's inventory for items which are out of stock or are nearly so. The delivery transaction collects a number of orders and marks them as having been delivered. One instance of either of these transactions represents much more load than an instance of the new order, order status, or payment transactions.

The TPC-C is a descendant of the previous TPC-A. In fact, changing the names of the fields in the new order transaction effectively produces a duplicate of the TPC-A transaction. Despite this clear lineage, the newer TPC-C is far richer in functionality than its predecessors.

In addition to having many more types of transactions, TPC-C also mandates a much higher level of (simulated) user interaction. While the TPC-A application consisted of exactly one call to scanf(3) and one call to printf(3), TPC-C requires an entire application program to accept user input. The application's operation is precisely specified, to prevent subtly different interpretations of the specification to result in large variations in benchmark scores. The specification even mandates the appearance of the user interface screen on the terminals!

Although this is a major improvement over TPC-A, the TPC-C application still falls short of representing the typical database application. The most significant deficiency is that user input is not validated using the methods common to most applications. Most commercial applications are built with some sort of forms package, such as Windows for Data or JYACC. In addition to managing screen formats, these packages normally handle validation of input against the database. Typically, input data is validated on a field-by-field basis, as soon as the user leaves the field (such as using tab or return). In contrast, the TPC-C specification merely mandates that the input is validated; it does not specify how or when. So vendors customarily validate all input in a single batch, right before attempting to run the transaction. This reduces the number of interactions between the application and database and saves a great deal of overhead compared with normal applications. It's certainly possible to write applications this way, but in practice this approach is taken only when performance is critical, because it requires more programming effort and can sometimes be confusing to end users.

If your applications do not do batch input validation, you'll have to aim for considerably higher performance from your system. Although the SQL code to validate input is almost always very simple compared to the transactions themselves, most applications do a tremendous amount of it. As a result, it's often wise to add a third to a half to the target system's capability if you don't have an existing system to measure.

WAN considerations
Input validation is especially relevant when the clients and servers communicate over a wide area network, because SQL data is customarily transmitted over the network in relatively inefficient form. DBMS systems communicate between client and server via TCP/IP, and for a variety of complex reasons, they send each column of each row in a separate packet. A column is something like a first name or salary, although it might be something quite large, such as a compressed photographic image. Most columns are pretty small, averaging less than 200 bytes, so the TCP/IP overhead of 48 bytes per packet becomes significant. The overhead isn't a big deal on LANs, but on the restricted bandwidth of a WAN, this can be an issue.

Even more problematic is the end-to-end round trip time on WANs. On a network such as an Ethernet, round-trip time might be one to five milliseconds, while the same trip on a wide area network could easily take 100 times as long. When the application makes a single call to the DBMS for validation like TPC-C, network round-trip time might not be significant. Many applications do so many round trips that the entire client/server configuration could easily miss its performance goals for this reason alone.

Client/server implementations
TPC-C is virtually always run in client/server mode, meaning that the reported score is for a cluster of systems. Almost universally, vendors separate the many instances of the user application from the core database system. The only thing that runs on the machine that is reported is the database engine itself. For example, consider a result such as "Sun Ultra Enterprise 6000, 23,143 tpm-C using 16 processors, Oracle 7.3.3, Solaris 2.6, and 11 Ultra-1/170 front-end systems." The approximately 20,000 simulated users log into one of the eleven front-end systems, and their SQL requests are sent to the Ultra Enterprise 6000 for processing.

This arrangement can have significant bearing on the interpretation of TPC-C results. If you are trying to size a system that will run application code as well as the database engine, you'll get quite an unpleasant surprise by relying too directly on TPC-C results. Fortunately, this sort of arrangement represents the minority of applications. The dominant database processing architecture is now client/server, in which the front-end application code runs on client systems, such as a PC or workstation, and the database system runs only the database engine itself.

The only fly in this ointment is that there are relatively few discrete client systems in most TPC-C configurations. For example, in the previous example there are only eleven client systems, each handling nearly 1,900 users. This type of client concentration is unlikely to occur in the real world. A system supporting 20,000 users would usually be connecting to more than 10,000 different client systems. The number of client systems is important, because vendors always take advantage of the limited number of client systems and use a transaction processing (TP) monitor or some other form of connection multiplexor. This optimization isn't available if your application has 1,000 client systems, each connecting once to the DBMS server. The result is that there are 1,000 client connections on the server. However, TPC-C configurations universally use a TP monitor or some other software to reduce the number of active connections to just one to ten per client system. As a result, there are many fewer active connections on the server, making it far easier to manage.

Batch processing
Another consideration that TPC-C does not take into account is batch processing. Most real OLTP applications have at least two distinct components: an online portion that creates and processes transactions and a batch portion that reports on period work. Often these batch jobs also reconcile daily activity with master databases or extract data to support related decision support processing. For example, bill processing and invoice reconciliation are tasks that are almost always handled in batch jobs.

The TPC-C is far richer than either TPC-A or TPC-B in this regard because it includes the delivery and stocking level transactions. Both of these transactions manipulate far more than the individual records associated with line items and orders; instead they deal with groups of business transactions. However, both of these operations are quite small compared to typical batch operations. Real applications often include significant batch components. For example, the Oracle Financials application suite contains the concept of a "concurrent manager," essentially a batch processing stream used to handle large and unwieldy processing requests that would not be interactive in nature.

Because batch processing requires no user interaction, it tends to consume processor and I/O resources much more quickly than online users. It's not unusual for individual batch jobs to consume an entire processor and attendant I/O resources. When your application has a significant batch component, TPC-C is unlikely to reflect your environment very well. Unfortunately, there isn't much you can do to extrapolate TPC-C results to reflect this workload, either.

TPC-C reporting rules
One of the curious -- and very misleading -- things about TPC-C scores is that they only report the rate of the new order transaction. The other four transactions are used only as background load to provide a context for the new order transactions. I'm not completely sure why the TPC designed the reporting rules this way, but this often confuses users of the results. The background transactions are defined to be at least 57 percent of the mix, so new orders are at most 43 percent of the work. This means that a score of 1000 (new order) transactions per minute actually represents over 2300 transactions (of all types) per minute. Anyone attempting to size a system "according to TPC-C" should account for the true amount of work being done in the reported runs.

TPC-C scores in context
We've seen that delivered transaction rates are somewhat more than doubled, and that they are most relevant in a client/server environment. But what do these rates really mean? Let's take a look at a large-scale result, but not one of the top scores, the Ultra Enterprise 4000 using Informix 7.3 and ten Ultra-1/170 clients. The reported transaction rate is 15,461 transactions per minute, so this combination delivered about 39,955 transactions (of all five types) each minute. Servicing about 15,000 users, this appears to be a really big system. If we deflate the score by the additional 50 percent work (or so) necessary to handle real-life input validation, the score becomes 10,312 tpm-C. That's a lot of transactions every minute!

Server consolidation
Without consolidating multiple applications onto a single system, most systems have no requirement for anything like this type of throughput. With multiple applications running on a single system, transaction requirements can approach these levels, especially when the applications handle very large populations of users. TPC-C does not reflect these environments at all. It uses a single application with a single database instance, and the database locking strategies are designed accordingly. When many applications are consolidated onto a single system, they ordinarily do not use a single database instance, and multiple applications almost never share databases.

Scalability of multidatabase configurations is different than that of single database systems as used in TPC-C. Scalability of a given system might be better or worse than seen in TPC-C. Scalability might be worse due to a variety of considerations, such as processor cache saturation or resource management issues within either the operating system or DBMS. Scalability might be better in a multidatabase or multi-instance configuration if the applications have suitable locking strategies and particularly when little or no data is shared between applications.

Summing it up
The TPC-C is best used for approximate comparisons between generally similar systems. Because it is a highly optimized application with characteristics such as a single application, batch input validation, client/server configuration with very few client systems and minimal batch processing, TPC-C doesn't predict actual end-user performance as well as one might like. By considering many of these common deviations from real workloads, a user can plan a configuration without unrealistic expectations.

TPC-B used the same transaction as TPC-A, namely the core of the most basic ATM teller transaction. The main difference between TPC-A and TPC-B is that the latter has no think time.

The TPC-C specification mandates that a specific percentage of transactions include invalid data, forcing at least a few transaction rollbacks.

Editions and licensing options available for SQL Server 2005


The SQL Server 2005 product family has been redesigned to better meet the needs of each customer segment and, as a low-cost, general-purpose database, will offer unprecedented value and functionality compared with competing solutions.

The four new editions offer a wide range of features, from high availability and robust scalability to advanced Business Intelligence tools, designed to empower every user in an organization by providing a more secure, reliable, and productive information management platform. In addition, with reduced application downtime, robust scalability and performance, and tighter security control, SQL Server 2005 represents a remarkable step forward in supporting the most demanding enterprise systems worldwide. Because SQL Server is part of Windows Server System(TM), customers gain the additional benefits of lower TCO and faster development through the improved integration and management capabilities that result from the common engineering strategy implemented across all Windows Server System products.

The SQL Server 2005 line will comprise the following products:

• SQL Server 2005 Enterprise Edition — a complete data and analysis platform for large business applications.

• SQL Server 2005 Standard Edition — a complete data and analysis platform for business applications, designed for medium-sized businesses.

• SQL Server 2005 Workgroup Edition — a simple, easy-to-use, affordable database solution for small and medium-sized businesses.

• SQL Server 2005 Express Edition — an easy-to-use, free version of SQL Server 2005 designed for developing simple data-centric applications.

Alongside the SQL Server 2005 product line, Microsoft announced the immediate availability of SQL Server 2000 Workgroup Edition. This edition serves the same purpose as SQL Server 2005 Workgroup Edition but is based on SQL Server 2000 functionality.

Recognizing the benefits of offering Workgroup and Standard editions to small and medium-sized businesses, Dell Inc. announced today that it is the first vendor to bundle the Workgroup and Standard editions of SQL Server 2000 and SQL Server 2005 with its Dell PowerEdge servers. As evidence of the value SQL Server adds on Dell servers, Dell today published impressive TPC-C* price/performance results using SQL Server 2000 Workgroup Edition on Dell PowerEdge servers, at a cost of USD 1.40/tpmC.
"The industry has changed since the introduction of SQL Server 2000. With the new SQL Server 2005 product line, we have increased the value of our data management solutions to offer customers more options," said Paul Flessner, senior vice president of the Server Applications division at Microsoft. "We have expanded the functionality of SQL Server 2005 Standard Edition and SQL Server 2005 Enterprise Edition, and broadened the platform with SQL Server 2005 Workgroup Edition and SQL Server 2005 Express Edition. We are now better equipped to offer solutions that meet our customers' technology and budget requirements. Our goal is to make enterprise-class data management and analysis solutions accessible to a broad range of customers while removing the complexity of database systems, all at a lower TCO."

Flexible licensing for SQL Server 2005

With SQL Server 2005, customers can take advantage of Microsoft's recently announced multicore processor licensing, which allows customers to license SQL Server 2005 on a per-physical-processor basis, as opposed to other offerings that license per core. This new model increases computing performance and functionality and lets customers get more value from their technology investments. Unlike its main competitors, Microsoft also leads the industry in high availability by being the first database vendor to let customers use passive failover servers with SQL Server 2005 without requiring additional licenses, enabling customers to build high-availability environments at low cost. The sections below provide information on the features and pricing of each edition in the United States; prices vary by region.

SQL Server 2005 can be licensed in the following three ways to meet each customer's specific requirements:

•Per-processor license: a separate license for each processor in a server running SQL Server
•Server plus device CALs: a separate license for each server running SQL Server, plus a client access license (CAL) for each client device
•Server plus user CALs: a separate license for each server running SQL Server, plus a client access license (CAL) for each user accessing the server
SQL Server 2005 Enterprise Edition

SQL Server 2005 Enterprise Edition is a fully integrated data management and analysis platform for enterprise applications, with many new features to meet the increasingly complex requirements of large customers. This edition offers data partitioning, high availability with database mirroring, advanced integration and analytical capabilities, ad hoc reporting with Report Builder, database snapshot functionality, and full online and parallel operations, ensuring a reliable, powerful system to support applications for large and growing businesses. Available at an estimated price of USD 24,999 per processor or USD 13,499 per server (25 CALs), SQL Server 2005 Enterprise Edition offers robust scalability, high availability, and advanced Business Intelligence (BI) features, making it the most comprehensive and attractively priced enterprise solution on the market. Microsoft will also continue to offer SQL Server 2005 Developer Edition for developers who need to build and test SQL Server-based applications.
SQL Server 2005 Standard Edition

SQL Server 2005 Standard Edition is a complete data management and analysis platform designed for medium-sized businesses and infrastructures that require high-availability systems. This edition includes enhanced functionality previously available only in SQL Server 2000 Enterprise Edition, such as high availability with database mirroring and clustering, and integrated 64-bit support on x64 and Itanium systems, giving medium-sized businesses greater flexibility to grow before investing in Enterprise Edition. Standard Edition supports up to four processors, unlimited database size, and unlimited system memory. Available at an estimated price of USD 5,999 per processor or USD 2,799 per server (10 CALs), SQL Server 2005 Standard Edition will also include SQL Server Integration Services, SQL Server Analysis Services, and SQL Server Reporting Services, giving customers greater Business Intelligence functionality at no additional cost.
SQL Server 2000 and SQL Server 2005 Workgroup Editions

Workgroup Edition is the newest product available for SQL Server 2000 and SQL Server 2005 and will offer an affordable, easy-to-use database solution designed specifically for small and medium-sized organizations. The product is ideal for customers who want excellent database functionality in an easy-to-manage product, with greater scalability than SQL Server 2005 Express Edition. Available at an estimated price of USD 3,899 per processor or USD 739 per server (five CALs), Workgroup Edition supports up to two processors, unlimited database size, and two gigabytes of memory, making it an extremely economical, complete database solution.

SQL Server 2005 Express Edition

SQL Server 2005 Express Edition replaces the Microsoft Data Engine (MSDE) for SQL Server 2000 and is a free, redistributable version of the SQL Server 2005 database engine. This edition offers beginning developers the fastest way to learn, build, and deploy small data-centric applications, and gives customers and partners the fastest way to get started with SQL Server 2005. It will also be useful for larger businesses that want to dedicate smaller databases to development projects. In addition, partners can embed and redistribute SQL Server 2005 Express Edition with their applications. Available as a free download from the web, SQL Server 2005 Express Edition will include a graphical management tool; a report wizard and report controls; replication; a SQL Service Broker client; native database encryption and key management support; and support for the Common Language Runtime (CLR) and Extensible Markup Language (XML).

Dell Increases Customer Value with SQL Server

Dell announced today that it will be the first company to offer SQL Server 2000 and the SQL Server 2005 Workgroup and Standard Editions together with its Dell PowerEdge servers. This move gives small and medium-sized businesses a simple, unified option for acquiring a Dell PowerEdge server together with SQL Server licensing. Advanced software support for SQL Server 2000 and for the Workgroup and Standard editions of SQL Server 2005, including troubleshooting and 30 days of access to Dell's Getting Started support line, will also be offered by Dell directly to customers.

Dell also released results of a new TPC-C price/performance benchmark using SQL Server 2000 Workgroup Edition on Dell PowerEdge servers. The new record of USD 1.40/tpmC is a 10-cent improvement over the previous first-place result held by Microsoft and Dell, USD 1.50/tpmC using SQL Server 2000 Standard Edition.
"Dell and Microsoft maintain a long-standing alliance in developing integrated solutions that deliver value to customers and help businesses grow," said Linda York, vice president of global alliances in Dell's product group. "The combination of Dell PowerEdge servers with Intel 64-bit Xeon processors and Microsoft SQL Server 2005 is a result of our relationship that gives customers a powerful, reliable server platform and meets the demands of today's and tomorrow's database applications."

About SQL Server

Microsoft SQL Server, part of the Windows Server System family, is a complete database and analysis solution for rapidly delivering the next generation of scalable e-commerce, line-of-business and business intelligence solutions. It significantly reduces the time required to deliver those solutions while offering the scalability needed for the most demanding environments. For more information about Microsoft SQL Server, visit http://www.microsoft.com/sql .

About Windows Server System

Microsoft Windows Server System is integrated server software that provides the infrastructure for IT operations, application development and IT integration work. Built on the Windows Server™ operating system and developed according to the Common Engineering Criteria, Windows Server System is designed to make it easier for IT professionals to connect and manage their IT environments. Because they are integrated for greater security and manageability, Windows Server System products help businesses reduce complexity and lower costs. All Windows Server System products support open industry standards, including XML-based standards, to promote interoperability with other platforms. For more information about Windows Server System, visit http://www.microsoft.com/windowsserversystem .

Tuesday, January 09, 2007

A process running on the Physical Address Extension (PAE) kernel may experience memory corruption in Windows Server 2003

Symptoms:
In Microsoft Windows Server 2003, any user-mode process, kernel-mode component or driver that runs on the Physical Address Extension (PAE) kernel may experience memory corruption. As a result, the computer may unpredictably stop responding.

Resolution:
To resolve this problem, obtain the latest service pack for Microsoft Windows Server 2003. For additional information, click the number below to read the article in the Microsoft Knowledge Base:
889100 (http://support.microsoft.com/kb/889100/)
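Since the fix ships in a service pack, a quick first check is whether the machine reports one at all. A minimal Python sketch of that check, assuming the release and service-pack strings that `platform.win32_ver()` returns on Windows (the exact values used below are illustrative, not taken from the KB article):

```python
import platform

def may_need_pae_fix(release: str, service_pack: str) -> bool:
    """Heuristic: the PAE memory-corruption issue (KB 895575) is resolved
    by the latest Windows Server 2003 service pack, so a Server 2003 box
    reporting no service pack at all is a candidate for the update."""
    return release == "2003Server" and service_pack == ""

if __name__ == "__main__":
    # On a live machine one would feed in real values, e.g.:
    #   release, _version, sp, _ptype = platform.win32_ver()
    print(may_need_pae_fix("2003Server", ""))     # no service pack reported
    print(may_need_pae_fix("2003Server", "SP1"))  # service pack applied
```

This only flags candidates; whether the machine actually runs the PAE kernel (for example via the /PAE boot option) still has to be verified separately.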

Article ID: 895575

Last review: Wednesday, July 26, 2006

Revision: 2.2