Friday, June 30, 2017
Quick Introduction to SQL Server Profiler
When working with SQL Server, you might run across a situation where it is just not running fast enough. While there could be many reasons for this, there are tools that can help you track down just what is going on behind the scenes.
SQL Server Management Studio’s SQL Server Profiler — or just Profiler — is a tool that can be used to monitor queries run on your database.
SSL Connections in MySQL 5.7
Recently, I was working on an SSL implementation with MySQL 5.7, and I made some interesting discoveries. I realized I could connect to the MySQL server without specifying the SSL keys on the client side, and the connection was still secured by SSL. I was confused; I did not understand what was happening.
In this blog post, I am going to show you why SSL works in MySQL 5.7, and how it worked previously in MySQL 5.6.
ORMs Should Update Changed Values, Not Just Modified Ones
In this article, I will establish how the SQL language and its implementations distinguish between changed values and modified values, where a changed value is a value that has been “touched” but not necessarily modified, i.e. the value might be the same before and after the change.
Many ORMs, unfortunately, update either all of a record’s values or only the modified ones. The former can be inefficient and the latter can be wrong. Updating the changed values would be correct.
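As a sketch of the distinction (my own illustration, not from the article; the `Record` class and its column names are hypothetical), dirty tracking can record both sets and generate an UPDATE over the changed columns:

```python
# A minimal sketch of tracking "changed" (touched) vs "modified"
# (touched AND actually different) attributes.

class Record:
    def __init__(self, **values):
        self._values = dict(values)
        self._changed = set()    # touched, even if the value is the same
        self._modified = set()   # touched and actually different

    def set(self, column, value):
        self._changed.add(column)
        if self._values[column] != value:
            self._modified.add(column)
        self._values[column] = value

    def update_sql(self, table, key):
        # Build an UPDATE over the *changed* columns, as the article argues.
        cols = sorted(self._changed - {key})
        assignments = ", ".join(f"{c} = ?" for c in cols)
        params = [self._values[c] for c in cols] + [self._values[key]]
        return f"UPDATE {table} SET {assignments} WHERE {key} = ?", params

r = Record(id=1, name="Alice", city="Berlin")
r.set("name", "Alice")   # changed but not modified
r.set("city", "Paris")   # changed and modified
sql, params = r.update_sql("users", "id")
print(sql)     # UPDATE users SET city = ?, name = ? WHERE id = ?
print(params)  # ['Paris', 'Alice', 1]
```

Updating only `_modified` would silently skip the `name` assignment, which matters when the application deliberately touched the value (e.g., for triggers or optimistic locking).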
How Do You Know You're Hitting Capacity in MySQL? [Video]
So your app is using MySQL as the backend, and you’ve hit a few performance hiccups. Maybe you’ve even hit straight up roadblocks. And right now you are wondering if you have hit the wall with capacity for MySQL and are asking yourself if it is time to do something really drastic, and really painful, or if there is a smarter way to give your app more runway on its data store.
First of all, we’re sorry you are in a tight spot. We’ve been there, and we feel for you. In fact, our company was founded just to deliver a drop-in replacement for MySQL that truly scales out and harnesses the power of distributed computing. But ClustrixDB is not for everyone, and no one recommends moving your data unless you really have to in order to survive.
Thursday, June 29, 2017
Schema Sharding With MariaDB MaxScale 2.1 (Part 2)
In this second installment of schema sharding with MariaDB MaxScale, we'll combine the SchemaRouter and ReadWriteSplit MaxScale routers and go through the details of implementing them in order to shard databases among many pairs of master/slave servers.
Before we move forward, I would like to highlight that you need to remove any duplicate database schemas (schemas present on both shards). Each schema should exist on just one of the shards, because when the SchemaRouter plugin starts, it reads all the database schemas that exist on all the shards in order to know where each query should be routed. Take care to use mysql_secure_installation and remove the test database schema to avoid the error below when sending queries to MaxScale for resolution against the backend database servers (note: this rule against duplicate databases among shards does not apply to the MySQL system database):
MySQL Replication Options and Their Challenges
Previously we discussed replication for MySQL systems. In this post, we will discuss various MySQL replication options.
For MySQL, replication is the primary go-to strategy for high availability (HA) and, to some extent, scale. By providing additional copies of the primary database, a MySQL system using replication can withstand the loss of the master database by promoting the slave to be the new master and updating the application endpoints. In addition, replication can provide scale by providing additional read-only copies of the database for the application to leverage. This offloads application read requests from the master, allowing the master to focus more on writes.
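The read/write split described above can be sketched as a tiny router (the `ReplicatedEndpoint` class and server names are hypothetical; real proxies parse SQL properly rather than prefix-matching):

```python
import itertools

# Minimal sketch of the read/write split that replication enables:
# writes go to the master, reads are spread round-robin across the
# read-only replicas.

class ReplicatedEndpoint:
    def __init__(self, master, replicas):
        self.master = master
        self._reads = itertools.cycle(replicas)

    def route(self, statement):
        # A real router parses the SQL; a crude prefix check suffices here.
        if statement.lstrip().upper().startswith("SELECT"):
            return next(self._reads)
        return self.master

ep = ReplicatedEndpoint("master:3306", ["replica1:3306", "replica2:3306"])
print(ep.route("SELECT * FROM t"))    # replica1:3306
print(ep.route("INSERT INTO t ..."))  # master:3306
print(ep.route("SELECT 1"))           # replica2:3306
```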
NoSQL Review: ArangoDB 3.2 Beta
ArangoDB is a hybrid, or multi-model, NoSQL Document and Graph store. This is becoming a common combination; it provides a lot of flexibility and power. I’ve been watching ArangoDB for three years since they started, so it’ll be interesting to see what has changed.
Vital Statistics
Latest release: Version 3.2 Beta (June 13, 2017; current production version is 3.1).
Commercial backer: ArangoDB GmbH (spun out of triAGENS GmbH, an IT consultancy in Germany).
Website: Here
Twitter: @arangodb
Licensing: Community core (Apache 2.0) with enterprise-supported version.
Sales model: Subscription model, including support for the community version, not just enterprise.
Release press release: Here
Release full details: As above.
What’s New
The biggest change is the use of Facebook’s RocksDB key-value store as a storage engine (I’ll review RocksDB separately in a future article). This is a huge change that will require load testing in your applications, as consistency and locking work differently than with the previous mmfiles storage engine.
Tracing MongoDB Queries to Code With Cursor Comments
In this short blog post, we will discuss a helpful feature for tracing MongoDB queries: Cursor Comments.
Cursor Comments
Much like other database systems, MongoDB supports the ability for application developers to set comment strings on their database queries using the Cursor Comment feature. This feature is very useful for both DBAs and developers for quickly and efficiently tying a MongoDB query found on the database server to a line of code in the application source.
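Conceptually, the comment simply travels as a field of the query the server receives, which is why it surfaces in the server logs and profiler output. A rough illustration (the `find_command` helper is my own, not a driver API):

```python
# Sketch of how a cursor comment travels with the query: the find
# command document carries a "comment" field, which the server then
# echoes in its logs, db.currentOp(), and profiler output, letting a
# DBA map the query back to the application code that issued it.

def find_command(collection, query_filter, comment):
    # Hypothetical helper building the command document a driver would send.
    return {
        "find": collection,
        "filter": query_filter,
        "comment": comment,
    }

cmd = find_command(
    "orders",
    {"status": "open"},
    "app/billing/invoices.py:42 list_open_orders",
)
print(cmd["comment"])  # app/billing/invoices.py:42 list_open_orders
```

A comment that encodes the source file and line, as above, makes a slow-query log entry immediately traceable.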
Wednesday, June 28, 2017
Using Performance Insights to Analyze Performance of Amazon Aurora with PostgreSQL Compatibility
Microsoft shares tips on how to protect your information and privacy against cybersecurity threats
Schema Sharding with MariaDB MaxScale 2.1
Most of the time, when you start a database design, you don’t imagine how your applications need to scale. Sometimes, you need to shard your databases among some different hosts and then on each shard, you want to split reads and writes between master and slaves. This blog is about MariaDB MaxScale being able to handle different databases across shards, and splitting up reads and writes into each of the shards to achieve the maximum level of horizontal scalability with MariaDB Server.
After reading this blog you will be able to:
I Wrote My Own Database!
It's been one of the moments that I've been unconsciously waiting for ever since I started programming. I mean, writing your own database is not something you do every day. Actually, you should never do that unless you have a very, very good reason to do so. Otherwise, you're probably wasting someone's time and money, and adding a fair bit of risk in case of failures.
Driving Forces
That said, let's explore some of the "very, very good" reasons that would justify writing your own data store instead of using an existing one.
Financial Services and Neo4j: Anti-Money Laundering
Reducing the risk of money laundering presents a similar challenge to that of fraud detection when it comes to today’s financial services landscape.
Firms need to know where funds come from and where they are headed, but criminals use indirection to make it difficult to follow the money from one point to another. In order to tackle the problem head on, financial services enterprises need a technology powered by data connections.
Data Replication Automation
This article discusses automating data replication, where specific records are copied from a source database into a target database, based on Oracle/MySQL DB.
The Use Case
Consider the following use case. One of the services a QA group provides is replicating data environments — for example, copying specific records (which uncover some unseen business-logic defects) from a staging environment into a testing environment to reproduce and debug an issue. Replication automation saves the effort of manually recreating data in the testing environment, which is usually complex, time-consuming, and error-prone. Additionally, manually recreated data can be unfaithful to the situation being examined in the source environment.
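A toy sketch of the idea, using sqlite3 in place of the Oracle/MySQL databases the article targets (the table, column names, and selection predicate are made up):

```python
import sqlite3

# Copy only the records that reproduce a defect from a source
# database into a target database.

source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for db in (source, target):
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
source.executemany("INSERT INTO orders VALUES (?, ?)",
                   [(1, "ok"), (2, "broken"), (3, "broken")])

# Select the interesting records from staging...
rows = source.execute(
    "SELECT id, status FROM orders WHERE status = 'broken'").fetchall()
# ...and replay them into the testing environment.
target.executemany("INSERT INTO orders VALUES (?, ?)", rows)
target.commit()

print(target.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 2
```

A real tool must also copy the parent rows these records reference, which is where most of the complexity lives.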
Tuesday, June 27, 2017
Introducing Amazon Connect Dashboard - Monitoring Contact Center Performance on AWS
Challenges of Sharding MySQL
MySQL databases are sharded for the best of reasons. At some point, the MySQL workload needs to scale, and scaling RDBMSs is hard (Clustrix’s unofficial motto!). Adding read scale to RDBMSs is straightforward via replication. But adding write scale? That’s the real trick, and herein lies some of the challenges of sharding MySQL. After scaling up your MySQL instance to the largest instance available and adding several read slaves, what’s the next step? Multi-master solutions can add additional write scale, but only for separate applications; each application must write to a different master to get that scale. If you have a single MySQL application needing write scale, i.e., ability to fan out writes to multiple MySQL servers simultaneously, MySQL DBAs often start to investigate sharding.
What Is Sharding?
Sharding is a scale-out approach in which database tables are partitioned, and each partition is put on a separate RDBMS server. For MySQL, this means each node is its own separate MySQL RDBMS managing its own separate set of data partitions. This data separation allows the application to distribute queries across multiple servers simultaneously, creating parallelism and thus increasing the scale of that workload. However, this data and server separation also creates challenges, including sharding key choice, schema design, and application rewrites. Additional challenges of sharding MySQL include data maintenance, infrastructure maintenance, and business challenges; these will be delved into in future blogs.
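A minimal sketch of routing by shard key (the server names and hashing scheme are my own illustration, not any specific product's):

```python
import hashlib

# Hash-based sharding on a shard key: each key maps deterministically
# to one of N separate MySQL servers.

SHARDS = ["mysql-shard-0:3306", "mysql-shard-1:3306", "mysql-shard-2:3306"]

def shard_for(user_id):
    # Use a stable hash so every application server routes the same
    # key to the same shard (Python's hash() is salted per process).
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

assert shard_for(42) == shard_for(42)  # deterministic
print(shard_for(42))
```

Note what this scheme gives up: a query that doesn't include the shard key must be fanned out to every shard, and changing the number of shards moves most keys, which is exactly the data-maintenance challenge mentioned above.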
Database Deployment Monitoring
It was great talking to Pete Pickerill, Co-founder and VP of Product Strategy; Robert Reeves, Co-founder and CTO; and Ben Geller, VP of Marketing, at Datical about their new Deployment Monitoring Console (DMC) that automatically monitors the status of every database deployment across the enterprise.
Datical DMC provides visibility into the outcomes of all database deployments, detailing the scope of changes applied and pinpointing the causes of failures — a process that can typically take days or weeks to complete. All levels of the organization, from DBAs to application developers and QA to IT management, can easily audit the database, measure release velocity, and monitor deployments.
Simple Solution for Metrics Targets on MSSQL
I needed a flexible solution to keep measurements with target values in a SQL Server database. After playing with some tables and functions, I came up with a simple, clean, and flexible solution that also fits many other scenarios besides the one I had to support. This blog post summarizes my work and provides all the SQL needed to reproduce it.
Creating and Designing a Metrics Database
We start with creating a database with a minimal set of tables needed to track measurements:
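The post provides the actual T-SQL; as a stand-in, here is a minimal sketch of such a schema using sqlite3, with table and column names of my own choosing:

```python
import sqlite3

# One table of metrics with target values, one of recorded
# measurements, and a query comparing each measurement to its target.

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE metric (
    metric_id    INTEGER PRIMARY KEY,
    name         TEXT NOT NULL,
    target_value REAL NOT NULL
);
CREATE TABLE measurement (
    measurement_id INTEGER PRIMARY KEY,
    metric_id      INTEGER NOT NULL REFERENCES metric(metric_id),
    measured_at    TEXT NOT NULL,
    value          REAL NOT NULL
);
""")
db.execute("INSERT INTO metric VALUES (1, 'daily_signups', 100.0)")
db.execute("INSERT INTO measurement VALUES (1, 1, '2017-06-27', 80.0)")

# Compare each measurement against its metric's target.
row = db.execute("""
    SELECT m.name, s.value, m.target_value, s.value >= m.target_value
    FROM measurement s JOIN metric m USING (metric_id)
""").fetchone()
print(row)  # ('daily_signups', 80.0, 100.0, 0)
```

Keeping targets in their own table means a target change applies to all future measurements without touching historical rows.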
Database Fundamentals #4: Create a Database
SQL Server provides multiple ways to perform most functions. In order to maximize your understanding of how SQL Server works and have as many different mechanisms as possible for getting work done, you’ll use the GUI and TSQL to learn how to create and drop databases. You can then use whichever mechanism works best for you at the moment.
Using each method, we’ll first create a database using the least amount of work possible, just so you can see how easy it is to create a database. We’ll go over how to remove databases from the system, getting rid of the database you just created. From there we’ll drill down to create another database, exploring some of the different mechanisms you can use to change how databases get created. Then, we’ll clean up behind ourselves and remove all those databases, too. Performing the actions repeatedly will help you to understand what you’re doing better and increase your retention of the information.
Monday, June 26, 2017
How Datadog is using AWS and PagerDuty to Keep Pace with Growth and Improve Incident Resolution
AWS Knowledge Center Videos: How do I compare RDS parameter values in different parameter groups?
ArangoDB 3.2 Beta Release: Pluggable Storage Engine with RocksDB, a ClusterFoxx, and More
We’re excited to release the beta of ArangoDB 3.2. It’s feature-rich, well-tested, and hopefully plenty of fun for all of you. Keen to take it for a spin? Get ArangoDB 3.2 beta here.
With ArangoDB 3.2, we’re introducing the long-awaited pluggable storage engine and its first new citizen, RocksDB from Facebook.
Inventory Management in MongoDB: A Design Philosophy I Find Baffling
I’m reading MongoDB in Action right now. It is an interesting book and I wanted to learn more about the approach to using MongoDB, rather than just be familiar with the feature set and what it can do. But this post isn’t about the book. It is about something that I read, and as I was reading it, I couldn’t help but put down the book and actually think it through.
More specifically, I’m talking about this little guy. This is a small Ruby class that was presented in the book as part of an inventory management system. In particular, this piece of code is supposed to allow you to sell limited-inventory items and ensure that you won’t sell stuff that you don’t have. The example is that if you have 10 rakes in the store, you can only sell 10 rakes. The approach taken is quite nice: it simulates the notion of having a document for each of the rakes in the store and allows users to place them in their cart. In this manner, you prevent the possibility of selling more than you actually have.
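The mechanics can be sketched in plain Python (the book's version is a Ruby class against MongoDB; this is just the idea, with made-up field names):

```python
# Model each physical rake as its own document; "add to cart" claims
# one specific document. With 10 rake documents, at most 10 can ever
# be claimed, so overselling is structurally impossible.

inventory = [{"sku": "rake", "id": i, "state": "available", "cart": None}
             for i in range(10)]

def add_to_cart(cart_id):
    for item in inventory:
        if item["state"] == "available":
            # In MongoDB this would be an atomic findAndModify on one doc.
            item["state"] = "in_cart"
            item["cart"] = cart_id
            return item["id"]
    return None  # sold out

claimed = [add_to_cart(f"cart-{n}") for n in range(12)]
print(claimed.count(None))  # 2: the 11th and 12th shoppers get nothing
```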
Secure Binlog Server: Encrypted Binary Logs and SSL Communication
The 2.1.3 GA release of MariaDB MaxScale introduces the following key features for the secure setup of the MaxScale Binlog Server:
The binlog cache files on the MaxScale host can now be encrypted.
The MaxScale binlog server also uses SSL in communication with the master and the slave servers.
The MaxScale binlog server can optionally encrypt the events received from the master server: the setup requires a MariaDB (from 10.1.7) master server with encryption active and the mariadb10-compatibility=On option set in maxscale.cnf. This way, both the master and MaxScale will have encrypted events stored in their binlog files.
How does binary log encryption work in MariaDB Server and in MaxScale?
Sunday, June 25, 2017
SQL Data Mask: Masking Configurations and Reports
SQL Data Mask is the latest prototype to come out of the Foundry, Redgate’s research and development division. It copies your database while anonymizing personal data. You can use it to mask your databases right now, free of charge.
In our last update, we shipped the first on-premises version of the app; previous versions only worked in Azure, via our Foundry Labs page.
Saturday, June 24, 2017
Tooling Improvements in Couchbase 5.0 Beta
Tooling improvements have come to Couchbase Server 5.0 Beta. In this blog post, I’m going to show you some of the tooling improvements in:
Query plan visualization, to better understand how a query is going to execute.
Query monitoring, to see how a query is actually executing.
Improved UX, highlighting the new Couchbase Web Console.
Import/export, with the new cbimport and cbexport tooling.
Some of these topics have been covered in earlier blog posts for the developer builds (but not the Beta). For your reference:
Friday, June 23, 2017
Mocking Database Endpoints in MUnit Tests
We will create a sample Mule application that accepts HTTP requests and queries the full results from a table in a database. I will use Derby in-memory DB for this demo. The next section will talk about how you could set up the database. Please click on this link to read more about Apache Derby.
Most of the online MuleSoft tutorials teach you how to run Derby DB in embedded mode; here, I will teach you to run it in server mode, which means you do not need to set up any Spring beans or create any customized database initialization Java code. None of that jiggery-pokery stuff (as any Kiwi would say, aye).
Deploy a PHP With Couchbase Application as Docker Containers
Earlier in the year, I wrote about containerizing applications written in various development technologies that communicate with Couchbase Server. For example, I had written about deploying a Golang application with Docker, a Java application with Docker, and a Node.js application with Docker. This time around we’re going to take a look at how to deploy a PHP container that communicates with a Couchbase Server container.
We’re going to create an automatically provisioned Couchbase node and simplistic PHP application that writes and reads data from the Couchbase NoSQL node.
New Driver Features for MongoDB 3.6
At MongoDB World this week, we announced the big features in our upcoming 3.6 release. I’m a driver developer, so I’ll share the details about the driver improvements that are coming in this version. I’ll cover six new features — the first two are performance improvements with no driver API changes. The next three are related to a new idea, “MongoDB sessions,” and for dessert, we’ll have the new Notification API.
Since 3.4, MongoDB has used wire protocol compression for traffic between servers. This is especially important for secondaries streaming the primary’s oplog: we found that oplog data can be compressed 20x, allowing secondaries to replicate four times faster in certain scenarios. The server uses the Snappy algorithm, which is a good tradeoff between speed and compression.
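The stdlib has no Snappy bindings, so this sketch uses zlib purely to illustrate why repetitive oplog-style data compresses so dramatically (the sample document is made up):

```python
import zlib

# Replicated operations are highly repetitive: the same namespaces,
# field names, and shapes recur constantly, which is why oplog data
# compresses so well.

oplog_like = b"".join(
    b'{"op":"i","ns":"shop.orders","o":{"status":"open","qty":1}}'
    for _ in range(1000)
)
compressed = zlib.compress(oplog_like)
ratio = len(oplog_like) / len(compressed)
print(f"{ratio:.0f}x")  # far above 20x on this artificially repetitive sample
```

Snappy trades some of that ratio for much lower CPU cost, which is the right trade for a replication stream.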
Thursday, June 22, 2017
MongoDB Indexes With Spring Data
When working with large amounts of data, the use of indexes will greatly improve the time it takes for your queries to run by storing part of a collection’s data in a form that is easy to traverse. To add some indexes to your collections, you could run some functions directly via the Mongo Shell — or Spring Data can be used to handle it for you. As the title suggests, that's what we will be looking into in this post.
Let's start with some background information about why we should use indexes. As mentioned in the introduction, indexes allow us to query vast amounts of data more efficiently, which reduces the time taken to retrieve results. This might seem negligible with smaller sets of data, but as the size of documents and collections increases, the time difference between having indexes and not having them becomes definitely noticeable.
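The underlying idea, independent of Spring Data or MongoDB, can be sketched as trading extra storage for a structure that is fast to traverse:

```python
# Without an index, a query scans every document; with one, it goes
# straight to the matching document. A dict stands in here for a
# real index structure such as a B-tree.

collection = [{"_id": i, "email": f"user{i}@example.com"}
              for i in range(100_000)]

# No index: scan every document until one matches.
def find_by_email_scan(email):
    return next((d for d in collection if d["email"] == email), None)

# "Index": a precomputed map from field value to document.
email_index = {d["email"]: d for d in collection}

def find_by_email_indexed(email):
    return email_index.get(email)

assert find_by_email_scan("user99999@example.com") == \
       find_by_email_indexed("user99999@example.com")
```

The index itself costs memory and must be maintained on every write, which is why you index the fields you query, not everything.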
The MySQL High Availability Landscape in 2017, Part 1: The Elders
In this blog, we’ll look at different MySQL high availability options.
The dynamic MySQL ecosystem is rapidly evolving, with many technologies built around MySQL. This is especially true for the technologies involved with the high availability (HA) aspects of MySQL. When I joined Percona back in 2009, some of these HA technologies were very popular – but have since been almost forgotten. During the same interval, new technologies have emerged. In order to give some perspective to the reader, and hopefully help them make better choices, I’ll review the MySQL HA landscape as it is in 2017. This review will be in three parts. The first part (this post) will cover the technologies that have been around for a long time: the elders. The second part will focus on the technologies that are very popular today: the adults. Finally, the last part will try to extrapolate which technologies could become popular in the upcoming years: the babies.
The Classic Northwind Database Converted to the NoSQL World
This blog post uses the classical Northwind example from Microsoft to show how you can migrate from a traditional relational database to a NoSQL cloud database.
The Northwind Traders Access database is a sample database that shipped with the Microsoft Office suite. The Northwind database contains the sales data for a fictitious company called Northwind Traders, which imports and exports specialty foods from around the world. Developers (back in the 90s) used it to learn the MS Access product.
The Cloud and Global Policy Solutions: Policy as a Strategic Enabler for Cloud Adoption
This Is What It's Like When Data Collides (Relational + JSON)
In the past, a benefit of using non-relational databases (for example, NoSQL) was their simple and flexible structure. If the data was structured, a relational database was deployed. If it was semi-structured (for example, JSON), a NoSQL database was deployed.
Today, a relational database like MariaDB Server (part of MariaDB TX) can read, write, and query both structured and semi-structured data, together. MariaDB Server supports semi-structured data via dynamic columns and JSON functions. This blog post will focus on JSON functions with MariaDB Server, using examples to highlight one of the key benefits: data integrity.
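As a rough analogue of that integrity benefit (MariaDB can enforce it in the schema with a CHECK constraint over JSON_VALID; here a Python-side check stands in, and the table is hypothetical):

```python
import json
import sqlite3

# Reject malformed JSON before it ever reaches the table, mimicking a
# CHECK(JSON_VALID(attributes)) constraint.

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, attributes TEXT)")

def insert_product(pid, attributes):
    json.loads(attributes)  # raises ValueError if not valid JSON
    db.execute("INSERT INTO product VALUES (?, ?)", (pid, attributes))

insert_product(1, '{"color": "red", "size": "M"}')
try:
    insert_product(2, '{color: red}')  # not valid JSON
except ValueError:
    print("rejected")
```

Enforcing validity at the database layer, as MariaDB does, is stronger than this sketch: it protects the data even from applications that skip the check.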
Wednesday, June 21, 2017
The Secrets of N1QL (SQL on JSON)
In this series, we’ll publish an article on the third Tuesday every other month where we interview someone we find exciting in our industry from a jOOQ perspective. This includes people who work with SQL, Java, Open Source, and a variety of other related topics.
I’m very excited to feature today Gerald Sangudi and Keshav Murthy, the creators of N1QL, Couchbase’s SQL-based JSON query language.
Database Fundamentals #3: What's in a Database?
It’s worth noting that a lot of people will never need to create their own database. You may never create your own tables or other data structures, either. You may only ever run backups and restores and manipulate the security on the system and let application installs create databases for you. That’s completely understandable and perfectly in line with the needs of many businesses and many accidental DBAs. However, it’s a good idea to understand what this stuff is and how it works as part of understanding SQL Server.
A Database Is Actually Files
You need to store information that you want to be able to retrieve later, and it’s necessary to organize that information. If you were working with a word processing program, you would store different documents in different files; you really wouldn’t put all of your documents into a single, large file. SQL Server functions in a very similar manner. While you have a server, you’re not going to simply store all the various types of information necessary to run your business in one large pile within that server. Instead, you’re going to organize that information. The initial organizational mechanism for SQL Server is the database. A database allows you to keep sets of information in separate storage areas. Further, it allows you to isolate the security for those different sets of information so that you can control who gets to see or modify that data.
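sqlite3 makes the "database is files" point visible in miniature, since each sqlite database is literally one file on disk (the directory and database names below are made up):

```python
import os
import sqlite3
import tempfile

# Two separate databases become two separate files, each its own
# storage area, just as SQL Server keeps each database in its own
# data files.

workdir = tempfile.mkdtemp()
for name in ("sales", "hr"):
    path = os.path.join(workdir, f"{name}.db")
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE t (x INTEGER)")  # first write materializes the file
    db.commit()
    db.close()

print(sorted(os.listdir(workdir)))  # ['hr.db', 'sales.db']
```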
Getting Started With NoSQL Using Couchbase Server and PHP
A few days ago I wrote about using PHP with Docker and Couchbase, but I never really got into best practices of going all in with NoSQL. For example, how do you read and write data with Couchbase Server while using PHP? What happens when you need to create some advanced queries or create high-performance indexes?
We’re going to see some examples for using Couchbase Server with PHP, an extension to the previous tutorial around containerizing the database and web application.
Data Innovation: Sharing Data in the Cloud for Greater Innovation and Citizen Service (122971)
Data Hero has arrived!!!
Fun With SQL: Functions in Postgres
In our previous Fun with SQL post on the Citus Data blog, we covered w...
Monitoring OpenWRT With Telegraf
What's the most popular open-source router software in the world? OpenWRT...
Next-Level MySQL Performance: Tarantool as a Replica
Refactoring your MySQL stack by adding an in-memory NoSQL...
How to Use SQL Complete for T-SQL Code
I was recently working on a project with several stored procedures, fun...