#SQL: Update XML field with a value from another table
We are going to update the Title field in the XML below. Assume our Library table contains incorrect values there; we will find the correct values in the Title field of the AllBooks table.
```xml
<Book xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Title>Incorrect Title</Title>
</Book>
```
```sql
UPDATE Library
SET info.modify('replace value of (/Book/Title/text())[1] with sql:column("Title")')
FROM Library
LEFT OUTER JOIN AllBooks ON Library.BookID = AllBooks.ID
WHERE info.value('data((/Book/Title)[1])', 'nvarchar(max)') LIKE 'Incorrect Title%'
```
#PowerShell: Register AWS EC2 instances in Amazon Route53 (new way)
Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service.
I use this script https://github.com/konstantinvlasenko/cloud/blob/master/Register-CNAME.ps1 to give Amazon EC2 instances meaningful DNS names.
Simple usage:
```powershell
$config = @{ DomainName = 'mylab.com' }
$name = "www.$($config.DomainName)"
# get instances
$instance = (Get-EC2Instance $InstanceId).RunningInstance
# update R53
.\Register-CNAME.ps1 $config $name $instance.PublicIpAddress
```
Advanced usage (register in another account; registering A record):
```powershell
$config = @{
    DomainName  = 'mylab.com';
    AssumeRoles = @{
        R53 = @{
            ARN         = 'arn:aws:iam::600021112340:role/Route53';
            SessionName = 'Friends'
        }
    };
}
$name = "www.$($config.DomainName)"
# get instances
$instance = (Get-EC2Instance $InstanceId).RunningInstance
# update R53 (register an A record)
.\Register-CNAME.ps1 $config $name $instance.PublicIpAddress 'A'
```
#SharePoint 2013 #PowerShell: How to get user permissions report
```powershell
function Get-SPPermissionsReport($web, $recursive) {
    $web | Get-SPUser | % {
        New-Object PSObject -Property @{
            UserLogin                = $_.UserLogin
            'Roles given explicitly' = $_.Roles
            'Roles given via groups' = $_.Groups | % { $_.Roles }
            Groups                   = $_.Groups
            Url                      = $web.Url
        }
    }
    if ($recursive) {
        $web.Webs | % { Get-SPPermissionsReport $_ $recursive }
    }
}

$web = Get-SPWeb http://yoursharepoint/sites/department
Get-SPPermissionsReport $web $true | Sort-Object UserLogin | Out-GridView
```
Then you can apply an additional filter by user or URL right in the GridView, or add extra matching criteria there, e.g. match by role or group name.
Apply GeoTrust certificate to AWS ELB
Here is the tool I used to verify the correctness of my AWS ELB SSL configuration.
- Certificate Name – put whatever you want here
- Private Key – copy and paste the content of the server.key file
- Public Key Certificate – copy and paste the content of your your_site_name_ee.cer file
This is enough to pass the tool's validation, but you will get one warning: GeoTrust is a relatively new player on the market, and older browsers don't have information about it. That is why GeoTrust provides intermediate certificates. Set the Certificate Chain field if you care about older browsers.
- Certificate Chain – copy and paste the content of GeoTrust Extended Validation SSL CA – G2.txt, then append the content of GeoTrust Primary Certification Authority.txt
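The Certificate Chain field is simply the PEM text of the intermediate certificate followed by the root certificate, in that order. A minimal sketch of assembling it (the certificate contents below are placeholders standing in for the two GeoTrust .txt files):

```python
def build_chain(*pem_blocks):
    """Concatenate PEM certificate blocks in order: intermediate CA
    first, then the root CA, as the ELB chain field expects."""
    return "\n".join(block.strip() for block in pem_blocks) + "\n"

# placeholder contents, not real certificates
intermediate = "-----BEGIN CERTIFICATE-----\nG2-INTERMEDIATE\n-----END CERTIFICATE-----"
root = "-----BEGIN CERTIFICATE-----\nPRIMARY-ROOT\n-----END CERTIFICATE-----"

chain = build_chain(intermediate, root)
print(chain.index("G2-INTERMEDIATE") < chain.index("PRIMARY-ROOT"))  # True
```

The order matters: browsers walk the chain from your server certificate up, so the intermediate must come before the root.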
Back up your on-prem/local #PostgreSQL database to a cloud #AWS #RDS snapshot
#PostgreSQL #MySQL backup for $0.026 + $0.125 per GB-month
Total costs of the backup operation: $0.026 + $0.125 per GB-month
The script below performs the following steps:
- create a PostgreSQL AWS RDS t1.micro instance
- wait until the instance has started, using aws rds describe-db-instances
- get the AWS RDS instance address
- copy the database
- create an AWS RDS instance snapshot and terminate the instance, using aws rds delete-db-instance
- send a notification via the AWS SNS service
Prerequisites:
- pip install awscli
- Configuring the AWS Command Line Interface
```bash
#!/bin/bash
SNSTOPIC=arn:aws:sns:us-east-1:000000000000:MYTOPIC
RDSINSTANCE=MYBACKUP
_now=$(date +"%d%m%Y")
BACKUPNAME="$RDSINSTANCE$_now"
DATABASE=MYDB
DBADMIN=admin
export PGPASSWORD=MYPASSWORD

# create the RDS instance
aws rds create-db-instance --db-instance-identifier $RDSINSTANCE \
    --allocated-storage 5 --db-instance-class db.t1.micro --no-multi-az \
    --engine postgres --master-username $DBADMIN --master-user-password $PGPASSWORD \
    --db-name $DATABASE --backup-retention-period 0

# wait until the instance is available
while [ `aws rds describe-db-instances --db-instance-identifier $RDSINSTANCE | python -c 'import json,sys;obj=json.load(sys.stdin);print(obj["DBInstances"][0]["DBInstanceStatus"])'` != "available" ]; do
    sleep 10
done

# get the instance address
AWSHOST=`aws rds describe-db-instances --db-instance-identifier $RDSINSTANCE | python -c 'import json,sys;obj=json.load(sys.stdin);print(obj["DBInstances"][0]["Endpoint"]["Address"])'`

# copy the database
pg_dump -U postgres dbname=$DATABASE | psql --host=$AWSHOST --username=$DBADMIN --dbname=$DATABASE

# terminate the instance and create a final snapshot
aws rds delete-db-instance --db-instance-identifier $RDSINSTANCE --final-db-snapshot-identifier $BACKUPNAME

# send a notification
aws sns publish --topic-arn $SNSTOPIC --subject 'AWS RDS BACKUP' --message 'Done'
```
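If you reuse the JSON extraction from the inline one-liners elsewhere, it can live in a small helper. A sketch working on a sample `aws rds describe-db-instances` response (the values here are made up):

```python
import json

# a sample of the JSON shape returned by `aws rds describe-db-instances`,
# trimmed to the fields the backup script actually reads
sample = '''
{
  "DBInstances": [
    {
      "DBInstanceStatus": "available",
      "Endpoint": {"Address": "mybackup.abc123.us-east-1.rds.amazonaws.com"}
    }
  ]
}
'''

def instance_status(describe_json):
    """Extract DBInstanceStatus from describe-db-instances output."""
    obj = json.loads(describe_json)
    return obj["DBInstances"][0]["DBInstanceStatus"]

def instance_address(describe_json):
    """Extract the endpoint address from the same output."""
    obj = json.loads(describe_json)
    return obj["DBInstances"][0]["Endpoint"]["Address"]

print(instance_status(sample))   # available
print(instance_address(sample))  # mybackup.abc123.us-east-1.rds.amazonaws.com
```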
Doctrine 1.x log all queries
To log all queries we need an Event Listener:
```php
class QueryDebuggerListener extends Doctrine_EventListener
{
    public function preStmtExecute(Doctrine_Event $event)
    {
        $q = $event->getQuery();
        $params = $event->getParams();
        while (sizeof($params) > 0) {
            $param = array_shift($params);
            if (!is_numeric($param)) {
                $param = sprintf("'%s'", $param);
            }
            $q = substr_replace($q, $param, strpos($q, '?'), 1);
        }
        error_log($q);
    }
}

$queryDbg = new QueryDebuggerListener();
$dbh = new PDO($dsn, $user, $password);
$conn = Doctrine_Manager::connection($dbh);
$conn->addListener($queryDbg);
```
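The placeholder-substitution trick in `preStmtExecute` is language-agnostic; the same idea in Python, for illustration only (this builds a log string, not a safely escaped query):

```python
def interpolate(query, params):
    """Replace each '?' placeholder with its parameter in order,
    quoting non-numeric values. Purely for logging, never for execution."""
    out = query
    for param in params:
        text = param if isinstance(param, (int, float)) else "'%s'" % param
        out = out.replace("?", str(text), 1)  # replace leftmost '?' only
    return out

print(interpolate("SELECT * FROM users WHERE id = ? AND name = ?", [7, "bob"]))
# SELECT * FROM users WHERE id = 7 AND name = 'bob'
```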
Restore #AWS RDS instance from a database snapshot by using AWS CLI
The AWS Command Line Interface is a unified tool to manage your AWS services.
I am using AWS RDS PostgreSQL for my application. I don't need to run this application 24/7/365, so I periodically stop my instances to save some money. Nobody wants to do repeated tasks manually, and neither do I.
Below is the bash script I use to restore a DB instance from a snapshot and wait until it reaches the available status:
```bash
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier myDBinstanseName \
    --db-snapshot-identifier myDBinstanseName-final-snapshot \
    --db-instance-class db.t1.micro --no-multi-az

# wait until the instance is available
while [ `aws rds describe-db-instances --db-instance-identifier myDBinstanseName | python -c 'import json,sys;obj=json.load(sys.stdin);print(obj["DBInstances"][0]["DBInstanceStatus"])'` != "available" ]; do
    sleep 10
done
```
Note: this script is part of a bigger automation, so I don't run it manually. :)
EC2 Instance Metadata Query Tool
On Ubuntu, you can use the ec2metadata script to query EC2 instance metadata from within a running EC2 instance.
```
$ ec2metadata
ami-id: ami-00000000
ami-launch-index: 0
ami-manifest-path: (unknown)
ancestor-ami-ids: unavailable
availability-zone: us-east-1a
block-device-mapping: ami root
instance-action: none
instance-id: i-00000000
instance-type: rus1.universe
local-hostname: ip-10-72-174-90.ec2.internal
local-ipv4: 10.72.174.90
kernel-id: aki-88aa75e1
mac: unavailable
profile: default-paravirtual
product-codes: unavailable
public-hostname: ec2-00-000-000-000.compute-1.amazonaws.com
public-ipv4: 00.000.000.000
public-keys: ['ssh-rsa MYKEY...']
ramdisk-id: unavailable
reservation-id: unavailable
security-groups: mysg
user-data: unavailable
```
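If you need these values in a script, the `key: value` output above is easy to parse. A small sketch (the sample output is abbreviated):

```python
def parse_ec2metadata(text):
    """Parse ec2metadata output ('key: value' per line) into a dict."""
    result = {}
    for line in text.splitlines():
        if ": " in line:
            key, _, value = line.partition(": ")
            result[key.strip()] = value.strip()
    return result

sample = """\
ami-id: ami-00000000
instance-id: i-00000000
local-ipv4: 10.72.174.90
availability-zone: us-east-1a
"""

meta = parse_ec2metadata(sample)
print(meta["instance-id"])  # i-00000000
```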
How to relaunch CloudInit script
```bash
sudo rm /var/lib/cloud/sem/*
sudo /etc/init.d/cloud-init start
```