Return random records in a table-valued function?

My overarching goal is to generate sets of random Symptom records for each Enrollee in a drug study, so that for each cycle (period of time), the code will insert a random number of random records for each enrollee. I'm trying to return a number of random records from a table, but inside a table-valued function... (which could be my problem).

[code="sql"]
CREATE FUNCTION dbo.ufn_GetTopSymptoms
(
    @enrollID   INT,
    @CTCVersion VARCHAR(20),
    @NumRecords INT
)
RETURNS TABLE
AS
RETURN
    SELECT TOP(@NumRecords)
           ID,
           @enrollID AS EnrolledPatientID
    FROM dbo.Data
    WHERE [Version] = @CTCVersion
    ORDER BY NEWID();
GO
[/code]

but that [code="sql"]ORDER BY NEWID()[/code] clause is apparently illegal, because here's the error it throws:

Msg 443, Level 16, State 1, Procedure ufn_GetTopSymptoms, Line 13
Invalid use of a side-effecting operator 'newid' within a function.

I was hoping I could return a set of enrollment IDs and then use CROSS APPLY to generate a random set of records for each enrollment ID... is this not possible with APPLY? I was trying to avoid using a cursor... The idea is basically to create all the Symptom records for all the patients in a treatment cycle at once by using

Enrollee OUTER APPLY dbo.ufn_GetTopSymptoms(dbo.Enrollment.EnrolleeID)

but that's clearly not working. Is there a way to do this without resorting to a cursor? I saw Paul White's (outer and cross apply) articles, but maybe I misapplied what I read. Any pointers on how to do this right?

Thanks!
Pieter
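A direction that may help (a sketch only, not tested here; the variable values are made up and the column names on Enrollment/Data are guesses from the post): since ORDER BY NEWID() is only disallowed inside a function body, the randomized TOP can live in a correlated CROSS APPLY in the calling query instead of inside a TVF. Referencing the outer row inside the APPLY is what typically makes the random sample re-evaluate per enrollee rather than being computed once and reused:

[code="sql"]
DECLARE @NumRecords INT = 5,               -- sample values, not from the post
        @CTCVersion VARCHAR(20) = '4.0';

SELECT e.EnrolleeID,
       x.ID AS SymptomID
FROM dbo.Enrollment AS e
CROSS APPLY
(
    SELECT TOP (@NumRecords) d.ID
    FROM dbo.Data AS d
    WHERE d.[Version] = @CTCVersion
    -- Referencing e.EnrolleeID correlates the subquery to the outer row,
    -- so each enrollee gets its own random TOP(n) rather than a cached one.
    ORDER BY CHECKSUM(NEWID(), e.EnrolleeID)
) AS x;
[/code]

OUTER APPLY works the same way if enrollees with no matching Data rows still need to appear in the result.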
Why partition function works for datetime2 but not for datetime
Hi all,[code="sql"]DECLARE @DatePartitionFunction nvarchar(max) = N'CREATE PARTITION FUNCTION DatePartitionFunction (datetime) AS RANGE RIGHT FOR VALUES (';DECLARE @i datetime = '2007-09-01 00:00:00.000';WHILE @i < '2008-10-01 00:00:00.000'BEGIN SET @DatePartitionFunction += '''' + CAST(@i as nvarchar(10)) + '''' + N', '; SET @i = DATEADD(MM, 1, @i); ENDSET @DatePartitionFunction += '''' + CAST(@i as nvarchar(10))+ '''' + N');';EXEC sp_executesql @DatePartitionFunction;GOMsg 7705, Level 16, State 2, Line 1Could not implicitly convert range values type specified at ordinal 1 to partition function parameter type.[/code]however if I change to datetime2 it works[code="sql"]DECLARE @DatePartitionFunction nvarchar(max) = N'CREATE PARTITION FUNCTION DatePartitionFunction (datetime2) AS RANGE RIGHT FOR VALUES (';DECLARE @i datetime2 = '2007-09-01 00:00:00.000';WHILE @i < '2008-10-01 00:00:00.000'BEGIN SET @DatePartitionFunction += '''' + CAST(@i as nvarchar(10)) + '''' + N', '; SET @i = DATEADD(MM, 1, @i); ENDSET @DatePartitionFunction += '''' + CAST(@i as nvarchar(10))+ '''' + N');';EXEC sp_executesql @DatePartitionFunction;GOCommand(s) completed successfully.[/code]with reference to http://technet.microsoft.com/en-us/library/ms187802.aspx[quote]input_parameter_typeIs the data type of the column used for partitioning. All data types are valid for use as partitioning columns, except text, ntext, image, xml, timestamp, varchar(max), nvarchar(max), varbinary(max), alias data types, or CLR user-defined data types.[/quote]in this case why isn't datetime works?version is as follow:[code="sql"]Microsoft SQL Server 2012 (SP1) - 11.0.3128.0 (X64) Dec 28 2012 20:23:12 Copyright (c) Microsoft Corporation Enterprise Evaluation Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)[/code]from http://msdn.microsoft.com/en-us/library/cc645993.aspx[quote]Table and index partitioning is supported in this edition[/quote]so I don't know why it fails!thanks a lot!
Case sensitive pattern matching
I have a set of data where a column contains titles which have been formatted as follows:

"FirstWordSecondWordThirdWord..." etc.

That is, all the words have been concatenated but can be visually separated by their capital first letters. For reporting purposes, I need to break this column into the separate words so that it looks like:

"First Word Second Word Third Word..." etc.

Any thoughts as to how this can be achieved?
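One possible approach (a sketch only; the function name is mine and it assumes plain words with no embedded acronyms or digits): walk the string and insert a space before every capital letter, using a binary collation so that the [A-Z] range matches upper-case letters only:

[code="sql"]
CREATE FUNCTION dbo.ufn_SplitOnCapitals (@s NVARCHAR(200))
RETURNS NVARCHAR(400)
AS
BEGIN
    DECLARE @i   INT = 2;                       -- start at the second character
    DECLARE @out NVARCHAR(400) = LEFT(@s, 1);

    WHILE @i <= LEN(@s)
    BEGIN
        -- The binary collation makes the [A-Z] test case sensitive
        IF SUBSTRING(@s, @i, 1) COLLATE Latin1_General_BIN LIKE '[A-Z]'
            SET @out += N' ';
        SET @out += SUBSTRING(@s, @i, 1);
        SET @i += 1;
    END

    RETURN @out;
END
GO

SELECT dbo.ufn_SplitOnCapitals(N'FirstWordSecondWordThirdWord');
-- First Word Second Word Third Word
[/code]

A set-based alternative using a tally table would scale better over a large table, but the loop version is easier to follow for a one-off report.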
Problems to learn...
Hi guys... In my earlier post I came to know about the GROUPS AND ISLANDS problem. Can you guys point me to some more problems and solutions like that to learn from? That would be really helpful for handling similar situations. Thanks in advance.
Incremental load
Hi,

We need to implement incremental load in a database. A sample scenario: there is a view (INCOMEVW) which is built on top of a query like

CREATE VIEW INCOMEVW
AS
SELECT CLIENTID, COUNTRYNAME, SUM(OUTPUT.INCOME) AS INCOME
FROM
(
    SELECT EOCLIENT_ID AS CLIENTID, EOCOUNTRYNAME AS COUNTRYNAME, EOINCOME AS INCOME
    FROM EOCLIENT C
    INNER JOIN EOCOUNTRY CT ON C.COUNTRYCODE = CT.COUNTRYCODE
    UNION ALL
    SELECT ENCLIENT_ID AS CLIENTID, ENCOUNTRYNAME AS COUNTRYNAME, ENINCOME AS INCOME
    FROM ENCLIENT EC
    INNER JOIN ENCOUNTRY ECT ON EC.COUNTRYCODE = ECT.COUNTRYCODE
) OUTPUT
GROUP BY CLIENTID, COUNTRYNAME

This is a sample view. As of now there is a full load happening from the source (SELECT * FROM INCOMEVW) which loads the target table tbl_Income. We need to pick up only the delta and load it to the target table using a staging area. The challenges are:

1) If we get the delta (inserted, updated or deleted rows) in the source tables EOCLIENT, EOCOUNTRY, ENCLIENT and ENCOUNTRY, how do we load just the incremental changes to the single target table tbl_Income?
2) How do we do the SUM operation with GROUP BY in an incremental load?
3) We are planning a daily incremental load and are thinking of creating the same table structure as the source, with Date and Flag columns to identify the load date and whether the source row is an Insert, Update or Delete. But we are not sure how to frame something like this view and load it to a single target with SUM operations.

Any suggestions?
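One pattern that may fit (a sketch only; dbo.stg_ChangedClients is a hypothetical staging table holding the CLIENTIDs touched since the last load): re-aggregate just the affected clients through the existing view and MERGE the result into the target, so the SUM/GROUP BY never has to be maintained row by row:

[code="sql"]
MERGE dbo.tbl_Income AS tgt
USING
(
    -- Re-aggregate only the clients that changed since the last load
    SELECT v.CLIENTID, v.COUNTRYNAME, v.INCOME
    FROM dbo.INCOMEVW AS v
    WHERE v.CLIENTID IN (SELECT CLIENTID FROM dbo.stg_ChangedClients)
) AS src
    ON  tgt.CLIENTID    = src.CLIENTID
    AND tgt.COUNTRYNAME = src.COUNTRYNAME
WHEN MATCHED THEN
    UPDATE SET tgt.INCOME = src.INCOME
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CLIENTID, COUNTRYNAME, INCOME)
    VALUES (src.CLIENTID, src.COUNTRYNAME, src.INCOME);
[/code]

Deletes that merely lower a client's total are covered by the UPDATE branch, because the view is re-aggregated; clients whose rows disappear entirely would need an extra delete pass keyed on the same staged client list.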
Calculation in sql query ..
Hi All,

We are facing a challenge calculating the percentages shown in the highlighted cells of the snapshot below (reproduced here as text). Any help on how to calculate this in a SQL query would be very helpful.

        A1     A2     12    24        36    48
2010    56     31      6    19.35%     7    22.58%
2011    45     36      5    14%        4    11.11%
2012    56     12      2    16.66%     3    25%
2013    78     12      3    25.00%
       235    111     16    14.41%    14    14.14%

For 14.41% = 16 / 111
For 14.14% = 14 / (111 - 12), and so on for the further columns (36, 48, ...).

Thanks,
Alok
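Without the real table layout it is hard to be precise, but the general shape of the calculation - a count divided by a column total - can be done with a window aggregate, so no separate totals query is needed. A sketch with made-up data, purely to show the pattern:

[code="sql"]
CREATE TABLE #Sample ([Year] INT, A2 INT, Cnt12 INT);
INSERT INTO #Sample VALUES (2010, 31, 6), (2011, 36, 5), (2012, 12, 2), (2013, 12, 3);

SELECT [Year],
       A2,
       Cnt12,
       -- 100.0 forces decimal arithmetic; SUM() OVER () is the whole-column total
       CAST(100.0 * SUM(Cnt12) OVER () / SUM(A2) OVER () AS DECIMAL(5,2)) AS PctOfTotal
FROM #Sample;

DROP TABLE #Sample;
[/code]

The (111 - 12) style denominator would just subtract a second windowed SUM restricted to the rows that have to be excluded (for example the latest year).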
Insert into a table with an identity column from another table
Hi everyone, I just created a new table with over 100 columns and I need to populate just the first two columns. The first column to populate is an identity column that is the primary key. The second column is a foreign key to another table, and I am trying to populate it with all the values from the foreign key's source column. This is what I am trying to do:

column1 = ID
column2 = P_CLIENT_ID

SET IDENTITY_INSERT PIM1 ON

INSERT INTO PIM1 (P_CLIENT_ID)
SELECT Client.ID FROM P_Client

So I am trying to insert both an identity value and a value from another table while leaving the other columns blank. How do I go about doing this?

Thanks.
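A minimal sketch of what may be intended (assuming column1 really is the IDENTITY primary key of PIM1 and the remaining columns are nullable or have defaults): leave IDENTITY_INSERT off and insert only the foreign key values; the identity column then numbers itself. SET IDENTITY_INSERT ... ON is only needed when you want to supply explicit values for the identity column, which is the opposite of what is wanted here.

[code="sql"]
-- The identity column is not named in the column list, so SQL Server generates it
INSERT INTO PIM1 (P_CLIENT_ID)
SELECT ID
FROM P_Client;
[/code]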
Table Structure
Hi all,

I have tables with structures like this:

[code="sql"]
/****** Object:  Table [dbo].[ServiceCallJobPhoto]    Script Date: 12/24/2013 12:48:50 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[ServiceCallJobPhoto](
    [ServiceCallJobPhotoId] [int] IDENTITY(1,1) NOT NULL,
    [InstanceId] [nchar](3) NOT NULL,
    [SiteId] [nchar](3) NOT NULL,
    [ServiceCallJobId] [int] NULL,
    [WorkDate] [datetime] NOT NULL,
    [MechanicId] [int] NOT NULL,
    [PhotoDescription] [nvarchar](75) NOT NULL,
    [PhotoData] [nvarchar](max) NOT NULL,
    [StatusFlag] [bit] NOT NULL,
    [DateAdded] [datetime] NOT NULL,
    [AddedBy] [nvarchar](75) NOT NULL,
    [DateChanged] [datetime] NULL,
    [ChangedBy] [nvarchar](75) NULL,
    [LocalDateChanged] [datetime] NOT NULL,
    [LocalChangedBy] [nvarchar](75) NOT NULL,
    [SyncTimeStamp] [timestamp] NOT NULL,
 CONSTRAINT [PK_ServiceCallJobPhoto] PRIMARY KEY CLUSTERED
(
    [ServiceCallJobPhotoId] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY],
 CONSTRAINT [uq1_ServiceCallJobPhoto] UNIQUE NONCLUSTERED
(
    [InstanceId] ASC,
    [SiteId] ASC,
    [ServiceCallJobId] ASC,
    [MechanicId] ASC,
    [WorkDate] ASC,
    [PhotoDescription] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

ALTER TABLE [dbo].[ServiceCallJobPhoto] WITH CHECK ADD CONSTRAINT [FK_ServiceCallJobPhotoJobId] FOREIGN KEY([ServiceCallJobId])
REFERENCES [dbo].[ServiceCallJob] ([ServiceCallJobId])
GO
ALTER TABLE [dbo].[ServiceCallJobPhoto] CHECK CONSTRAINT [FK_ServiceCallJobPhotoJobId]
GO
ALTER TABLE [dbo].[ServiceCallJobPhoto] WITH CHECK ADD CONSTRAINT [FK_ServiceCallJobPhotoMechId] FOREIGN KEY([MechanicId])
REFERENCES [dbo].[Employee] ([EmployeeId])
GO
ALTER TABLE [dbo].[ServiceCallJobPhoto] CHECK CONSTRAINT [FK_ServiceCallJobPhotoMechId]
GO
[/code]

The PK is created on an identity column which is not going to be used in the joins of stored procedures. Also, the unique key is created on columns having the datatypes datetime and nvarchar. So my concern is that, while performing joins, the clustered index will never be used, which leads to table scans, and there may also be a performance impact from having a datetime column in the key. What can be done to improve this?

Thanks in advance...
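If the joins really do use ServiceCallJobId rather than the clustered key, one option (a sketch; the included columns are just a guess from the DDL above) is a supporting nonclustered index so those joins can seek instead of scanning:

[code="sql"]
CREATE NONCLUSTERED INDEX IX_ServiceCallJobPhoto_ServiceCallJobId
    ON dbo.ServiceCallJobPhoto (ServiceCallJobId)
    INCLUDE (WorkDate, MechanicId, PhotoDescription);
[/code]

Keeping the narrow identity column as the clustered primary key is usually fine; the wide unique constraint already behaves as a nonclustered index, so the question is mainly whether the columns your queries actually filter and join on have an index that leads with them.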
How to convert and deinterlace Panasonic HVX200 1080i MXF to ProRes for FCP easily
I have a Panasonic camera, model AG-HVX200. The camera is compatible with DVCPRO HD, DVCPRO50, DVCPRO25 and consumer DV recording standards. I shot some MXF files with the HVX200 on DVCPRO settings for 4x3 aspect ratio. Now when I use Log and Transfer to import these MXF files into FCP6, a few MXF clips have some bad interlacing. When I view the MXF files in the capture window, they do not appear to have interlacing issues. Later I realized that the capture window only shows a deinterlaced image, so any interlacing issues would not be apparent there. However, the problem is still there. I looked for a useful workaround to convert and [b][url=http://www.aunsoft.com/mxf-converter-pro-mac/]deinterlace HVX200 MXF to ProRes MOV[/url][/b]. Someone suggested I use Aunsoft TransMXF Pro for Mac, an MXF conversion program that is better than FCP at converting MXF to ProRes MOV and simultaneously removing interlacing. What's more, this MXF to FCP Converter can preserve and separate multiple audio tracks in an MXF file, converting MXF to multi-audio-track ProRes 422, ProRes 422 LT, ProRes 422 HQ or ProRes 4444 for FCP. Here I would like to share the brief workflow to convert and deinterlace HVX200 MXF to ProRes MOV for FCP.

Step 1. Load MXF into TransMXF Pro for Mac
Click "add video" or "add folder" to load MXF files. Here I directly drag MXF into the HVX200 MXF Converter. This program also supports batch conversion. To avoid ending up with no audio from the source MXF file, you'd better preview the loaded MXF files in the preview window to make sure the output encoded MOV files will also have sound. You can capture MXF images easily with the Snapshot function when previewing an MXF file. Check "Merge into one file" to join MXF clips into one big file.
[img]http://www.aunsoft.com/images/product/transmxf-pro-mac/transmxf-pro-mac-input-20131224.png[/img]

Step 2. Choose ProRes 422 HQ MOV as the output format
Click "format" and under the FCP menu you can quickly pick ProRes 422 HQ MOV. You can also enter FCP in the search bar directly. ProRes 422 is the most suggested option for FCP. ProRes 422 HQ is recommended here for offering even greater headroom to preserve quality.
[img]http://www.aunsoft.com/images/product/transmxf-pro-mac/transmxf-output-20131107.jpg[/img]

Step 3. Deinterlace Panasonic HVX200 1080i MXF files
Click the "Editor" button and then click "Effect". Tick the "Deinterlacing" checkbox to [b][url=http://www.aunsoft.com/convert_sony_pmw-ex1r_xdcam_mxf_to_prores_422_hq_mov_for_fcp_x/]deinterlace HVX200 1080i MXF files[/url][/b] before converting. This way, the ProRes MOV generated from the raw MXF files will play flawlessly in FCP. Here you can also crop unwanted video parts, trim video length and add some special effects, and preview the raw MXF and output MOV videos simultaneously. You can also customize the output MOV profile via "settings" in the main interface to reset video bitrate, frame rate, video size, audio channels, etc.
[img]http://www.aunsoft.com/images/product/transmxf-pro-mac/transmxf-pro-mac-deinterlace-20131224.png[/img]

Step 4: Convert MXF files into ProRes MOV
Click the "Start Conversion" icon and the [b][url=http://www.aunsoft.com/mxf-converter-pro-mac/]HVX200 MXF to FCP Converter[/url][/b] will convert MXF to ProRes 422 HQ MOV for FCP.
After MXF to ProRes 422 conversion, you will be able to get the output files via clicking on "Open" button and then view and play converted HVX200 MXF in FCP without bad interlacing.What's more, TransMXF Pro for Mac has the capability to convert/transcode/rewrap MXF file to multiple audio tracks with MOV, MKV, MP4, as well as tons of common output video formats, such as MKV, FLV, WMV, AVI, MPEG, etc, which ever expand the MXF applied range.[b]Click More[/b][b][url=http://www.aunsoft.com/mxf_converter_reviews/]MXF converter reviews[/url][/b][b][url=http://www.aunsoft.com/mxf_products_comparison/]mxf products comparison[/url][/b][b][url=http://www.aunsoft.com/transcode-mxf-to-fcpx/]Ultimate Software Solution for MXF, P2 MXF[/url][/b][b][url=http://www.aunsoft.com/preserve_and_separate_multiple_audio_tracks_in_canon_xf300_mxf_for_fcp_7/]TransMXF Pro for Mac[/url][/b][b][url=http://www.aunsoft.com/what-is-mxf-convert-mxf-p2-mxf-to-mov-edit-mxf-fcp/]convert Panasonic P2 MXF to ProRes mov for FCP?[/url][/b][b][url=http://www.aunsoft.com/imedia-converter-mac/]iMedia Converter for Mac[/url][/b][b][url=http://www.aunsoft.com/available_download_mxf_viewer_to_view_and_play_jvc_gy-hm650_mxf_file_in_avid/]view and play JVC GY-HM650 MXF file in Avid[/url][/b][b][url=http://www.aunsoft.com/apply_codec_to_panasonic_dvcpro_hd_p2_cards_mxf_for_imovie_8/9/11_with_30_seconds/]MXF to FCP Converter[/url][/b]
Update Temp table where the column names are unknown
In a stored procedure I dynamically create a temp table by selecting the names of Applications from a regular table. Then I add a date column and add the last 12 months. See attachment.

So far so good. Now I want to update the data in the columns by querying another regular table. Normally it would be something like:

[code]
UPDATE ##TempTable
SET [columnName] = (SELECT SUM(columnName)
                    FROM RegularTable
                    WHERE FORMAT(RegularTable.Date,'MM/yyyy') = FORMAT(##TempMonths.x,'MM/yyyy'))
[/code]

However, since I don't know what the names of the columns are at any given time, I need to do this dynamically. So my question is, [b]how can I get the column names of a temp table dynamically while doing an UPDATE?[/b]

Thanks!
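One way to attack the dynamic part (a sketch; it assumes, as in the snippet above, that the date column on the temp table is called x, that the matching source columns in RegularTable share the temp table's column names, and that the temp table is a global ## table so its metadata is visible in tempdb):

[code="sql"]
DECLARE @sql NVARCHAR(MAX) = N'';

-- Build one UPDATE statement per data column found on the global temp table
SELECT @sql += N'
UPDATE ##TempTable
SET ' + QUOTENAME(c.name) + N' = (SELECT SUM(' + QUOTENAME(c.name) + N')
                                  FROM RegularTable r
                                  WHERE FORMAT(r.[Date], ''MM/yyyy'') = FORMAT(##TempTable.x, ''MM/yyyy''));'
FROM tempdb.sys.columns AS c
WHERE c.object_id = OBJECT_ID(N'tempdb..##TempTable')
  AND c.name <> N'x';              -- skip the date column itself

EXEC sys.sp_executesql @sql;
[/code]

Printing @sql before executing it is an easy way to sanity-check the generated statements.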
Finding Dependencies of Objects
Hi there, I need to do some clean-up activities in my databases, so I intend to drop unwanted tables and views. For that I need to find out whether a table is being used by any other objects. How could I achieve this? Thanks in advance.
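A starting point (the object name below is just a placeholder) is the dependency views, which list objects that reference a given table or view:

[code="sql"]
-- Objects (views, procedures, functions, ...) that reference dbo.MyTable
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities(N'dbo.MyTable', 'OBJECT');

-- Or the catalog view of recorded dependencies across the database
SELECT OBJECT_SCHEMA_NAME(referencing_id) AS referencing_schema,
       OBJECT_NAME(referencing_id)        AS referencing_object,
       referenced_entity_name
FROM sys.sql_expression_dependencies
WHERE referenced_entity_name = N'MyTable';
[/code]

Both only see references recorded inside the database, so dynamic SQL and application code will not show up; treat them as a starting point rather than a guarantee before dropping anything.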
uniqueidentifier as identity field
Hello everyone,

So for years I was using int IDENTITY(1,1) as the primary key for all the tables I created, and then in this project I decided, you know, I like the uniqueidentifier using NEWSEQUENTIALID() to ensure a distinctly unique primary key. Then, after working with the PHP sqlsrv driver, I realized that no matter what, I am unable to retrieve the SCOPE_IDENTITY() of the insert. So of course I cruised back to SSMS and realized, crap, I can't even make the uniqueidentifier an identity. So now I'm wondering two things...

1: Can I take a shortcut and pull the uniqueidentifier of a newly inserted record, even though SCOPE_IDENTITY() will return NULL? or
2: Do I now have to add a column to each table, keep the uniqueidentifier (as all my tables are unified by that relationship), and also add a PK field as an int IDENTITY(1,1) primary key, in order to be able to pull the SCOPE_IDENTITY() on insert?

After days and hours of Google searching on this problem, I finally decided to come and talk to you all... I'd like to solve this before this product gets too far into its production lifespan.

Cheers.
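One option (a sketch; the table and column names are placeholders): keep the uniqueidentifier with a NEWSEQUENTIALID() default and capture the generated value with an OUTPUT clause instead of SCOPE_IDENTITY(), which only works for identity columns:

[code="sql"]
CREATE TABLE dbo.Demo
(
    Id   UNIQUEIDENTIFIER NOT NULL
         CONSTRAINT DF_Demo_Id DEFAULT NEWSEQUENTIALID()
         CONSTRAINT PK_Demo PRIMARY KEY,
    Name NVARCHAR(50) NOT NULL
);

DECLARE @NewIds TABLE (Id UNIQUEIDENTIFIER);

INSERT INTO dbo.Demo (Name)
OUTPUT inserted.Id INTO @NewIds    -- captures the default-generated GUID
VALUES (N'example');

SELECT Id FROM @NewIds;
[/code]

From the client side this is just another result set, so no SCOPE_IDENTITY() call (and no extra int identity column) should be needed.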
ShrinkFile on large database with TRUNCATEONLY
We have a large OLAP database, about 2.5 TB spread out over 3 data files on three different drives, and recently someone ran a query that created a table that continued to grow until the data files filled the available disk space (about 3 TB total - 1 TB per drive).

Tonight I plan on running a full backup (it's in Simple mode) and running a ShrinkFile on all three files sequentially with TRUNCATEONLY just so it will remove the space after the last extent. Just to verify I'm not missing anything, does anyone see any issues with doing this on such a large database? Or is there any way to tell ahead of time how much space this will recover?

Granted running a DB Shrink is one of those things you just don't do, but this is a one-time shot and unavoidable to get the file size back under control.

Thanks.
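For the "how much will this recover" question, one way to estimate (the logical file name below is a placeholder) is to compare allocated size with used space per file; TRUNCATEONLY can only release free space that sits beyond the last allocated extent, so the free-space figure is an upper bound rather than a promise:

[code="sql"]
SELECT name,
       size / 128                                                   AS SizeMB,   -- size is in 8 KB pages
       CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT) / 128           AS UsedMB,
       (size - CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT)) / 128  AS FreeMB
FROM sys.database_files;

-- Release only the unused space at the end of a file; run once per data file
DBCC SHRINKFILE (N'MyDataFile1', TRUNCATEONLY);
[/code]

Since TRUNCATEONLY does not move pages, it also avoids the index fragmentation that a normal shrink would cause.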
Stored procedure taking too long to run
I have an issue with a stored procedure that takes roughly 25 minutes to complete. The table that this stored procedure runs against has roughly 12 million records. I am not sure if I have coded this in the most efficient way. I am thinking that even though this works, I might need to consider a different route altogether. Here is the SP:

USE [FIS2]
GO
/****** Object:  StoredProcedure [dbo].[admin_DeleteE7TimeClockDetailDuplicates]    Script Date: 12/30/2013 16:07:26 ******/
SET ANSI_NULLS OFF
GO
SET QUOTED_IDENTIFIER OFF
GO
ALTER PROCEDURE [dbo].[admin_DeleteE7TimeClockDetailDuplicates]
AS
-- If Time Clock Details Table doesn't have the number of columns we're expecting, then abort
DECLARE @ColumnCount INT
SET @ColumnCount = (SELECT COUNT(column_name)
                    FROM information_schema.columns
                    WHERE table_name = 'ps_micros_e7_time_clock_details'
                      AND column_name NOT LIKE 'keyCol')

IF @ColumnCount <> 37 -- As of 5/21/10
BEGIN
    SELECT 'TABLE ps_micros_e7_time_clock_details appears to have changed ... unable to remove duplicates!' AS Error
    RETURN 1
END

IF EXISTS (SELECT * FROM tempdb.dbo.sysobjects WHERE xtype = 'U' AND id = object_id(N'tempdb..#StoreOIDs'))
    DROP TABLE #StoreOIDs

DECLARE @PollDate AS VARCHAR(10)
SET @PollDate = CONVERT(VARCHAR(10), GETDATE(), 101)
DECLARE @StoreOID AS VARCHAR(50)

SELECT DISTINCT store_oid
INTO #StoreOIDs
FROM ps_micros_e7_time_clock_details
ORDER BY store_oid

WHILE (SELECT COUNT(store_OID) FROM #StoreOIDs) > 0
BEGIN
    SET @StoreOID = (SELECT TOP 1 store_oid FROM #StoreOIDs)

    IF EXISTS (SELECT * FROM tempdb.dbo.sysobjects WHERE xtype = 'U' AND id = object_id(N'tempdb..#StoreRecs'))
        DROP TABLE #StoreRecs

    BEGIN TRANSACTION

    -- Select out all distinct records for a given store into a temp table
    SELECT DISTINCT store_oid, Seq, EmplSeq, JobSeq, OvertimeRuleSeq, ReasonDefSeq, ClockInStatus, ClockInTimeUtc,
           ClockOutStatus, ClockOutTimeUtc, InAdjustEmplSeq, OutAdjustEmplSeq, RegularSeconds, Overtime1Seconds,
           Overtime2Seconds, Overtime3Seconds, Overtime4Seconds, AccumulatedDailySeconds, AccumulatedPeriodSeconds,
           RegularPay, Overtime1Pay, Overtime2Pay, Overtime3Pay, Overtime4Pay, RegularRate, AccumulatedDays,
           ConsecutiveDays, Computed, BreakSeconds, PaidBreakSeconds, PaidBreakPay, ReportingTimeSeconds,
           ReportingTimePay, AdjustedClockInTime, AdjustedClockOutTime
    INTO #StoreRecs
    FROM ps_micros_e7_time_clock_details
    WHERE store_oid = @StoreOID

    IF @@ERROR <> 0 GOTO ERR_HANDLER

    -- Delete all records for this store from time clock details table
    DELETE FROM ps_micros_e7_time_clock_details WHERE store_oid = @StoreOID

    IF @@ERROR <> 0 GOTO ERR_HANDLER

    -- Insert distinct records back in for the given store
    INSERT INTO ps_micros_e7_time_clock_details
    SELECT store_oid, @PollDate AS pollDate, '' AS batchId, Seq, EmplSeq, JobSeq, OvertimeRuleSeq, ReasonDefSeq,
           ClockInStatus, ClockInTimeUtc, ClockOutStatus, ClockOutTimeUtc, InAdjustEmplSeq, OutAdjustEmplSeq,
           RegularSeconds, Overtime1Seconds, Overtime2Seconds, Overtime3Seconds, Overtime4Seconds,
           AccumulatedDailySeconds, AccumulatedPeriodSeconds, RegularPay, Overtime1Pay, Overtime2Pay, Overtime3Pay,
           Overtime4Pay, RegularRate, AccumulatedDays, ConsecutiveDays, Computed, BreakSeconds, PaidBreakSeconds,
           PaidBreakPay, ReportingTimeSeconds, ReportingTimePay, AdjustedClockInTime, AdjustedClockOutTime
    FROM #StoreRecs

    IF @@ERROR <> 0 GOTO ERR_HANDLER

    DELETE FROM #StoreOIDs WHERE store_oid = @StoreOID

    IF @@ERROR <> 0 GOTO ERR_HANDLER

    COMMIT TRANSACTION
END

RETURN 0

ERR_HANDLER:
    SELECT 'Unexpected error occurred!'
    ROLLBACK TRANSACTION
    RETURN 1

Any suggestions? Thanks in advance, PC
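For comparison, a different pattern worth testing (a sketch only; the PARTITION BY list is abbreviated and would need to include every column that defines a duplicate): a single set-based delete using ROW_NUMBER(), which removes the per-store loop, the temp tables and the delete-and-reinsert step entirely:

[code="sql"]
WITH Dupes AS
(
    SELECT ROW_NUMBER() OVER
           (
               PARTITION BY store_oid, Seq, EmplSeq, JobSeq, ClockInTimeUtc, ClockOutTimeUtc
               -- ...extend the PARTITION BY with the rest of the columns that
               --    make two rows "the same"
               ORDER BY (SELECT NULL)
           ) AS rn
    FROM dbo.ps_micros_e7_time_clock_details
)
DELETE FROM Dupes
WHERE rn > 1;
[/code]

Note the behaviour differs slightly from the current procedure, which re-stamps the surviving rows with a new pollDate and an empty batchId; the CTE delete keeps the original values on the row it retains.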
Hierarchy relationship SQL query [T-SQL]
Can anyone help me come up with a query for the below? The objective is to transform the entity as shown in the expected output.

[code="other"]
Child   Parent   Name
0001    0001     HQ
0100    0001     HQ Accounting Dept
0200    0001     HQ Marketing Dept
0300    0001     HQ HR Dept
0101    0100     Branch North 111
0102    0100     Branch North 112
0201    0200     Branch North 113
0301    0300     Branch North 114
8900    0300     Branch North 115
0387    8900     Sub Branch North 115

Expected output
----------------
Level1   Level2   Level3   Level4   Name
0001     0100     0101     N/A      Branch North 111
0001     0100     0102     N/A      Branch North 112
0001     0200     0201     N/A      Branch North 113
0001     0300     0301     N/A      Branch North 114
0001     0300     8900     0387     Sub Branch North 115
[/code]
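One way to approach it (a sketch; the temp table name and data types are assumptions, the root is identified by being its own parent as in the sample, and only four levels are handled, matching the expected output):

[code="sql"]
CREATE TABLE #Org (Child VARCHAR(4), Parent VARCHAR(4), Name VARCHAR(50));
INSERT INTO #Org VALUES
('0001','0001','HQ'), ('0100','0001','HQ Accounting Dept'), ('0200','0001','HQ Marketing Dept'),
('0300','0001','HQ HR Dept'), ('0101','0100','Branch North 111'), ('0102','0100','Branch North 112'),
('0201','0200','Branch North 113'), ('0301','0300','Branch North 114'), ('8900','0300','Branch North 115'),
('0387','8900','Sub Branch North 115');

WITH Tree AS
(
    -- Anchor: the root (it is its own parent)
    SELECT Child, Name,
           Child                    AS Level1,
           CAST(NULL AS VARCHAR(4)) AS Level2,
           CAST(NULL AS VARCHAR(4)) AS Level3,
           CAST(NULL AS VARCHAR(4)) AS Level4,
           1 AS Lvl
    FROM #Org
    WHERE Child = Parent

    UNION ALL

    -- Recursive step: place each child in the next level column
    SELECT c.Child, c.Name,
           t.Level1,
           CAST(CASE t.Lvl WHEN 1 THEN c.Child ELSE t.Level2 END AS VARCHAR(4)),
           CAST(CASE t.Lvl WHEN 2 THEN c.Child ELSE t.Level3 END AS VARCHAR(4)),
           CAST(CASE t.Lvl WHEN 3 THEN c.Child ELSE t.Level4 END AS VARCHAR(4)),
           t.Lvl + 1
    FROM #Org AS c
    JOIN Tree AS t ON c.Parent = t.Child
    WHERE c.Child <> c.Parent          -- stop the root joining back to itself
)
SELECT Level1, Level2,
       ISNULL(Level3, 'N/A') AS Level3,
       ISNULL(Level4, 'N/A') AS Level4,
       Name
FROM Tree AS t
WHERE NOT EXISTS (SELECT 1 FROM #Org AS c           -- leaf nodes only,
                  WHERE c.Parent = t.Child           -- as in the expected output
                    AND c.Child <> c.Parent)
ORDER BY t.Level1, t.Level2, t.Level3, t.Level4;

DROP TABLE #Org;
[/code]

If the hierarchy can be deeper than four levels, a path-string column plus a split at the end is usually easier to maintain than one CASE per level.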
How to fetch PDF files from a folder and save them in the database
I have a small project in which I need to fetch PDF files from a folder on my system and save them in the database, and also fetch each file's name and save that in the database.
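For the "save it in the database" part, a sketch of one option (the path, file name and table are all placeholders, and OPENROWSET(BULK ...) requires that the SQL Server service account can read the folder): import each file as varbinary(max) together with its name. Enumerating the folder itself is usually done from client code, SSIS, or xp_dirtree rather than plain T-SQL.

[code="sql"]
CREATE TABLE dbo.PdfStore
(
    PdfId    INT IDENTITY(1,1) PRIMARY KEY,
    FileName NVARCHAR(260)  NOT NULL,
    FileData VARBINARY(MAX) NOT NULL
);

-- One file; repeat (or generate dynamically) per file found in the folder
INSERT INTO dbo.PdfStore (FileName, FileData)
SELECT N'report.pdf', f.BulkColumn
FROM OPENROWSET(BULK N'C:\Docs\report.pdf', SINGLE_BLOB) AS f;
[/code]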
How to create staging table to handle incremental load
Hi Team,

We are designing a staging layer to handle incremental load. I want to start with a simple scenario to design the staging.

In the source database there are two tables, e.g. tbl_Department and tbl_Employee. Both tables load a single table in the destination database, e.g. tbl_EmployeRecord. The query which loads tbl_EmployeRecord is:

SELECT EMPID, EMPNAME, DEPTNAME
FROM tbl_Department D
INNER JOIN tbl_Employee E ON D.DEPARTMENTID = E.DEPARTMENTID

Now, we need to identify the incremental changes in tbl_Department and tbl_Employee, store them in staging, and load only the incremental changes to the destination.

The columns of the tables are:
tbl_Department: DEPARTMENTID, DEPTNAME
tbl_Employee: EMPID, EMPNAME, DEPARTMENTID
tbl_EmployeRecord: EMPID, EMPNAME, DEPTNAME

Kindly suggest how to design the staging for this to handle Insert, Update and Delete.

Regards
Jim
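As a starting point, one possible shape for the staging side (a sketch only; the Operation/LoadDate columns and the dbo.stg_Employee name are my assumptions, and the change detection compares source to target because the source tables shown have no modified-date or rowversion column to use as a watermark):

[code="sql"]
CREATE TABLE dbo.stg_Employee
(
    EMPID        INT,
    EMPNAME      VARCHAR(100),
    DEPARTMENTID INT,
    Operation    CHAR(1),                    -- 'I'nsert / 'U'pdate / 'D'elete
    LoadDate     DATETIME DEFAULT GETDATE()
);

-- New or changed employees
INSERT INTO dbo.stg_Employee (EMPID, EMPNAME, DEPARTMENTID, Operation)
SELECT e.EMPID, e.EMPNAME, e.DEPARTMENTID,
       CASE WHEN t.EMPID IS NULL THEN 'I' ELSE 'U' END
FROM dbo.tbl_Employee AS e
LEFT JOIN dbo.tbl_EmployeRecord AS t ON t.EMPID = e.EMPID
WHERE t.EMPID IS NULL
   OR t.EMPNAME <> e.EMPNAME;                -- extend with the other columns that matter

-- Employees that disappeared from the source
INSERT INTO dbo.stg_Employee (EMPID, Operation)
SELECT t.EMPID, 'D'
FROM dbo.tbl_EmployeRecord AS t
LEFT JOIN dbo.tbl_Employee AS e ON e.EMPID = t.EMPID
WHERE e.EMPID IS NULL;
[/code]

The destination load then becomes a MERGE (or separate INSERT/UPDATE/DELETE statements) that joins the staged keys back to tbl_Department to pick up DEPTNAME; a similar staging table for department changes catches renames that affect DEPTNAME without touching the employee rows.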
Difference between two date/times and flag each hour in between
I'm stumped! I have a request where I would like to take the start date/time and end date/time and flag (with an int) which hours (24-hour clock) have values between the two dates.

Example: a car comes into service on 2013-12-25 at 0800 and leaves 2013-12-25 at 1400. The difference is 6 hours and I need my table to show:

Column: Hour_6 Value: 0
Column: Hour_7 Value: 0
Column: Hour_8 Value: 1
Column: Hour_9 Value: 1
Column: Hour_10 Value: 1
Column: Hour_11 Value: 1
Column: Hour_12 Value: 1
Column: Hour_13 Value: 1
Column: Hour_14 Value: 0

As I'm working away at it, I'm trying to figure out how I could use a Time Dimension table for this, but I don't really see much. So far I have the difference between the two times in hours (hour_diff) and the start hour (min_hour), so I would like to do something where I update the first hour (min_hour) and update columns based on the number of hours (hour_diff).

Advice on how I can go about this would be greatly appreciated! Thank you and Happy New Year!
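A sketch of the set-based idea (the variables stand in for the real start/end columns, and it assumes the interval does not cross midnight): build one row per clock hour from a small tally, flag the hours the interval overlaps, and then pivot those rows into the Hour_0 ... Hour_23 columns with MAX(CASE ...) or PIVOT:

[code="sql"]
DECLARE @Start DATETIME = '2013-12-25 08:00',
        @End   DATETIME = '2013-12-25 14:00';

SELECT h.Hr,
       CASE WHEN @Start < DATEADD(HOUR, h.Hr + 1, CAST(CAST(@Start AS DATE) AS DATETIME))
             AND @End   > DATEADD(HOUR, h.Hr,     CAST(CAST(@Start AS DATE) AS DATETIME))
            THEN 1 ELSE 0 END AS InService       -- 1 where the interval overlaps the hour
FROM (VALUES ( 0),( 1),( 2),( 3),( 4),( 5),( 6),( 7),( 8),( 9),(10),(11),
             (12),(13),(14),(15),(16),(17),(18),(19),(20),(21),(22),(23)) AS h(Hr);
[/code]

With the 0800-1400 example this returns 1 for hours 8 through 13 and 0 for hour 14, matching the expected output; an existing Time dimension table can replace the VALUES tally if you already have one.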
Where can i find the latest world map shape file?
I am creating an SSRS report with a geographic world map. I am able to download a few shape files and use them. However, the maps which I have downloaded are missing a few locations (e.g. South Sudan, which was formed a few years back). Where can I find the latest world map shape file?

Regards,
Kethe
T-SQL Help
Hi,

CREATE TABLE #TempTable
(
    [GROUP]    VARCHAR(100),
    [Status]   Varchar(10),
    CreateDate Datetime,
    ClosedDate DateTime,
    RequestID  INT
)

INSERT INTO #TempTable ([GROUP], Status, CreateDate, ClosedDate, RequestID)
VALUES ('A',                    -- GROUP - varchar(100)
        'Open',                 -- Status - varchar(10)
        '10-21-2013 21:33:41',  -- OpenedDate - datetime
        '01-01-1970 00:00:00',  -- ClosedDate - datetime
        1234)                   -- RequestID - int

INSERT INTO #TempTable ([GROUP], Status, CreateDate, ClosedDate, RequestID)
VALUES ('A',                    -- GROUP - varchar(100)
        'Closed',               -- Status - varchar(10)
        '10-21-2013 09:14:41',  -- OpenedDate - datetime
        '11-01-2013 00:00:00',  -- ClosedDate - datetime
        2345)                   -- RequestID - int

INSERT INTO #TempTable ([GROUP], Status, CreateDate, ClosedDate, RequestID)
VALUES ('A',                    -- GROUP - varchar(100)
        'Open',                 -- Status - varchar(10)
        '10-23-2013 09:11:41',  -- OpenedDate - datetime
        '11-23-2013 00:00:00',  -- ClosedDate - datetime
        4567)                   -- RequestID - int

INSERT INTO #TempTable ([GROUP], Status, CreateDate, ClosedDate, RequestID)
VALUES ('A',                    -- GROUP - varchar(100)
        'Closed',               -- Status - varchar(10)
        '1-1-2013 09:03:41',    -- OpenedDate - datetime
        '08-15-2013 00:00:00',  -- ClosedDate - datetime
        8600)                   -- RequestID - int

INSERT INTO #TempTable ([GROUP], Status, CreateDate, ClosedDate, RequestID)
VALUES ('B',                    -- GROUP - varchar(100)
        'Closed',               -- Status - varchar(10)
        '06-01-2013 09:12:41',  -- OpenedDate - datetime
        '08-02-2013 00:00:00',  -- ClosedDate - datetime
        1111)                   -- RequestID - int

INSERT INTO #TempTable ([GROUP], Status, CreateDate, ClosedDate, RequestID)
VALUES ('B',                    -- GROUP - varchar(100)
        'Closed',               -- Status - varchar(10)
        '07-01-2013 09:44:41',  -- OpenedDate - datetime
        '09-03-2013 00:00:00',  -- ClosedDate - datetime
        222)                    -- RequestID - int

INSERT INTO #TempTable ([GROUP], Status, CreateDate, ClosedDate, RequestID)
VALUES ('B',                    -- GROUP - varchar(100)
        'Closed',               -- Status - varchar(10)
        '01-01-2013 09:33:41',  -- OpenedDate - datetime
        '12-01-2013 00:00:00',  -- ClosedDate - datetime
        322)                    -- RequestID - int

INSERT INTO #TempTable ([GROUP], Status, CreateDate, ClosedDate, RequestID)
VALUES ('B',                    -- GROUP - varchar(100)
        'Open',                 -- Status - varchar(10)
        '06-01-2013 09:33:41',  -- OpenedDate - datetime
        '01-01-1970 00:00:00',  -- ClosedDate - datetime
        333)                    -- RequestID - int

SELECT * FROM #TempTable ORDER BY [Group], CreateDate DESC

-- total Count of tickets created on or before createdate by groups example :
-- For Group A total tickets opened on or before CreateDate 1/1/2013 = 1
-- For Group A total tickets opened on or before CreateDate 10/21/2013 = 3

-- total Count of tickets Closed on or after createdate by groups
-- Number of tickets closed on or after 1/1/2013 for Group A = 3 (date ClosedDate should be used to compare with CreateDate 1/1/2013)
--     (3 = 2013-08-15 00:00:00.000, 2013-11-01 00:00:00.000, 2013-11-23 00:00:00.000)
-- Number of tickets closed on or after 10/21/2013 for Group A = 2 (date ClosedDate should be used to compare with CreateDate 10/21/2013)
--     (2 = 2013-11-01 00:00:00.000, 2013-11-23 00:00:00.000)

-- The final result should look LIKE the below :
SELECT 'A' AS [GROUP], '1/1/2013'   AS CreateDate, 1 AS OpenedTicketstillDate, 3 AS closedTicketTillDate
UNION SELECT 'A' AS [GROUP], '10/21/2013' AS CreateDate, 3 AS OpenedTicketstillDate, 2 AS closedTicketTillDate
UNION SELECT 'A' AS [GROUP], '10/23/2013' AS CreateDate, 4 AS OpenedTicketstillDate, 2 AS closedTicketTillDate
UNION SELECT 'B' AS [GROUP], '06/01/2013' AS CreateDate, 1 AS OpenedTicketstillDate, 3 AS closedTicketTillDate
UNION SELECT 'B' AS [GROUP], '06/22/2013' AS CreateDate, 2 AS OpenedTicketstillDate, 0 AS closedTicketTillDate
UNION SELECT 'B' AS [GROUP], '07/01/2013' AS CreateDate, 3 AS OpenedTicketstillDate, 0 AS closedTicketTillDate
UNION SELECT 'B' AS [GROUP], '01/05/2014' AS CreateDate, 4 AS OpenedTicketstillDate, 3 AS closedTicketTillDate

DROP TABLE #TempTable

Thanks,
PSB
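For what it's worth, a sketch of one way to produce those counts (run it before the DROP TABLE; it treats the '01-01-1970' ClosedDate in the sample data as "still open", and since the expected output appears to group by date only, a CAST(CreateDate AS DATE) may be wanted around the grouping column):

[code="sql"]
SELECT d.[GROUP],
       d.CreateDate,
       (SELECT COUNT(*) FROM #TempTable AS t
        WHERE t.[GROUP] = d.[GROUP]
          AND t.CreateDate <= d.CreateDate)       AS OpenedTicketstillDate,
       (SELECT COUNT(*) FROM #TempTable AS t
        WHERE t.[GROUP] = d.[GROUP]
          AND t.ClosedDate >= d.CreateDate
          AND t.ClosedDate <> '01-01-1970')       AS closedTicketTillDate
FROM (SELECT DISTINCT [GROUP], CreateDate FROM #TempTable) AS d
ORDER BY d.[GROUP], d.CreateDate;
[/code]

Against the sample data this reproduces the Group A figures quoted in the comments (1/3 for 1/1/2013 and 3/2 for 10/21/2013).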