text: string, lengths 20 to 1.01M
url: string, lengths 14 to 1.25k
dump: string, lengths 9 to 15
lang: string, 4 classes
source: string, 4 classes
How can my program have 2 languages?

Hi! In my program I have an option in the preferences where you can change the language English->Greek or Greek->English. I have a bool that is true when the language is Greek and false when it is English. So what I am doing at every message box or dialog is to first check whether the bool says Greek or English, and if it is true (Greek) I translate the dialog to Greek, or if it is a message box the text is in Greek.. So I am sure that this isn't the correct way to do it.. Can you tell me what is the correct way to provide my program with more than the English language? I am using Ubuntu.

There's a good overview at

Qt have good integrated system for it. See "QTranslator":

- valandil211 last edited by
Also, you might want to re-think your design for switching. What if your application becomes a best-seller, and you'd like to expand to Turkey? Or to Italy? Even if in the UI you keep the choice simple, I'd make sure it is easy to add other languages as well in the rest of your application.

[quote author="Joey Dumont" date="1309262146"][/quote] That's what I am using..

[quote author="Vass" date="1309261862"]Qt have good integrated system for it. See "QTranslator":[/quote] Could I have some tips on this? I don't know anything about QTranslator.. Any example with a simple label saying "Hello"? I don't know anything more than just a Class Reference :/ EDIT: I am now reading the Qt Linguist Manual. I hope it will help me..

The following "FAQ": contains an example that can be useful.

I just created my first translation file.. I will try to do it on my own.. I will check the FAQ later.. Thanks :)

- valandil211 last edited by
If you consider your problem as solved, could you please edit the thread title and add [SOLVED]. Thanks!

You should read the documentation carefully and you will have no problem. In case you can't manage to do the stuff, do these:

1- In your Qt project file (.pro) add this line: @TRANSLATIONS += program_gr.ts@ Of course the names and the number of translations are up to you.

2- Start the lupdate tool (an executable installed with the Qt SDK), giving it your project file name, for example: @ lupdate ./program.pro @ This scans your source code tree and determines the translatable strings, then generates an XML-based translation file. The above example generates program_gr.ts.

3- Now you can perform the actual translation! Open your translation file with Qt Linguist and translate the strings, then save the file and exit. It's easy!

4- Now that you have translated strings, you should make a binary translation file. Just call lrelease and give it your .ts file name: @ lrelease program_gr.ts @ This will generate program_gr.qm, which you can load in your program.

5- When the application starts, load your .qm file using the QTranslator class. In a widget-based application it may look like this:
@
#include <QtGui/QApplication>
#include <QTranslator>
#include "mainwindow.h"

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);

    QTranslator translator;
    translator.load(a.applicationDirPath() + "/program_gr.qm"); // or wherever else you put your .qm file
    a.installTranslator(&translator);

    MainWindow w;
    w.show();
    return a.exec();
}
@
Note that you can load translations dynamically and use them whenever you need, but you should be careful to call the retranslation method for all of your dialogs.

Basically I saw a tutorial here ( ). It is the same as yours except that I can't understand what the guy is saying in Indonesian..
Also at step 5 i have this @QTranslator translator; translator.load("program_gr"); a.installTranslator(&translator);@ Ok so i create the gm file. BUT. As i said my program is for Ubuntu so i will make a deb file ( like setup.exe files at Windows.. ).. How can i provide the greek translation file and how after the program is installed users can translate the app to greek? ( Install my greek translation file so they can see the app translated at greek ) Ok, it looks like an "Installation and Deployment" problem... In linux directories used for configuration files are: /etc/program and /home/user_name/.program and /usr/share/program/ Due to linux file system standards I suggest put your translation files in /usr/share/program/translations. Ok thanks for the info! I will try it.. When everything is ok i will edit my post to [SOLVED] :D The translations at /usr/share/program/translations must be at .ts format or .qm format? [quote author="Leon" date="1309285305"]The translations at /usr/share/program/translations must be at .ts format or .qm format?[/quote] .qm .ts format for developers (translation source) .qm format for end users (translation binary) [quote author="Leon" date="1309285305"]The translations at /usr/share/program/translations must be at .ts format or .qm format?[/quote] You need only qm files in runtime. ts files are used to translate. at runtime you want to load translated data. Also notice that there is no magic with /usr/share/program/translations. that was just a suggestion. you could put your translations in every other location... Ok thanks guys.. I am working on it.. soroush i know it is a suggestion :P - mlong Moderators last edited by The .qm files can also be included as resources, too. I am thinking of having english greek dutch and french language at my program.. Should i have a combobox with this 4 languages only ( using the resource file) or should i check for file existences at the path that i have my translations and then add them to the combobox? Second way sounds better - this allow your users adding new languages for your app without re-compilation. of course if your public .ts file. Yes but who would do that? I think i will go with the first one and if anyone translate my app i will add the language at another version.. :) Hello again! What do you suggest? After the language has been changed ( at a combobox probably ) instantly everything change to the language you selected with "this way": or change the language after you restart the application? What I did was this: 1- Create a proxy subclass of QDialog named ProxyDialog wich have only one method: retranslateUI() 2- Subclass all dialogs from ProxyDialog 3- When you need to translate UI, just iterate over children of type ProxyDialog* and call retranslate UI for all of them. this is possible using findChildren function. So you suggest that i should translate everything instantly? Yes. I'm not sure if there is a better way or not, but I looked around a lot and couldn't fount anything. This works well for my applications. You don't have to restart your application to see translation results. All open dialogs will be translated on-the-fly. Ok then, thanks again!
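For readers who want to see what the on-the-fly suggestion above looks like in code, here is a rough sketch. The ProxyDialog class and the retranslateUI() name come from the post; the switchLanguage() helper and everything inside it are made up for illustration, not taken from any real project:
@
#include <QApplication>
#include <QDialog>
#include <QTranslator>

// Base class for all dialogs that should react to a language change.
class ProxyDialog : public QDialog {
public:
    explicit ProxyDialog(QWidget *parent = 0) : QDialog(parent) {}
    virtual void retranslateUI() = 0; // each dialog re-applies its tr() strings here
};

// Called from the language combo box handler (names are illustrative).
void switchLanguage(QApplication &app, QTranslator &translator,
                    QWidget *mainWindow, const QString &qmPath)
{
    translator.load(qmPath); // e.g. "/usr/share/program/translations/program_gr.qm"
    app.installTranslator(&translator);

    // Ask every open ProxyDialog to refresh its visible strings.
    foreach (ProxyDialog *dlg, mainWindow->findChildren<ProxyDialog *>())
        dlg->retranslateUI();
}
@
Each concrete dialog implements retranslateUI() by re-applying its tr() strings, which is why no restart is needed after the translator is swapped.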
https://forum.qt.io/topic/7075/how-can-my-program-have-2-languages/3
CC-MAIN-2019-43
en
refinedweb
I am recent in Java, I can read and understand code, know the high level (can do UML) and just recently I finally started to understand how to structure code so I can start writing. I need help for the following, which will provide me another milestone into the understanding of the code NOTE: if anyone can point to a tutorial that would explain with full examples (not only a constructor out of context, or a class, but also for that example how to call it, and what is returned..., I would gladly appreciate. for now that would help me. also, if I am steering in the wrong direction, please let me know and correct me in a line or two. I have a main, where the input is taken from the user for 1 value (this valuewill define the rows and colons for an array) = this is ok. I need to create the array. then I need to populate it with random numbers (after that I will add more manipulations on it, but then I will be able to do so, as long as I pass that stage) from that value I will need to generate more arrays (with derived sizes) but leave them with null values. now I already did that in my main, but that is linear programming, and once again: I need to bridge the gap between UML and finished code. for that I went the way of 1_ creating a class MyArr 2_ create a constructor with takes an int mySide as argument and use the int to create a 2 dimension array[mySide][mySide] as I understand, when called from the main, that should instantiate an object of MyArr class. 3_ create a method "populate" which can be called from main with the object instantiated passed to it as an argument. it takes that object and populate the array created (not certain of the terminology here) in other words, seen from the main. create a new instance of the Myarr (which in fact is / contains an array of the size passed in call. then if needed, call Myarr.populate(????) that will populate that array. then I can create another instance of Myarr with a different size provided and not populate it (or leave it filled with null I paste a skeleton. as you can see, I left empthy most of the components for the class but creating the array when the class is instantiated would look like: int[][] a1 = new int[mySide][mySide] then calling it would be something like: MyArr baneArr = new MyArr(mySideInt); and here is the first problem I face: baneArr will be an object. first question: is that correct? if yes, then second step: populate will be a nested loop with random numbers (I have no problem there besides how to access that object's array? something like: myArr.populate(baneArr) and if that is the way how to I use it in populate: baneArr[counterX][counterY] = myRandomNumber ? I tried many ways over the past week end, and still face too many issues where I am not passing the correct type of argument, or if I fix that then I cannot set the values in the command described above. I have 15 years in telecoms, and never touched programming besides reading code, and heavy bash /perl scripting. this thing is not part of a project, nor an assignment, it is the only way I can understand this and a single example would be very good to help me move forward. I learned everything by taking examples and playing with them, unfortunately all the example/tutorial I see beyond hello world and such only show pieces of code without the full working code. then it jumps to code so complex that is it beyond reverse engineering for intermediate learners. 
in between I cannot find information that would fill the gap, and I have been looking for years; this time I am not giving up. please help

public class Main {
    public static void main(String[] args) {
        // take the size the user wants
        String mySizeStr = JOptionPane.showInputDialog(null, "enter the required size", "input the size", JOptionPane.QUESTION_MESSAGE);
        // get the square root for that given size
        double mySringDbl = Double.parseDouble(mySizeStr);
        double mySideDbl = Math.sqrt(mySringDbl);
        // convert it to an int (required for an array)
        int mySideInt = (int) mySideDbl;
        // call MyArray ???
    }
}

public class MyArr {
    // here I need a constructor
    public MyArr(int size){
    }

    public void populate( ???? ){
    }
}

Edited by macosxnerd101: Welcome to DIC!
This post has been edited by macosxnerd101: 24 June 2010 - 09:13 AM
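Since the post asks for one small, complete example rather than fragments, here is a minimal sketch of how the skeleton above could be filled in. The names MyArr, a1, baneArr and populate come from the post; the use of java.util.Random and the 0..99 value range are arbitrary choices made only for this illustration:

import javax.swing.JOptionPane;
import java.util.Random;

class MyArr {
    // the instance "is / contains" the 2-D array described in the post
    int[][] a1;

    // constructor: build a side x side array
    MyArr(int mySide) {
        a1 = new int[mySide][mySide];
    }

    // fill this object's own array with random numbers
    void populate() {
        Random rnd = new Random();
        for (int x = 0; x < a1.length; x++) {
            for (int y = 0; y < a1[x].length; y++) {
                a1[x][y] = rnd.nextInt(100); // random value 0..99
            }
        }
    }
}

public class Main {
    public static void main(String[] args) {
        String mySizeStr = JOptionPane.showInputDialog(null,
                "enter the required size", "input the size",
                JOptionPane.QUESTION_MESSAGE);
        int mySideInt = (int) Math.sqrt(Double.parseDouble(mySizeStr));

        MyArr baneArr = new MyArr(mySideInt);      // yes, baneArr is an object
        baneArr.populate();                        // it fills its own array
        System.out.println(baneArr.a1[0][0]);      // access the array through the object's field

        MyArr emptyArr = new MyArr(mySideInt * 2); // a second, derived-size array left unpopulated
    }
}

With populate() written as an instance method there is no need to pass the object to itself: instead of myArr.populate(baneArr) you simply call baneArr.populate(), and inside the method the array is reached through the object's own field, a1[counterX][counterY] = ... . Note that for an int[][] the unpopulated contents default to 0 rather than null.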
http://www.dreamincode.net/forums/topic/178892-create-an-array-class/
CC-MAIN-2017-26
en
refinedweb
I was going through and commenting c++ source code to gain a better understanding of it, when I came across this range checking code. I would like to know what the maximum call to LargeAllocate would be and how or why range checking might be done in this way. Also if you see errors in my comments please let me know. Thanks in advance dword declaration: code (it stretches 160 lengthwise so you may want to copy it into you favorite text editor first)code (it stretches 160 lengthwise so you may want to copy it into you favorite text editor first)Code:typedef unsigned int dword; Edit: I think I made a mistake in my comments. Is it true that the constants (255 and 32767) are up-converted for comparison to the length of dword, and if so would it be unsigned or signed?Edit: I think I made a mistake in my comments. Is it true that the constants (255 and 32767) are up-converted for comparison to the length of dword, and if so would it be unsigned or signed?Code://///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// // // MEMBER FUNCTION: idHeap::Allocate // // Allocate memory based off of the bytes needed. There is some funky magic here with checking ranges and I am not sure of the speed increase over // the loss in readability // // ~~ GLOBAL VARIABLE: c_heapAllocRunningCount // Gets incremented to keep the count of heap allocations current. // // ~~ MACRO: USE_LIBC_MALLOC // Forces use of standard C allocation functions for debugging and preformance testing. // /////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// void* idHeap::Allocate(const dword bytes){ //Checks to see if bytes is a zero, if it is then there isn't anything to allocate if(!bytes){ //FIX. COULD WE NOT JUST CHECK TO SEE IF IT WAS LESS THAN 1 HERE AND AVOID THE MAGIC LATER? if(bytes < 1){ return NULL; } //Increment the count of heap allocations c_heapAllocRunningCount++; //Allocate memory via our custom functions or standard C functions #if USE_LIBC_MALLOC return malloc(bytes); #else //Check if bytes is within the range 0 to 255 (it was checked earlier in the function to be non-zero so the desired range is actually 1 to 255). This is //done by "not-ing" the value 255 (denoted by the ~) which is represented by the compiler as two's compliment -- meaning 255 in binary looks like 01111111. //Not-ing it would give the value 10000000 (-256). So if you preform a bitwise "and" on the number 'bytes' then only values within the range 0 to 255 //would return 0. Also, 255 is the signed one byte integer maximum. if(!(bytes & ~255)){ return SmallAllocate(bytes); } //Same as the previous check except that the desired range is 1 to the maximum value of a signed 2 byte integer. if(!(bytes & ~32767)){ return MediumAllocate(bytes); } //This basically means that the unsigned 4 byte integer 'bytes' is greater than the range 1 to 32,767 (or over 15 bit in length). In turn that means bytes' //is in the range 32,768 (short int maximum) to 4,294,967,295 (unsigned int maximum) or 4gb. However, apparently c++ has it so you cant have constant decimal //numbers over 2,147,483,647 or 2gb in a function call or variable assignment -- so as of this revision I don't know what the maximum call to this //would be (2gb or 4gb). return LargeAllocate(bytes); #endif }
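To see the bit trick in isolation, here is a small standalone snippet (not from the original source, just an illustration) showing that !(bytes & ~255) is simply an "is it in 0..255?" test and !(bytes & ~32767) an "is it in 0..32767?" test, when bytes is unsigned:

#include <cstdio>
#include <initializer_list>

typedef unsigned int dword;

int main() {
    // ~255u   == 0xFFFFFF00: every bit above the low 8 bits.
    // ~32767u == 0xFFFF8000: every bit above the low 15 bits.
    // (bytes & ~255u) == 0 exactly when bytes fits in 8 bits (0..255),
    // (bytes & ~32767u) == 0 exactly when bytes fits in 15 bits (0..32767).
    for (dword bytes : {0u, 1u, 255u, 256u, 32767u, 32768u, 4294967295u}) {
        std::printf("%10u  small:%d  medium:%d\n",
                    bytes,
                    !(bytes & ~255u),     // would go to SmallAllocate (if non-zero)
                    !(bytes & ~32767u));  // would go to MediumAllocate
    }
    return 0;
}

On the questions in the post: in bytes & ~255 the plain int constant is indeed converted to unsigned for the &, so it behaves exactly like ~255u above. And since bytes is an unsigned 32-bit dword, everything from 32768 up to 4294967295 (0xFFFFFFFF) falls through to LargeAllocate; constants that large are legal in C++, since an unsuffixed decimal literal that does not fit in int simply takes a larger signed type, and an unsigned literal such as 4294967295U is also fine.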
https://cboard.cprogramming.com/cplusplus-programming/144307-help-complicated-strange-magic.html
CC-MAIN-2017-26
en
refinedweb
Dependency Management Using Git Submodules As an iOS developer, dependency management is something you’ll eventually encounter in your coding adventures. Whether you want to integrate someone else’s open source project, add a library from a third party service, or even reuse code across your own projects, dependency management helps you manage these complex code relationships — and guard against some messy problems. In this iOS dependency management tutorial, you’ll learn how to use Git Submodules to manage dependencies for your iOS application. This will include both a private dependency for something like shared code between your own code bases, as well as a separate example where you pull in an outside third party dependency as a Git Submodule. Getting Started Download the starter project for this tutorial. Build and run, and you should see the following: You can try to select a photo, but the app won’t do much in response. Throughout the rest of this tutorial, you’ll add behavior to the app while integrating other dependencies with Git Submodules. First — a little bit of background on dependency management. What Is Dependency Management? Dependency management is a concept that spans all software development disciplines, not just iOS development. It’s the practice of using configuration mechanisms to add extra code, and therefore extra features, to your software. Probably the most basic form of dependency management is to simply copy and paste code into your own app. There are several problems with this approach though: - The original reference is lost. When you copy and paste code, there’s no reference back to the original spot where the code was found, and it’s easily forgotten about. - Updates aren’t easily integrated. When changes are made to the original code you copied, it becomes very hard to track what’s changed so you can apply those changes back to your cut and pasted code. Some third party libraries can have thousands of lines of code, spread across hundreds of files, and it’s impossible to keep things synchronized manually. - Version information isn’t maintained. Proper software development practices call for versioning releases of your code. You’ll find this consistent in third party libraries you use in your projects. When you copy and paste code, there’s no easy way to know you’re using version 1.2.2 of library XYZ, and how will you remember to update your code when version 1.2.3 is released? I’m sure it wasn’t hard to convince you copy and pasting code is a terrible idea. :] Dependency Management Tools There are several great tools to manage dependencies in iOS development, but it can be confusing to know which one to use. CocoaPods might be the most popular. It certainly has a large number of libraries available for use. Carthage is the younger cousin to CocoaPods. While newer, it’s written in Swift and some find it easier to use than CocoaPods. Then there’s the Swift Package Manager, which is even newer to the scene and is stewarded by Apple through the open source community. These are just some of the big players in the iOS dependency management game — and there’s even more options beyond those. But what if I told you you didn’t need to use an additional tool to manage your dependencies? Would you Git excited? :] If you’re already using Git for version management for your iOS project, you can use Git itself to manage your dependencies. In the next section, you’ll see how to manage dependencies using Git Submodules. Let’s Git started! 
Working With A Private Dependency As an iOS developer, you’ll often work on more than one project. You’ll also find yourself repeatedly using the same pieces of code to solve similar problems. You can easily use Git Submodules to create a dependency from one project (the main project) to another personal project (the private dependency). Connecting a Private Dependency Open Terminal and navigate to the folder of your sample project. Execute ls to see the contents of the folder. You’ll know you’re in the right place when it looks like this: AndyRW|⇒ cd PhotoTagger PhotoTagger|master ⇒ ls PhotoTagger PhotoTagger.xcodeproj PhotoTagger|master ⇒ The first thing you need to do is initialize the project as a Git repository. Execute git init: PhotoTagger|⇒ git init Initialized empty Git repository in /Users/andyo/Documents/AndyRW/PhotoTagger/.git/ PhotoTagger|master⚡ ⇒ This sets up the current folder and its contents as a Git repository, though nothing is actually version managed yet. Next, execute git add . followed by git commit -m "Initial project": PhotoTagger|master⚡ ⇒ git add . PhotoTagger|master⚡ ⇒ git commit -m "Initial project" [master (root-commit) 1388581] Initial project 13 files changed, 1050 insertions(+) create mode 100755 .gitignore create mode 100755 PhotoTagger.xcodeproj/project.pbxproj create mode 100755 PhotoTagger.xcodeproj/project.xcworkspace/contents.xcworkspacedata create mode 100755 PhotoTagger/AppDelegate.swift create mode 100755 PhotoTagger/Assets.xcassets/AppIcon.appiconset/Contents.json create mode 100755 PhotoTagger/Base.lproj/LaunchScreen.storyboard create mode 100755 PhotoTagger/Base.lproj/Main.storyboard create mode 100755 PhotoTagger/Info.plist create mode 100755 PhotoTagger/PhotoColor.swift create mode 100644 PhotoTagger/TagsColorTableData.swift create mode 100755 PhotoTagger/TagsColorsTableViewController.swift create mode 100755 PhotoTagger/TagsColorsViewController.swift create mode 100755 PhotoTagger/ViewController.swift PhotoTagger|master ⇒ This adds the contents of the project to version management and takes a snapshot of the contents as a commit. Execute git status to confirm the state of the local repository; i.e. to confirm there are no outstanding changes that haven’t been committed: PhotoTagger|master ⇒ git status On branch master nothing to commit, working tree clean PhotoTagger|master ⇒ This means your local Git repository sees no local changes. That’s a good thing, since you haven’t changed anything in the code base. Now you’ve confirmed the state of the local Git repository for the project, it’s time to create your private dependency. For this tutorial, the private dependency you create will be a reusable project that helps specify URL paths to the Imagga API. Not only will it be useful for this project, but any other future projects you create that use the Imagga API will be able to reuse this private dependency. In Xcode, select File\New\Project…. Select Cocoa Touch Framework\Next. Enter ImaggaRouter as the Product Name. Click Next, and navigate to the parent of the PhotoTagger project folder. Then click Create to create the new project. You should now be looking at an empty Xcode project representing your new project. Folder-wise, this project should be in the same parent folder as the PhotoTagger folder. Now you have your private dependency project created, you’re ready to designate it as your first Git Submodule. First, the private dependency project needs to be initialized as a Git repository itself. 
In Terminal, navigate into the ImaggaRouter folder and execute git init: AndyRW|⇒ cd ImaggaRouter ImaggaRouter|⇒ git init Initialized empty Git repository in /Users/andyo/Documents/AndyRW/ImaggaRouter/.git/ This initializes the ImaggaRouter project as a Git repository. Next you need to add and commit the empty project. Execute git add . followed by git commit -m "Initial ImaggaRouter": ImaggaRouter|master⚡ ⇒ git add . git % ImaggaRouter|master⚡ ⇒ git commit -m "Initial ImaggaRouter" [master (root-commit) 554d7a1] Initial ImaggaRouter 36 files changed, 517 insertions(+) This tells Git to be aware of the files from the empty project. Committing them marks a “snapshot” of the state of the files. This concludes setting up ImaggaRouter as a Git repository with an initial set of files. Now you need to add it as a submodule of PhotoTagger. First, create a folder hierarchy to store your dependencies. Navigate to the root folder of PhotoTagger and execute mkdir Frameworks; mkdir Frameworks/Internal: AndyRW|⇒ cd PhotoTagger PhotoTagger|⇒ mkdir Frameworks; mkdir Frameworks/Internal PhotoTagger|⇒ This step isn’t technically necessary for working with Git Submodules, but this folder hierarchy is a good way to keep track of the locations of dependencies in your project. Now to finally identify ImaggaRouter as a dependency! From the root folder of PhotoTagger, execute git submodule add ../ImaggaRouter Frameworks/Internal/ImaggaRouter/: PhotoTagger|master ⇒ git submodule add ../ImaggaRouter Frameworks/Internal/ImaggaRouter/ Cloning into '/Users/andyo/Documents/AndyRW/PhotoTagger/Frameworks/Internal/ImaggaRouter'... done. PhotoTagger|master⚡ ⇒ This command tells the Git repository for PhotoTagger about the dependency on another Git repository (the one for ImaggaRouter). You’ll see this step creates a new file as well: .gitmodules. [submodule "Frameworks/Internal/ImaggaRouter"] path = Frameworks/Internal/ImaggaRouter url = ../ImaggaRouter This file contains the actual definition for the submodule. You’ll also notice this file is marked as a new file from Git’s perspective. Execute git status to see the current state of the local repository: PhotoTagger|master⚡ ⇒ git status On branch master Changes to be committed: (use "git reset HEAD <file>..." to unstage) new file: .gitmodules new file: Frameworks/Internal/ImaggaRouter PhotoTagger|master⚡ ⇒ It’s nice git status is reporting changes to both the .gitmodules file and the new directory for ImaggaRouter. On the other hand, it’s not so nice git status leaves out any information about the submodule itself. Since submodules are treated like nested repositories, git status will not report on submodules by default. Luckily, this can be changed. Execute git config --global status.submoduleSummary true to change this default: PhotoTagger|master⚡ ⇒ git config --global status.submoduleSummary true PhotoTagger|master⚡ ⇒ Check the output of git status again: PhotoTagger|master⚡ ⇒ git status On branch master Changes to be committed: (use "git reset HEAD <file>..." to unstage) new file: .gitmodules new file: Frameworks/Internal Submodule changes to be committed: * Frameworks/Internal 0000000...554d7a1 (1): > Initial ImaggaRouter PhotoTagger|master⚡ ⇒ Awesome! git status now reports on the state of the submodule as well as the main project, and it indicates to you specifically what will be committed. At this point, Git is aware of the new submodule for the project, but hasn’t actually marked a snapshot of the state. To do that, you’ll repeat the same steps from earlier. 
Execute git add ., followed by git commit -m "Add ImaggaRouter dependency": PhotoTagger|master⚡ ⇒ git add . PhotoTagger|master⚡ ⇒ git commit -m "Add ImaggaRouter dependency" [master 6a0d257] Add ImaggaRouter dependency 2 files changed, 4 insertions(+) create mode 100644 .gitmodules create mode 160000 Frameworks/Internal PhotoTagger|master ⇒ Now the local Git repository for the PhotoTagger project has taken a snapshot of the current project and its configuration with a dependency on ImaggaRouter. Now you need to add the ImaggaRouter project to the Xcode project for PhotoTagger. In Finder, navigate within the PhotoTagger folder to Frameworks/Internal/ImaggaRouter and drag ImaggaRouter.xcodeproj into the root of the PhotoTagger Xcode project. Adding the ImaggaRouter project to Xcode makes the code within it (although there’s none yet) available for use within the PhotoTagger project. You also need to link the framework with the target. Do this in the General settings for the PhotoTagger target: This will result in a change to PhotoTagger.xcodeproj/project.pbxproj which will need to be committed. First, you can verify there is a local change that needs to be committed by executing git status. PhotoTagger|master⚡ ⇒ git status On branch master Your branch is ahead of 'origin/master' by 1 commit. (use "git push" to publish your local commits) Changes not staged for commit: (use "git add <file>..." to update what will be committed) (use "git checkout -- <file>..." to discard changes in working directory) modified: PhotoTagger.xcodeproj/project.pbxproj no changes added to commit (use "git add" and/or "git commit -a") To commit the change, execute git add . followed by git commit -m "Add ImaggaRouter project to Xcode": PhotoTagger|master⚡ ⇒ git add . PhotoTagger|master⚡ ⇒ git commit -m "Add ImaggaRouter project to Xcode" [master 911dee9] Add ImaggaRouter project to Xcode 1 file changed, 36 insertions(+) PhotoTagger|master ⇒ Congratulations – you’ve successfully connected two private projects via Git Submodules! :] Pulling Changes From A Private Dependency When sharing code between projects, you’ll often find you need to make changes to the shared code, and also make those changes available to other projects. Git Submodules make this easy. You’ll add some code to ImaggaRouter, commit those changes, then use those changes from PhotoTagger. Add a new Swift file to the ImaggaRouter project and name it ImaggaRouter.swift. Replace its contents with: import Foundation public enum ImaggaRouter { static let baseURLPath = "" static let authenticationToken = "Basic xxx" case content case tags(String) case colors(String) var path: String { switch self { case .content: return "/content" case .tags: return "/tagging" case .colors: return "/colors" } } } This code begins to flesh out a routing enum for interacting with the Imagga API. Now add and commit these changes to the ImaggaRouter repository. From the root of the ImaggaRouter project, execute git commit -am "Add initial ImaggaRouter path": ImaggaRouter|master⚡ ⇒ git commit -am "Add initial ImaggaRouter path" [master 1523f10] Add initial ImaggaRouter path 3 files changed, 33 insertions(+) rewrite ImaggaRouter.xcodeproj/project.xcworkspace/xcuserdata/andyo.xcuserdatad/UserInterfaceState.xcuserstate (80%) create mode 100644 ImaggaRouter/ImaggaRouter.swift ImaggaRouter|master ⇒ This adds the most recent changes (adding an initial implementation of ImaggaRouter.swift) to the local Git repository. git commitinstead of using git add X; git commit -m "Message". 
The -amwill add all untracked files and any files with changes to the commit. So in this case you had multiple files with changes instead of doing multiple git add Xyou managed to perform the functionality in one line. Now the private dependency has been updated with changes, it’s time to pull those into PhotoTagger. From the root of the PhotoTagger project, navigate into the submodule folder for ImaggaRouter, and execute git pull: ImaggaRouter|master ⇒ pwd /Users/andyo/Documents/AndyRW/PhotoTagger/Frameworks/Internal/ImaggaRouter ImaggaRouter|master ⇒ git pull Updating 1523f10..4d9e71a Fast-forward .gitignore | 66 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ ImaggaRouter.xcodeproj/project.xcworkspace/xcuserdata/andyo.xcuserdatad/UserInterfaceState.xcuserstate | Bin 9652 -> 9771 bytes 2 files changed, 66 insertions(+) ImaggaRouter|master ⇒ This retrieves the latest changes from the submodule. You can verify this by opening ImaggaRouter.swift and taking a peek through for the changes you just made. The submodule is maintained as a a separate Git repository within the main project’s folder hierarchy. This is useful to know because all the same Git commands can be used to inspect that repository as well. For example, from the submodule folder Frameworks/Internal/ImaggaRouter, execute git log to look at the commits for the submodule. Since you just updated it, the latest commit should be as you would expect: commit 1523f10dda29649d5ee281e7f1a6dedff5a8779f Author: Andy Obusek <andyo@xyz.com> Date: Mon Feb 13 20:08:29 2017 -0500 Add initial ImaggaRouter path ... cut ... And just to observe the differences, that this really is a separate repository, navigate back to the root folder of PhotoTagger and execute git log: ImaggaRouter|master ⇒ cd ../../.. PhotoTagger|master⚡ ⇒ pwd /Users/andyo/Documents/AndyRW/PhotoTagger PhotoTagger|master⚡ ⇒ git log commit 7303c65cc0f18174cb4846f6abe5cbfb57e17607 Author: Andy Obusek <andyo@aweber.com> Date: Mon Feb 13 20:24:13 2017 -0500 Add ImaggaRouter project to Xcode Notice how the latest commit message is different? That’s one indication you’re in a different Git repository. While you’re in the root folder of PhotoTagger, execute git status: PhotoTagger|master⚡ ⇒ git status On branch master Changes not staged for commit: (use "git add <file>..." to update what will be committed) (use "git checkout -- <file>..." to discard changes in working directory) modified: Frameworks/Internal/ImaggaRouter (new commits) Submodules changed but not updated: * Frameworks/Internal/ImaggaRouter 1523f10...4d9e71a (1): > Add initial ImaggaRouter path no changes added to commit (use "git add" and/or "git commit -a") PhotoTagger|master⚡ ⇒ This status message tells you three important pieces of information: - There are local changes that haven’t been committed. - There have been updates to the ImaggaRouter submodule. - There are specific commits that are new in the ImaggaRouter submodule. To finally integrate the latest changes to ImaggaRouter, execute git add . followed by git commit -m "Update ImaggaRouter": PhotoTagger|master⚡ ⇒ git add . PhotoTagger|master⚡ ⇒ git commit -m "Update ImaggaRouter dquote> " [master ad3b7f8] Update ImaggaRouter 1 file changed, 1 insertion(+), 1 deletion(-) PhotoTagger|master ⇒ You’ve now made changes to the private dependency and pulled those changes back into the main project. You’re getting pretty good at this! :] Working With A 3rd Party Dependency Now you’ll add Alamofire as a dependency to your project as a Git Submodule. 
Alamofire is a popular networking library for iOS. Adding Alamofire Adding an external dependency is very similar to a private dependency. The only difference from what you’ve done so far is you’ll add Alamofire via its public github.com repository. From the root folder of PhotoTagger, create a new folder under Frameworks named “External” by executing the following: mkdir Frameworks/External Then execute git submodule add Frameworks/External/Alamofire: PhotoTagger|master ⇒ mkdir Frameworks/External PhotoTagger|master ⇒ git submodule add Frameworks/External/Alamofire Cloning into '/Users/andyo/Documents/AndyRW/PhotoTagger/Frameworks/External/Alamofire'... remote: Counting objects: 5924, done. remote: Total 5924 (delta 0), reused 0 (delta 0), pack-reused 5924 Receiving objects: 100% (5924/5924), 2.51 MiB | 4.86 MiB/s, done. Resolving deltas: 100% (3937/3937), done. PhotoTagger|master⚡ ⇒ This adds Alamofire as a Git Submodule into a new sub-folder named Frameworks/External/Alamofire. Execute git status to reveal the local repository’s knowledge of Alamofire needs to be committed. To do this, execute git add . followed by git commit -m 'Add Alamofire': PhotoTagger|master⚡ ⇒ git status On branch master Changes to be committed: (use "git reset HEAD <file>..." to unstage) modified: .gitmodules new file: Frameworks/External/Alamofire Submodule changes to be committed: * Frameworks/External/Alamofire 0000000...fa3c6d0 (660): > [PR #1927] Fixed bug in README example code around default headers. PhotoTagger|master⚡ ⇒ git add . PhotoTagger|master⚡ ⇒ git commit -m "Add Alamofire" [master 1b3e30b] Add Alamofire 2 files changed, 4 insertions(+) create mode 160000 Frameworks/External/Alamofire PhotoTagger|master ⇒ Now you can add Alamofire.xcodeproj to your project. Just as before with ImaggaRouter.xcodeproj, drag Alamofire.xcodeproj into your project. To use Alamofire, you need to add the framework as a Linked Framework to the main PhotoTagger target’s General settings. Adding an external dependency was as simple as that! Sometimes you come across the need to remove a dependency. Maybe it’s an old library you’re ready to stop using. Or maybe you just wanted to try out that latest and greatest new hot framework. Either way, it’s good to know how to remove dependencies that have been added as Git Submodules. Removing A Dependency To remove a Git Submodule dependency, first add ReactiveSwift as a dependency. Execute git submodule add Frameworks/External/ReactiveSwift: PhotoTagger|master ⇒ git submodule add Frameworks/External/ReactiveSwift Cloning into '/Users/andyo/Documents/AndyRW/PhotoTagger/Frameworks/External/ReactiveSwift'... remote: Counting objects: 42067, done. remote: Compressing objects: 100% (37/37), done. remote: Total 42067 (delta 14), reused 0 (delta 0), pack-reused 42028 Receiving objects: 100% (42067/42067), 15.24 MiB | 5.37 MiB/s, done. Resolving deltas: 100% (25836/25836), done. PhotoTagger|master⚡ ⇒ Now you’ve added ReactiveSwift as a dependency. 
You can verify this by listing the contents of the folder where it resides by executing ls Frameworks/External/ReactiveSwift : PhotoTagger|master⚡ ⇒ ls Frameworks/External/ReactiveSwift CONTRIBUTING.md Cartfile Cartfile.private Cartfile.resolved Carthage CodeOfConduct.md Documentation LICENSE.md Logo Package.swift README.md ReactiveSwift-UIExamples.playground ReactiveSwift.playground ReactiveSwift.podspec ReactiveSwift.xcodeproj ReactiveSwift.xcworkspace Sources Tests script PhotoTagger|master⚡ ⇒ To properly remove the dependency after it’s been committed, you’ll need to commit the dependency. Once again, execute git add . followed by git commit -m "Add ReactiveSwift dependency": PhotoTagger|master⚡ ⇒ git add . PhotoTagger|master⚡ ⇒ git commit -m "Add ReactiveSwift" [master ebb1a7c] Add ReactiveSwift 2 files changed, 4 insertions(+) create mode 160000 Frameworks/External/ReactiveSwift Now that ReactiveSwift was added as a dependency, you’re going to remove it. To remove it, type: git rm Frameworks/External/ReactiveSwift: PhotoTagger|master ⇒ git rm Frameworks/External/ReactiveSwift rm 'Frameworks/External/ReactiveSwift' PhotoTagger|master⚡ ⇒ This marks ReactiveSwift to be entirely removed from your local repository and filesystem. At this point, the changes need to be committed. Execute git commit -m "Remove ReactiveSwift": PhotoTagger|master⚡ ⇒ git commit -m "Remove ReactiveSwift" [master 557bab4] Remove ReactiveSwift 2 files changed, 4 deletions(-) delete mode 160000 Frameworks/External/ReactiveSwift PhotoTagger|master ⇒ And boom, it’s gone! Wiring It All Up You’ll need a bit of additional code before you can tag images in your app. Rather than copy and paste a bunch of code without much explanation, the final section of this tutorial provides a wired-up solution for you. You’ll just need a secret token from the Imagga API — read on to learn how to get one. The Imagga API You might recognize this API from our Alamofire Tutorial: Getting Started. tutorial. Imagga requires an authorization header in each HTTP request, so only people with an account can use their services. Go to and fill out the form. After you create your account, check out the dashboard: Listed down in the Authorization section is your secret token. Copy it into the clipboard. Note: Make sure you copy the whole secret token. Scroll over to the right and verify you copied everything. In the final project, open ImaggaRouter.swift and use your secret token as the value for authenticationToken. Where to Go From Here? Normally a final, completed version of the tutorial project is made available to you for download. Since this tutorial is made up of two projects connected via a Git Submodule, it seemed more fitting to provide the final project via a Git remote on github.com. In addition, there’s one common task left to be explained that goes right along with this: cloning a repository that has a submodule dependency. Bonus: Cloning A Repository With Submodules To access the final and completed compilation of ImaggaRouter and PhotoTagger, you’ll clone a remote repository where they are stored. To do this, execute git clone --recursive: temp|⇒ git clone --recursive Cloning into 'PhotoTagger'... remote: Counting objects: 40, done. remote: Compressing objects: 100% (24/24), done. remote: Total 40 (delta 11), reused 40 (delta 11), pack-reused 0 Unpacking objects: 100% (40/40), done. 
Submodule 'Frameworks/External/Alamofire' () registered for path 'Frameworks/External/Alamofire' Submodule 'Frameworks/Internal/ImaggaRouter' () registered for path 'Frameworks/Internal/ImaggaRouter' Cloning into '/Users/andyo/Downloads/temp/temp/PhotoTagger/Frameworks/External/Alamofire'... Cloning into '/Users/andyo/Downloads/temp/temp/PhotoTagger/Frameworks/Internal/ImaggaRouter'... Submodule path 'Frameworks/External/Alamofire': checked out 'c9c9d091b308a57ff9a744be4f2537ac9c5b4c0b' Submodule path 'Frameworks/Internal/ImaggaRouter': checked out 'ceb7415e46829c8a732fdd084b42d95c2f453fa2' Submodule 'Frameworks/External/Alamofire' () registered for path 'Frameworks/Internal/ImaggaRouter/Frameworks/External/Alamofire' Cloning into '/Users/andyo/Downloads/temp/temp/PhotoTagger/Frameworks/Internal/ImaggaRouter/Frameworks/External/Alamofire'... Submodule path 'Frameworks/Internal/ImaggaRouter/Frameworks/External/Alamofire': checked out 'c9c9d091b308a57ff9a744be4f2537ac9c5b4c0b' temp|⇒ The --recursive flag on the normal git clone command ensures all submodules are cloned at the same time. You can see in the output Alamofire and ImaggaRouter are also cloned. By default, this doesn’t happen with git clone. To try these out together, you’ll need to connect the ImaggaRouter project as a dependency of the PhotoTagger project, and add your own secret token for the Imagga API. For further reading, check out the following: - Git Submodules in the Pro Git book - How to Use CocoaPods with Swift - Carthage Tutorial: Getting Started Andy Obusek - Tech Editor Adrian Strahan - Editor Chris Belanger - Final Pass Editor Darren Ferguson - Team Lead Andy Obusek
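One practical addition that is not covered in the tutorial above: if you (or a teammate) have already cloned PhotoTagger without the --recursive flag, the submodule folders will exist but be empty. They can be fetched after the fact with:
git submodule update --init --recursive
The same command also brings your checkout back in sync after you pull a commit in which someone else added or updated a submodule.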
https://www.raywenderlich.com/155150/dependency-management-using-git-submodules
CC-MAIN-2017-26
en
refinedweb
1. Hints and tips¶ The following are some examples of the use of the inline assembler and some information on how to work around its limitations. In this document the term “assembler function” refers to a function declared in Python with the @micropython.asm_thumb decorator, whereas “subroutine” refers to assembler code called from within an assembler function. 1.1. Code branches and subroutines¶ It is important to appreciate that labels are local to an assembler function. There is currently no way for a subroutine defined in one function to be called from another. To call a subroutine the instruction bl(LABEL) is issued. This transfers control to the instruction following the label(LABEL) directive and stores the return address in the link register ( lr or r14). To return the instruction bx(lr) is issued which causes execution to continue with the instruction following the subroutine call. This mechanism implies that, if a subroutine is to call another, it must save the link register prior to the call and restore it before terminating. The following rather contrived example illustrates a function call. Note that it’s necessary at the start to branch around all subroutine calls: subroutines end execution with bx(lr) while the outer function simply “drops off the end” in the style of Python functions. @micropython.asm_thumb def quad(r0): b(START) label(DOUBLE) add(r0, r0, r0) bx(lr) label(START) bl(DOUBLE) bl(DOUBLE) print(quad(10)) The following code example demonstrates a nested (recursive) call: the classic Fibonacci sequence. Here, prior to a recursive call, the link register is saved along with other registers which program logic requires to be preserved. @micropython.asm_thumb def fib(r0): b(START) label(DOFIB) push({r1, r2, lr}) cmp(r0, 1) ble(FIBDONE) sub(r0, 1) mov(r2, r0) # r2 = n-1 bl(DOFIB) mov(r1, r0) # r1 = fib(n-1) sub(r0, r2, 1) bl(DOFIB) # r0 = fib(n-2) add(r0, r0, r1) label(FIBDONE) pop({r1, r2, lr}) bx(lr) label(START) bl(DOFIB) for n in range(10): print(fib(n)) 1.2. Argument passing and return¶ The tutorial details the fact that assembler functions can support from zero to three arguments, which must (if used) be named r0, r1 and r2. When the code executes the registers will be initialized to those values. The data types which can be passed in this way are integers and memory addresses. With current firmware all possible 32-bit values may be passed and returned. If the return value has the most significant bit set a Python type hint should be employed to enable MicroPython to determine whether the value should be interpreted as a signed or unsigned integer: types are int or uint. @micropython.asm_thumb def uadd(r0, r1) -> uint: add(r0, r0, r1) hex(uadd(0x40000000,0x40000000)) will return 0x80000000, demonstrating the passing and return of integers where bits 30 and 31 differ. The limitations on the number of arguments and return values can be overcome by means of the array module which enables any number of values of any type to be accessed. 1.2.1. Multiple arguments¶. Assembler functions have no means of determining the length of an array: this will need to be passed to the function. This use of arrays can be extended to enable more than three arrays to be used. This is done)) 1.2.2. Non-integer data types¶ These may be handled by means of arrays of the appropriate data type. For example, single precision floating point data may be processed as follows. This code example takes an array of floats and replaces its contents with their squares. 
from array import array @micropython.asm_thumb def square(r0, r1): label(LOOP) vldr(s0, [r0, 0]) vmul(s0, s0, s0) vstr(s0, [r0, 0]) add(r0, 4) sub(r1, 1) bgt(LOOP) a = array('f', (x for x in range(10))) square(a, len(a)) print(a) The uctypes module supports the use of data structures beyond simple arrays. It enables a Python data structure to be mapped onto a bytearray instance which may then be passed to the assembler function. 1.3. Named constants¶ Assembler code may be made more readable and maintainable by using named constants rather than littering code with numbers. This can be achieved by: MYDATA = const(33) @micropython.asm_thumb def foo(): mov(r0, MYDATA) The const() construct causes MicroPython to replace the variable name with its value at compile time. If constants are declared in an outer Python scope they can be shared between multiple assembler functions and with Python code. 1.4. Assembler code as class methods¶ MicroPython passes the address of the object instance as the first argument to class methods. This is normally of little use to an assembler function. It can be avoided by declaring the function as a static method - e.g: class foo: @staticmethod @micropython.asm_thumb def bar(r0): add(r0, r0, r0) 1.5. Use of unsupported instructions¶ These can be coded using the data statement as shown below. While push() and pop() are supported the example below illustrates the principle. The necessary machine code may be found in the ARM v7-M Architecture Reference Manual. Note that the first argument of data calls such as data(2, 0xe92d, 0x0f00) # push r8,r9,r10,r11 indicates that each subsequent argument is a two byte quantity. 1.6. Overcoming MicroPython’s integer restriction¶ The STM32 chip includes a CRC generator. Its use presents a problem in MicroPython because the returned values cover the full gamut of 32-bit quantities whereas small integers in MicroPython cannot have differing values in bits 30 and 31. This limitation is overcome with the following code, which uses the assembler to put the result into an array and Python code to coerce the result into an arbitrary precision unsigned integer. from array import array import stm def enable_crc(): stm.mem32[stm.RCC + stm.RCC_AHB1ENR] |= 0x1000 def reset_crc(): stm.mem32[stm.CRC+stm.CRC_CR] = 1 @micropython.asm_thumb def getval(r0, r1): movwt(r3, stm.CRC + stm.CRC_DR) str(r1, [r3, 0]) ldr(r2, [r3, 0]) str(r2, [r0, 0]) def getcrc(value): a = array('i', [0]) getval(a, value) return a[0] & 0xffffffff # coerce to arbitrary precision enable_crc() reset_crc() for x in range(20): print(hex(getcrc(0)))
http://docs.openmv.io/reference/asm_thumb2_hints_tips.html
CC-MAIN-2017-26
en
refinedweb
Here you can use any function as you wish No more words , Let't begin 1 :) Fill your information 2 :) Upload and parse File 3 :) Show File info 4 :) Delete File 5 :) Commit task 6 :) Monitor File The PNGs themselves were kept on a linked list, where each node would contain (in short) some basic information about the PNG, a pointer to mmap'ed memory where the PNG was placed, and the size of said mmap'ed memory area. The critical parts of the task were related to three menu options: 2 (upload), 4 (delete) and 6 (monitor), so I'll focus on them. Starting with the most boring one, 4 :) Delete File function removed the specified PNG from the linked list, unmapped the memory chunk and freed all the structures (PNG descriptor, list node). As far as I'm concerned it was correctly implemented and for the sake of this write up the most interesting part was the munmap call: munmap(i->mmap_addr, i->mmap_size); Going to the next function, 6 :) Monitor File spawned a new thread, which (in an infinite loop) waited for a condition to be met (new file uploaded) and displayed a message. It basically boiled down to the following code: while ( !pthread_mutex_lock(&mutex) ) { while ( !ev_file_added ) pthread_cond_wait(&cond, &mutex); ... puts("New file uploaded, Please check"); ... } And the last, and most important part, was the 2 :) Upload and parse File function, which worked in the following way: - It asked the user for a 32-bit word containing data size (limited at 1 MB). - And then received that many bytes from the user. - Then inflated (i.e. zlib decompressed) the data (limited at 32 MB). - And did some simplistic PNG format parsing (which, apart from the width and height, could basically be ignored). - After that it mmap'ed an area of size width * height (important!) and copied that amount of decompressed data there. - And then it set entry->mmap_size to the size of decompressed data (so there was a mismatch between what was mapped,and what would be unmapped when deleting). So actually what you could do (using functions 2 and 4) is unmap an adjacent area of memory to one of the PNG areas. But how to get code execution from that? At this moment I would like to recommend this awesome blogpost (kudos to mak for handing me a link during the CTF): The method I used is exactly what is described there, i.e.: - I've allocated two 8 MB areas (i.e. uploaded two PNGs), where one area was described correctly as 8 MB and the other incorrectly as 16 MB block, - I've freed the correctly allocated one (i.e. deleted it from the list). - And then I used option 6 to launch a new thread. The stack of the new thread was placed exactly in the place of the PNG I just unmapped. - And then I've unmapped the second PNG, which actually unmapped the stack of the new thread as well (these areas were next to each over). Since the thread was waiting for a mutex it didn't crash. - At that moment it was enough to upload a new 8 MB PNG that contained the "new stack" (with ROP chain + some minor additions) for the new thread (upload itself would wake the thread) and the woken thread would eventually grab a return address from the now-controlled-by-us stack leading to code execution. At that point my stage 1 ROP leaked libc address (using puts to leak its address from .got table) and fetched stage 2 of ROP, which run execve with /bin/sh. 
This was actually a little more tricky since the new thread and the main thread were racing to read data from stdin, which made part of my exploit always end up in the wrong place (and this misaligned the stack_) - but its nothing that cannot be fixed with running the exploit a couple of times. And that's it. Full exploit code is available at the end of the post (I kept the nasty bits - i.e. debugging code, etc - in there for educational reasons... I guess). +--^----------,--------,-----,--------^-, | ||||||||| `--------' | O `+---------------------------^----------| `_,---------,---------,--------------' / XXXXXX /'| /' / XXXXXX / ` /' / XXXXXX /`-------' / XXXXXX / / XXXXXX / (________( 007 James Bond `------' 1 :) Fill your information 2 :) Upload and parse File 3 :) Show File info 4 :) Delete File 5 :) Commit task 6 :) Monitor File 1 :) Fill your information 2 :) Upload and parse File 3 :) Show File info 4 :) Delete File 5 :) Commit task 6 :) Monitor File 1 :) Fill your information 2 :) Upload and parse File 3 :) Show File info 4 :) Delete File 5 :) Commit task 6 :) Monitor File ls -la total 84 drwxr-xr-x 22 root root 4096 Mar 9 11:42 . drwxr-xr-x 22 root root 4096 Mar 9 11:42 .. drwxr-xr-x 2 root root 4096 Mar 9 11:45 bin drwxr-xr-x 3 root root 4096 Mar 9 11:48 boot drwxr-xr-x 17 root root 2980 Mar 9 13:10 dev drwxr-xr-x 85 root root 4096 Mar 19 14:12 etc drwxr-xr-x 3 root root 4096 Mar 18 17:49 home lrwxrwxrwx 1 root root 31 Mar 9 11:42 initrd.img -> /boot/initrd.img-3.16.0-4-amd64 drwxr-xr-x 14 root root 4096 Mar 9 11:43 lib drwxr-xr-x 2 root root 4096 Mar 9 11:41 lib64 drwx------ 2 root root 16384 Mar 9 11:40 lost+found drwxr-xr-x 3 root root 4096 Mar 9 11:40 media drwxr-xr-x 2 root root 4096 Mar 9 11:41 mnt drwxr-xr-x 2 root root 4096 Mar 9 11:41 opt dr-xr-xr-x 112 root root 0 Mar 9 13:10 proc drwx------ 4 root root 4096 Mar 19 14:12 root drwxr-xr-x 17 root root 680 Mar 19 14:07 run drwxr-xr-x 2 root root 4096 Mar 9 11:49 sbin drwxr-xr-x 2 root root 4096 Mar 9 11:41 srv dr-xr-xr-x 13 root root 0 Mar 17 21:12 sys drwx-wx-wt 7 root root 4096 Mar 20 05:17 tmp drwxr-xr-x 10 root root 4096 Mar 9 11:41 usr drwxr-xr-x 11 root root 4096 Mar 9 11:41 var lrwxrwxrwx 1 root root 27 Mar 9 11:42 vmlinuz -> boot/vmlinuz-3.16.0-4-amd64 cat /home/*/flag flag{M3ybe_Th1s_1s_d1ffer3nt_UAF_Y0U_F1rst_S33n} #! upload_file(s, fname): with open(fname, "rb") as f: d = f.read() return upload_string(s, d) def png_header(magic, data): return ''.join([ pack(">I", len(data)), magic, data, pack(">I", 0x41414141), # CRC ]) def make_png(w, h): return ''.join([ "89504E470D0A1A0A".decode("hex"), # Magic png_header("IHDR", pack(">IIBBBBB", w, h, 8, 2, 0, 0, 0 # 24-bit RGB )), png_header("IDAT", ""), png_header("IEND", ""), ]) def upload_png(s, w, h, final_sz, padding="", pbyte="A"): png = make_png(w, h) while len(png) % 8 != 0: png += "\0" png += padding print len(png), final_sz png = png.ljust(final_sz, pbyte) png = png.encode("zlib") if len(png) > 1048576: print "!!!!!!! 
ZLIB: %i vs %i" % (len(png), 1048576) s.sendall("2\n") s.sendall(dd(len(png))) s.sendall(png) print s.recvuntil(MENU_LAST_LINE) def upload_string(s, d): z = d.encode("zlib") s.sendall(dd(len(z))) s.sendall(z) def upload_file_padded(s, fname, padding): with open(fname, "rb") as f: d = f.read() return upload_string(s, d + padding) MENU_LAST_LINE = "6 :) Monitor File\n" READ_INFO_LAST_LINE = "enjoy your tour\n" def del_entry(s, n): s.sendall("4\n") s.sendall(str(n) + "\n") print s.recvuntil(MENU_LAST_LINE) def spawn_monitor(s): s.sendall("6\n") print s.recvuntil(MENU_LAST_LINE) def set_rdi(v): # 0x4038b1 pop rdi # 0x4038b2 ret return ''.join([ dq(0x4038b1), dq(v) ]) def set_rsi_r15(rsi=0, r15=0): # 0x4038af pop rsi # 0x4038b0 pop r15 # 0x4038b2 ret return ''.join([ dq(0x4038af), dq(rsi), dq(r15), ]) def call_puts(addr): # 0400AF0 return ''.join([ set_rdi(addr), dq(0x0400AF0), ]) def call_read_bytes(addr, sz): # 400F14 return ''.join([ set_rdi(addr), set_rsi_r15(rsi=sz), dq(0x400F14), ]) def stack_pivot(addr): # 0x402ede pop rsp # 0x402edf pop r13 # 0x402ee1 ret return ''.join([ dq(0x402ede), dq(addr - 8) ]) def call_sleep(tm): return ''.join([ set_rdi(tm), dq(0x400C30), ]) def go(): global HOST global PORT s = gsocket(socket.AF_INET, socket.SOCK_STREAM) s.connect((HOST, PORT)) # Put your code here! print s.recvuntil(MENU_LAST_LINE) #s.sendall("1\n") #s.sendall("A" * 20) #s.sendall("1\n") #print s.recvuntil(READ_INFO_LAST_LINE) #s.sendall("0000000000000001A") #s.sendall("B" * 1) # 2, 8 #time.sleep(0.5) #s.sendall("1\n") #d = s.recvuntil(READ_INFO_LAST_LINE) #print d #sth = d.split(" , enjoy your tour")[0].split("Welcome Team ")[1] #print sth.encode("hex") upload_png(s, 10, 10, 0x1000) upload_png(s, 1, 8392704, 8392704) # 1 upload_png(s, 1, 8392704, 8392704 + 8392704) del_entry(s, 1) spawn_monitor(s) del_entry(s, 1) # Now we hope that not all threads run. padding = [] for i in range((8392704 - 128) / 8): # ~1mln if i < 900000: padding.append(dq(0)) else: padding.append(dq(0x4141414100000000 | i)) padding[0xfffd9] = dq(0x0060E400) padding[0xfffda] = dq(0x0060E400) del padding[0xfffdd:] VER = 0x4098DC VER_STR = "1.2.8\n" CMD = "/bin/sh\0" #CMD = CMD.ljust(64, "\0") rop = ''.join([ call_sleep(1), call_puts(VER), call_puts(0x060E028), call_puts(0x060E029), call_puts(0x060E02A), call_puts(0x060E02B), call_puts(0x060E02C), call_puts(0x060E02D), call_puts(0x060E02E), call_puts(0x060E02F), call_puts(VER), call_read_bytes(0x0060E400, 512 + len(CMD)), stack_pivot(0x0060E410) ]) padding.append(rop) padding = ''.join(padding) #print "press enter to trigger" #raw_input() print "\x1b[1;33mTriggering!\x1b[m" upload_png(s, 1, 8392704, 8392704, padding) s.recvuntil(VER_STR) puts_libc = '' for x in s.recvuntil(VER_STR).splitlines(): if len(x) == 0: puts_libc += "\0" else: puts_libc += x[0] if len(puts_libc) == 8: break puts_libc = rq(puts_libc) LIBC = puts_libc - 0x06B990 print "LIBC: %x" % LIBC rop2 = ''.join([ #dq(0x402ee1) * 16, # nopsled dq(0x401363) * 16, set_rsi_r15(0x0060EF00, 0), set_rdi(0x0060E400 + 512), dq(LIBC+0xBA310), # execve ]) rop2 = rop2.ljust(512, "\0") rop2 += CMD * 16 s.sendall("PPPPPPPP" + rop2 + (" " * 16)) # Interactive sockets. t = telnetlib.Telnet() t.sock = s t.interact() # Python console. # Note: you might need to modify ReceiverClass if you want # to parse incoming packets. 
#ReceiverClass(s).start() #dct = locals() #for k in globals().keys(): # if k not in dct: # dct[k] = globals()[k] #code.InteractiveConsole(dct).interact() s.close() HOST = '202.120.7.216' PORT = 12345 #HOST = '127.0.0.1' #HOST = '192.168.2.218' #PORT = 1234 go() Nice write-up ! Question: Couldn't you have unmapped the stack and remapped with new PNG containing the execve shellcode instead of racing with the main thread for stdin ? Thanks! :) So what I didn't write above (and probably should have) is that the mmapped area was RW-, so execution was not possible on the PNG pages (and NX was enabled). Therefore I had to fallback to the usual methods (i.e. ROP). The racing could have been avoided if I would know libc address before preparing stage 1 ROP (so I could use execve function call or syscall gadgets). I could know it since there was a small stack leak in option 1, but I decided to ignore it and leak libc in stage one, and then do the race. The race was rather simple btw - according to my measurements the main thread won the race for stdin read about 4-10 times before the controlled thread would win it. Each time the main thread won the race it grabbed 1 byte from stdin, therefore 4-10 bytes were missing each time. Given that I used an additional alignment in my shellcode (so it could be 'eaten off' by the main thread) it boiled down to hitting the correct number of races won by main thread. My bet was on 8, 16 or any other product of 8, since that was that was the size of my ROP nop sled (i.e. if one item of the nopsled would be correctly 'eaten off', the rest of the shellcode would execute). So the theoretical probability of this happening was bout 1/7, and in practice I hit in in 2-3 try (after I got my stage 2 working locally that is). In the end the race wasn't a big problem. It just had to be accounted for :) One thing to add: - the main thread did read(1) on stdin - the controlled thread did read(controlled size, probably limited to TCP window size cross kernel buffering done during thread being asleep) on stdin So after the controlled thread won the race, the main thread stopped being a problem instantly (i.e. the rest of the stage 2 payload from that point on was read correctly).
http://blog.dragonsector.pl/2017/03/0ctf-2017-uploadcenter-pwn-523.html
CC-MAIN-2017-26
en
refinedweb
Optimizing C++/General optimization techniques/Input/Output

Store text files in a compressed format[edit]

Disks have much less bandwidth than processors. By (de)compressing on the fly, the CPU can speed up I/O. Text files tend to compress well. Be sure to pick a fast compression library, though; zlib/gzip is very fast, bzip2 less so.

The Boost Iostreams library contains gzip filters that can be used to read from a compressed file as if it were a normal file:

#include <fstream>
#include <boost/iostreams/filtering_stream.hpp>
#include <boost/iostreams/filter/gzip.hpp>

namespace io = boost::iostreams;

// An istream that transparently decompresses a gzip-compressed file.
class GzipInput : public io::filtering_istream {
    io::gzip_decompressor gzip;
    std::ifstream file;
public:
    GzipInput(const char *path)
        : file(path, std::ios_base::in | std::ios_base::binary)
    {
        push(gzip);   // decompression filter first
        push(file);   // then the underlying file as the source
    }
};

Even if this is not faster than "raw" I/O (e.g. if you have a fast solid state disk), it still saves disk space.

Memory mapping[edit]

The C++ standard does not define a memory mapping interface, and in fact the C interfaces differ per platform. The Boost Iostreams library fills the gap by providing a portable, RAII-style interface to the various OS implementations.
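To make the memory-mapping suggestion concrete, here is a minimal sketch using Boost Iostreams' mapped_file_source. The file name "data.txt" is a placeholder and error handling is omitted; whether this beats buffered reads depends on the platform and should be measured.

#include <cstddef>
#include <iostream>
#include <boost/iostreams/device/mapped_file.hpp>

int main() {
    // Map the whole file into the process address space (read-only).
    boost::iostreams::mapped_file_source input("data.txt");  // placeholder file name

    // The mapped region can be scanned like an in-memory buffer,
    // avoiding per-read system calls.
    const char *begin = input.data();
    const char *end = begin + input.size();

    std::size_t lines = 0;
    for (const char *p = begin; p != end; ++p)
        if (*p == '\n')
            ++lines;

    std::cout << "lines: " << lines << '\n';
}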
http://en.wikibooks.org/wiki/Optimizing_C%2B%2B/General_optimization_techniques/Input/Output
CC-MAIN-2014-42
en
refinedweb
Popular ActionScript 3 Snippets Tagged 'metadata'

- XMP metadata from JPG (tags: metadata, as3, actionscript3, xmp) - posted on March 24, 2011 by pjetr
- import necessary ActionScript to work with FLV & metadata [ie. cuepoints] (tags: import, actionscript, flash, flv, video, metadata, college, cuepoints) - posted on June 6, 2009 by stiobhart
http://snipplr.com/popular/language/actionscript-3/tags/metadata/
CC-MAIN-2014-42
en
refinedweb
Subject: Re: [hwloc-devel] perl bindings
From: Bernd Kallies (kallies_at_[hidden])
Date: 2011-01-21 12:31:16

On Fri, 2011-01-21 at 17:09 +0100, Samuel Thibault wrote:
> Bernd Kallies, le Thu 20 Jan 2011 20:35:04 +0100, a écrit :
> > On Thu, 2011-01-20 at 20:22 +0100,?
> > > > See
>
> Well, I meant the ones for which you had to invent a name. These are
> mostly the same as the C interface, hwloc_ prefix stripped. Are there
> names that don't exactly map to a C function? I see there is a "not in
> hwloc" section, I guess it is supposed to contain them all?

Yes, most of the OO methods are named like the C function with hwloc_ or hwloc_topology_ prefixes stripped. Sometimes _get_ was stripped, when it sounds better, e.g.

  $t->depth instead of $t->get_depth

Sometimes words are reversed, like

  hwloc_obj_type_sprintf($o)  but $o->sprintf_type
  hwloc_obj_sprintf($t,$o)    but $t->sprintf_obj($o)

There also exist functions that operate with hwloc_obj_t and have a hwloc_topology_t as first argument, but it is unused. So I decided to put these functions in the Sys::Hwloc::Topology and in the Sys::Hwloc::Obj namespaces, e.g.

  hwloc_obj_is_in_subtree($t,$o1,$o2)
  $t->obj_is_in_subtree($o1,$o2)
  $o1->is_in_subtree($o2)

> > HWLOC_XSAPI_VERSION always returns a version number (may be 0)
>
> Ok
>
> > HWLOC_HAS_XML flag if hwloc was built with XML support
>
> Why do you need it? At worse the xml functions would fail.

This is to be able to decide if coded calls of xml functions in a perl script should be executed or not. If the C lib was generated without xml, then the wrapper does not contain the wrapper functions. So the symbol table of a perl script is different. If one has a perl script that calls these functions, then it will not byte-compile. The HWLOC_HAS_XML constant can be used much like an #ifdef in C to provide alternatives for a perl script without having different scripts for every variant. It may be nice for C programmers to provide the value of the HWLOC_HAVE_XML cpp constant in hwloc.h for the same reason.

> > hwloc_compare_objects compares two Sys::Hwloc::Obj by C pointer value
>
> Ok.
>
> > hwloc_bitmap_ids returns bitmap bits as list of decimal numbers
>
> That seems perl-specific indeed.
>
> > hwloc_bitmap_list_sscanf parses a list format cpuset ASCII string
> > hwloc_bitmap_list_sprintf outputs a list format cpuset ASCII string
> > hwloc_bitmap_includes reverse of hwloc_bitmap_isincluded
>
> I guess these could be added to the C API?

Brice said that he tries to add the _list_ things in hwloc 1.2.

> Ok, that perl I can read :)
>
> I'd say you shouldn't care about providing all the hwloc_cpuset_*
> functions, since these names are deprecated in the C API.

Currently the wrapper compiles with hwloc 0.9 .. at least 1.1, and generates different version-dependent code from the same source. The cpuset API functions are only provided when one compiles the wrapper against hwloc 1.0. With 1.1 they are not provided.

> > $mapa = hwloc_bitmap_dup($map)
>
> Same issue as in Python: when a const bitmap is returned by a hwloc
> function, the user shouldn't be able to modify it.

Hard to implement. In C this is achieved with prototypes via the C compiler. There exists no such pendant in Perl. One would have to work with proxy objects that have a readonly attribute, and maintain this.
> > hwloc_bitmap_from_ulong($set,$mask)
> > hwloc_bitmap_from_ith_ulong($set,$i,$mask)
> > hwloc_bitmap_set_ith_ulong($set,$i,$mask)
> > $val = hwloc_bitmap_to_ulong($set)
> > $val = hwloc_bitmap_to_ith_ulong($set,$i)
>
> Same issue as in Python (but with different answer): AIUI, perl doesn't
> have unbound integers, so has a limitation, but is possibly not exactly
> like C longs. I guess these should just use the regular perl integer
> name and size?

Internally perl handles integers as long or ulong. The problem with wrappers is the correct cast between the content of a perl scalar (which may be a long or ulong or string or double or ...) and the needed C type. I'll check what happens when one reaches UINT_MAX and the like.

> About area membind, same remark as for python: if someone uses perl to
> drive C-library computations, it may be useful, but else it probably
> doesn't make sense in pure perl.

Agreed, that's why these functions are currently not in the perl wrapper.

> Samuel
> _______________________________________________
http://www.open-mpi.org/community/lists/hwloc-devel/2011/01/1876.php
CC-MAIN-2014-42
en
refinedweb
>> 11, 2011 REMARKS BY THE PRESIDENT ON AMERICAN JOBS ACT IBEW Local #5 Training Center Pittsburgh, Pennsylvania 2:15 P.M. EDT THE PRESIDENT: Thank you very much. Thank you. (Applause.) Thank you. Thank you. (Applause.) Thank you very much. (Applause.) Thank you. Thank you, everybody. Please have a seat. Have a seat. It is great to be back in Pittsburgh! (Applause.) And it is wonderful to be here at IBEW Local #5. I had a chance to take a tour of your facilities, where you're training workers with the skills they need to compete for good jobs. And I see some of the guys that I met on the tour, both the instructors and the students who are here, and it's an example of how, if we get a good collaboration between business and labor and academia, that there is no reason why we cannot continue to have the best trained workers in the world. (Applause.) And that's got to be one of our best priorities. So I'm here to talk about how we can create new jobs -- particularly jobs doing what you do best, and that's rebuilding America. I brought some folks along with me, as well. We've got members of my Cabinet and my administration. We've got your mayor, Luke Ravenstahl, is here. Where's Luke? Right here. (Applause.) Your county executive, Dan Onorato, is here. (Applause.) And one of my dearest friends, who I stole from the Steelers to serve as the United States Ambassador to Ireland -- Dan Rooney is in the house. (Applause.) And congratulations, Steelers. You guys did a little better than my Bears last night. (Laughter.) I've also brought a group of leaders with a wide range of new ideas about how we can help companies hire and grow, and we call them our White House Jobs Council. They come from some of the most successful businesses in the country -- GE, Southwest, Intel. They come from labor -- we've got Rich Trumka on here from the AFL-CIO. We've got universities and people across the board who are intimately involved in growing companies, venture capitalists. Most importantly, they come from outside of Washington. And I told them, when we formed this council, I want to hear smart, forward-thinking ideas that will help our economy and our workers adapt to changing times. And together, they've done some extraordinary work to make those ideas happen. So I just want to personally thank every single one of the Job Council members for the great work that they're doing. And they issued a jobs report today -- we're implementing a bunch of their ideas; it's going to make a difference all across the country. So thank you very much. (Applause.) Well, one of our focuses today was on entrepreneurship. And we did this because the story of America's success is written by America's entrepreneurs; men and women who took a chance on a dream and they turned that dream into a business, and somehow changed the world. We just lost one of our greatest entrepreneurs, and a friend, Steve Jobs, last week. And to see the outpouring of support for him and his legacy tells a story about what America's all about. We like to make things, create things, new products, new services that change people's lives. And that's what people strive to do every day in this country. And most of the time people's dreams are simple: Start-ups and storefronts on Main Street that let folks earn enough to support their family and make a contribution to their community. 
And sometimes their dreams take off and those start-ups become companies like Apple or Fed-Ex or Ford; companies that end up hiring and employing hundreds of thousands of Americans and giving rise to entire new industries. And that spirit of entrepreneurship and innovation is how we became the world's leading economic power, and it's what constantly rejuvenates our economy. So entrepreneurship is how we're going to create new jobs in the future. And I'm proud to say that just last month Pittsburgh won a federal grant to promote entrepreneurship and job creation by expanding your already successful energy and health care industries in under-served parts of this city. So we're very excited about what Pittsburgh is doing here. (Applause.). But right now, our economy needs a jolt. Right now. (Applause.) And today, the Senate of the United States has a chance to do something about jobs right now by voting for the American Jobs Act. (Applause.) Now,'ve said this could grow the economy significantly and put significant numbers of Americans back to work. And no other jobs plan has that kind of support from economists -- no plan from Congress, no plan from anybody.. Today is the day when every American will find out exactly where their senator stands on this jobs bill. Republicans say that one of the most important things we can do is cut taxes. Then they should be for this plan. This jobs bill would cut taxes for virtually every worker and small business in America. Every single one. (Applause.) If you're a small business owner that hires new workers or raises wages, you will get another tax cut. If you hire a veteran, you get a tax cut. People who have served overseas should not have to fight for a job when they come home. (Applause.) This jobs bill encourages small business owners and entrepreneurs to expand and to hire. The Senate should pass it today. Hundreds of thousands of teachers and firefighters and police officers have been laid off because of state budget cuts. I'm sure, Luke, you're seeing it here in Pittsburgh. You're having to figure out how to we make sure that we keep our teachers in the classroom. The Jobs Council is uniform in believing that the most important thing for our competitiveness, long term, is making sure our education system is producing outstanding young people who are ready to go work. (Applause.) So this jobs bill that the Senate is debating today would put a lot of these men and women back to work right now, and it will prevent a lot more from losing their jobs. So folks should ask their senators, why would you consider voting against putting teachers and police officers back to work? Ask them what's wrong with having folks who have made millions or billions of dollars to pay a little more. Nothing punitive, just going back to the kinds of tax rates that used to exist under President Clinton, so that our kids can get the education they deserve. There are more than a million laid-off construction workers who could be repairing our roads and bridges, and modernizing our schools right now. Right now. (Applause.) That's no surprise to you. Pittsburgh has a lot of bridges. (Laughter.) Has about 300 of them. Did you know that more than a quarter of the bridges in this state are rated structurally deficient? Structurally deficient -- that's a fancy way of saying, they need to be fixed. There are nearly 6,000 bridges in Pennsylvania alone that local construction workers could be rebuilding right now. The average age of bridges around Pittsburgh is 54 years old. 
So we're still benefiting from the investments, the work that was done by our grandparents, to make this a more successful, more competitive economy. Here in Pittsburgh, 54 years old, the average age of these bridges -- 13 years older than the national average. The Hulton Bridge over in Oakmont was built more than 100 years ago. There are pieces of it that are flaking off. How much longer are we going to wait to put people back to work rebuilding bridges like that? This jobs bill will give local contractors and local construction workers the chance to get back to work rebuilding America. Why would any senator say no to that? In line with the recommendations of my Jobs Council, my administration is cutting red tape; we're expediting several major construction projects all across the country to launch them faster and more efficiently. We want to streamline the process, the permitting process, just get those things moving. So we're doing our job, trying to expedite the process. Now it's time for Congress to do their job. The Senate should vote for this jobs bill today. It should not wait. It should get it done. (Applause.) Now, a lot of folks in Congress have said they won't support any new spending that's not paid for. And I think that's important. We've got to make sure we're living within our means so that we can make the vital investments in our future. That's why I signed into law $1 trillion in spending cuts over the summer. And we'll find more places to cut those things that we don't need. We can't afford everything. We've got to make choices; we've got to prioritize. Programs that aren't working, that aren't giving us a good bang for the buck, that aren't helping to grow the economy, that aren't putting people back to work -- we're going to have to trim those back. So we're willing to make tough choices. The American people, they're already tightening their belts. They understand what it's all about to make tough choices. But if we want to create jobs and close the deficit, then we can't just cut our way out of the problem. We're also going to have to ask the wealthiest Americans to pay their fair share. If they don't, we only have three other choices: We can either increase the deficit, or we can ask the middle class to pay more at a time when they're just barely getting by -- haven't seen their wages or incomes go up at all, in fact, have gone down over the last decade -- or we can just sit back and do nothing. And I'm not willing to accept any of those three options. (Applause.) Whenever I talk about revenue, people start complaining about, well, is he engaging in class warfare, or why is he going after the wealthiest. Look, because I've been fortunate and people bought a bunch of my books, I'm in that category now. (Laughter.) And in a perfect world with unlimited resources, nobody would have to pay any taxes. That's not the world we live in. We live in a world where we've got to make choices. So the question we have to ask ourselves as a society, as a country, is, would you rather keep taxes exactly as they are for those of us who benefited most from this country -- tax breaks that we don't need and weren't even asking for -- or do we want construction workers and electrical workers to have jobs rebuilding our roads and our bridges and our schools? Would we rather maintain these tax breaks for the wealthiest few, or should we give tax cuts to the entrepreneurs who might need it to start that business, launch that new idea that they've got? 
Or tax breaks to middle-class families who are likely to spend this money now and get the economy moving again? This is a matter of priorities. And it's a matter of shared sacrifice. And, by the way, if you ask most wealthy Americans, they'll tell you they're willing to do more. They're willing to do their fair So it's time to build an economy that creates good, middle-class jobs in this country. It's time to build an economy that honors the values of hard work and responsibility. It's time to build an economy that lasts. And that's what this jobs bill will help us do. The proposals in the American Jobs Act aren't just a bunch of random investments to create make-work jobs. They're things we have to do if we want to compete with other countries for the best jobs and the newest industries. We have to have the most educated workers. This week, I'm going to be hosting the President of South Korea. I had lunch with him in Seoul, South Korea. He told me -- I said, what's your biggest problem? He says, "The parents are too demanding. I'm having to import teachers because all our kids want to learn English when they're in first grade." So they're hiring teachers in droves at a time when we're laying them off? That doesn't make any sense. We've got to have the best transportation and communications networks in the world. We used to have the best stuff. We used to be the envy of the world. People would come to our countries and they would say, look at -- look at the Hoover Dam, look at the Golden Gate Bridge. Now people go to Beijing Airport and they say, I wish we had an airport like that. We can't compete that way, playing for 2nd or 3rd or 4th or 8th or 15th place. We've got to support new research and new technology -- innovative entrepreneurs; the next generation of manufacturing. Any one of the business leaders here today will tell you that's true.. Our prosperity has to be built on what we make and what we sell around the world, and on the skills of our workers and the ingenuity of our business people. (Applause.) We have to restore the values that have always made this a great country -- the idea of hard work and responsibility that's rewarded; everybody, from Main Street to Wall Street, doing their fair share, playing by the same set of rules. And so, Pittsburgh, that starts now and I'm going to need your help. Your senators are voting today on this jobs bill. (Applause.) So this is gut-check time. Any senator who votes "no" should have to look you in the eye and tell you what exactly they're opposed to. These are proposals that have traditionally been bipartisan. Republicans used to want to build roads and bridges. That wasn't just a Democratic idea. We've all believed that education was important. You've got to come -- if you're voting no against this bill, look a Pittsburgh teacher in the eye and tell them just why they don't deserve to get a paycheck again and, more importantly, be able to transmit all those -- all that knowledge to their kids. Come tell the students why they don't deserve their teacher back, so now they've got overcrowded classrooms, or arts classes or music classes or science classes have been cut back. Come and look at a construction worker here in Pittsburgh or an electrical worker in the eye. Tell them why they shouldn't be out there fixing our bridges or rebuilding our schools and equipping them with the latest science labs or the latest Internet connection. Explain why people should have to keep driving their kids across bridges with pieces falling off. 
Or explain to a small business owner or workers in this community why you'd rather defend tax breaks for the wealthiest few than fight for tax cuts for the middle class. I think they'd have a hard time explaining why they voted no on this bill other than the fact that I proposed it. (Applause.) I realize some Republicans in Washington have said that even if they agreed with the ideas in the American Jobs Act, they're wary of passing it because it would give me a win. Give me a win? This is not about giving me a win. It's why folks are fed up with Washington. This is not about giving anybody a win. It's not about giving Democrats or Republicans a win. It's about giving the American people who are hurting out there a win -- (applause) -- about giving small businesses, entrepreneurs, and construction workers a win. (Applause.) It's about giving the American people -- all of us, together -- a win. I was talking to the Jobs Council -- by the way, not everybody here has necessarily voted for me. (Laughter.) But they're patriots and they care about their country. And we were talking about how, in normal times, these are all common-sense ideas. These aren't radical ideas. These are things that, traditionally, everybody would be for, particularly at a time of emergency like we're in, where so many people are out of work and businesses want to see more customers. So, for folks outside of Washington, being against something for the sake of politics makes absolutely no sense. (Applause.) It makes absolutely no sense. (Applause.) And the next election is 13 months away. The American people don't have the luxury of waiting 13 months. They don't have the luxury of watching Washington go back and forth in the usual fashion when this economy needs to be strengthened dramatically. A lot of folks are living week to week, paycheck to paycheck, even day to day. They need action, and they need action now. They want Congress to do what they were elected to do -- put country ahead of party; do what's right for our economy; do what's right for our people. (Applause.) In other words, they want Congress to do your job. (Applause.) And I've said this to some folks in the other party. I've said, I promise you, we'll still have a lot of stuff to argue about, even if we get this thing done, about the general direction of the country and how we're going to build it and how we're going to out-educate and out-innovate and out-build other countries around the world. There will be a lot of time for political debating. But right now, we need to act on behalf of the American people. So, for those of you who are in the audience, or those of you who are watching, I need you to call, email, tweet, fax, or you can write an old-fashioned letter -- I don't know if people still do that -- (laughter) -- let Congress know who they work for. Remind them what's at stake when they cast their vote. Tell them that the time for gridlock and games is over. The time for action is now. And tell them to pass this bill. If you want construction workers on the job -- pass the bill. If you want teachers back in the classrooms -- pass the bill. If you want tax cuts for your family and small business owners -- pass this bill. If you want our veterans to share in the opportunity that they upheld and they defended -- do the right thing, pass this bill. (Applause.) Now is the time to act. I know that this is a moment where a lot of folks are wondering whether America can move forward together the way it used to. And I'm confident we can. 
We're not a people who just sit by and watch things happen to us. We shape our own destiny. That's what's always set us apart. We are Americans, and we are tougher than the times we're in right now. We've been through tougher times before. We're bigger than the politics that has been constraining us. We can write our own story. We can do it again. So let's meet this moment. Let's get to work and show the rest of the world just why it is that America is the greatest country on Earth. Thank you very much, everybody. God bless you. God bless America. END 2:39 P.M. EDT ----- Unsubscribe The White House . 1600 Pennsylvania Avenue, NW . Washington DC 20500 . 202-456-1111
http://www.wikileaks.org/gifiles/docs/24/2472650_-os-remarks-by-the-president-on-the-american-jobs-act-.html
CC-MAIN-2014-42
en
refinedweb
How do I make this line display only once?

Code:
#include <iostream>
using namespace std;

void reverseInput();

void main(){
    reverseInput();
}

void reverseInput(){
    cout << "Enter 0 to end.";
    cout << "Enter a number: ";
    int num;
    cin >> num;
    int first;
    if(num != 0){
        first = num;
        reverseInput(); // will keep calling until it reaches 0
        cout << "The numbers in reverse order are ";
        cout << " " << first << ", " << endl; // now it prints from the end
    }
}

I want the line "The numbers in reverse order are "; to display only once, when I end the input by entering 0. Right now, when I end the input by entering 0, that line prints alongside each number.
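One possible fix, not from the original thread: print the heading in the branch that runs when 0 is read, so it executes exactly once before the recursion starts unwinding. A minimal sketch of that change (using int main for standard conformance):

#include <iostream>
using namespace std;

void reverseInput(){
    cout << "Enter a number (0 to end): ";
    int num;
    cin >> num;
    if(num != 0){
        reverseInput();                 // recurse first...
        cout << " " << num << ",";      // ...then print on the way back out
    } else {
        cout << "The numbers in reverse order are ";  // reached once, when 0 is entered
    }
}

int main(){
    reverseInput();
    cout << endl;
    return 0;
}

With input 1 2 3 0 this prints the heading once, followed by " 3, 2, 1," as the calls return.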
http://cboard.cprogramming.com/cplusplus-programming/96311-recursivle-function-help.html
CC-MAIN-2014-42
en
refinedweb
CosmosDataReportingComponent10 209226 Back to Data Reporting Design Contents Change History Workload Estimation '* - includes other committer work (e.g. check-in, contribution tracking) '** - documentation may span multiple release. Note an additional enhancement is associated with a programmer's guide (ER 210134) Purpose This enhancement will cover the development of a web application user interface. Requirements The following are basic requirements needed to create a web user interface. - Views are required to visualize data from some external datasource. These views are constructed from basic widgets such as tables, trees, button, etc. Expoliters should have the ability to create new views and deployed these new views within the COSMOS UI infrastructure. Furthermore, the infrastructure will provide some common used views that can be configured by exploiters. These common views are developed specifically for System Management operations. These views are further explained in enhancement 208603 (Register Visualzations with a Query Response - programmin...). - A mechanism is required to layout Views within the page. Exploiters of the infrastructure should have the ability to create different "flavors" of the ui by configuring the page layout. The ui infrastructure heavily relies on the dojo toolkit. Rather than trying to reinvent the wheel the COSMOS ui infrastructure makes use of many of the dojo toolkit design patterns. The COSMOS ui utilizes the dojo toolkit to do the following - object declaration/inheritance - pub/sub messaging - method connections - attach points - asynchronous requests The COSMOS UI design implores several dojo programming models. Familiarly with the Dojo toolkit is required to understand the COSMOS ui infrastructure. This document will not cover Dojo concepts. Design Pages Templates A page template is a concept introduced by the COSMOS UI infrastructure. These are html files that create the structure of the page. Exploiters can associate an attachpoint with a section within the page. Exploiters can then register a view to a particular attachpoint. When the page template is rendered the COSMOS UI runtime will place the view in the associated section on the page. Consider the following example: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" ""> <html> <head> <title>My User Interface</title> <script type="text/javascript" src="dojo/dojo/dojo.js" djConfig="isDebug: false, parseOnLoad: true"> dojo.require("cosmos.widget.*"); dojo.require("cosmos.utility.*"); dojo.require("dojo.parser"); </script> <style type="text/css"> @import "dojo/dojo/resources/dojo.css"; @import "dojo/dijit/themes/tundra/tundra.css"; </style> </head> <body class="tundra"> <table style="background-color:#eee" width="100%"> <tr><td> <div dojoType="cosmos.widget.WidgetContainer" attachPoint="nav"></div> </td></tr> <tr><td> <div dojoType="cosmos.widget.WidgetContainer" attachPoint="details"></div> </td></tr> </table> </body> Notice that the template file is a regular html file. This gives the exploiter alot of flexibility to layout the page. Also notice that attach points are created by declaring a "comos.widget.WidgetContainer" element and setting the "attachPoint" attribute to a tagname. This will identify these section of the page as attach points. As a result any views that are registered to any of these attach points are placed in the particular section within the page. 
It should be noted that the following lines are always required in a page template: <script type="text/javascript" src="dojo/dojo/dojo.js" djConfig="isDebug: false, parseOnLoad: true"> dojo.require("cosmos.widget.*"); dojo.require("cosmos.utility.*"); dojo.require("dojo.parser"); </script> This will enable the dojo parser and include the COSMOS and dojo javascript files. Widgets Views are rendered by creating dojo widgets. The default views provided by the COMSOS ui are all widgets that have been inherited or composed of basic dojo widgets. The default widgets are developed in javascript files and deployed under /COSMOSUI/cosmos/widgets Since these views are dojo widgets exploiters familiar with working with dojo widgets can either create these widgets programmically or declaratively. Refer to the dojo documentation to understand how this is done. Creating new widgets It is recommended that new widgets should be created using the dojo toolkit programming model. This will make deployment of the widgets easier with the COSMOS UI infrastructure. However it's not a requirement. Lets look at creating a tree widget to get an understanding how the COSMOS UI infrastructure exploits the dojo programming model. dojo.provide("my.widget.Tree'); dojo.declare( // class "my.widget.Tree", // superclass [dijit.Tree], // member variables/functions { title:'This is my tree', postCreate: function(){ alert(this.title); //call superclass my.widget.Tree.superclass.postCreate.apply(this, arguments); } } ); The above example shows a widget class that inherits the behavior of dijit.Tree. Exploiters can instantiate this new widget programmically as follows: var mywidget = new my.widget.Tree({title:"this is my title"}); The above line will create my custom widget and change the title. Note that the properties of the widget can change by passing in a javascript object. Also note that the javascript object is in JSON format which is quite nice. As a result, exploiters can configure dojo widgets by providing JSON data. The COSMOS UI infrastructure exploits this behavior. View Configuration Files The COSMOS UI introduces a notion of configuration files named as view.jprop. These configuration files will configure a particular view within the browser page. For example consider the following configuration file: { clazz: "cosmos.widget.Navigator", initialize: "MDRTreeJSON", Query: {nodeClass:'mdr*'}, initQueryHandler: "handler/json/nav.json", publish: ['properties', 'detail'] } The COSMOS UI runtime will process the above configuration file to configure a view within the page. A "cosmos.widget.Navigator" dojo widget will be instantiated and initialized with a set of properties specified in the configuration file. Therefore the above configuration file will translate to the following javascript code in the COSMOS UI runtime: var mywidget = new cosmos.widget.Navigator({initialize: "MDRTreeJSON",Query: {nodeClass:'mdr*'},initQueryHandler: "handler/json/nav.json",publish: ['properties', 'detail']}); Note that the "clazz" property is a specialized property that the COSMOSUI makes use of to determine the widget class to instantiate. Now let us look at trying to create a page that has three quadrants with different views. Let us create the following page: Note that I have a tree widget on the left hand side, a XML viewer on the top right quadrant and a properties table on the bottom right quadrant. To construct this page we first create the following report template. 
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" ""> <html> <head> <title>COSMOS Web User Interface</title> <script type="text/javascript" src="dojo/dojo/dojo.js" djConfig="isDebug: false, parseOnLoad: true"> </script> <style type="text/css"> @import "dojo/dojo/resources/dojo.css"; @import "dojo/dijit/themes/tundra/tundra.css"; .container{ width: 100%; height: 800px; background: #eeeeee; } </style> <script> dojo.require("cosmos.widget.*"); dojo.require("cosmos.utility.*"); dojo.require("dojo.parser"); dojo.require("dijit.layout.SplitContainer"); dojo.require("dijit.layout.ContentPane"); dojo.require("dijit.TitlePane"); </script> </head> <body class="tundra"> <table width="100%" border="0" cellspacing="0" cellpadding="0" class="wpsBannerEnclosure"> <tr> <td height="35" align="left" bgcolor="#F5F5F5"><img alt='Cosmos Data Visualization UI' title='Cosmos Data Visualization UI' src='views/cosmos/images/bannerLeft.gif'/></td> <td height="35" align="right" bgcolor="#F5F5F5"></td> </tr> </table> <table style="background-color:#eee" width="100%"> <tr><td style="padding: 10pt"></td></tr> </table> <div dojoType="dijit.layout.SplitContainer" orientation="horizontal" sizerWidth="10" activeSizing="false" class="container"> <div dojoType="dijit.layout.ContentPane" sizeShare="30" sizeMin="20"> <div dojoType="dijit.TitlePane" title=" Lazy Load Tree" class="navigator"> <div dojoType="cosmos.widget.WidgetContainer" attachPoint="nav"></div> </div> </div> <div dojoType="dijit.layout.SplitContainer" orientation="vertical" sizerWidth="10" activeSizing="false" class="container"> <div dojoType="dijit.layout.ContentPane" sizeShare="40" sizeMin="20"> <div class="detail" dojoType="dijit.TitlePane" title=" Details" sizeShare="60" sizeMin="20"> <div dojoType="cosmos.widget.WidgetContainer" attachPoint="detail"></div> </div> </div> <div dojoType="dijit.layout.ContentPane" sizeShare="40" sizeMin="20"> <div dojoType="cosmos.widget.WidgetContainer" attachPoint="properties"></div> </div> </div> </div> </body> Note that I use html tags and dojo widgets to construct the page. Since the page is a regular html page I can utilize existing layout widgets such as the dijit.layout.SplitContainer dojo layout container to construct my layout. I can utilize other layout containers defined by the DOJO toolkit () At certain sections within the page I add the following declartive text: <div dojoType="cosmos.widget.WidgetContainer" attachPoint="detail"></div> This above text declares a "detail" attachpoint. Any view associated with a "detail" attach point will be rendered within this page section. Now let us look at configuring a tree view and attaching the view to the "nav" attach point specified in the page template. We create the following properties file. { clazz: "cosmos.widget.Navigator", query: {nodeClass:'mdr*'}, initQueryHandler: "handler/json/nav.json", publish: ['properties', 'detail'] } Note that the "clazz" property refers to a cosmos dojo widget that is able to render a tree. This dojo widget is called "comos.widget.Navigator" and has a set of configuration properties. The following shows the properties that can be configured for the "cosmos.widget.Navigator" dojo widget: query: the cosmos.widget.Navigator widget stores it's data model as a dojo data source (). 
As a result this property defines the query string to fetch the information to show the top level nodes initQueryHandler: a url to a datafeed that provides the JSON data structure that will show the inital root nodes publish: a list of topics to publish node selection events to. Refer to the dojo event system for more information() I would save the configuration properties file as view.jprop under the views directory as follows: /views/nav/view.jprop By saving the configuration file under /views/nav this "attaches" this view to the "nav" attach point. Similarly I would create configuration files for the properties table and BIRT report view and store them under /views/properties/view.jprop and /views/detail/view.jprop respectively. The view can further be configured depending on the type of data that is presented in the view. For example, different icons can be associated with different nodes in the tree to signify certain visualization cues. As mentioned in the "Data Tagging" section, the data feeds may tag the data with metadata information. The views can uses this meta data to add visual decorators when representing the data within the view. The views can also use this metadata to change how the view behaves. For example, expanding a node in a tree may instantiate a different query object. Exploiters can define a data property file to associate a metadata tag to visual and behavioral cues. The following is an example of a data property file: { mdr:{ menuType: {clazz:"cosmos.widget.QueryMenu", query:{clazz:"cosmos.query.BasicQuery", queryHandler:"handler/cosmos/widget/Navigator/stat.json"}, label: "Create Query"}, icon: "images/cmdbf_query.gif" }, mdrcbe:{ menuType: {clazz:"cosmos.widget.QueryMenu", query:{clazz:"cosmos.query.CMDBfQuery", queryHandler:"handler/cosmos/widget/Navigator/stat.json"}, label: "sUBMITqUERY"}, icon: "images/cbe.gif", expandQuery:{clazz:"cosmos.query.BasicQuery", queryHandler:"handler/cosmos/widget/Navigator/cbe.json"} }, mdrstat:{ icon: "images/stat.gif", expandQuery:{clazz:"cosmos.query.BasicQuery", queryHandler:"handler/cosmos/widget/Navigator/stat.json"} } } The above example shows a data file that defines visual and behavioral cues for a tree widget. The tree will show the "images/cmdbf_query.gif" icon for any data tagged with a value of "mdr". Also a popup menu will be shown for any node that is visualizing data that is tagged with a value of "mdr". Data Tagging The COSMOS UI implores a RESTful Web 2.0 design pattern where the data is served by data feeds. The data feeds provide JSON datastructures. The structure of the JSON data is consumed by the view widget. Note that the JSON data structure provides a contract between the backend data and the user interface. This architecture provides a clear decoupling between the raw data and the user interface. As a result, new UI widgets, that conform to the JSON contract, can be constructed to create different visualizations. The following is an example of a JSON structure used to send a list of three nodes to a tree widget. { identifier: "object", label: "title", items:[ {tag:"mdrstat", title:"Statistical Data",object:“21"}, {tag:"mdrcbe",title:"Monitoring Data",object:"14"}, {tag:"mdr",title:"Asset Repository", object:“91"},} ] } The above example defines three objects that have three attributes (tag, title, and object). The tree widgets uses these atributes to construct nodes within the tree. 
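Following the view.jprop pattern shown above, a configuration for the properties table mentioned earlier (stored at /views/properties/view.jprop) might look like the sketch below. The widget class name, the subscribe key, and the handler path are placeholders for illustration only; they are not taken from the COSMOS code base.

{
  clazz: "cosmos.widget.PropertyTable",              // placeholder widget class
  subscribe: ['properties'],                         // assumed counterpart to the navigator's publish list
  initQueryHandler: "handler/json/properties.json"   // placeholder feed URL
}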
The title attribute signifies the name of the node, the object attribute provides a unique id and the tag attribute provides meta data information. The tag attribute is an important attribute that is processed by the COSMOS widgets. The widgets makes use of the tag attribute to determine if certain visual or behavorial cues should be applied to the node. For example, a specific icon can be rendered for data with specific tag values or different menu items can be shown in the menu bar for specific tagged data. Data Feeds The COSMOS UI infrastructure will implore a RESTful service architecture. Data models required by the DOJO widgets will be produced by JSON feeds via HTTP requests as illustrated below: Note that query objects are created to create the binding logic between the DOJO widget and the HTTP request as mentioned in the previous sections. The following is an overview how data feeds will be constructed. A request delegator will receive the request and instantiate a particular outputter that will handle the request. The request delegator will provide interfaces to deserialize the request into a set of parameters that the outputter understands. The request delegator will also provide a global store that outputters can save state data. Note that the outputter themselves are stateless and can only change the state of the store. Client browsers can make a request to a particular data feed by constructing the appropriate HTTP request. Consider the following request /COSMOSFeeds?service=/org/eclipse/cosmos/provisional/tree/json¶m1=2343245¶m2=select COSMOS will provide a single service that will receive requests from the client to generate a particular data feed. The "service" parameter will dictate the outputter to instantiate. For example, the above request will cause the request delegator to instantiate org.eclispe.cosmos.provisional.tree.json.Outputter. The delegator will pass a parameter map with two parameters: param1 and param2. Exploiters can define custom outputters and add the outputter class to the classpath of the request delegator. The following define the IOuputter, IParameters and IStore interfaces. IOutputter /** * Provides data feeds */ public interface IOutputter { /** * A resolver class that will generate unique ids. unique ids * may be required by the outputter to identify particular items in * the generated output. * @param idResolver id resolving class */ public abstract void setIdResolver(IIDResolver idResolver); /** * Writes content to a PrintWriter. An input map is passed to this method * that the render method will use to generate the data feed * @param output a PrintWriter that method will write to * @param input an input map that contains name value pairs * @throws Exception */ public abstract void render(PrintWriter output, IParameters input) throws Exception; /** * This method is called write after instantiating the outputer. A persisted storage * object is passed that outputters can use to save state * @param store a persistent storage object * @param parameters an input map that contains name value pairs * @throws Exception */ public abstract void initalize(IStore store, IParameters parameters) throws Exception; } IParameters /** * Provides a list of name value map that is used by outputters as input * parameters. 
*/ public interface IParameters { /** * Returns a parameter value with an associated key name * @param name - key name * @return value of the parameter */ public String getParameter(String name); } IStore /** * Persistent storage used by outputters to save state */ public interface IStore { /** * Returns a value from the store with provided key name * @param name - key name * @return stored value */ public abstract Object getAttribute(String name); /** * Stores a value with an associated key name * @param name - key name * @param value - value to store */ public abstract void setAttribute(String name, String value); } Error Handling Client-side error handling is discussed in enhancement 209223 Deployment Model As explained in the previous sections there are several components involved in the COSMOS UI infrastructure. These components are as follows: - Page Templates - Widgets - View Configuration Files The COSMOS UI infrastructure defines a deployment model to deploy the above components: /dojo -Directory where dojo widgets and classes are deployed /pages -Directory where pages are deployed /views -Directory to store view and data configuration files The page directory is further structured as follows: /pages/<namespace>/index.html - defines the page template /pages/<namespace>/images - defines images associated with the page template /pages/<namespace>/css - defines style sheets associated with the page template Note a namespace is associated with a page template. This allows the COSMOS ui infrastructure to server many different pages. For example I can define two diffent pages with different layout and view configurations such as: /pages/cosmos/index.html /pages/cosmosBlue/index.html To access each page I would enter the following urls respectively: The Dojo directory is further structured as follows: /dojo/cosmos - contains comsmos dojo widgets and classes /dojo/cosmos/provisional/widget - defines cosmos widgets /dojo/cosmos/provisional/utility - defines cosmos utility classes /dojo/cosmos/provisional/data - defines cosmos data classes Note that the above naming convention follows a similar pattern suggested by the Eclipse Naming document () Exploiters can deploy their own widgets and classes under the 'dojo' directory. For example, the following defines custom classes under the dojo/mywidget directory: /dojo/mywidget The view directory contains the view and data property file. The structure of the view directory is explained in the previous section.
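Tying the pieces together, here is a minimal sketch of a custom outputter that an exploiter could add to the request delegator's classpath and address via the "service" request parameter. It is an illustration only: the package name and feed content are assumptions, imports for the COSMOS interfaces are omitted because their package is not given on this page, and render simply writes a hard-coded item list in the JSON shape described under Data Tagging.

package org.example.cosmos.feeds;  // placeholder package

import java.io.PrintWriter;

public class ExampleTreeOutputter implements IOutputter {

    private IIDResolver idResolver;

    public void setIdResolver(IIDResolver idResolver) {
        this.idResolver = idResolver;
    }

    // (sic: method name as declared in IOutputter)
    public void initalize(IStore store, IParameters parameters) throws Exception {
        // Could read state saved by earlier requests; nothing needed for this sketch.
    }

    public void render(PrintWriter output, IParameters input) throws Exception {
        // "tag" drives icons/menus in the tree widget, "title" is the label,
        // and "object" is the unique id, as described in the Data Tagging section.
        output.println("{");
        output.println("  identifier: \"object\",");
        output.println("  label: \"title\",");
        output.println("  items:[");
        output.println("    {tag:\"mdr\", title:\"Asset Repository\", object:\"91\"}");
        output.println("  ]");
        output.println("}");
    }
}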
http://wiki.eclipse.org/CosmosDataReportingComponent10_209226
CC-MAIN-2014-42
en
refinedweb
Seam Book (Yuan & Heute) Hello World Example Annotation John Peters Greenhorn Joined: May 25, 2007 Posts: 18 posted Feb 11, 2008 13:27:00 0 In Michael & Thomas' Seam book, in the chapter 2 Hello World example, why is the "person" variable outjected? Doesn't the "@Name" annotation on the Person entity already make the "person" available to Seam? Thanks! SLSB: @Stateless @Name("manager") public class ManagerAction implements Manager { @In @Out private Person person; @Out private List<Person> fans; @PersistenceContext private EntityManager em; public String sayHello(){ em.persist(person); person = new Person(); fans = em.createQuery("select p from Person p").getResultList(); return null; } Entity: @Entity @Name("person") public class Person implements Serializable { private long id; private String name; @Id @GeneratedValue public long getId(){ return id; } public void setId(long id){ this.id = id; } public String getName(){ return name; } public void setName(String name){ this.name = name; } } XHTML: <body> <h:form> Please Enter your name:<br /> <h:inputText<br /> <h:commandButton </h:form> <h :D ataTable <h:column> <h :o utputText </h:column> </h :D ataTable> </body> [ February 11, 2008: Message edited by: John Peters ] Hussein Baghdadi clojure forum advocate Bartender Joined: Nov 08, 2003 Posts: 3479 I like... posted Feb 11, 2008 23:46:00 0 You use @Name to tell Seam that this is a Seam component and the component will be bijected under this name (you can override it however). To actually outject a component you need to use @Out. HTH. John Peters Greenhorn Joined: May 25, 2007 Posts: 18 posted Feb 12, 2008 06:39:00 0 Hey John, thanks for the reply. I think I'm following you, but I'd like some clarification, please: I was playing around with the SLSB and removed the "@Out" annotation on the "person" variable, so the SLSB looks like this: @Stateless @Name("manager") public class ManagerActionBean implements ManagerAction { @In private Person person; @Out private List<Person> fans; @PersistenceContext private EntityManager em; public String sayHello(){ em.persist(person); person = new Person(); fans = em.createQuery("select p from Person p").getResultList(); return null; } } Afer redeploying the EAR, everything still worked without any problems. My confusion is why did the authors annotate the "person" class (that is injected in the SLSB) with "@Out" to begin with? It appears Seam was able to create an entity bean just by using the "@Name" annotation on the entity bean itself, and the "@Out" annotation on the SLSB for the "person" class was redundant. Is this correct or am I missing something? Thanks again, for your help. I've been reading everything I can on bijection and still haven't had the epiphany I need. Hussein Baghdadi clojure forum advocate Bartender Joined: Nov 08, 2003 Posts: 3479 I like... posted Feb 13, 2008 01:40:00 0 Think of it in this way: When you annotate a Seam component with @Out, you are telling Seam that you want to store this component under some scope. Remember those lines: PurchaseOrder po = new PurchaseOrder(); session.setAttribute("purchaseOrder", po); When you are using @Out, you are doing the same thing. Alternativley, when you are using @In, you are telling Seam you want to get the object from a desired scope (context in Seam parlance) Remember this line: session.getAttribute("purchaseOrder"); ? 
In your case, yes sure, you can remove @Out from person declaration but you can't write this in your view page: <h utputText Because it is not stored under a scope, you have to write: <h utputText Actually, you have to avoid the excessive use bijection as it could harm the performance. John Peters Greenhorn Joined: May 25, 2007 Posts: 18 posted Feb 15, 2008 14:01:00 0 Sorry for the late reply. I think I'm getting it now, thanks to your help. One thing I noticed that I think helped me finally get it was that when the @Out annotation was removed from: private Person person; the text box no longer was blank after a submit. I figured out that it wasn't blank because after the "person" variable was set to a new blank person, (person = new Person() )it wasn't "pushed" back out to Seam to pick up. Let me restate it so I make sure I get it: You're using the @In and @Out to inject and outject items from the Seam contexts which are held in different scopes (session,conversation,page,etc) under names defined by the @Name annotation. When the xhtml form is submitted, it creates an entity bean (in this case, a Person)based off of how the "value" tags are set up on the "inputText". The @In annotation allows the SLSB to bring in that entity bean from the Seam context into the variable named "person" and persist it. After the entity is persisted, the "person" variable is set to a new, empty person, outjected back into the Seam context where it is rerendered on the xhtml page as a blank Person entity. I agree. Here's the link: subject: Seam Book (Yuan & Heute) Hello World Example Annotation Similar Threads datatables and selectOneMenu default Value Problem Input in a Nested table Problem with "rendered" Flag Problem with getRowData() method Datatable with scrollbar in JSF All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
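To restate the session.setAttribute analogy from earlier in the thread as code: outjection is roughly what you would get by putting the object into a Seam context by hand. This is only an illustrative fragment of the sayHello() method shown above, not from the book; it assumes Seam 2's org.jboss.seam.contexts.Contexts API and that the stateless bean outjects to the event context by default.

import org.jboss.seam.contexts.Contexts;

// Inside ManagerAction.sayHello(), without @Out one could write roughly:
public String sayHello() {
    em.persist(person);
    person = new Person();
    fans = em.createQuery("select p from Person p").getResultList();

    // Approximately what @Out does behind the scenes:
    Contexts.getEventContext().set("person", person);  // like session.setAttribute(...)
    Contexts.getEventContext().set("fans", fans);
    return null;
}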
http://www.coderanch.com/t/60852/oa/Seam-Book-Yuan-Heute-World
CC-MAIN-2014-42
en
refinedweb
Microsoft Responds To "Like OS X" Comment samzenpus posted more than 4 years ago | from the imitation-is-the-greatest-form-of-flattery dept. (5, Insightful) sopssa (1498795) | more than 4 years ago | (#30071330) Random person thinks he knows everything, grows an ego and tells "juicy" stuff to press to boost that said ego while actually knowing nothing. Nothing to see here. But I suspect lots of Linux/Mac OSX fanatics will be coming in 3.. 2.. 1.. Re:ego (5, Funny) Anonymous Coward | more than 4 years ago | (#30071388) (5, Insightful) L4t3r4lu5 (1216702) | more than 4 years ago | (#30071772) For every Off-topic mod you get, you'll be almost guaranteed one Insightful mod. As long as you're against Jack Thompson. Which I am! Re:ego (5, Funny) zmollusc (763634) | more than 4 years ago | (#30071394) ..0 .... OMGBBQ!!!!! Gnome is bettar than both!!!!! and anyway it all comes from PARC work blah blah GEM blah blah Amiga blah Re:ego (1, Insightful) OscarGunther (96736) | more than 4 years ago | (#30071484) Who mod parent a troll?! Have you no sense of humor, sir? Re:ego (5, Funny) Mitchell314 (1576581) | more than 4 years ago | (#30071516) Maybe it's a KDE user who did it. Re:ego (5, Funny) Anonymous Coward | more than 4 years ago | (#30071638) (-1, Redundant) cadeon (977561) | more than 4 years ago | (#30071758) Please Mod Parent Troll. Re:ego (0) Anonymous Coward | more than 4 years ago | (#30071888) Please Mod Parent Troll. Re:ego (0, Offtopic) danbert8 (1024253) | more than 4 years ago | (#30071940) Why waste the mod points? He's an AC and he's at 0, if you don't like reading shit, raise your threshold. Re:ego (0, Flamebait) coinreturn (617535) | more than 4 years ago | (#30071408) Re:ego (4, Funny) Jarik C-Bol (894741) | more than 4 years ago | (#30071458) [penny-arcade.com] Re:ego (4, Funny) gbjbaanb (229885) | more than 4 years ago | (#30071498) so did xkcd. [xkcd.com] Re:ego (5, Insightful) sitarlo (792966) | more than 4 years ago | (#30071456) Save face? (4, Insightful) professorguy (1108737) | more than 4 years ago | (#30071574) They ain't trying to save face. They are trying to save a lawsuit loss (i.e., money). Re:Save face? (1) L4t3r4lu5 (1216702) | more than 4 years ago | (#30071798) Re:ego (0, Flamebait) RedK (112790) | more than 4 years ago | (#30071478) Or you know, this guy just let out the big dirty secret and in an attempt to save face, the "Windows team" puts out an official response that claims the contrary even though at this point it's pretty obvious to anyone with 1 functionning eye, trying to kill the first guy's credibility in order to sweep all of this under the rug. The end the night by sucking their collective thumb and weeping for their mommies to "make it all go away". See, anyone can say anything about it. The few people who know the actual truth (the first guy and the Windows team) won't ever tell us the real truth. Re:ego (1) gbjbaanb (229885) | more than 4 years ago | (#30071542) in other news, Microsoft employee says "iPhone is better than WinMobile", cue Microsoft fanbois to criticise employee and distract everyone from the frickin' obvious. Re:ego (1) socsoc (1116769) | more than 4 years ago | (#30071790) iPhone is better than WinMobile There's nothing new or newsworthy about that statement... Re:ego (1) loupgarou21 (597877) | more than 4 years ago | (#30071616) I'm not so sure that he was really looking to juice his ego. I'm guessing that he probably knew exactly what he was talking about, but his phrasing was very poor. 
It's not that they were trying to copy Apple with the redesigned UI, they probably did a lot of testing and probably interviewed a lot of computer users about what they do and don't like about the UIs of various operating systems, and probably got a lot of comments, especially from Mac users, about how the average user wants a very simplified user interface where the things they use frequently are easily accessible to the user, and the more complex things, like computer settings are somewhat hidden from the user, harder to get to and accidentally change. So while redesigning the UI, they tried to take that philosophy of a more user-centric UI, not that they were trying to copy the Mac OS interface. Put aside the ego... (1) h4rm0ny (722443) | more than 4 years ago | (#30071672)! Inaccurate, uninformed and soon... (0) Anonymous Coward | more than 4 years ago | (#30071344) ... unemployed. Re:Inaccurate, uninformed and soon... (1) turing_m (1030530) | more than 4 years ago | (#30071390) sell:nike air max jordan shoes,coach,gucci,handbag (-1, Offtopic) coolforsale (1677136) | more than 4 years ago | (#30071364) Something About Bill (0) Anonymous Coward | more than 4 years ago | (#30071396) Bill: "STEP INTO MY OFFICE!" Idiotic Microsoft Employee: "Why?" Bill: "Cause you're fuckin' fired!" they've been copying Mac all along... (1) wvmarle (1070040) | more than 4 years ago | (#30071404) Re:they've been copying Mac all along... (5, Insightful) JerryLove (1158461) | more than 4 years ago | (#30071480):they've been copying Mac all along... (1) Shrike82 (1471633) | more than 4 years ago | (#30071546):they've been copying Mac all along... (1) Mitchell314 (1576581) | more than 4 years ago | (#30071550) Re:they've been copying Mac all along... (0) Anonymous Coward | more than 4 years ago | (#30071628) If Xerox STAR had never been created, the differences between PCs today would be the clickiness of keyboards and whether or not your display could handle 132-column mode. On the upside, these PCs would STILL be running terminal emulation most of the time. Billions of reboots would have been prevented. Even better, Microsoft's involvement would be limited to a nifty BASIC interpreter, long since obsoleted by other languages. Best of all, mainframes would still be king of the hill and IT would be a great career option. As soon as we can build a time travel device, I propose we go back and eliminate Xerox STAR just to see what happens. Re:they've been copying Mac all along... (1) Dolohov (114209) | more than 4 years ago | (#30071656) More to the point: what on earth is wrong with copying good interface ideas? As a Microsoft stockholder I'd be far more upset if they *didn't* look at Mac OS when designing Windows! Re:they've been copying Mac all along... (0) Anonymous Coward | more than 4 years ago | (#30071712) So Mac copied Xerox Star... Says the person whose never seen a Xerox Star. Yes, there are a few similarities, but nothing like Win-Mac. Re:they've been copying Mac all along... (0) Anonymous Coward | more than 4 years ago | (#30071858) Re:they've been copying Mac all along... (1) digitalhermit (113459) | more than 4 years ago | (#30071868) would the, uh, hi-fi system. And for almost every action I would need to check the junk desk drawer on the bottom left of my screen where I'd find everything else that I needed. 
Of course, I could start moving things from the drawer to my desktop but in a few days it would be so cluttered that it would be difficult to find anything, especially since the calculator would be the exact same size as my television and my notepad and my journal (that is to say, about 0.75" wide and tall). We might as well have chosen a steering wheel metaphor or a buggy whip metaphor. Our interfaces seem to border on the ridiculous. On this laptop I'm using right now, there are a dozen extra buttons for media, wi/fi, hibernation, home (not sure what that one does, but it has an image of a house on it). They're all tiny buttons, less than a centimeter square. There are lots of LEDs too. There's no "Check Engine" light though, or a fuel gauge though. But that would be more useful than a hard drive busy light to me. Why do I have to go to three menus to increase the font size in a document? Hell, I'd like to be able to messy select a line of text and pinch expand the font size. I want to be able to move text around by dragging (some apps can do this). I want consistent behaviour in my web browser as in my document editor. If I want to cut an image from my screen and save it to a file, I shouldn't have to launch two applications to do it (and I don't mean some Alt-PrtScr that saves a bitmap to my desktop but a way to lasso select almost *anything* in a vector format image). The thing is, we have the hardware power but the interfaces are so clunky that using the power is difficult. Underwriters (0, Troll) SgtChaireBourne (457691) | more than 4 years ago | (#30071624) And in 2003: And in 2005: It is truly bizarre that average people allow the shills to make noise promoting such incompetence. Look at their search engine payment bug [softpedia.com] and you are reminded yet again what kind of people they must scrape the bottom of the barrel to get. Not just known-nothings, but fresh-out-of-school ones at that. Sadly that scam has gone on for a generation. What happens if they get into schools or colleges and start posing as staff or faculty?? Re:Underwriters (1) MetalPhalanx (1044938) | more than 4 years ago | (#30071788) . Things not to do if you like your job (5, Insightful) Random5 (826815) | more than 4 years ago | (#30071410) Re:Things not to do if you like your job (1) jDeepbeep (913892) | more than 4 years ago | (#30071612). (1) DoctorNathaniel (459436) | more than 4 years ago | (#30071636) Always a classic screw-up. Re:Things not to do if you like your job (4, Funny) L4t3r4lu5 (1216702) | more than 4 years ago | (#30071816) What Apple does right (5, Interesting) BadAnalogyGuy (945258) | more than 4 years ago | (#30071418) (1) Shivetya (243324) | more than 4 years ago | (#30071466) they seem to go the other with iTunes. I still see no reason for Apple to not allow sizing windows from any corner, let alone hiding/moving the Apple bar at top. Re:For everything Apple does one way (3, Insightful) Mitchell314 (1576581) | more than 4 years ago | (#30071560) Re:What Apple does right (4, Insightful) Procasinator (1173621) | more than 4 years ago | (#30071482):What Apple does right (2, Informative) Dupple (1016592) | more than 4 years ago | (#30071548) Re:What Apple does right (2, Funny) DarthBart (640519) | more than 4 years ago | (#30071588) Right click? What is this right click you speak of? Re:What Apple does right (1) wolrahnaes (632574) | more than 4 years ago | (#30071894) input, but in both cases they have missed the mark. 
The "Mighty Mouse" scroll ball was too tiny to be of any use and right-clicking required lifting your left finger off the mouse surface entirely. The new "Magic Mouse" solves the scroll ball issue, but for some reason still requires lifting the left to click with the right. Not an Apple hater though, this post typed from a Macbook Pro with an Apple aluminum keyboard, but with a Logitech G5 handling the mousing duties. I love the platform, just wish King Jobs would get his head out of his ass regarding the single button thing. Re:What Apple does right (2, Interesting) Procasinator (1173621) | more than 4 years ago | (#30071642):What Apple does right (1, Troll) joh (27088) | more than 4 years ago | (#30071698) You press Control-F2 and use the cursor keys to get to Some Option. Re:What Apple does right (3, Informative) Procasinator (1173621) | more than 4 years ago | (#30071736) Which is slower, as I mentioned in a reply to another poster who brought this up. Might not be important to some people, but to me, it's a feature I miss in Mac OS X land. Re:What Apple does right (1, Informative) Anonymous Coward | more than 4 years ago | (#30071886)? Control-F2 to give the menu bar keyboard focus, then use the arrow buttons or first letters of the menu items. Check out the Keyboard pane in the system preferences for other keyboard navigation options. (I found this in less than three minutes, by the way; it's amazing what one can figure out, when one is more interested in learning than complaining.) Re:What Apple does right (4, Informative) gtomorrow (996670) | more than 4 years ago | (#30071726):What Apple does right (1) gtomorrow (996670) | more than 4 years ago | (#30071822) OOOPS! My reply was meant for the GP and not parent poster...ehmmm, yeh. Insomma, not for Dupple but for Procasinator. Re:What Apple does right (3, Informative) TheRaven64 (641858) | more than 4 years ago | (#30071608) (1) Procasinator (1173621) | more than 4 years ago | (#30071688) This is my problem - I do use this manner. It's handy because I don't have to learn the various different short cuts accross different applications. It also allows me to explore the various commands quickly in a new application or get to commands without shortcuts without leaving my keyboard. control-F2 is something, but it's more keyboard presses to be worth it. As in Control+F2, right, right, right, there is my menu option. So it doesn't allow quick access to actions or exploration without using the mouse. I know I can configure short cuts to actions I often access, but tbh, I prefer not having too. Re:What Apple does right (5, Informative) TheRaven64 (641858) | more than 4 years ago | (#30071802) Re:What Apple does right (2, Insightful) Procasinator (1173621) | more than 4 years ago | (#30071934) Re:What Apple does right (2, Informative) joh (27088) | more than 4 years ago | (#30071646) (3, Informative) caseih (160668) | more than 4 years ago | (#30071938):What Apple does right (0, Troll) mdwh2 (535323) | more than 4 years ago | (#30071580) Microsoft wants things to be orthogonal, logical, menu driven, hierarchical, and otherwise fully featured. Apple takes the approach that the user doesn't want to fuss with all sorts of menus and submenus (no two button mouse for years!) MS have dropped the menu approach (think Office) - but personally I prefer the menu approach. And Apple's OSs have had menus for years, anyway. Apple applications still make use of two buttons, which you have to clumsily press a control key to access. 
applications which do not necessarily have any UI themes in common with each other. No, it's Apple who are the worst offenders here - just look at how Quicktime and Itunes on Windows completely fail to comply with the Windows UI standards. In my experience, Quicktime and Itunes are the worst UIs I've encountered - anything but elegant. I have trouble finding out how to do simple tasks in Itunes (e.g., getting it to recognise updated mp3 ID tags). Only yesterday, I plugged someone's Ipod into my computer so we could watch something - only to find the software had renamed files into random garbage, distributed across randomly named folders in no apparent logical order. We had to guess via file sizes, and try every single one until we came across it. Apple, it Just Works! And what does "elegant" even mean? What's your objective definition, and your evidence for this assertion? As always, subjective assertions without evidence get modded up simply because they are pro-Apple, whilst I bet I - even though I give clear examples and evidence - will get modded down, simply because these facts do not fit with an Apple moderator's worldview (how does moderation work these days, anyway? I haven't had any for years, and it seems they're only given out to those who mod up pro-Apple posts these days...) Microsoft is doing a lot to emulate Apple. And frankly, it's about time. God, I hope not. And with "Macs" these days being Apple branded PCs, I'd say the reverse is true. Re:What Apple does right (2, Informative) Mitchell314 (1576581) | more than 4 years ago | (#30071598) Re:What Apple does right (1) Anonymous Coward | more than 4 years ago | (#30071706) > Apple's interface is elegant but inflexible. Everything fits into the existing scheme and runs perfectly within that scheme. Bullshit. Just look at the "zoom button" debacle on OSX. There is no "maximize window" functionality. The little green button with a "+" in it often makes the window *smaller* - or minimizes it (in the case of iTunes). The OS is filled with these massive problems because it has been simplified to the point of being retarded. Re:What Apple does right (1) jordibares (1276026) | more than 4 years ago | (#30071714) Re:What Apple does right (1) Xest (935314) | more than 4 years ago | (#30071774) is relevant in the context of the actions being carried out for their applications too. So the question is, whilst to some people like you and I the simplified context relevant system seems better, is there an underlying reason many others hate it? Do they simply dislike change? or is there something else there, like a context based system being more confusing for them because things aren't always where they were? For what it's worth though I actually hate many of the Windows 7 changes, the new gadget system is appalling compared to the sidebar. Gadgets are useless because they're either on the desktop, out the way, and you have to explicitly switch to the desktop to see them in which case if you have to explicitly switch they may as well just be applications or alternatively they can be set to be always on top which means they obscure any windows you're working with underneath them. The sidebar ensured this wasn't a problem by allowing Windows to resize around the sidebar meaning they were both always on top, always available and yet never in the way. 
I also found the taskbar changes unhelpful on a large screen, although it's great on the small screen of my netbook where taskbar space is limited, but on my 24" screen at 1900x1200 the new system only uses up about 20% of the length of the taskbar and yet I have to take extra clicks to find the window I want because they're all hidden in their groups. I reverted back to the classic taskbar where the Window I want is available instantly by using the full taskbar. I even find the start menu since Vista much less efficient to navigate too in all honesty, if you don't type in the name of the program and want to click through because you don't know what icon was added the pre-Vista start menu was far more efficient. As I say though, I do like Microsoft's ribbon interface. For me it's all about the speed and efficiency at which I can work, and much of the Windows Vista / 7 UI changes seem to add the amount of mouse movement and clicks I need to make, the Ribbon UI however does not as it puts what I need right in front of me when I need it. Re:What Apple does right (0) Anonymous Coward | more than 4 years ago | (#30071812) Windows' interface is flexible but clumsy. While this has gotten much better in later versions, we're still looking at deeply nested menus, and applications which do not necessarily have any UI themes in common with each other. Please, please do not try to hang "applications which do not necessarily have any UI themes in common with each other" on Windows alone. Any GUI operating system that allows skinning of applications, whether by design, or by sheer bloodyminded overriding of low-level user interface drawing routines, will eventually have this problem as soon as some programmer decides that he doesn't like the "standard, boring, old window style" and he then proceeds to inflict his new "vision" of what the interface should look like on the user. Then you wind up with non-rectangular windows in garish colors of the programmer's choosing with buttons that don't look like buttons, no visible menu bar, and few, if any, of the features users are used to seeing in their application windows. Re:What Apple does right (1) zmollusc (763634) | more than 4 years ago | (#30071892) try a more modern mac and see how things have altered. So? (5, Insightful) war4peace (1628283) | more than 4 years ago | (#30071436) We are living in a twisted, perverted world, where one can't express an opinion without being beheaded by both the press and the company he's working for. God help us all! Hi (5, Funny) Anonymous Coward | more than 4 years ago | (#30071440) I'm a Mac and Windows 7 was MY idea News of "staff restructuring"... (0) Anonymous Coward | more than 4 years ago | (#30071442) ...coming to the guy in 3... 2... 1... If only.... (-1, Troll) Anonymous Coward | more than 4 years ago | (#30071444) Re:If only.... (0) Anonymous Coward | more than 4 years ago | (#30071494) employee who 'inaccurate and uninformed' (4, Insightful) hibernia (35746) | more than 4 years ago | (#30071462) Re:employee who 'inaccurate and uninformed' (0) Anonymous Coward | more than 4 years ago | (#30071596) Mistaken is the Official Rebuttal not the comment (1) viraltus (1102365) | more than 4 years ago | (#30071468) Hello Streisand (3, Insightful) je ne sais quoi (987177) | more than 4 years ago | (#30071474):Hello Streisand (2, Insightful) bruno.fatia (989391) | more than 4 years ago | (#30071644). If this is true... (0, Troll) sitarlo (792966) | more than 4 years ago | (#30071476) Re:If this is true... 
(4, Insightful) recoiledsnake (879048) | more than 4 years ago | (#30071510) Windows 7 is still clunky, slow, and unstable. Citation needed. I use Windows 7 and it's certainly not one of those. Re:If this is true... (-1, Troll) sitarlo (792966) | more than 4 years ago | (#30071600) Re:If this is true... (2, Informative) kannibal_klown (531544) | more than 4 years ago | (#30071514) this is true... (0) Anonymous Coward | more than 4 years ago | (#30071554) You don't know how to use it properly then... I am no Windows fan, I have Snow Leopard, Windows 7, XP and Linux (Suse, RedHat, CentOS and unbreakable Linux) all on differing machines... All in all, Windows 7 has been stable as a rock on my machine...no problems to report apart from lacking Samsung Scanner Drivers...not MS' fault. This is not like OS X! (5, Funny) zebslash (1107957) | more than 4 years ago | (#30071490) Microsoft has issued an official rebuttal: "We never used OS X as a source of inspiration in the design of Windows 7. This is completely uninformed. We used KDE 4 instead". Ideas don't occur in a vacuum (3, Insightful) Interoperable (1651953) | more than 4 years ago | (#30071506) Hi, my name is Steve Jobs... (1, Redundant) DaRanged (735002) | more than 4 years ago | (#30071532) Should've named it Vista7 or Vista-II instead.. (1) jkrise (535370) | more than 4 years ago | (#30071534):Should've named it Vista7 or Vista-II instead.. (1) smitty777 (1612557) | more than 4 years ago | (#30071658) Not sure about that. I think they're trying to distance themselves from Vista. I do agree with you in concept, tho. Re:Should've named it Vista7 or Vista-II instead.. (1) Xest (935314) | more than 4 years ago | (#30071860) earmed from it's crappy earlier releases. That was close (1) lyinhart (1352173) | more than 4 years ago | (#30071540) Linux users (-1, Troll) Anonymous Coward | more than 4 years ago | (#30071556) Are butt hurt that they cant /dev/null their /etc/fstab Paging Mr. Balmer (1) m0s3m8n (1335861) | more than 4 years ago | (#30071578) Defenseable (0) Anonymous Coward | more than 4 years ago | (#30071586) Sounds to me like the "Liar, liar, pants on fire defense" I'm a Mac (1, Redundant) Jezza (39441) | more than 4 years ago | (#30071594) I'm a Mac, and Windows 7 was my idea! I agree with MS (-1, Troll) Anonymous Coward | more than 4 years ago | (#30071630) Bad Analogy (courtesy MS) (1) smitty777 (1612557) | more than 4 years ago | (#30071632) FTA: "When the sun is shining there’s no incentive to change the roof on your house. It’s only when its raining that you realise there’s a problem." Ahem....um...so I guess by rain, you mean some sort of Katrina like attention getter? Sheesh... Look and Feel (2, Interesting) Adrian Lopez (2615) | more than 4 years ago | (#30071734). "built on that very stable core Vista technology" (0) Anonymous Coward | more than 4 years ago | (#30071750) From the article: "it’s built on that very stable core Vista technology, which is far more stable than the current Mac platform, for instance." Apple's development model, for years, has been to perpetually tweak and improve on their existing operating system code. Not to mention it's Unix, which has been around since the dinosaurs. He even says in the article that XP was completely rebuilt for Vista, which was then gutted again for this new Vista2. He wants to talk about stability? Why am I surprised? M$, You Stupid Fools (-1, Troll) Anonymous Coward | more than 4 years ago | (#30071814) Deny, deny, deny... 
it doesn't change the fact that M$ ripped most of its UI improvements directly from OS/X. Imagine if someone did that to M$... you'd be sued into oblivion. M$ you suck. Sounds like.... (1) SendBot (29932) | more than 4 years ago | (#30071900) sounds like someone doesn't want to get sued by apple for defamation. Someone got called out (2, Insightful) onyxruby (118189) | more than 4 years ago | (#30071912). If you believe in... (0) Anonymous Coward | more than 4 years ago | (#30071962) evolution. Then the chances of a random chain of events leading to Windows 7 looking like OS X is possible. It might even be the only explanation.
http://beta.slashdot.org/story/127122
CC-MAIN-2014-42
en
refinedweb
When you think Ruby, what is your first association? Oh, it's Rails? Hmm. And what do you think of as the defining Ruby feature? Monkey patching? Yep, me too. Programmers have stronger and more heated opinions about monkey patching than adolescent girls have about glitter. Globally modifying class functionality at runtime? Why yes, that does sound dangerous. With a little imagination you can imagine a doomsday scenario where two libraries wrestle with each other continually to replace some piece of functionality with their own preferred version, and with each release the library maintainers modify their code to move their library back to the top of the heap1. However, monkey patching does provide a solution to an important problem: dynamically adding functionality to a class at run-time. Over in Python land we don't have a great solution to this problem, and we've kind of decided we're okay with that: better not to have a zoo than have the lions occasionally escape and gnaw on small children. However, there is a great solution to this problem. One that allows dynamically adding functionality to a class--nay, to classes--while respecting namespaces and not mucking up the reasonable assumptions in other peoples' code (primarily the assumption that a function won't, without warning, suddenly begin behaving differently, which is a rather important assumption for writing even moderately deterministic code). Like many great ideas that have slowly seeped into modern programming languages, this idea comes from Lisp--more specifically, Common Lisp--but this one in particular hasn't yet resurfaced in a mainstream language. The solution? Generic functions (also called multi-methods) like those in CLOS. Let's do a few examples of what Python might be like if it had generic function based OO. class Person(object): name = None title = None def greet(Person p): return "Hello %s %s" % (p.title, p.name) So the key difference is that some parameters are typed with a class. Let's say that anything that isn't explicitly typed can be of any type, so we're using an eclectic mix of strong typing and duck-typing (similar to Objective-C). The first advantage we get is that we can add functionality to a class from any module, not just in its class declaration. For example, if the above code is in a module named PersonModule, then we could write this code in another module: from PersonModule import Person, greet def farewell(Person p): return "Goodbye %s %s" % (p.title, p.name) def greet_goodbye(Person p): print greet(p) + ", " + farewell(p) p = Person() p.name = 'Will' p.title = 'Mr.' print greet(p) # prints "Hello Mr. Will" print farewell(p) # prints "Goodbye Mr. Will" So we can add methods to an object declared in a different module, but what if we want to override an object's default behavior in our module? Easy as pie. from PersonModule import Person, greet_goodbye def farewell(Person p): return "k thnx bai %s" % p.name p = Person() p.name = 'Will' p.title = 'Mr.' greet_goodbye(p) # prints "Hello Mr. Will, k thnx bai Will" So that's kind of cool, right? Sure, I'm pulling this out of thin air and can't even prove to you that what I'm suggesting is possible, but this is how OO works with CLOS. There is a working precedent. And it's awesome. The joy of generic functions goes a bit further than this as well, giving us some functionality that feels similar to the type matching found in Erlang or Scala. Consider this code: class Person: name = None class Dog: name = None def speak(a): 'If none of the more specific patterns match, falls back here.'
print "This is a %s" % a def speak(Person p): print "My name is %s" % p.name def speak(Dog d): print "Woof" def walk(Person p, Dog d): print "%s takes %s for a walk" % (p.name, d.name) def walk(Dog d, Person p): print "%s cannot walk %s" % (d.name, p.name) def walk(Person p1, Person p2): print "Um. That's weird." a = Person() a.name = "Will" b = Dog() b.name = "Leo" c = Person() c.name = "Jim" walk(a,b) # "Will takes Leo for a walk" walk(b,a) # "Leo cannot walk Will" walk(a,c) # "Um. That's weird." For those who have used 'real' pattern matching, this will seem like a very cumbersome syntax, but fortunately generic methods don't just give us pattern matching; along with the heavier syntax come heavier capabilities: - Rather than only matching on types, the ability to use classes for pattern matching as well. (You could also phrase this as: classes are a valid type for matching.) That includes giving us polymorphism by specializing methods on increasingly specific classes (Programmer instead of Person, PerlProgrammer instead of Programmer). - Ability to override handling of a specific pattern, without rewriting the rest of the patterns. That said, it doesn't necessarily allow for all of the functionality of pattern matching, because you can't customize the order in which it will attempt to match the pattern (it will always go from more specific to more general in a predictable and consistent order, while pattern matching will let you define any ordering your heart desires)2. Also, depending on the implementation it might not allow specializing on values (instead of just types), but it would be fairly intuitive to add value-specialization to the syntax: def add(a,b): return a + b def add(a, 0): return a That would open up some pretty interesting doors. Doors I would like to walk through. (Actually, looks like those doors are already discussed here, and opened here. Now I just need to start walking. Err, and I'd like the syntax to be native. Maybe a pre-compiler of some sort is in order.) This sounds stupid, doesn't it? It is, and the tragedy is that something comparable occurs all the time in the worlds of anti-virus and toolbars.↩ Depending on the specific implementation (basically, where one establishes the trade-off between explicitly importing all multi-methods you'll use versus them being implicitly imported when you load a module to save typing), it is possible to create a predictable system for multiple inheritance, although at the expense of many, many more imports.↩
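For what it's worth, the multi-method idea above can be approximated in ordinary, present-day Python without any new syntax. The sketch below is illustrative only: the multimethod decorator and its registry are hand-rolled for this post (the standard library's functools.singledispatch, where available, only dispatches on the first argument's type), and the lookup order is a simplification of what CLOS really does.

import itertools

_registry = {}

def multimethod(*types):
    """Register one implementation of a generic function for the given argument types."""
    def register(fn):
        table = _registry.setdefault(fn.__name__, {})
        table[types] = fn

        def dispatcher(*args):
            # Walk each argument's method resolution order so that subclasses
            # fall back to implementations registered for their parent classes.
            for combo in itertools.product(*(type(a).__mro__ for a in args)):
                impl = table.get(combo)
                if impl is not None:
                    return impl(*args)
            raise TypeError("no %s() implementation for %r"
                            % (fn.__name__, tuple(type(a).__name__ for a in args)))
        return dispatcher
    return register

class Person(object):
    def __init__(self, name):
        self.name = name

class Dog(object):
    def __init__(self, name):
        self.name = name

@multimethod(Person, Dog)
def walk(p, d):
    return "%s takes %s for a walk" % (p.name, d.name)

@multimethod(Dog, Person)
def walk(d, p):
    return "%s cannot walk %s" % (d.name, p.name)

@multimethod(Person, Person)
def walk(a, b):
    return "Um. That's weird."

print(walk(Person("Will"), Dog("Leo")))     # Will takes Leo for a walk
print(walk(Dog("Leo"), Person("Will")))     # Leo cannot walk Will
print(walk(Person("Will"), Person("Jim")))  # Um. That's weird.

Because every implementation registers into a table keyed by function name and argument types, new cases can be added from any module without touching the existing ones, which is exactly the property argued for above.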
http://lethain.com/the-subtle-joys-of-generic-methods/
CC-MAIN-2014-42
en
refinedweb
Online Assistants on E-Commerce can be Helpful assistant is really very helpful. You can easily employ ghost writers or even...Online Assistants on E-Commerce can be Helpful The task is tedious when... assistants on E-commerce which may prove very helpful and productive. The online Struts validation not work properly - Struts Struts validation not work properly hi... i have a problem with my struts validation framework. i using struts 1.0... i have 2 page which...) { this.address = address; } } my struts-config.xml Tomcat Quick Start Guide Tomcat Quick Start Guide  ... fast tomcat jsp tutorial, you will learn all the essential steps need to start... that work with Java inside HTML pages. Now we can take any existing HTML Struts Reference Struts Reference Welcome to the Jakarta Online Reference page, you will find everything you need to know to quick start your Struts... Struts Validation framework with example Struts Tiles example Struts validator framework work in Struts validator framework work in Struts How does validator framework work in Struts Struts Books started really quickly? Get Jakarta Struts Live for free, written by Rick Hightower... for Struts applications, and scenarios where extending Struts is helpful (source code... it. Instead, it is intended as a Struts Quick Start Guide to get you going. Once you Java Kick Start - Java Beginners feedback. Im very muck exicted to work with java programming. Thank You... of examples related to different technologies like, jsp, servlet, struts, ejb... in core java, collections then you can start with JSP and Servlets. http tiles - Struts Struts Tiles I need an example of Struts Tiles Java Web Start and Java Plug-in of cache for Java Web Start or Java Plug-in will no longer work. Existing applications... Java Web Start Enhancements in version 6  ... Java Web Start should check for updates on the web, and what to do sing validator framework work in struts sing validator framework work in struts How does client side validation using validator framework work in struts Struts Quick Start Struts Quick Start Struts Quick Start to Struts technology In this post I will show you how you can quick start the development of you struts based project... of the application fast. Read more: Struts Quick Start Struts Console visually edit Struts, Tiles and Validator configuration files. The Struts Console... Struts Console The Struts Console is a FREE standalone Java Swing How about this site? Java services What is Java WebServices? If you are living in Dallas, I'd like to introduce you this site, this home security company seems not very big, but the servers of it are really good. Dallas Alarm systems | Site Map | Business Software Services India Struts 2.18 Tutorial Section Struts 2.1.8 | Struts 2.1.8 Features | Downloading and installing... with Struts Tiles | Using tiles-defs.xml in Tiles Application | Struts Useful Negotiation Tips on Outsourcing, Helpful Negotiation Tips that one or two issues need more time to be sorted out. Transfer the work.... Moving the work to the outsourcer before the contract is signed gives... consultants may work for their own vested interests rather than help you How does Social Media Marketing Work Marketing really works for your website. How does Social Media Marketing Work...How does Social Media Marketing Work In this section we will understand.... and then the friends list. Try to create long friend list. 
Then you can start where to start - Java Beginners is that I shall start practicing from the day one to make myself get the knowledge of java.Shall i get a series of questions or programmes for a beginner to work on. Hi, Thanks for using RoseIndia.net You can start learning java tomcat server start up error - Struts tomcat server start up error Hai friends..... while running... org.apache.catalina.core.StandardService start INFO: Starting service Catalina Sep 5, 2009 4:49:08 AM org.apache.catalina.core.StandardEngine start INFO: Starting Servlet Engine: Apache MyEclipse Hibernate Tutorial This tutorial is helpful to understand how hibernate work in MyEclipse Submitting Web site to search engine in the search, you will start getting hits to your site without and cost. ... Registering Your Web Site To Search Engines... Web Sites Once your web site is running, the next job for you Struts Articles , but the examples should work in any container. We will create a Struts plugin class... popular Struts features, such as the Validation Plug-In and Tiles Plug... portal is in leveraging Struts Tiles. Portals are, in essence, a set Successful Tips for Offshore Outsourcing,Helpful Tips of Success Offshore Outsourcing , it is not a cakewalk to get your work done through a company that operates from a different... progress of the work. Anticipate Delays One of the most important steps in planning Struts Tutorials is provided with the example code. Many advance topics like Tiles, Struts Validation... how to start and stop Tomcat, install Struts, and deploy a Struts application... great tutorials posted on this site and others, which have been very helpful which tags work on which browsers which tags work on which browsers Is there a site that shows which tags work on which browsers Struts - Framework /struts/". Its a very good site to learn struts. You dont need to be expert...Struts Good day to you Sir/madam, How can i start struts application ? Before that what kind of things necessary Hibernate Tools Update Site Hibernate Tools Update Site Hibernate Tools Update Site In this section we... Site. The anytime you can user Hibernate Tools Update Manager from your eclipse import package.subpackage.* does not work to this Java Web Start Enhancements in version 6 Start or Java Plug-in will no longer work. Existing applications cache... Java Web Start Enhancements in version 6  ... supported. It describes the applications preferences for how Java Web Start Struts + HTML:Button not workin - Struts Struts + HTML:Button not workin Hi, I am new to struts. So pls... in same JSP page. As a start, i want to display a message when my actionclass...: http struts for clarifying my doubts.this site help me a lot to learn more & more technologies like servlets, jsp,and struts. i am doing one struts application where i... the following links: Struts - Framework , Struts : Struts Frame work is the implementation of Model-View-Controller...Struts Good day to you Sir/madam, How can i start struts application ? Before that what kind of things necessary PHP and MySQL Work Well Together to upload items or comments on a site. The MySQL database can work especially well... important for anyone to use. The first thing to know is that the PHP script can work well for a dynamic site. It can help to understand this by seeing Struts Works the container gets start, it reads the Struts Configuration files and loads... gets start up the first work it does is to check the web.xml file and determine... 
How Struts Works   java - Struts give me idea how to start with Hi Friend, Please clarify what do you want in your project. Do you want to work in java swing? Thanks If statement doesn't work ,(doesn't print alert message when user dont field name and email) a download when a user hit click jf he fields name and email but doesn"t work for my site...If statement doesn't work ,(doesn't print alert message when user dont field...'; $subject = 'Message from a site visitor '.$field_name; $bodymessage = 'From Offshore Outsourcing Tips,Useful Offshore Outsourcing Tips,Helpful Outsourcing Tips competitive. Companies and people have to work with people who they know.... For instance, if you are outsourcing work from India, you need to go... of how things work Professional Web Design Services For You Web Site Professional Web Design Services For You Web Site  ... designer If the content of your website is such that it requires art work... requirements, he will in turn give you a rough sketch of the proposed .shtml Hope that the above links will be helpful for you. Thanks Struts Guide ? - - Struts Frame work is the implementation of Model-View-Controller (MVC) design... the official site of Struts. Extract the file ito... Struts Guide   struts - Struts struts Hi, I need the example programs for shopping cart using struts with my sql. Please send the examples code as soon as possible. please... Hope that it will be helpful for you. Thanks struts <p>hi here is my code in struts i want to validate my form fields but it couldn't work can you fix what mistakes i have done</p>... }//execute }//class struts-config.xml <struts String Start with Example String Start with Example  ... that start from the specified character in java. The following program checks...() function it will return 'true' and display a message "The given string is start Developing Struts PlugIn . There are many PlugIns available for struts e.g. Struts Tiles PlugIn, Struts... start-up. Following example shows how to declare the Tiles PlugIn: <plug... declaration instructs the struts to load and initialize the Tiles plugin for your Subset Tag (Control Tags) Example Using Start Subset Tag (Control Tags) Example Using Start  ... the start parameter. The start parameter is of integer type. It indicates... a subset of it. The parameter start is of integer type and it indicates used in a struts aplication. these are the conditions 1. when u entered... Info function isEmpty(elem) { var str = elem.value; if(str == null || str.length == 0) { //// here i have to set Struts Validation - Struts /struts/StrutsCustomValidator.shtml Hope that it will be helpful for you...Struts Validation Hi friends.....will any one guide me to use the struts validator... Hi Friend, Please visit the following links Struts Projects Struts Projects Easy Struts Projects to learn and get into development ASAP. These Struts Project will help you jump the hurdle of learning complex Struts Technology. Struts Project highlights: Struts Project to make Really Simple History (RSH) Really Simple History (RSH) The Really Simple History (RSH) framework makes it easy for AJAX applications to incorporate bookmarking and back and button support. By default, AJAX Struts 1 Tutorial and example programs and reached end of life phase. Now you should start learning the Struts 2 framework... with Struts Tiles In this lesson we will create Struts Tiles...Struts 1 Tutorials and many example code to learn Struts 1 in detail. 
Struts 1 struts compilation - Struts struts compilation how to compile struts example Hi Friend, Please visit the following link: Hope that it will be helpful for you. Thanks Struts Tutorial: Struts 2 Tutorial for Web application development, Jakarta Struts Tutorial framework you can actually start learning the concepts of Struts framework.... Many advance topics like Tiles, Struts Validation Framework, Java Script... Struts 2 Tutorials - Jakarta Struts Tutorial Learn Struts Migration of Struts and Hibernate - Struts Friend, Please visit the following link: Hope that it will be helpful for you. Thanks...Migration of Struts and Hibernate How to struts can call Struts Links - Links to Many Struts Resources like Tiles, Struts Validation Framework, Java Script validations are covered... covers Struts 1.2. The course is usually taught on-site at customer locations... Struts Links - Links to Many Struts Resources Jakarta WEB SITE WEB SITE can any one plzz give me some suggestions that i can implement in my site..(Some latest technology) like theme selection in orkut like forgot password feature.. or any more features Sorry but its struts... code to solve the problem : For more information on struts visit String lastIndexOf(String str) String lastIndexOf(String str)  ... the lastIndexOf(String str) method of String class. We are going to use lastIndexOf(String str) method of String class in Java. The description of the Regarding struts validation - Struts Regarding struts validation how to validate mobile number field should have 10 digits and should be start with 9 in struts validation? Hi... always start with 9 return true else false You set attributes maxlength html...:// String indexOf(String str) String indexOf(String str)  ... the indexOf(String str) method of String class. We are going to use indexOf(String str... about the indexOf(String str) method through the following java program String equalsIgnoreCase(String Str) String equalsIgnoreCase(String Str)  ... explanation about the equalsIgnoreCase(String Str) method of String class. We are going to use equalsIgnoreCase(String Str) method of String Download and Installing Struts 2 with the struts-blank application. Downloading Struts 2.0 Visit the Struts download site...Download Struts 2.0 In this section we will download and install the Struts 2.0 on the latest version Struts integration with EJB in JBOSS3.2 projects group. Struts is a frame work for building really complex... Controller) Architecture. It is open source and free. Struts frame work...-blank.war. Copy that struts-blank.war file to c:\tomcat5\webapps. Start the tomcat start and deploy start and deploy how to deployee java web application in glassfish by using netbeans6.7 JSP Tutorial For Beginners With Examples ; is useful for the beginners who want to start to make their carrier in creating... and Database In this section you will read about how to work with the database... to work with the session in JSP. This section will describe you how to do
http://roseindia.net/tutorialhelp/comment/4028
CC-MAIN-2014-42
en
refinedweb
This tutorial shows how to transform a traditional monolithic core banking application, which is implemented in Node.js, into a modern microservices architecture by using IBM Cloud Pak for Applications. Cloud Pak for Applications speeds the development of applications that are built for Kubernetes by using agile DevOps processes. Running on Red Hat OpenShift, the Cloud Pak provides a hybrid, multicloud foundation that is built on open standards, enabling workloads and data to run anywhere. It integrates two main open source projects: Kabanero and Appsody. This tutorial uses a sample monolithic banking application, which is illustrated in the following architecture diagram: There are five tightly coupled services within this application: - Admin login (admin_login.ejs) - Admin dashboard (admin.ejs) - User login (user_login.ejs) - User dashboard (users.ejs) - Not found (notfound.ejs) If too much workload or user traffic occurs on one service, then all of the other interconnected services can be affected, or the complete project can go down, which is one of the major disadvantages of monolithic architectures. To break down this monolithic application, you separate the admin services (admin_login.ejs and admin.ejs) and user services (user_login.ejs and users.ejs) into microservices so they can run independently. Both services have different functions, so the new application is able to scale them depending on the workload. The two new microservices are an Admin microservice and a User microservice. To do this, you put the admin services into one project and the user services into another, and then deploy them both to a central GitHub repo. Both have their own dependencies and run independently, as you can see in the following architecture diagram. (Don't worry if this does not fully make sense to you right now. The tutorial steps explain it further.) Prerequisites To complete the steps in this tutorial, you need: - Docker on your local computer. - Visual Studio Code for local development. - Access to a Red Hat OpenShift on IBM Cloud cluster with IBM Cloud Pak for Applications. - A GitHub account and some knowledge of git commands. Estimated time After the prerequisites are installed, this tutorial will take about 90 minutes to complete the steps. Steps - Clone the GitHub repository - Install Codewind in Visual Studio to create a microservice, test it, and deploy to GitHub - Create GitHub tokens - Initialize Tekton and integrate with the central GitHub repository - Verify that the microservices are up and running Step 1. Clone the GitHub repository - Open your terminal and change your directory by using the cd downloads command. (Or any other directory in which you want to clone the project.) - Run the command: git clone. Open the project in Visual Studio. Step 2. Install Codewind in Visual Studio to create a microservice, test it, and deploy to GitHub What is Codewind and why does this tutorial use it? In the present era, one of the biggest challenges for a developer is to build and deploy cloud-native applications. Many actions are required to build a perfect solution on the cloud and you need to build images, create containers, debug, analyze the different logs, assess performance metrics, and rebuild the containers with each code change. That's why this tutorial uses Eclipse Codewind, an open source project that helps you achieve all of the above actions quickly with ready-made, container-based project templates and can easily be integrated with your Visual Studio Code integrated development environment (IDE). Learn more about Codewind.
Since you know which services will be converted into microservices, start by initializing Codewind in Visual Studio with the following tasks: - Open Visual Studio. - Select Extensions and search for Codewind. - Select Install and kindly wait, since it will take some time to initialize. - Once successfully installed, you will see the Codewind section. - Select Codewind and start the local Codewind. - Right-click local and select Create New Project. - Select the Kabanero Node.js Express simple template. - Select the folder where you want to initialize the template and name it micro-admin. (This step can take five to ten minutes to initialize.) Once your template is initialized successfully, kindly open the folder where you created micro-admin. You will see the newly created template. Next, you will break down the monolithic application in three stages. First, visit the folder where you cloned the monolithic application in Step 1. In that folder, open the app.js file and copy the following lines: const express = require("express") const path = require('path'); const app = express(); app.set('view engine', 'ejs'); app.set('views', path.join(__dirname, 'views')); app.use(express.static(path.join(__dirname, 'node_modules'))); app.use(express.static(path.join(__dirname, 'public'))); app.get("/", function(req,res){ res.render("admin_login"); }); app.get("/admin_login", function(req,res){ res.render("admin_login"); }); app.get("/admin_in", function(req,res){ var Name = req.query.name; var Password = req.query.pass; if (Password =="123") { console.log("Successfully logged in as admin"); res.render("admin"); } else{ res.render("notfound.ejs"); } }); module.exports.app = app; Then, go to your new micro-admin folder and replace the app.js file with the copied version. Second, copy the complete public folder located within the folder where you cloned the monolithic application, and paste it into your new micro-admin folder. Third, open the views folder located within the folder where you cloned the monolithic application, and copy only the admin.ejs, admin_login.ejs, and notfound.ejs files. Paste those files into your new micro-admin folder. Your structure should now look like the following: Open your terminal inside Visual Studio and run the command npm install ejs. This will install the Embedded JavaScript templating that you will use for front-end styling. - Go to Codewind in Visual Studio and look for your project there, running as micro-admin. Right-click it and select Open Application to open the page. From there, select enable project (if it is disabled) and then select build. Check Application Endpoint to see where your application is running. To test your application, right-click micro-admin, select Application Monitor, and hit the application two or three times to see the changes. Run appsody build in your Visual Studio terminal. You don't have to worry and spend your time on a deployment configuration file since Codewind will create it for you. You only need to focus on your application development. After the above command executes successfully, you will see a new generated file called app-deploy.yaml on the left-hand side of your screen. This file will help you in a later step to deploy the application on Cloud Pak for Applications.
Note: If you do not have a namespace section, please add it as follows: apiVersion: appsody.dev/v1beta1 kind: AppsodyApplication metadata: namespace: kabanero creationTimestamp: null labels: image.opencontainers.org/title: micro-admin stack.appsody.dev/id: nodejs-express stack.appsody.dev/version: 0.2.8 name: micro-admin .... You successfully created the Admin microservice. Go back to the beginning of this step and repeat tasks 6 to 17 to create the second microservice, naming it micro-user. This time, your app.js file will be for users, so copy the code below during task 10: const express = require("express") const path = require('path'); const app = express(); app.set('view engine', 'ejs'); app.set('views', path.join(__dirname, 'views')); app.use(express.static(path.join(__dirname, 'node_modules'))); app.use(express.static(path.join(__dirname, 'public'))); app.get("/user_login", function(req,res){ res.render("user_login"); console.log("User login"); }); app.get("/user_in", function(req,res){ var Name = req.query.name; var Password = req.query.pass; if (Password =="123") { console.log("Successfully logged in as user"); res.render("users"); } else{ res.render("notfound.ejs"); } }); app.listen(3000 , function(){ console.log("App is running"); }); Also, after task 12, you should see a structure like this for your new micro-user folder: Once you finish testing and creating the User microservice, individually upload both microservices to the central GitHub repository. Note: If you have any difficulty executing this step to create both microservices, please check out the following sample repositories that were created using Codewind: Step 3. Create GitHub tokens Before you initialize Tekton, it is really important to create two GitHub tokens for your admin and user microservices: - Open GitHub and log into your account. - Click your profile photo to expand the account profile menu. - Within the menu, click Settings > Developer settings > Personal access tokens. - Click the Generate new token button. - Give your first token a descriptive name by typing tekton-app-user into the Note field. Select the scopes, or permissions, you'd like to grant this token. To use your token to access repositories from the command line, select the repo checkbox. Click the Generate token button. - Copy the token to your clipboard. It is important that you do this. For security reasons, after you navigate off the page, you will not be able to see the token again. - To create your second token, click the Generate new token button again. - Give your second token a descriptive name by typing tekton-app-admin into the Note field. - Select the scopes, or permissions, you'd like to grant this token. To use your token to access repositories from the command line, select the repo checkbox. - Click the Generate token button. Copy the second token to your clipboard. It is important that you do this for both tokens. Once both tokens are created, you will see a page similar to the one below: Step 4. Initialize Tekton and integrate with the central GitHub repository What is Tekton and why does this tutorial use it? Tekton is a powerful, yet flexible, Kubernetes-native open source framework for creating continuous integration and continuous delivery (CI/CD) systems. This tutorial uses Tekton because it is a built-in tool for IBM Cloud Pak for Applications that connects the GitHub central repository and a webhook that lifts and shifts application source code from your local development to the cloud. Learn more about Tekton.
To initialize Tekton, perform the following tasks: - Open your Red Hat OpenShift web console. - Once you are logged in successfully, select Kabanero from the My Project section. - Select Cloud Pak for Applications from the menu. You should see the following screen: Click the Instance tab. Within the Tools section, select Tekton. You should see the following screen: Select Webhooks from the menu and proceed to create two webhooks for your microservices (micro-admin and micro-user). For the first webhook, enter w1-admin in the Name field, the URL of your micro-admin repository in the Repository URL field, and micro-token-1 in the Access Token field. Click Create. For the second webhook, enter w2-user in the Name field, the URL of your micro-user repository in the Repository URL field, and micro-token-2 in the Access Token field. Click Create. Check that Tekton and GitHub are successfully connected by opening your two repositories. Go to the micro-admin repository and under settings, select Webhooks from the menu. If the pipeline is connected properly, you will see a webhook link listed there (you may have a different link). Follow the same procedure to check the micro-user repository. Important: Do not worry if you get an error notice. This will resolve after the repository code is updated. Make some changes in the micro-admin and micro-user repositories that were created in Step 2 to trigger your Tekton pipeline. First, open the micro-admin repository. Inside the views folder, open the admin.ejs file and make some changes, such as searching for My Dashboard and capitalizing the text to be MY DASHBOARD. After you are done, commit the file. Perform the same type of procedure within the micro-user repository, making similar changes to some text within the users.ejs file. After you are done, commit the file. Open your Tekton dashboard. Under the Tekton dropdown list, select PipelineRuns. Wait until the rows under the Status column display All tasks completed executing, which indicates you successfully integrated your central repo with your Tekton instance on IBM Cloud Pak for Applications. Important: Perform the changes in each repository separately. For example, perform the changes in the User repository first and, after it is successfully built and deployed, then update the Admin repository. Or vice versa. For more details about Tekton, check out this great tutorial. Step 5. Verify that the microservices are up and running - Open the OpenShift dashboard. - Select Applications from the menu. - Select Routes and you should then see your two microservices up and running on the Routes page. To run the application, click the links within the Hostname column. Here is a sample screen capture of the user interface: Here is a sample screen capture of the admin interface: Conclusion In this tutorial, you learned how to modernize a Node.js application, transforming it from a monolithic architecture into a microservices architecture using Cloud Pak for Applications. By independently running two projects containing related services, you can scale them depending on the workload. In addition, you can integrate as many microservices as you want without affecting or scaling down the complete project.
https://developer.ibm.com/depmodels/cloud/tutorials/modernize-a-monolithic-nodejs-application-into-a-microservices-architecture-using-ibm-cloud-pak-for-applications/
CC-MAIN-2020-50
en
refinedweb
Import the ElementTree object, open the relevant .xml file and get the root tag: import xml.etree.ElementTree as ET tree = ET.parse("yourXMLfile.xml") root = tree.getroot() There are a few ways to search through the tree. First is by iteration: for child in root: print(child.tag, child.attrib) Otherwise you can reference specific locations like a list: print(root[0][1].text) To search for specific tags by name, use .find or .findall: print(root.findall("myTag")) print(root[0].find("myOtherTag")) Import the ElementTree module, open the XML file, and get an XML element: import xml.etree.ElementTree as ET tree = ET.parse('sample.xml') root=tree.getroot() element = root[0] #get first child of root element An Element object can be manipulated by changing its fields, adding and modifying attributes, and adding and removing children: element.set('attribute_name', 'attribute_value') #set the attribute to xml element element.text="string_text" If you want to remove an element, use the Element.remove() method: root.remove(element) The ElementTree.write() method is used to write the XML object out to an XML file: tree.write('output.xml') Import the ElementTree module: import xml.etree.ElementTree as ET The Element() function is used to create XML elements: p=ET.Element('parent') The SubElement() function is used to create sub-elements of a given element: c = ET.SubElement(p, 'child1') The dump() function is used to dump XML elements: ET.dump(p) # Output will be like this #<parent><child1 /></parent> If you want to save to a file, create an XML tree with the ElementTree() function and use the write() method: tree = ET.ElementTree(p) tree.write("output.xml") The Comment() function is used to insert comments in the XML file: comment = ET.Comment('user comment') p.append(comment) #this comment will be appended to parent element Sometimes we don't want to load the entire XML file in order to get the information we need. In these instances, being able to incrementally load the relevant sections and then delete them when we are finished is useful. With the iterparse function you can edit the element tree that is stored while parsing the XML. Import the ElementTree object: import xml.etree.ElementTree as ET Open the .xml file and iterate over all the elements: for event, elem in ET.iterparse("yourXMLfile.xml"): ... do something ... Alternatively, we can only look for specific events, such as start/end tags or namespaces. If this option is omitted (as above), only "end" events are returned: events=("start", "end", "start-ns", "end-ns") for event, elem in ET.iterparse("yourXMLfile.xml", events=events): ... do something ... Here is the complete example showing how to clear elements from the in-memory tree when we are finished with them: for event, elem in ET.iterparse("yourXMLfile.xml", events=("start","end")): if elem.tag == "record_tag" and event == "end": print(elem.text) elem.clear() ... do something else ... <Catalog> <Books> <Book id="1" price="7.95"> <Title>Do Androids Dream of Electric Sheep?</Title> <Author>Philip K.
Dick</Author> </Book> <Book id="5" price="5.95"> <Title>The Colour of Magic</Title> <Author>Terry Pratchett</Author> </Book> <Book id="7" price="6.95"> <Title>The Eye of The World</Title> <Author>Robert Jordan</Author> </Book> </Books> </Catalog> Searching for all books: import xml.etree.cElementTree as ET tree = ET.parse('sample.xml') tree.findall('Books/Book') Searching for the book with title = 'The Colour of Magic': tree.find("Books/Book[Title='The Colour of Magic']") # always quote the right side of the comparison Searching for the book with id = 5: tree.find("Books/Book[@id='5']") # searches on xml attributes must have '@' before the name tree.find("Books/Book[2]") # indexes start at 1, not 0 tree.find("Books/Book[last()]") # 'last' is the only xpath function allowed in ElementTree
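Putting these pieces together, here is a small end-to-end example against the sample catalog above. It combines findall(), attribute access with get(), and child-element text, and assumes the XML has been saved as sample.xml as in the earlier snippets.

import xml.etree.ElementTree as ET

tree = ET.parse('sample.xml')
root = tree.getroot()

for book in root.findall('Books/Book'):
    book_id = book.get('id')          # attribute value, returned as a string
    price = float(book.get('price'))
    title = book.find('Title').text   # text of a child element
    author = book.find('Author').text
    print('%s: "%s" by %s costs %.2f' % (book_id, title, author, price))

# Expected output:
# 1: "Do Androids Dream of Electric Sheep?" by Philip K. Dick costs 7.95
# 5: "The Colour of Magic" by Terry Pratchett costs 5.95
# 7: "The Eye of The World" by Robert Jordan costs 6.95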
https://sodocumentation.net/python/topic/479/manipulating-xml
CC-MAIN-2020-50
en
refinedweb
The two-parameter Weibull probability density function. The Weibull distribution is a two-parameter distribution named after Waloddi Weibull. It is often also called the Rosin-Rammler distribution when used to describe the size distribution of particles. The probability density function, with shape parameter a and scale parameter b, is given by f(x; a, b) = \frac{a}{b} \left(\frac{x}{b}\right)^{a-1} e^{-(x/b)^{a}} for x \ge 0, and 0 otherwise. [Graph omitted in this extract: PDF plotted for x = 50 to 150 with a = 5 to 20 in steps of 4 and b = 100.] In its standard form, b = 1, therefore f(x; a) = a x^{a-1} e^{-x^{a}}. [Graph omitted in this extract: standard-form PDF plotted for x = 0 to 2 with a = 5 to 20 in steps of 4 and b = 1.] References: - M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions, 1964, chapter 26.1; - Weibull, W. (1951) A statistical distribution function of wide applicability. J. Appl. Mech.-Trans. ASME 18(3), 293-297 Example 1 #include <iostream> #include <codecogs/statistics/distributions/continuous/weibull/pdf.h> using namespace Stats::Dists::Continuous::Weibull; int main() { std::cout << "PDF(105,20,100) = " << PDF( 105, 20, 100 ) << std::endl; return 0; } Output: PDF(105,20,100) = 0.035589 Parameters: x - the value at which to evaluate the density; a - the shape parameter; b - the scale parameter. Returns - probability density value Authors - Anatoly Prognimack (Mar 19, 2005) Developed and tested with: Borland C++ 3.1 for DOS and Microsoft Visual C++ 5.0, 6.0 Updated by Will Bateman (March 2005) Source Code Source code is available when you agree to a GP Licence or buy a Commercial Licence.
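For readers who want to sanity-check the value above outside of C++, the same density is easy to evaluate directly from the formula. The short Python snippet below is an independent illustration and is not part of the CodeCogs library; SciPy users should get the same number from scipy.stats.weibull_min.pdf(x, a, scale=b).

import math

def weibull_pdf(x, a, b):
    """Two-parameter Weibull density: (a/b) * (x/b)**(a-1) * exp(-(x/b)**a) for x >= 0."""
    if x < 0:
        return 0.0
    return (a / b) * (x / b) ** (a - 1) * math.exp(-((x / b) ** a))

print(weibull_pdf(105.0, 20.0, 100.0))  # approximately 0.035589, matching PDF(105, 20, 100) above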
https://codecogs.com/library/statistics/distributions/continuous/weibull/pdf.php
CC-MAIN-2020-50
en
refinedweb
From: Greg Colvin (gcolvin_at_[hidden]) Date: 2001-06-27 10:33:31 From: John Max Skaller <skaller_at_[hidden]> > Greg Colvin wrote: > > > I think I like it, but I'm not seeing an easy way to > > create a thread object that goes away when the thread > > exits. > > That's just as (im)possible as a Window object > that goes away when the window it represents dies. It's not impossible, as I posted a solution already using ref-counting. Windows provides a DuplicateHandle function that would also work, unless the thread object needed more data than just the handle. > There is a solution, which is that the object > is only seen by the thread/window itself (and the object > suicides as required). > > Otherwise, you end up blocking in the destructor. > I think that this is feasible, I've seen it done > in coroutine and thread classes before. > > I think the question is: does the bare bones > solution _prevent_ wrapping up a thread_manager object > with these semantics? > > I think it is important _not_ to pre-empt the users > favourite technique for handling threads: that's why I think > the basic design is probably right (rather than the handle > design). > > Consider a solution which blocked in the destructor. > How would that coexist with a garbage collector?? > The way I see it, the basic design should: > > 1) set up parameters in the constructor, > but NOT start the thread > > 2) use a method for each operation > > 3) provide a 'wait on death' method > > 4) do nothing in the destructor except > let go of the thread > > This allows a thread-manager object to wrap the class > and > > 1) start up on construction > 2) block in the destructor > > For example. [important: only one example of what a user might want] > > It also allows deriving new classes from BOTH the thread > and thread-manager objects, independently. > > To put it another way, a lightweight wrapper which covers > existing APIs and doesn't try to do much work can't be wrong: > all it does is provide a common interface to conforming APIs. Yes, so far as it goes. > It may not do everything one wants. But as long as it doesn't > stop you doing what you want, it has achieved a single > vital goal: platform independence. My concern is that some common idioms would require wrappers that could be done more efficiently in terms of the underlying APIs. That is, I might need to reference count handles that are already reference counted by the OS, and might need to arrange to catch exceptions, decrement counters, and close handles in the thread function, this duplicating some of what our library-provided function has to do anyway. So the design I like is a thread class (or namespace) that provides a few static functions like create(), sleep(), and yield(), and a thread_ref class with copy semantics that provides alive() and join(). > And we can move on, and have a go at one or more management > schemes, aimed at a higher level of usage, without > alienating people that don't like any of the solutions. > > Just pretend you're a committee :-) Uugh. Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
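To make the shape of that proposal concrete, a rough header-style sketch of the interface being discussed might look like the following. The names and signatures are purely illustrative of the design in this thread, not an actual Boost API:

// Illustrative sketch only -- not an actual Boost API.
// A copyable, reference-counted handle to a running thread.
class thread_ref {
public:
    thread_ref();                          // refers to no thread
    thread_ref(const thread_ref& other);   // copies share the same underlying thread
    thread_ref& operator=(const thread_ref& other);
    ~thread_ref();                         // releases the reference; never blocks

    bool alive() const;                    // has the thread exited yet?
    void join();                           // block until the thread exits
};

// Static-style creation and utility functions; no per-thread object whose
// destructor has to decide whether to block or detach.
namespace thread {
    thread_ref create(void (*fn)(void*), void* arg);  // start fn(arg) on a new thread
    void sleep(long milliseconds);                    // block the calling thread
    void yield();                                     // give up the rest of the time slice
}

The point of the split is that the handle carries copy semantics (so the OS-level reference counting is not duplicated in a wrapper), while thread creation and scheduling hints stay as free functions with no lifetime of their own.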
https://lists.boost.org/Archives/boost/2001/06/13734.php
CC-MAIN-2020-50
en
refinedweb
📅 11 September, 2019 – Kyle Galbraith you declare what you want your final state to be and the tool does the work to get you there. But, this is only one approach. Another approach to creating your infrastructure via code is via imperative options. Unlike declarative where we specify what, an imperative approach specifies how. Instead of saying “I want these 3 AWS resources”, an imperative approach says “Here is how to create these 3 AWS resources”. The difference between the two is subtle, one focuses on obtaining the end state. The other focuses on defining how to get the end state. There are advantages and disadvantages to both approaches. Declarative approaches have the advantage of operating on the end state. This means by nature they tend to know the current state of your infrastructure. If in a declarative framework I provision 10 EC2 instances and later need 15 instead of 10, it knows I only need 5 more. Declarative infrastructure as code tends to be in languages that we don’t use every day like HCL, YAML, or JSON. This also means they can be tricky to reuse or modularize to share across projects. Imperative approaches can often use languages we already know. We can define our infrastructure in the code we use every day like Python, TypeScript, or even Ruby. This means all the tools we have for testing, reuse, and sharing are usable here. But, it also means that we often lose the notion of the end state. Taking our example from above, 10 EC2 instances to 15 in an imperative framework creates 15 new EC2 instances instead of adding 5 more to our existing 10. This is because the imperative approach defines how to get the final state we want, not what the final state should be. The imperative approach has been on the rise recently. Many folks would prefer that infrastructure code is in the same language they use every day. So in this post, we are going to take a look at how we can define some of our AWS infrastructure using the new project, AWS Cloud Development Kit. Before we can start defining some infrastructure using the CDK, we need to complete a few prerequisites. First, we need to install AWS CDK via npm: npm $ npm install -g aws-cdk $ cdk --version 1.6.1 (build a09203a) Now that we have the CDK installed, let’s configure a sample project that we can start experimenting in. For this post, we are going to make use of Typescript, but the CDK is available in Python, JavaScript, Java, and .NET. $ mkdir cdk-post $ cd cdk-post $ cdk init sample-app --language=typescript Applying project template sample-app for typescript Initializing a new git repository... Executing npm install... npm notice created a lockfile as package-lock.json. You should commit this file. npm WARN cdk-post@0.1.0 No repository field. npm WARN cdk-post@0.1.0 No license field. # Useful commands * `npm run build` compile typescript to js * `npm run watch` watch for changes and compile * `cdk deploy` deploy this stack to your default AWS account/region * `cdk diff` compare deployed stack with current state * `cdk synth` emits the synthesized CloudFormation template If we take a look at the sample app that cdk created we should see a file at lib/cdk-post-stack.ts. When we open that file we should see that there is some code in it that provisions an SQS queue and an SNS topic. 
cdk lib/cdk-post-stack.ts import sns = require('@aws-cdk/aws-sns'); import subs = require('@aws-cdk/aws-sns-subscriptions'); import sqs = require('@aws-cdk/aws-sqs'); import cdk = require('@aws-cdk/core'); export class CdkPostStack extends cdk.Stack { constructor(scope: cdk.App, id: string, props?: cdk.StackProps) { super(scope, id, props); const queue = new sqs.Queue(this, 'CdkPostQueue', { visibilityTimeout: cdk.Duration.seconds(300) }); const topic = new sns.Topic(this, 'CdkPostTopic'); topic.addSubscription(new subs.SqsSubscription(queue)); } } What we have here is a Stack, a collection of AWS resources that should get created, maintained, and removed together. In a CDK world, an application or service can consist of one or more stacks. We can see how this stack gets created by CDK by taking a look at bin/cdk-post.ts. Stack bin/cdk-post.ts #!/usr/bin/env node import cdk = require('@aws-cdk/core'); import { CdkPostStack } from '../lib/cdk-post-stack'; const app = new cdk.App(); new CdkPostStack(app, 'CdkPostStack'); Here we see that an App gets created via cdk and the CdkPostStack we looked at before gets attached to it. Let’s go ahead and deploy this sample app via the cdk on our command line. App CdkPostStack $ cdk deploy IAM Statement Changes ┌───┬─────────────────────┬────────┬─────────────────┬───────────────────────────┬─────────────────────────────────────────────────────┐ │ │ Resource │ Effect │ Action │ Principal │ Condition │ ├───┼─────────────────────┼────────┼─────────────────┼───────────────────────────┼─────────────────────────────────────────────────────┤ │ + │ ${CdkPostQueue.Arn} │ Allow │ sqs:SendMessage │ Service:sns.amazonaws.com │ "ArnEquals": { │ │ │ │ │ │ │ "aws:SourceArn": "${CdkPostTopic}" │ │ │ │ │ │ │ } │ └───┴─────────────────────┴────────┴─────────────────┴───────────────────────────┴─────────────────────────────────────────────────────┘ (NOTE: There may be security-related changes not in this list. See) Do you wish to deploy these changes (y/n)? y CdkPostStack: deploying... CdkPostStack: creating CloudFormation changeset... 
0/6 | 17:36:05 | CREATE_IN_PROGRESS | AWS::CDK::Metadata | CDKMetadata 0/6 | 17:36:05 | CREATE_IN_PROGRESS | AWS::SQS::Queue | CdkPostQueue (CdkPostQueueBA7F3D07) 0/6 | 17:36:05 | CREATE_IN_PROGRESS | AWS::SNS::Topic | CdkPostTopic (CdkPostTopic28394E2B) 0/6 | 17:36:05 | CREATE_IN_PROGRESS | AWS::SQS::Queue | CdkPostQueue (CdkPostQueueBA7F3D07) Resource creation Initiated 0/6 | 17:36:05 | CREATE_IN_PROGRESS | AWS::SNS::Topic | CdkPostTopic (CdkPostTopic28394E2B) Resource creation Initiated 1/6 | 17:36:05 | CREATE_COMPLETE | AWS::SQS::Queue | CdkPostQueue (CdkPostQueueBA7F3D07) 1/6 | 17:36:06 | CREATE_IN_PROGRESS | AWS::CDK::Metadata | CDKMetadata Resource creation Initiated 2/6 | 17:36:06 | CREATE_COMPLETE | AWS::CDK::Metadata | CDKMetadata 3/6 | 17:36:16 | CREATE_COMPLETE | AWS::SNS::Topic | CdkPostTopic (CdkPostTopic28394E2B) 3/6 | 17:36:18 | CREATE_IN_PROGRESS | AWS::SNS::Subscription | CdkPostQueue/CdkPostStackCdkPostTopic7A3E421F (CdkPostQueueCdkPostStackCdkPostTopic7A3E421F4679B27C) 3/6 | 17:36:18 | CREATE_IN_PROGRESS | AWS::SQS::QueuePolicy | CdkPostQueue/Policy (CdkPostQueuePolicyC7FE0F0B) 3/6 | 17:36:19 | CREATE_IN_PROGRESS | AWS::SNS::Subscription | CdkPostQueue/CdkPostStackCdkPostTopic7A3E421F (CdkPostQueueCdkPostStackCdkPostTopic7A3E421F4679B27C) Resource creation Initiated 3/6 | 17:36:19 | CREATE_IN_PROGRESS | AWS::SQS::QueuePolicy | CdkPostQueue/Policy (CdkPostQueuePolicyC7FE0F0B) Resource creation Initiated 4/6 | 17:36:19 | CREATE_COMPLETE | AWS::SNS::Subscription | CdkPostQueue/CdkPostStackCdkPostTopic7A3E421F (CdkPostQueueCdkPostStackCdkPostTopic7A3E421F4679B27C) 5/6 | 17:36:19 | CREATE_COMPLETE | AWS::SQS::QueuePolicy | CdkPostQueue/Policy (CdkPostQueuePolicyC7FE0F0B) 6/6 | 17:36:21 | CREATE_COMPLETE | AWS::CloudFormation::Stack | CdkPostStack ✅ CdkPostStack What do we see? Quite a few interesting things. -y CloudFormation If we log into the AWS Console we should see that we now have SQS queue and an SNS topic that starts with CdkPostStack. Additionally, we should see that we have a CloudFormation stack that contains all our resources. Now let’s go back to our sample application and make a few changes to see how updates are handled in CDK. For our example let’s add another SQS queue and change the timeout of our existing queue. So now our CdkPostStack should look like this. export class CdkPostStack extends cdk.Stack { constructor(scope: cdk.App, id: string, props?: cdk.StackProps) { super(scope, id, props); const queue = new sqs.Queue(this, 'CdkPostQueue', { visibilityTimeout: cdk.Duration.seconds(600) }); const dev2Queue = new sqs.Queue(this, 'Dev2Queue', { visibilityTimeout: cdk.Duration.seconds(180) }); const topic = new sns.Topic(this, 'CdkPostTopic'); topic.addSubscription(new subs.SqsSubscription(queue)); } } We now have a new queue, Dev2Queue, and we have updated our CdkPostQueue to have a visibility timeout of 600 seconds rather than 300. Let’s use CDK to update our infrastructure and see what happens. Dev2Queue CdkPostQueue $ cdk deploy -y CdkPostStack: deploying... CdkPostStack: creating CloudFormation changeset... 
0/5 | 12:40:55 | CREATE_IN_PROGRESS | AWS::SQS::Queue | Dev2Queue (Dev2Queue5997490B) 0/5 | 12:40:56 | UPDATE_IN_PROGRESS | AWS::SQS::Queue | CdkPostQueue (CdkPostQueueBA7F3D07) 0/5 | 12:40:56 | CREATE_IN_PROGRESS | AWS::SQS::Queue | Dev2Queue (Dev2Queue5997490B) Resource creation Initiated 1/5 | 12:40:56 | CREATE_COMPLETE | AWS::SQS::Queue | Dev2Queue (Dev2Queue5997490B) 2/5 | 12:40:56 | UPDATE_COMPLETE | AWS::SQS::Queue | CdkPostQueue (CdkPostQueueBA7F3D07) 2/5 | 12:41:00 | UPDATE_COMPLETE_CLEA | AWS::CloudFormation::Stack | CdkPostStack 3/5 | 12:41:01 | UPDATE_COMPLETE | AWS::CloudFormation::Stack | CdkPostStack ✅ CdkPostStack Awesome! What we see after this deploy is that our new queue is created and our existing queue gets updated as expected. We were able to make changes in our code and see them get reflected in a declarative way via CloudFormation. But wait, what does this mean? It’s worth taking a pause at this point and revisiting our imperative versus declarative discussion from earlier. AWS CDK is an imperative tool. As we saw we can create our infrastructure via Typescript. But, when we ran cdk deploy we saw that a CloudFormation changeset gets created. CloudFormation is a declarative infrastructure as code tool. cdk deploy This is where the debate between the two approaches for provisioning your AWS resources gets confusing. Behind the scenes, CDK is still leveraging CloudFormation to do the provisioning. In a way, we are getting all the benefits of the declarative approach with all the benefits of the imperative approach as well. We saw further evidence of this when we updated our existing queue and created a new queue. Our entire stack wasn’t recreated as you would expect in a strict imperative approach. Instead, we saw a declarative update happen, the queue got updated in place and our new queue got created. CDK is providing us with an imperative interface that we can use to represent our infrastructure as code. Underneath the hood, it is still making use of the declarative framework, CloudFormation. It can give us everything that our normal programming patterns and practices allow. Why? Because it is a framework that we can use in the languages we already know. This means that we can do things like test the code that provisions our infrastructure. A problem that has proven to be somewhat tricky in a declarative world like Terraform or CloudFormation. Modularizing and sharing code that creates common pieces of infrastructure is possible in AWS CDK. In a CDK world, these are referred to as constructs. Where a construct can consist of one or more AWS resources that need to get created together. As we saw in our example we could create a construct that others could reuse to provision both queues and our SNS topic. That said, declarative frameworks like Terraform have this notion as well in the form of modules. Some folks are very excited about clear imperative approaches for representing their infrastructure in code that they use every day. Other folks are die-hard declarative fans that are OK with learning another language to gain the benefits of a stateful approach. But as we saw in our look at AWS Cloud Development Kit (CDK), the debate between the two may not be all that important anyway. There are still pros and cons to each, but those tend to be more tool-specific than method specific. What we saw with CDK is that they blur the line between the two. 
Represent your infrastructure in your everyday languages, and CDK will handle the declarative part behind the scenes using CloudFormation. The key takeaway here is that representing your infrastructure in code is a win all around. What CDK shows us is that more tools are being created to increase the adoption of this practice. If the imperative approach isn’t your jam, that’s fine. But you should find the tool and method that does work for you and get to work using it. Any infrastructure as code is still better than no infrastructure as code at all.
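As an illustration of the reusable-construct idea mentioned above, a shared queue-plus-topic construct might be sketched roughly like this. The class name and properties are made up for the example; the imports mirror the ones the CDK wizard generated earlier:

import sns = require('@aws-cdk/aws-sns');
import subs = require('@aws-cdk/aws-sns-subscriptions');
import sqs = require('@aws-cdk/aws-sqs');
import cdk = require('@aws-cdk/core');

// Hypothetical reusable construct: an SNS topic fanning out to an SQS queue
export class QueueWithTopic extends cdk.Construct {
  public readonly queue: sqs.Queue;
  public readonly topic: sns.Topic;

  constructor(scope: cdk.Construct, id: string) {
    super(scope, id);
    this.queue = new sqs.Queue(this, 'Queue', {
      visibilityTimeout: cdk.Duration.seconds(300)
    });
    this.topic = new sns.Topic(this, 'Topic');
    this.topic.addSubscription(new subs.SqsSubscription(this.queue));
  }
}

Inside a stack like CdkPostStack this could then be instantiated with new QueueWithTopic(this, 'Messaging'), so every project that imports the construct gets the same pair of resources wired up the same way.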
https://blog.kylegalbraith.com/2019/09/11/imperative-infrastructure-as-code-using-aws-cdk/
CC-MAIN-2020-50
en
refinedweb
Merge pull request #12 from dart-lang/stereotype441-patch-1 Find SDK properly when invoked from inside SDK tests. A library to help in building Dart command-line apps. In particular, cli_util provides a simple, standardized way to get the current SDK directory. Useful, especially, when building client applications that interact with the Dart SDK (such as the analyzer). import 'dart:io'; import 'package:cli_util/cli_util.dart'; import 'package:path/path.dart' as path; main(args) { // Get sdk dir from cli_util Directory sdkDir = getSdkDir(args); // Do stuff... For example, print version string File versionFile = new File(path.join(sdkDir.path, 'version')); print(versionFile.readAsStringSync()); } Please file feature requests and bugs at the issue tracker.
https://dart.googlesource.com/cli_util/+/refs/tags/0.0.1+3
CC-MAIN-2020-50
en
refinedweb
Closed Milestone expired on May 19, 2016 8.0.1 The 8.0.1 release will be a new, super-major release. - [ "[ANNOUNCE] Glasgow Haskell Compiler 8.0.1, release candidate 1"] - [ "[ANNOUNCE] Glasgow Haskell Compiler 8.0.1, release candidate 2"] - [ "[ANNOUNCE] GHC 8.0.1 release candidate 3 available"] - [ "[ANNOUNCE] GHC 8.0.1 release candidate 4 available"] - [ "[ANNOUNCE] GHC 8.0.1 is available!"] See also [[Status/GHC-8.0.1]] for more details. Migration Guide: [[Migration/8.0]] Unstarted Issues (open and unassigned) 261 - [bug] ModOrigin: hidden module redefined - -fth-dec-file uses qualified names in binding positions - -fth-dec-file uses qualified names from hidden modules - UnicodeSyntax documentation lists wrong symbols - Several profiling tests give different results optimised vs. unoptimised - cross building integer-gmp is running target program on build host - Touching a file that uses TH triggers TH recompilation flood - malloc and mallocArray ignore Storable alignment requirements - Debug.Trace.trace is too strict - dep_orphs in Dependencies redundantly records type family orphans - Simplifer non-determinism leading to 8 fold difference in run time performance - Improve flag description in the user guide - prefetch primops are not currently useful - LLVM vs NCG: floating point numbers close to zero have different sign - Lack of type information in GHC error messages when the liberage coverage condition is unsatisfied - GHC not recognizing INPUT(-llibrary) in linker scripts - ghc-pkg complains about missing haddock interface files - GHC unnecessarily sign/zero-extends C call arguments - Inferring Safe mode with GeneralizedNewtypeDeriving is wrong - I/O manager causes unnecessary syscalls in send/recv loops - long compilation time for module with large data type and partial record selectors - Arithmetic overflow from (minBound :: Int) `quot` (-1) - ghc -M doesn't handle addDependentFile or #included files - Building an empty module with profiling requires profiling libraries for integer-gmp - -ffull-laziness does more harm than good - one-shot compilation + TH doesn't see instances that is seen in batch mode - type nats solver is too weak! - reify module list in TH - Filesystem related tests failed on solaris (SmartOS) - unexpected behavior with encodeFloat on large inputs - bad alignment in code gen yields substantial perf issue - Interface hashes include time stamp of dependent files (UsageFile mtime) - Assertion failure when using multithreading in debug mode. 
- Bug in hsc2hs --cross-safe - Error on pattern matching of an existential whose context includes a type function - RebindableSyntax and Arrow - Superclass methods are left unspecialized - GHCI core dumps when used with VTY - building GHC overwrites the installed package database if GHC_PACKAGE_PATH is set - StablePtrs should be organized by generation for efficient minor collections - Location in -fdefer-type-errors - Handling ImplicitParams in Instance Declaration - Cross compilation support for LLVM backend - Hard ghc api crash when calling runStmt on code which has not been compiled - Using -with-rtsopts=-N should fail unless -threaded is also specified - Room for GHC runtime improvement >~5%, inlining related - Generated C code under -prof -fprof-auto -fprof-cafs very slow to compile - GHC API reports CPP errors in confusing ways - GHC compile times are seriously non-linear in program size - DefaultSignatures conflict with default implementations - CAPI doesn't work with ghci - Panic: mkNoTick: Breakpoint loading modules with -O2 via API - Identical alts/bad divInt# code - rule not firing - When building GHC: Failed to load interface for `GHC.Fingerprint' - Recompilation check fails for TH unless functions are inlined - Callstack depends on way (prof, profasm, profthreaded - INLINEing top-level patterns causes ghc to emit 'arity missing' traces - Stack trace truncated too much with indirect recursion - GHC's -fprof-auto does not work with LINE pragmas - Cannot recover (good) inlining behaviour from 7.0.2 in 7.4.1 - reject reading rationals with exponent notation - Profiling with -p not written if killed with SIGTERM - GHCi runtime linker cannot link with duplicate common symbols - Type operators are not accepted as variables in contexts - Top level splice in Template Haskell has over-ambitious lexical scope? - INLINABLE fails to specialize in presence of simple wrapper - hsc2hs forces wordsize (i.e. 
-m32 or -m64) to be the choice of GHC instead of allowing a different (or no/default choice) - Offer a compiler warning for failable pattern matches - -faggressive-primops change caused a failure in perf/compiler/parsing001 - The -L flag should not exist - simple program fails with -shared on mac - Slow 64-bit primops on 32 bit system - unreg compiler: warning: conflicting types for built-in function ‘memcpy’ - Inlined functions aren't fully specialised - Compiling with -O makes some expressions too lazy and causes space leaks - Runtime error when allocating lots of memory - Add blockST for nested ST scopes - Better inlining test in CoreUnfold - increase error message detail for module lookups failure due to hi references - Another SpecConstr infelicity - GHC.ConsoleHandler does not call back application when Close button is pressed - literate markdown not handled correctly by unlit - Finding the right loop breaker - Windows: Dynamic linking doesn't work out-of-the-box - Re-linking avoidance is too aggressive - Accept expressions in left-hand side of quasiquotations - LLVM compiles Updates.cmm badly - Optimisations give bad core for foldl' (flip seq) () - Poor -fspec-constr-count=n warning messages - do not consider associativity for unary minus for fixity resolution - reject unary minus in infix left hand side function bindings that resolve differently as expressions - GHC API messes up signal handlers - CPP+QuasiQuotes confuses compilation errors' line numbers - ffi005 fails on OS X - Support for ABI versioning of C libraries - ghc-pkg should check for existence of extra-libraries - Strange display behaviour in GHCi - Handle multiline input in GHCi history - "Fix" pervasive-but-unnecessary signedness in GHC.Prim - section parse errors, e.g. ( let x=1 in x + ) - Avoid Haddock-links to the Prelude - Improve inlining for local functions - SpecConstr for join points - Parsing of lambdas is not consistent with Haskell'98 report. - Comparisons against minBound/maxBound not optimised for (Int|Word)(8|16|32) - Bad error reporting when calling a function in a module which depends on a DLL on Windows - Layout and pragmas - ghc -M should emit dependencies on CPP headers - Bizzarely bloated binaries - Allocation where none should happen - Generated ghc man page missing xrefs - CInt FFI exports do not use C int in _stub.h header file - package.conf.d should be under /var, not /usr - Returning a known constructor: GHC generates terrible code for cmonad - Over-eager GC when blocked on a signal in the non-threaded runtime - Avoid reconstructing dictionaries in recursive instance methods - GHC's GC default heap growth strategy is not as good as other runtimes - Int / Word / IntN / WordN are unequally optimized - ghc FFI doesn't support thiscall - divInt# floated into a position which leads to low arity - LDFLAGS ignored by build system - Make a way to tell GHC that a pragma name should be "recognised" - Document -pgmL (Use cmd as the literate pre-processor) - Avoid unnecessary evaluation when unpacking constructors - Improve SpecConstr for join points - aborting an STM transaction should throw an exception - Optimizer misses unboxing opportunity - MutableByteArray# is slower than Addr# - num009 fails on OS X 10.5? 
- Compilation of large source files requires a lot of RAM - Needless reboxing of values when returning from a tight loop - inlining defeats seq - unhelpful error message for a misplaced DEPRECATED pragma - STM slightly conservative on write-only transactions - Precedence and associativity rules ignored when mixing infix type and data constructors in a single expression - Make distclean (still) doesn't - Broken link testing - Derived Read instances for recursive datatypes with infix constructors are too inefficient - ghc runs preprocessor too much - ghc panic with mutually recursive modules and template haskell - GHCi on x86_64, cannot link to static data in shared libs - Control.Exception.assert should perhaps take an implicit call stack - Test suite: Support non-utf8 locale - Make quot/rem/div/mod with known divisors fast - Template Haskell for cross compilers (port from GHCJS) - Improve `mkInteger` interface - ghc does not expose branchless max/min operations as primops - Improve parser error reporting in `ghc-pkg` - cross compiling for x86_64 solaris2 - explore ways to possibly use more tag bits in x86_64 pointers - Implement unloading of shared libraries - make better/more robust loopbreaker choices - Reading ./.ghci files raises security issues - Defer other kinds of errors until runtime, not just type errors - Remove in-tree gmp - Improve join point inlining - Track -dynamic/-fPIC to avoid obscure linker errors - GHC should use the standard binary package - Refactor Template Haskell syntax conversions - Improve CPR analysis - Optimisation: Nested CPR - Make it easy to find documentation for GHC and installed packages - Optimisation: eliminate unnecessary heap check in recursive function - Generalise the ! and UNPACK mechanism for data types, to unpack function arguments - Avoidance of unaligned loads is overly conservative - Explore when to apply static argument transformation - Warning Suppression - Program location for thread error messages - Access to module renaming with reifyModule, in TemplateHaskell - Applicative Comprehensions - Implement deprecation-warnings for class-methods to non-method transitions - Expose ghc-bin code as a library - Allow expressions in patterns - Allow `State# s` argument/result types in `ccall` FFI imports - Branchless arithmetic operations - add anyToAddr# :: (#a#)-> Addr# primop (inverse of addrToAny#) - Show parenthesised output of expressions in ghci - Pattern synonym used in an expression context could have different constraints to pattern used in a pattern context - Annotation reification with types in TH - equip GHC with an accurate internal model of floating point - add idris style EDSL support for deep embedding lambdas - prefetch# isn't as general as it should be (currently the general version isn't type safe) - data families and TH names do not mix well (e.g. cannot use TH deriving) - UNPACK polymorphic fields - SafeHaskell implying other options - Allow compatible type synonyms to be the return type of a GADT data constructor. 
- runghc (runhaskell) should be able to reload code on editing - GHC does not generate great code for bit-level rotation - Need for extra warning pragma for accidental pattern matching in do blocks - Allow the evaluation of declaration splices in GHCi - Allow CAFs kept reachable by FFI to be forcibly made unreachable for GC - Specialise INLINE functions - Give more detailed information about PINNED data in a heap profile - Use a class to control FFI marshalling - GHCi commands case insensitive - Allow type signature in export list - arrow analogs of lambda case and multi-way if - Add the ability to statically define a `FunPtr` to a haskell function - Allow defining kinds alone, without a datatype - Add compilation stage plugins - Holes with other constraints - Allow both INLINE and INLINABLE for the same function - FFI and CAPI needs {-# INCLUDE #-} back? - Shorter qualified import statements - Support pin-changing on ByteArray#s - Template Haskell: support for Haddock comments - Warnings about impossible MPTCs would be nice - Less noisy version of -fwarn-name-shadowing - Licensing requirements and copyright notices - need a version of hs_init that returns an error code for command-line errors - Support static linker semantics for archives and weak symbols - Misfeature of Cmm optimiser: no way to extract a branch of expression into a separate statement - Allow unicode sub/superscript symbols in operators - Pragma to SPECIALISE on value arguments - Warning about module abbreviation clashes - Warning about variables with leading underscore that are used anyway - Remove indirections caused by sum types, such as Maybe - Make event tracing conditional on an RTS flag only - Consider usage files in the GHCi recompilation check - Loop strength reduction for array indexing - Implement TDNR - Loop optimization: identical counters - Allow specifying .hi files of imports on command line in batch mode - Interactive "do" notation in GHCi - Relax restrictions on type family instance overlap - Template Haskell lets you reify supposedly-abstract data types - New primops for indexing: index*OffAddrUsing# etc - Two sided sections - Please consider adding support for local type synonyms - RTS GC Statistics from -S should be logged via the eventlog system - Make ghci's -l option consistent with GNU ld's -l option - allow existential wrapper newtypes - lift restrictions on records with existential fields, especially in the presence of class constraints - allow to set ghc search path globally (a'la CPATH) - Improve granularity of UndecidableInstances - Some mechanism for eliminating "absurd" patterns - Find import declaration importing a certain function - Show type of most recent expression in GHCi - GHC API is not thread safe - Allow for multiple linker instances - How to start an emacs editor within ghci asynchronously with :edit filename.hs :set editor emacs & don't go - split rts headers into public and private - Add dynCompileCoreExpr :: GhcMonad m => Bool -> Expr CoreBind -> m Dynamic to ghc-api - warn about language extensions that are not used - Report out of date interface files robustly - Revise the rules for -XExtendedDefaultRules - Treat -X flags consistently in GHCi - Avoid excessive specialisation in SpecConstr - Add an option to read file names from a file instead of the command line - Warning for missing export lists - provide -mwindows option like gcc - Warn about usage of `OPTIONS_GHC -XLanguageExtension` - :browse limitations (browsing virtual namespaces, listing namespaces) - Improve 
Template Haskell error recovery - Warn if functions are exported whose types cannot be written - explicitly importing deprecated symbols should elicit the deprecation warning - Allow unconstrained existential contexts in newtypes - Add a total order on type constructors - [Debugger] Watch on accesses of "variables" - Automatic heap profile intervals Ongoing Issues (open and assigned) 106 - -fth-dec-file outputs invalid case clauses - Linker script patch in rts/Linker.c doesn't work for (non-C or non-en..) locales - Clean up GHC.RTS.Flags - map/coerce rule does not fire until the coercion is known - More lazy orphan module loading - Foldable doesn't have any laws - Comment in GHC.Base about GHC.Prim does not appear to be correct - ghc --print-(gcc|ld)-linker-flags broken - Add warning for invalid digits in integer literals - Massive blowup of code size on trivial program - Global big object heap allocator lock causes contention - Strange slowness when using async library with FFI callbacks - GHC.Base.{breakpoint, breakpointCond} do nothing - defer StackOverflow exceptions (rather than dropping them) when exceptions are masked - One-shot mode is buggy w.r.t. hs-boot files - need types to express constant argument for primop correctness - Demand analyser is unpacking too deeply - dynamicToo001 fails on Windows - Hackage docs for base library contain broken links - Opportunity to improve CSE - GHC doesn't optimize (strict) composition with id - Bug with PolyKinds, type synonyms & GADTs - Regression in optimisation time of functions with many patterns (6.12 to 7.4)? - LLVM incorrectly hoisting loads - ghc-7 assumes incoherent instances without requiring language `IncoherentInstances` - Compiling DynFlags is jolly slow - hSetNewlineMode and hSetEncoding can be performed on closed and semi-closed handles - Add flag to configure that skips overwriting of symlinks on install - Inlining the single method of a class can shadow rules - Inlining depends on datatype size, even with INLINE pragmas - Panic using mixing list with parallel arrays incorrectly - isInstance does not work for compound types - Register allocators can't handle non-uniform register sets - The `Read` instance of `Rational` does not support decimal notation - Show instance for integer-simple is not lazy enough - Liberate case not happening - GHC retains unnecessary binding - Adding a type signature changes heap allocation into stack allocation without changing the actual type - Performance regression 7.0 -> 7.2 (still in 7.4) - Cannot tell from an exception handler whether the exception was asynchronous - core lint error with arrow notation and GADTs - Getting stdout and stderr as a single handle from createProcess does not work on Windows - Sharing across functions causing space leak - Polymorphic instances aren't automatically specialised - check_overlap panic (7.1 regression) - Missing type checks for arrow command combinators - Improve consistency checking for family instances - stub header files don't work with the MS C compiler - Non-standard compile plus Template Haskell produces spurious "unknown symbol" linker error - (^^) is not correct for Double and Float - The dreaded SkolemOccurs problem - Primitive constant unfolding - Strict constructor fields inspected in loop - GHC Bindist is Broken on FreeBSD/amd64 - SpecConstr should exploit cases where there is exactly one call pattern - The Ord instance for unboxed arrays is very inefficient - Permission denied error with runProcess/openFile - floor(0/0) should not be defined - 
Missed optimisation with dictionaries and loops - -fhpc inteferes/prevents rewrite rules from firing - cpuTimePrecision is wrong - hpc mix files for Main modules overwrite each other - Stack check for AP_STACK - Missed opportunity for let-no-esape - arrow notation: incorrect scope of existential dictionaries - Smooth out the differences between `compiler/utils/Pretty.hs` and `libraries/pretty` - The list modules need a bit of post-BBP shaking - audit ghc floating point support for IEEE (non)compliance - Using GADT's to maintain invariant in GHC libraries - Add rules involving `coerce` to the libraries - clean up dependency and usages handling in interface files - Add dummy undefined symbols to indicate ways - LLVM: Improve alias analysis / performance - Soft heap limit flag - Fix LLVM backend for PowerPC - Make a proper options parser for the RTS - LLVM: Stack alignment on OSX - New codegen: allocate large objects using allocateLocal() - Avoid generating C trigraphs - implement waitForProcess using signals - Make the External Package Table contain ModDetails not ModIface - Useful optimisation for set-cost-centre - State a law for foldMap - Improve enumFromX support for OverloadedLists - symbols should/might be type level lists of chars - Get rid of HEAP_ALLOCED on Windows (and other non-Linux platforms) - Resource limits for Haskell - Add SIMD support to x86/x86_64 NCG - Stride scheduling for Haskell threads with priorities - generalizing overloaded list syntax to Sized Lists, HLists, HRecords, etc - Generic1 deriving: Can we replace Rec1 f with f :.: Par1? - mkWeakMVar is non-compositional - Add tryWriteTBQueue to Control.Concurrent.STM.TBQueue - Allow declaration splices inside declaration brackets - parBufferWHNF could be less subtle - "guarded instances": instance selection can add extra parameters to the class - CPR optimisation for sum types if only one constructor is used - Polymorphic Data.Dynamic - GHC.Conc modifyTVar primitive - System.Posix.Signals should provide a way to set the SA_NOCLDWAIT flag - :info printing instances often isn't wanted - Enumeration of values for `Sys.Info.os`, `Sys.Info.arch` - Word type to Double or Float conversions are slower than Int conversions - ghc --cleanup - Offer control over branch prediction Completed Issues (closed) 928 - SMP primitives broken on power(pc) - Documentation will not build on platforms where GNU make is not called make - PowerPC: Unsupported relocation against x0 - Split ghc-boot so we have better dependency hygiene - Undefined stg_sel_17_upd_info symbols on OS X - Configure script doesn't check libdw version - Filtering of cost-center profiler output no longer works - Merge "Skip TEST=TcCoercibleFail when compiler_debugged" - Implement `-f(no-)version-macros` flag for controlling version macro generation - GHC 8 superclass chain constraint regression - Error in optCoercion - 'Strict' extension is incompatible with 'deriving' mechanism - TYPE 'UnboxedTupleRep is a lie - Merge some TypeInType fixes - Constant folding on 'mod/Word' - incorrect result - Regression using NamedFieldPuns with qualified field names - Allow plugins to define built-in rules - LLVM code generator produces mal-formed LLVM blocks - assertPprPanic, called at compiler/types/TyCoRep.hs:1932 - Core lint error in result of Specialise for TEST=T3220 WAY=optasm - Core lint error in simplifier when compiling Rules1 with -O -dcore-lint - Heterogeneous type equality evidence ignored - Cannot declare hs-boot declaration if there is already a value in scope - document 
TypeInType - Possible type-checker regression in GHC 8.0 when compiling `microlens` - Panic (ASSERT failed) in compiler/types/TyCoRep.hs:1939 - Optimize cmpTypeX - GHC 8.0 can't be bootstrapped with GHC 8.0 - BangPatterns-related behavior regressions on GHC 8.0 - Using Cabal 1.22 against GHC 8.0 results in unhelpful errors - catch _|_ breaks at -O1 - GHC falls into a hole if given incorrect kind signature - stg_ap_pp_fast doesn't pass the argument in the arity=1 case - Levity polymorphism checks are inadequate - Use of typechecker plugin erroneously triggers "unbound implicit parameter" error - Terrible failure of type inference in visible type application - GHC.Prim does not export Constraint - Cannot export operator newtype - Make unrecognised `-W` flags a warning rather than an error - GHC 8.0-rc1's linker does not work in OSX - Binary distributions seem to lack haddock docs - Panic with -XStrict: StgCmmEnv: variable not found - GHC 8.1.20160111 fails to bootstrap itself. - -XTypeInType uses up all memory when used in data family instance - Incorrect failure of type-level skolem escape check - No match in record selector ctev_dest - Ill-kinded instance head involving -XTypeInType can invoke GHC panic - Type mismatch in local definitions in Haskell 98 code - deriving Ix with custom ifThenElse causes "Bad call to tagToEnum#" - Solver hits iteration limit in code without recursive constraints - `-Woverlapping-patterns` induced memory-blowup - Redundant superclass warnings being included in -Wall destroys the "3 Release Policy" - Possible type-checker regression in GHC 8.0 - Regression when deriving Generic1 on poly-kinded data family - GHC panic when calling typeOf on a promoted data constructor - panic! TEST=tc198: lookupVers2 GHC.Stack.Types CallStack - Dysfunctional `__GLASGOW_HASKELL_TH` macro - haddock and Cabal regression - Type checker regression introduced by visible type-application - Programs compiled with GHC master segfault when run with +RTS -h - GHC HEAD uses up all memory while compiling `genprimcode` - DfltProb1(optasm): panic CoreToStg.myCollectArgs - T6031: *** Core Lint errors : in result of Common sub-expression *** - Undeclared `CCS_MAIN` in unregisterised build - GHC hangs/takes an exponential amount of time with simple program - Extend ghc environment file features - Regression typechecking type synonym which includes `Any`. - GHCi on Windows segfaults - Program doesn't preserve semantics after pattern synonym inlining. - Please add initial platform support for sparc64 - GHCi doesn't qualify types anymore - Turning on optimisations produces SEGFAULT or Impossible case alternative
https://gitlab.haskell.org/ghc/ghc/-/milestones/38
CC-MAIN-2020-50
en
refinedweb
Paul Kimmel on VB/VB .NET : Creating Visual Studio .NET Add-Ins

Implementing the IDTExtensibility2 Interface
The Add-In wizard provides an implementation for OnConnection automatically. OnConnection initializes the applicationObject and the AddInInstance references. These objects are used to create and insert a NamedCommand and add a menu item to the Tools menu on lines 34 to 67. The applicationObject reference refers to the Development Tools Environment (DTE), which is the root object representing the host IDE. The addInInstance object is a reference to the specific instance of the Add-In, ensuring that an invocation refers to a specific object instance. The other four IDTExtensibility2 interface methods are implemented as empty procedures. Add code to these procedures if you need additional code for initialization, startup, or de-initialization.

Implementing the IDTCommandTarget Interface
The IDTCommandTarget interface methods are implemented too. QueryStatus determines if the command is available, returning this state in the statusOption parameter, and Exec represents the point of invocation. Choose to implement those interface methods that you need to support your Add-In and leave the rest as empty procedures. Insert your response code between lines 76 and 77; for example, you might simply insert a call to a DoExecute method and implement your custom behavior beginning in that method. Insert the statement MsgBox("MyAddIn") on line 77 and press F5 to run and test the Add-In. Pressing F5 will run a second copy of Visual Studio .NET with the Add-In available on the Tools menu. Click the new Add-In menu item, and you will see the message box with the text MyAddIn displayed. (Keep in mind that the setup target created by the wizard will be compiled too, so be patient when you press F5. Building both the Add-In and setup project may take a couple of minutes.)

Debugging Add-Ins in Visual Studio .NET
Before you register your Add-In and put it into general-purpose use, you can debug it from VS .NET. The wizard sets debug properties indicating that VS .NET is the host application. When you press F5, VS .NET will run another instance of the IDE and allow you to test your Add-In. Set a breakpoint in the source code of the Add-In in the first instance of the IDE. When you run your Add-In from the second instance, it will halt when your breakpoint is hit. At that point you can debug your Add-In as you would any other application.

Registering Add-Ins
Add-Ins are assemblies. You can register Add-Ins as private assemblies for personal use or in the Global Assembly Cache (GAC) for shared use. Both types of registration are covered here.

Private Assembly Registration
Applications, like an Add-In DLL, have application settings that are stored in the Registry. You may have heard that .NET assemblies support xcopy deployment. This is true of .NET assemblies, but not of .NET assemblies that are used by COM-based applications. Add-Ins use System.Runtime.InteropServices, which suggests that Add-Ins are used by COM-based applications, specifically the Add-In Manager. For this reason you will need to register your Add-Ins. Additionally, you will need to add registry settings allowing the Add-In Manager to display the Add-In. There are several steps that you must perform to register your Add-In assembly. The first thing you need to do after you have tested your assembly is to run regasm.exe. The regasm.exe utility is found by default in the \winnt\Microsoft.Net\Framework directory.
The command to register an assembly is regasm <path\>myaddin.dll /codebase, where myaddin is the name of your Add-In, including the path information. The second step is to add information instructing the Add-In Manager how to make your Add-In accessible. You can create a registry script by copying the structure of the following listing into a text file with a .reg extension.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\7.0\AddIns\MyAddIn.Connect]
"FriendlyName"="MyAddIn"
"Description"="My First AddIn"
"LoadBehavior"=dword:00000001
"CommandLineSafe"=dword:00000001
"CommandPreload"=dword:00000001

Figure 1: Registry entries describe the Add-In in the Add-In Manager.

The preceding registry script adds a key to the registry. Replace MyAddIn.Connect with the namespace and class of your Add-In. (Connect is the class name created by the Add-In wizard by default.) FriendlyName is the name of the Add-In that is displayed in the Available Add-Ins list of the Add-In Manager (see figure 1). Description indicates the text shown in the Description field, and the three remaining keys indicate the load behavior of the Add-In.

Shared Assembly Registration
Shared assemblies are stored in the Global Assembly Cache, called the GAC. The GAC can be viewed by navigating Windows Explorer to the \winnt\assembly folder. When you navigate to this folder, the GAC snap-in is automatically loaded by Windows Explorer (see figure 2).

Figure 2: The Global Assembly Cache folder: a plug-in used by Windows Explorer automatically when you navigate to the \winnt\assembly folder.

Global assemblies can be shared. Global assemblies are distinguished by their strong name (public key) rather than by the file name. Hence you may have more than one file with an identical name in the GAC, but you will need to generate a strong name for shared assemblies. Strong names are generated by the Add-In wizard, or you can explicitly use the sn.exe utility to generate a strong-name key file. When you have run the Add-In wizard, added your custom code, and tested the Add-In, you are ready to register it. The gacutil.exe program can be used to add an assembly to the GAC. The command is gacutil /i <path\>myaddin.dll, where myaddin is the name of your Add-In. (By default the gacutil.exe utility is located at "C:\Program Files\Microsoft.NET\FrameworkSDK\Bin\gacutil.exe".) Include the complete path information for gacutil.exe and your Add-In. If you browse to the shared assembly directory (see figure 2), you will be able to confirm that your assembly has been added to the GAC. Finally, you will need to add the registry entries necessary for the Add-In Manager to manage the Add-In. In early versions of beta 2 this process seems to be a little unreliable. This is to be expected from beta software. Expect refinements in the creation and testing of Add-Ins in release versions of Visual Studio .NET. Return to this column for more information on shared assemblies and building Add-Ins as revisions to VS .NET are made available.

This article was originally published on September 18, 2001.
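Pulling the shared-assembly steps together, the command sequence looks roughly like the following. The key file name and assembly paths are only examples; adjust them for your machine and project:

rem Generate a strong-name key pair (file name is just an example)
sn -k MyAddIn.snk

rem Add the compiled Add-In assembly to the GAC (adjust paths for your machine)
"C:\Program Files\Microsoft.NET\FrameworkSDK\Bin\gacutil.exe" /i C:\MyAddIn\bin\MyAddIn.dll

rem Register the assembly for COM interop so the Add-In Manager can create it
regasm C:\MyAddIn\bin\MyAddIn.dll /codebase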
https://www.developer.com/lang/other/article.php/10942_886091_2/Paul-Kimmel-on-VBVB-NET--Creating-Visual-Studio-NET-Add-Ins.htm
CC-MAIN-2020-50
en
refinedweb
Hi Sebastian, thanks for the quick response! That helps. Best Budo

My PRs are now merged and the changes will be available in 2.5.5. @memphiz : selecting the channel through the UI will now switch to the right channel even if you have several channels with the same number. As a command for this channel, you can still use the channel number, but you can also use the channel id, which is of course the solution when you have several channels with the same number. To know what the channel ids are, there is now a console command to display them.

Great! Works perfect! Thanks a lot, Olli

Hint: You have this error while using NEXT/PREVIOUS?

[INFO ] [.lgwebos.internal.MediaControlPlayer] - Only accept PlayPauseType, RewindFastforwardType, RefreshType. Type was class org.eclipse.smarthome.core.library.types.NextPreviousType.

To get your Player item to work correctly, use this simple workaround:

rule "Set working Player FASTFORWARD Command"
when
    TV_Player received command NEXT
then
    TV_Player.sendCommand(FASTFORWARD)
end

rule "Set working Player PREVIOUS Command"
when
    TV_Player received command PREVIOUS
then
    TV_Player.sendCommand(REWIND)
end

@Lolodomo: The fix for the code seems to be simple, too:

From:
import org.eclipse.smarthome.core.library.types.RewindFastforwardType;
to
import org.eclipse.smarthome.core.library.types.NextPreviousType;
...
else if (NextPreviousType.NEXT == command) {
    handler.getSocket().fastForward(getDefaultResponseListener());
} else if (NextPreviousType.PREVIOUS == command) {
    handler.getSocket().rewind(getDefaultResponseListener());
...
in MediaControlPlayer.java

Regards, Olli
Edit: latest version works fine.

@sprehn I had a play with pywebostv to see how it dealt with the volume when using an AVR device. To my surprise it works perfectly. It tells you the cause of the event, volumeUp/Down. The actual volume level seems to be fairly accurate. It works perfectly, it would be great if you could add support for it.

{u'scenario': u'mastervolume_ext_speaker_arc', u'muted': False, u'changed': [u'volume'], u'volume': 26, u'active': False, u'action': u'changed', u'cause': u'volumeUp'}

Can somebody help me?

It depends on what version of the binding you are running. Until version 2.5.4 or 2.5.5, you can get the key by opening the configuration page of your thing in Paper UI. In 2.5.5 or 2.5.6, this will be masked in Paper UI (for security purposes), but you now have a console command to show you the value of the key. Sorry, I don’t remember if my change is already in 2.5.5 or not. So first check in Paper UI and if masked use the console command.

Were you able to figure this out?

No, if I restart the openHAB service, the TV asks again: Do I want to connect? Like the first time.

I took the code and added a debug line to print the key and then after getting the key, went back to the current version.

Thanks, it is not enough pain to do that, but I will some day when I will have enough time.

Sorry, I missed your message. I have 2.5.5 but no accesskey in console.

openhab> lgwebos
Usage: smarthome:lgwebos <thingUID> applications - list applications
Usage: smarthome:lgwebos <thingUID> channels - list channels
openhab> smarthome:lgwebos lgwebos:WebOSTV:LGTV accesskey
Usage: smarthome:lgwebos <thingUID> applications - list applications
Usage: smarthome:lgwebos <thingUID> channels - list channels
openhab>

So just pair with your TV and check in Paper UI the value of the key. Copy and paste this value in your config file. That’s all.
Nothing is there. The device id line is empty. Enable the debug logs and show us your logs when pairing your TV. Is your thing ONLINE after pairing? Yes, it is online. I set debug logging, restart service and connect to tv again. Can you tell me a string what I can find in the debug log? It is huge. Once paired and your OH has completed its startup, your command should spit out the accesskey. I went the route of recompiling the binding code and telling it to print the accesskey because I couldn’t get the command right. I tried your command in the console with my and it worked. If you can’t get it to work, I’d suggest you remove the binding, reload it, accept the discovered TV, accept the pairing on your TV, and then try the command again.
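For reference, debug logging for the binding can usually be switched on from the openHAB (Karaf) console; the logger name below is the conventional package name for the 2.5 binding and should be treated as an assumption to verify on your install:

openhab> log:set DEBUG org.openhab.binding.lgwebos
openhab> log:tail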
https://community.openhab.org/t/lgwebos-binding-for-lg-webos-tvs/4726/766
CC-MAIN-2020-50
en
refinedweb
Summary
Lets you set how mosaic dataset overviews are generated. The settings made with this tool are used by the Build Overviews tool.

Usage
This tool is used when there are specific parameters you need to set to generate your overviews, such as:
- Defining the location to write the files
- Defining an extent that varies from the boundary
- Defining the properties of the overview images, such as the resampling or compression methods
- Defining the overview sampling factor

Use the Build Overviews tool to generate the overviews after they've been defined with this tool.

You can use a polygon feature class to define the footprint of the overview. If you do not wish to use all the polygons in the feature class, you can make a selection on the layer in the table of contents or use a tool such as Select Layer By Attribute or Select Layer By Location to select the desired polygons.

The default tile size is 128 by 128. The tile size can be changed in the Environment Settings.

This tool can take a long time to run if the boundary contains a large number of vertices.

Syntax
DefineOverviews(in_mosaic_dataset, {overview_image_folder}, {in_template_dataset}, {extent}, {pixel_size}, {number_of_levels}, {tile_rows}, {tile_cols}, {overview_factor}, {force_overview_tiles}, {resampling_method}, {compression_method}, {compression_quality})

Derived Output

Code sample
DefineOverviews example 1 (Python window)
This is a Python sample for DefineOverviews.

import arcpy
arcpy.DefineOverviews_management("c:/workspace/fgdb.gdb/md01", "c:/temp", "#", "#", "30", "6", "4000", "4000", "2", "CUBIC", "JPEG", "50")

DefineOverviews example 2 (stand-alone script)
This is a Python script sample for DefineOverviews.

# Define Overviews to the default location
# Define Overviews for all levels - ignore the primary Raster pyramid
# Define Overviews compression and resampling method
import arcpy
arcpy.env.workspace = "C:/Workspace"
arcpy.DefineOverviews_management("DefineOVR.gdb/md", "#", "#", "#", "#", "#", "#", "#", "#", "FORCE_OVERVIEW_TILES", "BILINEAR", "JPEG", "50")

Environments

Licensing information
- Basic: No
- Standard: Yes
- Advanced: Yes
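As a rough illustration of the selection workflow described under Usage above (the footprint feature class, field name, and paths below are invented for the example), specific polygons can be selected and the layer passed as the in_template_dataset:

import arcpy

# Hypothetical data locations for illustration only
md = "c:/workspace/fgdb.gdb/md01"
footprints = "c:/workspace/fgdb.gdb/overview_footprints"

# Make a layer and select only the polygons to use as the overview footprint
arcpy.MakeFeatureLayer_management(footprints, "footprint_lyr")
arcpy.SelectLayerByAttribute_management("footprint_lyr", "NEW_SELECTION", "USE_FOR_OVERVIEWS = 1")

# Only the selected polygons are honored when the layer is supplied as in_template_dataset
arcpy.DefineOverviews_management(md, "c:/temp", "footprint_lyr", pixel_size="30")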
https://desktop.arcgis.com/en/arcmap/latest/tools/data-management-toolbox/define-overviews.htm
CC-MAIN-2020-50
en
refinedweb
1 Web Services
Web Services are the basic building blocks for invoking features that can be accessed by an application program. They make features available across different platforms, developed in different languages, and can be used by any remote application over the internet without running into security restrictions.

2 SOAP web services
SOAP is essentially an XML-based protocol for invoking remote methods (verb-oriented, vs. the noun-oriented REST web services). Communication between the service and the application is through a standard format called XML (eXtensible Markup Language), which is universal and is accepted on any platform across the distributed network.

3 SOAP (Simple Object Access Protocol)
SOAP is a protocol based on HTTP by means of which Web Services are enabled; it is a universally accepted format thanks to the standardization and rules that it implements. ple-web-service-with-jax-ws.html

4 UDDI (Universal Description, Discovery and Integration)
UDDI governs the publication and discovery of web service implementations, with respect to message communication between applications and also at the enterprise level. A web service is deployed on a web server (Apache Tomcat, IIS) or an application server (Java EE, Glassfish, WebLogic).

6 Simple calculation web service (add, subtract) — @WebService

7 WSDL (Web Service Description Language)
WSDL is an XML-based language that describes the interface to SOAP services. It consists of the description of the web service, which is basically a file with a .wsdl extension that you can find in your application folder.

8 WSDL basics: written in XML; describes web services; locates web services. WSDL structure: the data types used by the web service, the messages (I/O parameters), the set of operations (the interface), and the communication protocols used by the web service.

9 WSDL structure (types)
10 WSDL structure (message)
11 WSDL structure (portType)

12 Simple Object Access Protocol: structure — <soap:Envelope xmlns:... (envelope markup truncated in the page source)
13 SOAP structure (request) — <soap:Envelope xmlns:... carrying the operands 45 and 12 (markup truncated)
14 SOAP structure (response) — <soap:Envelope xmlns:... carrying the result 57 (markup truncated)
15 A complete example of JAX-WS: for the web method int add(int a, int b), the SOAP request and response might look like the following. SOAP request with operands 1 and 2 (markup truncated)

17 A <port> names a service endpoint, specifies an address (URI) at which it is available and refers to a <binding>. The <binding> for the port specifies the style of interaction (e.g. RPC) and the transport protocol used (e.g. HTTP); <operation>s are defined along with the encodingStyle for their <input> and <output>. A <portType> specifies a set of named <operation>s which refer to the <message>s used by each for <input> and <output>. A <message> describes a one-way message (request or response); it consists of a number of <part>s referring to parameters or return values, each of which has a type (e.g. xsd:string). <types> describes all data types used between client and server (XML Schema is the default type system).

21 Calculator interface
The Calculator interface defines the Service Endpoint Interface (SEI) for the Web Service. service-with-jax-ws.html

Calculator.java

package org.apache.geronimo.samples.jws;

import javax.jws.*;

@WebService(name="CalculatorPortType", targetNamespace = "")
public interface Calculator {
    @WebMethod
    public int add(@WebParam(name = "value1") int value1,
                   @WebParam(name = "value2") int value2);
}

22
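The SOAP envelopes on slides 13–15 did not survive the page extraction, so here is a rough reconstruction of what the request and response for the add operation could look like. It assumes the value1/value2 parameter names from the Calculator interface above, the default addResponse/return wrapper names used by JAX-WS, and the standard SOAP 1.1 envelope namespace; the service namespace itself is left as a placeholder.

SOAP request (45 + 12):

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <ns:add xmlns:ns="(service targetNamespace)">
      <value1>45</value1>
      <value2>12</value2>
    </ns:add>
  </soap:Body>
</soap:Envelope>

SOAP response:

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <ns:addResponse xmlns:ns="(service targetNamespace)">
      <return>57</return>
    </ns:addResponse>
  </soap:Body>
</soap:Envelope>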
23 CalculatorService.java: the service implementation.

@
package org.apache.geronimo.samples.jws;

import javax.annotation.Resource;
import javax.jws.WebService;
import javax.xml.ws.WebServiceContext;

@WebService(serviceName = "Calculator",
            portName = "CalculatorPort",
            endpointInterface = "org.apache.geronimo.samples.jws.Calculator",
            targetNamespace = "",
            wsdlLocation = "WEB-INF/wsdl/CalculatorService.wsdl")
public class CalculatorService implements Calculator {

    @Resource
    private WebServiceContext context;

    public int add(int value1, int value2) {
        System.out.println("User Principal: " + context.getUserPrincipal());
        return value1 + value2;
    }
}
@

The web.xml descriptor is used to deploy the web service. If the web.xml descriptor is not provided, it will be generated automatically during deployment.

24 JSP-based JAX-WS client: add.jsp is a basic client for the CalculatorService web service ("Apache Geronimo Sample Application - JAX-WS Calculator").

25 The HTML form markup ("Value 1:", "Value 2:") was lost in extraction; the scriptlet that calls the service is:

@
<%
String value1 = request.getParameter("value1");
String value2 = request.getParameter("value2");
if (value1 != null && value1.trim().length() > 0 &&
    value2 != null && value2.trim().length() > 0) {
    try {
        int v1 = Integer.parseInt(value1);
        int v2 = Integer.parseInt(value2);
        InitialContext ctx = new InitialContext();
        Service service = (Service) ctx.lookup("java:comp/env/services/Calculator");
        Calculator calc = service.getPort(Calculator.class);
        int sum = calc.add(v1, v2);
        out.println("Result: " + v1 + " + " + v2 + " = " + sum);
    } catch (Exception e) {
        out.println("Error: " + e.getMessage());
    }
}
%>
@
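A client does not have to run inside a JSP with a JNDI lookup; the same port can be obtained from a plain Java program through the standard JAX-WS Service API. The sketch below is not part of the original Geronimo sample: the WSDL URL and the service QName are assumptions and would need to match the values published by the actual deployment, and it assumes the Calculator interface from slide 21 is on the classpath.

@
import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

public class CalculatorClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; adjust host, port, and context path to the real deployment.
        URL wsdl = new URL("http://localhost:8080/calculator/CalculatorService?wsdl");
        // Namespace and service name must match the deployed WSDL's definitions.
        QName serviceName = new QName("http://example.org/calculator", "Calculator");

        Service service = Service.create(wsdl, serviceName);
        // getPort returns a dynamic proxy implementing the SEI;
        // each method call is marshalled into a SOAP request.
        Calculator calc = service.getPort(Calculator.class);

        System.out.println("45 + 12 = " + calc.add(45, 12));
    }
}
@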
http://slideplayer.com/slide/3415292/
CC-MAIN-2020-50
en
refinedweb
Ok, like this? subprocess.call(['test.sh', str(domid)]). Documentation is available on the Python website.

If your script has any syntax error it won't show up in the menu at all - the above code does have a syntax error on the very first line, from gimpfu import* (missing a space before the *). One easy way to check for syntax errors is to try to run the script stand-alone (it will fail when it can't find the "gimpfu" module outside GIMP, but by that time the syntax has been parsed); another way is to use a lint utility like pyflakes to check the syntax. Other, run-time errors that your script might contain should appear in a pop-up window when running it from GIMP - at that stage, you can only update your script and retry it from the menus. If you change the input or output parameters of your script, though, you have to restart GIMP. And yes, the "file location" is a problem -.

The cron jobs you schedule in your settings are not actually added to the crontab until you run python manage.py crontab add.

RTD: Well, in my experience, I would say that Java is not very fond of being run by means of a script. I did a quick Google search. Try ProcessBuilder; it looks perfect for this situation, in my opinion. I hope this helps you! :)

The error says "unbound variable: myimage". This is telling you that you are referring to a variable/function called "myimage," but you never defined that variable, so it doesn't know what "myimage" represents. Do you know what "myimage" is supposed to represent? Did you copy that from another person's script? The function gimp-file-load returns a list containing the image that you opened. You need to use the "car" function to extract the first entry from that list, so that it can be stored in your "img" variable. So instead of (img (myimage (gimp-file-load RUN-INTERACTIVE filename filename))) it should say (img (car (gimp-file-load RUN-INTERACTIVE filename filename))). Also, I think you might want to use RUN-NONINTERACTIVE instead. Similarly, I think you will need to change

PuTTY is an interactive command line. Try the below; bash variables can be used.
#!/bin/bash
su - mqm -c "echo 'DISPLAY QLOCAL (<QUEUENAME>) CURDEPTH'|runmqsc QUEUEMANAGER"

You can make your script start with the line #!/usr/bin/env python or, depending on your version, #!/usr/bin/env python3.3. Then you must make your script executable with a chmod, and it should work like you want!

This is a shell scripting question more than it is a Python one. However, I think your issue is "> test.txt": the ">" will start from a blank file each time instead of appending the results. Try ">> test.txt".

Use commandArgs like this:
args <- commandArgs(trailingOnly = TRUE)
arg1 <- args[1]
arg2 <- args[2]
[...your code...]
Also make sure that the Rscript executable is in your PATH.

import MySQLdb
con = MySQLdb.connect(...)
cursor = con.cursor()
try:
    # do stuff with your DB
finally:
    con.close()
The finally clause is executed on success as well as on error (exception). If you hit Ctrl-C, you get a KeyboardInterrupt exception.

Just some background on the methods I've tried to solve my above problem, before I go on to the answer proper: subprocess.call, subprocess.Popen, execve, and the double-fork method along with one of the above. By the way, none of the above worked for me.
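Several of the answers above rely on the same idea: pass the external command to the subprocess module as a list, so the argument is delivered as-is rather than being split or quoted by a shell. A minimal sketch, assuming a hypothetical executable test.sh in the current directory and a placeholder domid value:

@
import subprocess

# 'domid' stands in for whatever value the original question derives elsewhere.
domid = 42

# Passing the command as a list avoids shell word-splitting and quoting issues;
# './test.sh' assumes the script is in the working directory and marked executable.
ret = subprocess.call(['./test.sh', str(domid)])
if ret != 0:
    print('test.sh exited with status', ret)
@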
Whenever I killed off the web application that executes the bash script (which in turn spawns some background processes we shall denote as Q), the processes in Q would, in a round-robin fashion, take over the port occupied by the web application, so I had to kill them one by one before I could restart my web application. After many days of living with this problem and moving on to other parts of my project, I thought of some random Stack Overflow posts and()
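The answer above is cut off before it reaches its conclusion, so the following is only a sketch of one common way to keep spawned background processes from inheriting (and later squatting on) the parent's listening socket; it is not necessarily what the original author ended up doing, and the script path is hypothetical:

@
import subprocess

# Launch the helper detached from the web application's session and without
# inheriting its open descriptors, so the children cannot keep the port bound
# after the parent is killed.
subprocess.Popen(
    ['/path/to/background_job.sh'],
    close_fds=True,         # do not hand the parent's sockets to the child (default on Python 3)
    start_new_session=True  # run the child in its own session, like preexec_fn=os.setsid (Python 3.2+)
)
@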
http://www.w3hello.com/questions/How-to-Execute-Python-Fu-script-from-shell-via-Gimp
CC-MAIN-2018-17
en
refinedweb
Introduction The integrity of the Volunteer Income Tax Assistance (VITA) and Tax Counseling for the Elderly (TCE) Programs depends on maintaining public trust. All taxpayers using VITA/TCE services should be confident they are receiving accurate return preparation and quality service. All volunteers are responsible for providing the highest quality and best service to taxpayers. Along with this responsibility, all volunteers must sign and date Form 13615, Volunteer Standards of Conduct Agreement each year, stating they will comply with the Quality Site Requirements (QSR) and uphold the highest ethical standards. Furthermore, all IRS Stakeholder Partnerships, Education and Communication (IRS-SPEC) Partners must sign Form 13533, Sponsor Agreement, certifying they will adhere to the strictest standards of ethical conduct. Form 13533 is valid for one year after the signature date. New volunteers must complete the Volunteer Standards of Conduct (VSC) Training. Returning volunteers are encouraged to review the VSC Training as a refresher. All VITA/TCE volunteers must pass a VSC certification test with a score of 80% or higher. The VSC Training will provide: • An explanation of the six Volunteer Standards of Conduct defined on Form 13615 • Information on how to report possible violations • Consequences of failure to adhere to the program requirements • Examples of situations that raise questions on ethical behavior • An overview of the components included in a complete Intake/Interview & Quality Review Process Why are we doing this? During recent filing seasons, the Treasury Inspector General for Tax Administration (TIGTA) and IRS-SPEC discovered unacceptable practices at a few VITA/TCE sites. In response to these issues, IRS-SPEC enhanced the Volunteer Standards of Conduct. The intent is to provide guidance and a structure for regulating VITA/TCE volunteers and to protect taxpayers. When unscrupulous volunteers intentionally ignore the law, it compromises the integrity of the VITA/TCE Programs and the public’s trust. Unfortunately, due to the actions of a few, the VITA/TCE Programs’ integrity and trust have been tested. In these cases, IRS-SPEC can and does take appropriate actions against the partners and volunteers involved. IRS-SPEC is ultimately responsible for oversight of the VITA/TCE Programs. The agency often receives complaints from taxpayers, partners, and congressional members when assessment notices are issued. IRS-SPEC researches and responds to all inquiries, but ultimately it is the partner’s/sponsor’s responsibility to take corrective actions. Objectives What do I need?
At the end of this lesson, using your reference materials, you will be able to: • List the six Volunteer Standards of Conduct • Describe unethical behavior and how to use the external referral process to report unethical behavior • Identify consequences for failing to comply with the standards • Explain how volunteers are protected • List the basic steps volunteers are required to use during the Intake/ Interview & Quality Review Process □ Form 13614-C, Intake/ Interview & Quality Review Sheet □ Form 13615, Volunteer Standards of Conduct Agreement □ Publication 1084, IRS Volunteer Site Coordinator Handbook □ Publication 4299, Privacy, Confidentiality, and Civil Rights – A Public Trust □ Publication 5101, Intake/ Interview & Quality Review Training □ Publication 5088, Site Coordinator Training Unethical Defined IRS-SPEC defines unethical as not conforming to agreed standards of moral conduct, especially within a particular profession. In most cases, unethical behavior is acted upon with the intent to disregard the established laws, procedures, or set policies. Do not confuse an unethical action with a lack of knowledge or a simple mistake. example If volunteer Mary prepares a return, which includes a credit the taxpayer does not qualify for because Mary did not understand the law, Mary did not act unethically. However, if Mary knowingly allowed a credit for which the taxpayer did not qualify, Mary committed an unethical act and violated the Volunteer Standards of Conduct. Volunteer Standards of Conduct (VSC) Often volunteers face ethical issues, which arise in unexpected situations requiring quick decisions and good judgment. In many cases, the volunteer will react to unusual situations and not realize until after the fact that an ethical dilemma occurred. The Volunteer Standards of Conduct were developed specifically for free tax preparation operations. Form 13615, Volunteer Standards of Conduct Agreement – VITA/TCE Programs, applies to all conduct and ethical behavior affecting the VITA/TCE Programs. Volunteers must agree to the standards prior to working in a VITA/ TCE free return preparation site. All participants in the VITA/TCE Programs must adhere to these Volunteer Standards of Conduct: 1. Follow the ten Quality Site Requirements (QSR). All taxpayers using the services offered through the VITA/TCE Programs should be confident they are receiving accurate return preparation and quality service. The purpose of QSR is to ensure VITA/TCE sites are using consistent site operating procedures that will ultimately assist with the accuracy of volunteer prepared returns. See Publication 5166, VITA & TCE Quality Site Requirements, for a full description of each QSR. Non-adherence to the Quality Site Requirements only becomes a violation of the VSC if volunteers refuse to comply with the QSR. If the problem is corrected, it is not a violation of the VSC. The ten QSR are briefly described below:
At a minimum, all VITA/TCE instructors must be certified at the Advanced level or higher (based on the level of tax topics taught). At a minimum, quality reviewers must be certified to the Basic certification level or higher (including the specialty levels) based on the complexity of the tax return. New volunteers in positions that require tax law certification must take the Intake/Interview & Quality Review Training by reviewing Publication 5101, Intake/Interview & Quality Review Training. Returning volunteers are encouraged to review Publication 5101 as a refresher. All tax law-certified volunteers and site coordinators are required to pass the Intake/Interview & Quality Review certification test with a score of 80% or higher. Site coordinators must complete Site Coordinator Training annually by reviewing Publication 1084, Site Coordinator Handbook, and Publication 5088, Site Coordinator Training. In addition, site coordinators are required to pass the Intake/Interview & Quality Review certification test even if they do not perform tasks that require tax law certification. New for 2017: VITA/TCE volunteers covered under Treasury Department Circular No. 230, Regulations Governing Practice before the Internal Revenue Service, have the option to take the Circular 230 Federal Tax Law Update certification as their tax law certification. These volunteers are required to certify in Volunteer Standards of Conduct and Intake/Interview & Quality Review prior to taking the Circular 230 Federal Tax Law Update certification. In addition, if the volunteer covered by Circular 230 is going to perform the duties of a site coordinator, they are required to take the Site Coordinator Training. Circular 230 contains rules and regulations governing certain professionals (attorneys, certified public accountants, enrolled agents, etc.) representing taxpayers before the Internal Revenue Service. For more information about volunteers covered under Circular 230, see Publications 4396-A, Partner Resource Guide, and Publication 1084, Site Coordinator Handbook. (Note: SPEC established the minimum certification requirements for volunteers who are authorized under Circular 230; however, partners may establish additional certification requirements for their volunteers. Volunteers should check with the sponsoring SPEC Partner.) QSR#2, Intake/Interview & Quality Review Process All volunteer return preparation sites must use Form 13614-C, Intake/Interview & Quality Review Sheet, for every return prepared. It is a requirement for all IRS tax law-certified volunteers to use a complete intake and interview process when preparing tax returns. To promote accuracy, this process must include an interview with the taxpayer while reviewing and completing or correcting Form 13614-C prior to preparing the return. All volunteer prepared returns must be quality reviewed and discussed with the taxpayer. A quality review must include a discussion with the taxpayer and an explanation of the taxpayer’s responsibility for the accuracy of their tax return. Quality reviews should be conducted by a designated reviewer or by peer-to-peer review. SPEC encourages the quality reviewers to be the most experienced people in tax law application. QSR#3, Confirming Photo Identification and Taxpayer Identification Numbers (TIN) Site coordinators are required to have a process in place to confirm taxpayer identities.
This process must include using acceptable documents to confirm taxpayer identities by reviewing: • Photo identification for primary and secondary taxpayers; and • Social Security Numbers (SSN) or Individual Taxpayer Identification Numbers (ITIN) for everyone listed on the tax return. At a minimum, volunteers will validate taxpayers’ identities and identification numbers prior to preparing the tax return, before the return is transmitted electronically, or before a copy of the return is given to the taxpayer. Married Filing Jointly (MFJ) taxpayers must both be present at the site (not necessarily at the same time) or produce a power of attorney for the spouse who is unable to travel to the site. QSR#4, Reference Materials All sites must have at least one copy (paper or electronic) of the following reference materials available for use by the IRS tax law-certified preparers and quality reviewers: • Publication 4012, Volunteer Resource Guide • Publication 17, Your Federal Income Tax for Individuals Site/local coordinators are required to have a process in place to ensure all Volunteer Tax Alerts or AARP Cyber Tax Messages have been reviewed and discussed with all volunteers within five days after IRS issuance. QSR#5, Volunteer Agreement All volunteers (preparers, quality reviewers, greeters, etc.) must complete the VSC certification test and agree to comply with the VSC by signing and dating Form 13615 prior to working at a site. New volunteers must take the VSC Training and returning volunteers are encouraged to take the training. Form 13615 is also used to capture the levels of tax law certification the volunteer has achieved. See the chart that follows for the certification paths. Form 13615 is not valid until the sponsoring partner, site coordinator, or other partner-designated official has verified the required certification level(s) and checked proper identification (photo ID) for the volunteer prior to the volunteer working at the VITA/TCE site. (Note: Greeters or client facilitators that will not answer tax law questions are only required to certify in the Volunteer Standards of Conduct.) Site coordinators who prepare tax returns, provide tax law assistance, correct rejected returns, or quality review tax returns must certify in tax law to the level required for the complexity of the returns. If they do NOT perform any of these duties, they are not required to certify in tax law, as shown by the dotted line in the certification paths chart.
Publication 4299 outlines the need to protect the physical and electronic data gathered for tax return preparation and keep confidential the information provided by the taxpayer. Included in these guidelines is the need to protect any client identification numbers, user names, and passwords used at the site. Partners and volunteers must not share client identification numbers, user names, and/or passwords. For additional information on Quality Site Requirements, refer to Publication 5166, Quality Site Requirements, or search “Strengthening the Volunteer Programs” on. 2. Do not accept payment, solicit donations, or accept refund payments for federal or state tax return preparation. “Free” means we do not accept compensation for our services. Therefore, we do not want to confuse the taxpayer by asking for donations. Donation or tip jars located in the return preparation or taxpayer waiting area are a violation of this standard. A client may offer payment, but always refuse with a smile and say something like, “Thank you, but we cannot accept payment for our services.” If someone insists, recommend cookies or donuts for the site. Taxpayers can make cash donations to the sponsoring organization, but not in the tax preparation area. Refer taxpayers who are interested in making cash donations to the appropriate website or to the site coordinator for more information. example You finish a time-consuming return and the client is very grateful. On her way out, the client stops by and tries to sneak a $20 bill in your pocket, saying, “I would have paid ten times that at the preparer across the street.” Return the money and explain that you cannot accept money for doing taxes, but the center may appreciate a donation which can be made at the center’s downtown office or via their website. Donation or tip jars can be placed in another area at the site as long as that area does not give the impression that the site is collecting the funds for return preparation. This cannot be in the entry, waiting, tax preparation, or quality review areas. Taxpayers’ federal or state refunds cannot be deposited into VITA/TCE volunteers’ or any associated partners’ personal or business bank/debit card accounts. Generally, VITA/TCE sites should only request direct deposit of a taxpayer’s refund into accounts bearing the taxpayer’s name. Standards of Conduct (Ethics) 5 3. Do not solicit business from taxpayers you assist or use the knowledge gained about them (their information) for any direct or indirect personal benefit for yourself or any other specific individual. As a volunteer, you must properly use and safeguard taxpayers’ personal information. Furthermore, do not use confidential or nonpublic information to engage in financial transactions, and do not allow its improper use to further your own or another person’s private interests. example You are a volunteer preparer and an accountant. You cannot solicit business from the taxpayer. example You are the site’s greeter. Your daughter asks you to take candy orders at the site for her school fundraiser. You explain to her that as a VITA/TCE volunteer you cannot solicit personal business. Keep taxpayer and tax return information confidential. A volunteer preparer may discuss information with other volunteers at the site, but only for purposes of preparing the return. Do not use taxpayer information for your personal or business use. example Your primary business includes selling health insurance policies. 
During the interview, you find out the taxpayer lost access to health insurance in January of the current year. You cannot offer to sell the taxpayer health insurance through your business. Securing consent There will be some instances when taxpayers will allow their personal information to be used other than for return preparation. Under Internal Revenue Code § 7216, all volunteer sites using or disclosing taxpayer data for purposes other than current, prior, or subsequent year tax return preparation must secure two consents from the taxpayer: consent to use the data and consent to disclose the data. The site coordinator will have a process in place if consents are required at your VITA/TCE site. Exceptions to required consents Volunteer sites that use or disclose the total number of returns (refunds or credits) prepared for their taxpayers at their site (aggregate data) for fundraising, marketing, and publicity are not required to secure taxpayers’ consent. This information cannot include any Personally Identifiable Information (PII), such as the taxpayer’s name, SSN/ITIN, address or other personal information, and does not disclose cells containing data from fewer than ten tax returns. This exception does not apply to the use or disclosure in marketing or advertising of statistical compilations containing or reflecting dollar amounts of refunds, credits, rebates, or related percentages. For additional information on IRC 7216 required consents, refer to Publication 4299, Privacy, Confidentiality, and Civil Rights – A Public Trust. 6 Standards of Conduct (Ethics) 4. Do not knowingly prepare false returns. It is imperative that volunteers correctly apply tax law to the taxpayer’s situation. While a volunteer may be tempted to bend the law to help taxpayers, this will cause problems down the road. Volunteers must, resulting in an extreme burden. In addition, the taxpayer may seek damages under state or local law from the SPEC Partner for the volunteer’s fraudulent actions. Even so, the IRS would still seek payment of the additional taxes, interest, and penalties from the taxpayer. example A volunteer preparer told the taxpayer that cash income does not need to be reported. The return was completed without the cash income. The quality reviewer simply missed this omission and the return was printed, signed, and e-filed. The volunteer preparer has violated this standard. However, since the quality reviewer did not knowingly allow this return to be e-filed incorrectly, the quality reviewer did not violate this standard. Remember not to confuse an unethical action with a lack of knowledge or a simple mistake. example A volunteer prepares a fraudulent return by knowingly claiming an ineligible dependent. The taxpayer received a notice from IRS disallowing the dependent and assessing additional taxes, interest, and penalties. The taxpayer may seek money from the SPEC Partner, but must still pay the IRS the additional taxes, interest, and penalties. Hardship on the taxpayer For a low-income taxpayer, it could be impossible to make full payment and recover from return fraud. If full payment is not received, the taxpayer will receive several demand notices. If full payment is still not received, the taxpayer will be sent through the IRS collection process. This could also involve the filing of a tax lien that will affect the taxpayer’s credit report, or a levy (garnishment) on their bank accounts and/or wages. 
The taxpayer may be eligible for an installment agreement, but it could take several years to pay the IRS debt. example A taxpayer’s return fraudulently contains the Earned Income Tax Credit (EITC). The taxpayer has already received the refund when an audit notice is issued. During the audit, the taxpayer cannot provide documentation to support the EITC claim. The taxpayer is disallowed $3,000 in EITC and now has a balance due of over $4,000, including penalties and interest. This amount reflects only the EITC disallowance. An additional disallowance of the dependency exemption, Head of Household (HOH) filing status, and Child Tax Credit (CTC) could generate a balance of over $6,000. Standards of Conduct (Ethics) 7 Identity Theft Nationwide, identity theft continues to grow at an alarming rate. Unfortunately there have been instances of unscrupulous volunteers using information they have obtained at a VITA/TCE site to steal the identity of taxpayers. For example, using a stolen SSN to file a false tax return to obtain the refund is a form of identity theft. Any suspicion of identity theft will be reported to IRS Criminal Investigation (CI) and Treasury Inspector General for Tax Administration (TIGTA). The IRS considers this a very serious crime and has put in place measures to detect possible identity theft situations at VITA/TCE sites. The IRS is continually implementing new processes for handling returns, new filters to detect fraud, new initiatives to partner with stakeholders, and a continued commitment to investigate the criminals who perpetrate these crimes. example Jane, an IRS tax law-certified volunteer, is working at a VITA site on the first day the site is open. She has volunteered to electronically file the tax returns for the site to help out the site coordinator. Therefore, she has been given the needed permission level in the tax preparation software. That day, Joe, the site coordinator, opens the locked VITA file cabinet and discovers an e-file acceptance report he forgot to destroy from the previous year. He asks Jane to take the report down the hall to the shredder because it has several SSNs listed. Jane puts the report in her purse without Joe’s knowledge. Later that night at home, Jane opens the VITA tax preparation software and prepares falsified tax returns for the eight SSNs listed on the report she took from the VITA site that morning. She makes sure the returns all have high refunds. Jane puts her own bank account information in the direct deposit fields and electronically files the returns. Jane has stolen the identity of these eight taxpayers by preparing false federal tax returns to steal the refunds. Jane will soon discover SPEC has a system that extracts information pertaining to tax returns filed through the VITA/TCE Programs where multiple tax refunds are being deposited into a single bank account. Jane’s actions will be reported to IRS CI and TIGTA. 5. Do not engage in criminal, infamous, dishonest, notoriously disgraceful conduct, or any other conduct deemed to have a negative effect on the VITA/TCE Programs. Volunteers may be prohibited from participating in VITA/TCE Programs if they engage (past and future) in criminal, infamous, dishonest, or notoriously disgraceful conduct, or any other conduct prejudicial to the government. Take care to avoid interactions that discredit the program. In addition, a taxpayer may look to state or local law to seek money from the SPEC Partner for a volunteer’s fraudulent actions. 
Allowing an unauthorized alien to volunteer at a VITA/TCE site is prohibited. An “unauthorized alien” is defined as an alien not lawfully admitted into the United States. All volunteers participating in the VITA/TCE Programs must reside in the United States legally. Site coordinators are required to ask for proof of identity with a photo ID for each volunteer. However, site coordinators or partners are not required to validate the legal status of volunteers. Therefore, by signing Form 13615, volunteers are certifying that they are legal residents. 8 Standards of Conduct (Ethics) If you have information indicating that another volunteer has engaged in criminal conduct or violated any of the Volunteer Standards of Conduct, immediately report such information to your site coordinator and/or email IRS at WI.VolTax@irs.gov. Consequences Volunteers performing egregious activities are barred from volunteering for VITA/TCE Programs, and may be added to a registry of barred volunteers. The taxpayer is liable for any tax deficiency resulting from fraud, along with interest and penalties, and may seek money from the preparer and the SPEC Partner. example A partner’s program director was convicted of embezzling funds from an unrelated organization. The program director’s criminal conduct created negative publicity for the partner. The partner was removed from the VITA/TCE Programs. example A taxpayer’s refund was stolen by a volunteer return preparer at a VITA site. The taxpayer sought monetary damages from the SPEC Partner for the volunteer’s fraudulent actions. 6. Treat all taxpayers in a professional, courteous, and respectful manner. To protect the public interest, the IRS and its employees, partners, and volunteers must maintain the confidence and esteem of the people we serve. All volunteers are expected to conduct themselves professionally in a courteous, businesslike, and diplomatic manner. Volunteers take pride in assisting hard-working men and women who come to VITA/TCE sites for return preparation. Taxpayers are often under a lot of stress and may wait extended periods for assistance. Volunteers may also experience stress due to the volume of taxpayers needing service. This situation can make patience run short. It is important to remain calm and create a peaceful and friendly atmosphere. example You finish a difficult return for Millie, who has self-employment income, several expenses, and very few records. In addition, her son turned 25 and moved out early in the year. She owes the IRS about $50. After you carefully explain the return, Millie sputters, “You don’t know what you’re doing. I always get a refund! My neighbor is self-employed and she got $1,900 back.” In this situation, you should take a deep breath and courteously explain that every return is different. If necessary, involve the site coordinator. Standards of Conduct (Ethics) 9 Taxpayer Civil Rights require a reasonable accommodation in order to participate or receive the benefits of a program or activity funded or supported by the Department of the Treasury – Internal Revenue Service. A reasonable accommodation is any change made in a business environment that allows persons with disabilities equal access to programs and activities. Taxpayers with Limited English Proficiency (LEP) may require language assistance services in order to participate or receive the benefits of a program or activity funded or supported by the Internal Revenue Service. 
Language assistance services may include oral interpretation and written translation, where necessary. Site coordinators at federally assisted sites are responsible for ensuring that reasonable requests for accommodation are granted when the requests are made by qualified individuals with disabilities and that reasonable steps are taken to ensure that LEP persons have meaningful access to its programs or activities. For additional guidance, please visit the Site Coordinator Corner and review the Fact Sheets on Reasonable Accommodation and Limited English Proficiency. If a taxpayer believes that he or she has been discriminated against, a written complaint should be sent to the Department of the Treasury - Internal Revenue Service at the following address: Operations Director, Civil Rights Division Internal Revenue Service, Room 2413 1111 Constitution Avenue, NW Washington, DC 20224 For all inquiries concerning taxpayer civil rights, contact the Civil Rights Division at the address referenced above, or e-mail edi.civil.rights.division@irs.gov. Due Diligence By law, tax return preparers are required to exercise due diligence in preparing or assisting in the preparation of tax returns. IRS-SPEC defines due diligence as the degree of care and caution reasonably expected from, and ordinarily exercised by, a volunteer in the VITA/TCE Programs. This means, as a volunteer, you must do your part when preparing or quality reviewing a tax return to ensure the information on the return is correct and complete. Doing your part includes confirming a taxpayer’s (and spouse’s, if applicable) identity and providing top-quality service by helping them understand and meet their tax responsibilities. Generally, IRS certified volunteers may rely in good faith on information from a taxpayer without requiring documentation as verification. However, part of due diligence requires volunteers to ask a taxpayer to clarify information that may appear to be inconsistent or incomplete. When reviewing information for its accuracy, volunteers need to ask themselves if the information is unusual or questionable. Make an effort to find the answer When in doubt: • Seek assistance from the site coordinator 10 Standards of Conduct (Ethics) • Seek assistance from a tax preparer with more experience • Reschedule/suggest the taxpayer come back when a more experienced tax preparer is available • Reference/research publications (i.e. Publication 17, Publication 4012, Publication 596, etc.) • Research for the answer • Call the VITA/TCE Hotline at 1-800-829-VITA (8482) • Research the Interactive Tax Assistance (ITA) on to address tax law qualifications • Advise taxpayers to seek assistance from a professional tax preparer If at any time a volunteer becomes uncomfortable with the information and/or documentation provided by a taxpayer, the volunteer should not prepare the tax return. Failure to Comply with the Standards of Conduct Who enforces the standards? Because the U.S. tax system is based on voluntary compliance, taxpayers are able to compute their own tax liability. Most taxpayers compute their tax accurately, but at times unscrupulous taxpayers and preparers evade the system by filing fraudulent returns. For this reason, some sponsoring organizations may choose to perform background checks on their volunteers. The VITA/TCE Programs are operated by sponsoring partners and/or coalitions outside the IRS. However, IRS is responsible for the oversight of these programs. 
Generally, volunteers are selected by partners and not by the IRS. A volunteer tax preparer serves an important role. In fact, SPEC Partners and their volunteers are the most valuable resources in the volunteer tax preparation program. IRS has the responsibility for providing oversight to protect the VITA/TCE Programs’ integrity and maintain taxpayer confidence. IRS-SPEC recognizes its volunteers’ hard work and does not want it overshadowed by a volunteer’s lapse in judgment. How are the standards enforced? To maintain confidence in VITA/TCE Programs, IRS-SPEC enhanced Form 13615, Volunteer Standards of Conduct Agreement. The intent is to provide guidance to volunteers and a structure for regulating ethical standards. If conduct violating the standards occurs at a VITA/TCE site, IRS-SPEC will recommend corrective actions. If the site cannot remedy the conduct, then IRS-SPEC will discontinue its relationship and remove any government property from the site. In cases of malfeasance, illegal conduct, and/or management practices that violate the VSC, IRS-SPEC may terminate a grant. A volunteer’s conduct could put a site or partner in jeopardy of losing its government funding. What if an unethical situation is discovered at a site? If volunteers, site coordinators, or taxpayers identify potential problems at the partner, site, or volunteer level that they feel may require additional, independent scrutiny, they can report these issues using the external referral process (VolTax) by emailing WI.Voltax@irs.gov. SPEC employees and managers who identify unethical behavior or violations to the VSC will use an internal referral process. Volunteer’s role in reporting questionable activity Honest taxpayers and tax preparers preserve the tax system’s integrity. To sustain confidence in the VITA/ TCE Programs, you should report violations that raise substantial questions about another volunteer’s honesty, trustworthiness, or fitness as a tax preparer. Standards of Conduct (Ethics) 11 Taxpayers and tax preparers who violate tax law are subject to civil and criminal penalties. Any person who willfully aids or assists in, or procures, counsels, or advises the preparation or presentation of a materially false or fraudulent return is subject to criminal punishment. IRS-SPEC will refer violations to the IRS Criminal Investigation Division or the Treasury Inspector General for Tax Administration. You can report a violation by emailing WI.Voltax@irs.gov. Site Coordinator’s Responsibility If a site coordinator determines a volunteer has violated the Volunteer Standards of Conduct, the site coordinator needs to immediately remove the volunteer from all site activities and notify both the partner and IRS-SPEC with the details of the violation. The site coordinator can notify IRS-SPEC by either contacting their SPEC Relationship Manager or using the external referral process (VolTax). If the site coordinator contacts the territory, the territory will use the internal referral process to elevate the referral to headquarters. It is critical that SPEC Headquarters be notified as quickly as possible of any potential misconduct by any volunteers to preserve the integrity of the VITA/TCE Programs. example While reading the newspaper, Violet, the site coordinator at Pecan Public Library, learns that one of her volunteers, Dale, was arrested for identity theft. The article indicates Dale has been using other people’s identities to apply for credit cards and then using these cards for unauthorized purchases. 
Violet sends an e-mail to WI.voltax@irs.gov with the details from the news article. When the site opens the next day, Violet pulls Dale aside and advises him that he cannot work at the site due to his arrest on identity theft charges. External Referral Process The external referral process (VolTax) provides taxpayers, volunteers, site coordinators, and others an avenue to report potential unethical problems encountered at VITA/TCE sites. Volunteers and taxpayers can send an The e-mail address is available in: • Publications 4836 and 4836(SP), VITA and TCE Free Tax Preparation Program • Form 13614-C, Intake/Interview & Quality Review Sheet • Publication 730, Important Tax Records Envelope All VITA and TCE sites are required to display Publications 4836 and 4836(SP), or D-143 for AARP sites, in a visible location to ensure taxpayer awareness of the ability to make a referral. It is critical that volunteers and taxpayers immediately report any suspicious or questionable behavior. The IRS will investigate the incidents reported to the email address to determine what events occurred and what actions need to be taken. In addition, your reported violations should be shared with your sponsoring partner and local SPEC Territory Office. Taxpayers and tax preparers who violate tax law are subject to civil and criminal penalties. Any person who willfully aids or assists in, procures, counsels, or advises the preparation of a false or fraudulent return is subject to criminal punishment. 12 Standards of Conduct (Ethics) Volunteer Registry Volunteers and partners released from the VITA/TCE Programs for egregious actions can be added to the IRS-SPEC Volunteer Registry. The IRS-SPEC Director will determine if a volunteer or partner should be added to the registry. The purpose of the registry is to notify IRS-SPEC employees of volunteers and partners who were removed from the VITA/TCE Programs. The registry will include partner or individual names, locations, and affiliated agency or sponsors. Volunteers and/or partners on this list are unable to participate in VITA/TCE Programs indefinitely. Egregious actions include, but are not limited to, one or more of the following willful actions: • Creating harm to taxpayers, volunteers or IRS employees • Refusing to adhere to the Quality Site Requirements • Accepting payments for return preparation at VITA/TCE sites • Using taxpayer personal information for personal gain • Knowingly preparing false returns • Engaging in criminal, infamous, dishonest, notorious, disgraceful conduct • Any other conduct deemed to have a negative impact on the VITA/TCE Programs What is the impact on VITA/TCE Programs? Unfortunately, one volunteer’s unethical behavior can cast a cloud of suspicion on the VITA/TCE Programs as a whole. IRS-SPEC has closed tax sites due to unethical behavior, which left taxpayers without access to free tax preparation in their community. The consequences to the tax site or sponsoring organization may include: • Terminating the partnership between the IRS and the sponsoring organization • Discontinuing IRS support • Revoking or retrieving the sponsoring organization’s grant funds • Deactivating IRS Electronic Filing Identification Number (EFIN) • Removing all IRS products, supplies, and loaned equipment from the site • Removing all taxpayer information • Disallowing use of IRS logos What is the impact on taxpayers? A taxpayer is responsible for paying only the correct amount of tax due under the law. 
However, an incorrect return can cause a taxpayer financial stress. Although a return is accepted, it may not be accurate. Acceptance merely means the required fields are complete and that no duplicate returns exist. It is imperative to correctly apply the tax laws to the taxpayer’s situation. While a volunteer may be tempted to bend the law to help taxpayers, this will cause problems in the future. How might the taxpayer find relief? If tax collection would cause significant hardship, the taxpayer may be able to find relief. Significant hardship means serious deprivation, not simply economic or personal inconvenience to the taxpayer. In this case, collection action may stop, but interest and penalties will continue to accrue until the balance is paid in full. What if the taxpayer is not telling the truth? As described under VSC #4, the tax controversy process can be long and drawn-out. A volunteer who senses Standards of Conduct (Ethics) 13 that a taxpayer is not telling the truth should not ignore it. Conduct a thorough interview to ensure there is no misunderstanding. If that does not resolve the matter, refer the taxpayer to the site coordinator. Remember, if a volunteer is not comfortable with the information provided from the taxpayer, the volunteer is not obligated to prepare the return. Taxpayer review and acknowledgement After the return is finished, an IRS tax law-certified volunteer must briefly discuss the filing status, exemptions, income, adjusted gross income, credits, taxes, payments, and the refund or balance due with the taxpayer. If the taxpayer has any questions, concerns, or requires additional clarification about the return, the volunteer must assist the taxpayer. If necessary, ask the site coordinator for assistance. Tax returns include the following disclosure statements: • For the Taxpayer: “Under penalties of perjury, I declare that I have examined this return and accompanying schedules and statements, and to the best of my knowledge and belief, they are true, correct, and complete.” • For the Preparer: “Declaration of preparer (other than the taxpayer) is based on all information of which preparer has any knowledge.” Volunteers must remind taxpayers that when they sign the return (either by signing Form 1040, U.S. Individual Income Tax Return or signing Form 8879, IRS e-file Signature Authorization), they are stating under penalty of perjury that the return is accurate to the best of their knowledge. Volunteer Protection Act Public Law 105-19, Volunteer Protection Act of 1997 (VPA) generally protects volunteers from liability for negligent acts they perform within the scope of their responsibilities in the organization for whom they volunteer. The VPA is not owned or written exclusively for Internal Revenue Service. This is a public law and relates to organizations that use volunteers to provide services. What is a volunteer? Under the VPA, a “volunteer” is an individual performing services for a nonprofit organization or a governmental entity (including as a director, officer, trustee, or direct service volunteer) who does not receive for these services more than $500 total in a year from the organization or entity as: • Compensation (other than reasonable reimbursement or allowance for expenses actually incurred), or • Any other thing of value in lieu of compensation Although an individual may not fall under the VPA definition of a “volunteer,” which means they may not be protected under the VPA, they are still considered volunteers by the VITA/TCE Programs. 
To ensure protection, those who do not fit this VPA volunteer definition should seek advice from their sponsoring organization’s attorneys to determine liability protection rights. What does the VPA do? The purpose of the VPA is to promote the interests of social service program beneficiaries and taxpayers and to sustain the availability of programs, nonprofit organizations, and governmental entities that depend on volunteer contributions. It does this by providing certain protections from liability concerns for volunteers serving nonprofit organizations and governmental entities. The VPA protects volunteers from liabilities if they were acting within the scope of the program and harm was not caused by willful or criminal misconduct, gross negligence, reckless misconduct, conscious, flagrant indifference to the rights or safety of the individual harmed by the volunteer. The VPA does not protect conduct 14 Standards of Conduct (Ethics) that is willful or criminal, grossly negligent, reckless, or conduct that constitutes a conscious, flagrant indifference to the rights or safety of the individual harmed by the volunteer. Volunteers should only prepare returns that are within their tax law certification level, their site’s certification level, and the level of certification under the VITA/TCE Programs. See the Scope of Service Chart in Publication 4012 for more information. In general, if volunteers are performing their responsibilities while adhering to the Volunteer Standards of Conduct, they are protected. However, local and state laws still must be considered. Sponsoring organizations should seek advice from their attorneys to determine how this law protects their volunteers. Instructions for Completing Training, Certification, and the VSC Agreement Before working at a VITA/TCE site, all volunteers must present a current-year VSC Agreement (Form 13615) to the sponsoring partner and/or site coordinator with the volunteer section completed, signed, and dated. When the volunteer signs Form 13615, they are agreeing to adhere to the VSC. Form 13615 is also used to capture the levels of certification the volunteer has achieved. Form 13615 is not valid until it is signed and dated by the sponsoring partner, site coordinator, instructor, or other partner-designated official after verifying the volunteer’s identity (with photo ID) and certification level. Volunteers may view training and take the certification tests by using: • Link & Learn Taxes (preferred), OR • The following products, available for download at: – Publication 4961, VITA/TCE Volunteer Standards of Conduct – Ethics Training – Publication 5101, Intake/Interview & Quality Review Training – Form 6744, Volunteer Assistor’s Test/Retest For more information on the certification levels and process, see Publication 4491, VITA/TCE Training Guide, Course Introduction or Link & Learn Taxes, Course Introduction. Volunteers using Link & Learn Taxes must: • Pass the VSC certification test with a score of 80% or higher. Only new volunteers are required to view the VSC Training before taking the VSC certification test. In addition, new volunteers planning to be a site coordinator or hold a position requiring tax law certification are also required to view the Intake/Interview & Quality Review Training before taking the associated certification test. • Complete the Intake/Interview & Quality Review certification and pass the appropriate tax law certification tests (Basic, Advanced, etc.) 
if preparing returns, performing quality review, or other position requiring tax law knowledge. Site coordinators not performing duties that require tax law certification must also pass the Intake/Interview & Quality Review certification. • Check the Volunteer Agreement digital signature checkbox in Link & Learn Taxes acknowledging that Form 13615, Volunteer Standards of Conduct Agreement, has been read and agreed to. – After each test, the Link & Learn system on VITA/TCE Central will add the letter “P” to Form 13615 indicating a passing score for the VSC Training and (if applicable) Intake/Interview & Quality Review certification and tax law certification levels. • Finish the form by completing the applicable fields (if missing): name, home address, site name, partner name, daytime phone number, e-mail address, volunteer position, and any other required fields. • Print and review the form and give the completed form to the partner-designated official or site coordinator. – The partner-designated official or site coordinator will verify your identity by using your photo identification, and certify by signing and dating the form. Standards of Conduct (Ethics) 15 Volunteers using the paper test must: • Take the VSC certification test in Publication 4961 or Form 6744. New volunteers must review the VSC Training in Publication 4961 prior to taking the certification test. • Complete the Intake/Interview & Quality Review certification test in Form 6744 if they will be certifying in tax law or if they are a site coordinator. New volunteers must view the Intake/Interview & Quality Review Training (Publication 5101) prior to taking the certification test. • Use Form 6744 to take and pass the appropriate tax law certification tests (Basic, Advanced, etc.) if they will be preparing returns, performing quality review, or other position requiring tax law testing. VSC and tax law certification can be completed by using Publication 4961, Form 6744, VITA/TCE Volunteers Assistor’s Test/Retest, or by using Link & Learn Taxes online. If Link & Learn Taxes is used, volunteers can certify by signing Form 13615 electronically after all required tests are completed with a passing score. • Complete the volunteer section of Form 13615, Volunteer Standards of Conduct Agreement, by adding full name, home address, sponsoring partner/site name, daytime phone number, e-mail address, volunteer position, and number of volunteer years. • Sign and date Form 13615. Instructors will: • Use Publication 4961 to administer the VSC training and test. • Review Publication 5101, Intake/Interview & Quality Review Training when instructing new volunteers. This publication can be downloaded from or secured from your SPEC Relationship Manager. • Use Form 6744 to administer the certification tests. • Provide any information that volunteers do not know, such as the partner name. • Mark “P” for the VSC and Intake/Interview & Quality Review tests, indicating passing scores. • Mark “P” for each appropriate tax law certification level indicating a passing score. • Return the form to each volunteer for their signature and date. • Use photo identification to verify the volunteer’s identity and certify by signing and dating Form 13615. • Provide additional processing instructions for the form. Resolving Problems In general, the site coordinator is the first point of contact for resolving any problems that a volunteer may encounter. 
If a volunteer feels an ethical issue can’t be handled by the site coordinator, email IRS at WI.VolTax@irs.gov and/or contact the local IRS-SPEC Relationship Manager. The following chart lists common issues that a taxpayer may have and where they can be referred. Publication 5136, Service Guide, also may be helpful when a taxpayer has a question unrelated to tax preparation. Publication 5136 can be located at. For this type of issue: The appropriate action is: Individual or company is violating the tax laws Use Form 3949-A, Information Referral. Complete this form online at. Print the form and mail to: Internal Revenue Service, Fresno, CA, 93888. 16 Standards of Conduct (Ethics) For this type of issue: The appropriate action is: Victims of identity theft Refer taxpayers to Identity Protection Specialized Unit at 1-800-908-4490. The Protection Specialized Unit may issue these taxpayers a notice. Volunteers may prepare returns for taxpayers who bring in their current CP01A Notice or special PIN (6 digit IPPIN). Include the IPPIN on the software main information page. Instructions are located at:. Taxpayers believe they are victims of discrimination Refer taxpayers to: (Written complaints) Operations Director, Civil Rights Division; Internal Revenue Service, Room 2413; 1111 Constitution Ave., NW; Washington, DC 20224. Taxpayers have account questions such as balance due notices and transcript or installment agreement requests Taxpayers should be referred to: • If they want to make a payment, they will click on Pay Your Tax Bill icon. • If they are requesting an installment agreement they will select, Can’t Pay Now? • If they have a notice, they will enter understanding your notice in the Search feature on IRS.gov. If they still need help, refer the taxpayer to a local Taxpayer Assistance Center or they can call the toll-free number 1-800-829-1040. Federal refund inquiries Refer the taxpayer to and click on Where’s My Refund? State/local refund inquiries Refer to the appropriate state or local revenue office. Taxpayers have been unsuccessful in resolving their issue with the IRS Tell taxpayers that the Taxpayer Advocate Service can offer special help to a taxpayer experiencing a significant hardship as the result of a tax problem. For more information, the taxpayer can call toll free 1-877-777-4778 (1-800-829-4059 for TTY/TDD) or go to and enter Taxpayer Advocate in the Search box. Exercises Using your reference materials, answer the following questions. Question 1: Taxpayer Edna brings her tax documents to the site. She completes Form 13614-C, Intake/ Interview & Quality Review Sheet. She indicates in Part III of Form 13614-C that she has self-employment income along with other income and expenses. Joe, a volunteer tax preparer, reviews Form 13614-C with Edna. He asks if she brought all of her documents today, and asks to see them. Included in the documents is Form 1099-MISC, Miscellaneous Income, showing $7,500 of non-employee compensation in Box 7. She tells Joe that she has a cleaning business that provides services to local businesses. Edna says she also received $4,000 in cash payments for additional cleaning work. When Joe asks if she received any documentation supporting these payments, she says no, the payments were simply paid to her for each cleaning job she performed. Standards of Conduct (Ethics) 17 At this point, Joe suggests that because the IRS has no record of the cash payments, Edna does not need to report these payments on her return. 
Edna is concerned and feels like she could “get in trouble” with the IRS if she does not report all of her income. Joe assures her that the chance of the IRS discovering that she did not report cash income is very small. Joe prepares Form 1040, Individual Income Tax Return. On Schedule C, Line 1 he reports only the $7,500 reported in Box 7 of Form 1099-MISC. When Joe completes the return, he hands it to Edna to sign Form 8879, IRS e-file Signature Authorization. A. Is there a Volunteer Standards of Conduct violation? If yes, describe. B. What should happen to the volunteer? C. What should the volunteer have done? Question 2: Taxpayer George completes Form 13614-C indicating in Part II that his marital status is single with one dependent, Amelia. Volunteer preparer Marge reviews the intake form and the taxpayer’s information documents. When Marge asks if Amelia is related to George, he says no, that Amelia is the child of a personal friend who is not filing a tax return. Amelia’s mother told George to claim the child and even gave him Amelia’s Social Security card. Marge then asks whether George provided more than one-half of Amelia’s support, but George says no. He goes on to say that he should be able to claim Amelia as a dependent because no one else is claiming her. Marge agrees that although Amelia is not George’s qualifying child or relative, he can still claim her as a dependent because no one else will. Marge goes on to suggest that the child could be listed as George’s niece who lives with him, so that he can file as a Head of Household and claim the Earned Income Tax Credit (EITC). Marge completes Form 13614-C, Section B, accordingly. Marge assures George that chances of the IRS discovering that he and Amelia are not related would be very small. Marge prepares the return with the Head of Household status and claiming the EITC and Child Tax Credits for “qualifying child” Amelia. George signs Form 8879. A. Is there a Volunteer Standards of Conduct violation? If yes, describe. B. What should happen to the volunteer? C. What should the volunteer have done? Question 3: Taxpayer Isabel’s completed Form 13614-C indicates that she does not have an account to directly deposit a refund. When volunteer James prepares Isabel’s return, it shows that Isabel is entitled to a $1,200 refund. James tells Isabel that a paper check may take up to 6 weeks to arrive, but if she has the funds directly deposited to a checking account, the amount would be available in up to 21 business days. He offers to have the money deposited to his own checking account, stating that on receipt of the money he would turn it over to her. Isabel agrees and allows James to enter his routing number and account information on her return. James gives the money to Isabel when he receives it. A. Is there a Volunteer Standards of Conduct violation? If yes, describe. B. What should happen to the volunteer? 18 Standards of Conduct (Ethics) Question 4: While volunteer James is completing Isabel’s return, he notes that she is single and asks her if she would like to meet some evening at a local bar so they could get to know each other better. Although Isabel says that she would prefer that he not call her, James says he does not give up that easily and that he will call her later in the week. Isabel reports the conversation to the site coordinator before she leaves the site. A. Is there a Volunteer Standards of Conduct violation? If yes, describe. B. What should happen to the volunteer? 
Question 5: Volunteer John is preparing a return for taxpayer Max, who sold stock during the tax year. Max says he does not want to report capital gains and tells John that the cost basis on the stock sold was equal to or higher than the sales price. Based on his own stock portfolio, John believes Max is lying. John explains to Max that if the IRS examines the return, the cost basis will have to be supported by written statements or other documents of the purchases. Max says he understands, but he still wants the return completed with the amounts he has given to John. After John completes the return and Max signs Form 8879, the return is e-filed. A. Is there a Volunteer Standards of Conduct violation? If yes, describe. B. What should happen to the volunteer? Question 6: When Joelle, site coordinator, returns from a lunch break, she notices the waiting area is nearly empty. When she asks Greeter Jade what happened, Jade says that volunteer Nathan and a taxpayer had a loud, bitter argument, and many taxpayers got concerned and left. Joelle takes Nathan to a private area and asks him to explain what happened. Nathan says the taxpayer became upset when Nathan told him that as a noncustodial parent he had to have a signed Form 8332, Release/Revocation of Release of Claim to Exemption for Child By Custodial Parent, or he could not claim his children as dependents. Nathan admits that he got angry when the taxpayer started name calling. Nathan says he told the taxpayer, “If you don’t like our free service, then you can go somewhere else.” Nathan also says there was a lot of yelling and cussing on both sides and then the taxpayer left the site. A. Is there a Volunteer Standards of Conduct violation? If yes, describe. B. What should happen to the volunteer? C. What should the volunteer have done? Intake/Interview & Quality Review Processes Introduction Taxpayers should be confident they receive quality service when using services offered through the VITA/TCE Programs. This includes having an accurate tax return prepared. A basic component of preparing an accurate return begins with a conversation with the taxpayer and includes asking the right questions. Form 13614-C, Intake/Interview & Quality Review Sheet, is a tool designed to assist IRS tax law-certified volunteers in asking the necessary questions to obtain the information necessary to prepare an accurate tax return. IRS reviews indicate that tax return accuracy is improved when Form 13614-C is used correctly with an effective interview of the taxpayer. Standards of Conduct (Ethics) 19 Purpose of this Training The training provided here will educate all volunteers, especially greeters, who are not certified in tax law and who work in the intake area, on their role and involvement in the return preparation process. All volunteers need to understand the process used at a site to prepare a tax return from start to finish. This process should be explained to the taxpayer when they enter the site. This training is designed to only provide an overview of the Intake/Interview & Quality Review Process so all IRS volunteers understand their responsibilities. All site coordinators and volunteers who answer tax law questions, instruct tax law classes, prepare or correct tax returns, and/or conduct quality reviews of completed tax returns must be certified in Intake/Interview & Quality Review in addition to the tax law and VSC certification requirements. 
The certification test is based on the more detailed training on how to use the intake sheet to prepare and quality review tax returns. The detailed training is available on VITA/TCE Central and by downloading Publication 5101, Intake/Interview & Quality Review Training, from. The detailed training is required for new IRS tax law-certified volunteers and is recommended for returning volunteers. Adherence to the Intake/Interview Process Tool Form 13614-C is a tool similar to what is required when a taxpayer visits a professional tax preparer or uses tax preparation software. It is a starting point to engage the taxpayer in discussion to gather all the necessary information to prepare an accurate tax return. Just like any tool, it has to be used properly to reach the desired outcome. Each year the IRS SPEC has seen improvements with using Form 13614-C. In most cases, taxpayers are completing their sections. However, many tax law-certified volunteer preparers do not: • Look at the information completed by the taxpayer • Engage in a conversation with the taxpayer • Clarify any “unsure” answers the taxpayer has marked example During TIGTA and SPEC shopping reviews, analysts posed as taxpayers at volunteer return preparation sites. The “taxpayers” checked the question on Form 13614-C indicating they had interest income but did not provide a Form 1099-INT. Many volunteers never asked about the interest income during the interview. As a result, the interest income was omitted from the tax return and the tax return was incorrect. Had a thorough interview and review of the Form 13614-C been conducted by the tax law-certified volunteer, the interest income would have been discovered and an accurate return would have been prepared. The Intake Process Unless noted, most steps of the intake process can be done by a greeter who has not been certified in tax law. An experienced IRS tax law-certified volunteer should be consulted when tax law questions require clarification at any point during the intake process. The Intake/Interview & Quality Review Process includes the following components to ensure volunteers obtain the necessary information to prepare an accurate return: 1. The Intake Process: a. Greeting the taxpayer b. Ensuring the taxpayer and spouse, if applicable, have photo identification c. Verifying the taxpayer has SSN or ITIN required documentation 20 Standards of Conduct (Ethics) d. Explaining the return preparation process e. Providing Form 13614-C to the taxpayer for completion, explaining documents required f. Determining the return certification level, and g. Assigning the taxpayer to a qualified tax preparer 2. The Interview Process a. Interviewing the taxpayer b. Checking photo identification for the taxpayer and spouse, if applicable, and verifying SSN or ITIN for everyone on the return c. Preparing the tax return 3. The Quality Review Process a. Inviting the taxpayer to participate b. Reviewing the return for accuracy (The steps for performing the quality review are listed on Form 13614-C, Part VII.) c. Informing taxpayers they are responsible for the information on their tax return Greet the taxpayer During this stage, an assessment should be made to ensure the taxpayer has everything the tax preparer needs to prepare the tax return. Performing this task right away ensures taxpayers are not wasting their time by waiting and then being turned away for reasons that could have been discovered early. 
The volunteer working in the intake area should: • Make sure the taxpayer and spouse, if applicable, have brought photo identification with them to show the return preparer and/or the quality reviewer. • Verify they have SSN cards and/or ITIN letters or cards, or other acceptable verification, for everyone on the return. More information on acceptable documentation is found in Publication 4299. • Ask the taxpayer if they have received and brought all their tax documents, like Forms W-2 and 1099-R. • If the site has gross income limits, take a quick check to make sure the taxpayer(s) income is below the limit. • Verify both spouses are at the site that day if filing a joint tax return. Explain the steps of the Intake/Interview & Quality Review Process to the taxpayer Explain the Intake/Interview & Quality Review Process so that the taxpayers understand that they are expected to: • Complete Form 13614-C prior to having the return prepared • Be interviewed by the return preparer and answer additional questions as needed • Participate in a quality review of their tax return by someone other than the return preparer Provide the taxpayer Form 13614-C Ask the taxpayer to complete pages 1, 2 and 3 of Form 13614-C. An IRS tax law-certified volunteer might need to offer assistance in the following cases. As a reminder, Form 13614-C is required to be used at all VITA/TCE sites. Standards of Conduct (Ethics) 21 If taxpayers… Then a tax law-certified volunteer should… Cannot complete the form for any reason Fill out the form by asking them the questions and recording their answers. Do not understand a question, they can mark “unsure” Assist them with answering the question. Have income, expenses, or life events not listed on Form 13614-C, which might indicate an outof-scope tax return Review the information and determine if the return is within scope for the site requirements and volunteer certifications. Determine the certification level of the tax return A greeter can perform this part of the process. When a greeter is not available, an IRS tax law-certified preparer should go through similar steps before starting the return preparation. • Page 2 of Form 13614-C identifies the required tax law certification level for each question. The levels are identified as B (Basic), A (Advanced), M (Military), HSA (Health Savings Accounts). • Determine the potential certification level required for the tax return based on how the intake sheet was completed. All questions marked as “yes” and “unsure” should be reviewed to determine the highest certification level needed to prepare the return or to discuss the “unsure” responses. • The volunteer assigning or selecting the tax return for preparation must understand how to identify the certification level required for that return. • The volunteer will also want to ensure the taxpayer does not have other income or expense items that may be out of scope for the program or site. The greeter, if not tax law-certified, may need to enlist the assistance of a tax law-certified volunteer to make the final determination on potential out-of-scope issues. If the greeter cannot assign the taxpayer to a tax law-certified preparer with the required certification level listed on Form 13614-C, the greeter is required to seek assistance to determine if the taxpayer’s return can be prepared at the site. The determination will be based on a combination of the site’s return preparation policy and Scope of Service Chart listed in Publication 4012. 
This will ensure taxpayers are not mistakenly turned away from the site. example A taxpayer completes Form 13614-C, answering “Yes” to the question, “Have a Health Savings Account?” The certification level next to this question is HSA (Health Savings Accounts). All other checked questions show the certification level B (Basic). Because of the need for HSA knowledge, the taxpayer should be assigned to a volunteer who is certified in the HSA course. Assign tax return to the volunteer preparer Every site is required to have a process for assigning taxpayers to volunteer preparers who are certified at or above the level required to prepare their return. The method for identifying certification levels for volunteers can include indicators on name badges, stickers, nameplates, or other partner-created products. Having the certification levels easily identified will assist the site coordinator, greeter, or whoever is responsible for assigning the tax return. SPEC has an optional ID badge (Form 14509) that can be used for this purpose or the site can use its own method to satisfy this requirement. 22 Standards of Conduct (Ethics) The Interview Process Only IRS tax law-certified volunteers may interview the taxpayer. All IRS tax law-certified volunteer preparers and site coordinators are required to certify in the Intake/Interview & Quality Review Process. Publication 5101 provides detailed training on how to perform the interview process with the taxpayer. The basic steps are: • Verify taxpayer ID. Check photo identification for the taxpayer (and spouse, if applicable) and request verification of SSN or ITIN for everyone listed on the tax return. • Review Form 13614-C. Make sure the taxpayer has answered all required questions on Form 13614-C. Any questions left blank or marked “unsure” must be clarified and the correct answer should be recorded on Form 13614-C. • Interview the taxpayer. Use probing questions to develop and/or clarify information on the intake sheet and to confirm the information provided by the taxpayer is complete and accurate. • Review documentation. Look at all supporting documentation provided by the taxpayer (Forms W-2, 1099, payment receipts, etc.). • Verify certification level. Make sure the taxpayer’s return is within the preparer’s certification level and within the scope of the VITA/TCE Programs. Make all dependent exemption and filing status determinations before preparing the return. Preparing the Tax Return After interviewing the taxpayer, the IRS tax law-certified preparer enters information into the software and prepares the tax return. The Quality Review Process The quality reviewer assigned to a taxpayer should have a certification equal to or above the level needed to prepare the tax return. The site is required to have a process in place for assigning tax returns to the appropriate quality reviewer. Volunteers are not permitted to quality review a tax return that they prepared. example Following preparation of the tax return in the previous HSA example, a quality reviewer assigned to this taxpayer must also have HSA certification. The taxpayer must be interviewed during the Quality Review Process. The last step of the quality review is informing the taxpayer of their responsibility for the information on the tax return. The taxpayer must be advised to review the return to ensure the information is accurate and complete. Standards of Conduct (Ethics) 23 Summary • All volunteers must agree to the Volunteer Standards of Conduct (VSC) outlined on Form 13615. 
The partner-designated official or site coordinator must verify the identity (with photo identification) and certification level of the volunteer before the volunteer is allowed to work at the site. • Failure to comply with the standards may adversely affect the taxpayer, the site, the partner and the VITA/ TCE Programs. • Violations of the VSC will not be tolerated. If a violation is discovered, appropriate actions will be taken, up to removal of the volunteer, closing of the site, and discontinuing IRS support to the sponsoring partner. • Review Publication 1084, Site Coordinator Handbook, for actions the site coordinator should take if a VSC violation is identified. • The Volunteer Protection Act generally protects volunteers from liability as long as they are acting in accordance with the standards. • Volunteers and partners with questions about the standards should contact their IRS-SPEC Relationship Manager. Summary of the Intake/Interview & Quality Review Processes To meet VITA/TCE Quality Site Requirements, volunteers must perform each of the following tasks during the intake/interview process: • Verify the identity (photo ID) and address of the taxpayer(s) and request verification of SSN or ITIN for everyone listed on the tax return. • Explain the tax preparation process and encourage taxpayers to ask questions throughout the interview. • Complete Form 13614-C, Intake/Interview & Quality Review Sheet. – Verify all items in the taxpayer section have been answered – Note changes and clarifications provided by the taxpayer on the form • Interview the taxpayer using probing questions to confirm the information provided on Form 13614-C is complete and accurate. • Review all supporting documentation provided by the taxpayer (Forms W-2, 1099, payment receipts, etc.). If the taxpayer has income or expenses listed on the return that do not require a source document and none were provided, the intake sheet should be notated to show a verbal response was provided. To meet VITA/TCE Quality Site Requirements, a quality review requires all of the following: • Inviting the taxpayer to participate. The taxpayer must be involved during the Quality Review Process because the quality reviewer needs to be able to ask additional questions. • Reviewing the return for accuracy using: – Form 13614-C, with all sections completed, – The completed tax return, and – All documents provided by the taxpayer, including those used to verify identity, income, expenses, payments, and direct deposit. • Advising the taxpayers of their responsibility for the information on the tax return. 24 Standards of Conduct (Ethics) Exercise Answers Answer 1 A. Yes, Standard 4, knowingly preparing a fraudulent return. B. Volunteer should be removed and barred from working at a VITA/TCE site and added to the Volunteer Registry. C. Cash income should be reported as income on Schedule C. Answer 2 A. Yes, Standard 4, knowingly preparing a fraudulent return. Although the taxpayer insisted on including the dependent, Marge knew this was wrong. B. Volunteer should be removed and barred from working at a VITA/TCE site and added to the volunteer registry. C. Volunteer should educate George on dependent eligibility using Publication 4012, Volunteer Resource Guide, refuse to prepare the tax return, or report the incident to the site coordinator. Answer 3 A. Yes, Standard 2, do not accept payment, solicit donations, or accept refund payments for federal or state tax return preparation. 
Although the volunteer’s intention was to help Isabel get her refund sooner by having it direct deposited instead of mailed, putting it into his own account is problematic and could raise the question of misappropriation of a tax refund or be perceived as receiving payment for tax return preparation. Generally, VITA/TCE volunteers should only request direct deposit of a taxpayer’s refund into accounts bearing the taxpayer’s name. B. Volunteer must be counseled that he cannot put any other taxpayer’s refund into his own account. If this continues, he will be removed and barred from the site. Answer 4 A. Yes, Standard 3, using knowledge gained from the taxpayer for volunteers’ personal benefit. B. He should be reminded that he cannot use taxpayer’s personal information (marital status and phone number) for his benefit. Answer 5 A. Maybe. Even though Max insists on using the cost basis he provides to John, as long as John has conducted a thorough interview, especially about the stock sales, he can prepare the return. John should remind Max that taxpayers sign their returns under penalty of perjury, and that Max is ultimately responsible for the return. If Max tells John that the basis amounts are wrong and John prepares the return anyway, then John is violating Standard 4, knowingly preparing a false return. B. As long as John did not knowingly prepare a false return, nothing should happen. However, if John does know the information is false, then he should be removed, barred from the site, and he could be added to the Volunteer Registry. Answer 6 A. Yes, Standard 6. Volunteers must deal with people at the site with courtesy and in a respectful and professional manner. B. Nathan should be warned that future outbursts will result in his immediate removal as a volunteer. C. Nathan should have taken a deep breath and courteously explained the Form 8332 requirements using Publication 4012. If the situation still could not be resolved, Nathan should have requested the taxpayer speak to the site coordinator upon her return. Standards of Conduct (Ethics) 25
https://fr.scribd.com/document/327918481/Standards-of-Conduct-Training-2017
CC-MAIN-2018-17
en
refinedweb
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards

detail/mutex.hpp provides several mutex types that provide a consistent interface for OS-supplied mutex types. These are all thread-level mutexes; interprocess mutexes are not supported.

This header file will try to guess what kind of system it is on. It will auto-configure itself for Win32 or POSIX+pthread systems. To stub out all mutex code, bypassing the auto-configuration, #define BOOST_NO_MT before any inclusion of this header. To prevent ODR violations, this should be defined in every translation unit in your project, including any library files.

namespace details {
namespace pool {

// Only present if on a Win32 system
class Win32_mutex
{
  private:
    Win32_mutex(const Win32_mutex &);
    void operator=(const Win32_mutex &);

  public:
    Win32_mutex();
    ~Win32_mutex();

    void lock();
    void unlock();
};

// Only present if on a POSIX+pthread system
class pthread_mutex
{
  private:
    pthread_mutex(const pthread_mutex &);
    void operator=(const pthread_mutex &);

  public:
    pthread_mutex();
    ~pthread_mutex();

    void lock();
    void unlock();
};

// Present on all systems
class null_mutex
{
  private:
    null_mutex(const null_mutex &);
    void operator=(const null_mutex &);

  public:
    null_mutex();
    ~null_mutex();

    static void lock();
    static void unlock();
};

// This will be one of the types above
typedef ... default_mutex;

} // namespace pool
} // namespace details

Each mutex is always either owned or unowned. If owned, then it is owned by a particular thread. To "lock" a mutex means to wait until the mutex is unowned, and then make it owned by the current thread. To "unlock" a mutex means to release ownership from the current thread (note that the current thread must own the mutex to release that ownership!). As a special case, the null_mutex never waits.

May include the system headers <windows.h>, <unistd.h>, and/or <pthread.h>.
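A short usage sketch may help make the lock/unlock contract concrete. This is an illustration rather than part of the Boost documentation, and it assumes the synopsis above is reachable as <boost/pool/detail/mutex.hpp> and lives inside the boost namespace:

#include <boost/pool/detail/mutex.hpp>

// Minimal RAII guard so unlock() runs on every path, respecting the rule
// that only the owning thread may release the mutex.
class scoped_lock
{
  public:
    explicit scoped_lock(boost::details::pool::default_mutex & m) : m_(m) { m_.lock(); }
    ~scoped_lock() { m_.unlock(); }

  private:
    scoped_lock(const scoped_lock &);      // non-copyable
    void operator=(const scoped_lock &);
    boost::details::pool::default_mutex & m_;
};

boost::details::pool::default_mutex g_counter_mutex;
long g_counter = 0;

void increment()
{
    scoped_lock guard(g_counter_mutex);   // waits until the mutex is unowned
    ++g_counter;                          // owned by this thread until guard is destroyed
}

If BOOST_NO_MT is defined, default_mutex is the null_mutex and the lock()/unlock() calls become no-ops.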
https://www.boost.org/doc/libs/1_34_0/libs/pool/doc/implementation/mutex.html
CC-MAIN-2018-17
en
refinedweb
As part of my series on starting a business, this post will cover some of the basic legal considerations you’ll want on your radar when you start a business.

Forms of Ownership

Likely at the same time you are exploring names for your business, you may be thinking about the structure your business will take. This is an extremely important decision, and it’s wise to consult with an accountant and attorney so they can help you select the best form of ownership for your business. Here is a summary of the options you have (as presented by the U.S. Small Business Administration, visit the site for a breakdown of advantages and disadvantages of each option).

- Sole Proprietorship: Most small businesses start out as sole proprietorships. Sole proprietors own all the assets of the business and the profits generated by it. They also assume complete responsibility for any of its liabilities or debts. In a sole proprietorship, you are one and the same with the business.
- Partnership: A partnership requires two or more people who share ownership of a business. Like proprietorships, the law does not distinguish between the business and owners. The partners should have a legal agreement that outlines how decisions will be made, profits will be shared, disputes will be resolved, how future partners will be admitted to the partnership, how partners can be bought out, and what steps will be taken to dissolve the partnership when needed.
- Corporation: A corporation is owned by its shareholders, who elect a board of directors to oversee the major policies and decisions. Corporations can also elect to be an “S Corp,” which enables the shareholder to treat the earnings and profits as distributions and have them pass through directly to their personal tax return.
- Limited Liability Company: An LLC is a mix of structures, combining the limited liability features of a corporation and the tax efficiencies and operational flexibility of a partnership. LLCs must not have more than two of the four characteristics that define corporations: limited liability to the extent of assets, continuity of life, centralization of management, and free transferability of ownership interests.

Your business structure will determine how your business is organized, how you are taxed and how the business is managed. While your business structure can be changed in the future, it’s best to consider all of the options before choosing one.

Licenses and Permits

In most cases, you will need a license issued by your city and/or county when you start your business. Some towns also require a special zoning permit if you will be conducting business out of your home. A call to your town clerk can help you determine what the requirements are and what the fee for registering will be. As with determining your business structure, you may benefit from consulting an attorney as you navigate the list of required licenses and registrations.

Taxes

Your form of business will determine how you file your income tax returns, and you may be required to file estimated tax returns and pay estimated taxes quarterly. This is where the assistance of an accountant is invaluable. Here are the four general types of business taxes:

- Income Tax: All businesses except partnerships must file an annual income tax return (partnerships file an information return). The form you use depends on how your business is organized. The federal income tax is a pay-as-you-go tax. You must pay the tax as you earn or receive income during the year.
- Self-Employment Tax: Self-employment tax is a social security and Medicare tax primarily for individuals who work for themselves. Your tax payments contribute to your coverage under the social security system.
- Employment Tax: If you have employees, you as the employer have certain employment tax responsibilities that you must pay and forms you must file, including: social security and Medicare taxes, federal income tax withholding, and federal unemployment tax.
- Excise Tax: Although it doesn’t apply to many small businesses, you may have to pay an excise tax if you operate a certain type of business or sell certain products. Specific excise taxes include environmental taxes, communications and air transportation taxes, and fuel taxes.

Lastly, as covered in a previous post, don’t forget that the name of your business has legal implications as well. Since my experience in business is U.S.-based, this legal overview applies to U.S. businesses. If you have resources for the legalities of starting a business in another country, please add them to the comments.

This post is a guide of some legal considerations related to starting a business and should not replace advice from an attorney, accountant or other professional.

Additional resources:
- Business.gov, U.S. Government Business Website
- Forms of Business Ownership, About.com Canada
- Small Business and Self-Employed Tax Center, Internal Revenue Service
https://www.sitepoint.com/legalities-of-starting-a-business/
CC-MAIN-2018-17
en
refinedweb
IDataContractSurrogate.GetDeserializedObject Method

Visual Studio 2008

During deserialization, returns an object that is a substitute for the specified object.

Namespace: System.Runtime.Serialization
Assembly: System.Runtime.Serialization (in System.Runtime.Serialization.dll)

Parameters

- obj - Type: System.Object. The deserialized object to be substituted.
- targetType - Type: System.Type. The Type that the substituted object should be assigned to.

Return Value

Type: System.Object. The substituted deserialized object. This object must be of a type that is serializable by the DataContractSerializer. For example, it must be marked with the DataContractAttribute attribute or other mechanisms that the serializer recognizes.

The following example shows an implementation of the GetDeserializedObject method.

public object GetDeserializedObject(object obj, Type targetType)
{
    // This method is called on deserialization.
    // If PersonSurrogated is being deserialized...
    if (obj is PersonSurrogated)
    {
        //... use the XmlSerializer to do the actual deserialization.
        PersonSurrogated ps = (PersonSurrogated)obj;
        XmlSerializer xs = new XmlSerializer(typeof(Person));
        return (Person)xs.Deserialize(new StringReader(ps.xmlData));
    }
    return obj;
}
https://msdn.microsoft.com/en-us/library/system.runtime.serialization.idatacontractsurrogate.getdeserializedobject(v=vs.90).aspx
CC-MAIN-2018-17
en
refinedweb
Urho3D::Thread Class Reference (abstract)

Operating system thread.

#include <Thread.h>

Detailed Description

Operating system thread.

The documentation for this class was generated from the following files:
- /home/travis/build/urho3d/Urho3D/Source/Urho3D/Core/Thread.h
- /home/travis/build/urho3d/Urho3D/Source/Urho3D/Core/Thread.cpp
https://urho3d.github.io/documentation/1.4/class_urho3_d_1_1_thread.html
CC-MAIN-2018-17
en
refinedweb
import java.net.DatagramPacket;
import java.net.InetAddress;

/***
 * A class derived from TFTPRequestPacket defining a TFTP write request
 * packet type.
 * <p>
 * Details regarding the TFTP protocol and the format of TFTP packets can
 * be found in RFC 783. But the point of these classes is to keep you
 * from having to worry about the internals. Additionally, only very
 * few people should have to care about any of the TFTPPacket classes
 * or derived classes. Almost all users should only be concerned with the
 * {@link org.apache.commons.net.t} class
 * {@link org.apache.commons.net.t receiveFile()}
 * and
 * {@link org.apache.commons.net.t sendFile()}
 * methods.
 *
 *
 * @see TFTPPacket
 * @see TFTPRequestPacket
 * @see TFTPPacketException
 * @see TFTP
 ***/

public final class TFTPWriteRequestPacket extends TFTPRequestPacket
{

    /***
     * Creates a write request packet to be sent to a host at a
     * given port with a filename and transfer mode request.
     *
     * @param destination The host to which the packet is going to be sent.
     * @param port The port to which the packet is going to be sent.
     * @param filename The requested filename.
     * @param mode The requested transfer mode. This should be one of the TFTP
     * class MODE constants (e.g., T).
     ***/
    public TFTPWriteRequestPacket(InetAddress destination, int port,
                                  String filename, int mode)
    {
        super(destination, port, TFTPPacket.WRITE_REQUEST, filename, mode);
    }

    /***
     * Creates a write request packet of based on a received
     * datagram and assumes the datagram has already been identified as a
     * write request. Assumes the datagram is at least length 4, else an
     * ArrayIndexOutOfBoundsException may be thrown.
     *
     * @param datagram The datagram containing the received request.
     * @throws TFTPPacketException If the datagram isn't a valid TFTP
     * request packet.
     ***/
    TFTPWriteRequestPacket(DatagramPacket datagram) throws TFTPPacketException
    {
        super(TFTPPacket.WRITE_REQUEST, datagram);
    }

    /**
     * For debugging
     * @since 3.6
     */
    @Override
    public String toString() {
        return super.toString() + " WRQ " + getFilename() + " " + T(getMode());
    }
}
http://commons.apache.org/proper/commons-net/xref/org/apache/commons/net/tftp/TFTPWriteRequestPacket.html
CC-MAIN-2018-17
en
refinedweb
Have you ever wanted to open a console in the middle of an application that doesn't usually support one? This article explains exactly how to master the console.

The console is a bit of a mystery to many .NET programmers. You can create a console application very easily and there is a Console class which allows you to interact with the user at a very basic level. The problems start when you are in the middle of some non-console-based project and suddenly it would be useful to pop up a console window. You might think, given that there is a Console class, that this should be easy. All you should have to do is create an instance of Console and start using it. Of course when you look at the situation a little more carefully this isn't possible because Console is a static class and hence there is no way of creating an instance. At this point you might be tempted to give up and program your own class using a form of some kind, but it is possible to use a console in any application including a .NET forms or WPF application.

The first thing to realise is that the console window is provided by the operating system not the .NET framework. It’s the command prompt that you use to do jobs that can't easily be achieved via the usual GUI. There are OS native API calls which create and destroy the console and these are used in the .NET Console application template to provide a console for the Console class to wrap. As there can be only one console attached to a process at any one time the Console class works in a very simple way. When you reference any Console method it simply makes use of the currently attached console. So in principle if you manually create a console you should be able to use the Console static class to work with it and so avoid having to create your own wrapper.

The console API is very simple. There is an API call that will create and attach a new console:

[DllImport("kernel32", SetLastError = true)]
static extern bool AllocConsole();

If it is successful it returns true and generally the only reason for it to fail is that the process already has a console attached. If you want to discover the error code that the call generated use the .NET method:

Marshal.GetLastWin32Error()

If a console already exists and is attached to another process you can attach it to the current process using:

[DllImport("kernel32.dll", SetLastError = true)]
static extern bool AttachConsole(uint dwProcessId);

In most cases the console that you want to attach belongs to the parent process and to do this you can use the constant:

const uint ATTACH_PARENT_PROCESS = 0x0ffffffff;

There is also a FreeConsole API call that will dispose of any currently attached console:

[DllImport("kernel32.dll", SetLastError = true, ExactSpelling = true)]
static extern bool FreeConsole();

There is also a long list of other console API calls but in the main you don't need these because the Console class provides you with managed alternatives. For example, there is a Set and Get ConsoleTitle API call, but the Console property Title does the same job by calling the API for you.

Putting theory into practice is very easy. First you need to make sure you have all the necessary declarations:

using System.Runtime.InteropServices;

[DllImport("kernel32", SetLastError = true)]
static extern bool AllocConsole();

[DllImport("kernel32", SetLastError = true)]
static extern bool AttachConsole(uint dwProcessId);

const uint ATTACH_PARENT_PROCESS = 0x0ffffffff;

A single method is all we need to either create or attach an existing console:

public void MakeConsole()
{
    if (!AttachConsole(ATTACH_PARENT_PROCESS))
    {
        AllocConsole();
    }
}

You can add some error handling to this to make it more robust but if there is an error the only consequence is that the Console class subsequently doesn’t work. To test it out try:

MakeConsole();
Console.Beep();
Console.Title = "My Console";
Console.WriteLine("Hello console, World!");
Console.Write("Press a key to continue...");
Console.Read();

This makes the console beep, changes its title and writes some suitable messages.

There is one small subtle "gotcha" that you need to keep in mind. If you generate the console yourself then the user cannot redirect input/output from/to a file. That is, if your application is MyApp.exe then MyApp > MyTextFile.txt works if MyApp is a console application but doesn't work if you create the console. What you have to do in this case is detect the arguments "> MyTextFile.txt" when your application creates the console. For example, to redirect the output file you would use:

public void MakeConsole()
{
    if (!AttachConsole(ATTACH_PARENT_PROCESS))
    {
        AllocConsole();
    }
    string[] cmd = Environment.GetCommandLineArgs();
    if (cmd[1] == ">")
    {
        Console.Write(cmd[2]);
        FileStream fs1 = new FileStream(cmd[2], FileMode.Create);
        StreamWriter sw1 = new StreamWriter(fs1);
        Console.SetOut(sw1);
    }
}

This uses the Environment object to retrieve the command line arguments and then tests for an output redirection in cmd[1]. If it finds a ">" it then creates a FileStream and then a StreamWriter which it sets the standard output to. From this point on everything sent to the Console is stored in the file. The only complication is that you have to remember to close the file when the application is finished with the console. For example:

Console.Title = "My Console";
Console.Out.Close();

Of course a better design would be to put the Out.Close in a finalise method. You can write similar code to redirect the standard input stream and even to form pipes and interpret other strange command line syntax.

[ ... ]
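Since AllocConsole, AttachConsole and FreeConsole are ordinary Win32 functions, the same attach-or-create logic can also be sketched in native C++ without the P/Invoke layer. The following is an illustration rather than part of the original article; it assumes a Windows build and simply reconnects the C runtime's stdout to whichever console ends up attached.

#include <windows.h>
#include <cstdio>

// Attach to the parent's console if one exists, otherwise create a new one,
// then point stdout at it so printf-style output appears in the console.
bool MakeConsole()
{
    if (!AttachConsole(ATTACH_PARENT_PROCESS))
    {
        if (!AllocConsole())
            return false;              // GetLastError() holds the reason
    }
    std::freopen("CONOUT$", "w", stdout);
    std::printf("Hello console, World!\n");
    return true;
}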
http://i-programmer.info/programming/c/1039-using-the-console.html
CC-MAIN-2018-17
en
refinedweb
Urho3D::Material Class Reference

Describes how to render 3D geometries.

#include <Material.h>

Detailed Description

Describes how to render 3D geometries.

Member Function Documentation

Load resource from stream. May be called from a worker thread. Return true if successful. Reimplemented from Urho3D::Resource.

The documentation for this class was generated from the following files:
- /home/travis/build/urho3d/Urho3D/Source/Urho3D/Graphics/Material.h
- /home/travis/build/urho3d/Urho3D/Source/Urho3D/Graphics/Material.cpp
https://urho3d.github.io/documentation/1.5/class_urho3_d_1_1_material.html
CC-MAIN-2018-17
en
refinedweb
KinoSearch::Plan::FullTextType - Full-text search field type.

The KinoSearch code base has been assimilated by the Apache Lucy project. The "KinoSearch" namespace has been deprecated, but development continues under our new name at our new home:

    my $polyanalyzer = KinoSearch::Analysis::PolyAnalyzer->new(
        language => 'en',
    );
    my $type = KinoSearch::Plan::FullTextType->new(
        analyzer => $polyanalyzer,
    );
    my $schema = KinoSearch::Plan::Schema->new;
    $schema->spec_field( name => 'title',   type => $type );
    $schema->spec_field( name => 'content', type => $type );

KinoSearch::Plan::FullTextType is an implementation of KinoSearch::Plan::FieldType tuned for "full text search". Full text fields are associated with an Analyzer, which is used to tokenize and normalize the text so that it can be searched for individual words. For an exact-match, single value field type using character data, see StringType.

    my $type = KinoSearch::Plan::FullTextType->new(
        analyzer      => $analyzer,    # required
        boost         => 2.0,          # default: 1.0
        indexed       => 1,            # default: true
        stored        => 1,            # default: true
        sortable      => 1,            # default: false
        highlightable => 1,            # default: false
    );

Indicate whether to store data required by KinoSearch::Highlight::Highlighter for excerpt selection and search term highlighting. Accessor for "highlightable" property.

KinoSearch::Plan::FullTextType isa KinoSearch::Plan::TextType isa KinoSearch::Plan::FieldType isa KinoSearch::Object::Obj.

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~creamyg/KinoSearch-0.315/lib/KinoSearch/Plan/FullTextType.pod
CC-MAIN-2016-07
en
refinedweb
So what I started thinking about was how I would want to go about designing portable code that takes ObjC into account but is not limited to platforms that support ObjC. Obviously I need to write much of the code in a language other than ObjC if this is the goal. One simple way is to write all the busy code in C and use pre-compiler directives to build with either ObjC or C where the code touches the required interfaces (GUI and such.) I'm not really keen on that idea because I would like to start with an object-oriented design and build an implementation with sensible classes. That leads me to Objective-C++ and I think it is not only a good answer but a powerful one. The reason I say that is I should be able to apply design patterns as needed and apply good object-oriented design to the whole project without worrying about the portability of my classes. My ultimate goal is to write most of the code in C++ and provide C++ interfaces that can be implemented where a platform implementation is needed and the proper instantiation would be obtained from class factories. This is a fairly common object-oriented design approach. Thinking about a GUI, specifically, the idea is to be able to implement GUI classes to an interface so that the application is GUI implementation neutral. It wouldn't care if the implementation were GTK, Qt, Cocoa, etc. And ultimately it wouldn't care if the implementation is in Objective-C. The problem then is implementing those C++ interfaces with Objective-C classes. All Objective-C++ is, essentially, is the Objective-C language built with a C++ compatible compiler (g++, etc). Nothing was done to create compatibility between the two class types. C++ classes can contain ObjC elements and ObjC classes can contain C++ elements. What you can't do is extend or implement a C++ class with an Objective-C class and vice versa. You can't cast between the two class types obviously. So I decided the first solution is essentially to use bridges which is what I'm going to demonstrate here. The idea is that I have a C++ interface that is implemented by an Objective-C class. Obviously the ObjC class can't inherit from the interface so it inherits from NSObject or a child of it. The thing that binds the ObjC implementation to the C++ interface is a C++ bridge class that implements the interface. The bridge contains the ObjC implementation which is allocated when the bridge is constructed and destroyed when the bridge is destructed. All calls into the bridge are directed into the ObjC class. This is very simple and quite effective. You can pass the C++ bridge into methods that accept the interface and they are none the wiser. So I have posted a lot of information just to show something that is really incredibly simple. My main goal here is to get feedback on how this could be improved, if there are other methods that might be better, and what pitfalls could be looming. This example does not consider the handling of exceptions in any way. That is something I have not dug into yet. It does not demonstrate passing C++ references though that should work. I would like to experiment with that more. One of the things I am thinking about is how the bridge could be built with macros so that the developer can simply build the Objective-C interface and issue a macro for the class and each method implemented so that there aren't a bunch of bridge headers hanging about. If anyone has some ideas on that it would be great to see. 
I am pretty sure I could slap some macros together pretty easily but, again, the exceptions are something to think about. Anyway, here is an example of what I have done so far. Like I said, this is not complicated and it works very well.

Here is a UML class diagram for the three important players here:

So the first thing I need to code up is that C++ interface and here it is:

//
//  CppInterface.h
//  TestCppInterface
//

#ifndef TestCppInterface_CppInterface_h
#define TestCppInterface_CppInterface_h

class CppInterface
{
public:
    virtual ~CppInterface() {};

    virtual void methodA() = 0;
    virtual void methodB() = 0;

    virtual int value() const = 0;
    virtual void setValue(int value) = 0;
};

#endif

Alright. So this is a common C++ interface. Nothing special here. Next would be implementing the interface with an Objective-C class. Obviously, as already stated, I can't inherit from this interface so I will implement the interface but inherit from NSObject. (A little bit of expanded thought... this could actually be an Objective-C interface itself and the bridge could actually be used for multiple ObjC implementations which could be handled by the class factory.)

Here is the declaration:

//
//  CppInterfaceImpl.h
//  CppInterface
//

#import <Foundation/Foundation.h>

@interface CppInterfaceImpl : NSObject
{
    int value;
}

@property int value;

-(void) methodA;
-(void) methodB;

@end

Here is the definition:

//
//  CppInterfaceImpl.m
//  CppInterface
//

#import <Foundation/Foundation.h>
#import "CppInterfaceImpl.h"

@implementation CppInterfaceImpl

@synthesize value;

-(void) methodA
{
    NSLog(@"Called %s", __FUNCTION__);
}

-(void) methodB
{
    NSLog(@"Called %s", __FUNCTION__);
}

@end

Notice I did not use the .mm extension for this source. It is not necessary for this class but it COULD be for a different one. It might be a good decision to just say all ObjC sources will use the .mm extension in this kind of project so there is no worry about a code or interface change forcing a file extension change. So I used the features of the language for the getter and setter methods. Otherwise it looks pretty much how you would expect an implementation of the interface to look.

Now for the glue in the middle. The bridge is very simple:

//
//  CppInterfaceOCBridge.h
//  TestCppInterface
//

#ifndef TestCppInterface_CppInterfaceOCBridge_h
#define TestCppInterface_CppInterfaceOCBridge_h

#include "CppInterface.h"
#import "CppInterfaceImpl.h"

class CppInterfaceOCBridge : public CppInterface
{
public:
    CppInterfaceOCBridge();
    virtual ~CppInterfaceOCBridge();

    virtual void methodA();
    virtual void methodB();

    virtual int value() const;
    virtual void setValue(int value);

private:
    CppInterfaceImpl* m_OCObj;
};

inline CppInterfaceOCBridge::CppInterfaceOCBridge()
{
    m_OCObj = [[CppInterfaceImpl alloc] init];
}

inline CppInterfaceOCBridge::~CppInterfaceOCBridge()
{
    [m_OCObj release];
}

inline void CppInterfaceOCBridge::methodA()
{
    [m_OCObj methodA];
}

inline void CppInterfaceOCBridge::methodB()
{
    [m_OCObj methodB];
}

inline int CppInterfaceOCBridge::value() const
{
    return [m_OCObj value];
}

inline void CppInterfaceOCBridge::setValue(int value)
{
    [m_OCObj setValue: value];
}

#endif

With that we are ready to instantiate an Objective-C class as a CppInterface. I created an Objective-C main that does that. While I have been mentioning class factories I did not actually use one in the example.
The main is just going to instantiate the bridge and exercise it:

//
//  main.mm
//  TestCppInterface
//

#import <Foundation/Foundation.h>

#include "CppMain.h"
#include "CppInterfaceOCBridge.h"

int main (int argc, const char * argv[])
{
    @autoreleasepool
    {
        CppInterface *a = new CppInterfaceOCBridge;

        NSLog(@"Calling C++ methods from within Objective-C!");
        a->methodA();
        a->methodB();
        a->setValue(5);
        NSLog(@"Value is %i", a->value());

        CppMain cppMain(*a);
        cppMain.run();

        delete a;
    }
    return 0;
}

I wanted to go ahead and try passing this class into a method of a plain C++ class that has no ObjC in it (and uses the .cpp extension) so I created the CppMain class and you can see it exercised there. Here is that class:

//
//  CppMain.h
//  TestCppInterface
//

#ifndef TestCppInterface_CppMain_h
#define TestCppInterface_CppMain_h

#include "CppInterface.h"

class CppMain
{
public:
    CppMain(CppInterface& interface);
    ~CppMain();

    void run();

private:
    CppInterface& m_Interface;
};

#endif

//
//  CppMain.cpp
//  TestCppInterface
//

#include <iostream>

#include "CppMain.h"

using namespace std;

CppMain::CppMain(CppInterface& interface) : m_Interface(interface)
{
}

CppMain::~CppMain()
{
}

void CppMain::run()
{
    cout << "Running from CppMain!" << endl;

    m_Interface.methodA();
    m_Interface.methodB();
    m_Interface.setValue(28);
    cout << "Value is " << m_Interface.value() << endl;
}

You might notice a terrible practice in this code related to pointers and reference storing. It was just an experiment. It's going to be okay.

So here we can see what we ultimately really want in work. Since most of the code will be C++ and C++ class references will be getting tossed around it is the ability to call the Objective-C class from an unaware C++ class that fulfills the goals of the experiment.

Here is the output when I run this application:

2012-01-16 17:13:20.170 TestCppInterface[8480:707] Calling C++ methods from within Objective-C!
2012-01-16 17:13:20.173 TestCppInterface[8480:707] Called -[CppInterfaceImpl methodA]
2012-01-16 17:13:20.174 TestCppInterface[8480:707] Called -[CppInterfaceImpl methodB]
2012-01-16 17:13:20.175 TestCppInterface[8480:707] Value is 5
Running from CppMain!
2012-01-16 17:13:20.175 TestCppInterface[8480:707] Called -[CppInterfaceImpl methodA]
2012-01-16 17:13:20.176 TestCppInterface[8480:707] Called -[CppInterfaceImpl methodB]
Value is 28
Program ended with exit code: 0

It is very easy to find the C++ output because, unlike the ObjC output, it does not get timestamped. So as I said, I'm looking to expand on this and see how it might be streamlined for use in a large project. If anyone is aware of another way to achieve the stated goals I would love to see what you have come up with. I think what I have done here has to be one of the first ideas anyone going down this road would consider. I think I will be messing with some ideas for macros in the next few days and I will show what I have come up with at that point. I also am really interested in delving into error handling and supporting exceptions. Thanks for looking.
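The class-factory idea mentioned in the post is easy to picture with a small sketch. The names below (createCppInterface, PlatformFactory) are hypothetical illustrations rather than part of the original code: portable code sees only the CppInterface header and a factory declaration, while the Objective-C bridge stays hidden behind an .mm translation unit on Apple platforms.

// PlatformFactory.h (portable; hypothetical name)
#include "CppInterface.h"

CppInterface* createCppInterface();   // each platform supplies a definition

// PlatformFactory.mm (Apple platforms, compiled as Objective-C++; hypothetical)
#include "PlatformFactory.h"
#include "CppInterfaceOCBridge.h"

CppInterface* createCppInterface()
{
    return new CppInterfaceOCBridge;   // callers only ever see CppInterface*
}

A Qt or GTK build would provide its own PlatformFactory.cpp returning a different implementation, and classes like CppMain would be written against the factory and the interface alone, for example:

CppInterface* iface = createCppInterface();
CppMain app(*iface);
app.run();
delete iface;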
http://www.dreamincode.net/forums/topic/263167-implementing-c-interfaces-with-objective-c-classes/page__pid__1532348__st__0
CC-MAIN-2016-07
en
refinedweb
. 301 + Dirkiller on February 16th, 2012 at 4:39 am said: … AND I NEED ON PLEASE!!!! 302 + Dirkiller on February 16th, 2012 at 4:40 am said: … AND I NEED TWISTED METAL – HEAD ON PLEASE!!!! 303 + Garylisk on February 16th, 2012 at 8:40 am said: Crisis Core + Kindgom Hearts plz. 304 + Chevy_Is_Life on February 16th, 2012 at 12:23 pm said: Grand Theft Auto Vice City Stories & China Town Wars! D: 305 + unliving-12 on February 16th, 2012 at 8:13 pm said: i want Patapon 3 to be Vita ready!!! i was looking forward to playing it on the vita but it’s not ready yet :( will games become vita ready in big waves? what i mean is, a bunch becoming vita ready at a time? or what 306 + KillingSpree0 on February 16th, 2012 at 11:07 pm said: Please put kingdom hearts birth by sleep on psn quick! 307 + togue on February 17th, 2012 at 2:24 pm said: I was able to move my digital Gran Turismo PSP game to the PSVITA and it isn’t on the list. Just an FYI for others who are curious 308 + Kamandol on February 17th, 2012 at 7:03 pm said: Add WTF: Work Time Fun please. That game was insane. 309 + ProFrothegreat on February 18th, 2012 at 9:11 am said: Probably been said, but I thought it was funny that most of the most wanted games (LBP, Modnation, MGS:PW) have vita versions out or coming soon. Thinking that these games will hit after those versions have had a bit of time on the market. 310 + Hope_Estheim on February 18th, 2012 at 1:18 pm said: I don’t know if it has been mentioned yet, but Patapon 1 & 3 are not on the list. I guess I can understand on having Patapon 1 on there, but Patapon 3 definitely should be! Just pointing that out! 311 + Adactus on February 18th, 2012 at 9:47 pm said: KH: Birth by Sleep as well as both of the Star Ocean games ASAP. Please and Thank You. 312 + unliving-12 on February 19th, 2012 at 11:34 am said: yea i agree, patapon 3 is the best one imo by a LONG shot 313 + JRMONEYSTAKZ on February 19th, 2012 at 3:22 pm said: I’ve redownloaded a bunch of minis I had on the list but when I use content manager nothing showed up. None of them are installed on ps3 just downloaded. This sucks any help? 314 + LegendarySzoke on February 19th, 2012 at 5:13 pm said: Hoping like heck that NFS Carbon, Tron Evolution, Marvel Ultimate Alliance and Star Wars Battlefront Elite Squadron join this list soon. 315 + TROOPER265 on February 20th, 2012 at 6:10 am said: Okay, you guys are going to love this. I downloaded Star Wars Battlefront II to my Playstation 3 and then with the cable installed it to my PSP AND IT WORKS! It’s not one of the games that’s on list. then do what ParkingtigersX posted here: ”.” Also on this menu is a way to adjust the right joystick for PSP GAMES. I wonder how many other games that aren’t listed can be downloaded per the PS3 that aren’t available on the Vita PSN STORE that work? 316 + mrkojoko on February 20th, 2012 at 6:25 pm said: def jam fight for ney york the takeover please dont forget to add it :) 317 + Fylian on February 21st, 2012 at 7:02 am said: I can’t seem to get Age of Zombies to copy to my Vita. I tried Content Manager through my PS3, PSN store on my Vita and even my downloads list on the Store. Please remedy this bc i am dying to play using the dual analog. Thanks 318 + ShadowNextGen on February 22nd, 2012 at 7:08 pm said: So…Sony has already screwed over their fan base by not giving us access to the UMD transfer program. Now there are games that I bought digitally over PSN that I can’t put on the Vita? Are you serious? Is this an early April Fool’s foke? 
319 + LIZZ1ER0SE on February 22nd, 2012 at 7:24 pm said: Dear sony, how about releasing five emulated PSP games every day instead of 275 games in god knows how long? Cheers. 320 + foxdarterx on February 22nd, 2012 at 8:00 pm said: crisis core? please 321 + jncasanova on February 22nd, 2012 at 8:01 pm said: I see some titles here and in the PSVita section of the PS3 store that can’t be copied to my PSVita! One example is Age of Zombies, it’s not even available in the PSVita store. Could anyone tell me how you did to copy these games?? 322 + unliving-12 on February 22nd, 2012 at 10:41 pm said: i really wanna play my psp games on my vita 323 + ItaChu on February 23rd, 2012 at 3:35 am said: @jncasanova same here man I cant download Age of Zombies minis …. and the vita cant even see ANY of the minis on my PS3 to copy them to the Vita …. looks like they need to update the Vita 324 + Dratsabius on February 23rd, 2012 at 4:53 am said: Sadly (for me anyway) I recently bought the digital version of Final Fantasy IV: The Complete Collection in anticipation of playing it on Vita. It is on my PS3 XMB but it will not copy to Vita. I have also tried to redownload it on the Vita itself. No dice. What a pity that the supported games (according to the list provided) actually don’t work at all. :( 325 + BigPoppaChunk on February 23rd, 2012 at 6:22 am said: there are a bunch of minis on the list that dont work age of zombies, i must run, lets golf, vempire. and alot not on the list that do work little big planet madden 2010 wwe 2009 unbound saga capcom classics gijoe peggle bejewelled 2 every day shooter super stardust portable king of pool puzzle quest savage moon final fantasy crystal defenders these all work by downloading them to your ps3 and keeping them in bubble form then plugging in your ps vita into ur ps3 and transferring them via content manager. PLEASE add your games that work that are not on the list and any on the list that dont work aswell!!! 326 + BigPoppaChunk on February 23rd, 2012 at 8:39 am said: killzone liberation works 327 + ninjaluke84 on February 23rd, 2012 at 9:09 am said: I know that they will update this list with games currently on the psn. However I sure would like Kingdom Hearts Birth By Sleep to be added to psn and made vita compatible. 328 + suchti2404 on February 23rd, 2012 at 11:03 am said: In germany no Assasains Creed Bloodlines and no Monster hunter freedom unite camey to the release how it is in other lands? 329 + jncasanova on February 23rd, 2012 at 1:42 pm said: I was able to copy and play: Little Big Planet PSP (even with add ons) Auditorium Assassins Creed Bloodlines Star Wars Clone Wars Republic Heroes LocoRoco Midnight Carnival I think maybe they didn’t write Little Big Planet PSP and Super Stardust PSP because there are newer versions comming. Hope my Tetris and Angry Birds minis work on my Vita soon but I guess there will have to be a Firmware Update for that. 330 + Kmedina1 on February 23rd, 2012 at 3:00 pm said: what about dragonball z tenkaichi tag team and shin budokai another road and naruto games come on their so fun i need those games 331 + ggerry72 on February 23rd, 2012 at 4:22 pm said: when will patapon 3 be available? 332 + unliving-12 on February 23rd, 2012 at 5:21 pm said: yea why did they make patapon 2 compatible but not 3 -_- 3 is so much better, awesome game 333 + solo013 on February 23rd, 2012 at 6:58 pm said: I’m not sure why, but a lot of the minis on this list are not showing up in the store on my vita itself. 
However, I do see them in the ps3s store under the vita section. I also have LittleBigPlanet PSP showing up in my PS3 to Vita transfer management for anyone wanting to try that one out on the Vita. 334 + X138999 on February 23rd, 2012 at 7:07 pm said: I noticed Age of Zombies is on the list but I can’t get it on Vita’s Store. Is it still possible to get it from the PS3? And why are certain titles highlighted? 335 + solo013 on February 23rd, 2012 at 8:12 pm said: @334 I just tried Age of Zombies and about 20 others showing up in the PS3 store under the Vita section, and none of them will show up in the content management app. Only the Minis in the store on the Vita itself seem to be available. I’m glad I noticed this too, because I was going to buy several I saw on the PS3’s Vita section to have a few extra things to play on the Vita. 336 + Teflon02 on February 23rd, 2012 at 9:20 pm said: Anyone know if the PSP Sega Genesis collection works? Thanks in advance I’m getting My Vita on the 29th so i cant check myself 337 + Dratsabius on February 24th, 2012 at 1:33 am said: For the record, I tried to copy Final Fantasy IV to the Vita from my PS3 again last night and this time it worked!!! It is now on Vita and working flawlessly! I also tried to copy Valkyria Chronicles 2 across and it failed on the first attempt. I tried a second time and it worked fine. Looks like the copy mechanism is a little wonky, but it does work with some persistance. Anyway – I (mostly) take back my snarky comment of yesterday. Hugs n kisses Sony :) 338 + killer0881 on February 24th, 2012 at 8:31 am said: The best games aren’t available for vita yet, like Crisis Core, or any of the resistance or killzone titles for psp. The 2 GOW games are available but i already have the collection for PS3, so i have no interest. 339 + NoleHater on February 24th, 2012 at 8:33 am said: I know a couple more games that are not on this list that will play on the vita. Grand Theft Auto Vice City Stories, Kill Zone Liberation, Pinball Hereos, Metal Gear Solid Peace Walker 340 + NoleHater on February 24th, 2012 at 8:34 am said: I know a couple more games that are not on this list that will play on the vita. Grand Theft Auto Vice City Stories, Kill Zone Liberation, Metal Gear Solid Peace Walker,and Pinball Heroes 341 + killer0881 on February 24th, 2012 at 8:41 am said: I didn’t know killzone liberation would work, i should be able to “redownload at no extra cost” then. 342 + dq5209 on February 24th, 2012 at 3:25 pm said: gta liberty city stories works and all you need is a ps3 343 + RaziiJinx on February 24th, 2012 at 4:59 pm said: I Must Run! mini does not show up on PSN nor in the Content Manager to download to Vita. And yes, I have re-downloaded it on my PS3 as well. 344 + BD-S-I-C on February 24th, 2012 at 7:54 pm said: They really really need to make all psp titles compatible. I have only ever purchased 2 psp titles from PSN: Mortal Kombat Unchained Manhunt 2 Neither of these titles are compatible…..come on sony… 345 + malkuth3 on February 25th, 2012 at 3:24 am said: This sucks ass!! Should’ve come here before I spent hours in front of my PC trying to transfer my PSP games to my Vita! This is so stupid! Something like this should be very easy especially if you have Media Go and a valid PSN account!! I also discovered that I can’t transfer most of my music to my Vita as well (no, I don’t own any pirated downloads!!)! even though they are all either CD copies or iTunes! Why? 
I have the Content Asst activated but my Media Go is not recognizing my Vita?! 346 + juggalotus53 on February 25th, 2012 at 6:17 am said: please say that Silent Hill: Origins will be one of the psp titles that will be playable on Vita 347 + adray69 on February 25th, 2012 at 6:43 am said: @346 I can confirm that silent hill origin is indeed compatible @345 the reason it won’t let you copy from itunes is because music from itunes has drm added to it. If it’s just a regular mp3 that you’ve copied from a cd than it shouldn’t have any problem copying to you vita. 348 + freewillgnome on February 25th, 2012 at 5:46 pm said: Is there any news as to whether patapon 3 is going to be able to be played on the vita, and if so if it will be within the near future, because they already have patapon 2 and i would really like to have number 3 as well! 349 + sithlord9 on February 26th, 2012 at 8:44 am said: is it possible that this list only applies for US games / PSN accounts? I downloaded Syphon Filter: Logan’s Shadow on the PS3 (vita does not allow me) but cannot see it in the content manager. On the contrary I got Resistance downloaded via the Vita. Also Tron could be copied via PS3. 350 + sithlord9 on February 26th, 2012 at 8:50 am said: To add to my earlier comment and while waiting for an official sony reply a quick question for those that got Killzone working: Are you guys talking about the EU or US version? Maybe we should all state which territory we are from. In my case i got to run the Germany/Europe-versions of – Resistance – Tron (only via PS3) – NOVA – Warhammer 40k – Force Unleashed – Obscure – Worms Battle Islands I could not get to run (not downloadable via vita / not seen in content manager): – Star Wars Battlefront Renegade Squadron – Syphon Filter: Logan’s Shadow
http://blog.us.playstation.com/2012/02/09/how-to-download-psp-titles-to-ps-vita/comment-page-7/
CC-MAIN-2016-07
en
refinedweb
, "Running and Testing EJB/JPA Components" JDeveloper includes step-by-step wizards for creating EJB projects, entities, persistence units, session beans, and message-driven beans. You can build entities from online or offline database definitions and from application server data source connections. There is also seamless integration with JPA and TopLink technology to provide a complete persistence package. JDeveloper supports EJB 3 the need for home and component interfaces and the requirement for bean classes for implementing javax.ejb.EnterpriseBean interfaces. The EJB bean class can be a pure Java class (POJO), and the interface can be a simple business interface. The bean class implements the business interface. Use of Annotations Instead of Deployment Descriptors - Metadata annotation is an alternative to deployment descriptors. Annotations specify bean types, different attributes such as transaction or security settings, O-R mapping and injection of environment or resource references. Deployment descriptor settings override metadata annotations. Dependency Injection - The API for lookup and use of EJB environment and resource references is simplified, and dependency injection is used instead. Metadata annotation is used for dependency injection. Enhanced Lifecycle Methods and Callback Listener Classes - Unlike previous versions of EJB, you do not have to implement all unnecessary callback methods. Now you designate any arbitrary method as a callback method to receive notifications for lifecycle events. A callback listener class is used instead of callback methods defined in the same bean class. Interceptors - An interceptor is a method that intercepts a business method invocation. An interceptor method is defined in a stateless session bean, stateful session bean, or a message-driven bean. An interceptor class is used instead of defining the interceptor method in the bean class. Simple JNDI Lookup of EJB - Lookup of EJB is simplified and clients do not have to create a bean instance by invoking a create() method on EJB and can now directly invoke a method on the EJB. Simplified Beans - Session beans are pure Java classes and do not implement javax.ejb.SessionBean interfaces. The home interface is optional. A session bean has either a remote, local, or both interfaces and these interfaces do not have to extend EJBObject or EJBLocalObject. Metadata Annotations - Metadata annotations are used to specify the bean or interface and run-time properties of session beans. For example, a session bean is marked with @Stateless or @Stateful to specify the bean type. Lifecycle Methods and Callback Listeners - Callback listeners are supported with both stateful and stateless session beans. These callback methods are specified using annotations or a deployment descriptor. Dependency Injection - Dependency injection is used either from stateful or stateless session beans. Developers can use either metadata annotations or deployment descriptors to inject resources, EJB context or environment entries. Interceptors - Interceptor methods or interceptor classes are supported with both stateful and stateless session beans. Message-Driven Beans (MDBs) Simplified Beans - Message-driven beans do not have to implement the javax.ejb.MessageDriven interface; they implement the javax.jms.MessageListener interface. Metadata Annotations - Metadata annotations are used to specify the bean or interface and run-time properties of MDBs. For example, an MDB is marked with @MessageDriven for specifying the bean type. 
Lifecycle Methods and Callback Listeners - Callback listeners are supported with MDBs. These callback methods are specified using either annotations or the deployment descriptor.
Dependency Injection - Dependency injection can be used from an MDB. You use either metadata annotations or deployment descriptors to inject resources, EJB context, or environment entries used by an MDB.
Interceptors - Interceptor methods or interceptor classes can be used with MDBs.
Entities - Java Persistence API (JPA)
Simplified Beans (POJO Persistence) - EJB 3.0 greatly simplifies entity beans and standardizes the POJO persistence model. Entity beans are concrete Java classes and do not require any interfaces. The entity bean classes support polymorphism and inheritance. Entities can have different types of relationships, and container-managed relationships are manually managed by the developer.
Entity Manager API - EJB 3.0 introduces the EntityManager API that is used to create, find, remove, and update entities. The EntityManager API introduces the concept of detachment/merging of entity bean instances, similar to the Value Object pattern. A bean instance may be detached, updated by a client locally, and then sent back to the entity manager to be merged and synchronized with the database.
Metadata Annotations and Queries - Metadata annotations can also be used to define queries for entities with the Java Persistence Query Language (JPQL). JPQL enhances EJB QL by providing additional operations such as bulk updates and deletes, JOIN operations, GROUP BY HAVING, projection, and sub-queries. Also, dynamic queries can be written using EJB QL.
Lifecycle Methods and Callback Listeners - Callback listeners are supported with entity beans. Callback methods are specified using either annotations or a deployment descriptor.
JDeveloper includes a complete set of features to set up the EJB business layer of an enterprise application. You can start by using the step-by-step wizard to create the framework for your EJB web application, setting up the model layer of your enterprise application. You can then use wizards to create entities that correspond to database tables, to create session beans and facades, and to build a persistence unit. Oracle ADF provides components to enable data controls. When you are ready, you can use the JDeveloper integrated server capabilities to test it. JDeveloper includes tools for developing EJB applications, as described in the following sections:
Section 12.3.1.1, "Creating Entities"
Section 12.3.1.2, "Creating Session Beans and Facades"
Section 12.3.1.3, "Deploying EJBs"
Section 12.3.1.4, "Testing EJBs Remotely"
Section 12.3.1.5, "Registering Business Services with Oracle ADF Data Controls"
Use the entity wizards to create entities, or to create entities from tables using online, offline, or application server data source connections. Use the Entity Beans from Tables Wizard to reverse-engineer entities from database tables. In the entity wizards you can select or add a persistence unit; see also Section 12.7.1, "Using Session Facades". When you create a session bean with the wizard, you have the option of generating session facade methods for every entity in the same project. You can choose which core transactional methods to generate, get() and set() accessors, and finder methods on the entities. If you create new entities or new methods on entities, you can update your existing session facade by right-clicking it in the Navigator and choosing Edit Session Facade.
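To make the EJB 3.0 feature list above concrete, here is a minimal sketch of an annotated stateless session bean with dependency injection, a lifecycle callback, and an interceptor method. It is illustrative only: the HelloService interface, the bean name, and the jdbc/AppDS resource name are invented for the example and are not taken from the JDeveloper documentation.

// HelloService.java - a plain business interface: no home interface,
// no EJBObject/EJBLocalObject.
public interface HelloService {
    String greet(String name);
}

// HelloServiceBean.java
import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;
import javax.sql.DataSource;

@Stateless // bean type declared with an annotation instead of ejb-jar.xml
public class HelloServiceBean implements HelloService {

    @Resource(name = "jdbc/AppDS") // dependency injection of a resource reference (name is assumed)
    private DataSource dataSource;

    @PostConstruct // an arbitrary method designated as a lifecycle callback
    void init() {
        // runs after the container creates the bean and injects dependencies
    }

    @AroundInvoke // interceptor method defined directly on the bean class
    Object audit(InvocationContext ctx) throws Exception {
        System.out.println("Calling " + ctx.getMethod().getName());
        return ctx.proceed();
    }

    public String greet(String name) {
        return "Hello, " + name;
    }
}

If a deployment descriptor were also supplied, its settings would override these annotations, as noted above.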
JDeveloper provides Oracle WebLogic Server as a container for deployed EJBs. A JDeveloper server-specific deployment profile is generated by default. You can also create a WebLogic-specific deployment profile. For more information, see Section.2, "How to Test EJB/JPA Components Using a Remote Server". ADF provides components for enabling data controls for your entities. Your Java EE application integrates selective components as you manually add a data control for your entities. For more information, see "Using ADF wizard. When you get to the EJB Name and Options page, be sure to check Generate Session Facade Methods.This automatically adds the session facade methods to your session bean. Note that you can create and edit session facade methods for all entities in your project by right-clicking your session bean and choosing Edit Session Facade. JDeveloper automatically recognizes new entities in your project and new methods on the entities. To register the business services model project with the data control: Right-click your session bean in the Navigator and choose Create Data Control. This creates a file called DataControls.dcx which contains information to initialize the data control to work with your session bean. To run and test your application: You have now created the basic framework for the model layer for a web-based EJB application. Use this framework to test your application as you continue building it. For more information, see Section 12.10, "Running and Testing EJB/JPA Components". To deploy your application: The integrated server runs within JDeveloper. You can run and test EJBs using this server and then deploy your EJBs with no changes to them. You do not need to create a deployment profile to use this server, nor do you have to initialize it. Create the deployment descriptor, ejb-jar.xml using the Deployment Descriptor wizard, and then package your EJB modules for deployment with your application. The Java EE design patterns are a set of best practices for solving recurring design problems. Patterns are ready-made solutions that can be adapted to different problems, and leverage the experience of successful Java EE developers. JDeveloper can help you implement the following Java EE design patterns in your EJB applications: MVC - The MVC pattern divides an application into three parts, the Model, View, and Controller. The model represents the business services of the application, the view is the portion of the application that the client accesses, the controller controls the flows and actions of the application and provides seamless interaction between the model and view. The MVC pattern is automatically implemented if you choose the Fusion Web Application (ADF) or Java EE Web Application template when you begin your project. Session Facade - The session facade pattern contains and centralizes complex interactions between lower-level EJBs (often JPA entities). It provides a single interface for the business services of your application. For more information, see Section 12.7, "Implementing Business Processes in Session Beans". Business Delegate - The business delegate pattern decouples clients and business services, hiding the underlying implementation details of the business service. The business delegate pattern is implemented by the data control, which is represented in JDeveloper by the Data Control Palette. For more information, see "Using ADF is defined in the Java Persistence API. 
JPA entities adopt a lightweight persistence model designed to work seamlessly with Oracle TopLink and Hibernate. The major enhancements with JPA entities are:
JPA Entities are POJOs
Metadata Annotations for O-R Mapping
Inheritance and Polymorphism Support
Simplified EntityManager API for CRUD Operations
Query Enhancements
JPA entities are now POJOs (Plain Old Java Objects) and there are no component interfaces required for them. JPA entities support inheritance and polymorphism as well. Example 12-1 contains the source code for a simple JPA entity (a representative sketch appears below). Note that the bean class is a concrete class, not an abstract one, as was the case with CMP 2.x entity beans. The O-R mapping annotations allow users to describe their entities with O-R mapping metadata. This metadata is then used to define the persistence and retrieval of entities. You no longer have to define the O-R (object-relational) mapping in a vendor-specific descriptor. The example uses the @Entity, @Table, and @Column annotations to specify at the class level that this is an entity, and to specify the underlying database table and column names for the entity. You can also use mapping annotations to define a relationship between entities and an inheritance strategy such as a joined subclass (Example 12-3).
The javax.persistence.EntityManager API is used for CRUD (Create, Read, Update, and Delete) operations on entity instances. You no longer have to write code for looking up instances and manipulating them. You can inject an instance of EntityManager in a session bean and use the persist() or find() methods on the EntityManager instance to create or query entity bean objects, as shown in Example 12-4, "EntityManager in a Session Bean", which injects the manager with @PersistenceContext (a comparable sketch appears below). Queries are defined in metadata. You may now specify your queries using annotations, or in a deployment descriptor. JPA entities support bulk update and delete operations through JPQL (Java Persistence Query Language). For more information, see Section 12.6.7, "JDK 5 Annotations for EJB/JPA".
JDeveloper offers you two easy wizards to create your JPA entities. You can create entities from online or offline databases, add a persistence unit, define inheritance strategies, and select from available database fields. The Entity from Tables wizard allows you to create entities from online or emulated offline databases, as well as from application server data source connections.
JDeveloper provides support for the SDO (Service Data Objects) data application development framework. Use the SDO 2.0 framework and API to easily modify business data regardless of how it is physically accessed. SDO encapsulates the backend data source, offers a choice of static or dynamic programming styles, and supports both connected and disconnected access. SDO handles XML parser operations, and automatically integrates the data parsing logic with the application. For more information, see "Integrating Service-Enabled Application Modules". SDO is a unified framework for data application development based on the concept of disconnected data graphs. A data graph is a collection of tree-structured or graph-structured data objects. To enable development of generic or framework code that works with Data Objects, it is important to be able to introspect on Data Object metadata, which exposes the data model for the Data Objects.
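Before continuing with SDO, here is a hedged sketch standing in for Examples 12-1 and 12-4 referenced above, whose listings did not survive in this copy. The Employee entity, the EMP table, and the "Model" persistence unit name are assumptions for illustration, not the original Oracle listing.

// Employee.java - a simple JPA entity in the spirit of Example 12-1:
// a concrete POJO, no component interfaces, mapped with O-R annotations.
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "EMP")
public class Employee implements java.io.Serializable {

    @Id
    @Column(name = "EMPNO")
    private int empNo;

    @Column(name = "ENAME")
    private String name;

    public Employee() {}

    public int getEmpNo() { return empNo; }
    public void setEmpNo(int empNo) { this.empNo = empNo; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// EmployeeFacadeBean.java - in the spirit of Example 12-4: an injected
// EntityManager performing CRUD work inside a stateless session bean.
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class EmployeeFacadeBean {

    @PersistenceContext(unitName = "Model") // unit name is an assumption
    private EntityManager em;

    public void addEmployee(Employee e) {
        em.persist(e);                         // create
    }

    public Employee findEmployee(int empNo) {
        return em.find(Employee.class, empNo); // read by primary key
    }
}

The session bean never looks up the entity through a home interface; the EntityManager handles identity, the persistence context, and synchronization with the database.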
As an alternative to Java reflection, SDO provides APIs to access metadata stored in XML schema definition (XSD) files that you create, based on the entity or data model information detailed in your EJB session beans. The SDO feature in JDeveloper can be used as an EJB service or as an ADF-BC service. If you choose to use an ADF-BC service you need add the listener reference to your weblogic-application.xml file. For more information, see Section 12.6.5, "How to Create an SDO Service Interface for JPA Entities". For more information and specifications on SDO, see the OSOA (Open Service Oriented Architecture) at You can easily create a service interface API to access JPA entity data through either an EJB session bean or a plain old Java object (POJO). This service class exposes operations for creating, retrieving, updating, and deleting the JPA entities in your JDeveloper J2EE application. To create a SDO service interface: Start with an EJB session bean, or an ordinary Java class (POJO), that exposes CRUD methods for one or more JPA entities. You can use the wizard to create your session beans. For more information, see Section 12.7.2, "How to Create a Session Bean". In the Structure window, right-click your EJB session Bean or POJO and choose Create Service Interface. Select the methods you want to make available in your service API. By default all of the methods in your session bean interface are selected. Click the checkbox to select or unselect a method. In this release, when you create a service interface, your original session bean file and the remote (or local) interface are modified. New methods are added that match the original ones, but they reference newly defined SDO data objects instead of JPA entities. These SDO data objects match the JPA entities and are defined in XSD files, which are also added to your project, and their names are appended with SDO, such as DeptSDO or EmployeeSDO. Select Backup File(s) to create a backup of your original session bean file. Click OK. To use an EJB/POJO SDO ADF-BC service from a fabric composite using SDO external bindings, you need to set up the Weblogic application deployment listener to invoke the ServiceRegistry logic. Set this up by adding the listener reference to your weblogic-application.xml file. To add the listener reference: Add the code in Example 12-5 to the weblogic-application.xml which by default is located in <workspace-directory>/src/META-INF. Example 12-5 Code Added to weblogic-application.xml <listener> <listener-class> oracle.jbo.client.svc.ADFApplicationLifecycleListener </listener-class> </listener> Once this listener is added, JDeveloper automatically registers the SDO service application name _JBOServiceRegistry_ into the fabric service registry in the composite.xml. When you create your SDO service interface, the necessary files to support your service interface are automatically created. These files include the following: SessionEJBBeanWS.wsdl - This file describes the capabilities of the service that provides an entry point into an SOA application or a reference point from an SOA application. The WSDL file provides a standard contract language and is central for understanding the capabilities of a service. SessionEJBBeanWS.xsd - This is an XML schema file that defines your service interface methods in terms of SDO data types. All of the entities that were contained in your session bean interface will have a corresponding DataObject element in this schema file. 
At runtime, these DataObjects are registered with the SDO runtime by calling XSDHelper.INSTANCE.define() method. A static type-specific DataObject is defined for each SDO type. When you deploy the JDeveloper integrated server, database tables are automatically created for every entity that does not have a corresponding existing mapped table. One database table will be generated per unmapped JPA entity. Note:Primary key referential integrity constraints will be generated, but other constraints may not be. To generate database tables from JPA entities: Create your JPA entity using the modeling tools or the Create Entity wizards. For more information, see Section used to generate artifacts such as interfaces. An annotation is a metadata modifier that is added to a Java source file. Annotations are compiled into the classes by the Java compiler at compile time, and can be specified on classes, fields, methods, parameters, local variables, constructors, enumerations, and packages. Annotations can be used to specify attributes for generating code, for documenting code, or for providing services like enhanced business-level security or special business logic during runtime. Every type of annotation available for your EJB/JPA classes can also, alternatively, be added to an XML deployment descriptor file. At runtime the XML will override any annotations added at the class level. Annotations are marked with the @ symbol, such as this stateless session bean annotation: @Stateless public class MySessionBean For more information on annotations for EJB 3.6.8, "How to Annotate Java Classes.". Annotations are available to indicate the bean type. Adding your bean type annotation to a regular class turns it into an EJB. The following types of annotations are available: Is Stateless Session Bean. Choose TRUE or FALSE to annotate your class as a stateless session bean. Is Stateful Session Bean. Choose TRUE or FALSE to annotate your class as a stateful session bean. Is Message Driven Bean. Choose TRUE or FALSE to annotate your class as a message driven bean. Annotations support a new Java Persistence API as an alternative to entity beans. The following types of annotations are available: Is JPA Entity. Choose TRUE or FALSE to annotate your class as a JPA entity. Is JPA Mapped Superclass. Choose TRUE or FALSE to annotate your class as a JPA mapped superclass. Is JPA Embeddable. Choose TRUE or FALSE to annotate your class as JPA embeddable. Once you transform your regular Java class into an EJB/JPA component, or if you used one of the EJB/JPA wizards to create the component, the Property Inspector displays a different set of contextual options, which you can use to add or edit annotations for the various members within the component class. During design time, JDeveloper provides you with the list of available annotations to insert into your classes. The options change depending on what type of class you are working on, and what member you have selected. You can annotate any regular Java class to turn it into an EJB/JPA component. Once the class is defined with annotations as an EJB/JPA, you can easily customize the component with a variety of member-level annotations available to choose from in the JDeveloper, select the class you want to annotate. In the Structure window, double-click the member you want to annotate. As an alternative, if your class is already open in the Java source editor, put your curser in the location where you intend to insert your annotation. 
In the Property Inspector, choose the tab corresponding to your EJB/JPA type. Choose from any of the annotations available for the specific member you have selected. When you create entities from database tables, foreign keys are interpreted as relationships between entities. You can further define these relationships, create new relationships, or map existing relationships to existing tables using the JDeveloper modeling tools. With the modeling tools you can represent relationships as lines between entities, and change the relationships by changing the line configurations. For more information, see Section 23.3, "Modeling EJB/JPA Components on a Diagram.". Java Persistence Query Language (JPQL) offers a standard way to define relationships between entity beans and dependent classes by introducing abstract schema types and relationships in the deployment descriptor. JPQL also defines queries for navigation using abstract schema names and relationships. The JPAQL query string consists of two mandatory clauses: SELECT and FROM, and an optional WHERE clause. For example: select d from Departments d where d.department_name = ?1 There are two kinds of methods that use JPQL, finder methods and select methods. Finder methods are exposed to the client and return either a single instance, or a collection of entity bean instances. Select methods are not exposed to the client, they are used internally to return an instance of cmp-field type, or the remote interfaces represented by the cmr-field. The Java Persistence API lets you declaratively map Java objects to relational database tables in a standard, portable way that works both inside a Java EE 5 application server and outside an EJB container. This approach greatly simplifies Java persistence and provides an object-relational mapping approach. With Oracle TopLink you can configure the JPA behavior of your entities using metadata annotations in your Java source code. At run-time the code is compiled into the corresponding Java class files. To designate a Java class as a JPA entity, use the @Entity annotation, as shown in Example 12-6. You can selectively add annotations to override defaults specified in your deployment descriptors. For more information on JPA Annotations, see the TopLink JPA Annotation Reference at. A Java service facade implements a lightweight testing environment you can run without an application server. With EJB 3.0 the Java service facade is similar to an EJB session facade, because you can generate facade methods for entities in the same persistence unit, without the container. Separating workflow with Java service facades eliminates the direct dependency of the client on the participant JPA objects and promotes design flexibility. Although changes to participants may require changes in the Java service facade, centralizing the workflow in the facade makes such changes more manageable. You change only the Java service facade rather than having to change all the clients. Client code is also simpler because it now delegates the workflow responsibility to the session facade. The client no longer manages the complex workflow interactions between business objects, nor is the client aware of interdependencies between business objects. You may choose to make the Java service class runnable by generating a sample Java client with a main() method. Use the JDeveloper Java service facade wizard to create a Java class as a service facade to entities. 
To create a new Java service facade, select the File menu, then New, then Business Tier, then EJB, then Java Service Facade. You can also create a data control from a service facade. In the Application Navigator, right-click the name of the service facade, then select Create Data Control. From the Bean Data Control Interface Chooser dialog, you can choose to implement oracle.binding.* data control interfaces. The interfaces are TransactionalDataControl, UpdatableDataControl, and ManagedDataControl. For more information, select the Help button in the dialog.
A session bean represents a single client inside the application server. To access an application deployed on the server, the client invokes the session bean methods. The session bean performs work for its client, shielding the client from complexity by executing business tasks inside the server. A session bean is similar to an interactive session. A session bean is not shared and has only one client, in the same way that an interactive session can have only one user. Like an interactive session, a session bean is not persistent, as it does not save data to the database. When the client terminates, its session bean appears to terminate and is no longer associated with the client. Create your session beans and session bean facades using the JDeveloper Session Bean Wizard. For more information, see Section 12.7.2, "How to Create a Session Bean."
There are two types of session beans: stateful session beans, which maintain conversational state for a single client across method calls, and stateless session beans, which keep no client-specific state between calls. With JDeveloper you can select to automatically generate your session facade methods any time you create a session bean through the session bean wizard. This creates a session bean that functions as a session facade for your business workflow. For more information, see Section 12.7.2, "How to Create a Session Bean."
The session facade is implemented as a session bean. The session bean facade encapsulates the complexity of interactions between the business objects participating in a workflow by providing a single interface for the business services of your application. The session facade manages the relationships between numerous business objects and provides a higher-level abstraction to the client. Session facades can be either stateful or stateless, which you define while creating a session facade in the wizard. For more information on session facades, see the Oracle Technology Network.
Use the wizard to automatically implement a session facade when you create a session bean, and to choose the methods you want to implement. Once you've created EJB entities, any session beans you create in the same project are aware of the entities and the methods they expose. Use the session bean wizard to create a new session bean or session facade bean, or create a session bean using the modeling tools. When you create a session bean or session facade using the wizard, you can also deselect a method so that it will not be exposed. For more information on session facades, see the Core J2EE Pattern Catalog.
You can also create a session facade manually by creating a local reference between a session bean and an entity. To create a local reference: Create a session bean, if you have not already done so. Create a local reference between the beans: if you are using EJB 3.0, select the EJB, then in the Structure pane right-click it, choose Enterprise Java Beans (EJB), and then choose Properties. In the Bean Method Details dialog, edit details as necessary. When finished, click OK.
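Pulling the facade and JPQL discussions together, here is a minimal sketch of a session facade bean exposing a finder method that runs the kind of query shown earlier ("select d from Departments d where d.department_name = ?1"). The Departments entity, the method names, and the injected persistence context are assumptions for illustration; the JDeveloper wizard generates its own method set.

import java.util.List;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.Query;

// Assumes a Departments entity exists in the same persistence unit;
// a local business interface would normally be added as well.
@Stateless
public class DepartmentsFacadeBean {

    @PersistenceContext
    private EntityManager em;

    // Finder method: exposed to clients, returns entity instances.
    @SuppressWarnings("unchecked")
    public List<Departments> findDepartmentsByName(String name) {
        Query q = em.createQuery(
            "select d from Departments d where d.department_name = ?1");
        q.setParameter(1, name);
        return q.getResultList();
    }

    // Core transactional methods of the kind the wizard typically generates.
    public Departments persistDepartments(Departments d) {
        em.persist(d);
        return d;
    }

    public Departments mergeDepartments(Departments d) {
        return em.merge(d);
    }

    public void removeDepartments(Departments d) {
        em.remove(em.merge(d));
    }
}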
You can add fields to EJBs on an EJB diagram or through the EJB Module Editor., select an EJB. In the Structure pane, right-click the EJB, then choose Enterprise Java Beans (EJB) node, then choose New Field. In the Field Details dialog, add details, as necessary. When finished, click OK. You can remove fields from EJBs, as described below. To remove a field on an EJB Diagram: Click in the fields compartment (the first compartment) on an EJB. Highlight the field and press the Delete key. To remove a field using the, select an EJB. In the Structure pane, double-click the field to locate it in the source file. In the source file, delete the field. Environment entries are name-value pairs that allow you to customize the bean's business logic. Since environment entries are stored in an enterprise bean's deployment descriptor, a bean's business logic can be changed without changing its source code. For example, an EJB that calculates an order might give a discount depending on the number of items ordered, a certain status (silver, gold, platinum), or for a promotion. Before deploying the bean's application you could assign the discount a certain percentage. When the application runs, a method would call the environment entry to find out the discount value. If you wanted to change that percentage in a different deployment, you would not need to change the source code, you would just need to change the value in the environment entries for the deployment descriptor. Environment entries are annotated in the source code. For the complete EJB 3.0 Java Community Process specifications and documentation, see. Depending on how your develop your application, there are different methods of exposing data to clients. If you're using the Oracle ADF framework, the preferred method of exposing data to clients is to implement the session facade design pattern and drop the session bean onto the data control palette. This option vastly simplifies data coordination and is only available in the JDeveloper Studio release. For more information, see Section.2, "Developing Applications with JavaServer Faces." A resource reference is an element in a deployment descriptor that identifies the component's coded name for the resource. Resource references are used to obtain connector and database connections, and to access JMS connection factories, JavaMail sessions, and URL links. To add or modify EJB 3.0 resource references: Go to your source code to annotate resource references. A primary key is a unique identifier with one or more persistent attributes. It identifies one instance of a class from all other instances of the same type. Use primary keys to define relationships and to define queries. Each JPA entity instance must have a primary key. To accommodate your database schema, you can define simple primary keys from persistent fields or composite primary keys from multiple persistent fields. You can also define automatic primary key value generation to simplify your JPA entity implementation. The simplest way to specify a simple primary key is to use annotations for a single primitive, or JDK object type entity field as the primary key. You can also specify a simple primary key at deployment time using deployment XML. To configure a simple primary key using annotations: In your JPA entity implementation, annotate the primary key field using the @Id annotation, as shown in Example 12-7. 
Example 12-7 Configuring Primary Key Using Annotations import javax.ejb.Entity; import javax.persistence.Id; import javax.persistence.Table; import javax.persistence.Column; @Entity @Table(name = "EMP") public class Employee implements java.io.Serializable { private int empNo; private String eName; private String birthday; private Address address; private int version; public Employee() { { @Id @Column(name="EMPNO") public int getEmpNo() { return empNo; } ... } Package and deploy your application. To configure a simple primary key using deployment XML: In your JPA entity implementation, implement a primary key field, as shown in Example 12-8. For certain ADF Faces features, a designated primary key is required. For example, if you have an ADF Faces table that uses an af:tableSelectMany component, you will need to specify a primary key to be able to implement sorting. When you create EJB/JPA entities from tables (using EJB 3, find the attribute you want as the primary key and set the PrimaryKey value to true. JDeveloper automatically provides a complete set of data control components when you build an ADF Fusion web application. When you build a Java EE application, and/or an EJB project, you assign ADF data controls on your individual session beans. This adds a data control file with the same name as the bean. For.". An EJB module is a software unit comprising one or more EJBs, a persistence unit, and an optional EJB deployment descriptor. A JDeveloper project contains only one EJB module. At deploy-time, the module is packaged as an ejb.jar file. Entity beans were once packaged in the EJB JAR file along with the session and message-driven beans. Today, with JPA entities and the persistence unit technology, at deploy-time, they are packaged in their own JAR file, persistenceunit.jar. Now your entity beans (JPA entities) are contained separately, in a JPA persistence archive JAR, which includes a persistence.xml file. The JPA persistence unit does not have to be part of the EJB module package, but can be bundled inside the ejb.jar file. JDeveloper project can contain only one EJB module. When you create your first session or message-driven bean in a project, a module is automatically established, if one does not already exist. You are given the option of choosing the EJB version and the persistence manager for your new EJB module. When you deploy your project you convert the aggregate of session and message-driven beans, plus deployment descriptor into an a EJB JAR file (.jar file), ready for deployment to an application server or as an archive file. By confining the persistence unit to its own JAR file, the persistence unit can easily be reused in other applications. For more information, see Section 9.1, "About Deploying Applications." A JPA persistence unit is comprised of a persistence.xml file, one or more optional orm.xml files, and the managed entity classes that belong to the persistence unit. A persistence unit is a logical grouping of the entity manager, data source, persistent managed classes, and mapping metadata. A persistence unit defines an entity manager's configuration by logically grouping details like entity manager provider, configuration properties, and persistent managed classes. Each persistence unit must have a name. Only one persistence unit of a given name may exist in a given EJB-JAR, WAR, EAR, or application client JAR. You can package a persistence unit in its own persistence archive and include that archive in whatever Java EE modules require access to it. 
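Stepping back to the primary-key discussion above: the text mentions automatic primary key value generation but does not show it. A common way to do this, shown here as an assumption about typical JPA usage rather than as the missing Example 12-8, is the @GeneratedValue annotation; the Project entity and its table name are invented for the sketch.

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "PROJECTS")
public class Project implements java.io.Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO) // provider chooses identity, sequence, or table generation
    private Long id;

    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

When the key is instead declared at deployment time, as the Example 12-8 reference describes, the class stays a plain field-plus-accessor POJO and the id mapping is supplied in the XML mapping file rather than with @Id.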
The persistence.xml file contains sections or groupings, these groupings correspond to your entities, and run-time data related to the entities. When you create a new entity using the entity wizards, and if you have an existing persistence unit in the project, the entity will be inserted into its own section in the persistence.xml. If you do not have an existing persistence unit, one will be created automatically, with a section included for the entity definitions. The JAR file or directory, whose META-INF directory contains the persistence.xml file, is called the root of the persistence unit. An EJB 3 and press Delete. You can import existing EJBs from a JAR file or from a deployment descriptor. To import an EJB module, or a subset of EJBs within an EJB module into a project: From the File menu, choose Import. In the Import dialog, choose EJB JAR (.jar) File. Follow the steps in the wizard. To import an EJB deployment descriptor (ejb-jar.xml) file: From the File menu, choose Import. In the Import dialog, choose EJB Deployment Descriptor (ejb-jar.xml) File. Follow the steps in the wizard Note:If you import a deployment descriptor using this wizard, and then use the wizard to import more files, the wizard caches the last used descriptor file, JAR file, and descriptor source directory in the IDE preferences file for convenience. This makes it easier to do tasks such as splitting an EJB module into multiple modules, importing multiple JAR files residing in the same directory, etc. To import a WebLogic deployment descriptor (weblogic. To avoid conflicts, if an EJB with the same name already exists in your existing module, that EJB will not be imported.; the sample client utility can be used to create a client for either type. client and choose Run. The Message pane shows you the running output. JDeveloper provides support for JUnit regression testing for your EJBs. JUnit is an open source Java regression testing framework that comes as an optional feature in JDeveloper. To use this feature you'll need to install the JUnit extension. Use JUnit to write and run tests that verify your code. After you install the JUnit extension, you can use the simple wizard to select your session bean or Java class files, to select the methods that you want to test within those files, and then to start the JUnit test. To run a JUnit test on an EJB: Install the Junit extension from the JDeveloper Help menu. For more information, see Section,.
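As a sketch of the kind of JUnit test the last paragraph describes, the following bootstraps the persistence unit directly and exercises facade-style logic outside the container. The "Model" unit name and the Employee entity (reused from the earlier sketch) are assumptions, and the test presumes the persistence unit is configured for stand-alone (RESOURCE_LOCAL) use; the JDeveloper wizard generates its own test skeleton.

import static org.junit.Assert.assertNotNull;

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class EmployeeFacadeTest {

    private EntityManagerFactory emf;
    private EntityManager em;

    @Before
    public void setUp() {
        // "Model" is the persistence unit name declared in persistence.xml (assumed here).
        emf = Persistence.createEntityManagerFactory("Model");
        em = emf.createEntityManager();
    }

    @Test
    public void persistedEmployeeCanBeFoundByPrimaryKey() {
        em.getTransaction().begin();
        Employee e = new Employee();
        e.setEmpNo(42);
        e.setName("Test");
        em.persist(e);
        em.getTransaction().commit();

        assertNotNull(em.find(Employee.class, 42));
    }

    @After
    public void tearDown() {
        if (em != null) em.close();
        if (emf != null) emf.close();
    }
}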
http://docs.oracle.com/cd/E16162_01/user.1112/e17455/dev_ejb_jpa.htm
CC-MAIN-2016-07
en
refinedweb
Dynamically set WCF Endpoint in Silverlight When you add a WCF service reference to a Silverlight Application, it generates the ServiceReference.ClientConfig file where the URL of the WCF endpoint is defined. When you add the WCF service reference on a development computer, the endpoint URL is on localhost. But when you deploy the Silverlight client and the WCF service on a production server, the endpoint URL no longer is on localhost instead on some domain. As a result, the Silverlight application fails to call the WCF services. You have to manually change the endpoint URL on the Silverlight config file to match the production URL before deploying live. Now if you are deploying the Silverlight application and the server side WCF service as a distributable application where customer install the service themselves on their own domain then you don’t know what will be the production URL. As a result, you can’t rely on the ServiceReference.ClientConfig. You have to dynamically find out on which domain the Silverlight application is running and what will be the endpoint URL of the WCF service. Here I will show you an approach to dynamically decide the endpoint URL. First you add a typical service reference and generate a ServiceReference.ClientConfig that looks like this: <configuration> <system.serviceModel> <bindings> <basicHttpBinding> <binding name="BasicHttpBinding_ProxyService" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647"> <security mode="None" /> </binding> <binding name="BasicHttpBinding_WidgetService" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647"> <security mode="None" /> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="" binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_ProxyService" contract="DropthingsProxy.ProxyService" name="BasicHttpBinding_ProxyService" /> <endpoint address="" binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_WidgetService" contract="DropthingsWidgetService.WidgetService" name="BasicHttpBinding_WidgetService" /> </client> </system.serviceModel> </configuration> As you see, all the URL are pointing to localhost, on my development environment. The Silverlight application now need to dynamically decide what URL the Silverlight app is running from and then resolve the endpoint URL dynamically. I do this by creating a helper class that checks the URL of the Silverlight application and then decides what’s going to be the URL of the endpoint. 
public class DynamicEndpointHelper { // Put the development server site URL including the trailing slash // This should be same as what's set in the Dropthings web project's // properties as the URL of the site in development server private const string BaseUrl = ""; public static string ResolveEndpointUrl(string endpointUrl, string xapPath) { string baseUrl = xapPath.Substring(0, xapPath.IndexOf("ClientBin")); string relativeEndpointUrl = endpointUrl.Substring(BaseUrl.Length); string dynamicEndpointUrl = baseUrl + relativeEndpointUrl; return dynamicEndpointUrl; } } In the Silverlight app, I construct the Service Client this way: private DropthingsProxy.ProxyServiceClient GetProxyService() { DropthingsProxy.ProxyServiceClient service = new DropthingsProxy.ProxyServiceClient(); service.Endpoint.Address = new EndpointAddress( DynamicEndpointHelper.ResolveEndpointUrl(service.Endpoint.Address.Uri.ToString(), App.Current.Host.Source.ToString())); return service; } After creating the service client with default setting, it changes the endpoint URL to the currently running website’s URL. This solution works when the WCF services are exposed from the same web application. If you have the WCF services hosted on a different domain and you are making cross domain calls to the WCF service then this will not work. In that case, you will have to find out what’s the domain of the WCF service and then use that instead of localhost.
http://weblogs.asp.net/omarzabir/dynamically-set-wcf-endpoint-in-silverlight
CC-MAIN-2016-07
en
refinedweb
The N Queens problem is solvable in polynomial time, which means there's a way to cheat. That being said, OptaPlanner solves the 1 000 000 queens problem in less than 3 seconds. Here's a log to prove it (with time spent in milliseconds):
INFO Opened: data/nqueens/unsolved/10000queens.xml
INFO Solving ended: time spent (23), best score (0), ...
INFO Opened: data/nqueens/unsolved/100000queens.xml
INFO Solving ended: time spent (159), best score (0), ...
INFO Opened: data/nqueens/unsolved/1000000queens.xml
INFO Solving ended: time spent (2981), best score (0), ...
How to cheat on the N Queens problem
The N Queens problem is not NP-complete, nor NP-hard. That is math speak for stating that there's a perfect algorithm to solve this problem: the Explicit Solutions algorithm. Implemented with a CustomSolverPhaseCommand in OptaPlanner, it looks like this:
public class CheatingNQueensPhaseCommand implements CustomSolverPhaseCommand {

    public void changeWorkingSolution(ScoreDirector scoreDirector) {
        NQueens nQueens = (NQueens) scoreDirector.getWorkingSolution();
        int n = nQueens.getN();
        List<Queen> queenList = nQueens.getQueenList();
        List<Row> rowList = nQueens.getRowList();

        if (n % 2 == 1) {
            Queen a = queenList.get(n - 1);
            scoreDirector.beforeVariableChanged(a, "row");
            a.setRow(rowList.get(n - 1));
            scoreDirector.afterVariableChanged(a, "row");
            n--;
        }
        int halfN = n / 2;
        if (n % 6 != 2) {
            for (int i = 0; i < halfN; i++) {
                Queen a = queenList.get(i);
                scoreDirector.beforeVariableChanged(a, "row");
                a.setRow(rowList.get((2 * i) + 1));
                scoreDirector.afterVariableChanged(a, "row");

                Queen b = queenList.get(halfN + i);
                scoreDirector.beforeVariableChanged(b, "row");
                b.setRow(rowList.get(2 * i));
                scoreDirector.afterVariableChanged(b, "row");
            }
        } else {
            for (int i = 0; i < halfN; i++) {
                Queen a = queenList.get(i);
                scoreDirector.beforeVariableChanged(a, "row");
                a.setRow(rowList.get((halfN + (2 * i) - 1) % n));
                scoreDirector.afterVariableChanged(a, "row");

                Queen b = queenList.get(n - i - 1);
                scoreDirector.beforeVariableChanged(b, "row");
                b.setRow(rowList.get(n - 1 - ((halfN + (2 * i) - 1) % n)));
                scoreDirector.afterVariableChanged(b, "row");
            }
        }
    }

}
Now, one could argue that this implementation doesn't use any of OptaPlanner's algorithms (such as the Construction Heuristics or Local Search). But it's straightforward to mimic this approach in a Construction Heuristic (or even a Local Search). So, in a benchmark, any Solver which simulates that approach the most is guaranteed to win when scaling out.
Why doesn't that work for other planning problems?
This algorithm is perfect for N Queens, so why don't we use a perfect algorithm on other planning problems? Well, simply because there are none! Most planning problems, such as vehicle routing, employee rostering, cloud optimization, bin packing, ... are proven to be NP-complete (or NP-hard). This means that these problems are in essence the same: a perfect algorithm for one would work for all of them. But no human has ever found such an algorithm (and most experts believe no such algorithm exists).
Note: There are a few notable exceptions of planning problems that are not NP-complete, nor NP-hard. For example, finding the shortest distance between 2 points can be solved in polynomial time with A*-Search. But their scope is narrow: finding the shortest distance to visit n points (TSP), on the other hand, is not solvable in polynomial time. Because N Queens differs intrinsically from real planning problems, it is a terrible use case to benchmark.
Conclusion Benchmarks on the N Queens problem are meaningless. Instead, benchmark implementations of a realistic competition. OptaPlanner‘s examples implement several cases of realistic competitions.
http://www.javacodegeeks.com/2014/05/cheating-on-the-n-queens-benchmark.html
CC-MAIN-2016-07
en
refinedweb
#include <deal.II/base/data_out_base.h> Flags controlling the details of output in deal.II intermediate format. At present no flags are implemented. Definition at line 989 of file data_out_base.h. An indicator of the current file format version used to write intermediate format. We do not attempt to be backward compatible, so this number is used only to verify that the format we are writing is what the current readers and writers understand. Definition at line 997 of file data_out_base.h.
http://www.dealii.org/developer/doxygen/deal.II/structDataOutBase_1_1Deal__II__IntermediateFlags.html
CC-MAIN-2016-07
en
refinedweb
Greetings - I am trying to find out if Mailman can be installed on a 1&1 Business level Shared Linux hosting package. I have discovered such things as this: Which makes it sound like, since 1&1 has separated servers into Web / Mail / Database, it is not possible to install Mailman. That makes Mailman sound like it is intended for servers smaller than a $10 per month plan... or dedicated, which is out of the budget. We have shell access. We can set up cron jobs. I know the hostname of an internal SMTP server that does not require authentication to connect to it. (Useful for things like setting up an email list server.) I guess I am a bit bewildered by such posts (like the above URL) as I have managed to dig up. Along with that, steps in the installation speak of a requirement to add a user and a group and to change group ownership of the directory: can I not just skip those steps since this is a shared hosting environment? Please advise. -- Michael Lueck, Lueck Data Systems

Hi! I've just set up MailManager (MailManager-2.1.tar.gz and 3rdParty-2.1-rc7.tar.gz) with zope2.10 from Debian testing (Lenny). There are several errors of this type popping up:
2007-11-09T16:53:24 ERROR Application Could not import Products.MailManager
Traceback (most recent call last):
  File "/usr/lib/zope2.10/lib/python/OFS/Application.py", line 708, in import_product
    product=__import__(pname, global_dict, global_dict, silly)
  File "/home/john/myzope/Products/MailManager/__init__.py", line 10, in ?
    import MailManager, MMUserFolder
  File "/home/john/myzope/Products/MailManager/MailManager.py", line 92, in ?
    from Products.MailManager.Reporting import ReportingDataMixin, manage_addQueueReportingEngine
  File "/home/john/myzope/Products/MailManager/Reporting.py", line 60, in ?
    globals())
  File "/usr/lib/zope2.10/lib/python/Products/PageTemplates/PageTemplateFile.py", line 89, in __init__
    content = open(filename).read()
IOError: [Errno 2] No such file or directory: '/home/john/myzope/Products/MailManager/www/manage_addQueueReportingEngineForm.zpt'
(/home/john/testzope is where I put my Zope instance.) It tries to load a few non-existent page template files; these are:
Reporting.py, line 60, referring to www/manage_addQueueReportingEngineForm.zpt
MailManager.py:4348, www/master_fullwidth.zpt
MailManager.py:4383, www/Test.zpt
ruleset/zope.py:83, ruleset/www/manage_addRulesetEngineForm.zpt
I'm guessing Test.zpt is the same as test.zpt, but the others are completely missing. The last one even seems to be pointing at the wrong directory (it would probably be under www, not ruleset/www). I've created dummy files with those names, and MailManager happily runs so far, but it will probably bug out as soon as I hit those pages. I cannot find those files anywhere, including SVN (tested with the complete SVN sources, with the same issue). Where are they, or if they don't exist, what should be done with the references to them? Regards, John Stäck
http://sourceforge.net/p/mailmanager/mailman/mailmanager-users/?viewmonth=200711
CC-MAIN-2016-07
en
refinedweb
CS CODEDOM Parser is a utility which parses C# source code and creates a CodeDOM tree of the code (general classes that represent code, part of the .NET Framework - namespace System.CodeDom). The current version (0.1) is limited - it parses code down to type members and their parameters, has very limited support for expressions, and does not parse the statements inside members. There are two main reasons why I stayed at this level for now. On the other hand, it also parses source code comments, so it can be used to analyze the interdependencies of code and comments. The stability of this version is also low - it is more of an alpha version. If anybody wants to help get this thing further, he is welcome.
The parser is based on the Mono C# compiler code. I looked around a little for available C# parsers and C# parser-building tools (I wanted a C# parser written in C#) and finally decided on Mono. For more details about the use of the Mono parser and the other possibilities I explored, see the section C# Parser Tools.
At first I thought it was a great idea to use a language-independent syntax tree, and CodeDom looks nice. If a code analysis tool is built on it, it can work for any .NET language: you just need to change the parser and the rest stays the same, which sounds cool. But after I got into CodeDom, I found that a lot of language features (and not just C# features - basically for any language) are missing, and it is not possible to parse the source code fully. The main problem is in expressions and statements, where CodeDom has a very limited set of classes - there is, for instance, no support for unary operations, and there are more issues. I decided to continue with CodeDom, even with its limitations, because it was enough for the purpose of analyzing code for coding standards (at least what I need now - it also enables keeping comments and code in one tree, which is something I liked), but this remains an open issue for future development. Here is a list of issues I have found for future work (and there are more). I have considered defining my own syntax tree, but I still like the idea of the independent, language-neutral tree structure, which can be used for different tasks. Reporting of errors and warnings should be improved (unify codes and messages, unify error reporting; the Report class should store reported errors). The parser should also be improved to indicate the location of syntax elements more exactly in the source file. Better separation between the parser and the CodeDOM builder is also needed. If somebody likes the tool and wants to help with its improvements, he is welcome.
A forum comment from leppie points out that CodeDomProvider.CreateParser returns null in the framework's own implementations, so there is no built-in C# parser to build on:

public class CSharpCodeProvider : CodeDomProvider
{
    public CSharpCodeProvider();
    public override ICodeCompiler CreateCompiler();
    public override ICodeGenerator CreateGenerator();
    public override TypeConverter GetConverter(Type type);
    public override string FileExtension { get; }
}

public abstract class CodeDomProvider : Component
{
    protected CodeDomProvider();
    public abstract ICodeCompiler CreateCompiler();
    public abstract ICodeGenerator CreateGenerator();
    public virtual ICodeGenerator CreateGenerator(string fileName);
    public virtual ICodeGenerator CreateGenerator(TextWriter output);
    public virtual ICodeParser CreateParser();
    public virtual TypeConverter GetConverter(Type type);
    public virtual string FileExtension { get; }
    public virtual LanguageOptions LanguageOptions { get; }
}

// From the SS CLI
public virtual ICodeParser CreateParser()
{
    return null;
}

// From Anakrino
public virtual ICodeParser CreateParser()
{
    return null;
}

// and finally the IL of CodeDomProvider.CreateParser
.maxstack 8
L_0000: ldnull
L_0001: ret
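A quick way to confirm what the listings above show: asking the stock provider for a parser yields nothing, which is exactly the hole this project fills. A minimal sketch:

using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class ParserCheck
{
    static void Main()
    {
        CodeDomProvider provider = new CSharpCodeProvider();

        // As the decompiled sources and IL above show, the base implementation just returns null.
        ICodeParser parser = provider.CreateParser();

        Console.WriteLine(parser == null
            ? "CreateParser() returned null - a separate C# parser is required."
            : "Parser available: " + parser.GetType().FullName);
    }
}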
http://www.codeproject.com/Articles/2502/C-CodeDOM-parser?fid=4186&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None&select=734743&fr=1
CC-MAIN-2016-07
en
refinedweb
Friend.) Here is the canonical “Hello, world” example app: import tornado.ioloop import tornado.web class MainHandler(tornado.web.RequestHandler): def get(self): self.write("Hello, world") application = tornado.web.Application([ (r"/", MainHandler), ]) if __name__ == "__main__": application.listen(8888) tornado.ioloop.IOLoop.instance().start() We attempted to clean up the code base to reduce interdependencies between modules, so you should (theoretically) be able to use any of the modules independently in your project without using the whole package. A Tornado web application maps URLs or URL patterns to subclasses of tornado.web.RequestHandler. Those classes define get() or post() methods to handle HTTP GET or POST requests to that URL. This code maps the root URL / to MainHandler and the URL pattern /story/([0-9]+) to StoryHandler. Regular expression groups are passed as arguments to the RequestHandler methods: class MainHandler(tornado.web.RequestHandler): def get(self): self.write("You requested the main page") class StoryHandler(tornado.web.RequestHandler): def get(self, story_id): self.write("You requested the story " + story_id) application = tornado.web.Application([ (r"/", MainHandler), (r"/story/([0-9]+)", StoryHandler), ]) You can get query string arguments and parse POST bodies with the get_argument() method: class MainHandler(tornado.web.RequestHandler): def get(self): self.write('<html><body><form action="/" method="post">' '<input type="text" name="message">' '<input type="submit" value="Submit">' '</form></body></html>') def post(self): self.set_header("Content-Type", "text/plain") self.write("You wrote " + self.get_argument("message")) Uploaded files are available in self.request.files, which maps names (the name of the HTML <input type="file"> element) to a list of files. Each file is a dictionary of the form {"filename":..., "content_type":..., "body":...}. If you want to send an error response to the client, e.g., 403 Unauthorized, you can just raise a tornado.web.HTTPError exception: if not self.user_is_logged_in(): raise tornado.web.HTTPError(403) The request handler can access the object representing the current request with self.request. The HTTPRequest object includes a number of useful attributes, including: See the class definition for tornado.httpserver.HTTPRequest for a complete list of attributes. In addition to get()/post()/etc, certain other methods in RequestHandler are designed to be overridden by subclasses when necessary. On every request, the following sequence of calls takes place: Here is an example demonstrating the initialize() method: class ProfileHandler(RequestHandler): def initialize(self, database): self.database = database def get(self, username): ... app = Application([ (r'/user/(.*)', ProfileHandler, dict(database=database)), ]) Other methods designed for overriding include: There are three ways to return an error from a RequestHandler: The default error page includes a stack trace in debug mode and a one-line description of the error (e.g. “500: Internal Server Error”) otherwise. To produce a custom error page, override RequestHandler.write_error.). In Tornado 2.0 and earlier, custom error pages were implemented by overriding RequestHandler.get_error_html, which returned the error page as a string instead of calling the normal output methods (and had slightly different semantics for exceptions). This method is still supported, but it is deprecated and applications are encouraged to switch to RequestHandler.write_error. 
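The overview above recommends switching to RequestHandler.write_error for custom error pages but does not show one, so here is a hedged sketch of a typical override (the exact markup is illustrative). In Tornado 2.1 the method receives the status code plus keyword arguments, which include exc_info when an exception caused the error:

import traceback
import tornado.web

class BaseHandler(tornado.web.RequestHandler):
    def write_error(self, status_code, **kwargs):
        if self.settings.get("debug") and "exc_info" in kwargs:
            # In debug mode, mirror the default behaviour and dump the traceback
            self.set_header("Content-Type", "text/plain")
            for line in traceback.format_exception(*kwargs["exc_info"]):
                self.write(line)
            self.finish()
        else:
            self.finish("<html><body>%d: something went wrong on our side</body></html>"
                        % status_code)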
There are two main ways you can redirect requests in Tornado: self.redirect and with the RedirectHandler. You can use self.redirect within a RequestHandler method (like get) to redirect users elsewhere. There is also an optional parameter permanent which you can use to indicate that the redirection is considered permanent. This triggers a 301 Moved Permanently HTTP status, which is useful for e.g. redirecting to a canonical URL for a page in an SEO-friendly manner. The default value of permanent is False, which is apt for things like redirecting users on successful POST requests. self.redirect('/some-canonical-page', permanent=True) RedirectHandler is available for your use when you initialize Application. For example, notice how we redirect to a longer download URL on this website: application = tornado.wsgi.WSGIApplication([ (r"/([a-z]*)", ContentHandler), (r"/static/tornado-0.2.tar.gz", tornado.web.RedirectHandler, dict(url="")), ], **settings) The default RedirectHandler status code is 301 Moved Permanently, but to use 302 Found instead, set permanent to False. application = tornado.wsgi.WSGIApplication([ (r"/foo", tornado.web.RedirectHandler, {"url":"/bar", "permanent":False}), ], **settings) Note that the default value of permanent is different in self.redirect than in RedirectHandler. This should make some sense if you consider that self.redirect is used in your methods and is probably invoked by logic involving environment, authentication, or form submission, but RedirectHandler patterns are going to fire 100% of the time they match the request URL. You can use any template language supported by Python, but Tornado ships with its own templating language that is a lot faster and more flexible than many of the most popular templating systems out there. See the tornado.template module documentation for complete documentation.="61oETzKXQAGaYdkL5gEmGeJJFuYh7EQnp2XdTP1o/Vo=")": "61oETzKXQAGaYdkL5gEmGeJJFuYh7EQnp2XdTP1o/Vo=", "login_url": "/login", } application = tornado.web.Application([ (r"/", MainHandler), (r"/login", LoginHandler), ], **settings) If you decorate post() methods with the authenticated decorator, and the user is not logged in, the server will send a 403 response. Tornado comes with built-in support for third-party authentication schemes like Google OAuth. See the tornado.auth for more details. Check out the Tornado Blog example application for a complete example that uses authentication (and stores user data in a MySQL database).": "61oETzKXQAGaYdkL5gEmGeJJFuYh7EQnp2XdTP1o/Vo=", "login_url": "/login", "xsrf_cookies": True, } application = tornado.web.Application([ (r"/", MainHandler), (r"/login", LoginHandler), ], **settings) If xsrf_cookies is set, the Tornado web application will set the _xsrf cookie for all users and reject all POST, PUT, and DELETE requests that do not contain a correct _xsrf value. If you turn this setting on, you need to instrument all forms that submit via POST to contain this field. You can do this with the special function xsrf_form_html(), available in all templates: <form action="/new_message" method="post"> {{. You can serve static files from Tornado by specifying the static_path setting in your application: settings = { "static_path": os.path.join(os.path.dirname(__file__), "static"), "cookie_secret": "61oETzKXQAGaYdkL5gEmGeJJFuYh7EQnp2XdTP1o/Vo=", ; capturing groups are passed to handlers as method arguments.) You could do the same thing to serve e.g. sitemap.xml from the site root. support these caching semantics. 
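The caching behaviour referred to above relies on the static_url() helper rather than hard-coded /static/ paths. Here is a hedged sketch of its use (the file name is illustrative); the same helper is available inside templates as {{ static_url("style.css") }}:

import tornado.web

class HomeHandler(tornado.web.RequestHandler):
    def get(self):
        # static_url() emits something like /static/style.css?v=<hash of the file contents>,
        # so the content can be cached aggressively and still update when the file changes.
        css = self.static_url("style.css")
        self.write('<html><head><link rel="stylesheet" href="%s"></head>'
                   '<body>Static file demo</body></html>' % css)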
Here is the nginx configuration we use at FriendFeed: location /static/ { root /var/friendfeed/static; if ($query_string) { expires max; } } localized template: <html> <head> <title>FriendFeed - {{ _("Sign in") }}</title> </head> <body> <form action="{{ request.path }}" method="post"> <div>{{ _("Username") }} <input type="text" name="username"/></div> <div>{{ _("Password") }} <input type="password" name="password"/></div> <div><input type="submit" value="{{ _("Sign in") }}"/></div> {{ get_user_locale in your request handler:. You can load all the translations for your application using the tornado.locale.load_translations method. It takes in the name of the directory which should contain CSV files named after the locales whose translations they contain, e.g., es_GT.csv or fr_CA.csv. The method loads all the translations from those CSV files and infers the list of supported locales based on the presence of each CSV file. You typically call this method once in the main() method of your server: def main(): tornado.locale.load_translations( os.path.join(os.path.dirname(__file__), "translations")) start_server(). See the tornado.locale documentation for detailed information on the CSV format and other localization methods. Tornado supports UI modules to make it easy to support standard, reusable UI widgets across your application. UI modules are like special functional home.html, you reference the Entry module rather than printing the HTML directly: {% for entry in entries %} {% module Entry(entry) %} {% end %} Within entry.html, you reference the Entry module with the show_comments argument to show the expanded form of the entry: {%. When a request handler is executed, the request is automatically finished. Since Tornado uses a non-blocking I/O style, you can override this default behavior if you want a request to remain open after the main request handler method returns using the tornado.web.asynchronous decorator. When you use this decorator, it is your responsibility to call self.finish() to finish the HTTP request, or the user’s browser will simply hang: class MainHandler(tornado.web.RequestHandler): @tornado.web.asynchronous def get(self): self.write("Hello, world") self.finish() Here is a real example that makes a call to the FriendFeed API using Tornado’s built-in asynchronous HTTP client:). Tornado includes two non-blocking HTTP client implementations: SimpleAsyncHTTPClient and CurlAsyncHTTPClient. The simple client has no external dependencies because it is implemented directly on top of Tornado’s IOLoop. The Curl client requires that libcurl and pycurl be installed (and a recent version of each is highly recommended to avoid bugs in older version’s asynchronous interfaces), but is more likely to be compatible with sites that exercise little-used parts of the HTTP specification. Each of these clients is available in its own module (tornado.simple_httpclient and tornado.curl_httpclient), as well as via a configurable alias in tornado.httpclient. 
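The FriendFeed API example mentioned earlier in this section does not appear above, so here is a hedged sketch of that general pattern with the non-blocking client (the feed URL and JSON field name are illustrative):

import json

import tornado.httpclient
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        http = tornado.httpclient.AsyncHTTPClient()
        # The request returns immediately; the response arrives later via the callback
        http.fetch("http://friendfeed-api.com/v2/feed/bret",
                   callback=self.on_response)

    def on_response(self, response):
        if response.error:
            raise tornado.web.HTTPError(500)
        feed = json.loads(response.body)
        self.write("Fetched " + str(len(feed["entries"])) + " entries from the FriendFeed API")
        # The handler was left open by @asynchronous, so we must finish it ourselves
        self.finish()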
SimpleAsyncHTTPClient is the default, but to use a different implementation call the AsyncHTTPClient.configure method at startup: AsyncHTTPClient.configure('tornado.curl_httpclient.CurlAsyncHTTPClient') Tornado’sHandler(tornado.web.RequestHandler, tornado.auth.GoogleMixin): @tornado.web.asynchronous def get(self): if self.get_argument("openid.mode", None): self.get_authenticated_user(self._on_auth) return self.authenticate_redirect() def _on_auth(self, user): if not user: self.authenticate_redirect() return # Save the user with, e.g., set_secure_cookie() See the tornado.auth module documentation for more details. If you pass debug=True to the Application constructor, the app will be run in debug mode. In this mode, templates will not be cached and. Debug mode is not compatible with HTTPServer‘s multi-process mode. You must not give HTTPServer.start an argument greater than 1 if you are using debug mode. The automatic reloading feature of debug mode is available as a standalone module in tornado.autoreload, and is optionally used by the test runner in tornado.testing.main. At FriendFeed, we use nginx as a load balancer and static file server. We run multiple instances of the Tornado web server on multiple frontend machines. We typically run one Tornado frontend per core on the machine (sometimes more depending on utilization). false; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_pass; } } }. You can create a valid WSGI application from your Tornado request handlers by using WSGIApplication in the wsgi module instead of using tornado.web.Application. Here is an example that uses the built-in WSGI CGIHandler to make a valid Google AppEngine application: import tornado.web import tornado.wsgi import wsgiref.handlers class MainHandler(tornado.web.RequestHandler): def get(self): self.write("Hello, world") if __name__ == "__main__": application = tornado.wsgi.WSGIApplication([ (r"/", MainHandler), ]) wsgiref.handlers.CGIHandler().run(application) See the appengine example application for a full-featured AppEngine app built on Tornado.
http://www.tornadoweb.org/en/branch2.1/overview.html
CC-MAIN-2016-07
en
refinedweb
I. Preparation Here are some steps to take before writing any ASP.NET code: - Create new ASP.NET MVC application. - Download jQuery webcam plugin and extract it. - Put jquery.webcam.js, jscam.swf and jscam_canvas_only.swf files to Scripts folder of web application. Now we are ready to go. Create webcam page We start with creating default page of web application. I’m using Index view of Home controller. @{ ViewBag. </script> <script> $("#Camera").webcam({ width: 320, height: 240, mode: "save", swffile: "@Url.Content("~/Scripts/jscam.swf")", onTick: function () { }, onSave: function () { }, onCapture: function () { webcam.save("@Url.Content("~/Home/Capture")/"); }, debug: function () { }, onLoad: function () { } }); </script> } <h2>Index</h2> <input type="button" value="Shoot!" onclick="webcam.capture();" /> <div id="Camera"></div> We initialize webcam plugin in additional scripts block offered by layout view. To send webcam capture to server we have to use webcam plugin in save mode. onCapture event is the one where we actually give command to send captured image to server. Button with value “Shoot!” is the one we click at right moment. Saving image to server hard disk Now let’s save captured image to server hard disk. We add new action called Capture to Home controller. This action reads image from input stream, converts it from hex dump to byte array and then saves the result to disk. Credits for String_To_Bytes2() method that I quickly borrowed go to Kenneth Scott and his blog posting Convert Hex String to Byte Array and Vice-Versa. public class HomeController : Controller { public ActionResult Index() { return View(); } public void Capture() { var stream = Request.InputStream; string dump; using (var reader = new StreamReader(stream)) dump = reader.ReadToEnd(); var path = Server.MapPath("~/test.jpg"); System.IO.File.WriteAllBytes(path, String_To_Bytes2(dump)); } private byte[] String_To_Bytes2(string strInput) { int numBytes = (strInput.Length) / 2; byte[] bytes = new byte[numBytes]; for (int x = 0; x < numBytes; ++x) { bytes[x] = Convert.ToByte(strInput.Substring(x * 2, 2), 16); } return bytes; } } Before running the code make sure you can write files to disk. Otherwise nasty access denied errors will come. Testing application Now let’s run the application and see what happens. Whoops… we have to give permission to use webcam and microphone to Flash before we can use webcam. Okay, it is for our security. After clicking Allow I was able to see picture that was forgot to protect with security message. This tired hacker in dark room is actually me, so it seems like JQuery webcam plugin works okay :) Conclusion jQuery webcam plugin is simple and easy to use plugin that brings basic webcam functionalities to your web application. It was pretty easy to get it working and to get image from webcam to server hard disk. On ASP.NET side we needed simple hex dump conversion to make hex dump sent by webcam plugin to byte array before saving it as JPG-file. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
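One possible refinement of the Capture action above, kept as a hedged sketch: restrict it to POST and write each capture to a timestamped file instead of overwriting test.jpg. It assumes a using directive for System.IO and a ~/Captures folder the worker process may write to; the folder name and naming scheme are illustrative.

[HttpPost]
public ActionResult CaptureToFile()
{
    string dump;
    using (var reader = new StreamReader(Request.InputStream))
        dump = reader.ReadToEnd();

    // One file per capture, named by timestamp, so shots are not overwritten
    var fileName = "webcam_" + DateTime.Now.ToString("yyyyMMdd_HHmmssfff") + ".jpg";
    var path = Path.Combine(Server.MapPath("~/Captures"), fileName);
    System.IO.File.WriteAllBytes(path, String_To_Bytes2(dump));

    return Json(new { saved = fileName });
}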
https://dzone.com/articles/using-jquery-webcam-plugin
CC-MAIN-2016-07
en
refinedweb
The EXTRACTVALUE function takes an XMLType instance and an XPath expression and returns a scalar value for the node that the XPath identifies. If the specified XPath points to a node with more than one child, or if the node pointed to has a non-text node child, then Oracle returns an error. The optional namespace_string must resolve to a VARCHAR2 value that specifies a default mapping or namespace mapping for prefixes, which Oracle uses when evaluating the XPath expression(s). For documents based on XML schemas, if Oracle can infer the type of the return value, then a scalar value of the appropriate type is returned. Otherwise, the result is of type VARCHAR2. For documents that are not based on XML schemas, the return type is always VARCHAR2. The following example takes as input the same arguments as the example for EXTRACT (XML). Instead of returning an XML fragment, as the EXTRACT function does, it returns the scalar value of the XML fragment:

SELECT warehouse_name,
       EXTRACTVALUE(e.warehouse_spec, '/Warehouse/Docks') "Docks"
  FROM warehouses e
 WHERE warehouse_spec IS NOT NULL;

WAREHOUSE_NAME       Docks
-------------------- ------------
Southlake, Texas     2
San Francisco        1
New Jersey
Seattle, Washington  3
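The example above does not exercise the optional namespace_string argument. The following hedged variant shows the form it takes, assuming a hypothetical document whose elements are qualified by the namespace shown:

SELECT warehouse_name,
       EXTRACTVALUE(e.warehouse_spec,
                    '/ns:Warehouse/ns:Docks',
                    'xmlns:ns="http://www.example.com/warehouse"') "Docks"
  FROM warehouses e
 WHERE warehouse_spec IS NOT NULL;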
http://docs.oracle.com/cd/B13789_01/server.101/b10759/functions047.htm
CC-MAIN-2016-07
en
refinedweb
VTD-XML Investigation

VTD-XML is a high-performance XML processing model that deals with XML in a binary form instead of the traditional text form. VTD-XML parses an XML document and builds an internal data structure representing the entire XML document in byte[] form. Each "token" of the XML document is represented by a 64-bit integer record. VTD stands for Virtual Token Descriptor.

VTD-XML Core Concepts

Unmarshalling a VTD-XML document:

VTDGen vg = new VTDGen();

// from an existing byte[] (true indicates namespace aware)
vg.setDoc(bytes);
vg.parse(true);

// - or - directly from a file
vg.parseFile("old.xml", false);
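The snippet stops after parsing. As a hedged sketch, this is how the resulting token index is commonly walked with VTDNav and an AutoPilot XPath cursor; the class and method names follow the publicly documented VTD-XML API (com.ximpleware) but should be checked against the version in use, and the XPath and file name are illustrative.

import com.ximpleware.AutoPilot;
import com.ximpleware.VTDGen;
import com.ximpleware.VTDNav;

public class VtdWalk {
    public static void main(String[] args) throws Exception {
        VTDGen vg = new VTDGen();
        if (!vg.parseFile("old.xml", true)) {      // true = namespace aware
            throw new RuntimeException("parse failed");
        }

        VTDNav vn = vg.getNav();                   // cursor over the 64-bit token records
        AutoPilot ap = new AutoPilot(vn);
        ap.selectXPath("/catalog/item/name");

        int i;
        while ((i = ap.evalXPath()) != -1) {       // -1 means no more matches
            int t = vn.getText();                  // index of the text token of the current element
            if (t != -1) {
                System.out.println(vn.toNormalizedString(t));
            }
        }
    }
}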
http://wiki.eclipse.org/index.php?title=User:Rick.barkhouse.oracle.com/VTD&diff=prev&oldid=324699
CC-MAIN-2016-07
en
refinedweb
is the set up of an automated build and deployment environment for SOA Suite. The goal is to automatically build Oracle’s SOA Order Booking demo application and deploy it to an Oracle Application Server that runs the SOA Suite. Oracle’s demo application is selected because it uses all components in the SOA Suite: the brand new ESB, BPEL and Business Rules. The demo application also includes a number of Web Services that are implemented in Java. This is the first in a small series of posts. Focus of part 1 is on describing the nuts and bolts of the automated build environment and subsequently on building the Web Services and BPEL components in the SOA demo application. Goal of part 2 is to automatically build and deploy the ESB components. Finally, in part 3, I will focus on the new unit test capabilities of Oracle BPEL processes. Rationale The last two years I have been working with Oracle SOA products, especially BPEL. In my current project a team of approximately 8 people develops and maintains a fairly large Oracle BPEL based system. One of our responsibilities is to keep the development, test and production environments up to date. Also, we regularly set up new environments, e.g. for testing implementations for new clients of the customer. Especially when we work towards a new release the team rolls out new releases on a daily basis. Although we use Subversion and the build procedure is supported with Ant, one person in the team constantly finds himself occupied with release management and troubleshooting. There is a need for improved control and further automation of this process. Therefore, the team is investigating the use of Luntbuild, a build automation and management tool. Also, the customer plans a migration from the current BPEL release 10.1.2 towards the latest Oracle SOA Suite release 10.1.3.1. Clear business value A Service Oriented Architecture (SOA) bridges the gap between business and information technology. In a typical SOA environment, business processes are constructed from a variety of business services. Each business service may be implemented using different information systems or technologies making a SOA implementation a true integration effort. But the effort is worth your while as all the architectural benefits of the evolution in enterprise integration made their way into SOA. To name a few: - Widely adopted Web Services standards allow easy assimilation of specialised components (favouring buy over build) and - Loosely coupled services that are based on proven messaging technologies provide a highly scalable and robust platform. In the end, all these loosely coupled pieces of technology, each having their own versions and release schedule, need to interoperate for the system to work. That makes release management and integration testing in a SOA environment an important and potentially daunting task. Implement strict procedures and appropriate tools to control this from the very beginning. As this blog post outlines it is not that hard to automate these repetitive and tedious build and deployment tasks. In the end this will save you valuable time and helps to provide your customers with consistent quality systems. Nuts and bolts for creating your own build and deployment server For my build server I installed a number of components. Obviously, to start with I installed an Oracle XE database and Oracle Application Server 10.1.3.1 with the accompanying Oracle SOA Suite. No need to install the Ant build tool as it is bundled with the Application Server. 
Luntbuild The open source tool Luntbuild is not only selected because it is used in my current project. The main reason is that Luntbuild is able to deal with dependencies between different projects. In Luntbuild, a project is the metaphor for anything that you want to build, a ‘buildable unit’ in Luntbuild terms. For example, it is possible to build and deploy a Web Service first and when that task is done, build and deploy the BPEL process that uses the Web Service. Also, Luntbuild (optionally) stores its build data in an Oracle database allowing you to easily track, manage and report on your daily builds. Subversion My source code version control system of choice is Subversion. But Luntbuild does not discriminate; it supports a wide variety of version control systems like CVS, Visual Sourcesafe or Clearcase. Subversion features a powerful command-line interface. Also, there is a great plug-in for Oracle JDeveloper. Alternatively, you can use graphical clients like Tortoise or SmartSVN to access a Subversion repository. I created my repository and subsequently retrieved a working copy using the following two lines: svn import C:\oracle\product\soademo -m "Initial import"<br />svn checkout svn://localhost/projecten/svnrepos/soademo soademo Notice the uri in the checkout command that starts with ‘svn://’. Subversion also supports the file based interface, the similar command being: svn checkout soademo The samples provided for the Luntbuild configuration indicate that this file-based interface can be used. Using it, I stumbled upon a known problem. Hence, we need a server for accessing Subversion. In larger, multi-user environments, Subversion is probably accessed via an Apache server. For my single-user build system Svnserve provides a lightweight and very easy to use stand-alone alternative. Svnserve is already bundled with Subversion. Ant build scripts for the Web Services components The Quick Start guide for the SOA Order Booking application assumes that all components are installed from JDeveloper. In order to automate builds, we need build scripts. For Oracle BPEL projects these Ant build scripts are created automatically by JDeveloper. The demo application does not come with build scripts for the Web Services neither for the ESB components. In this paragraph I outline the creation of an Ant build script for the CreditService. A good starting point is the Ant build script generator in JDeveloper that can be executed to create an Ant build file for the project as it is defined in JDeveloper. Use it to generate a skeleton that at least contains the classpath settings for the Java libraries in the project. Than, add Ant targets for compiling the Java classes that comprise the Web Service as well as for creating Web and Enterprise archive files. 
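The post mentions adding an Ant target for compiling the Java classes but only shows the war and ear targets, so here is a hedged sketch of what the compile target typically looks like. The property names (dir.src, dir.output) and the classpath reference are assumed to come from the JDeveloper-generated skeleton and may need adjusting.

<!-- Compile the Web Service classes before packaging them -->
<target name="rebuild">
  <mkdir dir="${dir.output}"/>
  <javac srcdir="${dir.src}"
         destdir="${dir.output}"
         debug="on"
         encoding="UTF-8">
    <!-- classpath as generated by the JDeveloper skeleton -->
    <classpath refid="classpath"/>
  </javac>
</target>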
The latter two are shown here: <!-- Create war file --><br /><target name="warfile" depends="rebuild"><br /> <echo message="Creating the Web Application module (war) file"/><br /> <delete><br /> <fileset dir="${dir.deploy}" includes="${warfile.name}"/><br /> </delete><br /><br /> <war destfile="${dir.deploy}/${warfile.name}" <br /><br /> <classes dir="${dir.output}" includes="**/*"/><br /> <webinf dir="${dir.html.web-inf}" includes="**/*"/><br /> </war> <br /><br /> <!-- Clean up the build information --><br /> <delete dir="${dir.output}"/> <br /></targ et><br /><br /><!-- Creat e ear file --><br /><target name="earfile" depends="warfile"><br /> <echo message="Creating Enterprise Archive (ear) file"/><br /> <delete><br /> <fileset dir="${dir.deploy}" includes="${appl.name}.ear"/><br /> </delete><br /><br /> <ear destfile="${dir.deploy}/${appl.name}.ear" appxml="${dir.deploy}/application.xml"><br /> <fileset dir="${dir.deploy}" includes="*.war,orion-application.xml"/><br /> </ear><br /></target><br /> Finally, use the oracle:deploy Ant task for OC4J that comes with the Oracle Application Server. This task utilizes the OC4J client administration tools in order to deploy a module to OC4J. Oracle provides clear instructions for use with OC4J standalone, for the container running in an Application Server setting as well as for a clustered environment. Simply add a namespace declaration to your Ant build file, extend the build path so that Ant can access the Oracle Ant task specific libraries and add the oracle:deploy target to the build.xml file: <target name="deploy" depends="earfile"><br /> <echo message="Deploying Enterprise Archive (ear) file"/><br /> <oracle:deploy<br /></target><br /> The important thing here is to have the deployer URI right. Whether the build was successful can be tested with the Application Server Control that comes with a facility to test Web Services. This is shown in the following screenshot: Create similar build scripts or enhance this Ant script for building and deploying the CustomerService and the RapidService of the SOA Demo Application. Take into account that these use different libraries. Building BPEL Suitcases using Ant In SOA Suite 10.1.3.1 standard Ant scripts are used for building and deploying BPEL projects (suitcases). The build.xml file for a new BPEL project is generated automatically when a new BPEL project is created in JDeveloper. It is great to see that the Ant build scripts are now actually used by JDeveloper for BPEL deployments. When running Ant for building the SOAOrderBooking BPEL process I got the message BUILD FAILED: Error while deploying decision services. 
Setting verbose=â€true†on the deployDecisionServices task reveals the problem: deployDecisionServices:<br /> [echo]<br /> [echo]--------------------------------------------------------------<br /> [echo]| Deploying decision services for SOAOrderBooking on localhost, port 80<br /> [echo]--------------------------------------------------------------<br /> [echo]<br />[deployDecisionServices] Start of deploying decision services.<br />[deployDecisionServices] Deploy decision service in directory C:\projecten\soademo\SOAOrderBooking\decisionservices\.svn<br />[deployDecisionServices] Start deploying decision service from directory C:\projecten\soademo\SOAOrderBooking\decisionservices\.svn to J2EE context /rules/default/SOAOrderBooking/1.0/.svn<br />[deployDecisionServices] Replace placeholders in file C:\projecten\soademo\SOAOrderBooking\decisionservices\.svn\war\WEB-INF\wsdl\.svn.wsdl<br />[deployDecisionServices] Replace placeholders failed for C:\projecten\soademo\SOAOrderBooking\decisionservices\.svn\war\WEB-INF\wsdl\.svn.wsdl<br />[deployDecisionServices] Error in ant execution: Replace placeholders in WSDL file failed<br /><br />BUILD FAILED<br />C:\projecten\soademo\SOAOrderBooking\build.xml:126: Error while deploying decision services on server "localhost"<br /> It turns out that the .svn directory that is used by Subversion for administration purposes clutters things up. Apparently, Oracle’s deployDecisionServices Ant task dives into any subdirectory within the decisionservices directory in an attempt to deploy that ‘Decision Service’. This is a minor glitch that requires a workaround. At this point, the pre-deploy and post-deploy targets that are part of the build file come in handy. A simple workaround is to temporarily remove the .svn directory like this: <target name="pre-deploy"><br /> <!-- Add tasks here to be performed prior to process deployment. --><br /> <!-- Temporarily remove the Subversion administration directory from<br /> the decisionservices as it will break the build<br /> --><br /> <move file="${basedir}/decisionservices/.svn"<br /><br /></target> <br /> And when we are done, put it back: <target name="post-deploy"><br /> <!-- Add tasks here to be performed after process deployment. --><br /> <move file="${tempdir}/decisionservices/.svn"<br /><br /></target> <br /> Using the Oracle BPEL Process Manager Client API for managing the environment Oracle BPEL provides easy to use yet powerful versioning capabilities allowing you to run multiple versions of a process simultaneously. This way, existing instances of version 1.0 can finish while version 2.0 is used for new instances of that process. Upon deployment of a new version it likely needs to be marked as the default. The BPEL console provides screen functions for that purpose but that obviously is not very helpful for build and deployment automation. The Java client API (that is also used by the BPEL console) comes to the rescue. For marking a new process as the default revision, the following code will do: public void markProcessAsDefault(String processId, <br /> String revision) throws ServerException {<br /> if (locator == null)<br /> locator = getLocator();<br /> IBPELProcessHandle iBPELProcessHandle = <br /> locator.lookupProcess(processId, revision);<br /> if (iBPELProcessHandle != null && <br /> !iBPELProcessHandle.isDefaultRevision()) {<br /> iBPELProcessHandle.markAsDefaultRevision();<br /> }<br />}<br /> The Client API is packed with other useful functions, e.g. 
for clearing the WSDL cache, something you need desperately during BPEL deployments. But the client API is also very useful for performing tedious management tasks like removing finished process instances from your BPEL system in order to keep the dehydration store lean and mean. The pre-deploy and post-deploy targets in the Ant build files for a BPEL process also come in handy for performing these additional tasks. The following Java class extends Ant by turning the code snippet for marking a process revision as the default into an Ant task that can be invoked from the post-deploy target: package nl.amis.soa.ant;<br /><br />import org.apache.tools.ant.Task;<br />import org.apache.tools.ant.BuildException;<br /><br />import nl.amis.soa.bpel.BPELBuildUtils;<br /><br />public class MarkBPELProcessAsDefault extends Task {<br /><br /> private String processId;<br /> private String revision;<br /><br /> public MarkBPELProcessAsDefault() {<br /> }<br /><br /> public void setProcessId(String processId) {<br /> this.processId = processId;<br /> }<br /><br /> public String getProcessId() {<br /> return processId;<br /> }<br /><br /> public void setRevision(String revision) {<br /> this.revision = revisi on; <br /> }<br /><br /> public String getRevision() {<br /> return revision;<br /> }<br /><br /> public void init() {<br /> super.init();<br /> }<br /><br /> public void execute() throws BuildException {<br /> if (processId != null && revision != null) {<br /> BPELBuildUtils bpelBuildUtils = new BPELBuildUtils();<br /> try {<br /> bpelBuildUtils.markProcessAsDefault(processId, revision);<br /> } catch (Exception e) {<br /> throw new BuildException("Process with id " + processId + <br /> " and revision " + revision + <br /> " could not be marked as default." +<br /> e.getMessage());<br /> }<br /> } else {<br /> throw new BuildException("Process not correcly identified; either processId or revision not given.");<br /> }<br /> }<br />}<br /> Configuring Luntbuild Now that the entire toolkit is installed and the Ant build scripts are created, it is time to configure Luntbuild. I simply created Luntbuild projects for each component in the SOA demo application. Configuration is straightforward and I will not go into much detail here.. Important for building the BPEL processes is the correct setting of the environment variables. There is a simple trick to get there right: open a BPEL Developer Prompt window and grab the environment variables from there. Notice that there is still an OBANT_CLASSPATH environment variable set . For setting up dependencies between projects Luntbuild provides multiple strategies. I set up dependencies between the projects preserving the order that is outlined in the SOA demo application quick start guide and have each project trigger the next in order. Since this blog post is not a Luntbuild configuration guide, I will not discuss the remaining configuration details here. Stop talking, start building! In the way I have arranged the dependencies starting the build for the BPEL processes and Web Services of the SOA Suite demo application requires the start of the build and deployment of the SelectManufacturer process. Pictures speak louder than words: Note that the ESB components in the demo application are missing, I created these with the help of JDeveloper. As indicated earlier, automating that step will be the subject of a following post. Concluding remarks So far so good: with modest effort I was able to set up my build environment and have reached the goals set for part 1. 
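To actually run the class above from the build, Ant first has to be told about it with a taskdef. Here is a hedged sketch of wiring it into the post-deploy target; the jar name, classpath and revision value are illustrative, and the jar must also contain or reference the BPEL client libraries.

<target name="post-deploy">
  <!-- Make the custom task known to Ant -->
  <taskdef name="markBPELProcessAsDefault"
           classname="nl.amis.soa.ant.MarkBPELProcessAsDefault"
           classpath="${basedir}/lib/bpel-build-utils.jar"/>

  <!-- Mark the revision that was just deployed as the default one -->
  <markBPELProcessAsDefault processId="SOAOrderBooking" revision="1.0"/>
</target>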
That confirms it is not hard to automate repetitive and tedious build and deployment tasks indeed. It also shows that Oracle has effectively leveraged industry-standard build and deployment techniques for its BPEL product. And I have not even touched on the possibilities of using tokens for deploying to multiple environments. In-the-small, this blog demonstrated simple yet effective usage of technology to make your life easier. It applies to any software engineering effort but is especially relevant to SOA-based systems in which interoperation of loosely coupled components is important. Hope this helps you save valuable time and helps to provide your customers with consistent quality systems. I have a peculiar problem in deploying a BPEL process(SOAOrderBooking.jpr in SOADEMO given by oracle) using ANT. The following is the error i get while deployment: Error happened when reading wsdl at “”, because “Error reading import of: Failed to read WSDL from: HTTP connection error code is 407″. When i tried to access all the wsdls by typing the urls in my browser, i could get the wsdls but cannot access the third wsdl mentioned in the above error: where tranningchett is the host name of my machine and infotech is the NT domain. The same wsdl is visible if I access through the url: Please advice how can i resolve this. Thanks in advance. Hoi Sjoerd, Goed verhaal, to-the-point, goed toegelicht met code voorbeelden en screenshots. Leuk om iets van je te lezen. Groet, Rob Nice Post. Can’t wait for part 2.
https://technology.amis.nl/2007/02/23/soa-suite-build-deployment-and-test-automation-part-1/
CC-MAIN-2016-07
en
refinedweb
I am new to programming, but I have done basic input/output operations. Now I want to move on to linked list programming, starting with a singly linked list. Before that, I am having a problem with the Dev-C++ compiler: it is not compiling this source file. The code is:

#include <iostream>

int main()
{
    using namespace std;
    cout << "hey";
    system("pause");
    return 0;
}

The errors listed below are:

i:\gw\lib\crt2.o(.text+0x8) In function `_mingw_CRTStartup':
[Linker error] undefined reference to `__dyn_tls_init_callback'
[Linker error] undefined reference to `__cpu_features_init'
i:\gw\lib\crt2.o(.text+0x8) ld returned 1 exit status
http://www.dreamincode.net/forums/topic/277866-help-need-for-programming-link-list-in-dev-c/
CC-MAIN-2016-07
en
refinedweb
Android Essentials: Using the Contact Picker This tutorial will not only show you how to launch the contact picker and get results, but also how to use those results in Android SDK 2.0 and above. Getting Started This tutorial will start out simple, but then we’ll get in to some of the technical details of using Contacts with the ContactsContract class, which was introduced in API Level 5. Make sure you have your Android development environment installed and configured correctly. You’re free to build upon any application you have, start a new one from scratch, or follow along using the InviteActivity code in our open source project. We’ll then be adding functionality to allow a user to choose one of their existing contacts and send a canned message to them. We’re going to dive right in, so have all of your code and tools ready. Finally, make sure your device or emulator has some contacts configured (with names and emails) within the Contacts application. Step 1: Creating Your Layout There are two essential form controls necessary for the contact picker to work. First, we need an EditText field where the resulting email will show. Second, we need some way for the user to launch the contact picker. A Button control works well for this. The following Layout segment has both of these elements defined appropriately: <RelativeLayout android: <EditText android:</EditText> <Button android:</Button> </RelativeLayout> This layout XML is part of a larger layout. Here’s what it looks like in the layout designer, complete with the string resources filled out: Step 2: Launching the Contact Picker Now you need to write the code to handle the Button push, which will launch the contact picker. One of the most powerful features of the Android platform is that you can leverage other applications’ functionality by using the Intent mechanism. An Intent can be used along with the startActivityForResult() method to launch another Android application and retrieve the result. In this case, you can use an Intent to pick a contact from the data provided by the Contacts content provider. Here’s the implementation of doLaunchContactPicker(): import android.provider.ContactsContract.Contacts; import android.provider.ContactsContract.CommonDataKinds.Email; private static final int CONTACT_PICKER_RESULT = 1001; public void doLaunchContactPicker(View view) { Intent contactPickerIntent = new Intent(Intent.ACTION_PICK, Contacts.CONTENT_URI); startActivityForResult(contactPickerIntent, CONTACT_PICKER_RESULT); } Note: The import commands are important here. Make sure you’re using the Contacts class from the ContactsContract and not the older android.provider.Contacts one. Once launched, the contacts picker in your application will look something like this: Step 3: Handling the Results Now you are ready to handle the results of the picker. Once the user taps on one of the contacts in the picker, focus will return to the calling Activity (your application’s Activity). You can grab the result from the contacts picker by implementing the onActivityResult() method of your Activity. Here you can check that the result matches your requestCode and that the result was good. 
Your onActivityResult() method implementation should be structured like this: protected void onActivityResult(int requestCode, int resultCode, Intent data) { if (resultCode == RESULT_OK) { switch (requestCode) { case CONTACT_PICKER_RESULT: // handle contact results break; } } else { // gracefully handle failure Log.w(DEBUG_TAG, "Warning: activity result not ok"); } } You’ll get a result other than RESULT_OK if the user cancels the operation or if something else goes wrong. Step 4: Reading the Result Data The final parameter to onActivityResult is an Intent called “data.” This parameter contains the results data we are looking for. Different Intents will return different types of results. One option for inspecting the results is to display everything found in the Extras bundle in addition to the data Uri. Here’s a code snippet that will show all of the Extras, should any exist: Bundle extras = data.getExtras(); Set keys = extras.keySet(); Iterator iterate = keys.iterator(); while (iterate.hasNext()) { String key = iterate.next(); Log.v(DEBUG_TAG, key + "[" + extras.get(key) + "]"); } Uri result = data.getData(); Log.v(DEBUG_TAG, "Got a result: " + result.toString()); We’re not really interested in the Extras bundle for the contacts picker because it doesn’t contain the information we need. We just want the Uri which will lead us to the important contact details. Step 5: Understanding the Result In the onActivityResult() callback, we are supplied the Uri to the specific contact that the user chose from the contact picker. Using this Uri directly would allow us to get the basic Contact data, but no details. However, we are interested in determining the email address of the contact. So, an easy way to deal with this is to just grab the contact id from the Uri, which is the number at the end of the path: The full Uri looks something like: content://com.android.contacts/contacts/lookup/0r7-2C46324E483C324A3A484634/7 In this case, the resulting id would simply be 7. We can retrieve the contact identifier using the getLastPathSegment() method, as follows: // get the contact id from the Uri String id = result.getLastPathSegment(); Step 6: Querying the Contacts Database for Email Now that you have the identifier for the chosen contact, you have all the information you need to query the Contacts content provider directly for that contact’s email address. Android content providers are a powerful way of sharing data amongst applications. The interface to them is similar to that of a database and many are database backed, using SQLite, but they need not be. One way you can query the contacts content provider for the appropriate contact details is by using the default ContentResolver with one of the ContactsContract.CommonDataKinds subclasses. For email, you can use the ContactsContract.CommonDataKinds.Email class as follows: // query for everything email cursor = getContentResolver().query( Email.CONTENT_URI, null, Email.CONTACT_ID + "=?", new String[]{id}, null); Some other useful ContactsContract.CommonDataKinds subclasses include Phone, Photo, Website, Nickname, Organization, and StructuredPostal. Step 7: Viewing the Query Results Certainly, you could read the class documentation for the ContactsContract.CommonDataKinds.Email class and determine what kind of results to expect. However, this is not always the case so let’s inspect the results of this call. This is a very handy trick if you are working with a content provider that has less-than-adequate documentation, or is not behaving as expected. 
This snippet of code will show you, via LogCat output, every column and value that is returned from the query to the content provider: cursor.moveToFirst(); String columns[] = cursor.getColumnNames(); for (String column : columns) { int index = cursor.getColumnIndex(column); Log.v(DEBUG_TAG, "Column: " + column + " == [" + cursor.getString(index) + "]"); Now you can see that, indeed, they really did mean for the email to come back via a column called DATA1, aliased to Email.DATA. The Android Contacts system is very flexible, and this sort of generic column name shows where some of that flexibility comes from. The email type, such as Home or Work, is found in Email.TYPE. Step 8: Retrieving the Email We have all of the data we need to actually get the email address, or addresses, of the contact picked by the user. When using database Cursors, we have to make sure they are internally referencing a data row we’re interested in, so we start with a call to the moveToFirst() method and make sure it was successful. For this tutorial, we won’t worry about multiple email addresses. Instead, we’ll just use the first result: if (cursor.moveToFirst()) { int emailIdx = cursor.getColumnIndex(Email.DATA); email = cursor.getString(emailIdx); Log.v(DEBUG_TAG, "Got email: " + email); } It’s important to remember that a contact may have many addresses. If you wanted to give the user the option of choosing from multiple email addresses, you could display your own email chooser to pick amongst these after the user has chosen a specific contact. Step 9: Updating the Form After all that work to get the email address, don’t forget to update the form. You might also consider informing the user if the contact didn’t have any email address listed. EditText emailEntry = (EditText)findViewById(R.id.invite_email); emailEntry.setText(email); if (email.length() == 0) { Toast.makeText(this, "No email found for contact.", Toast.LENGTH_LONG).show(); } And there it is: Step 10: Putting it All Together We skipped over two important items in this tutorial that are worth mentioning now. First, we didn’t include any error checking; we did this for clarity, but in production code, this is an essential piece of the solution. An easy way to implement some checking would be to to wrap just about everything in a try-catch block. Second, you need to remember that Cursor objects require management within your Activity lifecycle. Always remember to release Cursor objects when you are done using them. 
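The tutorial only keeps the first address. Where a contact has several, the chooser idea mentioned above can be sketched roughly as follows, inside onActivityResult() so that this refers to the Activity; it assumes the usual imports (java.util.ArrayList, android.app.AlertDialog, android.content.DialogInterface) and the existing invite_email field.

// Collect every address for the picked contact instead of only the first one
final ArrayList<String> emails = new ArrayList<String>();
if (cursor.moveToFirst()) {
    int emailIdx = cursor.getColumnIndex(Email.DATA);
    do {
        emails.add(cursor.getString(emailIdx));
    } while (cursor.moveToNext());
}

if (emails.size() > 1) {
    // Let the user pick one of the addresses with a simple dialog
    new AlertDialog.Builder(this)
        .setTitle("Choose an email address")
        .setItems(emails.toArray(new String[emails.size()]),
                  new DialogInterface.OnClickListener() {
                      public void onClick(DialogInterface dialog, int which) {
                          EditText emailEntry = (EditText) findViewById(R.id.invite_email);
                          emailEntry.setText(emails.get(which));
                      }
                  })
        .show();
}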
Here’s the complete implementation of the onActivityResult() method to put these points in perspective: @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { if (resultCode == RESULT_OK) { switch (requestCode) { case CONTACT_PICKER_RESULT: Cursor cursor = null; String email = ""; try { Uri result = data.getData(); Log.v(DEBUG_TAG, "Got a contact result: " + result.toString()); // get the contact id from the Uri String id = result.getLastPathSegment(); // query for everything email cursor = getContentResolver().query(Email.CONTENT_URI, null, Email.CONTACT_ID + "=?", new String[] { id }, null); int emailIdx = cursor.getColumnIndex(Email.DATA); // let's just get the first email if (cursor.moveToFirst()) { email = cursor.getString(emailIdx); Log.v(DEBUG_TAG, "Got email: " + email); } else { Log.w(DEBUG_TAG, "No results"); } } catch (Exception e) { Log.e(DEBUG_TAG, "Failed to get email data", e); } finally { if (cursor != null) { cursor.close(); } EditText emailEntry = (EditText) findViewById(R.id.invite_email); emailEntry.setText(email); if (email.length() == 0) { Toast.makeText(this, "No email found for contact.", Toast.LENGTH_LONG).show(); } } break; } } else { Log.w(DEBUG_TAG, "Warning: activity result not ok"); } } You’ve now got everything you need to complete the application. Remember, though, that if you’re working with real data, take care not to spam your friends too much. ☺ Conclusion In this tutorial, you’ve learned how to launch the Contacts picker and retrieve the chosen result. You also learned how to inspect the results and retrieve the email address for the picked contact using the contacts content provider. You can use this method to retrieve all sorts of information about a given contact.Powered by
http://code.tutsplus.com/tutorials/android-essentials-using-the-contact-picker--mobile-2017
CC-MAIN-2016-07
en
refinedweb
Code covered by the BSD License by Fabrice Fabrice (view profile) 2 files 173 downloads 4.48077 23 Nov 2009 (Updated 06 Apr 2011) Extracts automatically comments from your Matlab .m files using Doxygen to generate documentation This file was selected as MATLAB Central Pick of the Week | Watch this File This package allows you to extract automatically comments from your Matlab .m files using Doxygen to generate documentation. This package provides : - a perl script (m2cpp.pl) used to filter the .m files so that Doxygen can understand them - a template for the Doxyfile file (configuration file for Doxygen) which has to be modified according to the location of your code - documentationGuidelines.m, an .m file which describes how you should comment your code so that Doxygen can extract it and create nice documentation - classDocumentationExample.m : an .m file describing possible comment for classes - all the documentation (html format) automatically generated by Doxygen from the two .m files (see Doc/html/index.html), which provides informations about installation and how to write Doxygen comments. Installation details : - You need to have the Doxygen software installed (version 1.5.9 or newer required (tested with version 1.7.1)) - You need to have perl installed (perl is shipped with Matlab, located usually in $matlabroot\sys\perl\win32\bin) - unzip the DoxygenMatlab.zip to C:\DoxygenMatlbab (for example) - get the Doxyfile file from the C:\DoxygenMatlbab directory and replace the default Doxyfile provided by Doxygen - edit the Doxyfile file (or use the DoxyWizard tool provided by Doxygen) to modify a few settings : o EXTENSION_MAPPING=.m=C++ o FILTER_PATTERN=*m=C:\DoxygenMatlbab\m2cpp.pl o PERL_PATH=<path to your perl version> o INPUT=<directory where your documented code is located> o OUTPUT_DIRECTORY=<directory where you want to generate your documentation> o STRIP_FORM_PATH=<directory where your documented code is located> Note for Windows users : In certain circumstances, the association between .pl files and the perl executable is not well configured, leading to "Argument must contain filename -1 at C:\DoxygenMatlab\m2cpp.pl line 4" when running doxygen. To work around this issue, you should execute the following lines in a Windows command prompt ("cmd") : assoc .pl=PerlScript ftype PerlScript=C:\Program Files\MATLAB\R2010b\sys\perl\win32\bin\perl.exe %1 %* (don't forget to replace the path to the perl.exe file with yours in the line above) Hello, I am generating a Doxygen documentaion for .m file.Firstly I generate it by simply editing the doxygen.conf file, I add FILE_PATTERNS= .m and EXTENSION_MAPPING= .m=c++. But it did not produce correct documentation of .m file. Now I am following your post "Using Doxygen with matlab and download "Doxygenmatlab" package. I have tried to understand all things, But unable to understand. Guide me what I have to do? possible typos in the instructions and doxyfile. should be FILTER_PATTERNS otherwise doxygen ignoring the tag Great file, using it since a long time. However, now finally changed to Windows 7 I get the error/warning message warning: "Warning: Found ';' while parsing initializer list! (doxygen could be confused by a macro call without semicolon)" and my .m files are not parsed any longer. Do anybody know the workaround?! (Sonsoles earlier had the same problem but didn´t posted the solution;-() This script is great! It would be fantastic to have also a method to generate the source browser correctly with the comments stripped. 
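As a quick reminder of the comment style the m2cpp.pl filter extracts (described in full in documentationGuidelines.m), here is a minimal sketch of a documented MATLAB function; the tags are ordinary Doxygen commands and the function itself is just an illustration.

% ======================================================================
%> @brief Compute the area of a circle.
%>
%> @param radius Radius of the circle (scalar or array).
%>
%> @retval area Area corresponding to each radius.
% ======================================================================
function area = circleArea(radius)
    area = pi * radius.^2;
end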
I've tried to strip the comments from the source files with a perl filter used in FILTER_SOURCE_FILES. But then the hyperlinks from the documentation to the code are wrong. Any suggestion on how to do this?. try sphinx with the matlabdomain Thank you for creating this Fabrice. I will take a look at it and provide feedback. -Adithya Hello Fabrice. My name is Eckard Klotz. I\'m the developer behind the two sourceforge projects: Moritz : MuLanPa : Both are open source freeware-projects and should be used as an extension of Doxygen to create algorithm describing diagrams like nassi shneiderman or uml activity diagrams. Today in both projects. The snapshot of the project Moritz contains in the windows-distribution Moritz2_WIN32_2013_05_27.zip in the folder LangPack\pascxal your filter. You will find in the file ReadMe_2013_05_27.txt The following paragraphs: Since this distribution contains doxygen-configurations also and with the version 1.9.08 the languages Matlab and Pascal are supported also two third-party filters are added in the associated LangPack folders: Matlab: Filter: m2cpp.pl Author: Fabrice Internet: Pascal: Filter: pas2dox.exe Author: Darren Bowles Internet: I hope that this is OK for you but please don't hesitate to argue if you have some doubts about this. You will find a forum-section in both projects. Best regards, Eckard Klotz. Thanks Fabrice for your file. Now it is working in my matlab but i want to know if with this GraphViz options are available. i mean class graphs and caller graphs among others. Thanks again I had to update m2cpp.pl for line endings on my Mac. In VIM, I used ":set ff=unix" and that did the trick. Hello!! i am runing Doxygen in my matlab code and the question is, it should show some graph as well. i got the following errors: C:/Users/Sonsoles/Documents/Fortran/Kompressorkoden/get_o_over_c.m:23: warning: Found ';' while parsing initializer list! (doxygen could be confused by a macro call without semicolon) C:/Users/Sonsoles/Documents/Fortran/Kompressorkoden/get_omega_s.m:21: warning: Found ';' while parsing initializer list! (doxygen could be confused by a macro call without semicolon) C:/Users/Sonsoles/Documents/Fortran/Kompressorkoden/Xfunc.m:21: warning: class for member `M::::::calculate:' cannot be found. thanks for the help I am under Windows XP R2011b. When I run Doxygen , the perl file m2cpp.pl shows up on the screen for each M-file that is processing. I have to close the file manually. Moreover , your perl file requires to change all our syntax since the filter extracts only lines beginning with %>. For function description, same issue , we have to use a specific syntax. The other submission "mtoc++ - Doxygen filter for Matlab and tools " is better , it doesn't require any change in our codes. I definitely prefer m2html which is plug and play. Hello Fabrice! Finally - poh - I have m2cpp running in Windows 8 with Matlab 7.0.4! First I had problems with access rights for involved programs, had to set Run as administrator, also had to patch in registry to make perl accept command line arguments. Then m2cpp.pl crashed at row 43. After having modified by inserting "use FileHandle;" at first row and replaced row 43 "open(my $in, $my_fic);" by " my $in = new FileHandle; # Fix for old version of Perl $in-> open($my_fic); " things run nicely!!! Thanks for your submission Håkan Fridén hakan.friden@frideninfotech.eu Question: Is this supposed to work for call graphs, like on M2HTML? The *inheritance* graphs are being generated properly. 
But I don't see any call graphs for any of my functions or class methods. Thanks ok I just found out that I have to use the \fn keyword with the input name written exactly the same as what m2cpp generates. For example, I need to add a line saying %> \fn my_func_name(in par1,in par2) Thanks for m2cpp a very nice work! I'm trying to use it with doxygenwizard 1.8.2 under windows 7. I believe that I've followed all instructions but I just got an empty index.html page. What might be the problem? Below are my logs Adding custom extension mapping: .m will be treated as language c++ Searching for include files... Searching for example files... Searching for images... Searching for dot files... Searching for msc files... Searching for files to exclude Searching for files to process... Searching for files in directory C:/Project/xxx Reading and parsing tag files Preprocessing C:/Project/xxx/xxx1.m... Parsing file C:/Project/xxx/xxx1.m... Preprocessing C:/Project/xxx/xxx2.m... Parsing file C:/Project/xxx/xxx2.m... Building group list... Building directory list... Building namespace list... Building file list... Building class list... Associating documentation with classes... Computing nesting relations for classes... Building example list... Searching for enumerations... Searching for documented typedefs... Searching for members imported via using declarations... Searching for included using directives... Searching for documented variables... Building member list... Searching for friends... Searching for documented defines... Computing class inheritance relations... Computing class usage relations... Flushing cached template relations that have become invalid... Creating members for template instances... Computing class relations... Add enum values to enums... Searching for member function documentation... Building page list... Computing page relations... Determining the scope of groups... Sorting lists... Freeing entry tree Determining which enums are documented Computing member relations... Building full member lists recursively... Adding members to member groups. Computing member references... Inheriting documentation... Generating disk names... Adding source references... Adding xrefitems... Sorting member lists... Computing dependencies between directories... Generating citations page... Counting data structures... Resolving user defined references... Finding anchors and sections in the documentation... Combining using relations... Adding members to index pages... Generating style sheet... Generating example documentation... Generating file sources... Generating file documentation... Generating page documentation... Generating group documentation... Generating class documentation... Generating namespace index... Generating graph info page... Generating directory documentation... Generating index page... Generating page index... Generating module index... Generating namespace index... Generating namespace member index... Generating annotated compound index... Generating alphabetical compound index... Generating hierarchical class index... Generating member index... Generating file index... Generating file member index... Generating example index... finalizing index lists... symbol cache used 16/65536 hits=192 misses=16 lookup cache used 0/65536 hits=0 misses=0 finished... When enabling GENERATE_HTMLHELP the search engine (SEARCHENGINE) should be disabled. I'll do it for you. *** Doxygen has finished Hi, I really like the doxygen integration and I'll use it from now on. 
Only a minor feature is missing from my point of view. I would like to comment on class variables after the definition and not before. Is that somehow possible or is there a quick fix to enable the tool for that? e.g. classdef CameraModel properties ImageSize = [640 480]; %> Camera image size in pixel [x y] end end Sorry I'm not really familiar with perl so that I couldn't do it by myself. Sebi Actually someone helped me already to solve the problem. The file m2cpp.pl is formatted for DOS. One has to reformat it for Linux in order to get everything to work correctly (I did it with the tofrodos package). The program is great :-) It *almost* works for me and I suspect it may have to do with a Linux problem. I receive the following error messages sh: /media/sda5/Giovanni/Utilities/Matlab_utilities/Matlab_exchange_downloads/DoxygenMatlab/m2cpp.pl: not found and the html files after this do nto contain any documentation (as no c++-style comments are produced). The m2cpp.pl is there, though, I have tried to move it around and I still receive corresponding error messages. Could anyone tell me where I have to start looking for solving this problem? Thanks in advance. Great contribution, thanks a lot! Dear Fabrice, is your package able to document code that uses the old class specification (before use of "classdef")? Thanks a lot, Tom Hello everyone I get an error by using the pearl-script like follows: Reading and parsing tag files Preprocessing D:/Daten/Projekt../src/fcn.m... Can't use an undefined value as filehandle reference at D:\Daten\Projekt\..\Filter\m2cpp.pl line 47 (formaly line 53). Parsing file D:/Daten/Projekt/../src/fcn.m... I tried to implent the patches from Bastian (07.07.2011) but the only the line-number changed from 43 to 47. I use the pearl-version of Matlab 2009b on windows XP-SP3 together with Doxygen version 1.7.4 Thanks for your help, Eckard Klotz. :-) Wonderful interface to doxygen, very easy to implement. One very small niggle is the shebang at the top of the perl script in the latest update points to perl.exe -- this needs to be edited before it will work on *nix / OS X... LeFlaux, I guess you problem is a Doxygen issue (maybe the EXTRACT_PRIVATE is set to NO in your Doxyfile). If not, could you send me a test case so that I can reproduce your problem ? Dear Fabrice, thank you for this great and useful piece of software! Just one little question: I got directories named `private' in my Matlab-project. The documentation of .m files contained in such directories are not displayed. Any suggestions? The same problem (see the post of the 11 Mar 2011) happens when the attribute name begins with "end" (for instance "endDate"). Thx a lot for this great tool ! I experienced a little problem when using it on "@" folder class definition. As strange as it looks, when the name of my function (defined in an external .m file inside the @ folder) begins with a "m" letter, the function does not appear in the generated class description. It works just fine when I change it in another letter. Any idea? thanks Hi Fabrice, I noticed a small problem. At multiple inheritance from a class from a package-directory the base class name is defined incorrectly. For example: +My - package-directory BaseClass.m - base class in +My directory SomeClass.m - some class in +My directory "SomeClass" define: classdef SomeClass < My.BaseClass end When you create a documentation obtained that SomeClass is inherited from the "My", a not from "My.BaseClass" or "BaseClass". 
:) It seems good, however I do encounter some problems: one of my scripts is a classdef deriving from 'handle' : classdef MyClass < handle the parser gives an error since < is not closed by a > Is there a way to avoid this ? Another problem is that the parser gives errors on non-ascii letters (é, è, ...) I've been using and enjoying your product. I have a quick question though. 1)I am using Matlab 2007a with only fuctions, i.e., not classes. I cannot get doxygen to create any graphs, either using graphviz or the included graphing library. I'm familiar with doxygen, and was wondering if your product in conjunction with doxygen supports graph generation via function calls alone? I uses a similiar perl script posted somewhere online to convert from .m to .c and it could generate graphs in doxygen. Thank you for your time, the .m is included below, in which no graph is generated (I have graph generation enabled in doxygen and all the setting in expert are correct aslwell I believe). %> @brief Brief description of the function %> %> More detailed description. %> %> \latexonly %> $\bigl(\begin{smallmatrix} %> a&b\\ c&d %> \end{smallmatrix} \bigr)$ %> \endlatexonly %> %> @param arg1 First argument %> @param arg2 Second argument %> %> @retval out1 return value for the first output variable %> @retval out2 % ====================================================================== function [out1, out2] = hhh( arg1, arg2) out1 = arg2; out2 = arg1; c = fb(out1); end % ====================================================================== %> @brief Brief description of the function %> %> More detailed description. %> %> \latexonly %> $\bigl(\begin{smallmatrix} %> a&b\\ c&d %> \end{smallmatrix} \bigr)$ %> \endlatexonly %> %> @param arg1 First argument %> %> @retval out1 return value for the first output variable % ====================================================================== function [out1] = fb( arg1) out1 = arg2; end 2) when I use /latexonly /endlatex only Doxygen commands to put in a matrix in latex, my matrices/equations have the "///" comment denoted for c++, this is not a problem, but if there is an easy fix I'll like to know. Thanks for the update, Fabrice! Works perfect. Linux users may find my tweaked Doxyfile for this m2cpp.pl tool useful: By the way, does anybody know how to add citations to the documentation from Doxygen using LaTeX? I mean, one have \cite commands in the documentation, and those cited sources should appear in the doxygen-generated docs. Any ideas? Just awesome! :) Thank You very much for this tool! Note on m2cpp.pl: Linux users should change the first line to #!/usr/bin/perl) Fabrice, please, update your files on fileexchange. Last version works fine! Great utility, thanks. Question about using it with multiple-file classes - I get it to work with a class in a single file, but when I set up a class in an @folder, I don't seem to get the function based methods (in separate files) located within my doxygen derived class. Another strange point with inheritance. Imagine class Father inherited by two classes Son1 and Son2. In Father there is a method Father::doIt() overloaded by Son1::doIt(), but inherited directly from Father by Son2. The strange point is that in Son2 documentation you’ll see comments about Father::doIt() (that is normal) followed by «Reimplemented in Son1» (that is weird — or may be this is a standard C++ documentation agreement ?) I mean, with a brief form of abstract methods. 
With the following example : methods (Abstract) function calcPMInitiale(obj) end projection(obj, iteration) end The first method will be detected, but not the second (correct definition for MatLab). But I think it doesn't work properly with abstract methods... A small correction to define classes (classdef (Hidden, Sealed) SomeClass) are handled correctly: if (/(^\s*classdef)\s*(\s*\([\w,\s]+\s*\))?\s*([\w\d_]+)\s*<?\s*([\w\d_]+)?(.*)/) I think the problem with this line in the file "m2cpp pl": if (/(^\s*classdef)\s*(\s*\(Enumeration\s*\))?\s*([\w\d_]+)\s*<?\s*([\w\d_]+)?(.*)/) "Enumeration" is undocumented attribute? I've corrected as follows: if (/(^\s*classdef)\s*(\s*\([\w]+\s*\))?\s*([\w\d_]+)\s*<?\s*([\w\d_]+)?(.*)/) and added the else condition: ... else { $className = $3; $classDef = "class ".$className.":public $4"; } This works, but I think it is not quite correct. Hello! I've found a bug. Define a class with attributes: classdef (Hidden) SomeClass < handle end ...or with any other class attribute. Such a definition is Parsed incorrectly. Fabrice, You could fix this bug? I don't understand anything in Perl. :) Hello! Do not use Perl, which comes with Matlab! It does not work correctly. Use ActivePerl. Thank you, Fabrice, for a really nice tool! It seems, as Felix has noted, that only classes contained in one single .m-file are supported, and not the folder (@-)structure. Furthermore, the private class-properties are displayed as public ones. Is there any chance that you could fix this? I'm aware that the Matlab style of Access, SetAccess, and GetAccess might be tricky, but it would be really helpful. Thanks a lot! Does this tool handle collaboration diagrams? When a class is composed using another class, it does not show up in the collaboration diagram. The collaboration diagram is always the same as the class diagram. Am I doing something wrong? Thank you very much for this great tool. Could it be that class hierarchies are not properly detected when the classes are in a package? Fabrice, Thanks for looking in to this, changing the extension from m to .m fixed the problem with 1.7.1. ? Very nice tool - many thanks! I found I had to use Doxygen 1.6.1, not 1.7.1 (latest). I see from you've seen something similar before. Ed, I changed the m2cpp.pl file so that unix users can use it. Thanks for your feedback. I had a problem using the script under Unix (Linux) since the first line of the perl script, which defines the interpreter to use (/usr/bin/perl), was terminated using a carriage-return (\r) instead of a line feed (\n). Because of this the perl interpreter could not be found. Since the first line only makes sense in a Unix environment, I think the line-ending character for the first line (at least) should be changed to a line-feed. Mike, you can comment only variables outside function block (this is a Doxygen limitation), that is : - arguments passed to functions, for example : %> @brief Description of the foo function %> @param a this is a description of var a %> @param b this is a description of var b function foo(a, b) ... end - properties in a classdef definition, for example : classdef foo properties %> this is a description of the var a a %> this is a description of the var b b .. end ... end Fabrice, thank for the program! The only question I have: is it possible to comment variables by such script? I mean, if I have a = [1,2,3,4]; %> this is a description of the var can it be displayed in doxygen-generated docs? 
Lien-Chi, Maybe your perl exe is not properly installed (it seems it is the case for the perl provided by Matlab). You could try the following workarounds : 1. Set the following variables in the Doxyfile : INPUT_FILTER=perl m2cpp.pl FILE_PATTERNS=*.m 2. If it doesn't work you should try to install ActivePerl : with this version of perl, everything is working fine. Why I got the following message? "Argument must contain filename -1 at C:\Users\user\Desktop\DoxygenMatlab\m2cpp.pl line 4" Nice! Would be useful if it can also generate enum classes by handling something like this, classdef(Enumeration) Color < int32 enumeration Red(0) Green(1) Yellow(2) end end This is exactly what I was looking for, and to see that it works on subfunctions and object oriented code is simply brilliant. This is a wonderful replacement to mtoc that works wonderfully. Many kudos to the author for simplifying my life and increasing my productivity. Brilliant! Added support for : * @-folder for classes * multiple inheritance * class attributes * private / protected / public properties and methods * abstract methods * constant properties * events * ignored arguments (~) Fixed a few bugs : - property names (and method names) beginning with end are now allowed (as pointed out by Vincent) - inheritance with classes containing a dot is now supported (as pointed out by Evgeni Pr)
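The comments above refer repeatedly to the %> comment style that m2cpp.pl extracts; for readers who have not seen it, a minimal MATLAB file marked up in that style (function name, arguments, and the formula are invented purely for illustration) looks roughly like this:

% ======================================================================
%> @brief Compute the pressure ratio for one compressor stage.
%>
%> More detailed description of the calculation goes here.
%>
%> @param mdot   Mass flow in kg/s
%> @param speed  Shaft speed in rpm
%>
%> @retval ratio Pressure ratio of the stage
% ======================================================================
function [ratio] = stage_ratio(mdot, speed)
    % ordinary comments (single %) are ignored by the filter
    ratio = 1 + 0.001 * mdot * speed;   % placeholder formula for the example
end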
http://www.mathworks.com/matlabcentral/fileexchange/25925-using-doxygen-with-matlab
CC-MAIN-2016-07
en
refinedweb
Given the sample code:

import java.io.File;

public class Test {
    public static void main(String[] args) throws Exception {
        File file = new File("test.txt");
        System.out.println(file.exists());
    }
}

What will the result be if it is compiled and run for the first time? (A) true (B) false (C) Compile-time error. (D) Raises a run-time exception.

Answer: (B). Creating a java.io.File object does not create a file on disk, so exists() returns false until test.txt is actually created.
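For illustration, here is a minimal sketch (TestCreate is just an illustrative class name; the file name test.txt is kept from the question) showing why the first run prints false and how the result changes once the file actually exists:

import java.io.File;

public class TestCreate {
    public static void main(String[] args) throws Exception {
        File file = new File("test.txt");
        System.out.println(file.exists());   // false on the first run: the constructor does not touch the disk
        file.createNewFile();                // actually creates an empty test.txt
        System.out.println(file.exists());   // true now that the file is on disk
    }
}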
http://www.roseindia.net/tutorial/java/scjp/part10/question2.html
CC-MAIN-2013-20
en
refinedweb
part 12 is something I think I am not able to solve. My code was this:

// Template, major revision 3
// IGAD/NHTV - Jacco Bikker - 2006-2009
#include "string.h"
#include "surface.h"
#include "stdlib.h"
#include "template.h"
#include "game.h"

using namespace Tmpl8;

class Tank
{
public:
    Tank() { x = 0; y = 4 * 64; rotation = 0; }
    void Move( Surface* a_Screen )
    {
        x++;
        if ( x > ( 16 * 64 ) ) x = 0;
        tank.Draw( x, y, a_Screen );
    }
    int x, y, rotation;
};

Tank mytank;

void Game::Init()
{
}

void Game::Tick( float a_DT )
{
    mytank.Move( m_Screen );
}

The errors I got:

1>c:\-\part12\game.cpp(25) : error C2065: 'tank' : undeclared identifier
1>c:\-\part12\game.cpp(25) : error C2228: left of '.Draw' must have class/struct/union

I thought it was weird, since this is exactly what was written in the tutorial. Luckily, I thought I understood the problem, so I changed 'tank.Draw' to 'Tank.Draw'. Unfortunately, this gave me these errors (both the same?):

1>c:\-\part12\game.cpp(25) : error C2143: syntax error : missing ';' before '.'
1>c:\-\part12\game.cpp(25) : error C2143: syntax error : missing ';' before '.'

This is where I got stuck... does anyone know how to solve this problem?

Rimevan
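The compiler message itself points at the cause: 'tank' is used inside Move() but never declared anywhere, and renaming it to the class name Tank cannot work because Draw() has to be called on an object, not on a type. A sketch of one way to fix it is below; it assumes the tutorial's template provides a Sprite class with a Draw( x, y, surface ) method, and the constructor arguments and asset path are guesses, so they should be taken from the tutorial itself:

class Tank
{
public:
    Tank() : tank( new Surface( "assets/tank.tga" ), 1 )   // hypothetical asset path and frame count
    {
        x = 0; y = 4 * 64; rotation = 0;
    }
    void Move( Surface* a_Screen )
    {
        x++;
        if ( x > ( 16 * 64 ) ) x = 0;
        tank.Draw( x, y, a_Screen );   // 'tank' now refers to the Sprite member declared below
    }
    int x, y, rotation;
private:
    Sprite tank;   // this member is what the original snippet was missing
};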
http://devmaster.net/forums/topic/16018-problem-with-c-tutorial-part-12-classes/page__p__83815?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024
CC-MAIN-2013-20
en
refinedweb
Windows API Sets

New for Windows 8 and Windows Server 2012, API Sets are strongly named API contracts that provide architectural separation between an API contract and the associated host (DLL) implementation. API Sets rely on operating system support in the library loader to effectively introduce a namespace redirection component.

API Sets available in Windows 8 and Windows Server 2012

This page lists the API Sets available for use in Windows 8 and Windows Server 2012. For convenience, two "umbrella" libs, MinCore.lib and MinCore_Downlevel.lib, are provided in the Microsoft Windows Software Development Kit (SDK); they encompass the API surface defined in API Sets plus additional APIs that are contained in well-layered system DLLs.

Lib to link to: MinCore.lib. The API Sets listed in this table are the DLL names to use for delay load.

Lib to link to: MinCore_Downlevel.lib. The API Sets listed in this table are the DLL names to use for delay load. The API Set DLL is the DLL name to use for delay load.
http://msdn.microsoft.com/en-us/library/windows/desktop/hh802935(v=vs.85).aspx
CC-MAIN-2013-20
en
refinedweb
14 June 2012 06:36 [Source: ICIS news] By Alfa Li

SINGAPORE (ICIS)--Spot bitumen prices in the […]. Sharp falls in crude and fuel oil prices have been dragging down bitumen prices in the […]. Duri crude, which is a major pricing benchmark for heavy crudes, has declined by 16.5% from late April to mid-June, while the price of Singaporean 180CST fuel oil decreased by 17.4% over the same period, according to data from C1 Energy. Demand for […]. China-based traders largely stopped purchases in late April as they already had enough inventories, while end-users' demand from […]. Meanwhile, bitumen production in […]. Singaporean producers would have to cut prices to attract buyers, traders said, citing that the city-state's bitumen may only turn attractive at $550/tonne FOB.
http://www.icis.com/Articles/2012/06/14/9569300/singapore-bitumen-to-extend-downtrend-asia-in-oversupply.html
CC-MAIN-2013-20
en
refinedweb
The QUndoView class displays the contents of a QUndoStack. More...

#include <QUndoView>

This class was introduced in Qt 4.2.

The QUndoView class displays the contents of a QUndoStack.

Constructs a new view with parent parent and sets the observed stack to stack.

Constructs a new view with parent parent and sets the observed group to group. The view will update itself automatically whenever the active stack of the group changes.

Destroys this view.

Returns the group displayed by this view. If the view is not looking at a group, this function returns 0. See also setGroup() and setStack().

Sets the group displayed by this view to group. If group is 0, the view will be empty. The view will update itself automatically whenever the active stack of the group changes. See also group() and setStack().

Sets the stack displayed by this view to stack. If stack is 0, the view will be empty. If the view was previously looking at a QUndoGroup, the group is set to 0. See also stack() and setGroup().

Returns the stack currently displayed by this view. If the view is looking at a QUndoGroup, this is the group's active stack. See also setStack() and setGroup().
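The reference text above lists the constructors and accessors without a usage context, so here is a minimal sketch of how a QUndoView is typically wired to a QUndoStack. The widget names and the pushed command are illustrative only, and the snippet assumes it runs inside a QWidget-derived class (for the "this" parent):

#include <QUndoStack>
#include <QUndoView>

// ... inside the setup code of some widget or main window ...
QUndoStack *undoStack = new QUndoStack(this);            // owns the document's undo history
QUndoView  *undoView  = new QUndoView(undoStack, this);  // observes and displays that stack
undoView->setWindowTitle("Command List");
undoView->show();

// Commands pushed onto the stack appear in the view automatically,
// and clicking an entry in the view undoes/redoes up to that command.
// undoStack->push(new MyCommand(...));   // MyCommand would be a QUndoCommand subclass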
http://idlebox.net/2010/apidocs/qt-everywhere-opensource-4.7.0.zip/qundoview.html
CC-MAIN-2013-20
en
refinedweb
x:Array Markup Extension Provides general support for arrays of objects in XAML through a markup extension. This corresponds to the x:ArrayExtension XAML type in [MS-XAML]. Type is a required attribute for all x:Array object elements. A Type parameter value does not need to use an x:Type markup extension; the short name of the type is a XAML type, which can be specified as a string. In the .NET Framework XAML Services implementation, the relationship between the input XAML type and the output CLR Type of the created array is influenced by service context for markup extensions. The output Type is the UnderlyingType of the input XAML type, after looking up the necessary XamlType based on XAML schema context and the IXamlTypeResolver service the context provides. When processed, the array contents are assigned to the ArrayExtension.Items intrinsic property. In the ArrayExtension implementation, this is represented by ArrayExtension.Items. In the .NET Framework XAML Services implementation, the handling for this markup extension is defined by the ArrayExtension class. ArrayExtension is not sealed, and could be used as the basis for a markup extension implementation for a custom array type. x:Array is more intended for general language extensibility in XAML. But x:Array can also be useful for specifying XAML values of certain properties that take XAML-supported collections as their structured property content. For example, you could specify the contents of an IEnumerable property with an x:Array usage. x:Array is a markup extension. Markup extensions are typically implemented when there is a requirement to escape attribute values to be other than literal values or handler names, and the requirement is more global than just putting type converters on certain types or properties. x:Array is partially an exception to that rule because instead of providing alternative attribute value handling, x:Array provides alternative handling of its inner text content. This behavior enables types that might not be supported by an existing content model to be grouped into an array and referenced later in code-behind by accessing the named array; you can call Array methods to get individual array items. All markup extensions in XAML use the braces ({,}) in their attribute syntax, which is the convention by which a XAML processor recognizes that a markup extension must process the attribute value. For more information about markup extensions in general, see Type Converters and Markup Extensions for XAML. In XAML 2009, x:Array is defined as a language primitive instead of a markup extension. For more information, see Built-in Types for Common XAML Language Primitives. WPF Usage Notes Typically, the object elements that populate an x:Array are not elements that exist in the WPF XAML namespace, and require a prefix mapping to a non-default XAML namespace. For example, the following is a simple array of two strings, with the sys prefix (and also x) defined at the level of the array. [xaml] <x:Array Type="sys:String" xmlns:x="" xmlns: <sys:String>Hello</sys:String> <sys:String>World</sys:String> </x:Array> For custom types that are used as array elements, the class must also support the requirements for being instantiated in XAML as object elements. For more information, see XAML and Custom Classes for WPF.
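The xmlns attributes in the snippet above are empty; a self-contained version of the same idea, showing the array consumed by a ListBox, might look like the following. The namespace URIs and the mscorlib mapping are the conventional WPF ones, and the ListBox host is illustrative, neither is taken from the original page:

<ListBox xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
         xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
         xmlns:sys="clr-namespace:System;assembly=mscorlib">
  <ListBox.ItemsSource>
    <!-- x:Array groups the strings into a single array object assigned to ItemsSource -->
    <x:Array Type="sys:String">
      <sys:String>Hello</sys:String>
      <sys:String>World</sys:String>
    </x:Array>
  </ListBox.ItemsSource>
</ListBox>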
http://msdn.microsoft.com/en-us/library/ms752340.aspx
CC-MAIN-2013-20
en
refinedweb
Introduction

Events

An event handler is a method that has the same signature as the event, and this method is executed when the event occurs. To define an event you first need to define a delegate that contains the methods that will be called when the event is raised, and then you define the event based on that delegate.

Example:

public class MyClass
{
    public delegate void MyDelegate(string message);
    public event MyDelegate MyEvent;
}

Raising an event is a simple step. First you check the event against a null value to ensure that the caller has registered with the event, and then you fire the event by specifying the event by name as well as any required parameters as defined by the associated delegate.

Example:

if (MyEvent != null)
    MyEvent(message);

So far so good. In the previous section you saw how to define an event and the delegate associated with it, and how to raise this event. Now you will see how the other parts of the application can respond to the event. To do this you just need to register the event handlers.

When you want to register an event handler with an event you must follow this pattern:

MyClass myClass1 = new MyClass();
MyClass.MyDelegate del = new MyClass.MyDelegate(myClass1_MyEvent);
myClass1.MyEvent += del;

or you can do this in one line of code:

myClass1.MyEvent += new MyClass.MyDelegate(myClass1_MyEvent);

//this is the event handler
//this method will be executed when the event is raised.
static void myClass1_MyEvent(string message)
{
    //do something to respond to the event.
}

Let's see a full example to demonstrate the concept:

using System;

namespace EventsInCSharp
{
    public class MyClass
    {
        public delegate void MyDelegate(string message);
        public event MyDelegate MyEvent;

        //this method will be used to raise the event.
        public void RaiseEvent(string message)
        {
            if (MyEvent != null)
                MyEvent(message);
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            MyClass myClass1 = new MyClass();
            myClass1.MyEvent += new MyClass.MyDelegate(myClass1_MyEvent);

            Console.WriteLine("Please enter a message\n");
            string msg = Console.ReadLine();
            //here is where we raise the event.
            myClass1.RaiseEvent(msg);
            Console.Read();
        }

        //this method will be executed when the event is raised.
        static void myClass1_MyEvent(string message)
        {
            Console.WriteLine("Your Message is: {0}", message);
        }
    }
}

We are doing great, but what if you want to define your event and its associated delegate to mirror Microsoft's recommended event pattern?

To do so you must follow this pattern:

public delegate void MyDelegate(object sender, MyEventArgs e);
public event MyDelegate MyEvent;

As you can see, the first parameter of the delegate is a System.Object, while the second parameter is a type deriving from System.EventArgs. The System.Object parameter represents a reference to the object that sent the event (such as MyClass), while the second parameter represents information regarding the event. If you define a simple event that is not sending any custom information, you can pass an instance of EventArgs directly.

Let's see an example:

using System;

namespace MicrosoftEventPattern
{
    public class MyClass
    {
        public delegate void MyDelegate(object sender, MyEventArgs e);
        public event MyDelegate MyEvent;

        public class MyEventArgs : EventArgs
        {
            public readonly string message;

            public MyEventArgs(string message)
            {
                this.message = message;
            }
        }

        public void RaiseEvent(string msg)
        {
            if (MyEvent != null)
                MyEvent(this, new MyClass.MyEventArgs(msg));
        }
    }

    class Program
    {
        static void myClass1_MyEvent(object sender, MyClass.MyEventArgs e)
        {
            if (sender is MyClass)
            {
                MyClass myClass = (MyClass)sender;
                Console.WriteLine("Your Message is: {0}", e.message);
            }
        }
    }
}

We are done now. In my next article I'll show you how to define your custom event to use it in a custom control in a Windows application. Thanks~

Thanks so much! Your article is great! I await your article about "custom control in a windows application."
How can I use the Split event?
Hi!! How can I use the split event?
This is simply excellent. Thanks. RaviKumar Bhuvanagiri
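The second listing defines the event, the EventArgs type, and a handler, but never shows the code that wires them together. A short sketch of a Main method that could be added to the Program class above (names reused from that listing; the message string is made up) is:

static void Main(string[] args)
{
    MyClass myClass1 = new MyClass();

    //register the handler; its signature matches MyDelegate(object sender, MyEventArgs e)
    myClass1.MyEvent += new MyClass.MyDelegate(myClass1_MyEvent);

    //raising the event calls the handler with this instance as the sender
    //and a MyEventArgs carrying the message
    myClass1.RaiseEvent("Hello from the Microsoft-style pattern");
    Console.Read();
}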
http://www.c-sharpcorner.com/uploadfile/Ashush/events-in-C-Sharp/
CC-MAIN-2013-20
en
refinedweb
In this tutorial I will explain how you can use the properties defined in a .properties file in your Spring application. This type of configuration is very useful in web application development. You can put configurations for production, development and test environments in separate properties files, and you can easily configure the environment by just changing the properties file.

Suppose you are developing an email sending component for your web application. You may create the following property file:

File: application.properties

admin.email=deepak@roseindia.net
mail.server.ip=localhost

And the following mailer bean:

package users.components.utils;

import java.util.*;
import javax.mail.*;
import javax.mail.internet.*;

public class MailerBean{

    private String adminEmail;
    private String mailServerIP;

    public String getAdminEmail() {
        return adminEmail;
    }
    public void setAdminEmail(String adminEmail) {
        this.adminEmail = adminEmail;
    }
    public String getMailServerIP() {
        return mailServerIP;
    }
    public void setMailServerIP(String mailServerIP) {
        this.mailServerIP = mailServerIP;
    }
    public void sendEmail(String recipient, String message){
        //Code to send email here
    }
}

Now in the applicationContext.xml file you can add the following code to use the properties defined in the application.properties file:

<!--Bean to load properties file -->
<bean id="placeholderConfig" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="location" value="classpath:application.properties"/>
</bean>

<bean id="MailerBean" class="users.components.utils.MailerBean">
    <property name="adminEmail" value="${admin.email}" />
    <property name="mailServerIP" value="${mail.server.ip}" />
</bean>

Now you can get the MailerBean in your program to use the Mailer Bean functionality.
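As a rough sketch of that last step, the bean can be pulled from the Spring container like this. The demo class name is made up; ClassPathXmlApplicationContext is the standard Spring 3 loader, and the file and bean names are the ones used above:

package users.components.utils;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class MailerBeanDemo {
    public static void main(String[] args) {
        //load the context that contains the placeholderConfig and MailerBean definitions
        ApplicationContext context =
                new ClassPathXmlApplicationContext("applicationContext.xml");

        //the adminEmail and mailServerIP properties are already filled in
        //from application.properties by the PropertyPlaceholderConfigurer
        MailerBean mailer = (MailerBean) context.getBean("MailerBean");
        System.out.println("Admin email: " + mailer.getAdminEmail());
        mailer.sendEmail("someone@example.com", "Hello from Spring");
    }
}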
http://www.roseindia.net/tutorial/spring/spring3/web/applicationcontext.xml-properties-file.html
CC-MAIN-2013-20
en
refinedweb
Java developers are familiar with the performance best practice of using a StringBuffer in a loop instead of concatenating String objects. However, most developers have never seen the difference in terms of bytecode for one approach vs. the other. There is a tool included with the Java Development Kit (JDK) called javap that can show you why you would want to adopt this best practice. Javap takes a class and dumps information about its methods to standard out. It doesn't decompile the code into Java source code, but it will disassemble the bytecode into the bytecode instructions defined by the Java Virtual Machine specification. Javap is useful when you want to see what your compiler is doing for (or to) you, or when you want to see what effect a code change will have on the compiled class file. Let's use the StringBuffer vs. String scenario mentioned above as an example. Below is a contrived class that has two methods that return a String consisting of the numbers 0 to n, where n is supplied by the caller. The only difference between the two methods is that one uses a String to build the result, and the other uses a StringBuffer. public class JavapTip { public static void main(String []args) { } private static String withStrings(int count) { String s = ""; for (int i = 0; i < count; i++) { s += i; } return s; } private static String withStringBuffer(int count) { StringBuffer sb = new StringBuffer(); for (int i = 0; i < count; i++) { sb.append(i); } return sb.toString(); } } Now let's take a look at what javap outputs when it's run against the class with the -c option. The -c option tells javap to disassemble the bytecode found in the class. Running it looks like this: >javap -c JavapTip The output of the command relative to this tip is: Method java.lang.String withStrings(int) 0 ldc #2 <String ""> 2 astore_1 3 iconst_0 4 istore_2 5 goto 30 8 new #3 <Class java.lang.StringBuffer> 11 dup 12 invokespecial #4 <Method java.lang.StringBuffer()> 15 aload_1 16 invokevirtual #5 <Method java.lang.StringBuffer append(java.lang.String)> 19 iload_2 20 invokevirtual #6 <Method java.lang.StringBuffer append(int)> 23 invokevirtual #7 <Method java.lang.String toString()> 26 astore_1 27 iinc 2 1 30 iload_2 31 iload_0 32 if_icmplt 8 35 aload_1 36 areturn Method java.lang.String withStringBuffer(int) 0 new #3 <Class java.lang.StringBuffer> 3 dup 4 invokespecial #4 <Method java.lang.StringBuffer()> 7 astore_1 8 iconst_0 9 istore_2 10 goto 22 13 aload_1 14 iload_2 15 invokevirtual #6 <Method java.lang.StringBuffer append(int)> 18 pop 19 iinc 2 1 22 iload_2 23 iload_0 24 if_icmplt 13 27 aload_1 28 invokevirtual #7 <Method java.lang.String toString()> 31 areturn The output is a little cryptic if you've never seen Java assembler before, but hopefully you can see that the withString method creates a new StringBuffer instance each time through the loop. Then, it appends the current value of the existing String to the StringBuffer and appends the current value of the loop. Finally, it calls toString on the buffer and assigns the results to the existing String reference. This is in contrast to the withStringBuffer method, which only calls the existing StringBuffer's append method each time through the loop. There's no object creation and no new String references. In this case, we already knew that using StringBuffer instead of String was a good idea; if we didn't, then javap would have helped us find the answer. 
You won't often find yourself in circumstances that require a Java disassembler, but when you do, it's nice to know that you already have one on your machine and that it's simple to use. If you're interested, take a look at javap's other options—you might find features that will come in handy in your environment.
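For reference, a couple of those options applied to the same class from this article look like this (output omitted; these are standard javap switches):

# include private members and internal type signatures in the listing
javap -p -s JavapTip

# dump the constant pool, stack sizes, and other class-file details
javap -verbose JavapTip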
http://www.techrepublic.com/article/examine-class-files-with-the-javap-command/5815354
CC-MAIN-2013-20
en
refinedweb
SpellCheck.NET is free online spell checking site. Whenever I need to check my spelling I visit this site, so I decided to write a parser for this site. I wrote this parser with C# and wrapped it up in a DLL file and called it Word.dll. In this article I will show you how to parse a HTML page using regular expressions. I will not explain all the source code since it is available for download. My main purpose of this project is to demonstrate how to parse a HTML page using regular expressions. Before this project I have never worked with regular expressions seriously, so I decided to use regular expressions. In this project I have learned a lot about C# regular expressions and .NET framework. The difficult part was in this project writing regular expressions pattern. So I referred to different sites and books to get the right pattern. Here are some useful sites to check out. Word.dll has one public class and two public methods Include "using Word.dll" at the top of file for the object reference. SpellCheck word = new SpellCheck(); This method will check the word and return true or false. If the word is correct then it will return true otherwise false. bool status = false; status = word.CheckSpelling("a word"); This method will return the collection of suggested words. foreach(string suggestion in word.GetSpellingSuggestions("a word")) { System.Console.WriteLine( suggestion ); } regular expression pattern @"(correctly.)|(misspelled.)" regular expression pattern @"(suggestions:)" regular expression pattern @"<blockquote>(?:\s*([^<]+) \s*)+ </blockquote>" Source file is included in zip format for download. Calling Word.dll wrapper class: This is how you would call this wrapper class in your application. using System; //Word.dll using Word; /// <summary> /// Test Harness for SpellCheck Class /// </summary> class TestHarness { /// <summary> /// testing Word Class /// </summary> [STAThread] static void Main(string[] args) { SpellCheck word = new SpellCheck(); bool status = false; string s = "youes"; Console.WriteLine("Checking for word : " + s ); // check to see if the word is not correct // return the bool (true|false) status = word.CheckSpelling(s); if (status == false) { Console.WriteLine("This word is misspelled : " + s); Console.WriteLine("Here are some suggestions"); Console.WriteLine("-------------------------"); foreach( string suggestion in word.GetSpellingSuggestions(s) ) { System.Console.WriteLine( suggestion ); } } else if (status == true) { Console.WriteLine("This word is correct : " + s ); } } } Run the "compile.bat" file at the DOS prompt, it will create necessary files. This is how your screen would look like after you execute TestHarness.ex
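As a small illustration of how the first pattern above can be applied in C# (the sample HTML string here is invented purely to exercise the regular expression):

using System;
using System.Text.RegularExpressions;

class PatternDemo
{
    static void Main()
    {
        string html = "The word you entered is misspelled.";   //stand-in for the downloaded page text

        //same pattern the parser uses to decide between "correctly." and "misspelled."
        Match m = Regex.Match(html, @"(correctly.)|(misspelled.)");
        if (m.Success)
            Console.WriteLine("Matched: " + m.Value);   //prints "misspelled."
    }
}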
http://www.codeproject.com/Articles/2469/SpellCheck-net-spell-checking-parsing-using-C
CC-MAIN-2013-20
en
refinedweb
If you work on a Microsoft network, chances are you're using Active Directory (AD). Active Directory stores information about network resources for a domain. This information requires specific authority for update, but is typically open to authenticated users for query. I developed this tool to allow for exactly these queries. It provides a list of known (to the network) domains, and allows the user to view groups, group membership, users, and user details without the need to dive into LDAP queries. In short, it's easy to use, quick, and provides more information than the typical user really needs. This tool was developed using the .NET Framework 2.0 only. There are no interop assemblies or Win32 API calls involved in ADSI operations, there is one Win32 API called for the About box animation. This is a .NET 2.0 Windows Forms application. Many of the organizations that I work for utilize AD to manage application and resource access by groups. Unfortunately for me (and others), many of these organizations do not permit access to the Microsoft Active Directory tools, so verifying that a particular user has been given membership to a particular group can be a bit of a pain. Hence, this tool was born. The UI itself is pretty straightforward. Just a typical .NET Windows Forms application. The meat of the application is located in the ADLookup class. This class performs all of the AD activities used to populate the lists in the UI. Perusing the source will provide you with an introduction (possibly a rude one) to the world of AD searches in the .NET environment. ADLookup If you look at the image above, the arrows indicate that a selection in a list will trigger the automatic update of the list being pointed to. In addition, if a user is selected from the Users in Group list, that user will be selected in the Domain Users list as well, triggering subsequent updates. Likewise, a selection in the Groups for User list will select that group in the Groups in Domain list, triggering subsequent updates. The numbers in parenthesis above each list indicate how many elements are in the list. This gives an at-a-glance answer to one of the most common AD questions: "How many users are in group xx?" To utilize the search, you need to select a search option from the Search menu, or you can right-click on either the Groups in Domain, Users in Group, or Users in Domain lists. When you select one, a pop-up window will display for you to enter your search data. The search data you enter is used as a Regular Expression to evaluate against the data in the selected list, so feel free to use .NET Regular Expressions to perform your fuzzy search. Only the first match of your search criteria is selected. When it is selected, the appropriate lists will be updated in their content as well. There are three methods in the ADLookup class that deserve a little attention here. These three methods are used to decode arrays of bytes that are returned from the AD query in the user properties collection. First, the easy one - SIDToString: SIDToString /// <summary> /// Convert a binary SID to a string. /// </summary> /// <param name="sidBinary">SID to convert.</param> /// <returns>String representation of a SID.</returns> private string SIDToString(byte[] sidBinary) { SecurityIdentifier sid = new SecurityIdentifier(sidBinary, 0); return sid.ToString(); } The best part of this method is that there's virtually nothing to converting a Windows SID (security identifier) bit array to a human readable string. 
The next one is a Registry lookup used to determine the currently active time bias on the system. This is a value used by the system to convert from Greenwich Mean Time (GMT) to the local time. /// <summary> /// Retrieve the current machine ActiveTimeBias. /// </summary> /// <returns>an integer representing the ActiveTimeBias in hours.</returns> private int GetActiveBias() { // Open the TimeZone key RegistryKey key = Registry.LocalMachine.OpenSubKey(@"SYSTEM\CurrentControlSet" + @"\Control\TimeZoneInformation"); if (key == null) return 0; // Pick up the time bias int Bias = (int)key.GetValue("ActiveTimeBias"); // Close the parent key key.Close(); // return the result adjusted for hours (instead of minutes) return (Bias / 60); } This value is always subtracted from GMT to arrive at the local time. Where I live, we use daylight savings time as well as standard time, so my ActiveTimeBias value will be either 7 (Pacific Daylight Time [PDT]) or 8 (Pacific Standard Time [PST]). ActiveTimeBias The last method we will visit here is called DecodeLoginHours. Within the properties collection for a user in AD, there exists the ability to limit the hours that a user can log in to a system. This property consists of an array of 21 bytes, where each bit represents a one hour span beginning with Midnight Sunday GMT. Note that I said GMT. This is where the ActiveTimeBias comes in. By performing the subtraction, we're able to re-align the bit-array to machine time. Obviously, this bit-array is not friendly to humans, so we decode it into something that we can easily read. Within the UI, it will show up in the Properties for User list as Login Hours: > Click to view <. Naturally, the user needs to click the item in the list to get the following display: DecodeLoginHours /// <summary> /// Translate the hours into something readable. /// </summary> /// <param name="HoursValue">Hours to convert.</param> /// <returns>A string indicating the hours of availability.</returns> private string DecodeLoginHours(byte[] HoursValue) { // See if we have anything if (HoursValue.Length < 1) return string.Empty; // Pick up the time zone bias int Bias = GetActiveBias(); // Convert the HoursValue array into a character array of 1's and 0's. // That's a really simple statement for a bit of a convoluted process: // The HoursValue byte array consists of 21 elements (21 bytes) where // each bit represents a specified login hour in Universal Time // Coordinated (UTC). These bits must be reconstructed into an array // that we can display (using 1's and 0's) and associated correctly to // each of the hour increments by using the machines current timezone // information. // Load the HoursValue byte array into a BitArray // This little trick also allows us to read through the array from // left to right, rather than from right to left for each of the 21 // elements of the Byte array. 
    BitArray ba = new BitArray(HoursValue);

    // This is the adjusted bit array (accounting for the ActiveTimeBias)
    BitArray bt = new BitArray(168);

    // Actual index in target array
    int ai = 0;

    // Copy the source bit array to the target bit array with offset
    for (int i = 0; i < ba.Length; i++)
    {
        // Adjust for the ActiveTimeBias
        ai = i - Bias;
        if (ai < 0) ai += 168;

        // Place the value
        bt[ai] = ba[i];
    }

    // Time to construct the output
    int colbump = 0;
    int rowbump = 0;
    int rowcnt = 0;
    StringBuilder resb = new StringBuilder();
    resb.Append(" ------- Hour of the Day -------");
    resb.Append(Environment.NewLine);
    resb.Append(" M-3 3-6 6-9 9-N N-3 3-6 6-9 9-M");
    resb.Append(Environment.NewLine);
    resb.Append(_DayOfWeek[rowcnt]);
    for (int i = 0; i < bt.Length; i++)
    {
        // Put in a 0 or a 1
        resb.Append((bt[i]) ? "1" : "0");
        colbump++;
        rowbump++;

        // After 24 elements are written, start the next line
        if (rowbump == 24)
        {
            // Make sure we're not on the last element
            if (i < (bt.Length - 1))
            {
                rowbump = 0;
                colbump = 0;
                resb.Append(Environment.NewLine);
                rowcnt++;
                resb.Append(_DayOfWeek[rowcnt]);
            }
        }
        else
        {
            // Insert a space after every 3 characters
            // unless we've gone to a new line
            if (colbump == 3)
            {
                resb.Append(" ");
                colbump = 0;
            }
        }
    }

    // Return the result
    return resb.ToString();
}

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).
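The excerpt shows the decoding helpers but not the query side of the ADLookup class. A rough sketch of the kind of Active Directory group lookup such a tool performs is below; the filter string, the group name, and the property names are standard AD attributes chosen for illustration, not taken from the article's source:

using System;
using System.DirectoryServices;

class GroupLookupSketch
{
    static void Main()
    {
        //bind to the current domain; a real tool would let the user pick the domain
        using (DirectoryEntry root = new DirectoryEntry("LDAP://RootDSE"))
        {
            string context = (string)root.Properties["defaultNamingContext"].Value;
            using (DirectoryEntry domain = new DirectoryEntry("LDAP://" + context))
            using (DirectorySearcher searcher = new DirectorySearcher(domain))
            {
                //find one group by its common name and pull its member list
                searcher.Filter = "(&(objectCategory=group)(cn=Domain Admins))";
                searcher.PropertiesToLoad.Add("member");

                SearchResult result = searcher.FindOne();
                if (result != null)
                {
                    foreach (object member in result.Properties["member"])
                        Console.WriteLine(member);
                }
            }
        }
    }
}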
http://www.codeproject.com/Articles/28735/ADSI-Hunter?msg=4469217
CC-MAIN-2013-20
en
refinedweb
#include "splitzer.h" using namespace std; // find all the lines that refer to each word in the input map<string, vector<int> > xref(istream& in, vector<string> find_words(const string&) = split) { funtion body; } error: line 3, expected initializer before using error: line 6, expected constructor, destructor, or type conversion before "<" token either I don't know what the error is about, or theres nothing wrong in the code (when compared with other codes that works) yet, the code snippet are given by a book. help anyone? Try looking in splitzer.h for an error. What is splitzer.h ? Can you post the full program ? As Bob said, look in splitzer.h for the cause of this error. That is the only possibility as the cause of these errors.
http://www.daniweb.com/software-development/cpp/threads/433693/weird-error
CC-MAIN-2013-20
en
refinedweb
Here's a quick example of some Ruby source code, showing how I used Ruby's ternary operator in a method that prints a CSV record for a class I defined: def print_csv_record last_name.length==0 ? printf(",") : printf("\"%s\",", last_name) first_name.length==0 ? printf(",") : printf("\"%s\",", first_name) city.length==0 ? printf(",") : printf("\"%s\"", city) printf("\n") end As you can see from that method, each line uses the ternary operator to make the following decision: - If the length of the given field equals 0, use the first printfstatement. - Otherwise, use the second printfstatement. It may not seem like much, but that's it. The cool thing is that it's much shorter than the equivalent if/ then syntax, and it's still very easy to read. If you'd like a little more information on the Ruby ternary operator, feel free to read on. General syntax of the ternary operator As you can gather from the previous example, the general syntax for Ruby's ternary operator looks like this: test-expression ? if-true-expression : if-false-expression In my previous example, my first test-expression looked like this: last_name.length==0 and its if-true-expression looked like this: printf(",") and its if-false-expression looked like this: printf("\"%s\",", last_name) Hopefully demonstrating the general syntax of the ternary operator helped make my earlier code a little more understandable. A second example If you'd like one more example of the ternary operator, here's one that uses a numeric test comparison, followed by two possible puts statements, one that will be executed if the test evaluates to true, and another that will be executed if the test evaluates to false: # set the speed speed = 90 # somewhere later in the program ... speed > 55 ? puts("I can't drive 55!") : puts("I'm a careful driver") As you might guess, this segment of code will print: I can't drive 55! (Which also summarizes my driving habits.) The ternary operator is cool. It can shorten your if/ then statements, and still keeps your Ruby code readable.
https://alvinalexander.com/blog/post/ruby/examples-ruby-ternary-operator-true-false-syntax
CC-MAIN-2020-10
en
refinedweb
tag:code.tutsplus.com,2005:/categories/emberjs Envato Tuts+ Code - EmberJS 2018-05-29T12:45:47Z've built a complete guide to help you <a href="">learn JavaScript</a>, whether you're just getting started as a web developer or you want to explore more advanced><ul class="roundup-block__contents posts--half-width roundup-block--list"> <li class="roundup-block__content"><a class="roundup-block__content-link" href=""><img class="roundup-block__preview-image" data-<div class="roundup-block__primary-category topic-code">JavaScript</div> <div class="roundup-block__content-title">Modern JavaScript Fundamentals</div> <div class="roundup-block__author">Dan Wellman</div></a></li> <li class="roundup-block__content"><a class="roundup-block__content-link" href=""><img class="roundup-block__preview-image" data-<div class="roundup-block__primary-category topic-webdesign">JavaScript</div> <div class="roundup-block__content-title">JavaScript for Web Designers</div> <div class="roundup-block__author">Adi Purdila</div></a></li> <li class="roundup-block__content"><a class="roundup-block__content-link" href=""><img class="roundup-block__preview-image" data-<div class="roundup-block__primary-category topic-code">Angular 2+</div> <div class="roundup-block__content-title">Modern Web Apps With Angular</div> <div class="roundup-block__author">Andrew Burgess</div></a></li> <li class="roundup-block__content"><a class="roundup-block__content-link" href=""><img class="roundup-block__preview-image" data-<div class="roundup-block__primary-category topic-code">React</div> <div class="roundup-block__content-title">Modern Web Apps With React and Redux</div> <div class="roundup-block__author">Andrew Burgess</div></a></li> </ul> 2018-05-29T12:45:47.000Z 2018-05-29T12:45:47.000Z Adam Brown tag:code.tutsplus.com,2005:PostPresenter/cms-26177 What Is JavaScript? <p>To say that JavaScript is on the rise in web development would be an understatement. In fact, years ago, famed programmer <a href="" rel="external">Jeff Atwood</a> coined <strong>Atwood's Law</strong> in which <a href="" rel="external">he stated</a>:</p><blockquote>Any application that can be written in JavaScript, will eventually be written in JavaScript.<br> </blockquote><p>At the time of writing this article, there are so many JavaScript frameworks and libraries that it's overwhelming to know where to start, especially if you're a beginner. </p><p>And I know, much of what we publish here is geared towards those who already have experience in writing web applications or doing something in web development. But that's that not the target audience for this article. </p><p>Instead, this is being written specifically for those of you who have never (or barely) written a line of JavaScript, and want to learn more about the language and understand what's out there. 
Furthermore, we want to cover how it's used and what to expect from it.</p><p>In short, if you're a seasoned professional, then this article isn't for you; however, if you're curious about getting into JavaScript but aren't sure where to start, then perhaps this primer will help set you in the right direction.</p> <h3><strong>Learn JavaScript: The Complete Guide</strong></h3> <p>We’ve built a complete guide to help you <a href="">learn JavaScript</a>, whether you’re just getting started as a web developer or you want to explore more advanced topics.</p> <h2>JavaScript Defined</h2><p>You've likely heard JavaScript referred to as "a client-side scripting language", which is another way of saying that it's a programming language that runs in a web browser. </p><p>Alternatively, <a href="" rel="external">Wikipedia defines it this way</a>:</p><blockquote>JavaScript is a high-level, dynamic, untyped, and interpreted programming language. It has been standardized in the ECMAScript language specification.<br> </blockquote><p.</p><ul> <li> <strong>High-Level</strong>.).</li> <li> <strong>Dynamic</strong>. Languages that are dynamic allow developers to extend certain aspects of the language by adding new code or introducing new objects (such as a <code class="inline">Post</code> object) while the program is running versus needing to compile the program. This is a powerful feature of JavaScript.</li> <li> <strong>Untyped</strong>. If you have any programming experience, then you've likely come across certain types of languages that require you to declare the type of variable with which you're working. For example, perhaps your variable will store a <code class="inline">string</code> or a <code class="inline">boolean</code>. In JavaScript, this is not necessary. Instead, you simply declare a variable with the <code class="inline">var</code> keyword.</li> <li> <strong>Interpreted</strong>..</li> <li> <strong>Standardized</strong>. JavaScript <em>is</em> standardized (its official name being <a href="" rel="external">ECMAScript</a>) which means that any browser that implements the standard will offer the same features as any other browser. Were it not standardized then Chrome may provide some features that Edge does not and vice versa.</li> </ul><p>Now that we've covered the attributes of the language, we can discuss certain aspects and nuances about the language. </p><p>Though all of the above is important, it's also essential to know how the language works (especially if you've worked with other languages) so that you don't go into development with pre-conceived ideas as to how it <em>might</em> work or how it <em>should</em> work.<br></p><p>Instead, I'd rather cover how it <em>does</em> work so that you can get started writing code and understand exactly what it is that you're doing.</p><h3>About the Language</h3><p>Above all else, JavaScript is an object-oriented programming language, but it likely differs a bit from what you usually see (if you've previously used an object-oriented programming language).</p><p>JavaScript is what's called a prototypal language. This means that all of the objects in JavaScript, like <code class="inline">String</code>, are based on prototypes. 
</p><p>This allows us, as developers, to add additional functionality to the objects through the use of <a href="" rel="external">prototypal inheritance</a>:</p><blockquote>Prototype-based programming is a style of object-oriented programming in which behaviour reuse (known as inheritance) is performed via a process of cloning existing objects that serve as prototypes.<br> </blockquote><p>I'd argue that if you've never worked with an object-oriented language before, then you may have an advantage at this point because you have no conceptual model to shift in order to think about how this works.</p><p>If, on the other hand, you <em>have</em> worked in these types of languages then I think it's worth distinguishing how prototypal inheritance differs from classical inheritance:</p><ul> <li>In <strong>classical inheritance</strong>, we, as developers, will write a class. Multiple objects can be created from this single class. Furthermore, we can then write another class that inherits from this class and then create instances of <em>those</em> classes. In this situation, subclasses are sharing code with their base class. So when you create an instance of a subclass, you're getting the functionality of both the subclass and the parent class.</li> <li>In <strong>prototypal inheritance</strong>, <code class="inline">Number</code> then it will first look for the method on that object. If it doesn't find it, then it will move up the chain until it finds the method (which may live on the base <code class="inline">Object</code>).</li> </ul><p>Finally, and perhaps the most important thing to note, is that when you make a change to an object via its prototype, then it's accessible to everyone who uses that object (at least within the context of your environment).</p><p>It's really powerful, it's really cool, but it also takes a slight shift in thinking if you're not used to working in an environment like that.</p><h2>How Do We Use JavaScript?</h2><p>In terms of how we actually put JavaScript to use, it ultimately depends on what your goals are. At one point, working with JavaScript meant that you needed to "make something happen" on a web page. It was meant to control the behavior.</p><p>This could be introducing an element, removing (or hiding) an element, or things like that. Then the web advanced a little bit and browsers were able to make asynchronous calls to the server, handle the response, and then change the state of the page based on this response.</p><p>All of this is achieved via <a href="" rel="external">Ajax</a>. If you're reading this, you're likely familiar with the term. If you're not, you can think of it as a way for JavaScript to make a call to the server hosting the page and then handle the response it receives <em>all without reloading the page</em>.</p><p>But it's matured even beyond that. </p><p>Google has developed a highly sophisticated JavaScript parsing engine known as <a href="" rel="external">V8</a>, and other browsers are working to provide optimal JavaScript performance, as well. </p><figure class="post_image"><img alt="The landing page for Chrome V8 Googles JavaScript Engine" data-</figure><p>In fact, we're now able to write JavaScript on the server using tools like <a href="" rel="external">Node.js</a>. Furthermore, we're even able to build hybrid applications that run on our mobile devices. 
This means we're able to build solutions for our phones, our tablets, and our desktop computers through the use of JavaScript.</p><figure class="post_image"><img alt="The homepage for Nodejs a runtime engine for writing JavaScript on the server" data-</figure><p>And this is coming from a language that was once used as a way to animate things on a screen. All of this to say is that if you're new to JavaScript, don't underestimate it.</p><h3>"What Should I Expect From the Language?"</h3><p>All of the above is interesting to read, and it's fun to see what we're able to do, but from a purely practical perspective, what can we expect from the JavaScript language? </p><p>Regardless of whether you're new to the language or you're looking to learn a new language when you've come from another background, you've got a level of expectations as to what the language can offer. </p><p. </p><p>But covering its built-in objects? That's something we can review before ending this article:</p><ul> <li> <strong>Object</strong>. The base object from which all other objects inherit some of their basic functionality.</li> <li> <strong>Function</strong>.).</li> <li> <strong>Boolean</strong>. This object serves as an object wrapper for a boolean value. In many languages, booleans are a data type that's either <code class="inline">true</code> or <code class="inline">false</code>. In JavaScript, you can still work with those values, but they are to be understood as objects.</li> <li> <strong>Number</strong>. In many programming languages, there are primitive types such as <code class="inline">float</code>, <code class="inline">int</code>, <code class="inline">double</code>, and so on. In JavaScript, there's only a number type, and it too is an object.</li> <li> <strong>Date</strong>.).</li> <li> <strong>String</strong>. Nearly every programming language has a primitive string data type. JavaScript isn't much different except that, as you'd expect, the string is an object with properties of its own.</li> </ul><p:</p><pre class="brush: javascript noskimlinks noskimwords">var example_string = 'Hello world!'; var example_boolean = true; var example_number = 42;</pre><p>But, ultimately, they are still objects.</p><p>To be clear, these are the <em>basic</em> objects. 
There are far more advanced objects that are worth exploring, especially if you're going to be working with error handling, various types of collections beyond Arrays, and so on.</p><p>If you're interested in reading more about these, then I highly recommend checking out <a href="" rel="external">this page</a> in the Mozilla Developer Network.</p><h3>What Libraries and Frameworks Are Available?</h3><p>If you've been keeping up with the various frameworks, libraries, and other tools that exist in the JavaScript economy, then you're by no means behind on just how vibrant the economy has become.</p><p.</p><ul> <li> <a href="" rel="external">jQuery</a> is a library that aims to provide a cross-browser API that allows you to "write less, do more."</li> <li> <a href="" rel="external">Angular</a> is a JavaScript framework that aims to make building single-page applications easier.</li> <li> <a href="" rel="external">React</a> is a JavaScript library for building user interfaces.</li> <li> <a href="" rel="external">Backbone</a> aims to give structure to web applications through the use of models, collections, and views.</li> <li> <a href="" rel="external">Ember.js</a> is another framework for "creating ambitious web applications".</li> <li>And more.</li> </ul><p>This is <em>far</em> from a complete list of what's available, but it is a start, and it's a handful of options that those getting familiar with JavaScript should at least be aware of, even if you don't do any work with them.</p><p>And as you begin to learn JavaScript and start to pick up some of these tools, you may find just how popular some of them are when it comes to some of your very own favorite applications.</p><h3>Learning JavaScript</h3><p>As you've come to expect, Envato is all for "teaching skills to millions worldwide". So what would a post like this be if it didn't include links to some of our more popular JavaScript articles and courses?</p><ul> <li><a href="" rel="external">Quiz: JavaScript ES6, Do You Know the Right Tool for the Job?</a></li> <li> <a href="" rel="external">Keeping Promises With JavaScript</a><br> </li> <li> <a href="" rel="external">Creating Single Page Applications With WordPress and Angular.js</a><br> </li> <li> <a href="" rel="external">The Genius of Template Strings in ES6</a><br> </li> <li> <a href="" rel="external">JavaScript ES6 Fundamentals</a><br> </li> <li> <a href="" rel="external">Testing Angular Directives</a><br> </li> <li> <a href="" rel="external">JavaScript for Windows 10 Universal Apps</a><br> </li> </ul><p>All of these resources are ideal for getting started with JavaScript and adding it to your repertoire of web development skills.</p><h2>Conclusion</h2><p>When it comes to web development, JavaScript is here to stay. Though you may not use what's considered to be "vanilla JavaScript" and opt for one of the many libraries and/or frameworks that are available, JavaScript is a language almost every web developer should know.</p><p>Of course, not <em>everyone</em> works on the front-end. Some are purely server-side developers; some are purely client-side developers. 
Nonetheless, we all have to work together to make sure the various parts of our applications are communicating with one another.</p><p>To that end, it's at least important to understand how data from the client-side is sent to the server-side via JavaScript, and how it's processed on the server-side and then returned to the client-side to be used in whatever manner.</p><p>Don't be so quick to write off JavaScript just because you're not a front-end developer. Odds are, someone you're working with is using it and will need your work to tie parts of the application together.</p><p>Granted, this article is just scratching the surface. As I said at the beginning, the purpose of the article is to explain what JavaScript is, how it's used, and what to expect from it, particularly for those who are just getting started with the language.</p><p>You can also find a wide range of JavaScript items over on the <a href="" rel="external" target="_blank">Envato marketplace</a>.</p><p>If you've enjoyed this article, you can also check out my courses and tutorials on <a href="" rel="external">my profile page</a>, and, if you're interested, you can read more articles specifically about WordPress and WordPress development <a href="" rel="external">on my blog</a>. </p><h2>Additional Resources</h2><ul> <li><a href="" rel="external" target="_blank">Head First JavaScript Programming</a></li> <li><a href="" rel="external">Eloquent JavaScript by Marijn Haverbeke</a></li> <li><a href="" rel="external">Douglas Crockford's JavaScript: The Good Parts</a></li> <li><a href="" rel="external">JavaScript at the Mozilla Developer Network</a></li> </ul> 2016-05-05T12:00:23.000Z 2016-05-05T12:00:23.000Z Tom McFarlin tag:code.tutsplus.com,2005:PostPresenter/cms-23956 New Course: EmberJS Framework Basics <p>Our new <a href="" target="_self">EmberJS Framework Basics</a> course covers everything you need to know to get started building web apps in Ember. </p><p>Tuts+ instructor <a href="" target="_self">Rem Zolotykh</a> shares what he's learned.</p> 2015-05-08T07:09:56.000Z 2015-05-08T07:09:56.000Z Andrew Blackman tag:code.tutsplus.com,2005:PostPresenter/net-35817 Getting Into Ember.js: Part 5 <p><strong><em>Editor's Note</em></strong></p> <p>In <a href="">part 3</a> of my Ember series, I showed you how you can interact with data using Ember's <code><a href="">Ember.Object</a></code> main base class to create objects that define the methods and properties that act as a wrapper for your data.
Here's an example:</p> <p><!--more--></p> <pre class="brush: js noskimlinks noskimwords">App.Item = Ember.Object.extend(); App.Item.reopenClass({ all: function() { return $.getJSON('?').then(function(response) { var items = []; response.items.forEach( function (item) { items.push( App.Item.create(item) ); }); return items; });</pre> <p>In this code, we subclass <code>Ember.Object</code> using the "<code>extend()</code>" and create a user-defined method called called "<code>all()</code>" that makes a request to Hacker News for JSON-formatted results of its news feed.</p> <p>While this method definitely works and is even <a href="">promoted by Ember-based Discourse</a> as their way of doing it, it does require that <em>you</em>.</p> <p.</p> <p>Now that Ember RC8 is out and v1 seems to be coming around the corner, I felt it was a good time to start exploring <a href="">Ember Data</a> and see what it offers.</p> <h2>Ember Data</h2> <p:</p> <blockquote> <p.</p> </blockquote> <p>So as I mentioned, it's meant to abstract out a lot of the complexities of working with data.</p> <h2>Using Ember Data</h2> <p>If you've read my previous tutorials, you should be very familiar with how to set up a page to leverage Ember. If you haven't done so, you should go to the <a href="">Ember.js home page</a> <a href="">API docs</a> for <code>models</code>, scroll to the bottom and download the library. Additionally, you can go to the <a href=""><code>builds</code> page</a> to pull down the latest builds of any Ember-related library.</p> <p>Adding Ember Data is as simple as adding another JavaScript file to the mix like this:</p> <pre class="brush: html noskimlinks noskimwords"> ></pre> <p>This now gives you access to Ember Data's objects, method and properties.</p> <p>Without any configuration, Ember Data can load and save records and relationships served via a RESTful JSON API, provided it follows certain conventions.</p> <h2>Defining a Store</h2> <p>Ember uses a special object called a <code>store</code> to load models and retrieve data and is based off the Ember <code>DS.Store</code> class. This is how you'd define a new store:</p> <pre class="brush: js noskimlinks noskimwords">App.Store = DS.Store.extend({ ... });</pre> <p>If you remember from my previous articles, <code>"App"</code> is just a namespace created for the application level objects, methods and properties for the application. While it's not a reserved word in Ember, I would urge you to use the same name as almost every tutorial and demo I've seen uses it for consistency. </p> <p>The store you create will hold the models you create and will serve as the interface with the server you define in your adapter. By default, Ember Data creates and associates to your store a REST adapter based off the <code>DS.RestAdapter</code>.</p> <p>You can also define your own adapter for those situations where you need more custom control over interfacing with a server by using the <code>adapter</code> property within your store declaration:</p> <pre class="brush: js noskimlinks noskimwords">App.Store = DS.Store.extend({ adapter: 'App.MyCustomAdapter' });</pre> <h2>Defining Models</h2> <p>The code I listed at the top of this tutorial was an example of how to use <code>Ember.Object</code> to create the models for your application. Things change a bit when you define models via Ember Data. Ember Data provides another object called <code>DS.Model</code> which you subclass for every model you want to create. 
For example, taking the code from above:</p> <pre class="brush: js noskimlinks noskimwords">App.Item = Ember.Object.extend();</pre> <p>It would now look like this:</p> <pre class="brush: js noskimlinks noskimwords">App.Item = DS.Model.extend();</pre> <p>Remember that with <code>Ember.Object</code> I also had to write my own class method to pull back the data:</p> <pre class="brush: js noskimlinks noskimwords">App.Item.reopenClass({ all: function() { return $.getJSON('?').then(function(response) { var items = []; response.items.forEach( function (item) { items.push( App.Item.create(item) ); }); return items; }); } });</pre> <p>While I had to create my own method to return all of the results from my JSON call, Ember Data provides a <code>find()</code> method which does exactly this and also serves to filter down the results. So in essence, all I have to do is make the following call to return all of my records:</p> <pre class="brush: js noskimlinks noskimwords">App.Item.find();</pre> <p>The <code>find()</code> method will send an Ajax request to the URL defined in the store.</p> <p>This is exactly what attracts so many developers to Ember: the forethought given to making things easier.</p> <p>One thing to keep in mind is that it's important to define within the model the attributes you plan on using later on (e.g. in your templates). This is easy to do:</p> <pre class="brush: js noskimlinks noskimwords">App.Post = DS.Model.extend({ title: DS.attr('string') });</pre> <p>In my demo app, I want to use the title property returned via JSON, so using the <code>attr()</code> method, I specify which attributes the model has at my disposal.</p> <p>One thing I want to mention is that Ember Data is <em>incredibly</em> picky about the structure of the JSON returned. Because Ember leverages directory structures for identifying specific parts of your applications (remember the naming conventions we discussed in my <a href="">first Ember article</a>?), it makes certain assumptions about the way that the JSON data is structured. It requires that there be a named root which will be used to identify the data to be returned. Here's what I mean:</p> <pre class="brush: js noskimlinks noskimwords">{ 'posts': [{ 'id': 1, 'title': 'A friend of mine just posted this.', 'url': '' }] }</pre> <p>If you had defined it without the named root, like this:</p> <pre class="brush: js noskimlinks noskimwords">[{ 'id': '1', 'title': 'A friend of mine just posted this.', 'url': '' }, { 'id': '2', 'title': 'A friend of mine just posted this.', 'url': '' }]</pre> <p>Ember Data would've totally balked and thrown the following error:</p> <blockquote> <p>Your server returned a hash with the key id but you have no mapping for it.</p> </blockquote> <p>The reason is that since the model is called <code>App.Post</code>, Ember Data expects a root key called "posts" in the payload and, by convention, a URL called "posts" from which it will pull the data.
So if I defined my store as such:</p> <pre class="brush: js noskimlinks noskimwords">App.Store = DS.Store.extend({ url: '' });</pre> <p>and my model like this:</p> <pre class="brush: js noskimlinks noskimwords">App.Post = DS.Model.extend({ title: DS.attr('string') });</pre> <p>Ember Data would assume that the Ajax request made by the <code>find()</code> method would look like this:</p> <pre class="brush: js noskimlinks noskimwords"></pre> <p>And if you were making a request for a specific ID (like find(12)), it would look like this:</p> <pre class="brush: js noskimlinks noskimwords"></pre> .</p> <h2>The Demo App</h2> <p.</p> <p>First I create my application namespace (which you would do for any Ember app):</p> <pre class="brush: js noskimlinks noskimwords">// Create our Application App = Ember.Application.create({});</pre> <p>Next, I define my data store and I declare the <code>url</code> from where the model will pull the data from:</p> <pre class="brush: js noskimlinks noskimwords">App.Store = DS.Store.extend({ url: ''; });</pre> <p>In the model, I specify the attribute: <code>title</code>, which I'll use in my template later on:</p> <pre class="brush: js noskimlinks noskimwords">// Our model App.Post = DS.Model.extend({ title: DS.attr('string') });</pre> <p>Lastly, I associate the model to the route via the model hook. Notice that I'm using the predefined Ember Data method <code>find()</code> to immediately pull back my JSON data as soon as the app is started:</p> <pre class="brush: js noskimlinks noskimwords">// Our default route. App.IndexRoute = Ember.Route.extend({ model: function() { return App.Post.find(); } });</pre> <p>In the template for the root page (index), I use the <code>#each</code> Handlebars directive to look through the results of my JSON data and render the title of each of my posts:</p> <pre class="brush: html noskimlinks noskimwords"> <script type="text/x-handlebars" data- <h2>My Posts</h2> <ul> {{#each post in model}} <li>{{post.title}}</li> {{/each}} </ul> </script></p></pre> <p>That's it! No Ajax call to make or special methods to work with my data. Ember Data took care of making the XHR call and storing the data.</p> <h2>Fin</h2> <p>Now, this is incredibly simplistic and I don't want to lead you to believe it's all unicorns and puppy dogs. As I went through the process of working with Ember Data, I found myself wanting to go back to using <code>Ember.Object</code>.</p> <p>So I urge you to jump in and begin tinkering with it, especially those that have a very strong ORM background and could help shape the direction of Ember Data. Now is the best time to do that.</p> 2013-11-26T15:33:15.000Z 2013-11-26T15:33:15.000Z Rey Bango tag:code.tutsplus.com,2005:PostPresenter/net-33447 Resources to Get You Up to Speed in Ember.js <p>You've probably noticed a lot of chatter lately about the <a href="">Ember.js</a> framework and rightfully so. It aims to make it substantially easier to build single-page web apps by abstracting a lot of the complexities for writing scalable and maintainable MVC-based code. And developers are jumping on-board in droves.</p> <p><!--more--></p> <p. </p> <p.</p> <hr> <h2>The Resources</h2> <div class="webroundup"> <div> <img data- </div> </div> <h4><a href="">Nettuts' Ember Series</a></h4> <p>I'm going to be a little biased here because I'm the author of this series, but the feedback I've received tells me that I did a decent job of outlining the basics of Ember. 
<a href="">The</a> <a href="">four</a>-<a href="">part</a> <a href="">series</a> takes you through the core concepts of Ember, setting up the framework, using templates, defining your model, routing and a whole lot more.</p> <p. </p> <hr> <div class="webroundup"> <div> <img data- </div> </div> <h4><a href="">Nettuts' Free "Let’s Learn Ember" Course</a></h4> <p>Free is a great thing, especially when it comes to Ember training and we've served up a full premium course gratis to our readers. Check out the full<br> set of videos which walk you from setting up Ember to building an app.</p> <hr> <div class="webroundup"> <div> <img data- </div> </div> <h4><a href="">Emberjs.com</a></h4> <a href="">recorded a great video</a> on how to build an app in Ember which is now part of the intro section of the docs.</p> <p.</p> <p>In addition, the <a href="">community section</a> of the site helps you learn about how to contribute to the project, meet new developers or find help. And don't forget that with Ember being open-source, the source is easily available to you on <a href="">Github</a>.</p> <hr> <div class="webroundup"> <div> <img data- </div> </div> <h4><a href="">Ember Discussion Forum</a></h4> <p. </p> <p>Just note that depending on the topic or question, you may be asked to post on <a href="">Stack Overflow</a> for better results. In looking at Stack Overflow that's not necessarily a bad thing since the Ember section there is VERY active.</p> <hr> <div class="webroundup"> <div> <img data- </div> </div> <h4><a href="">EmberWatch</a></h4> <p.</p> <p>With that said, though, EmberWatch has categorized the content to make it easier to find the type of stuff you want to learn from. Whether it's a screencast, book, podcast or post, EmberWatch has you covered.</p> <p>I'd also recommend following them on <a href="">Twitter</a> for the latest updates to the site.</p> <hr> <div class="webroundup"> <div> <img data- </div> </div> <h4><a href="">Ember 101 Video Series</a></h4> <p>I've not met <a href="">Ryan Florence</a> in person, but have had enough online exchanges with him to know he's incredibly smart. He knows JavaScript REALLY well so when I saw him jump into Ember, I was incredibly excited. </p> <p>He didn't fail taking on a project called Ember 101 with the intent to help new developers get up-to-speed in Ember. The best part about it is that his videos are technically sound and FREE. </p> <p>The series walks you through all of the core aspects of jumping in Ember and Ryan made sure to include an explanation on each page as well as sample code to work with. </p> <p>I would definitely urge you to check this great resource out as you're starting your Ember journey.</p> <hr> <div class="webroundup"> <div> <img data- </div> </div> <h4><a href="">PeepCode's Fire Up Ember.js Video</a></h4> <p.</p> <p>The saying, "You get what you pay for" definitely applies here because it's super high-quality work.</p> <p></p> <hr> <div class="webroundup"> <div> <img data- </div> </div> <h4><a href="">HandlebarsJS</a></h4> <p>Ember apps rely <strong>HEAVILY</strong> on templates. In fact, in my opinion, if you're not going to use templates, not only are you in for a really rough time but you might as well just build everything without Ember. 
</p> <p.</p> <p>The Ember docs will highlight certain key parts of creating templates, especially when it comes to data binding, but for the full picture of what you can do, you should checkout the Handlebars API.</p> <hr> <div class="webroundup"> <div> <img data- </div> </div> <h4><a href="">The Discourse Discussion Platform</a></h4> <p <a href="">code of a real-world Ember system</a>. </p> <p>This is a really big deal because it's one thing to attempt to learn by the school of hard knocks and another to be able to check out a system built by highly-regarded developers like <a href="">Jeff Atwood</a> and <a href="">Robin Ward</a>. And because this is such a high-profile Ember project, it's bound to get a lot of scrutiny and code review. I can't stress enough how valuable a learning resource this is.</p> <hr> <div class="webroundup"> <div> <img data- </div> </div> <h4><a href="">Robin Ward AKA Evil Trout</a></h4> <p.</p> <p>Be sure to also catch him on <a href="">Twitter</a> and don't be afraid of his scary avatar. He's actually a pretty nice guy.</p> <p></p> <hr> <div class="webroundup"> <div> <img data- </div> </div> <h4><a href="">Billy's Billing Developer Blog</a></h4> <p>I only recently found this blog for <a href="">Billy's Billing</a>,.</p> <p>Additionally, I like the fact that they're not trying to teach you Ember basics. They're posting up things that they've obviously struggled with and want to share the solution. A great example is their <a href="">post on representing data structures as trees in Ember</a>. </p> <hr> <h2>Ramping Up</h2> .</p> <p.</p> 2013-07-24T22:36:03.000Z 2013-07-24T22:36:03.000Z Rey Bango tag:code.tutsplus.com,2005:PostPresenter/net-31517 Getting Into Ember: Part 4 <p>In my <a href="">previous tutorial</a>, I touched on how to use <code>Ember.Object</code> to define your models and work with datasets. In this section, we'll look more closely at how Ember uses the <a href="">Handlebars templating framework</a> to define your app's user interface.</p> <p><!--more--></p> <hr> <h2>Client-side Templates</h2> <p>Most server-side developers are used to using templates to define markup that will be dynamically filled on the fly. If you've ever used ASP.NET, ColdFusion, PHP or Rails then it's pretty much assured you know what I'm talking about. </p> <p>JavaScript Client-side templating has really taken off of late especially because of the focus on building more desktop-like experiences. This means that more of the processing is done on the client-side with data being mainly pulled via server-side API requests.</p> <p>I remember writing about client-side templates some time ago when the <a href="">jQuery Template plugin</a> was first released. Nearly three years later, it's still the most read post on my blog, showing how interest in client-side templating has risen. Since then, a number of other frameworks have been released, offering rich features and supportive communities. <a href="">Handlebars</a> is one of the more popular options and the framework chosen by the Ember project to power it's templating needs. This makes sense as Handlerbars was created by Ember.js co-founder and core team member, <a href="">Yehuda Katz</a>. 
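<p>If you want a feel for what Handlebars does on its own, outside of Ember, a minimal sketch looks like this (the data here is made up for illustration); <code>Handlebars.compile()</code> turns a template string into a function you call with your data:</p> <pre class="brush: js noskimlinks noskimwords">// A standalone Handlebars sketch with hypothetical data.
var source   = '<h2><strong>{{firstName}} {{lastName}}</strong></h2>';
var template = Handlebars.compile( source );
var markup   = template( { firstName: 'Rey', lastName: 'Bango' } );
// markup is now '<h2><strong>Rey Bango</strong></h2>'</pre>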
Note, though, that I'm not planning on doing comparisons between templating frameworks and I will strictly focus on Handelbars since this is what Ember.js uses by default.</p> <p>In the previous articles, I showed some very basic templates in the code:</p> <pre class="brush: html noskimlinks noskimwords"> <script type="text/x-handlebars"> <h2><strong>{{firstName}} {{lastName}}</strong></h2> </script></pre> <p.</p> <hr> <h2>The Syntax</h2> <p). </p> <p>The first thing any template needs is a script tag definition. Most of you have probably defined script tags to load your JavaScript library. In fact, you've already done this to load Handlebars into your Ember project:</p> <pre class="brush: html>There's a slight difference with using it to define a template. First, we're specifying a <code>type</code> attribute of "text/x-handlebars". This <code>type</code>":</p> <pre class="brush: html noskimlinks noskimwords"> <script type="text/x-handlebars" data- ... </script></pre> <p>When your application starts, Ember scans the DOM for <code>naming conventions</a> is so important. In the example above, this template will be automatically associated to the employee route and controller you created in your application. Again, I can't stress enough how these naming conventions will make your development much easier.</p> <p>Ember is reliant on URLs to determine the resources that need to be used and the templates that need to be rendered. Let's imagine that you had a profile page with the URL "/profile". You would have a resource, called <code>profile</code> that would load specific resources for that URL (like a route object) and you would also have a template by the same name. We reviewed defining resources and route objects in <a href="">part 2 of my Ember series</a> so if you're not sure about what I'm discussing, be sure to hop back there to refresh yourself on this.</p> <p>When you visit that URL, Ember knows it needs to load these resources and parse the template you've defined. It does this via its naming conventions, knowing that because you went to "/profile" it needs to load the resources defined in the <code>profile</code>, and render the template, named <code> <script type="text/x-handlebars"> <h2><strong>{{firstName}} {{lastName}}</strong></h2> </script></pre> <p>In this case, the <code>{{firstName}}</code> and <code>{{lastName}}</code> expressions will be parsed by Ember and replaced by actual data. In addition, Ember sets up observers so that as your data changes, your template is automatically updated and the updates reflected to the user of your app.</p> <p>So far, I've shown you a very simple example, but the takeaway is that:</p> <ul> <li>Ember uses a special type attribute to define templates. </li> <li>Templates use standard markup along with delimited expressions, which are parsed on the client-side. </li> <li>These templates have the full feature set capabilities of Handlebars. </li> <li>Ember sets up observers to dynamically update your user interface data, as it changes. </li> </ul> <p>This offers a lot of flexibility in how you structure your user interface. Let's continue looking at the features that are available.</p> <hr> <h2>Advanced Expressions</h2> <p>Remember that Ember leverages Handlebars, so you have access to its full breadth of expressions here. Conditional expressions are a must, in order to render almost anything useful; Handlebars offers quite a number of options. 
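<p>To give a quick taste of those options before we get to the article's example, here are two common ones: an <code>{{else}}</code> branch paired with <code>{{#if}}</code>, and its inverse, <code>{{#unless}}</code> (the property names below are made up for illustration):</p> <pre class="brush: html noskimlinks noskimwords">{{#if isLoggedIn}}
  <p>Welcome back, {{name}}!</p>
{{else}}
  <p>Please log in.</p>
{{/if}}

{{#unless hasItems}}
  <p>No stories to show just yet.</p>
{{/unless}}</pre>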
</p> <p>Let's say that I had a JSON dataset that looked like this:</p> <pre class="brush: js noskimlinks noskimwords"> " }</pre> <p>If I wanted to ensure that the <code>title</code> data is available, I could add a conditional "if" statement by using the <code>#if</code> expression:</p> <pre class="brush: js noskimlinks noskimwords"> {{#if item.title}} <li>{{item.title}} - {{item.postedAgo}} by {{item.postedBy}}</li> {{/if}}</pre> <p>This checks to see if <code>item.title</code> is not undefined, and continues processing the subsequent expressions for the <code>title</code>, <code>postedAgo</code> and <code>postedBy</code> data expressions. </p> <p>Since this dataset contains more than one "record", it's safe to assume that we'd probably want to loop over each element of <code>item</code>. That's where the <code>{{#each}}</code> expression comes into play. It allows you to enumerate over a list of objects. So, again, keeping in mind that templates are a combination of markup and Handlebars expressions, we can use the <code>#each</code> expression to loop through every item available within our Ember model object. Remember that the Ember model is derived from the controller, which is associated to the template, via Ember's naming conventions. </p> <pre class="brush: html noskimlinks noskimwords"> <ul> {{#each item in model}} {{#if item.title}} <li>{{item.title}} - {{item.postedAgo}} by {{item.postedBy}}</li> {{/if}} {{/each}} </ul></pre> <p>This would render out something similar to:</p> <pre class="brush: html noskimlinks noskimwords"> <ul> <li>Tearable Cloth Simulation in JavaScript - 1 hour ago by NathanKP</li> <li>Netflix now bigger than HBO - 2 hours ago by edouard1234567</li> <li>Fast Database Emerges from MIT Class, GPUs and Student&#39;s Invention - 33 minutes ago by signa11</li> <li> Connecting an iPad retina LCD to a PC - 6 hours ago by noonespecial</li> </ul></pre> <p>The distinct advantage is Ember's implicit specification of observer,s which will update your data upon an update.</p> <p>If your conditional expression needs to be more complex, you'll want to create a <a href="">computed property</a>.:</p> <ul> <li>I need a computed property to scan each item and tell me if the title matches</li> <li>I need to create a controller that can be used by each item being enumerated over in the template</li> <li>I need to update the template so that it uses this controller for each item<br> The first thing I need to do is create the new controller that will wrap each item being looped over and create the computed property within it: </li> </ul> <pre class="brush: js noskimlinks noskimwords"> App.TitleController = Ember.ObjectController.extend({ titleMatch: function() { return this.get(&#39;title&#39;) === &quot;Tearable Cloth Simulation in JavaScript&quot;; }.property() });</pre> <p>Looking at the code, we're subclassing <code>Ember.ObjectController</code> to create the controller. This is the controller that will wrap each item being looped over in our template. Next, we're creating a method, called <code>titleMatch</code> which uses the <code>get()</code> method to pull back the current title, compare it to the text I've defined, and return a boolean. Lastly, the Ember <em><a href="">property()</a></em> method is called to define the <em>titleMatch</em> method as a computed property. </p> <p>Once we have this in place, we update the template's <code>{{#each}}</code> expression to represent each item with the new controller we created. 
This is done by using the <em>itemController</em> directive. A key thing to understand is that <code>itemController</code> is a key phrase in Ember meant to associate a controller to items of a template. Don't confuse it for an actual controller name (as I did initially). The controller name is assigned to <code>itemController</code>, like this:</p> <pre class="brush: html noskimlinks noskimwords"> <ul> {{#each item in model <img {{bindAttr</pre> <p>The same can be done for attributes that don't receive a value, such as <code>disabled</code>:</p> <pre class="brush: html noskimlinks noskimwords"> <input type="checkbox" {{bindAttr <div {{bindAttr <div {{bindAttr <div> Warning! </div></pre> <p>for a <code>false</code> condition. Note that, when I specified <code>isUrgent</code> for the class, Ember dasherized the name and rendered the class as <code>is-urgent</code>. If you'd prefer to specify your own class based on the results, you can use a conditional expression similar to a ternary statement:</p> <pre class="brush: html noskimlinks noskimwords"> <div {{bindAttrEmber</a> and Handlebars site to get a good feel for their overall power. Even if you don't use Ember, Handlebars is a great framework for you to use day-to-day, and worth the investment in learning how to use it.</p> <p> Gabriel Manricks wrote a <a href="">great tutorial on Handlebars</a> here on Nettuts+ that you can use to get up to speed on the framework.</p> 2013-04-30T19:15:34.000Z 2013-04-30T19:15:34.000Z Rey Bango tag:code.tutsplus.com,2005:PostPresenter/net-31394 Getting Into Ember.js: Part 3 <p>I hope that you're <a href="">starting to see</a> that Ember.js is a powerful, yet opinionated, framework. We've only scratched its surface; there's more to learn before we can build something truly useful! We'll continue using the <a href="">Ember Starter Kit</a>. In this portion of the series, we'll review accessing and managing data within Ember.</p> <p><!--more--></p> <hr> <h2>Playing with Data</h2> <p>In <a href="">the last article</a>, we worked with a static set of color names that were defined within a controller:</p> <pre class="brush: js noskimlinks noskimwords">App.IndexRoute = Ember.Route.extend({ setupController: function(controller) { controller.set('content', ['red', 'yellow', 'blue']); } });</pre> <p>This allowed the controller to expose the data to the <em>index</em> template. That's cute for a demo, but in real life, our data source will not be a hard-coded array.</p> <p>This is where <em>models</em> comes in. <em>Models</em> are object representations of the data your application uses. It could be a simple array or data dynamically retrieved from a RESTful JSON API. The data itself is accessed by referencing the model's attributes. So, if we look at a result like this:</p> <pre class="brush: js noskimlinks noskimwords">{ "login": "rey", "id": 1, "age": 45, "gender": "male" }</pre> <p>The attributes exposed in the model are:</p> <ul> <li>login</li> <li>id</li> <li>age</li> <li>gender</li> </ul> <blockquote class="pullquote"><p>Data itself is accessed by referencing the model’s attributes.</p></blockquote> <p>As you see from the code above, you could define a static store, but you'll use <a href="">Ember.Object</a> for defining your models most of the time. By subclassing <code>Ember.Object</code>,.</p> <p>Alternatively, you could use a sister framework called <a href="">Ember Data</a>. It is an ORM-like API and persistence store, but I need to stress that it is in a state of flux as of this writing. 
It has a lot of potential, but using <code>Ember.Object</code> is much safer at this time. Robin Ward, co-founder of <a href="">Discourse</a>, wrote <a href="">a great blog post</a> on using Ember without Ember Data. It outlines their process, which I'll break down for you.</p> <hr> <h2>Defining your Models</h2> <p>In the following example, I'm going to use the <a href="">unofficial Hacker News API</a> to pull JSON-based data from the news resource. This data will be stored in my model and later used by a controller to fill a template. If we look at the data returned from the API, we can understand the properties we'll be working with:</p> <pre class="brush: js noskimlinks noskimwords">{ )\/" }</pre> <p>I want to work with the <code>items</code> property, which contains all of the headlines and story information. If you've worked with SQL databases, think of each element of <code>items</code> as a record and the property names (i.e.: <code>title</code>, <code>url</code>, <code>id</code>, etc.) as field names. It's important to grok the layout because these property names will be used as the attributes of our model object—which is a perfect segue into creating the model.</p> <blockquote><p><code>Ember.Object</code> is the main base class for all Ember objects, and we'll subclass it to create our model using its <code>extend()</code> method.</p></blockquote> <p>To do this, we'll add the following code to <em>js/app.js</em> immediately after the code that defines <code>App.IndexRoute</code>:</p> <pre class="brush: js noskimlinks noskimwords">App.Item = Ember.Object.extend();</pre> <p><code>App.Item</code> serves as the model class for the Hacker News data, but it has no methods to retrieve or manipulate that data. So, we'll need to define those:</p> <pre class="brush: js noskimlinks noskimwords"> App.Item.reopenClass({ all: function() { return $.getJSON("?").then(function(response) { var items = []; response.items.forEach( function (item) { items.push( App.Item.create(item) ); }); return items; }); } });</pre> <p>Let's break down this code. First, we use Ember's <code>reopenClass()</code> method to add our new methods to the <code>App.Item</code> class, and you pass it an object that contains our desired methods. For this example, we only need one method called <code>all()</code>: it returns all of the headlines from the Hacker News frontpage. Because jQuery is part of the deal with Ember, we have its simple Ajax API at our disposal. The API uses JSONP to return JSON data; so, I can just use <code>$.getJSON()</code> to make the request to:</p> <pre class="brush: js noskimlinks noskimwords">$.getJSON("?")</pre> <p>The "callback=?" tells jQuery that this is a JSONP request, and the data (once it's retrieved) is passed to an anonymous callback handler defined using jQuery's promises functionality:</p> <pre class="brush: js noskimlinks noskimwords">.then(function(response) {...});</pre> <blockquote class="pullquote"><p>I can easily pump in my JSON data into an Ember object.</p></blockquote> <p>The <code>response</code> parameter contains the JSON data, allowing you to loop over the records and update the local <code>items</code> array with instances of <code>App.Item</code>. Lastly, we return the newly populated array when <code>all()</code> executes. 
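<p>Since <code>all()</code> hands back the promise from <code>$.getJSON()</code>, a caller receives the populated array once that promise resolves. As a hypothetical usage example:</p> <pre class="brush: js noskimlinks noskimwords">// all() returns a promise that resolves with the array of App.Item instances.
App.Item.all().then( function( items ) {
  console.log( items.length );
  console.log( items[0].get( 'title' ) );
});</pre>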
That's a lot of words, so let me summarize:</p> <ul> <li>Create your new model class by subclassing <code>Ember.Object</code> using <code>extend()</code>.</li> <li>Add your model methods using <code>reopenClass()</code>.</li> <li>Make an Ajax call to retrieve your data.</li> <li>Loop over your data, creating an <code>Item</code> object and pushing it into an array.</li> <li>Return the array when the method executes.</li> </ul> <p>If you refresh <em>index.html</em>, you'll see nothing has changed. This makes sense because the model has only been defined; we haven't accessed it.</p> <hr> <h2>Exposing Your Data</h2> <p.</p> <p>Currently, our app has the following controller (the one that defines a static data set):</p> <pre class="brush: js noskimlinks noskimwords">App.IndexRoute = Ember.Route.extend({ setupController: function(controller) { controller.set('content', ['red', 'yellow', 'blue']); } });</pre> <p>We can directly associate our model with <code>App.IndexRoute</code> using the <code>model</code> method (AKA the model hook): </p> <pre class="brush: js noskimlinks noskimwords">App.IndexRoute = Ember.Route.extend({ model: function() { return App.Item.all(); } });</pre> <p>Remember that Ember defines your controller if you don't explicitly define it yourself, and that is what's happening in this case.</p> <blockquote><p>Behind the scenes, Ember creates <code>IndexController</code> as an instance of <code>Ember.ArrayController</code>, and it uses the model specified in the <code>model</code> method.</p></blockquote> <p>Now we just need to update the index template to access the new attributes. Opening <em>index.html</em>, we can see the following Handlebars template code:</p> <pre class="brush: html noskimlinks noskimwords">{{#each item in model}} <li>{{item}}</li> {{/each}}</pre> <p>With one small change (adding the <code>title</code> property), we can immediately see the titles returned from the Hacker News API:</p> <p><code>{{item.title}}</code></p> <p>If you refresh your browser now, you should see something similar to the following:</p> <pre class="brush: html noskimlinks noskimwords"> ></pre> <p>If you want to display more information, simply add more properties:</p> <pre class="brush: html noskimlinks noskimwords">{{item.title}} - {{item.postedAgo}} by {{item.postedBy}}</pre> <p>Refresh to see the updates you've made. That's the beauty of Handlebars; it makes it trivial to add new data elements to the user interface.</p> <p>As I mentioned before, controllers can also be used to define static attributes that need to persist throughout the life of your application. For example, I may want to persist certain static content, like this:</p> <pre class="brush: js noskimlinks noskimwords">App.IndexController = Ember.ObjectController.extend({ headerName: 'Welcome to the Hacker News App', appVersion: 2.1 });</pre> <p>Here, I subclass <code>Ember.ObjectController</code> to create a new controller for my <em>index</em> route and template to work with. 
I can now go to <em>index.html</em> and update my template to replace the following:</p> <pre class="brush: html noskimlinks noskimwords"><h2>Welcome to Ember.js</h2></pre> <p>with:</p> <pre class="brush: html noskimlinks noskimwords"><h2>{{headerName}}</h2></pre> <blockquote class="pullquote"><p><em>Models</em> are object representations of the data your application uses.</p></blockquote> <p>Handlebars will take the specified attributes in my controller and dynamically replace the <code>{{headerName}}</code> placeholder with its namesake value. It's important to reinforce two things:</p> <ul> <li>By adhering to Ember's naming conventions, I didn't have to do any wiring to be able to use the controller with the index template.</li> <li>Even though I explicitly created an <code>IndexController</code>, Ember is smart enough not to overwrite the existing model that's been associated via the route.</li> </ul> <p>That's pretty powerful and flexible stuff!</p> <hr> <h2>Next up...Templates</h2> <p>Working with data in Ember isn't difficult. In actuality, the hardest part is working with the various APIs that abound on the web.</p> <blockquote><p>The fact that I can easily pump in my JSON data into an Ember object makes management substantially easier—although I've never been a big fan of large data sets on the client-side, especially when represented as objects.</p></blockquote> <p>It's something I'll have to do more testing on, and I hope that Ember Data makes all of this trivial. </p> <p.</p> 2013-04-17T23:20:52.000Z 2013-04-17T23:20:52.000Z Rey Bango tag:code.tutsplus.com,2005:PostPresenter/net-31132 Getting into Ember.js: The Next Steps <p>In my introductory article, I went over the <a href="">basics of the Ember.js framework</a>, and the foundational concepts for building an Ember application. In this follow-up article, we'll dive deeper into specific areas of the framework to understand how many of the features work together to abstract the complexities of single-page application development.</p> <p><!--more--></p> <hr> <h2>A Basic App</h2> <p.</p> <p.</p> <p>Open <code>index.html</code> in your browser, and you'll see the following:</p> <p><strong>Welcome to Ember.js</strong></p> <ul> <li>red</li> <li>yellow</li> <li>blue</li> </ul> <p>This is not very exciting, I know, but if you look at the code that rendered this, you'll see that it was done with very little effort. If we look at "js/app.js", we see the following code:</p> <pre class="brush: js noskimlinks noskimwords">App = Ember.Application.create({}); App.IndexRoute = Ember.Route.extend({ setupController: function(controller) { controller.set('content', ['red', 'yellow', 'blue']); } });</pre> <p>At its most basic level, an Ember app only needs this one line to technically be considered an "app":</p> <pre class="brush: js noskimlinks noskimwords">App = Ember.Application.create({});</pre> <p.</p> <p>The next set of code sets up the behavior of a route, in this case, for the main <code>index.html</code> page:</p> <pre class="brush: js noskimlinks noskimwords">App.IndexRoute = Ember.Route.extend({ setupController: function(controller) { controller.set('content', ['red', 'yellow', 'blue']); } });</pre> <p.</p> <p>In this case, the root route is created by default in Ember. 
I could've also explicitly defined the route this way:</p> <pre class="brush: js noskimlinks noskimwords">App.Router.map( function() { this.resource( 'index', { path: '/' } ); // Takes us to &quot;/&quot; });</pre> <p>But Ember takes care of that for me for the "root" of my application. We'll tackle routes in more detail later.</p> <p>Going back to the following code:</p> <pre class="brush: js noskimlinks noskimwords">App.IndexRoute = Ember.Route.extend({ setupController: function(controller) { controller.set('content', ['red', 'yellow', 'blue']); } });</pre> <p>In this case, when a user hits the site's root, Ember will setup a controller that will load a sample set of data with a semantic name, called <code>content</code>. This data can later be used in the app, via this controller using that name. And that's specifically what happens in <code>index.html</code>. Open the file and you'll find the following:</p> <pre class="brush: js noskimlinks noskimwords"><script type="text/x-handlebars" data- <h2>Welcome to Ember.js</h2> <ul> {{#each item in model}} <li>{{item}}</li> {{/each}} </ul> </script></pre> <p. </p> <p>In my last article, I mentioned that naming conventions are important in Ember, and that they make connecting features easy. If you look at the template code, you'll see that the name of the template (specified via the <em>data-template-name</em>:</p> <pre class="brush: js noskimlinks noskimwords">App.IndexRoute = Ember.Route.extend({ setupController: function(controller) { controller.set('content', ['red', 'yellow', 'blue']); } });</pre> <p>The controller sets a datasource named "content" and loads it with an array of strings for the colors. Basically, the array is your model, and the controller is used to expose that attributes of the model.</p> <p' <em>each</em> directive and specifying the alias <em>model</em> which points to the datasource:</p> <pre class="brush: js noskimlinks noskimwords">{{#each item in model}} <li>{{item}}</li> {{/each}}</pre> <p>To be more precise, the data is populated into dynamically created list items, thus generating the markup for you on the fly. That's the beauty of client-side templates.</p> <p.</p> <hr> <h2>Starting from the Ground Up</h2> <p>I briefly touched on the Ember application object and the fact that it builds the foundation for your application. The <a href="">Ember guides</a> do an excellent job of outlining specifically what instantiating an Ember application object does:</p> <ul> <li>It sets your application's namespace. All of the classes in your application will be defined as properties on this object (e.g. <code>App.PostsView</code> and <code>App.PostsController</code>). This helps to prevent polluting the global scope. </li> <li>It adds event listeners to the document and is responsible for sending events to your views. </li> <li>It automatically renders the application template, the root-most template, into which your other templates will be rendered. </li> <li>It automatically creates a router and begins routing, based on the current URL.</li> </ul> <p>So this simple statement: </p> <pre class="brush: js noskimlinks noskimwords">App = Ember.Application.create({});</pre> <p>wires up a whole ton of foundational pieces that your application will depend on. It's important to note that <em>App</em> is not a keyword in Ember. It's a normal global variable that you're using to define the namespace and could be any valid variable name. 
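<p>In other words, nothing would break if you bootstrapped the app under a different name. A hypothetical variation:</p> <pre class="brush: js noskimlinks noskimwords">// Any valid variable name can serve as the application namespace.
window.MyBlog = Ember.Application.create();

MyBlog.IndexRoute = Ember.Route.extend({
  setupController: function( controller ) {
    controller.set( 'content', [ 'red', 'yellow', 'blue' ] );
  }
});</pre>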
From what I've seen, though, the variable name, <em>App</em>, is a commonly used convention in most Ember apps and is actually recommended to make it easier to copy and paste much of the sample code being created in the community.</p> <p>Taking the list above, what Ember does, via that one line, is essentially create this code for you automatically behind the scenes:</p> <pre class="brush: js noskimlinks noskimwords">//></pre> <p.</p> <p>Now you might be wondering about this "application template" getting automatically rendered and why you don't see it in <code>index.html</code>. That's because it's optional to explicitly create the <em>application</em> template. If it's in the markup, Ember will immediately render it. Otherwise, it carries on processing other parts of your application as normal. The typical use-case for the <em>application</em> template is defining global, application-wide user interface elements, such as header and footers. </p> <p>Defining the <em>application</em> template uses the same style syntax as any other template except with one small difference: the template name doesn't need to be specified. So defining your template like this:</p> <pre class="brush: js noskimlinks noskimwords"> <script type="text/x-handlebars"> <h1>Application Template</h1> </script></pre> <p>or this:</p> <pre class="brush: js noskimlinks noskimwords"> <script type="text/x-handlebars" data- <h1>Application Template</h1> </script></pre> <p>gives you the same exact results. Ember will interpret a template with no <em>data-template-name</em> as the application template and will render it automatically when the application starts. </p> <p>If you update <code>index.html</code> by adding this code:</p> <pre class="brush: js noskimlinks noskimwords"> <script type="text/x-handlebars" data- <h1>Application Template</h1> {{outlet}} </script></pre> <p>You'll now see that the contents of the header tag appears on top of the content of the index template. The Handlebars <em>{{outlet}}</em> directive serves as a placeholder in the <em>application</em> template, allowing Ember to inject other templates into it (serving as a wrapper of sorts), and allowing you to have global UI features such as headers and footers that surround your content and functionality. By adding the <em>application</em> template to <code>index.html</code>, you've instructed Ember to:</p> <ul> <li>Automatically render the <em>application</em> template</li> <li>Inject the index template into the <em>application</em> template via the Handlebars <code>{{outlet}}</code> directive</li> <li>Immediately process and render the <code>index</code> template </li> </ul> <p>An important takeaway is that all we did was add one template (<em>application</em>), and Ember immediately took care of the rest. It's these feature bindings that make Ember.js such a powerful framework to work with.</p> <hr> <h2>Setting up Routes</h2> <p. </p> .</p> <p>Looking at <code>js/app.js</code> again, you'll notice that a route has been created for the root page (<em>index</em>):</p> <pre class="brush: js noskimlinks noskimwords">App.IndexRoute = Ember.Route.extend({ setupController: function(controller) { controller.set('content', ['red', 'yellow', 'blue']); } });</pre> <p>However, there's no router instance. Remember that Ember will create a router by default if you don't specify one. 
It will also create a default route entry for the root of the application similar to this:</p> <pre class="brush: js noskimlinks noskimwords">App.Router.map( function() { this.resource( 'index', { path: '/' } ); });</pre> <p>This tells Ember that, when the root of the application is hit, it should load the resources of a route object instance called <em>IndexRoute</em> if it's available. This is why, despite no router instance being declared, the application still runs. Ember internally knows that the root route should be named <em>IndexRoute</em>, will look for it, and load its resources, accordingly. In this case, it's creating a controller that will contain data to be used in the index template.</p> <p:</p> <ul> <li>Account: (URL: /account)</li> <li>Profile (URL: /profile)</li> <li>Gallery (URL: /gallery)</li> </ul> <p>In most cases, each one of these sections will have its own unique resources that need to be loaded (e.g.: data or images). So you would create route handlers using the <em>resource()</em> method within Ember's application router object instance like this:</p> <pre class="brush: js noskimlinks noskimwords">App.Router.map( function() { this.resource( 'accounts' ); this.resource( 'profiles' ); this.resource( 'gallery' ); });</pre> <p>This allows Ember to understand the structure of the application and manage resources, accordingly. The routes definitions will correlate to individual route object instances which actually do the heavy-lifting like setting up or interfacing controllers:</p> <pre class="brush: js noskimlinks noskimwords">App.GalleryRoute = Ember.Route.extend({ setupController: function(controller) { controller.set('content', ['pic-1.png', 'pic-2.png', 'pic-3.png']); } });</pre> <p>So in the example above, when a user visits "/gallery", Ember.js instantiate the GalleryRoute route object, setup a controller with data and render the <em>gallery</em> template. Again, this is why naming conventions are so important in Ember.</p> <p>Your application may also have nested URLs, like <em>/account/new</em></p> <p>For these instances, you can define Ember resources that allow you to group routes together, like so:</p> <pre class="brush: js noskimlinks noskimwords"> App.Router.map( function() { this.resource( 'accounts', function() { this.route( 'new' ); }); });</pre> <p>In this example, we used the <code>resource()</code> method to group the routes together and the <code>route()</code> method to define the routes within the group. The general rule of thumb is to use <code>resource()</code> for nouns (Accounts and Account would both be resources even when nested) and <code>route()</code> for modifiers: (verbs like <code>new</code> and <code>edit</code> or adjectives like <code>favorites</code> and <code>starred</code>).</p> <p>Apart from grouping the routes, Ember builds internal references to the controllers, routes and templates for each of the group routes specified. This is what it would look like (and again it touches on Ember's naming conventions):</p> <p>"/accounts":</p> <ul> <li>Controller: AccountsController</li> <li>Route: AccountsRoute</li> <li>Template: accounts (yes it's lowercase)</li> </ul> <p>"/accounts/new":</p> <ul> <li>Controller: AccountsNewController</li> <li>Route: AccountsNewRoute</li> <li>Template: accounts/new</li> </ul> <p>When a user visits "/accounts/new" there's a bit of a parent/child or master/detail scenario that occurs. 
Ember will first ensure that the resources for <em>accounts</em> are available and render the <em>accounts</em> template (this is the master part of it). Then, it will follow-up and do the same for "/accounts/new", setting up resources and rendering the <em>accounts.new</em> template.</p> <p>Note that resources can also be nested for much deeper URL structures, like this:</p> <pre class="brush: js noskimlinks noskimwords"> App.Router.map( function() { this.resource( 'accounts', function() { this.route( 'new' ); this.resource( 'pictures', function() { this.route( 'add' ); }); }); });</pre> <hr> <h2>Next Steps</h2> <p>I've covered a lot of material in this post. Hopefully, it has helped to simplify some of the aspects of how an Ember application functions and how routes work.</p> <p>We're still not finished, though. In the next entry, I'll dive into Ember's features for pulling back data and making it available with your app. This is where models and controllers come in, so we'll focus on understanding how the two work together.</p> 2013-04-04T22:08:37.000Z 2013-04-04T22:08:37.000Z Rey Bango tag:code.tutsplus.com,2005:PostPresenter/net-30709 Getting Into Ember.js .</p> <p><!--more--></p> <blockquote class="pullquote"> <p> The old saying is true: "Use the best tool for the task." </p> </blockquote> <p."</p> <blockquote> <p>I recently did an interview with the <a href="">Ember.js team</a>; it was motivated by my desire to get to know what I've come to call "the new hotness": <a href="">Ember.js</a>.</p> </blockquote> <p.</p> <p.</p> .</p> <p>So let's kick this off.</p> <hr> <h2>Core Concepts</h2> <blockquote class="pullquote"> <p> Ember.js is not a framework for building traditional websites. </p> </blockquote> <p.</p> <p>I mentioned previously that Ember leverages the MVC pattern for promoting proper code management and organization. If you've never done MVC-based development, you should definitely read up on it. Nettuts+ has a <a href="">great article on the topic here</a>..</p> <p>Ember also relies on client-side templates... a <strong>LOT</strong>. It uses the <a href="">Handlebars templating library</a>:</p> <pre class="brush: js noskimlinks noskimwords"> <ul> {{#each people}} <li>Hello, {{name}}!</li> {{/each}} </ul></pre> <p.</p> <p>Handlebars is an incredibly powerful client-side templating engine and I would recommend reviewing not only <a href="">the Ember guides</a>, but the <a href="">Handlebars website itself</a> to get a full grasp of the options available. You'll be using it quite a bit.</p> <hr> <h2>Setting up Ember</h2> <p>Ember.js relies on additional libraries, so you'll need to go grab a copy of <a href="">jQuery</a> and <a href="">Handlebars</a>..</p> <p>The easiest way to get the files you need is to go to the Ember.js Github repo and pull down the <a href="">Starter Kit</a>. It's a boilerplate for you to start off with. At the time of this writing, it contains:</p> <ul> <li>Ember 1.0 RC1</li> <li>Handlerbars 1.0 RC3</li> <li>jQuery 1.9.1</li> </ul> <p>There's also a basic html template that is coded to include all of the associated libraries (jQuery, Ember, etc.) and along with an example of a Handlebars template and "app.js", which includes code for kicking off a basic Ember app.</p> <pre class="brush: js.</p> <p>When you look at the Starter Kit code, it may look like your typical website code. In some respects, you're right! 
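<p>After all, the boilerplate's include order looks much like any other page's. As a rough sketch (the exact filenames depend on the versions you download):</p> <pre class="brush: html noskimlinks noskimwords"><!-- Roughly what the Starter Kit's index.html pulls in -->
<script src="js/libs/jquery-1.9.1.js"></script>
<script src="js/libs/handlebars-1.0.rc.3.js"></script>
<script src="js/libs/ember-1.0.rc.1.js"></script>
<script src="js/app.js"></script></pre>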
Once we start organizing things, though, you'll see how building an Ember app is different.</p> <hr> <h2>The Lay of Ember Land</h2> <p>Before you start hacking at code, it's important to understand how Ember.js works and that you grok the moving parts that make up an Ember app. Let's take a look at those parts and how they relate to each other.</p> <hr> <h2>Templates</h2> <p>Templates are a key part of defining your user interface. As I mentioned previously, Handlebars is the client-side library used in Ember and the expressions provided by the library are used extensively when creating the UI for your application. Here's a simple example:</p> <pre class="brush: js noskimlinks noskimwords"> <script type="text/x-handlebars"> <h2><strong>{{firstName}} {{lastName}}</strong></h2> </script></pre> <p>Notice that the expressions are mixed into your HTML markup and, via Ember, will dynamically change the content displayed on the page. In this case, the {{firstName}} and {{lastName}} placeholders will be replaced by data retrieved from the app.</p> <p>Handlebars offers a lot of power, via a flexible API. It will be important for you to understand what it offers.</p> <hr> <h2>Routing</h2> <blockquote class="pullquote"> <p> An application's Router helps to manage the state of the application. </p> </blockquote> <p>An application's Router helps to manage the state of the application and the resources needed as a user navigates the app. This can include tasks such as requesting data from a model, hooking up controllers to views, or displaying templates.</p> <p>You do this by creating a route for specific locations within your application. Routes specify parts of the application and the URLs associated to them. The URL is the key identifier that Ember uses to understand which application state needs to be presented to the user.</p> <pre class="brush: js noskimlinks noskimwords"> App.Router.map( function() { this.route( 'about' ); // Takes us to "/about" });</pre> <p>The behaviors of a route (e.g.: requesting data from a model) are managed via instances of the Ember route object and are fired when a user navigates to a specific URL. An example would be requesting data from a model, like this:</p> <pre class="brush: js noskimlinks noskimwords"> App.EmployeesRoute = Ember.Route.extend({ model: function() { return App.Employee.find(); } });</pre> <p>In this case, when a user navigates to the "/employees" section of the application, the route makes a request to the model for a list of all employees.</p> <hr> <h2>Models</h2> <blockquote class="pullquote"> <p> An object representation of the data. </p> </blockquote> <p>Models are an object representation of the data your application will use. It could be a simple array or data dynamically retrieved from a RESTful JSON API, via an Ajax request. The <a href="">Ember Data</a> library offers the API for loading, mapping and updating data to models within your application.</p> <hr> <h2>Controllers</h2> <p>Controllers are typically used to store and represent model data and attributes. They act like a proxy, giving you access to the model's attributes and allowing templates to access them to dynamically render the display. 
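<p>For instance, as a hypothetical sketch with made-up property names, a controller can expose a value derived from the model, and the template simply reads it as <code>{{fullName}}</code>:</p> <pre class="brush: js noskimlinks noskimwords">App.EmployeeController = Ember.ObjectController.extend({
  // Derived from the model's firstName/lastName attributes;
  // the template can render it with {{fullName}}.
  fullName: function() {
    return this.get( 'firstName' ) + ' ' + this.get( 'lastName' );
  }.property( 'firstName', 'lastName' )
});</pre>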
This is why a template will always be connected to a controller.</p> <p.</p> <p>You can also store other application properties that need to persist but don't need to be saved to a server.</p> <hr> <h2>Views</h2> <p.</p> <hr> <h2>Naming Conventions</h2> <p":</p> <pre class="brush: js noskimlinks noskimwords"> App.Router.map( function() { this.resource( 'employees' ); });</pre> <p>I would then name my components, like this:</p> <ul> <li> <strong>Route object:</strong> <em>App.EmployeesRoute</em> </li> <li> <strong>Controller:</strong> <em>App.EmployeesController</em> </li> <li> <strong>Model:</strong> <em>App.Employee</em> </li> <li> <strong>View:</strong> <em>App.EmployeesView</em> </li> <li> <strong>Template:</strong> <em>employees</em> </li> </ul> <p:</p> <p>var App = Ember.Application.create();</p> <p>That single line creates the default references to the application's router, controller, view and template.</p> <ul> <li> <strong>Route object:</strong> <em>App.ApplicationRoute</em> </li> <li> <strong>Controller:</strong> <em>App.ApplicationController</em> </li> <li> <strong>View:</strong> <em>App.ApplicationView</em> </li> <li> <strong>Template:</strong> <em>application</em> </li> </ul> <p>Going back to the "employees" route that I created above, what will happen is that, when a user navigates to "/employees" in your application, Ember will look for the following objects:</p> <ul> <li><em>App.EmployeesRoute</em></li> <li><em>App.EmployeesController</em></li> <li>the <em>employees</em> template</li> </ul> <p.</p> <p.</p> <p>Also, I chose to use the <em>resource()</em> method to define my route, because in this scenario, I would most likely have nested routes to manage detail pages for specific employee information. We'll talk about nesting later in the series.</p> <p>The key takeaway is that by using a consistent naming scheme, Ember can easily manage the hooks that bind these components together without your needing to explicitly define the relationships via a ton of code.</p> <blockquote> <p> Full details of <a href="">Ember's naming conventions</a> are provided on the project's site and is a <strong>must-read</strong>. </p> </blockquote> <hr> <h2>Next Up: Building an App</h2> <blockquote class="pullquote"> <p> In the next part of the series, we'll dive into the code to create the basis for our application. </p> </blockquote> <p <a href="">Tuts+ Premium</a>, which will soon offer a full course that walks you through building an Ember-based application!</p> <p><em>As I noted at the beginning of this article, Ember.js Core Team leads <a href="">Yehuda Katz</a> and <a href="">Tom Dale</a> reviewed this for accuracy, and gave it the thumbs up. Ember team approved! See you in a bit!</em></p> 2013-03-15T04:20:48.000Z 2013-03-15T04:20:48.000Z Rey Bango tag:code.tutsplus.com,2005:PostPresenter/net-30258 Master Developers: The Ember.js Core Team <p>Single.</p> <p><a href="">Ember.js</a> is a very serious framework for doing just that. 
Check out the interview I did with Ember.js Core Team leads, <a href="">Yehuda Katz</a> and <a href="">Tom Dale</a>, as they discuss what prompted them to begin the Ember project, its design philosophy, and where it fits into an already crowded library ecosystem.</p> <p><!--more--></p> <div class="tutorial_image"> <a href=""><img data-</a> </div> <hr> <div class="question"> <h4> <span>Q</span> Tell us about your professional backgrounds.</h4> </div> <p><strong>Yehuda:</strong> I was an Accounting major in college, with a whole bunch of interesting minors (Journalism, Philosophy, History, TV/Radio). I loved learning, but somehow missed the fact that Accounting as a profession was pretty boring, at least to me.</p> .</p> <p>I got extremely lucky to see an internal job posted at my first place of employment for a web designer, and thought "I did print design in college, that's the same thing right?"</p> <blockquote class="pullquote"> <p>I was also aware of Knockout, but I wasn’t a huge fan of packing all of the binding information into HTML attributes. </p> </blockquote> <p.</p> <p. ;)</p> .</p> <p.</p> <p!</p> <div class="tutorial_image"> <a href=""><img data-</a> </div> <p>I was also aware of Knockout, but I wasn't a huge fan of packing all of the binding information into HTML attributes. This was mostly an aesthetic concern, but sometimes aesthetics matter.</p> <p.</p> <p.</p> <p.</p> <p>When Strobe was sold to Facebook, I formed Tilde with partners in crime Tom, Carl and Leah, to continue working on this project, outside of the confines of a VC-backed company. I've been there ever since.</p> <hr> <p><strong>Tom:</strong>.)</p> <p>After I graduated from college, I was working at the Genius Bar of an Apple Store in Southern California. The software they gave us for managing the paperwork for repairs was quite painful - you know, the standard enterprise stuff you'd expect. </p> <p.</p> <p.</p> <blockquote class="pullquote"> <p>Yehuda and I both felt like we needed to be independent to accomplish our goals in open source.</p> </blockquote> <p.</p> <p>A few months later, we were close to shipping our first version. Since SproutCore had not yet reached 1.0, we were working closely with Charles Jolley and the rest of the SproutCore team, and I got to know them quite well.</p> <p.</p> <p.</p> <p.</p> <p.</p> <blockquote> <p> We primarily make our money consulting, which we use to pay for time to work on Ember.js and other open source projects. We're also working on some products that we think developers will love. </p> </blockquote> <hr> <div class="question"> <h4> <span>Q</span> The million dollar question, "Why build another framework?"</h4> </div> <p><strong>Tom:</strong>.</p> <blockquote class="pullquote"> <p>SproutCore was way ahead of the curve when it came to JavaScript frameworks.</p> </blockquote> <p>SproutCore flipped that model on its head. The server became just a delivery mechanism for a JSON API. The interesting UI work started happening entirely on the client, in JavaScript.</p> <p.</p> <p>Take bindings, for example. Any competent engineer could build a simple binding system in a day. But it turns out there are a lot of edge cases, race conditions and infinite loops possible in larger systems. Getting something rock solid has taken a long time.</p> <p.</p> <hr> <p><strong>Yehuda:</strong>!"</p> <p.</p> <hr> <div class="question"> <h4> <span>Q</span> I believe Ember.js came out of your work on SproutCore. 
What prompted the rename and new effort that's now Ember.js?</h4> </div> <p><strong>Tom:</strong>.</p> <p.</p> <blockquote class="pullquote"> <p>Backbone's popularity was a wake-up call for us.</p> </blockquote> <p.</p> <p>While a lot of people think of SproutCore as just "native-like controls for the web," the reality is that it also embraced the architectural patterns of Cocoa that have made Mac and iOS apps so successful.</p> <p>We knew we could bring those tools to web developers without the cruft of the SproutCore view layer, while making the API easier to get started with. Additionally, we wanted new users to be able to lean on their existing skills as much as possible.</p> <p.</p> <hr> <blockquote class="pullquote"> <p>The goal of Ember was to give developers back the tools they were used to using.</p> </blockquote> <p><strong>Yehuda:</strong>.</p> <p.</p> <hr> <div class="question"> <h4> <span>Q</span> Ember is still a baby in terms of frameworks. What have been the challenges of launching a new OSS effort and gaining traction?</h4> </div> <p><strong>Yehuda:</strong> Open Source projects look easy from the outside, but they're probably the ultimate chicken-and-egg problem. You need a successful project to bring in contributors and users, but those same users are the ones who will make your project successful in the first place.</p> <blockquote class="pullquote"> <p>the reward for dealing with the instability is a much more stable 1.0</p> </blockquote> <p>The only real solution is to personally bootstrap the project by being the chicken and egg at the same time. You need to personally work with contributors and users at the same time as you build up the project to a reasonable degree of usability (and eventually, stability).</p> <p>.</p> <p.</p> <div class="tutorial_image"> <img data- </div> <p.</p> <hr> <p><strong>Tom:</strong>.</p> <blockquote class="pullquote"> <p>Anyone can decide to write a new framework or library and publish it instantly.</p> </blockquote> <p.</p> <p.</p> <hr> <div class="question"> <h4> <span>Q</span> In terms of MVC, I've read that Ember.js takes a slightly different approach to the paradigm than other frameworks. Can you explain the differences and design choices?</h4> </div> <p><strong>Tom:</strong> Ironically, Ember.js is probably closest to the original MVC, as used in Smalltalk in the 70's. Since then, server-side MVC frameworks like Ruby on Rails have become very popular, and, in most developers' heads, subsumed the term.</p> <blockquote class="pullquote"> <p>Perhaps the biggest difference from other JavaScript frameworks is that we put the router front and center. </p> </blockquote> <p.</p> <p.</p> .</p> <hr> <p><strong>Yehuda:</strong> I actually don't think there is any one MVC paradigm that other frameworks take. The main thing that everyone shares is a desire to decouple the HTML that ends up in the browser from the data models that power them.</p> <blockquote class="pullquote"> <p>Ember’s approach is inspired by Cocoa</p> </blockquote> <p>Backbone, for example, stops there. You get Models and Views, and if you want other layers you can roll your own (and many do).</p> <p>Other frameworks use the "controller" layer as something very close to views. These frameworks often use terminology like "View Model" to describe this layer. I believe that Knockout uses this approach.</p> <p.</p> <hr> <div class="question"> <h4> <span>Q</span> As I'm going through the learning process, I feel a lot of Rails influence in the Ember design. 
Am I off on this?</h4> </div> <blockquote class="pullquote"> <p>We realized that Ruby on Rails had long ago figured out how to orient a framework around URLs.</p> </blockquote> <p><strong>Tom:</strong>.</p> <p>When thinking about the problem, we realized that Ruby on Rails had long ago figured out how to orient a framework around URLs. In most Rails apps, models are just resources that you expose using conventional routes.</p> <p>So, the Rails inspiration you feel in Ember.js is us pairing the architecture of native apps with the URL-centricity of Rails. And, like Rails, we also value convention over configuration!</p> <hr> <p><strong>Yehuda:</strong>.</p> <p.</p> <p.</p> <hr> <div class="question"> <h4> <span>Q</span> Along those lines, what are the key things that new developers to Ember.js should know about?</h4> </div> <p><strong>Tom:</strong> Templates are connected to controllers, which are themselves connected to a model (or a collection of models). These connections are set up in the router. Building large Ember.js applications is just repeating this pattern, over and over. Template, controller, model.</p> <ul> <li>Ember.js does a LOT for you and in some cases, it feels like black magic. I know a lot of developers don't like that. What "constructive feedback" (i.e.: don't let Tom answer) would you offer them which this sort of black boxing of code?</li> <li?</li> </ul> .</p> <p>I'm not sure what the largest application out there is. A lot of businesses are betting big on Ember.js and building their next-generation web applications with the framework. That means that we don't get to see the source code for most Ember apps!</p> <hr> <p><strong>Yehuda:</strong> Ember has been used on some really big apps, like Zendesk, Square, Travis CI and Discourse. All of these apps make use of large amounts of data that are pushed through the Ember binding system.</p> <p>Square, in particular, has done really amazing work combining Ember.js and Crossfilter to allow exploration of hundreds of thousands of data points without going back to the server to display the results.</p> <hr> <div class="question"> <h4> <span>Q</span> Over the last year, the API has gone through numerous revisions. This puts high maintenance demands on projects that want to use the framework. Where are you guys at with locking down the API and how will you handle deprecating features in the future?</h4> </div> <p><strong>Tom:</strong>.</p> <div class="tutorial_image"> <img data- </div> <p <a href="">SemVer</a> standard, which means that apps you build today will be compatible with the framework until we release a version 2.0. Which, for the sake of our sanity, hopefully won't be for quite a while!</p> <hr> <p><strong>Yehuda:</strong>.</p> <hr> <div class="question"> <h4> <span>Q</span> The <a href="">Discourse</a> team just launched their site and made the use of Ember.js a key talking point. What was your involvement with that effort and in your opinion, what were the positives and negatives learned from that experience?</h4> </div> <p><strong>Tom:</strong> The <a href="">Discourse</a> guys have done just an incredible job. 
I am still stunned at what a polished product two engineers were able to build using a framework that was undergoing heavy development.</p> <p.</p> <div class="tutorial_image"> <a href=""><img data-</a> </div> <p.</p> <hr> <p><strong>Yehuda:</strong>.</p> <blockquote> <p>What Discourse showed is that you can build a content site with rich interactions without giving up the URL-friendliness of static sites. And they show up on Google just fine!</p> </blockquote> <p.</p> <hr> <div class="question"> <h4> <span>Q</span> In terms of project team, you've purposely chosen to keep it small. Tell me about the decision behind that and how you feel the community can best contribute.</h4> </div> <p><strong>Tom:</strong>.</p> <blockquote class="pullquote"> <p>Yehuda and I have a very specific vision for the framework</p> </blockquote> <p.</p> <p>People get impatient and even angry while you're still thinking about the best way to solve a problem. But the end result is worth it.</p> <p>It's hard to find other developers who have the resolve to say "no," and not just rush something in because it fixes a particular problem. But, again, it's worth it.</p> <hr> <p><strong>Yehuda:</strong>.</p> <hr> <div class="question"> <h4> <span>Q</span> Ember.js has a lot of company in the MVC framework space including Angular, Backbone, Knockout.js, JavaScriptMVC and more. What sets Ember apart from all of these options? Why would someone go with Ember over something else?</h4> </div> <p><strong>Tom:</strong> Ember is the only framework that both wants to solve the entire problem, top-to-bottom, and that also cares about crafting an API and documentation that is approachable and user-friendly.</p> <p.</p> <hr> <p><strong>Yehuda:</strong> Over the past year, one thing that we've really taken to heart is that if people are building web applications (as opposed to native applications), they need to make URLs the front-and-center way that they structure and organize their application.</p> <p>Other frameworks provide support for URLs, but only Ember starts new developers with this crucial aspect of the web experience from day one.</p> <hr> <div class="question"> <h4> <span>Q</span> What do you see as the key deciding factors for choosing something like Ember.js instead of using solely a library like jQuery or MooTools?</h4> </div> <blockquote class="pullquote"> <p>But neither give you any architectural tools.</p> </blockquote> <p><strong>Tom:</strong>.</p> <hr> <p><strong>Yehuda:</strong>.</p> <p>In my opinion, if someone is truly torn about whether they should use a low-level library like jQuery or an application framework, they should probably go with jQuery until they hit issues that would benefit from a framework.</p> <hr> <div class="question"> <h4> <span>Q</span> I noticed that Ember uses jQuery. Can you tell us about this choice?</h4> </div> <p><strong>Tom:</strong>.</p> <hr> <div class="question"> <h4> <span>Q</span> In terms of mobile, what do developers need to consider when using Ember?</h4> </div> <p><strong>Tom:</strong> Like any framework, Ember can't prevent your app from doing algorithmically inefficient stuff. 
Sometimes you can get away with inefficiencies on the desktop that become deal-breakers on mobile devices, with their constrained CPUs.</p> .</p> <p>That being said, many companies are building their business on Ember apps running on mobile devices.</p> <hr> <p><strong>Yehuda:</strong>.</p> <p>Libraries, like <a href="">Ember ListView</a> (by core team member Erik Bryn) also provide ways to reuse DOM when working with large amounts of data without giving up the nice APIs of the Ember templating system.</p> <p.</p> <hr> <div class="question"> <h4> <span>Q</span> With Ember keenly focused on the desktop-like paradigm, what recommendations or resources can you offer developers who want to make the jump into single-page apps?</h4> </div> <div class="tutorial_image"> <a href=""><img data-</a> </div> <p><strong>Tom:</strong> The <a href="">Ember.js guides</a> are great for understanding the framework. We're improving them all the time usually rolling out updates at least once a week and now that the RC is out, we're working hard on material especially designed to get people up and running as fast as possible.</p> <hr> Thanks so much to Yehuda and Tom for taking the time to speak with Nettuts+! If you have any of your questions, leave a comment below, and they just might get back to you! 2013-02-26T19:32:07.000Z 2013-02-26T19:32:07.000Z Rey Bango tag:code.tutsplus.com,2005:PostPresenter/net-26836 Game On: Backbone and Ember <p>So you've accepted the challenge to go thick on the client-side; well done. You've considered all the frameworks out there and are unsure which one to choose? You're not alone. Read on.</p> <p><!--more--></p> <p>My experience, when learning the way of writing client-side apps is proving to be steep and hard. It's not easy to deliberately choose to use <code>MV*</code> on the client for someone who wrote JavaScript, based entirely on jQuery and its plugins. This is an entirely new paradigm; it requires basic programming skills and a considerable understanding of JavaScript (the language) design. If your experience relates to mine, then read on! </p> <p>I will be explaining the main differences between two of the most popular JavaScript clientside frameworks: <a href="">Backbone.js</a> and <a href="">Ember.js</a>. Each of these tools has strong points, as well as weaknesses that might help you make a more thoughtful choice.</p> <blockquote> <p>Disclaimer: as software professionals, we must deal with diversity of opinion. Backbone and Ember are results of opinionated and experienced professionals, like you and me. One tool isn't better than the other; they just serve different crowds and, ergo, solve different problems. Thanks <a href="">Trek</a> for the solid advice.</p> </blockquote> <hr> <h2>The Philosophy</h2> <blockquote class="pullquote"> <p>Backbone is much easier to learn than Ember.</p> </blockquote> <p>First and foremost, you need to understand that Backbone and Ember particularly serve slightly different crowds. Regarding complexity, Backbone is much easier to learn than Ember. However, it's said that once you learn Ember, it hardly gets any more complex. Take <a href="">Trek's word on it</a>. If you're just getting started with some real JavaScript, then perhaps Backbone is your tool. If, however, you know that you're going to deal with a lot more than just a simple use case or two, then you might prefer Ember.</p> <h3>Backbone</h3> <p><a href="">Jeremy Ashkenas</a> built Backbone so it would be possible to <cite>take the truth out of the <code>DOM</code></cite>. 
What he means by this is: whatever business you did using only jQuery / Mootools / Prototype could and should be better extracted into pure JavaScript structures - objects, if you will. Instead of using <code>DOM</code> elements to define your business elements and behavior, Backbone invites you to do it the other way around. JavaScript objects are the core and the <code>DOM</code> is merely a representation of that data.</p> <p>With Backbone, you have some given assertions:</p> <ol> <li>Data lies in JavaScript objects, not the <code>DOM</code> </li> <li>Event handling lies in JavaScript objects, not jQuery event bindings</li> <li>The way you save data in a backend server is done through the objects that contain the data</li> </ol> <p>You are given complete control over the way you build your app. Backbone was meant to give you a basic way of designing your model objects and how these interact with each other through event bindings.</p> <p>Rendering <code>HTML</code> to the <code>DOM</code> is of your responsibility. You are free to choose any template engine: Mustache, DoT, Handlebars, Underscore, etc. Backbone contains a <code>View</code> prototype that has the responsibility of articulating the <code>DOM</code> and your JavaScript core.</p> <h3>Ember</h3> <p>When <a href="">Tilde</a> started building Ember, it did so with a far more challenging goal: to <em>provide standard conventions in client-side development, eliminating as much boilerplate as possible</em>. The result is a much more ambitious framework that aims for a predictable architecture and steady development.</p> <p>Ember shares some common points with Backbone in the way it tries to pull data and behavior out of the <code>DOM</code> by providing extendable JavaScript prototypes, but it does this in a very different manner than Backbone does.</p> <p>Ember stands on:</p> <ol> <li> <strong>Two-way data binding</strong>: objects in Ember are able to register bindings between one another. That way, whenever a bound property changes, the other one is updated automatically.</li> <li> <strong>Computed properties</strong>: if you wish to have a property that is a result of a function, you can create them and assign a property as computed by that function.</li> <li> <strong>Template auto-updates</strong>: when an object is updated in your app, all the views currently displayed in the screen that are bound to that object automatically reflect the change, with no boilerplate.</li> </ol> <hr> <h2>The DOM - Views</h2> <p>Both Backbone and Ember have common key concepts, such as <em>views</em>. They both represent <code>DOM</code> communication, respectively. The way they accomplish this concept are somewhat different, though.</p> <p>I'll use the Todo use case for the examples below, inspired by the <a href="">TodoMVC</a> showcase.</p> <h3>Backbone</h3> <p>A Backbone View might something like this:</p> <pre class="brush: js noskimlinks noskimwords">var TaskView = Backbone.View.extend({ tagName : "li" , template : "task-template" , render : function() { // your code to render here. } , events : { "click .mark-done" : "mark_as_done" , "change .body" : "update_body" } , mark_as_done : function() { /* code here */ } , update_body : function() { /* code here */ } });</pre> <p>This is simply the definition of your view. You will need to instantiate one if you want it to be in the page. 
Something like this will do the trick:</p> <pre class="brush: js noskimlinks noskimwords">var task_view = new Task({ model : task_model }); $("body").append(task_view.el);</pre> <p>Notice that we're passing a model in so you can keep a reference to the data object that feeds the template. The <code>template</code> property inside the view can be used to call an outside template, via an identifier. I've used something like this in the past:</p> <pre class="brush: js noskimlinks noskimwords">var TaskView = Backbone.View.extend({ template : "#task-template" , render : function() { this.$el.html( Mustache.render($(this.template).html()) , this.model); } // snip });</pre> <h3>Ember</h3> <p>Ember has a different approach to views. In fact, the convention states that views should talk to controllers and not models directly. This is a good practice, if you intend to follow a stable architecture. I'll explain the sample for the same view:</p> <pre class="brush: js noskimlinks noskimwords">var TaskView = Ember.View.extend({ templateName : "task-template" , mark_as_done : function() { /* code here */ } , update_body : function() { /* code here */ } });</pre> <p>That's it. But where's all the rendering stuff? Well, Ember lifts that boilerplate for you. Simply say what the template is, the controller that holds the data object, and then you just need to append it to the <code>DOM</code>.</p> <pre class="brush: js noskimlinks noskimwords">var task_view = TaskView.create({ controller : task_controller // Ember.ObjectController }); task_view.append();</pre> <p>When creating a new view instance, it will bind the controller's content (which can be an <code>Ember.Object</code> or a list of them) to the view. When you decide to append the view to the <code>DOM</code>, it will look up the template and place the generated markup for you.</p> <h3>Thoughts</h3> <blockquote class="pullquote"> <p>Backbone is more explicit and less magical.</p> </blockquote> <p>Backbone is more explicit and less magical. You create a <code>View</code>, tell it what template to use and how, register the events and do what you have to do. They own the page. That's a great start for those coming from a jQuery background. However, when something needs to be updated in the <code>DOM</code>, you will face some boilerplate.</p> <p>With Ember, updates are automatic. You say what template it is and event callbacks are functions inside the view object. Any time an object is updated, the view automatically updates the page.</p> <p>Some common event bindings are built into Ember and others must be put into the template. It's good for those who come from a backend perspective, as it reduces boilerplate in a considerable way.</p> <hr> <h2>The Data - Models</h2> <p>Models in Backbone and Ember are quite similar. They hold information for a business entity.</p> <h3>Backbone</h3> <p>An example of a Backbone model looks like this:</p> <pre class="brush: js noskimlinks noskimwords">var TaskModel = Backbone.Model.extend();</pre> <p>With this simple line of code, you have a working model with <code>REST</code>ful communication built-in. You get methods like <code>save</code> to persist the data and <code>fetch</code> to load it for free; no plugin is required. Validation is also built into the way data is saved by providing a <code>validate</code> callback, which returns a boolean that tells the record to be saved or not. 
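<p>As a quick sketch of that hook (my example, not from the article; note that in current Backbone versions <code>validate</code> returns an error value when the attributes are invalid and returns nothing when they are valid, rather than a plain boolean):</p> <pre class="brush: js noskimlinks noskimwords">var TaskModel = Backbone.Model.extend({
  validate : function(attrs) {
    // Returning anything truthy marks the model as invalid,
    // which cancels save() and triggers an "invalid" event.
    if (!attrs.body) {
      return "A task needs a body.";
    }
  }
});</pre>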
The implementation of the validation is still for the developer to do.</p> <p>To create a new task, you instantiate a new <code>TaskModel</code>.</p> <pre class="brush: js noskimlinks noskimwords">var task = new TaskModel({ body : "Mow the lawn" , done : false });</pre> <p>You may inject as many attributes as you like, because the task's attribute list isn't strict (think of it as <em>schemaless</em>). You can still set a <code>defaults</code> property when extending <code>Backbone.Model</code>.</p> <h3>Ember</h3> <p>With Ember, there are no models, just objects. It might look something like this:</p> <pre class="brush: js noskimlinks noskimwords">var TaskObject = Ember.Object.extend();</pre> <p>Similar to Backbone, you need to extend from <code>Ember.Object</code> to create an object class. It inherits all the basic functionality for a class with callbacks for when it gets changed, created and destroyed, amongst other features. It does not, however, have backend communication out of the box. <a href=""><code>Ember.Data</code></a> is being developed as an extension of <code>Ember.Object</code> by the Ember core team to fulfill that need. It's already usable but not stable as far as the documentation tells.</p> <p>Ember objects are also considered to be <em>schemaless</em>. To inject defaults into Ember objects, you extend <code>Ember.Object</code> by passing an object with as many attributes as you require.</p> <pre class="brush: js noskimlinks noskimwords">var TaskObject = Ember.Object.extend({ body : "Mow the lawn" , done : false });</pre> <h3>Thoughts</h3> <p>Backbone has a consolidated way of syncing up with a persistence layer over <code>REST</code> and that's a good convention there. It's one less thing you have to configure in order to work with a backend server.</p> <p>Ember is working its way toward making <code>Ember.Data</code> ready for production use, and it looks promising. Even so, the particularity of Ember objects having two way bindings makes it dead easy to perform connections between objects.</p> <p>At this point in your reading, you have an inflection point between Backbone's stability in communicating with the backend server and Ember's bindings. Whatever's most important to you should determine your decision.</p> <hr> <h2>The Glue - Controllers</h2> <p>This is where the frameworks part ways. They have a huge conceptual gap on how to glue things together in your app. While Backbone strives to remain as simple and flexible as possible, Ember sacrifices codebase size for a better architecture. It's a tradeoff, really.</p> <blockquote> <p>Warning: the following examples don't contain HTML template samples.</p> </blockquote> <h3>Backbone</h3> <p>As I noted, Backbone aims for simplicity that converts to flexibility and it achieves such attributes precisely through <em>the lack of a controller class</em>. Most of the workhorse is distributed around views, collections, models and the router (should you choose to use Backbone's <code>Router</code>).</p> <p>Considering a list of tasks that needs to be managed, it would require:</p> <ul> <li>A <code>Collection</code> to store the tasks.</li> <li>A <code>Model</code> to store a task's information.</li> <li>A <code>View</code> to represent the collection.</li> <li>Another <code>View</code> to represent each task.</li> <li>A <code>Router</code> to manage URLs.</li> </ul> <p>Most of the application logic will live in the views, as they connect models to the <code>DOM</code>. 
There is no clear distinction of responsibilities, as the view does everything. It can be good for small applications that don't require a solid architecture.</p> <p>To display a list of tasks, you would end up with something like this:</p> <h4>Collection</h4> <pre class="brush: js noskimlinks noskimwords">var TaskList = Backbone.Collection.extend({ model : Task });</pre> <h4>Model</h4> <pre class="brush: js noskimlinks noskimwords">var TaskModel = Backbone.Model.extend();</pre> <h4>Views</h4> <pre class="brush: js noskimlinks noskimwords"; }, });</pre> <p><!----></p> <pre class="brush: js noskimlinks noskimwords">var TaskView = Backbone.View.extend({ tagName: "tr", render: function() { this.$el.html(M.to_html(template, this.model.attributes)); return this; } });</pre> <h4>Router</h4> <pre class="brush: js noskimlinks noskimwords">var Router = Backbone.Router.extend({ initialize: function() { this.tasks = new TaskList; this.view = new TaskListView({ collection: this.tasks }); }, routes: { "": "tasks_list", }, tasks_list: function() { this.view.render(); $(".bucket:first").html(this.view.el); }, start: function() { Backbone.history.start({ pushState: true, root: "/tickets/" }); } });</pre> <p>Notice that the collection doesn't have a template of its own; rather, it delegates to a single task view being rendered and appended to the final result being put on the page.</p> <h3>Ember</h3> <p>The number of classes required to have the same setup is slightly bigger. </p> <ul> <li>Instead of a <code>Collection</code>, you would have an <code>ArrayController</code>, which works very much alike.</li> <li>You would have an extra <code>ObjectController</code> for managing a single task. </li> <li>Instead of a <code>Model</code>, you would have an <code>Object</code> / <code>DS.Model</code>, which work alike.</li> <li>You would have the same kind of <code>View</code>s.</li> <li>A <code>Router</code> is also responsible for managing URLs.</li> </ul> <p>You might be thinking that the two frameworks are not too different from one another. It's rather tempting, but it's not exactly true. Some particular differences are:</p> <ol> <li>The controller is responsible for interacting with the data objects, not the View.</li> <li>The views are responsible for handling the <code>DOM</code>, not the controller.</li> <li>The views communicate with the controller, not directly to the data objects.</li> <li>The data that feeds the view template is actually a binding to the controller's data.</li> <li>The router is more of a <em>state manager</em>, which includes much more than handling URLs.</li> </ol> <p>The separation of concerns is good in the long term. Controller handles data, views handle the <code>DOM</code>, period. 
This kind of decoupled, cohesive, boilerplate-free design allows for more focused testability.</p> <p>The implementation to display the same list of tasks would be something like the following, considering a full Ember application:</p> <h4>Application root architecture</h4> <pre class="brush: js noskimlinks noskimwords">window.App = Ember.Application.create(); App.ApplicationController = Ember.ObjectController.extend(); App.ApplicationView = Ember.View.extend({ templateName: "application" });</pre> <h4>Object</h4> <pre class="brush: js noskimlinks noskimwords">App.Task = Ember.Object.extend();</pre> <h4>Controllers</h4> <pre class="brush: js noskimlinks noskimwords">App.TasksController = Ember.ArrayController.extend({ content: [] });</pre> <h4>View</h4> <pre class="brush: js noskimlinks noskimwords">App.TasksView = Ember.View.extend({ templateName: "my-list" });</pre> <h4>Router</h4> <pre class="brush: js noskimlinks noskimwords">App.Router = Ember.Router.extend({ root: Ember.Route.extend({ index: Em.Route.extend({ route: '/', connectOutlets: function(router) { router.get('applicationController').connectOutlet('tasks'); } }) }) });</pre> <p>In Ember's case, not much needs to be said about how things are done inside. All of that boilerplate is taken away so you can focus on what really matters in your app: you define a task object, a task list controller with an array called <code>content</code>, and your view; the router simply combines them all and puts the result on the page.</p> <h3>Thoughts</h3> <blockquote class="pullquote"> <p>After realizing how Ember really works, it starts to become liberating.</p> </blockquote> <p>Predictably, this segment was the hardest to grasp in both frameworks. Backbone was definitely easier to learn, and its flexible nature gives you control over the way objects and the <code>DOM</code> interact. This might be good for you if you really need that kind of flexibility but still want to maintain a structure for your app's logic on the JavaScript side.</p> <hr> <h2>What Sets Them Apart?</h2> <blockquote class="pullquote"> <p>Ember was meant to lift the common burdens of JavaScript development in the browser.</p> </blockquote> <p>So far, the whole point of showing the two tools off has been to acknowledge their single and noble purpose: to delegate <em>power</em> to the client-side, <em>through both structure and method</em>.</p> <p><em>Backbone's core strength is definitely its KISS approach</em>. It provides you with the minimum needed to let go of the <code>DOM</code> as the core supporter of your app, and to start using real JavaScript objects that can be tested and designed properly.</p> <p>Backbone comes packed with collections, models, views and the router, amongst other small utilities. You are free to do what you please with them.</p> <p>Ember, on the other hand, was built with a different mindset, as it aims for a much more conventional and opinionated way of building web apps. It tackles a set of common problems, such as boilerplate, data binding and <code>DOM</code> templating, so you don't have to worry about them from the start. <em>Ember was meant to lift the common burdens of JavaScript development in the browser</em>.</p> <p>Ember comes packed with objects, controllers, auto-updating views, state machines, bindings, observers and a router (which is also a state machine), all of them conjured with a good dose of conventions. You have an architecture already designed and ready for you to begin working, without losing focus.</p> <hr> <h2>Conclusion</h2> <p>So, in the end, which framework should you choose?
<em>Both</em>.</p> <h3>It's all about the JavaScript</h3> <p>If you're unsure how even jQuery does all its magic, then start learning Backbone. It's easier to begin with, and the <a href="">documentation</a> is dead simple to read and understand. After you're done, start building something. Go dirty. Check <a href="">these tutorials</a> if you need some help.</p> <blockquote> <p>If you're still in the dark, read <a href="">Yehuda Katz</a>'s entries on how <a href="">JavaScript</a> <a href="">works</a>.</p> </blockquote> <p>Once you get a better vision of how the JavaScript works as a language, you will begin to gain a better grasp of <em>how the objects interact with each other</em>. When you do, go for Ember. It's more complicated at first, but don't give up. Start reading the <a href="">docs</a> and the <a href="">guides</a>. You might want to check <a href="">Trek Glowacki's blog entry</a> just before getting your hands dirty.</p> <h3>My bottom line</h3> <p>Personally, I'm leaning towards Ember; I enjoy its robustness at a macro scale, and I also prefer its conventions. Backbone is a more malleable and easier tool for smaller apps or small features inside an existing app.</p> <p>I'm still learning both, and have a few challenges to tackle:</p> <ul> <li>Automatic tests: how to do them and which testing suite is better. Qunit or Jasmine? Headless (thinking PhantomJS), Node or browser test runner? Not sure yet.</li> <li>File uploads</li> <li>Internationalization</li> </ul> <p>What are your thoughts on this whole debacle? Do you have any challenges in mind? Any difficulties or impediments? Let me know! </p> 2012-08-31T20:32:59.000Z 2012-08-31T20:32:59.000Z José Mota
https://code.tutsplus.com/categories/emberjs.atom
CC-MAIN-2020-10
en
refinedweb
Utilities for scikit-learn. from skutil.estimators import ColumnIgnoringClassifier # use a classifier that can't handle string data as # an inner classifier in some stacked model, for example Contents 1 Installation pip install skutil 2 Basic Use skutil is divided into several sub-modules by functionality: 2.1 Estimators ColumnIgnoringClassifier - An sklearn classifier wrapper that ignores some input columns. classifier_cls_by_name - Get an sklearn classifier class by name. Also supports lowercasing and some shorthands (e.g. svm for SVC, logreg and lr for LogisticRegression). 3 Contributing Package author and current maintainer is Shay Palachy (shay.palachy@gmail.com); you are more than welcome to approach him for help. Contributions are very welcome. 3.1 Installing for development Clone: git clone git@github.com:shaypal5/skutil.git Install in development mode: cd skutil pip install -e . 3.2 Running the tests To run the tests use: pip install pytest pytest-cov coverage cd skutil pytest 3.3 Adding documentation The project is documented using the numpy docstring conventions, which were chosen as they are perhaps the most widely used conventions that are both supported by common tools such as Sphinx and result in human-readable docstrings. When documenting code you add to this project, follow these conventions. 4 Credits Created by Shay Palachy (shay.palachy@gmail.com).
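A short usage sketch based on the description above (illustrative only; check the package docstrings for the exact behaviour of these helpers):

from sklearn.datasets import load_iris
from skutil.estimators import classifier_cls_by_name

# Resolve an sklearn classifier class from a shorthand name
# ('logreg' maps to LogisticRegression, per the list above).
logreg_cls = classifier_cls_by_name('logreg')
clf = logreg_cls()

X, y = load_iris(return_X_y=True)
clf.fit(X, y)
print(clf.score(X, y))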
https://pypi.org/project/skutil/0.0.2/
CC-MAIN-2020-10
en
refinedweb
SYNOPSIS #include <nng/nng.h> int nng_pipe_getopt(nng_pipe p, const char *opt, void *val, size_t *valszp); int nng_pipe_getopt_bool(nng_pipe p, const char *opt, bool *bvalp); int nng_pipe_getopt_int(nng_pipe p, const char *opt, int *ivalp); int nng_pipe_getopt_ms(nng_pipe p, const char *opt, nng_duration *durp); int nng_pipe_getopt_ptr(nng_pipe p, const char *opt, void **ptr); int nng_pipe_getopt_sockaddr(nng_pipe p, const char *opt, nng_sockaddr *sap); int nng_pipe_getopt_string(nng_pipe p, const char *opt, char **strp); int nng_pipe_getopt_size(nng_pipe p, const char *opt, size_t *zp); int nng_pipe_getopt_uint64(nng_pipe p, const char *opt, uint64_t *u64p); DESCRIPTION The nng_pipe_getopt() family of functions is used to retrieve option values from the pipe p; the form to use depends on the type of the option named by opt. nng_pipe_getopt() This is untyped, and can be used to retrieve the value of any option. A pointer to a buffer to receive the value is supplied in val, and the size of the buffer shall be stored at the location referenced by valszp. When the function returns, the actual size of the data copied (or that would have been copied if sufficient space were present) is stored at the location referenced by valszp. If the caller's buffer is not large enough to hold the entire object, then the copy is truncated. Therefore the caller should check for truncation by verifying that the size written back through valszp does not exceed the size of the buffer originally supplied. nng_pipe_getopt_bool() This function is for options which take a Boolean ( bool). The value will be stored at bvalp. nng_pipe_getopt_int() This function is for options which take an integer ( int). The value will be stored at ivalp. nng_pipe_getopt_ms() This function is used to retrieve time durations ( nng_duration) in milliseconds, which are stored in durp. nng_pipe_getopt_size() This function is used to retrieve a size into the pointer zp, typically for buffer sizes, message maximum sizes, and similar options. nng_pipe_getopt_sockaddr() This function is used to retrieve an nng_sockaddr into sap. nng_pipe_getopt_string() This function is used to retrieve a string into strp. This string is created from the source using nng_strdup() and consequently must be freed by the caller using nng_strfree() when it is no longer needed. nng_pipe_getopt_uint64() This function is used to retrieve a 64-bit unsigned value into the value referenced by u64p. This is typically used for options related to identifiers, network numbers, and similar. RETURN VALUES These functions return 0 on success, and non-zero otherwise.
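EXAMPLE The following sketch is illustrative and not part of the original manual page. It assumes p is a valid pipe obtained elsewhere (for example, from nng_msg_get_pipe()) and queries the address of the remote peer using the standard NNG_OPT_REMADDR option:

nng_sockaddr sa;
int rv;

/* Typed convenience form; no buffer-size bookkeeping is needed. */
rv = nng_pipe_getopt_sockaddr(p, NNG_OPT_REMADDR, &sa);
if (rv != 0) {
    /* handle the failure, e.g. log nng_strerror(rv) */
}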
https://nng.nanomsg.org/man/v1.2.2/nng_pipe_getopt.3.html
CC-MAIN-2020-10
en
refinedweb
bt_ldev_invoke_decode_event() This function is used to parse the Bluetooth event data received over the invoke interface when "bb.action.bluetooth.EVENT" occurs. Synopsis: #include <btapi/btdevice.h> int bt_ldev_invoke_decode_event(const char *invoke_data, int invoke_len, int *event, const char **bdaddr, const char **event_data) Since: BlackBerry 10.3.0 Arguments: - invoke_data The data provided by the invoke interface. - invoke_len The length of the data provided by the invoke interface. - event Returns the event which triggered the invoke. - bdaddr A pointer to the Bluetooth address of the event from within the invoke data. This pointer is valid only for the lifespan of the invoke data. - event_data A pointer to the event data from within the invoke data. This pointer is valid only for the lifespan of the invoke data. Library: libbtapi (For the qcc command, use the -l btapi option to link against this library) Description: The data that is provided must have the mime-type of "application/vnd.blackberry.bluetooth.event". You must call bt_device_init() before calling this function. Returns: - EAGAIN: bt_device_init() was not called. - EPROTO: The data provided is not properly formatted for the required mime-type. - EINVAL: One or more of the arguments provided are invalid. - ESRVRFAULT: An internal error has occurred. Last modified: 2014-05-14
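Example: The following sketch is illustrative and is not part of this reference page. It assumes that invoke_data and invoke_len were obtained from the invocation framework, that bt_device_init() has already been called, and that the function follows the usual convention of returning zero on success (the list above documents only the error codes).

int         event;
const char *bdaddr;
const char *event_data;

int rc = bt_ldev_invoke_decode_event(invoke_data, invoke_len,
                                     &event, &bdaddr, &event_data);
if (rc == 0) {
    /* bdaddr and event_data point into invoke_data, so copy them
     * if they need to outlive the invoke data. */
}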
https://developer.blackberry.com/native/reference/core/com.qnx.doc.bluetooth.lib_ref/topic/bt_ldev_invoke_decode_event.html
CC-MAIN-2020-10
en
refinedweb
From Mockup to Angular Material In this article we’ll take a user interface sketch and convert it into the visual structure of a new Angular application using Angular Material to quickly style and skin our new application. Specifically, we’ll be making the main user interface on a text based adventure game project called “Doggo Quest”. We’ll start with the user interface mockup that I created for my article on using event modeling for game design and we’ll end up with the visual structure and appearance of our application. Angular & Angular Material For those not familiar, Angular is a single page application (SPA) JavaScript framework used for designing complex but maintainable web applications. Angular is maintained by Google and uses TypeScript, dependency injection, npm, web pack, and other technologies from the start of every new project in order to equip new projects for success. Angular Material is a library that provides Angular components to support Google’s Material style UI popularized by the Android and Chrome operating systems as well as Google’s online services. Install Pre-Requisites If this is your first time building an Angular application, there are a few things you’ll need to do before you can get started. I’ll cover the basics, but if you encounter something unexpected I recommend you check out Angular’s setup guide for more information. While there are some fantastic editors for Angular applications (I recommend WebStorm or Visual Studio Code), everything I’m going to show you in this article you can do with a basic text editor like notepad and a command prompt. First, you’ll need to install Node Package Manager (NPM) in order to install Angular’s command line tools. Because the instructions for this can vary by operating system, I recommend you go to the npm website and follow their instructions. Once npm is installed, open a command line window and run this command from any folder: npm i -g @angular/cli That tells npm to install the package found at @angular/cli and to install it g lobally in npm's shared packages directory for the entire machine. You should see something like this: Once this completes, you are now set up with Angular’s Command Line Interface (CLI) and can use that for the remainder of this article. Create and Run the Project Now, let’s create our Angular application. Navigate to the folder that will hold the project folder we’re about to create and then use your command line to run: ng new project-name but replace project-name with the name of your project. I used doggo-quest for mine given the project I'm creating. Angular will prompt you several times. I’ll walk you through each prompt as of Angular 8.3 at the time of this writing. First, Angular asks if you want to use routing (navigation management) features. For my small single page application this is not needed and only adds size and complexity so I answered no. Next, Angular will ask you which form of stylesheet technology you prefer to work with. This is entirely your preference as all of these are translated to CSS during the build process, but the examples in this article are all written using SCSS. Once you’ve answered these questions, you’ll see Angular CLI generate a number of files: Once complete, your new directory will be present and you can navigate into it via a command like cd doggo-quest (again, the name of your project). If you list files in the directory you may notice that Angular CLI has automatically created a git repository in this directory and made an initial commit. 
This is just one example of Angular setting you up for good practices over time and helping you get a solid start on new projects. You can launch the default project by telling Angular to serve up the application and open it in your web browser by running ng serve -o You should see something like this: Replace the Default Content Now that you’re up and running, let’s look at how we can customize this application. First, lets go into the Src\Index.html page and replace its entire contents with Hello Doggo Quest and save the file. This will replace the application's current view with placeholder text, and change the screen to something more manageable. Unless you stopped the ng serve operation earlier, Angular should auto-refresh and render the changes you just made inside of the browser. This makes proofing changes much faster and increases the speed at which you can try new things. Note: Sometimes when making code changes Angular can get hung up on code that does not yet compile or requires other files to be modified. If this happens, you may see errors in your console window and may need to hit Control + C to stop the server, then start it again via ng serve. Translate a Mockup to Angular Components Let’s take another look at the mockup we’ll be converting. This is a pretty simple user interface but chopping it up into more manageable portions will lead to more maintainable code, so let’s look at how we can split this screen into components and sub-components. For those unfamiliar, an Angular component is essentially a custom defined HTML tag that you can include in other places (including inside other components). Angular gives components views in the form of HTML, styling in the form of view-specific CSS (more on this later), and logic in the form of a TypeScript class definition for the view. Think of a component as a region of a page that can be reused as needed. Here’s an annotated view of the mockup with its Angular components identified. Keep in mind that components can contain other components. The larger components are listed on the right side of the diagram with the more granular components listed on the left side. The game’s components are: App.Component- Main container for the application Header.Component- Contains the game title and high level game information, including the score Footer.Component- Holds the command entry component and the game over component StoryView.Component- Contains the game's narrative. New entries will be added to the bottom CommandEntry.Component- Allows the player to enter in game commands and submit them GameOver.Component- Shown when the game has ended and allows the user to restart the game PlayerCommand.Component- Represents something the player typed into the engine at one point and is now part of the story StoryText.Component- An individual paragraph within the game's narrative. Housed inside the StoryView.Component Create Application Components Okay, it’s time to start creating these components and working with them. First, go back to the command line and stop the server if it is currently running by hitting Control + C. Next we need to create our Angular components via the command line. While this may seem like an odd way to create components, it’s a very simple way of creating them and Angular creates multiple files in one command respecting your project’s settings. To create the first component, we’ll run ng g c header. This tells Angular to generate a component named header.component and creates a handful of files. 
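<p>For reference, the console output of that command looks roughly like this (an approximation from memory; exact details vary by Angular version):</p> <pre>CREATE src/app/header/header.component.html
CREATE src/app/header/header.component.spec.ts
CREATE src/app/header/header.component.ts
CREATE src/app/header/header.component.scss
UPDATE src/app/app.module.ts</pre>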
The .html file is the component's user interface. The .scss file is an empty style sheet for styles that can be defined that will only impact this component (unless you customize Angular's CSS scoping behavior). I generally prefer to work with global styling, but everyone has different preferences. The .ts file contains the component definition as well as any custom logic you may add later. The spec.ts file is an auto-generated Jasmine test file. We won't be talking about testing in this article, but later on in the series we'll be discussing Angular testing in depth. Go ahead now and generate the rest of the components listed above keeping in mind the following guidelines: - Avoid upper case characters - If you would normally use a space or an uppercase letter to distinguish between words, use a dash instead (e.g. story-viewinstead of Story Viewor storyView) - Do not include the word component. Angular will append this automatically. Customize the App Component Once that’s done, modify app.component.html to have the following content: These three tags use your application’s prefix ( app by default) and reference components we just created to embed them into the generated view. Run ng serve -o and you should see something like the following: If you look at any of those components, you’ll see their HTML content is just “<ComponentName> works!”, so this is actually working exactly as we would expect it to. Add Angular Material Before we start structuring our content, let’s bring in Angular Material to help us style and structure the application visuals. Stop the server if it is currently running and then run ng add @angular/material. This tells Angular to add in the dependencies needed via npm for Angular Material and save those dependencies appropriately. Angular CLI will prompt you for a theme to use of the four default ones available. I plan on doing some customization, so I chose Custom, but feel free to select a default one if you’d like. Next, Angular CLI will ask you about HammerJS and browser animations. I recommend you say yes to both as HammerJS will matter if you want to develop complex user interfaces for mobile and the animations will make everything just a little nicer. Your experience should look something like this and complete after a few minutes: Import Material Components Now that we have Angular Material installed, we still need to tell Angular that we want to include it as something that can be injected into various components. We do this by going into app.module.ts and adding in an import for each component we want to use at the top. In this article, I'll be using 5 controls from Material so I'll import those now: import {MatCardModule} from '@angular/material/card'; import {MatInputModule} from '@angular/material/input'; import {MatButtonModule} from '@angular/material/button'; import {MatToolbarModule} from '@angular/material/toolbar'; import {MatIconModule} from '@angular/material/icon'; Next, scroll down to the imports list for the module definition and I’ll declare those things as things that the module imports: imports: [ BrowserModule, BrowserAnimationsModule, MatInputModule, MatCardModule, MatButtonModule, MatToolbarModule, MatIconModule ], This is one of my least favorite aspects of Angular but, thankfully, you do not have to do this too frequently once a project has reached momentum. Customize Angular Material Styles Material themes are just a combination of colors that get fed into some common functions, so customizing a theme isn’t too hard at all. 
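<p>For context, the custom theme that the Angular Material schematic generates in <em>styles.scss</em> looks roughly like the following sketch (palette names follow the project name; the default palettes and exact mixin names may differ slightly between versions):</p> <pre class="brush: js noskimlinks noskimwords">@import '~@angular/material/theming';
@include mat-core();

$doggo-quest-primary: mat-palette($mat-indigo);
$doggo-quest-accent: mat-palette($mat-pink, A200, A100, A400);
$doggo-quest-warn: mat-palette($mat-red);

$doggo-quest-theme: mat-light-theme($doggo-quest-primary, $doggo-quest-accent, $doggo-quest-warn);
@include angular-material-theme($doggo-quest-theme);</pre>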
In my case, I want a dark theme with blue and green accent colors. Go into the Styles.scss file in your src directory and take a look and see what I mean. In my case, I tweaked the colors named in the primary and accent color definitions to cyan and green respectively: $doggo-quest-primary: mat-palette($mat-cyan); $doggo-quest-accent: mat-palette($mat-green, A200, A100, A400); I also changed the mat-light-theme function reference to mat-dark-theme. That’s really all I had to do in order to get the high-level customization that I wanted. Note: I did add some custom global styling classes to the bottom of this file that are needed to give the final appearance of the application. They’re not core to what we’re talking about here so I’m not going over them, but check out the file in full if you are curious. Build the Header Let’s start out simple by adding the title bar from the mockup. Go into header.component.html and paste in the following code: <mat-toolbar> <span>Doggo Quest</span> <span class="spacer"></span> <span>Score: {{Score}}</span> </mat-toolbar> Some things to point out here: mat-toolbaruses Angular Material's Toolbar component (note the matprefix from that component library) to render some custom content we provide it. - The {{Score}}prefix tells Angular to bind that area of the user interface to the value of a Scoreproperty on the component's class Next, let’s go into the component’s .ts file and make a few changes to support the changes in the view we just added. First, add Input to the list of things we're importing from Angular Core: import {Component, Input, OnInit} from '@angular/core'; Next, define a new property inside the class as follows: @Input() public Score = 0; The @Input() syntax tells Angular that when the component is being declared inside of another component, that component can pass in a value to Score if they choose. For example, I could define <app-header Score=42 /> in another component and it would set the Score property appropriately. When you run the application you should now see the title bar: Conditional Display Let’s build the footer component now and illustrate an Angular directive in the process. In footer.component.html add the following HTML: Here we’re referencing two of our components, the command entry component where the player can type in a command to the game and the game over component where the game summary will be displayed. Note the *ngIf syntax here. This is an Angular directive telling Angular to only emit these components if the condition in that clause is true. This is why we want the game over component to only show if GameOver is true and the command entry component to only show if GameOver is false (! indicates negation in JavaScript). In this case we’re binding to GameOver inside of our component, which I just define as a constant false for this article: @Input() public GameOver = false; Now if you run you should see “Command Entry Component Works!” instead of “Footer Works!”. That’s progress since the footer component is hosting the correct child component. Forms and Input: Command Entry Component We’ll detail the command entry component next. While this component is fairly dumb for this article in that it doesn’t actually do anything with anything you type into it, Angular Material actually has some pretty awesome form controls. Use the following HTML: Okay, so a number of things here: mat-form-fieldindicates a region that Angular Material will influence for styling purposes. 
Although there are multiple children, everything in and below this level is the text box. - You’ll notice a few Angular Material attributes on controls here: matInput, mat-button, matSuffixand mat-icon-button. This is not uncommon for Angular Material in using to decorate existing HTML elements without using custom components everywhere. mat-iconrepresents a Google style icon that will be shown in the input box's right edge. The contents of this element refers to a specific icon in the iconset. Take a look at the documentation for additional details. - We’re manipulating focus and autocompletion on the input control via attributes since this resulted in a better user experience. Again, this control does very little right now, but you can interact with it and see the placeholder animation and theme working properly: Cards, Events, and Properties Now that we have Material in the application and working properly, we can refine our application’s main body to use it. Go back into app.component.html and replace its HTML with this: <mat-form-field <input matInput <button mat-button matSuffix mat-icon-button <mat-icon>send</mat-icon> </button> </mat-form-field> A few notes here: mat-cardand its related tags all refer to the Angular Material card component which is a very simple way of adding a polished visual structure to your application. (window:resize)="onResize()"is syntax that tells Angular to invoke an onResizemethod in this component whenever the window's resizeevent fires. I use this event to adjust a ContentHeightproperty. [style.height.px]="ContentHeight"tells Angular to maintain an inline style on that element that sets its height to the numeric value from ContentHeighton the control and that the value represents pixel values. #scrollMetells Angular to generate an id="scrollMe"on the element, but also allows Angular's view engine to hook up that element to the code behind. This is somewhat complex code and something I want to gloss over for an introductory article since this code is part of a system to auto-expand the card to fill the available height of the user interface, but I thought it might be beneficial to explain the syntax in this article. Story and Player Command Text The story and player command text nodes will show in the main visual area and are just controls that render content in the appropriate styling. The story text component uses simple HTML with <p>{{Text}}</p> serving as its entire HTML template. Similarly, the player command component uses a slightly more verbose template: <code> <mat-iconchevron_right</mat-icon> {{Text}} </code> This just adds an icon and preformatted text styling to the template. Both controls rely on an entry in their class definition: @Input() public Text: string; Neither one will render if you run the application because no component is including them yet. Let’s change that now. Add Story Text Going into story-view.component.html you can now customize the template to use the two components we just customized: This is just placeholder text, but it’s enough to finish our user interface for now. The Final Product Now that everything is working properly, you can take a step back and look at the result of this article: While this is certainly not a complex or even functional application, you can see how Angular let us quickly get started and Angular Material gave us the visual framework we needed to get up and running with minimal work on our end. The full code is available on GitHub on the MockupToAngularMaterial tag. 
The application is only beginning, so stay tuned for additional articles detailing how to make the rest of the application work properly, from click events and event handling to line rendering to text parsing and state management logic. It’s going to be a lot of fun. Originally published on February 5, 2020.
https://medium.com/javascript-in-plain-english/from-mockup-to-angular-material-6a14fe0f8743?source=post_page-----6a14fe0f8743----------------------
CC-MAIN-2020-10
en
refinedweb
SYNOPSIS #include <nng/nng.h> nng_pipe nng_msg_get_pipe(nng_msg *msg); DESCRIPTION The nng_msg_get_pipe() function returns the nng_pipe object associated with message msg. On receive, this is the pipe from which the message was received. On transmit, this would be the pipe that the message should be delivered to, if a specific peer is required. The most common use case is to obtain information about the peer from which the message was received. This can be used to provide different behaviors for different peers, such as a higher level of authentication for peers located on an untrusted network. The nng_pipe_getopt() function is useful in this situation. RETURN VALUES This function returns the pipe associated with the message. A positive pipe ID identifies a specific pipe; a non-positive value indicates that no specific pipe is associated with the message. ERRORS None.
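A hedged usage sketch follows; it assumes a socket sock that has already been opened and connected, elides error handling, and uses only calls documented elsewhere in the nng manual (nng_recvmsg(), nng_pipe_id(), nng_msg_free()):

#include <nng/nng.h>
#include <stdio.h>

/* Sketch: report which peer each received message came from. */
static void handle_one(nng_socket sock)
{
    nng_msg *msg;
    if (nng_recvmsg(sock, &msg, 0) != 0) {
        return; /* receive failed; error handling elided */
    }
    nng_pipe p = nng_msg_get_pipe(msg);
    if (nng_pipe_id(p) > 0) { /* non-positive id: no pipe associated */
        printf("message arrived on pipe %d\n", nng_pipe_id(p));
        /* nng_pipe_getopt() could be used here to inspect the peer. */
    }
    nng_msg_free(msg);
}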
https://nng.nanomsg.org/man/v1.2.2/nng_msg_get_pipe.3.html
CC-MAIN-2020-10
en
refinedweb
VTK/Java Wrapping Contents Introduction Some other documentation can be found here: - Configuration You basically just need to turn VTK_WRAP_JAVA on in CMake and build. Bartlomiej Wilkowski has created a nice tutorial of configuring Java wrapping with VTK. Windows To run a sample application provided in VTK against your VTK build directory (with an installed VTK remove "Debug"): $ set PATH=%PATH%;your_vtk_build_dir\bin\Debug $ java -cp your_vtk_build_dir\bin\vtk.jar vtk.sample.Demo Mac To run a sample application provided in VTK against your VTK build directory (with an installed VTK replace "bin" with "lib"): $ export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:your_vtk_build_dir/bin $ java -cp your_vtk_build_dir/bin/vtk.jar vtk.sample.Demo Linux To run a sample application provided in VTK against your VTK build directory (with an installed VTK replace "bin" with "lib"): $ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:your_vtk_build_dir/bin $ java -cp your_vtk_build_dir/bin/vtk.jar vtk.sample.Demo Sample Code (from VTK/Wrapping/Java/vtk/sample/SimpleVTK.java) /** * An application that displays a 3D cone. A button allows you to close the * application. */ public class SimpleVTK extends JPanel implements ActionListener { private static final long serialVersionUID = 1L; private vtkPanel renWin; private JButton exitButton; // ----------------------------------------------------------------- // Load VTK library and print which library was not properly loaded static { if (!vtkNativeLibrary.LoadAllNativeLibraries()) { for (vtkNativeLibrary lib : vtkNativeLibrary.values()) { if (!lib.IsLoaded()) { System.out.println(lib.GetLibraryName() + " not loaded"); } } } vtkNativeLibrary.DisableOutputWindow(null); } // ----------------------------------------------------------------- public SimpleVTK() { super(new BorderLayout()); // build VTK Pipeline vtkConeSource cone = new vtkConeSource(); cone.SetResolution(8); vtkPolyDataMapper coneMapper = new vtkPolyDataMapper(); coneMapper.SetInputConnection(cone.GetOutputPort()); vtkActor coneActor = new vtkActor(); coneActor.SetMapper(coneMapper); renWin = new vtkPanel(); renWin.GetRenderer().AddActor(coneActor); // Add Java UI components exitButton = new JButton("Exit"); exitButton.addActionListener(this); add(renWin, BorderLayout.CENTER); add(exitButton, BorderLayout.SOUTH); } /** An ActionListener that listens to the button. */ public void actionPerformed(ActionEvent e) { if (e.getSource().equals(exitButton)) { System.exit(0); } } public static void main(String s[]) { SwingUtilities.invokeLater(new Runnable() { @Override public void run() { JFrame frame = new JFrame("SimpleVTK"); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); frame.getContentPane().setLayout(new BorderLayout()); frame.getContentPane().add(new SimpleVTK(), BorderLayout.CENTER); frame.setSize(400, 400); frame.setLocationRelativeTo(null); frame.setVisible(true); } }); } } Some key points from this code to note: - vtkNativeLibrary.LoadAllNativeLibraries() is required to load the dynamic libraries from VTK's C++ core. This call must happen before any VTK code executes, which is why it is put in a static block in our application class. - vtkNativeLibrary.DisableOutputWindow(null) simply hides any debugging information that may otherwise pop up in case of VTK error or warning. You can also provide a file path so any error will be written to disk. - SwingUtilities.invokeLater(...) 
is called because technically all GUI code, including setting up and using a VTK render window, should happen in the Swing event thread. Threading Sample Code (from VTK/Wrapping/Java/vtk/sample/Demo.java) In this demo, we want to illustrate the correct way to perform VTK tasks on separate threads in Java. The first thing to note is that VTK is inherently NOT thread-safe, which immediately rules out several possible use cases. Calling methods on the same VTK objects across threads, even if they seem to be read-only, should be avoided. The safest approach is to "hand off" objects from one thread to another, so one thread is completely done with an object before another thread begins manipulating it. Reclaiming memory for VTK objects is particularly tricky to perform across threads, as deleting a single VTK object may potentially cause the entirety of VTK objects to be modified. While we expose the Delete() method to explicitly delete VTK objects, if you are using VTK objects in multiple threads this is discouraged unless you are aware of its potential issues. VTK provides a special garbage collector for VTK objects in the Java layer that may be run manually or automatically at intervals if memory reclaiming is needed. For this example, we will have a checkbox for turning on and off the VTK garbage collection while an application is running. The application creates new actors using a separate processing thread, which are then added dynamically to the VTK renderer. This enables data to be loaded and processed without causing lags in the frame rate of the interactive 3D view. We need to implement a worker that is capable of producing actors. In the sample code we produce sphere actors with shrunk polygons in order to have something interesting that takes a bit of time to create. These will execute on separate threads to keep the rendering interactive. public static class PipelineBuilder implements Callable<vtkActor> { private vtkActor actor; ... @Override public vtkActor call() throws Exception { // Set up a new actor actor = new vtkActor(); ... // Wait some time for other thread to work Thread.sleep((long) (Math.random() * 500)); // Return return actor; } } A separate worker's job is to add actors to the renderer when ready. public static class AddActorRunnable implements Runnable { private vtkActor actorToAdd; private vtkRenderer renderer; private vtkPanel panel; void setRenderer(vtkPanel panel) { this.renderer = panel.GetRenderer(); this.panel = panel; } void setActor(vtkActor a) { this.actorToAdd = a; } @Override public void run() { this.renderer.AddActor(this.actorToAdd); this.panel.Render(); } } In our initialization code, we need to set up several things. First, two checkboxes toggle VTK's garbage collection and the debug mode. Since VTK is a C++ library, it has its own mechanism for ensuring that unused objects are deleted from memory. Many threading issues can be avoided by simply turning off VTK garbage collection. runGC = new JCheckBox("Enable GC", false); debugMode = new JCheckBox("Debug mode", false); We need to set up our completion service. exec = new ExecutorCompletionService<vtkActor>(Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors())); Next a setupWorkers() method starts a thread which invokes the code to add actors to the renderer whenever our executor completion service has a new actor available. 
Note that the code adding the actors to the renderer must be done on the event thread using SwingUtilities.invokeAndWait(), since that is where the renderer object was created and lives. private void setupWorkers() { // Add actor thread: Consume the working queue and add the actor into // the render inside the EDT thread final AddActorRunnable adderRunnable = new AddActorRunnable(); adderRunnable.setRenderer(panel3d); new Thread() { public void run() { for (int i = 0; i < NUMBER_OF_PIPLINE_TO_BUILD; i++) { try { adderRunnable.setActor(exec.take().get()); SwingUtilities.invokeAndWait(adderRunnable); panel3d.repaint(); } catch (InterruptedException e) { return; } catch (ExecutionException e) { e.printStackTrace(); } catch (InvocationTargetException e) { e.printStackTrace(); } } }; }.start(); } To load the completion service with jobs to run on a pool of threads in the background, we submit a collection of PipelineBuilder objects to be executed at a later time. public void startWorking() { for (int i = 0; i < NUMBER_OF_PIPLINE_TO_BUILD; i++) { exec.submit(new PipelineBuilder()); } } We'll also create a timer which every second renders the scene and takes out a sphere actor. This code also manually run the garbage collector using vtkObject.JAVA_OBJECT_MANAGER.gc() if the runGC checkbox is selected. Note that timers are scheduled on the Swing event thread, which is why we are allowed to manipulate the renderer and its actors here. We call the garbage collector manually in order to update the UI with garbage collector information. // Update GC info into the UI every second. // Reset camera each of the first 10 seconds. this.nbSeconds = 0; new Timer(1000, new ActionListener() { @Override public void actionPerformed(ActionEvent e) { if (nbSeconds++ < 10) { panel3d.resetCamera(); } vtkRenderer renderer = panel3d.GetRenderer(); if (renderer.GetNumberOfPropsRendered() > 1) { renderer.RemoveActor(renderer.GetActors().GetLastProp()); } // Run GC in local thread (EDT) if (runGC.isSelected()) { vtkReferenceInformation info = vtkObject.JAVA_OBJECT_MANAGER.gc(debugMode.isSelected()); if (debugMode.isSelected()) { System.out.println(info.listKeptReferenceToString()); System.out.println(info.listRemovedReferenceToString()); } gcStatus.setText(info.toString()); } else { gcStatus.setText(""); } panel3d.Render(); } }).start(); Instead of manually running the garbage collector, we can set up automatic garbage collection using a global scheduler. Turn on automatic garbage collection with the following statement. It is important to note that by default it is off, so be sure to include this line to reclaim memory from the VTK layer. vtkObject.JAVA_OBJECT_MANAGER.getAutoGarbageCollector().SetAutoGarbageCollection(true); To set up the interval at which collection runs, use SetScheduleTime. In this case it will run garbage collection every second. The automatic garbage collector runs in the event thread by default. vtkObject.JAVA_OBJECT_MANAGER.getAutoGarbageCollector().SetScheduleTime(1, TimeUnit.SECONDS); Another option will collect statistics on the garbage collector, that can be retrieved by listKeptReferenceToString() and listRemovedReferenceToString() on the collector's information object returned from the gc() method. vtkObject.JAVA_OBJECT_MANAGER.getAutoGarbageCollector().SetDebug(true); Java Wrapper Refactoring (Oct 8, 2007) There were a few problems with the old Java wrappers. One was that, as you said, objects were being deleted before they were supposed to. 
We hacked in a fix at one point about a year ago which basically made all VTK objects accessed from Java stay around forever, but this was not acceptable either. Ref: The other major concern was that the map from Java objects to VTK objects was in the C++ JNI layer, and while we tried to keep this map synchronized with a mutex, race conditions could still occur because other Java threads could advance while the JNI layer was being called (a thread could access a C++ object just as it is being garbage-collected and deleted). There does not seem to be a way to atomically call a JNI method, or ensure the collector doesn't run while a method is called. This second issue forced us to rethink how the map is done, and the solution was to keep the map in Java instead of C++. But we didn't want this Java map to prohibit objects from being garbage collected. Fortunately, Java has a WeakReference class for just this type of purpose. When accessed, the reference will either be valid or null depending on whether it has been garbage-collected. Thus, the wrapper code can lookup objects in this map when returning objects from methods, and if it is not there, or null, it creates a new Java object representing that C++ object. A final issue was that we wanted a way to guarantee all C++ destructors are called before the program exits. The natural place to decrement the reference count of the C++ object is in finalize(), which works when things are garbage-collected, but Java does not guarantee that finalize will ever be called. So the method vtkGlobalJavaHash.DeleteAll() will plow through the remaining VTK objects and call Delete on them.
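To make the memory-management discussion concrete, here is a hedged sketch of the two options an application has for releasing VTK objects from Java. It assumes the same vtk.* imports and native-library loading as the SimpleVTK example above, and a single thread touching VTK objects:

// Assumes the vtk.* imports and native-library loading from SimpleVTK above.
void releaseVtkObjects() {
    vtkConeSource cone = new vtkConeSource();
    vtkPolyDataMapper mapper = new vtkPolyDataMapper();
    mapper.SetInputConnection(cone.GetOutputPort());
    // ... use the pipeline on this thread only ...

    // Option 1: explicit deletion, only safe when no other thread can
    // touch VTK objects at the same time.
    mapper.Delete();
    cone.Delete();

    // Option 2: drop the Java references instead and let the VTK-aware
    // collector reclaim them, either on the schedule configured above or
    // manually as shown here.
    vtkReferenceInformation info = vtkObject.JAVA_OBJECT_MANAGER.gc(false);
    System.out.println(info.toString());
}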
https://vtk.org/Wiki/VTK/Java_Wrapping
CC-MAIN-2020-10
en
refinedweb
Nov 30, 2017 04:02 PM|chilluk|LINK I have an XML file that represents a thesaurus of sorts (basic example) : <?xml version="1.0" encoding="utf-8" ?> <synonyms> <group> <syn>spanner</syn> <syn>wrench</syn> </group> <group> <syn>lawnmower</syn> <syn>lawn mower</syn> <syn>grass cutter</syn> </group> </synonyms> I need to match this to an incoming string and expand that string with all of the terms in the matching group - for example : If someone enters the term "rotary lawnmower" I need to first match the group that contains the term "lawnmower" and then grab all the terms within that group - then my expanded string becomes : "rotary lawnmower lawn mower grass cutter" I need an efficient way to do this without having to loop over each word of my original string AND then the whole XML file for each word. In my head I am thinking I need to seek the term, and then grab the rest of the terms from within the group - but I can't fathom quite how to do this (and especially efficiently!) Thanks. Dec 01, 2017 07:43 AM|Cathy Zou|LINK Hi chilluk, Working sample as below: <asp:TextBox ID="TextBox1" runat="server" AutoPostBack="true" OnTextChanged="TextBox1_TextChanged"></asp:TextBox> Code behind: using System.Xml.Linq; protected void TextBox1_TextChanged(object sender, EventArgs e) { string search = TextBox1.Text; List<string> list1 = new List<string>(); list1.Add(search); XDocument doc = XDocument.Load(Server.MapPath("Success.xml")); var s = (doc.Descendants("synonyms").Descendants("group") .ToList() .Where(c => c.Descendants("syn") .Select(d => new { value = d.Value }).ToList() .Any(f => list1.Any(l => l.ToString().Contains(f.value))) )).Elements("syn").Select(ds => ds.Value).ToList(); var result = String.Join(" ", s.ToArray()); Response.Write(result.Replace("lawnmower", "rotary lawnmower")); } Best regards Cathy Dec 01, 2017 04:19 PM|chilluk|LINK Ah I wonder if there is a way to extend this? If the incoming string is multiple words, can it be matched back to the XML? For example, with our lawnmower example, in our file we have : lawnmower lawn mower grass cutter At the moment if someone enters "lawnmower" it matches and expands just great - but if they enter "lawn mower" then it doesn't, because the matching is done on the individual words of what is coming in. I guess I maybe need to do it in reverse and compare the synonyms in the XML file back to the incoming string and check for individual or phrase matches?
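A hedged sketch of that reverse approach, comparing each <syn> value against the incoming phrase instead of splitting the phrase into single words, might look like this (the class and method names are illustrative):

using System;
using System.Linq;
using System.Xml.Linq;

public static class SynonymExpander
{
    // Expand the input with the synonyms of the first group whose <syn>
    // value appears (case-insensitively) anywhere inside the input phrase.
    public static string Expand(string input, string xmlPath)
    {
        XDocument doc = XDocument.Load(xmlPath);
        var group = doc.Descendants("group")
            .FirstOrDefault(g => g.Elements("syn")
                .Any(s => input.IndexOf(s.Value, StringComparison.OrdinalIgnoreCase) >= 0));
        if (group == null)
            return input;

        var extras = group.Elements("syn")
            .Select(s => s.Value)
            .Where(v => input.IndexOf(v, StringComparison.OrdinalIgnoreCase) < 0);
        return (input + " " + string.Join(" ", extras)).Trim();
    }
}

With this, Expand("rotary lawnmower", "Success.xml") returns "rotary lawnmower lawn mower grass cutter", and "rotary lawn mower" matches the same group because the whole phrase, not its individual words, is compared against each synonym.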
https://forums.asp.net/t/2132647.aspx?Seek+a+string+value+term+in+an+XML+document
CC-MAIN-2020-10
en
refinedweb
Plugins Extend Apollo Server with custom functionality Plugins are available in Apollo Server 2.2.x and later. Plugins enable you to extend Apollo Server's core functionality by performing custom operations in response to certain events. Currently, these events correspond to individual phases of the GraphQL request lifecycle, and to the startup of Apollo Server itself. For example, a basic logging plugin might log the GraphQL query string associated with each request that's sent to Apollo Server. Creating a plugin Plugins are JavaScript objects that implement one or more functions that respond to events. Here's a basic plugin that responds to the serverWillStart event: const myPlugin = { serverWillStart() { console.log('Server starting up!'); }, }; If you're using TypeScript to create a plugin, the apollo-server-plugin-basemodule exports the ApolloServerPlugininterface for plugins to implement. You can define a plugin in the same file where you initialize Apollo Server, or you can export it as a separate module: module.exports = { serverWillStart() { console.log('Server starting up!'); }, }; To create a plugin that accepts options, create a function that accepts an options object and returns a properly structured plugin object, like so: module.exports = (options) => { return { serverWillStart() { console.log(options.logMessage); }, }; }; Responding to events A plugin specifies exactly which lifecycle events it responds to by implementing functions that correspond to those events. The plugin in the examples above responds to the serverWillStart event, which fires when Apollo Server is preparing to start up. A plugin can respond to any combination of supported events. Responding to request lifecycle events Plugins can respond to the following events associated with the GraphQL request lifecycle: parsingDidStart validationDidStart didResolveOperation executionDidStart didEncounterErrors willSendResponse However, the way you define these functions is slightly different from the serverWillStart example above. First, your plugin must define the requestDidStart function: const myPlugin = { requestDidStart() { console.log('Request started!'); }, }; The requestDidStart event fires whenever Apollo Server receives a GraphQL request, before any of the lifecycle events listed above. You can respond to this event just like you respond to serverWillStart, but you also use this function to define responses for a request's lifecycle events, like so: const myPlugin = { requestDidStart(requestContext) { console.log('Request started!'); return { parsingDidStart(requestContext) { console.log('Parsing started!'); } validationDidStart(requestContext) { console.log('Validation started!'); } } }, }; As shown, the requestDidStart function can optionally return an object that defines functions that respond to request lifecycle events. This structure organizes and encapsulates all of your plugin's request lifecycle logic, making it easier to reason about. The following request lifecycle event handlers can optionally return a function that will be invoked after the lifecycle phase is complete: These "end hooks" will be invoked with any error(s) that occurred during the execution of that lifecycle phase. 
For example, the following plugin will log any errors that occur during any of the above lifecycle events: const myPlugin = { requestDidStart() { return { parsingDidStart() { return (err) => { if (err) { console.error(err); } } }, validationDidStart() { // This end hook is unique in that it can receive an array of errors, // which will contain every validation error that occurred return (errs) => { if (errs) { errs.forEach(err => console.error(err)); } } }, executionDidStart() { return (err) => { if (err) { console.error(err); } } } } } } Note that the validationDidStart end hook receives an array of errors, which will contain every validation error that occurred, if any. The arguments to each end hook are documented in the type definitions in the request lifecycle events docs below. Inspecting request and response details As the example above shows, requestDidStart and request lifecycle functions accept a requestContext parameter. This parameter is of type GraphQLRequestContext, which includes a request (of type GraphQLRequest), along with a response field (of type GraphQLResponse) if it's available. These types and their related subtypes are all defined in apollo-server-types/src/index.ts. Installing a plugin Add your plugin to Apollo Server by providing a plugins configuration option to the ApolloServer constructor, like so: const { ApolloServer } = require('apollo-server'); const ApolloServerOperationRegistry = require('apollo-server-plugin-operation-registry'); /* This example doesn't provide `typeDefs` or `resolvers`, both of which are required to start the server. */ const { typeDefs, resolvers } = require('./separatelyDefined'); const server = new ApolloServer({ typeDefs, resolvers, // You can import plugins or define them in-line, as shown: plugins: [ /* This plugin is from a package that's imported above. */ ApolloServerOperationRegistry({ /* options */ }), /* This plugin is imported in-place. */ require('./localPluginModule'), /* This plugin is defined in-line. */ { serverWillStart() { console.log('Server starting up!'); }, } ], }) Plugin event reference Apollo Server supports two types of plugin events: server lifecycle events and request lifecycle events. Server lifecycle events are high-level events related to the lifecycle of Apollo Server itself. Currently, two server lifecycle events are supported: serverWillStart and requestDidStart. Request lifecycle events are associated with a specific request. You define responses to these events within the response to a requestDidStart event, as described in Responding to request lifecycle events. Server lifecycle events serverWillStart The serverWillStart event fires when Apollo Server is preparing to start serving GraphQL requests. If you respond to this event with an async function (or if the function returns a Promise), the server doesn't start until the asynchronous operation completes. If the Promise is rejected, startup fails (unless you're using Express middleware). This helps you make sure all of your server's dependencies are available before attempting to begin serving requests. Example const server = new ApolloServer({ /* ... other necessary configuration ... */ plugins: [ { serverWillStart() { console.log('Server starting!'); } } ] }) requestDidStart The requestDidStart event fires whenever Apollo Server begins fulfilling a GraphQL request. This function can optionally return an object that includes functions for responding to request lifecycle events that might follow requestDidStart. 
const server = new ApolloServer({ /* ... other necessary configuration ... */ plugins: [ { requestDidStart(requestContext) { /* Within this returned object, define functions that respond to request-specific lifecycle events. */ return { /* The `parsingDidStart` request lifecycle event fires when parsing begins. The event is scoped within an associated `requestDidStart` server lifecycle event. */ parsingDidStart(requestContext) { console.log('Parsing started!') }, } } } ], }) If your plugin doesn't need to respond to any request lifecycle events, requestDidStart should not return a value. Request lifecycle events If you're using TypeScript to create your plugin, implement the GraphQLRequestListenerinterface from the apollo-server-plugin-basemodule to define functions for request lifecycle events. parsingDidStart The parsingDidStart event fires whenever Apollo Server will parse a GraphQL request to create its associated document AST. If Apollo Server receives a request with a query string that matches a previous request, the associated document might already be available in Apollo Server's cache. In this case, parsingDidStart is not called for the request, because parsing does not occur. parsingDidStart?( requestContext: WithRequired< GraphQLRequestContext<TContext>, 'metrics' | 'source' >, ): (err?: Error) => void | void; validationDidStart The validationDidStart event fires whenever Apollo Server will validate a request's document AST against your GraphQL schema. Like parsingDidStart, this event does not fire if a request's document is already available in Apollo Server's cache (only successfully validated documents are cached by Apollo Server). The document AST is guaranteed to be available at this stage, because parsing must succeed for validation to occur. validationDidStart?( requestContext: WithRequired< GraphQLRequestContext<TContext>, 'metrics' | 'source' | 'document' >, ): (err?: ReadonlyArray<Error>) => void | void; didResolveOperation The didResolveOperation event fires after the graphql library successfully determines the operation to execute from a request's document AST. At this stage, both the operationName string and operation AST are available. If the operation is anonymous (i.e., the operation is query { ... }instead of query NamedQuery { ... }), then operationNameis null. didResolveOperation?( requestContext: WithRequired< GraphQLRequestContext<TContext>, 'metrics' | 'source' | 'document' | 'operationName' | 'operation' >, ): ValueOrPromise<void>; responseForOperation The responseForOperation event is fired immediately before GraphQL execution would take place. If its return value resolves to a non-null GraphQLResponse, that result is used instead of executing the query. Hooks from different plugins are invoked in series and the first non-null response is used. responseForOperation?( requestContext: WithRequired< GraphQLRequestContext<TContext>, 'metrics' | 'source' | 'document' | 'operationName' | 'operation' >, ): ValueOrPromise<GraphQLResponse | null>; executionDidStart The executionDidStart event fires whenever Apollo Server begins executing the GraphQL operation specified by a request's document AST. executionDidStart?( requestContext: WithRequired< GraphQLRequestContext<TContext>, 'metrics' | 'source' | 'document' | 'operationName' | 'operation' >, ): (err?: Error) => void | void; didEncounterErrors The didEncounterErrors event fires when Apollo Server encounters errors while parsing, validating, or executing a GraphQL operation. 
didEncounterErrors?( requestContext: WithRequired< GraphQLRequestContext<TContext>, 'metrics' | 'source' | 'errors' >, ): ValueOrPromise<void>; willSendResponse The willSendResponse event fires whenever Apollo Server is about to send a response for a GraphQL operation. This event fires (and Apollo Server sends a response) even if the GraphQL operation encounters one or more errors. willSendResponse?( requestContext: WithRequired< GraphQLRequestContext<TContext>, 'metrics' | 'response' >, ): ValueOrPromise<void>;
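Putting the pieces above together, here is a hedged sketch of a complete logging plugin. The plugin name is illustrative; it relies only on the hooks and fields documented above (requestDidStart, willSendResponse, request.operationName, and the response's errors array, the last of which may be absent):

const loggingPlugin = {
  requestDidStart(requestContext) {
    console.log('Operation:', requestContext.request.operationName || '(anonymous)');
    return {
      willSendResponse(requestContext) {
        const errors = (requestContext.response && requestContext.response.errors) || [];
        console.log(`Sending response with ${errors.length} error(s)`);
      },
    };
  },
};

const server = new ApolloServer({
  typeDefs,
  resolvers,
  plugins: [loggingPlugin],
});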
https://www.apollographql.com/docs/apollo-server/integrations/plugins/
CC-MAIN-2020-10
en
refinedweb
With the popularity of instant messaging platforms and advancements in AI chatbots have experienced explosive growth with as many as 80% of businesses wanting to use chat bots by 2020. This has created great opportunity for freelance Python developers as there is great need for the development of both simple and complex chat bots. One of the more popular messaging platforms is Telegram with a reported 200 million monthly users. Telegram provides an excellent API for chat bots allowing the user to not only communicate using text messages, but also multimedia content with images, and video, and rich content with HTML and javascript. The API can even be used to manage purchases directly within Telegram. Python is excellent for creating Telegram bots and the extremely popular Python Telegram bot framework makes this much easier allowing you by default to create bots that run asynchronously so you can easily code bots that can communicate with many users at the same time. Let’s get started with making our first telegram bot in Python, the first thing we’ll need to do is download Telegram, create an account and communicate with the “Botfather”. The Botfather is a chat bot created by telegram through which we can get our Telegram Bot API token. To download telegram head to Telegram.org and download and install the appropriate version for your OS, install and create an account. Now add the botfather as a contact, do this by clicking the menu icon selecting contacts, search for “botfather” and selecting the @botfather user. A conversation will popup with a start button at the bottom, click the start button and you will receive a list of commands. The command to create a new bot is /newbot enter “/newbot” and answer the prompts for naming your bot and you will receive a Telegram Bot API token. You can name the bot whatever you like, but the bot username will need to be unique on Telegram. Store the access token somewhere as we will need it to authorize our bot. Installing The Python Telegram Bot Framework For the bot creation we will be using Python version 3.7. The Python Telegram Bot framework is compatible with Python 2.7 and above. Before we get to the actual coding we will need to install the Python Telegram Bot framework the easiest way to do that is with: $ pip install python-telegram-bot If you prefer to use the source code you can find the project on Github. Now with the Python Telegram Bot library installed let’s get started. Connecting Your Bot To Telegram The first thing you’ll need to do is have the bot connect to and authenticate with the telegram API. We’ll import the Python logger library to make use of the Python Telegram Bot frameworks built in logging so we can see in real-time what is happening with the bot and if there are any errors. Place the following code into a Python file, and place your telegram bot key where indicated in the update statement: from telegram.ext import Updater, CommandHandler, MessageHandler, Filters, Dispatcher import logging logging.basicConfig(format='%(levelname)s - %(message)s', level=logging.DEBUG) logger = logging.getLogger(__name__) updater = None def start_bot(): global updater updater = Updater( '### YOUR TELEGRAM BOT AUTHENTICATION KEY HERE ###', use_context=True) updater.start_polling() updater.idle() start_bot() Now looking at the top portion of the code you can see that we imported a number of libraries from the telegram bot module, and set the logger to display any messages of debug priority or higher. 
We’ve created an updater variable and this will hold the updater for our Telegram bot, which is placed in a global variable so that we can easily access it later from the UI. Updater provides an easy front end for working with the bot, and to start our new bot with the Updater we simply need to pass in the authentication key, and we’ll also pass in use_context=true to avoid any deprecation errors as context based callbacks are now the default for Python Telegram bot. updater.start_polling() actually starts the bot, and after this is passed in the bot will begin to start polling Telegram for any chat updates on Telegram. The bot will begin polling within its own separate threads so this will not halt your Python script. We use the updater.idle() command here to block the script until the user sends a command to break from the Python script such as ctrl-c on windows. When running this script you will be able to see messages that the bot and the updater have started, and a number of threads are running. The bot will not do anything noticeable as of yet in Telegram. Making The Bot Understand Commands Let’s update the script and make our start_bot function look like this: def start_bot(): global updater updater = Updater( '### YOUR TELEGRAM BOT AUTHENTICATION KEY HERE ###', use_context=True) dispatcher = updater.dispatcher dispatcher.add_handler(CommandHandler('start', start)) updater.start_polling() updater.idle() We’ve added a dispatcher variable for clearer access to the dispatcher for our bot, we’ll use the dispatcher to add commands. With the line dispatcher.add_handler(CommandHandler('start', start)) we have added a command handler that will execute when the user enters /start and will execute the callback function start. This command automatically executes when you add a bot as a new contact and press the start button within Telegram .Now add in the function for our start command: def start(update, context): s = "Welcome I Am The Finxter Chat Bot! Your life has now changed forever." update.message.reply_text(s) Command handlers require both an Update and a CallBackContext parameter. Through the Update we send updates to chat, here using update.message.reply_text automatically adds the reply only to the specific chat where the /start command was sent. Now when entering the chat with the bot, or typing the /start command you should get a reply like this: Adding More Advanced Command That Reads Chat The start command executes a function within our bot whenever a user types /start, but what if we want our bot to read and respond to chat rather than simply an executing a command? Well, for that we use a different type of handler called a MessageHandler. Add the following line to the start_bot function underneath the previous add_handler statement: dispatcher.add_handler(MessageHandler(Filters.text, repeater)) This will add a MessageHandler to the bot, we also use a filter here so that this message handler Filters everything except text because the user could be posting something other than text in their messages (such as images or video). For this handler we will create a callback function named repeater with the code: def repeater(update, context): update.message.reply_text(update.message.text) In our repeater we use the reply_text method, replying with update.message.text which sends the message chat text back to the user. 
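One refinement worth noting, which is not in the handler above: Filters.text on its own also matches messages that start with a slash, so bot commands can end up being echoed back. The library's filters can be combined with & and ~ to exclude commands:

# Echo plain text only; ignore anything that looks like a /command.
dispatcher.add_handler(MessageHandler(Filters.text & ~Filters.command, repeater))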
Turning A Bot Command Off Or On For The User A common bot ability is to turn on or off specific commands, but there is a problem that arises which is that we cannot simply remove the Handler for the functionality as that would remove it for all users of the bot. Fortunately, Python Telegram Bot allows us to store user specific data using the context that is passed in to our callback functions. Let’s add another handler underneath the repeater handler: dispatcher.add_handler(CommandHandler('echo', echo)) Now, we’ll first modify the repeater handler to check to see if the user’s text should be echoed: def repeater(update, context): if context.user_data[echo]: update.message.reply_text(update.message.text) Here we added the statement if context.user_data[echo]: before having the bot reply to the user. Python Telegram Bot has a user_data dictionary that can be accessed using the context. This is specific to the user, and by using this we can make sure that if there are multiple users of the bot they are unaffected. Now we’ll add in another function so the user can set the echo dictionary using the echo command in chat: def echo(update, context): command = context.args[0].lower() if("on" == command): context.user_data[echo] = True update.message.reply_text("Repeater Started") elif("off" == command): context.user_data[echo] = False update.message.reply_text("Repeater Stopped") In this callback function we gather the users extra command parameters from the context. The users parameters are contained with context.args which provides an array based on the spaces from the user, in this function we check the first parameter passed by the user looking for on or off and change the user_data[echo] variable. Posting Data From The Web And Securing Commands Python Telegram Bot makes it easy to reply with files from the web, such as photos, videos, and documents you simply need to give a url to the specified content type. We’ll use the Unbounce API to gather a free image based on user provided terms and post it into chat, and we’ll also use a Filter so that this command only works for your username. Add the following code to the start_bot() function: dispatcher.add_handler(CommandHandler('get_image', get_image, filters=Filters.user(username="@YOUR_USERNAME"))) Replace YOUR_USERNAME with your username. This code will execute the get_image function, but only if the username matches your own. With this filter we are only passing in 1 username, but you could also pass in a list of usernames. Now let’s create the get_image function: def get_image(update, context): terms = ",".join(context.args).lower() update.message.reply_text(f"Getting Image For Terms: {terms}") command = context.args[0].lower() if("on" == command): context.user_data[echo] = True update.message.reply_text("Repeater Started") elif("off" == command): context.user_data[echo] = False update.message.reply_text("Repeater Stopped") Like in the previous example we get the terms using the args variable from the context, but in this case we join the terms together with a , and convert to lowercase because that is what is required by the Unbounce API. You can then get an image in the chat using /get_image and some keywords like this: Adding A GUI Now we have learned how to a bot running, and handling commands. Thanks to the multi-threaded nature of Python Telegram Bot this bot can handle many users from all round the world. In many projects on freelance sites such as Upwork the client does not care what language is used to create the bot. 
However, they often want an interface for managing the bot so let’s create a simple interface that allows the bot owner to start and stop the bot. To build our user interface we’ll use the PySimpleGUI library. With PySimpleGUI you can create cross platform GUIs that work on Windows, Mac and Linux without any source code with an easily readable syntax and minimal boiler plate code. To begin adding the GUI code let’s first remove the updater_idle() line from our start_bot function, so your start_bot function reads like this: def start_bot(): global updater updater = Updater( '### YOUR TELEGRAM BOT AUTHENTICATION KEY HERE ###', use_context=True) dispatcher = updater.dispatcher dispatcher.add_handler(CommandHandler('start', start)) dispatcher.add_handler(MessageHandler(Filters.text, repeater)) dispatcher.add_handler(CommandHandler('echo', echo)) dispatcher.add_handler(CommandHandler('get_image', get_image, filters=Filters.user(username="@YOUR_USERNAME"))) updater.start_polling() By removing the updater.idle() line the bot no longer pauses the script after starting and runs in a separate thread until we decide to stop the bot or the main thread stops. Now we’ll create a GUI, this GUI will consist of a status line to show whether the bot is currently turned on along with a start, and stop button, and a title like this: To create this gui add the following code: def gui(): layout = [[sg.Text('Bot Status: '), sg.Text('Stopped', key='status')], [sg.Button('Start'), sg.Button('Stop'), sg.Exit()]] window = sg.Window('Finxter Bot Tutorial', layout) while True: event, _ = window.read() if event in (None, 'Exit'): break window.close() Now to start the gui remove the start_bot() statement at the bottom of our script and replace with gui(): gui() In the layout variable you can see that we’ve defined some text and button elements, and each element in the list shows up as a line within our UI. Our first line consists of 2 items to show the status (we gave the second element a key so we can easily refer to it later), and our second line consists of three buttons. The sg.Window function is where we provide our title, and layout. The while True: loop is the standard PySimpleGUI event loop. The window.read() function returns any GUI events along with any values passed along with the event (such as user inputted text), we won’t use any values in our loop so we pass them to the _ variable, you can pass a time to wait to the windows read function in milliseconds, passing nothing as we have makes the function wait until an event is triggered. The if event in (None, ‘Exit’): statement executes if the user hits the Exit button or the user closes the window by another means (such as the close button in the corner of the window), in this case we simply break the loop. 
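As mentioned, window.read() also accepts a wait time in milliseconds. A hedged sketch of a non-blocking variant of the same loop (the 250 ms value is arbitrary) looks like this:

while True:
    # Returns a timeout event roughly every 250 ms if nothing happened.
    event, _ = window.read(timeout=250)
    if event in (None, 'Exit'):
        break
    # Periodic housekeeping (e.g. status updates) could go here.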
Starting And Stopping The Bot From The GUI Now if you start the script the start and stop buttons won’t actually do anything so we’ll add in the code to start and stop the script and update the status making our gui function look like this: def gui(): layout = [[sg.Text('Bot Status: '), sg.Text('Stopped', key='status')], [sg.Button('Start'), sg.Button('Stop'), sg.Exit()]] window = sg.Window('Finxter Bot Tutorial', layout) while True: event, _ = window.read() if event == 'Start': if updater is None: start_bot() else: updater.start_polling() window.FindElement('status').Update('Running') if event == 'Stop': updater.stop() window.FindElement('status').Update('Stopped') if event in (None, 'Exit'): break if updater is not None and updater.running: updater.stop() window.close() Looking at this code you can see we added two different event conditions, the Start and Stop share the same names as our buttons, and when a button is pressed in PySimpleGUI an event is triggered based on the button name. In our start event we start the bot using start_bot if there is no updater yet, otherwise we execute the start_polling method of our updater as re-starting the updater in this way is much quicker than using start_bot to initialize the bot. We also use the find_element function of the window to access the status text using the key we created of ‘status’ and change that to show the bot is running. Turning GUI Buttons On And Off Now if we start our script we get the user interface, and pressing the start button will start the bot, and pressing the stop button will stop the bot, but we get an error when we press the buttons out of order. We’ll remedy this situation by modifying our event loop to disable the start and stop buttons so the user can only do one or the other at the appropriate times. if event == 'Start': if updater is None: start_bot() else: updater.start_polling() window.FindElement('Start').Update(disabled=True) window.FindElement('Stop').Update(disabled=False) window.FindElement('status').Update('Running') if event == 'Stop': updater.stop() window.FindElement('Start').Update(disabled=False) window.FindElement('Stop').Update(disabled=True) window.FindElement('status').Update('Stopped') You can see that we used the FindElement method on the buttons here, and then using the Update method changed the disabled variable which allows you to disable buttons. If you start up now you’ll see that at first both buttons are enabled so we’ll have to make a small modification to the layout. Change the layout variable to the following: layout = [[sg.Text('Bot Status: '), sg.Text('Stopped', key='status')], [sg.Button('Start'), sg.Button('Stop', disabled=True), sg.Exit()]] Buttons should now enable and disable appropriately within the GUI as shown: And there we have it, a working GUI for our Telegram Bot. Conclusion We learned quite a bit in this tutorial, and now you can have a Python Telegram Bot that responds to commands, and even presents data shown from the web. We also created a simple UI for starting and stopping the bot. With your new skills you’re now ready to answer those potential clients on freelancing sites looking for a Telegram Bot. About the Author Johann is a Python freelancer specializing in web bots, and web scraping. He graduated with a diploma in Computer Systems Technology 2009. Upwork Profile.
https://blog.finxter.com/python-telegram-bot/
CC-MAIN-2020-10
en
refinedweb
Current Map implementation moves map to Rome by default (albeit Rome, Italy is a cool city), but many times you want to move it to a different and specific position. Which means the control will be updated twice instead of just once, wasting draw cycles. Making the control not go to the default location requires some work and investigation which is completely unnecessary. I don't understand the decision to have a default location. MoveToRegion(MapSpan mapSpan) On all platforms the native map control has a way to animate when moving the map position. All current renderers in Xamarin Forms already use an animation value when moving the position on the native map. The Xamarin.Forms.Map control should have the bool animate parameter in the MoveToRegion(MapSpan mapSpan): void MoveToRegion(MapSpan mapSpan, bool animate = true) +1 to this, I have implemented maps into a recent application and I find the loading time is a little slow and I'm looking into ways to speed it up. If a user is displaying the map, chances are they are wanting to show a specific region so would it not make sense to make a mandatory parameter to pass in the location you want the map to display? Just a suggestion @seanyda Both can be a reason for a slower loading time. If someone from Xamarin team approves this, I could make a PR. There's also a bug I found with Map, MoveToRegionis buggy. It doesn't make sense to start working on a PR without having OK from the XF team first, otherwise I risk working in vain. I hope Xamarin team really looks at this forum... I wish this forum had a way to vote so people could give their votes ... I am facing the same problem... Any fix for this ? i do not want to center my map in Rome ! You don't have to. There is a method to move it: Just replace the above latitude and longitude in the position parameter. This thread is discussing how to improve the maps. note: you can also set the start-up map location in XAML if you need that @ChaseFlorell Correct, but that is very very ugly. Why would I need to do that in any place, any project where I need a Map control? because the Map Constructor is the only place to set a different "initial location". This makes it so that you don't have to use MoveToRegionafter the map is constructed. If you don't do it in the constructor, then you have to move the map away from Rome. @ChaseFlorell So you do not agree with not having the Map take a default location? 99.999% of the Xamarin Forms apps which use the Map control will not want to have Rome, Italy as default location. Do you think it's nice to ask every developer to write that ugly XAML to initialize the map to something else? Another issue is you can't even bind the start position. To get round it (as the geopos data is resfful and can be slow to come from our provider), I only create the map when the data arrives. Been able to set a default and bindable would be very nice. I personally like the constructor, and even with the "ugly XAML", I feel it's a perfectly acceptable approach. Basically, if you need a start position on your map, that's what you use. If nothing else, I wish the VisibleRegionproperty were set to TwoWay binding instead of OneWayFromSource. As for having it Bindable, I've written my own Map Renderer, and have made the VisibleRegionwork exactly like that. Under the covers, I use MoveToRegion(VisibleRegion);when the binding updates. This now allows the developer to update the VisibleRegion from the ViewModel, but also get the VisibleRegion back from the map if the user moves the map. 
We're pondering investing the time in doing something similar as we don't want to add more nugets but it should be bindable out of the box quite honestly. The map has had very little features added since Forms 1.3. It was pretty straight forward actually. I just made the VisibleRegion propertynew and changed it's binding toTwoWay` Then I detect which way the binding is coming from, blocking the infinite loop when you bind from the property. The native Map controls do not have a default position. And beside that, can you give me an example in the whole XAML platform where a control has such a "personal" default value? This isn't something like text color with the default to Black or background color. I wish there shouldn't be a need to write custom renders for such basic things. In our scenario, our business services a specific region. So constructing the map with a specific location works well for us. We show the boundaries of our service area by default and move the map if necessary after the fact. (using the User's GPS location). If your service area is larger, then I suggest using that.. IE: if you service the entire USA, then use the entire USA as your starting point, and MoveToRegion(loc);from there. Similarly, if you service the entire planet, you can start with a top view of the entire planet and MoveToRegion(loc);from there. At any rate, I think the conversation is off topic now. Let's leave it for the X.F team to decide. -1 from me. @ChaseFlorell My understanding of this proposal is there is a very slim chance that you want the location of your map to be on Rome, so the chances are you will be passing in the location you want it to display... So the map has to do multiple draw cycles instead of a single one. What's the disadvantage of creating a mandatory parameter to make the user provide the location they want the map to display which will improve performance because it won't have to go to Rome then elsewhere every time? Or am I misunderstanding the proposal? The misunderstanding is that the map already has this. It's part of the constructor and you can set the initial load location to anywhere you want. The OP says that it's "ugly XAML", so if the proposal is to make it prettier, then maybe I misunderstood, but as it stands, #1 in the list already exists. There are two constructors for a map If you don't want to center on Rome, you don't have to... you can center on anything you like. @seanyda said: You got it right Sean. But Chase keeps arguing it's perfectly fine to have Rome, Italy (why Rome and not Tokyo or New York?) , so I surrender. Should I hold my breath? Xamarin/MS should just invest in GoogleMaps X.F Maps is woefully inadequate for even basic things like working with pins. For example, pins need an ID, otherwise I have no idea which pin is being used when tapped. Sure, the pins Clicked() event can be wired up, but now which pin was clicked, exactly? Even if I get the whole object from the sender, I have no idea which is which if I have a collection that came from a programmtic list with IDs. And where are other events on the map, such as pins being selected, camera moving, etc.? Digging in the forums, people have been asking for a simple pin ID property since at least 2013! Speaking of programmatic, I need to be able to select pins programmatically too. All this is already in amay077 GoogleMaps. HOWEVER, even that project suffers because UWP support is not par (and desperately needs to be). 
Mapbox for Xamarin is superior to all of this in terms of features, but with no UWP or macOS support at all, that is a dead, unusable map too IMO. Maps should also support macOS apps now that Mac can be targeted with X.F. X.F is all about all supported platforms, not a subset. All things considering, mapping of any kind on Xamarin Forms is leaving us all wanting more. Much more! @dapug Instead of each of us doing islands of implementations, there should be pull-requests. But unfortunately Xamarin doesn't encourage participating, they look very late at PRs, so someone working on a PR could end up doing work in vain. only this for the map: Then in my view Model I have the following which allows me to use pure binding and no custom renderer. public class LocationHistoryViewModel: BaseViewModel { public LocationHistoryViewModel() { // Set the bindable Map LocationHistoryMap = CreateMap(); } One thing I would like to ask @NMackay is why doesn't the map api provide the current location? It is seems like you would know what it is since I can set IsShowingUser = true and it shows My Location. this: Then in my view Model I have the following which allows me to use pure binding and no custom renderer. One thing I would like to ask @NMackay is why doesn't the map api provide the current location? It is obvious that you know what it is since I can set IsShowingUser = true and it shows My Location. I get an error: Position 15:18. Type Position not found in xmlns C:\Users\User\source\repos\Test\Views\PSFinder.xaml And I also don't know how to do it on the code-behind because I use: protected override async void OnAppearing() { base.OnAppearing(); I want to center the map to somewhere else and not Rome for both iOS and Android devices. I use MoveToRegion but it still centers on Rome before moving to the region I prefer after a couple of seconds. My map in XAML: <maps:Map x: I'm having issues with this map. Like why can't the default location be the current location of the device? Or why can't i locate myself? isShowingUsercrashes my app var status = await CrossPermissions.Current.CheckPermissionStatusAsync(Permission.Location); if (status != PermissionStatus.Granted) { var results = await CrossPermissions.Current.RequestPermissionsAsync(Permission.Location); //Best practice to always check that the key exists if (results.ContainsKey(Permission.Location)) status = results[Permission.Location]; }
https://forums.xamarin.com/discussion/comment/303610/
CC-MAIN-2020-10
en
refinedweb
public class MultiTextEdit extends TextEdit Clients are allowed to implement subclasses of a multi-text edit. Subclasses must implement doCopy() to ensure that a copy of the right type is created. Not implementing doCopy() in subclasses will result in an assertion failure during copying. CREATE_UNDO, NONE, UPDATE_REGIONS accept, acceptChildren, addChild, addChildren, apply, apply, childDocumentUpdated, childRegionUpdated, copy, equals, getChildren, getChildrenSize, getCoverage, getExclusiveEnd, getInclusiveEnd, getParent, getRegion, getRoot, hasChildren, hashCode, isDeleted, moveTree, postProcessCopy, removeChild, removeChild, removeChildren, toString clone, finalize, getClass, notify, notifyAll, wait, wait, wait public MultiTextEdit() Creates a new MultiTextEdit. The range of the edit is determined by the range of its children. Adding this edit to a parent edit sets its range to the range covered by its children. If the edit doesn't have any children, its offset is set to the parent's offset and its length is set to 0. public MultiTextEdit(int offset, int length) Parameters: offset - the edit's offset; length - the edit's length. See also: TextEdit.addChild(TextEdit), TextEdit.addChildren(TextEdit[]). protected MultiTextEdit(MultiTextEdit other) protected void checkIntegrity() throws MalformedTreeException Note that this method should only be called by the edit framework and not by normal clients. This default implementation does nothing. Subclasses may override if needed. Throws MalformedTreeException if the edit isn't in a valid state and can therefore not be executed. public final int getOffset() Returns the edit's offset, or -1 if the edit is marked as deleted. Overrides getOffset in class TextEdit. public final int getLength() Returns the edit's length, or -1 if the edit is marked as deleted. Overrides getLength in class TextEdit. public final boolean covers(TextEdit other) Returns true if the edit covers the given edit other. It is up to the concrete text edit to decide if an edit of length zero can cover another edit. Overrides covers in class TextEdit. Parameters: other - the other edit. Returns true if the edit covers the other edit; otherwise false is returned. protected boolean canZeroLengthCover() Returns true if an edit with length zero can cover another edit; returns false otherwise. Overrides canZeroLengthCover in class TextEdit. Copyright 2018 Eclipse Contributors and others. All rights reserved. Guidelines for using Eclipse APIs.
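A short usage sketch, not part of the original Javadoc, showing how a MultiTextEdit typically aggregates child edits before being applied to a document (it assumes the org.eclipse.text and org.eclipse.jface.text bundles are available):

import org.eclipse.jface.text.Document;
import org.eclipse.jface.text.IDocument;
import org.eclipse.text.edits.InsertEdit;
import org.eclipse.text.edits.MultiTextEdit;
import org.eclipse.text.edits.ReplaceEdit;

public class MultiTextEditExample {
    public static void main(String[] args) throws Exception {
        IDocument document = new Document("hello world");

        // The root edit's range is derived from the ranges of its children.
        MultiTextEdit root = new MultiTextEdit();
        root.addChild(new ReplaceEdit(0, 5, "goodbye")); // "hello" -> "goodbye"
        root.addChild(new InsertEdit(11, "!"));          // offsets refer to the original text
        root.apply(document);                            // all children applied in one pass

        System.out.println(document.get());              // prints "goodbye world!"
    }
}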
https://help.eclipse.org/2018-12/topic/org.eclipse.platform.doc.isv/reference/api/org/eclipse/text/edits/MultiTextEdit.html
CC-MAIN-2020-10
en
refinedweb
com.google.appengine.tools.info Class UpdateCheck - java.lang.Object - com.google.appengine.tools.info.UpdateCheck public class UpdateCheck extends java.lang.Object UpdateCheckis responsible for gathering version information about the local SDK, uploading this information to Google's servers in exchange for information about the latest version available, and making both sets of information available programmatically via UpdateCheckResultsand for direct user consumption via a nag screen printed to a specified PrintStream. Constructor Detail UpdateCheck public UpdateCheck(java.lang.String server)Create a new UpdateCheck. - Parameters: server- The remote server to connect to when retrieving remote version information. UpdateCheck public UpdateCheck(java.lang.String server, java.io.File appDirectory, boolean secure)Create a new UpdateCheck. - Parameters: server- The remote server to connect to when retrieving remote version information. appDirectory- The application directory that you plan to test or publish, or nullif no application directory is available. secure- if true, use an https (instead of http) connection to the remote server. Method Detail allowedToCheckForUpdates public boolean allowedToCheckForUpdates()Returns true if the user wants to check for updates even when we don't need to. We assume that users will want this functionality, but they can opt out by creating an .appcfg_no_nag file in their home directory. checkForUpdates @Deprecated public UpdateCheckResults checkForUpdates()Deprecated.Returns an UpdateCheckResultsfor checking if a WAR directory or the local installation uses an out of date version of the SDK. Callers that do not already communicate with Google explicitly (e.g. the DevAppServer) should check allowedToCheckForUpdatesbefore calling this method. maybePrintNagScreen public boolean maybePrintNagScreen(java.io.PrintStream out)Check to see if there is a new version of the SDK available and, if sufficient time has passed since the last nag, print a nag screen to out. This method always errs on the side of not nagging the user if errors are encountered. Callers that do not already communicate with Google explicitly (e.g. the DevAppServer) should check allowedToCheckForUpdatesbefore calling this method. - Returns: - true if a nag screen was printed, false otherwise checkJavaVersion public boolean checkJavaVersion(java.io.PrintStream out)
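A hedged usage sketch built only from the methods documented above; the server host name passed to the constructor is a placeholder:

import com.google.appengine.tools.info.UpdateCheck;

public class CheckSdkVersion {
    public static void main(String[] args) {
        // Placeholder host; real callers pass the App Engine update server.
        UpdateCheck check = new UpdateCheck("appengine.google.com");

        // Respect the user's opt-out (.appcfg_no_nag) before phoning home.
        if (check.allowedToCheckForUpdates()) {
            boolean nagged = check.maybePrintNagScreen(System.out);
            System.out.println(nagged ? "Update nag shown." : "No nag needed.");
        }

        // Independently warn about unsupported Java versions.
        check.checkJavaVersion(System.err);
    }
}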
https://cloud.google.com/appengine/docs/standard/java/tools/javadoc/com/google/appengine/tools/info/UpdateCheck?hl=tr
CC-MAIN-2020-10
en
refinedweb
We’ve been prototyping a completely mixin-oriented approach to component development in a project called core-component-mixins.

import TemplateStamping from 'core-component-mixins/src/TemplateStamping';

class MyElement extends Composable.compose(HTMLElement, TemplateStamping) {
  get template() {
    return `
      <style>
      :host {
        font-weight: bold;
      }
      </style>
      Hello, world.
    `;
  }
}

Use of the TemplateStamping mixin takes care of details like shimming any <style> elements found in the template when running under the Shadow DOM polyfill.

import ElementBase from 'core-component-mixins/src/ElementBase';

class MyElement extends ElementBase {
  get template() {
    return `
      <style>
      :host {
        font-weight: bold;
      }
      </style>
      Hello, world.
    `;
  }
}

Use of the ElementBase class is entirely optional — you could just as easily create your own base class using the same mixins. Taken collectively, these core component mixins form the beginnings of a deliberately loose but useful framework for web component development. They’re still rudimentary, but they already provide much of what we need from a layer like polymer-micro. We think this strategy confers a number of advantages. It’s worth remembering that web components are, by their very nature, interoperable. If you decide to write a component using an approach like this, it’s still available to someone who’s using a different framework (Polymer, say). The reverse is also true. That means any team can pick the approach that works for them, while still sharing user interface elements at the component level. As we’re experimenting with these mixin ideas in prototype form, we’re opportunistically trying some other technology choices at the same time. This mixin-based component framework isn’t done, but feels like it’s reached the point where it’s “good enough to criticize”. Please share your feedback at @Component or +ComponentKitchen.
https://component.kitchen/blog/posts/building-web-components-from-a-loose-framework-of-mixins
CC-MAIN-2017-43
en
refinedweb
std.exception
Synopsis of some of std.exception's functions:

string synopsis()
{
    FILE* f = enforce(fopen("some/file"));
    // f is not null from here on
    FILE* g = enforce!WriteException(fopen("some/other/file", "w"));
}

Source: std/exception.d

- auto assertNotThrown(T : Throwable = Exception, E)(lazy E expression, string msg = null, string file = __FILE__, size_t line = __LINE__);
- Asserts that the given expression does not throw. Returns: the result of expression. Examples:
import core.exception : AssertError;
import std.string;
assertNotThrown!StringException(enforce!StringException(true, "Error!"));
//Exception is the default.
assertNotThrown(enforce!StringException(true, "Error!"));
assert(collectExceptionMsg!AssertError(assertNotThrown!StringException(
    enforce!StringException(false, "Error!"))) ==
    `assertNotThrown failed: StringException was thrown: Error!`);

- void assertThrown(T : Throwable = Exception, E)(lazy E expression, string msg = null, string file = __FILE__, size_t line = __LINE__);
- Asserts that the given expression throws the given type of Throwable. Examples:
import core.exception : AssertError;
import std.string;
assertThrown!StringException(enforce!StringException(false, "Error!"));
//Exception is the default.
assertThrown(enforce!StringException(false, "Error!"));
assert(collectExceptionMsg!AssertError(assertThrown!StringException(
    enforce!StringException(true, "Error!"))) ==
    `assertThrown failed: No StringException was thrown.`);

- T enforce(E : Throwable = Exception, T)(T value, lazy const(char)[] msg = null, string file = __FILE__, size_t line = __LINE__) if (is(typeof(() { if (!value) { } })));
- Enforces that the given value is true. Returns: value, if cast(bool) value is true. Otherwise, new Exception(msg) is thrown. Note: enforce is used to throw exceptions and is therefore intended to aid in error handling. It is not intended for verifying the logic of your program. That is what assert is for. Also, do not use enforce inside contracts (in and out blocks and invariants), because contracts are compiled out in release builds.

- T enforce(T, Dg, string file = __FILE__, size_t line = __LINE__)(T value, scope Dg dg) if (isSomeFunction!Dg && is(typeof(dg())) && is(typeof(() { if (!value) { } })));
- Enforces that the given value is true. Returns: value, if cast(bool) value is true. Otherwise, the given delegate is called. The safety and purity of this function are inferred from Dg's safety and purity.

- T enforce(T)(T value, lazy Throwable ex);
- Enforces that the given value is true. Returns: value, if cast(bool) value is true. Otherwise, ex is thrown. Example:
auto f = enforce(fopen("data.txt"));
auto line = readln(f);
enforce(line.length, new IOException); // expect a non-empty line

- T errnoEnforce(T, string file = __FILE__, size_t line = __LINE__)(T value, lazy string msg = null);
- Enforces that the given value is true, throwing an ErrnoException if it is not. Returns: value, if cast(bool) value is true. Otherwise, new ErrnoException(msg) is thrown. It is assumed that the last operation set errno to an error code corresponding with the failed condition. Example:
auto f = errnoEnforce(fopen("data.txt"));
auto line = readln(f);
enforce(line.length); // expect a non-empty line

- template enforceEx(E : Throwable) if (is(typeof(new E("", "std/exception.d", 610))))
  template enforceEx(E : Throwable) if (is(typeof(new E("std/exception.d", 622))) && !is(typeof(new E("", "std/exception.d", 622))))
- If !value is false, value is returned. Otherwise, new E(msg, file, line) is thrown. Or, if E doesn't take a message and can be constructed with new E(file, line), then new E(file, line) will be thrown. This is a legacy name; it is recommended to use enforce!E instead.
Example:
auto f = enforceEx!FileMissingException(fopen("data.txt"));
auto line = readln(f);
enforceEx!DataCorruptionException(line.length);

- T enforceEx(T)(T value, lazy string msg = "", string file = __FILE__, size_t line = __LINE__);
- Ditto

- T collectException(T = Exception, E)(lazy E expression, ref E result);
- Catches and returns the exception thrown from the given expression. If no exception is thrown, then null is returned and result is set to the result of the expression. E can be void.

- string collectExceptionMsg(T = Exception, E)(lazy E expression);
- Catches the exception thrown from the given expression and returns the msg property of that exception. If no exception is thrown, then null is returned. E can be void. If an exception is thrown but it has an empty message, then emptyExceptionMsg is returned. Note that while collectExceptionMsg can be used to collect any Throwable and not just Exceptions, it is generally ill-advised to catch anything that is neither an Exception nor a type derived from Exception. So, do not use collectExceptionMsg to collect non-Exceptions unless you're sure that that's what you really want to do.

- enum string emptyExceptionMsg;
- Value that collectExceptionMsg returns when it catches an exception with an empty exception message.

- pure nothrow immutable(T)[] assumeUnique(T)(T[] array);
  pure nothrow immutable(T)[] assumeUnique(T)(ref T[] array);
  pure nothrow immutable(T[U]) assumeUnique(T, U)(ref T[U] array);
- Casts a mutable array to an immutable array in an idiomatic manner. Technically, assumeUnique just performs the cast; the caller asserts that no other mutable reference to the array exists. Typically, assumeUnique is used to return arrays from functions that have allocated and built them. Returns: the immutable array. To obtain an immutable array from the writable array buffer, replace the last line with: return to!(string)(sneaky); // not that sneaky anymore. The call will duplicate the array appropriately. Note that checking for uniqueness during compilation is possible in certain cases, especially when a function is marked as a pure function. The following example does not need to call assumeUnique because the compiler can infer the uniqueness of the array.

- Returns: the value of expr, if any.
writeln(computeLength(3, 4)); // 5

- pure nothrow @trusted bool doesPointTo(S, T, Tdummy = void)(auto ref const S source, ref const T target) if (__traits(isRef, source) || isDynamicArray!S || isPointer!S || is(S == class));
  pure nothrow @trusted bool doesPointTo(S, T)(auto ref const shared S source, ref const shared T target);
  pure nothrow @trusted bool mayPointTo(S, T, Tdummy = void)(auto ref const S source, ref const T target) if (__traits(isRef, source) || isDynamicArray!S || isPointer!S || is(S == class));
  pure nothrow @trusted bool mayPointTo(S, T)(auto ref const shared S source, ref const shared T target);
- Checks whether a given source object contains pointers or references to a given target object. Returns: true if source's representation embeds a pointer that points to target's representation or somewhere inside it. If source is or contains a dynamic array, then these functions will check if there is overlap between the dynamic array and target's representation. If source is a class, then it will be handled as a pointer. If target is a pointer, a dynamic array or a class, then these functions will only check if source points to target, not what target references.
If source is or contains a union, then there may be either false positives or false negatives:
doesPointTo will return true if it is absolutely certain source points to target. It may produce false negatives, but never false positives. This function should be preferred when trying to validate input data.
mayPointTo will return false if it is absolutely certain source does not point to target. It may produce false positives, but never false negatives. This function should be preferred for defensively choosing a code path.
Note: Evaluating doesPointTo(x, x) checks whether x has internal pointers. This should only be done as an assertive test, as the language is free to assume objects don't have internal pointers (TDPL 7.1.3.5).
Examples: Pointers
int i = 0;
int* p = null;
assert(!p.doesPointTo(i));
p = &i;
assert( p.doesPointTo(i));
Examples: Structs and Unions
struct S
{
    int v;
    int* p;
}
int i;
auto s = S(0, &i);
// structs and unions "own" their members
// pointsTo will answer true if one of the members pointsTo.
assert(!s.doesPointTo(s.v)); //s.v is just the v member of s, so not pointed.
assert( s.p.doesPointTo(i)); //i is pointed to by s.p.
assert( s  .doesPointTo(i)); //which means i is pointed to by s itself.
// Unions will behave exactly the same. Points to will check each "member"
// individually, even if they share the same memory.
Examples: Arrays (dynamic and static)
int i;
int[] slice = [0, 1, 2, 3, 4];
int[5] arr = [0, 1, 2, 3, 4];
int*[] slicep = [&i];
int*[1] arrp = [&i];
// A slice points to all of its members:
assert( slice.doesPointTo(slice[3]));
assert(!slice[0 .. 2].doesPointTo(slice[3])); // Object 3 is outside of the slice [0 .. 2]
// Note that a slice will not take into account what its members point to.
assert( slicep[0].doesPointTo(i));
assert(!slicep   .doesPointTo(i));
// Static arrays are objects that own their members, just like structs:
assert(!arr.doesPointTo(arr[0])); // arr[0] is just a member of arr, so not pointed.
assert( arrp[0].doesPointTo(i)); // i is pointed to by arrp[0],
assert( arrp   .doesPointTo(i)); // which means i is pointed to by arrp itself.
// Notice the difference between static and dynamic arrays:
assert(!arr  .doesPointTo(arr[0]));
assert( arr[].doesPointTo(arr[0]));
assert( arrp .doesPointTo(i));
assert(!arrp[].doesPointTo(i));
Examples: Classes
class C
{
    this(int* p){ this.p = p; }
    int* p;
}
int i;
C a = new C(&i);
C b = a;
// Classes are a bit particular, as they are treated like simple pointers
// to a class payload.
assert( a.p.doesPointTo(i)); // a.p points to i.
assert(!a  .doesPointTo(i)); // Yet a itself does not point to i.
// To check the class payload itself, iterate on its members:
() {
    import std.traits : Fields;
    foreach (index, _; Fields!C)
        if (doesPointTo(a.tupleof[index], i))
            return;
    assert(0);
}();
// To check if a class points to a specific payload, a direct memory check
// can be done:
auto aLoc = cast(ubyte[__traits(classInstanceSize, C)]*) a;
assert(b.doesPointTo(*aLoc)); // b points to where a is pointing

- class ErrnoException: object.Exception;
- Thrown if errors that set errno occur.
- final @property uint errno();
- Operating system error code.

- Evaluates the given expression and returns its result. If the expression throws a Throwable, runs the supplied error handler instead and returns its result. The error handler's type must be the same as the expression's type. Returns: expression, if it does not throw. Otherwise, returns the result of errorHandler.
The expression and the errorHandler must have a common type they can both be implicitly cast to, and that type will be the type of the compound expression.

- enum RangePrimitive: int;
- This enum is used to select the primitives of the range to handle by the handle range wrapper. The values of the enum can be OR'd to select multiple primitives to be handled. RangePrimitive.access is a shortcut for the access primitives: front, back and opIndex. RangePrimitive.pop is a shortcut for the mutating primitives: popFront and popBack. Values: front, back, popFront, popBack, empty, save, length, opDollar, opIndex, opSlice, access, pop

- auto handle(E : Throwable, RangePrimitive primitivesToHandle, alias handler, Range)(Range input) if (isInputRange!Range);
- Handle exceptions thrown from range primitives. Use the RangePrimitive enum to specify which primitives to handle. Multiple range primitives can be handled at once by using the OR operator or the pseudo-primitives RangePrimitive.access and RangePrimitive.pop. All handled primitives must have return types or values compatible with the user-supplied handler. Returns: a wrapper struct that preserves the range interface of input. opSlice: Infinite ranges with slicing support must return an instance of std.range.Take when sliced with a specific lower and upper bound (see std.range.primitives.hasSlicing); handle deals with this by take-ing 0 from the return value of the handler function and returning that when an exception is caught. Examples:
import std.algorithm.comparison : equal;
import std.algorithm.iteration : map, splitter;
import std.conv : to, ConvException;
auto s = "12,1337z32,54,2,7,9,1z,6,8";
// The next line composition will throw when iterated
// as some elements of the input do not convert to integer.
auto r = s.splitter(',').map!(a => to!int(a));
// Substitute 0 for cases of ConvException
auto h = r.handle!(ConvException, RangePrimitive.front, (e, r) => 0);
assert(h.equal([12, 0, 54, 2, 7, 9, 0, 6, 8]));
Examples:
import std.algorithm.comparison : equal;
import std.range : retro;
import std.utf : UTFException;
auto str = "hello\xFFworld"; // 0xFF is an invalid UTF-8 code unit
auto handled = str.handle!(UTFException, RangePrimitive.access, (e, r) => ' '); // Replace invalid code points with spaces
assert(handled.equal("hello world")); // `front` is handled,
assert(handled.retro.equal("dlrow olleh")); // as well as `back`

- template basicExceptionCtors()
- Convenience mixin for trivially sub-classing exceptions. Even trivially sub-classing an exception involves writing boilerplate code for the constructor to: 1) correctly pass in the source file and line number the exception was thrown from; 2) be usable with enforce, which expects exception constructors to take arguments in a fixed order. This mixin provides that boilerplate code. Note however that you need to mark the mixin line with at least a minimal (i.e. just ///) DDoc comment if you want the mixed-in constructors to be documented in the newly created Exception subclass. Current limitation: Due to bug #11500, currently the constructors specified in this mixin cannot be overloaded with any other custom constructors.
Thus this mixin can currently only be used when no such custom constructors need to be explicitly specified. Examples:
class MeaCulpa: Exception
{
    ///
    mixin basicExceptionCtors;
}
try
    throw new MeaCulpa("test");
catch (MeaCulpa e)
{
    writeln(e.msg);  // "test"
    writeln(e.file); // __FILE__
    writeln(e.line); // __LINE__ - 5
}

- pure nothrow @nogc @safe this(string msg, string file = __FILE__, size_t line = __LINE__, Throwable next = null);
- Parameters:
- pure nothrow @nogc @safe this(string msg, Throwable next, string file = __FILE__, size_t line = __LINE__);
- Parameters:
https://docarchives.dlang.io/v2.075.0/phobos/std_exception.html
CC-MAIN-2017-43
en
refinedweb
- Custom scopes from authentication providers
- Single sign-on support for Windows Store applications
- Updated dependency to Web API 5.2

Getting the updates
As has been the case, you can get the latest updates to your project via the NuGet Package Explorer. Right-click the project node or the references node in your project in the solution explorer, then select the "Manage NuGet Packages" option.

Custom Scopes for Social Authentication Providers
This has been a long-standing feature request, which is present in the node.js backend. When you log in to the .NET backend, you can ask the server for a token to talk to the authentication providers. By default, the token only grants some basic information about the user. Now, in the .NET backend, you can also request additional login scopes so that the access token which you receive at the server can be used to retrieve more information from the authentication provider. Like in the node.js backend, this feature is available for Facebook, Google and Microsoft accounts. Like in the node.js backend, the login scopes can be defined using app settings, which can be set in the "configure" tab in the portal: MS_FacebookScope, MS_GoogleScope and MS_MicrosoftScope (for Facebook, Google and Microsoft accounts, respectively). Let's look at an example of those scopes being used. I've set up a .NET mobile service with authentication for two of the providers mentioned above (Facebook and Microsoft). I'll also add a controller that talks to the providers to retrieve information about the logged in user:

public class UserInfoController : ApiController
{
    public ApiServices Services { get; set; }

    [AuthorizeLevel(AuthorizationLevel.User)]
    public async Task<JObject> GetUserInfo()
    {
        ServiceUser user = this.User as ServiceUser;
        if (user == null)
        {
            throw new InvalidOperationException("This can only be called by authenticated clients");
        }

        var identities = await user.GetIdentitiesAsync();
        var result = new JObject();

        var fb = identities.OfType<FacebookCredentials>().FirstOrDefault();
        if (fb != null)
        {
            var accessToken = fb.AccessToken;
            result.Add("facebook", await GetProviderInfo("" + accessToken));
        }

        var ms = identities.OfType<MicrosoftAccountCredentials>().FirstOrDefault();
        if (ms != null)
        {
            var accessToken = ms.AccessToken;
            result.Add("microsoft", await GetProviderInfo("" + accessToken));
        }

        return result;
    }

    private async Task<JToken> GetProviderInfo(string url)
    {
        var c = new HttpClient();
        var resp = await c.GetAsync(url);
        resp.EnsureSuccessStatusCode();
        return JToken.Parse(await resp.Content.ReadAsStringAsync());
    }
}

Then, we can have an application that authenticates and gets information about its user. For this example, I'll use a simple app with buttons for each of the authentication providers mentioned above.
public sealed partial class MainPage : Page
{
    public static MobileServiceClient MobileService = new MobileServiceClient(
        "",
        "yourapplicationkeyshouldbehere00"
    );

    public MainPage()
    {
        this.InitializeComponent();
    }

    private async void btnFacebook_Click(object sender, RoutedEventArgs e)
    {
        await LoginAndGetUserInfo(MobileServiceAuthenticationProvider.Facebook);
    }

    private async void btnMicrosoft_Click(object sender, RoutedEventArgs e)
    {
        await LoginAndGetUserInfo(MobileServiceAuthenticationProvider.MicrosoftAccount);
    }

    private async Task LoginAndGetUserInfo(MobileServiceAuthenticationProvider provider)
    {
        try
        {
            var user = await MobileService.LoginAsync(provider);
            Debug("Logged in as {0}", user.UserId);
            var userInfo = await MobileService.InvokeApiAsync("userInfo", HttpMethod.Get, null);
            Debug("User info: {0}", userInfo);
            MobileService.Logout();
            Debug("Logged out");
            Debug("");
        }
        catch (Exception ex)
        {
            Debug("Error: {0}", ex);
        }
    }

    private void Debug(string text, params object[] args)
    {
        if (args != null && args.Length > 0) text = string.Format(text, args);
        this.txtDebug.Text = this.txtDebug.Text + text + Environment.NewLine;
    }
}

If we run the app and log in with each of the providers, we'd get some basic information about the user. For example, this is what I get with my credentials:

Logged in as Facebook:xxxxxxxxxxxxx9805
User info: {
  "facebook": {
    "id": "xxxxxxxxxxxxx9805",
    "locale": "en_US",
    "updated_time": "2014-09-30T09:38:42Z"
  }
}
Logged out

That's some basic information, but if my application also needed the user's e-mail or some other information, the access token granted by the service login didn't have access to that. But if we request additional scopes during the login, by setting the MS_FacebookScope and MS_MicrosoftScope app settings, we'd get the additional information we need:

Logged in as Facebook:xxxxxxxxxxxxx9805
User info: {
  "facebook": {
    "id": "xxxxxxxxxxxxx9805",
    "birthday": "xx/yy/zzzz",
    "email": "xxxxxxxxxxxxxxxxxxx@hotmail.com",
    "emails": {
      "preferred": "xxxxxxxxxxxxxxxxxxx@hotmail.com",
      "account": "xxxxxxxxxxxxxxxxxxx@hotmail.com",
      "personal": null,
      "business": null
    },
    "locale": "en_US",
    "updated_time": "2014-09-30T09:38:42Z"
  }
}
Logged out

One final note about requesting additional scopes:

Single Sign-On Support for Windows Store Applications
When you use the mobile services SDK in a Windows Store application, every time the app calls the LoginAsync method on the MobileServiceClient, passing the authentication provider, the authentication window is shown, and the user has to enter their credentials and click the "sign in" button in the authentication page – even if the user selected the "remember me" button in the provider's login page (Windows may have cached the credentials so that they don't need to be entered, but the user still needs to click the button to log in). That's because by default the cookies from an authentication session are not preserved, so that when the provider page is shown again, there will be no cookies from a previous authentication to identify the user. There's an overload of LoginAsync which takes an additional flag indicating that the client should cache the cookies of the authentication sessions, so that the next time LoginAsync is called, the authentication dialog will just be shown briefly and then automatically dismissed, making for a better user experience. In the client shown in the previous section, all we need to do is use the additional overload and pass true to the second parameter of LoginAsync.
private async Task LoginAndGetUserInfo(MobileServiceAuthenticationProvider provider)
{
    try
    {
        var user = await MobileService.LoginAsync(provider, true);
        Debug("Logged in as {0}", user.UserId);
        var userInfo = await MobileService.InvokeApiAsync("userInfo", HttpMethod.Get, null);
        Debug("User info: {0}", userInfo);
        MobileService.Logout();
        Debug("Logged out");
        Debug("");
    }
    catch (Exception ex)
    {
        Debug("Error: {0}", ex);
    }
}

There's one change which needs to be made on the server side as well, if you haven't done so yet. To enable this scenario, the application needs to be associated with an app in the Windows Store, since the package SID (one of the app identifiers) for that application needs to be stored in the service. You will get a package SID by creating the app in the Windows Store Dashboard, and you can see how to find that value in the tutorial to register the app package for Microsoft authentication. If your app will not use Microsoft authentication (for example, if it uses Facebook or Twitter), you won't need to copy the client id / secrets, but you'll still need to copy the package SID into the Microsoft account settings under the identity tab in the portal.
https://azure.microsoft.com/nb-no/blog/custom-login-scopes-single-sign-on-new-asp-net-web-api-updates-to-the-azure-mobile-services-net-backend/
CC-MAIN-2017-43
en
refinedweb
I recently wanted to have a console application that had configuration specific settings. For instance, if I had two configurations “Debug” and “Release”, depending on the currently selected configuration I wanted it to use a specific configuration file (either debug or config). If you are wanting to do something similar, here is a potential solution that worked for me. First, let’s set up an application that will demonstrate the most basic concept. using System; using System.Configuration; namespace ConsoleSpecificConfiguration { class Program { static void Main(string[] args) { Console.WriteLine("Config"); Console.WriteLine(ConfigurationManager.AppSettings["Example Config"]); Console.ReadLine(); } } } This does a really simple thing. Display a config when run. To do this, you also need a config file set up. My default looks as follows… <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="Example Config" value="Default"/> </appSettings> </configuration> Your entire solution will look as follows… Running the project you will get the following amazing output… Let’s now say instead of having one config file we want depending on whether we are running in “Debug” or “Release” for the solution configuration we want different config settings to be propagated across you can do the following… First add additional config files to your solution. You should have some form of naming convention for these config files, I have decided to follow a similar convention to the one used for web.config, so in my instance I am going to add a App.Debug.config and a App.Release.config file BUT you can follow any naming convention you want provided you wire up the rest of the approach to use this convention. My files look as follows.. App.Debug.config <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="Example Config" value="Debug"/> </appSettings> </configuration> App.Release.config <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="Example Config" value="Release"/> </appSettings> </configuration> Your solution will now look as follows… The next step is to create a bat file that will overwrite one file with another. If you right click on the solution in the solution explorer there will be a menu option to add new items to the solution. Create a text file called “copyifnewer.bat” which will be our copy script. It’s contents should look as follows… @echo off echo Comparing two files: %1 with %2 if not exist %1 goto File1NotFound if not exist %2 goto File2NotFound fc %1 %2 . Your solution should now look as follows… We now need to wire up everything – which we will do using the post build event command line in VS2010. Right click on your project and go to it’s properties We are now going to wire up the script so that when we build our project it will overwrite the default App.config with whatever file we want. The syntax goes as follows… call "$(SolutionDir)copyifnewer.bat" "$(ProjectDir)App.$(ConfigurationName).config" "$(ProjectDir)$(OutDir)\$(TargetFileName).config" If I now change my project configuration to Release And then run my application I get the following output… Toggling between Release and Debug mode will show that the config file is changing each time. And that is it!
http://geekswithblogs.net/MarkPearl/archive/2012/04/04/getting-app.config-to-be-configuration-specific-in-vs2010.aspx
CC-MAIN-2017-43
en
refinedweb
diophantine
A quadratic diophantine equation solving library.
This package is not currently in any snapshots. If you're interested in using it, we recommend adding it to Stackage Nightly. Doing so will make builds more reliable, and allow stackage.org to host generated Haddocks.

Math.Diophantine
A quadratic diophantine equation solving library for Haskell.

Overview:
This library is designed to solve for equations in the form of: ax^2 + bxy + cy^2 + dx + ey + f = 0
Throughout the library, the variables (a,b,c,d,e,f) will always refer to these coefficients. This library will also use the alias: type Z = Integer to shorten the type declarations of the data types and functions.

Installation:
To install the library, just use cabal along with the provided install files.

Use:
Import the library with: import Math.Diophantine
The most important function of this library is solve :: Equation -> Either SolveError Solution. The types of equations that this library can solve are defined by the different constructors of Equation:
GeneralEquation Z Z Z Z Z Z - where the six Integers coincide with the six coefficients.
LinearEquation Z Z Z - where the 3 integers are d, e, and f.
SimpleHyperbolicEquation Z Z Z Z - where the 4 integers are b, d, e, and f.
ElipticalEquation Z Z Z Z Z Z - where the six Integers coincide with the six coefficients.
ParabolicEquation Z Z Z Z Z Z - where the six Integers coincide with the six coefficients.
HyperbolicEquation Z Z Z Z Z Z - where the six Integers coincide with the six coefficients.
For most cases, one will want to call solve with a GeneralEquation. A GeneralEquation is used when one does not know the type of equation beforehand, or wants to take advantage of the library's ability to determine what kind of form it fits best. One can call specializeEquation to convert a GeneralEquation into the best specialized equation that it matches. This function is called within solve, so one can pass any type of equation to solve. The specific functions will try to match to a GeneralEquation if they can; however, they will throw an error if they cannot. The error behavior exists only because these functions should be called directly if and only if you know at compile time that they will only ever receive the proper form. One may want to use these directly for a speed increase, or to clarify a section of code.
The solve* functions will return a Solution. Solutions are as follows:
ZxZ - ZxZ is the Cartesian product of Z and Z, or the set of all pairs of integers. This Solution denotes cases where all pairs will satisfy your equation, such as 0x + 0y = 0.
NoSolutions - This Solution denotes that for all (x,y) in Z cross Z, no pair satisfies the equation.
SolutionSet [(Z,Z)] - This Solution denotes that all pairs (x,y) in this set satisfy the given equation.
There is also readEquation :: String -> Either ParseError Equation and solveString :: String -> Either SolveError Solution for parsing equations out of strings. This will do some basic simplification of the equation.

TODO:
- Finish the implementation of solveHyperbolic
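To make the API above concrete, here is a small usage sketch in Haskell, based only on the signatures and constructors listed in this overview; it assumes Solution and SolveError have Show instances (not stated here), and the expected-output comment is only illustrative.

module Main where

import Math.Diophantine

-- x^2 + y^2 - 25 = 0, i.e. a = 1, b = 0, c = 1, d = 0, e = 0, f = -25
circleEq :: Equation
circleEq = GeneralEquation 1 0 1 0 0 (-25)

main :: IO ()
main =
    case solve circleEq of
        Left err   -> putStrLn ("solver error: " ++ show err)
        Right sols -> print sols -- e.g. a SolutionSet of the integer points on the circle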
https://www.stackage.org/package/diophantine
CC-MAIN-2017-43
en
refinedweb
Pumpkin - Notes on handling the Perl Patch Pumpkin There is no simple synopsis, yet. This document attempts to begin to describe some of the considerations involved in patching and maintaining perl. This document is still under construction, and still subject to significant changes. Still, I hope parts of it will be useful, so I'm releasing it even though it's not done. For the most part, it's a collection of anecdotal information that already assumes some familiarity with the Perl sources. I really need an introductory section that describes the organization of the sources and all the various auxiliary files that are part of the distribution. The Comprehensive Perl Archive Network (or CPAN) is the place to go. There are many mirrors, but the easiest thing to use is probably , which automatically points you to a mirror site "close" to you. The mailing list perl5-porters@perl.org is the main group working with the development of perl. If you're interested in all the latest developments, you should definitely subscribe. The list is high volume, but generally has a fairly low noise level. Subscribe by sending the message (in the body of your letter) subscribe perl5-porters to perl5-porters-request@perl.org . Archives of the list are held at:.. There are no absolute rules, but there are some general guidelines I have tried to follow as I apply patches to the perl sources. (This section is still under construction.) Never implement a specific restricted solution to a problem when you can solve the same problem in a more general, flexible way. For example, for dynamic loading to work on some SVR4 systems, we had to build a shared libperl.so library. In order to build "FAT" binaries on NeXT 4.0 systems, we had to build a special libperl library. Rather than continuing to build a contorted nest of special cases, I generalized the process of building libperl so that NeXT and SVR4 users could still get their work done, but others could build a shared libperl if they wanted to as well. If you are making big changes, don't do it in secret. Discuss the ideas in advance on perl5-porters. If your changes may affect how users use perl, then check to be sure that the documentation is in sync with your changes. Be sure to check all the files pod/*.pod and also the INSTALL document. Consider writing the appropriate documentation first and then implementing your change to correspond to the documentation. To the extent reasonable, try to avoid machine-specific #ifdef's in the sources. Instead, use feature-specific #ifdef's. The reason is that the machine-specific #ifdef's may not be valid across major releases of the operating system. Further, the feature-specific tests may help out folks on another platform who have the same problem. We should never release a main version without testing it as a subversion first. We should never release a main version without testing whether or not it breaks various popular modules and applications. A partial list of such things would include majordomo, metaconfig, apache, Tk, CGI, libnet, and libwww, to name just a few. Of course it's quite possible that some of those things will be just plain broken and need to be fixed, but, in general, we ought to try to avoid breaking widely-installed things. The embed.h, keywords.h, opcode.h, and perltoc.pod files are all automatically generated by perl scripts. In general, don't patch these directly; patch the data files instead. Configure and config_h.SH are also automatically generated by metaconfig. 
In general, you should patch the metaconfig units instead of patching these files directly. However, very minor changes to Configure may be made in between major sync-ups with the metaconfig units, which tend to be complicated operations. But be careful: this can quickly spiral out of control. Running metaconfig is not really difficult. If you need to make changes to Configure or config_h.SH, it may be best to change the appropriate metaconfig units instead, and regenerate Configure. metaconfig -m will regenerate Configure and config_h.SH. Much more information on obtaining and running metaconfig is in the U/README file that comes with Perl's metaconfig units. If you are using metaconfig to regenerate Configure, then you should note that metaconfig actually uses MANIFEST.new, so you want to be sure MANIFEST.new is up-to-date too. I haven't found the MANIFEST/MANIFEST.new distinction particularly useful, but that's probably because I still haven't learned how to use the full suite of tools in the dist distribution.

This will build a config.sh and config.h. You can skip this if you haven't changed Configure or config_h.SH at all. I use the following command:

    sh Configure -Dprefix=/opt/perl -Doptimize=-O -Dusethreads \
        -Dcf_by='yourname' \
        -Dcf_email='yourname@yourhost.yourplace.com' \
        -Dperladmin='yourname@yourhost.yourplace.com' \
        -Dmydomain='.yourplace.com' \
        -Dmyhostname='yourhost' \
        -des

[XXX This section needs revision. We're currently working on easing the task of keeping the vms, win32, and plan9 config.sh info up-to-date. The plan is to keep up-to-date 'canned' config.sh files in the appropriate subdirectories and then generate 'canned' config.h files for vms, win32, etc. from the generic config.sh file. This is to ease maintenance. When Configure gets updated, the parts sometimes get scrambled around, and the changes in config_H can sometimes be very hard to follow. config.sh, on the other hand, can safely be sorted, so it's easy to track (typically very small) changes to config.sh and then propagate them to a canned 'config.h' by any number of means, including a perl script in win32/ or carrying config.sh and config_h.SH to a Unix system and running sh config_h.SH. XXX]

The Porting/config.sh and Porting/config_H files are provided to help those folks who can't run Configure. It is important to keep them up-to-date. If you have changed config_h.SH, those changes must be reflected in config_H as well. (The name config_H was chosen to distinguish the file from config.h even on case-insensitive file systems.) Simply edit the existing config_H file; keep the first few explanatory lines and then copy your new config.h below. It may also be necessary to update win32/config.

The embed.h, keywords.h, and opcode.h files are all automatically generated by perl scripts. Since the user isn't guaranteed to have a working perl, we can't require the user to generate them. Hence you have to, if you're making a distribution. I used to include rules like the following in the makefile:

    # The following three header files are generated automatically
    # The correct versions should be already supplied with the perl kit,
    # in case you don't have perl or 'sh' available.
    # The - is to ignore error return codes in case you have the source
    # installed read-only or you don't have perl yet.
    keywords.h: keywords.pl
        @echo "Don't worry if this fails."
        - perl keywords.pl

However, I got lots of mail consisting of people worrying because the command failed.
I eventually decided that I would save myself time and effort by manually running make regen_headers myself rather than answering all the questions and complaints about the failing command.

We should try to maintain source and binary compatibility with older releases of perl. That way, extensions built under one version of perl will continue to work with new versions of perl. Of course, some incompatible changes may well be necessary. I'm just suggesting that we not make any such changes without thinking carefully about them first. If possible, we should provide backwards-compatibility stubs. There's a lot of XS code out there. Let's not force people to keep changing it.

There is an unordered list of aspects of Perl that could use enhancement, features that could be added, areas that could be cleaned up, and so on. During your term as pumpkin-holder, you will probably address some of these issues, and perhaps identify others which, while you decide not to address them this time around, may be tackled in the future. Update the Changes file.

In the os2 directory is diff.configure, a set of OS/2-specific diffs against Configure. If you make changes to Configure, you may want to consider regenerating this diff file to save trouble for the OS/2 maintainer. You can also consider the OS/2 diffs as reminders of portability things that need to be fixed in Configure.

I find the makepatch utility quite handy for making patches. You can obtain it from any CPAN archive under . There are a couple of differences between my version and the standard one. I have mine do a

    # Print a reassuring "End of Patch" note so people won't
    # wonder if their mailer truncated patches.
    print "\n\nEnd of Patch.\n";

at the end. That's because I used to get questions from people asking if their mail was truncated. It also writes Index: lines which include the new directory prefix (change the Index: print, approx line 294 or 310 depending on the version, to read: print PATCH ("Index: $newdir$new\n");). That helps patches work with more POSIX conformant patch programs. Here's how I generate a new patch. I'll use the hypothetical 5.004_07 to 5.004_08 patch as an example.

    # unpack perl5.004_07/
    gzip -d -c perl5.004_07.tar.gz | tar -xof -
    # unpack perl5.004_08/
    gzip -d -c perl5.004_08.tar.gz | tar -xof -
    makepatch perl5.004_07 perl5.004_08 > perl5.004_08.pat

Makepatch will automatically generate appropriate rm commands to remove deleted files. Unfortunately, it will not correctly set permissions for newly created files, so you may have to do so manually. For example, patch 5.003_04 created a new test t/op/gv.t which needs to be executable, so at the top of the patch, I inserted the following lines:

    # Make a new test
    touch t/op/gv.t
    chmod +x t/op/gv.t

Now, of course, my patch is wrong because makepatch didn't know I was going to do that command, and it patched against /dev/null. So, what I do is sort out all such shell commands that need to be in the patch (including possible mv-ing of files, if needed) and put that in the shell commands at the top of the patch. Next, I delete all the patch parts of perl5.004_08.pat, leaving just the shell commands. Then, I do the following:

    cd perl5.004_07
    sh ../perl5.004_08.pat
    cd ..
    makepatch perl5.004_07 perl5.004_08 >> perl5.004_08.pat

(Note the append to preserve my shell commands.) Now, my patch will line up with what the end users are going to do. It seems obvious, but be sure to test your patch. That is, verify that it produces exactly the same thing as your full distribution.
rm -rf perl5.004_07 gzip -d -c perl5.004_07.tar.gz | tar -xf - cd perl5.004_07 sh ../perl5.004_08.pat patch -p1 -N < ../perl5.004_08.pat cd .. gdiff -r perl5.004_07 perl5.004_08 where gdiff is GNU diff. Other diff's may also do recursive checking. Again, it's obvious, but you should test your new version as widely as you can. You can be sure you'll hear about it quickly if your version doesn't work on both ANSI and pre-ANSI compilers, and on common systems such as SunOS 4.1.[34], Solaris, and Linux. If your changes include conditional code, try to test the different branches as thoroughly as you can. For example, if your system supports dynamic loading, you can also test static loading with sh Configure -Uusedl You can also hand-tweak your config.h to try out different #ifdef branches. :-) It's often the case that you'll need to choose whether to do something the BSD-ish way or the POSIX-ish way. It's usually not a big problem when the two systems use different names for similar functions, such as memcmp() and bcmp(). The perl.h header file handles these by appropriate #defines, selecting the POSIX mem*() functions if available, but falling back on the b*() functions, if need be. More serious is the case where some brilliant person decided to use the same function name but give it a different meaning or calling sequence :-). getpgrp() and setpgrp() come to mind. These are a real problem on systems that aim for conformance to one standard (e.g. POSIX), but still try to support the other way of doing things (e.g. BSD). My general advice (still not really implemented in the source) is to do something like the following. Suppose there are two alternative versions, fooPOSIX() and fooBSD(). #ifdef HAS_FOOPOSIX /* use fooPOSIX(); */ #else # ifdef HAS_FOOBSD /* try to emulate fooPOSIX() with fooBSD(); perhaps with the following: */ # define fooPOSIX fooBSD # else # /* Uh, oh. We have to supply our own. */ # define fooPOSIX Perl_fooPOSIX # endif #endif If you need to add an #ifdef test, it is usually easier to follow if you think positively, e.g. #ifdef HAS_NEATO_FEATURE /* use neato feature */ #else /* use some fallback mechanism */ #endif rather than the more impenetrable #ifndef MISSING_NEATO_FEATURE /* Not missing it, so we must have it, so use it */ #else /* Are missing it, so fall back on something else. */ #endif Of course for this toy example, there's not much difference. But when the #ifdef's start spanning a couple of screen fulls, and the #else's are marked something like #else /* !MISSING_NEATO_FEATURE */ I find it easy to get lost. Not all systems have all the neat functions you might want or need, so you might decide to be helpful and provide an emulation. This is sound in theory and very kind of you, but please be careful about what you name the function. Let me use the pause() function as an illustration. Perl5.003 has the following in perl.h #ifndef HAS_PAUSE #define pause() sleep((32767<<16)+32767) #endif Configure sets HAS_PAUSE if the system has the pause() function, so this #define only kicks in if the pause() function is missing. Nice idea, right? Unfortunately, some systems apparently have a prototype for pause() in unistd.h, but don't actually have the function in the library. (Or maybe they do have it in a library we're not using.) Thus, the compiler sees something like extern int pause(void); /* . . . */ #define pause() sleep((32767<<16)+32767) and dies with an error message. (Some compilers don't mind this; others apparently do.) 
To work around this, 5.003_03 and later have the following in perl.h:

    /* Some unistd.h's give a prototype for pause() even though
       HAS_PAUSE ends up undefined.  This causes the #define
       below to be rejected by the compiler.  Sigh.
    */
    #ifdef HAS_PAUSE
    #   define Pause pause
    #else
    #   define Pause() sleep((32767<<16)+32767)
    #endif

This works. The curious reader may wonder why I didn't do the following in util.c instead:

    #ifndef HAS_PAUSE
    void pause()
    {
        sleep((32767<<16)+32767);
    }
    #endif

That is, since the function is missing, just provide it. Then things would probably have been alright, it would seem. Well, almost. It could be made to work. The problem arises from the conflicting needs of dynamic loading and namespace protection. For dynamic loading to work on AIX (and VMS) we need to provide a list of symbols to be exported. This is done by the script perl_exp.SH, which reads global.sym and interp.sym. Thus, the pause symbol would have to be added to global.sym. So far, so good. On the other hand, one of the goals of Perl5 is to make it easy to either extend or embed perl and link it with other libraries. This means we have to be careful to keep the visible namespace "clean". That is, we don't want perl's global variables to conflict with those in the other application library. Although this work is still in progress, the way it is currently done is via the embed.h file. This file is built from the global.sym and interp.sym files, since those files already list the globally visible symbols. If we had added pause to global.sym, then embed.h would contain the line

    #define pause Perl_pause

and calls to pause in the perl sources would now point to Perl_pause. Now, when ld is run to build the perl executable, it will go looking for Perl_pause, which probably won't exist in any of the standard libraries. Thus the build of perl will fail.

A similar problem arose with chsize(): systems that lack chsize() get a perl-supplied replacement function declared something akin to

    #ifndef HAS_CHSIZE
    I32 chsize(fd, length)
    /* . . . */
    #endif

When 5.003 added

    #define chsize Perl_chsize

to embed.h, the compile started failing on SCO systems. The "fix" is to give the function a different name. The one implemented in 5.003_05 isn't optimal, but here's what was done:

    #ifdef HAS_CHSIZE
    # ifdef my_chsize  /* Probably #defined to Perl_my_chsize in embed.h */
    #   undef my_chsize
    # endif
    # define my_chsize chsize
    #endif

My explanatory comment in patch 5.003_05 said:

    Undef and then re-define my_chsize from Perl_my_chsize to just plain
    chsize if this system HAS_CHSIZE.  This probably only applies to SCO.
    This shows the perils of having internal functions with the same name
    as external library functions :-).

Now, we can safely put my_chsize in global.sym, export it, and hide it with embed.h. To be consistent with what I did for pause, I probably should have called the new function Chsize, rather than my_chsize. However, the perl sources are quite inconsistent on this (consider New, Mymalloc, and Myremalloc, to name just a few). There is a problem with this fix, however, in that Perl_chsize was available as a libperl.a library function in 5.003, but it isn't available any more (as of 5.003_07). This means that we've broken binary compatibility. This is not good. We currently don't have a standard way of handling such missing function names. Right now, I'm effectively thinking aloud about a solution. Some day, I'll try to formally propose a solution.
Part of the problem is that we want to have some functions listed as exported but not have their names mangled by embed.h or possibly conflict with names in standard system headers. We actually already have such a list at the end of perl_exp.SH (though that list is out-of-date):

    # extra globals not included above.
    cat <<END >> perl.exp
    perl_init_ext
    perl_init_fold
    perl_init_i18nl14n
    perl_alloc
    perl_construct
    perl_destruct
    perl_free
    perl_parse
    perl_run
    perl_get_sv
    perl_get_av
    perl_get_hv
    perl_get_cv
    perl_call_argv
    perl_call_pv
    perl_call_method
    perl_call_sv
    perl_requirepv
    safecalloc
    safemalloc
    saferealloc
    safefree
    END

This still needs much thought, but I'm inclined to think that one possible solution is to prefix all such functions with perl_ in the source and list them along with the other perl_* functions in perl_exp.SH. Thus, for chsize, we'd do something like the following:

    /* in perl.h */
    #ifdef HAS_CHSIZE
    #   define perl_chsize chsize
    #endif

then in some file (e.g. util.c or doio.c) do

    #ifndef HAS_CHSIZE
    I32 perl_chsize(fd, length)
    /* implement the function here . . . */
    #endif

Alternatively, we could just always use chsize everywhere and move chsize from global.sym to the end of perl_exp.SH. That would probably be fine as long as our chsize function agreed with all the chsize function prototypes in the various systems we'll be using. As long as the prototypes in actual use don't vary that much, this is probably a good alternative. (As a counter-example, note how Configure and perl have to go through hoops to find and use Malloc_t and Free_t for malloc and free.) At the moment, this latter option is what I tend to prefer.

Sorry, showing my age :-). Still, all the world is not BSD 4.[34], SVR4, or POSIX. Be aware that SVR3-derived systems are still quite common (do you have any idea how many systems run SCO?). If you don't have a bunch of v7 manuals handy, the metaconfig units (by default installed in /usr/local/lib/dist/U) are a good resource to look at for portability.

Why does perl use a metaconfig-generated Configure script instead of an autoconf-generated configure script? Metaconfig and autoconf are two tools with very similar purposes. Metaconfig is actually the older of the two, and was originally written by Larry Wall, while autoconf is probably now used in a wider variety of packages. The autoconf info file discusses the history of autoconf and how it came to be. The curious reader is referred there for further information. Overall, both tools are quite good, I think, and the choice of which one to use could be argued either way. In March, 1994, when I was just starting to work on Configure support for Perl5, I considered both autoconf and metaconfig, and eventually decided to use metaconfig for the following reasons: At the time, my development system was a SVR3.2/386 derivative that also had some POSIX support. Metaconfig-generated Configure scripts worked fine for me on that system. On the other hand, autoconf-generated scripts usually didn't. (They did come quite close, though, in some cases.) At the time, I actually fetched a large number of GNU packages and checked. Not a single one configured and compiled correctly out-of-the-box with the system's cc compiler. With both autoconf and metaconfig, if the script works, everything is fine. However, one of my main problems with autoconf-generated scripts was that if it guessed wrong about something, it could be very hard to go back and fix it.
For example, autoconf always insisted on passing the -Xp flag to cc (to turn on POSIX behavior), even when that wasn't what I wanted or needed for that package. There was no way short of editing the configure script to turn this off. You couldn't just edit the resulting Makefile at the end because the -Xp flag influenced a number of other configure tests. Metaconfig's Configure scripts, on the other hand, can be interactive. Thus if Configure is guessing things incorrectly, you can go back and fix them. This isn't as important now as it was when we were actively developing Configure support for new features such as dynamic loading, but it's still useful occasionally. At the time, autoconf-generated scripts were covered under the GNU Public License, and hence weren't suitable for inclusion with Perl, which has a different licensing policy. (Autoconf's licensing has since changed.) Metaconfig builds up Configure from a collection of discrete pieces called "units". You can override the standard behavior by supplying your own unit. With autoconf, you have to patch the standard files instead. I find the metaconfig "unit" method easier to work with. Others may find metaconfig's units clumsy to work with.

Why isn't there a directory for overriding Perl's standard library files? Mainly because no one's gotten around to making one. Note that "making one" involves changing perl.c, Configure, config_h.SH (and associated files, see above), and documenting it all in the INSTALL file. Apparently, most folks who want to override one of the standard library files simply do it by overwriting the standard library files. In the perl.c sources, you'll find an undocumented APPLLIB_EXP variable, sort of like PRIVLIB_EXP and ARCHLIB_EXP (which are documented in config_h.SH). Here's what APPLLIB_EXP is for, from a mail message from Larry:

    The main intent of APPLLIB_EXP is for folks who want to send out a
    version of Perl embedded in their product. They would set the symbol
    to be the name of the library containing the files needed to run or
    to support their particular application. This works at the "override"
    level to make sure they get their own versions of any library code
    that they absolutely must have configuration control over.

    As such, I don't see any conflict with a sysadmin using it for an
    override-ish sort of thing, when installing a generic Perl. It should
    probably have been named something to do with overriding though.
    Since it's undocumented we could still change it... :-)

Given that it's already there, you can use it to override distribution modules. If you do

    sh Configure -Dccflags='-DAPPLLIB_EXP=/my/override'

then perl.c will put /my/override ahead of ARCHLIB and PRIVLIB.

Why isn't the shared libperl.so installed in /usr/lib/ along with "all the other" shared libraries? Instead, it is installed in $archlib, which is typically something like /usr/local/lib/perl5/archname/5.00404 and is architecture- and version-specific. The basic reason why a shared libperl.so gets put in $archlib is so that you can have more than one version of perl on the system at the same time, and have each refer to its own libperl.so. Three examples might help. All of these work now; none would work if you put libperl.so in /usr/lib. Anyway, all this leads to quite obscure failures that are sure to drive casual users crazy. Even experienced users will get confused :-). Upon reflection, I'd say leave libperl.so in $archlib.

You can upload your work to CPAN if you have a CPAN id. Check out for information on PAUSE, the Perl Author's Upload Server. I typically upload both the patch file, e.g.
perl5.004_08.pat.gz and the full tar file, e.g. perl5.004_08.tar.gz. If you want your patch to appear in the src/5.0/unsupported directory on CPAN, send e-mail to the CPAN master librarian. (Check out ). You should definitely announce your patch on the perl5-porters list. You should also consider announcing your patch on comp.lang.perl.announce, though you should make it quite clear that a subversion is not a production release, and be prepared to deal with people who will not read your disclaimer. Here, in no particular order, are some Configure and build-related items that merit consideration. This list isn't exhaustive, it's just what I came up with off the top of my head.. We should be able to emulate configure --srcdir. Tom Tromey tromey@creche.cygnus.com has submitted some patches to the dist-users mailing list along these lines. They have been folded back into the main distribution, but various parts of the perl Configure/build/install process still assume src='.'.). Various hint files work around Configure problems. We ought to fix Configure so that most of them aren't needed. Some of the hint file information (particularly dynamic loading stuff) ought to be fed back into the main metaconfig distribution.. On some systems, it may be safe to call the system malloc directly without going through the util.c safe* layers. (Such systems would accept free(0), for example.) This might be a time-saver for systems that already have a good malloc. (Recent Linux libc's apparently have a nice malloc that is well-tuned for the system.) Get some of the Macintosh stuff folded back into the main distribution. Maybe include a replacement function that doesn't lose data in rare cases of coercion between string and numerical values. The current makedepend process is clunky and annoyingly slow, but it works for most folks. Alas, it assumes that there is a filename $firstmakefile that the make command will try to use before it uses Makefile. Such may not be the case for all make commands, particularly those on non-Unix systems. Probably some variant of the BSD .depend file will be useful. We ought to check how other packages do this, if they do it at all. We could probably pre-generate the dependencies (with the exception of malloc.o, which could probably be determined at Makefile.SH extraction time. GNU software generally has standardized Makefile targets. Unless we have good reason to do otherwise, I see no reason not to support them. Somehow, straighten out, document, and implement $
http://search.cpan.org/~lbrocard/perl5.005_04/Porting/pumpkin.pod
CC-MAIN-2017-43
en
refinedweb
Internet Protocol Details Topic Last Modified: 2005-06-21 As mentioned, Exchange 2003 supports several Internet standards-based client protocols, including HTTP, POP3, IMAP4, and NNTP. These protocols are described in more detail in the following subsections. The Microsoft Exchange Information Store service includes native HTTP access to data. Every object in the Microsoft Exchange Information Store service is URL–accessible with short, easily understood names. Because every object in the Microsoft Exchange information store is URL–accessible, users have several different ways to access objects in mailboxes or public folder hierarchies. The URL for an object is based on its location in the hierarchy and usually contains the subject of the item. When a user opens a message through Microsoft Outlook Web Access, the IIS request processor calls the Exchange HTTP ISAPI application that parses the information in the request and determines the following: The action to be performed Exchange HTTP ISAPI determines whether the user is opening a mailbox, opening a folder, reading e-mail, creating e-mail, and so forth. Browser information Exchange HTTP ISAPI determines the browser type, version, and rendering information. The server then determines whether the user has permissions to access the item. If the user has access rights, the object state (read, unread), object type (folder, message, and others), and item type (message, appointment, contact) are determined. The Exchange HTTP ISAPI extension then matches the object attribute to its corresponding form definition. If a form definition does not exist for a particular object attribute, the default form is used, (the one used to read an e-mail item). The Exchange HTTP ISAPI extension then parses the form and queries the information store to bind to the data. After receiving the data from the Microsoft Exchange Information Store service, the Exchange HTTP ISAPI extension renders the data in HTML or XML, based on the browser type and version, and the client displays the message. The following steps show this process in more detail: The browser sends a request for an e-mail message. The browser issues a GET request for a URL, such as. This URL does not have any query strings attached, which would be processed first, so the server returns a rendering of this resource based on its Message-Class and the default action configured for this class. Exchange ISAPI processes the request. When IIS receives the request, it is passed to the Exchange ISAPI component Davex.dll. This component parses the request for the following information and then sends the request to the Exchange store. The following table illustrates the passed item and its purpose. The Microsoft Exchange Information store service then determines the item type. The server verifies that the user has access to the item, determines the type of object (folder, message, task, and more), and returns the item type and its state (read, unread, and more) to the ISAPI application. Exchange ISAPI selects the form. The ISAPI program takes the object attributes and looks for a form definition in the forms registry that matches the object's type. If a matching form definition is not found, a default form stored in Wmtemplates.dll is used. If the browser language is not English, language specific strings are loaded from other template libraries in the \Exchsrvr\Res\Directory. The Microsoft Exchange Information Store service retrieves data for the form. 
After a form definition is found, the ISAPI program parses the form, calling the Microsoft Exchange Information Store service, to retrieve the data it references. Exchange ISAPI renders the form. When the data is returned from the Microsoft Exchange Information Store service, the form is rendered in the appropriate HTML and XML, and then goes to the client.

Davex.dll passed items and usage

Web Distributed Authoring and Versioning (WebDAV) is an extension to the HTTP 1.1 protocol (RFC 2518). HTTP and WebDAV enable rich collaborative interaction with the information store in Exchange 2003. Exchange 2003 HTTP support enables adding, modifying, copying, moving, and searching of folders and items and manipulation of attributes on any object in the information store. WebDAV provides improved performance and user experience over the basic Microsoft Outlook Web Access client by exploiting client-side data binding and rendering. For example, when you click the column header, you can sort the Inbox in several different ways, enabling views based on the sender's name, the message subject line, or received date. The browser caches the user interface elements, such as Internet Explorer HTML components, Microsoft JScript libraries, XSL, and Graphics Interchange Format (GIF) files. When the user changes the sort criteria, the browser can reformat the user interface elements locally and query the server for the view data.

The following process shows how clients access items in their Inbox using WebDAV:

- The client issues an HTTP GET request for the client's Inbox.
- IIS receives the request on port 80 (unless you change this configuration) and sends the request to Davex.dll for processing using ExIPC.
- The request is forwarded using ExIPC to the Exchange Store OLE DB driver, Exoledb.dll.
- Exoledb.dll renders the request in a format that the Exchange store can process, sends the request to the Exchange store, and then retrieves the client's Inbox properties from the Exchange store.
- After the client's Inbox properties are retrieved, Exchange 2003 routes the information back to the client using the same components that it used to process the client request.

Exchange Server 2003 implements a POP3 protocol stack that is compliant with RFC 1725, RFC 1734, and RFC 1939. Exchange 2003 supports the ten POP3 commands listed in the following table.

POP3 protocol command verbs

POP3 is considered a read-only protocol. It only includes commands for requesting, reading, and deleting messages. To send messages, POP3 clients use the SMTP protocol. The following steps illustrate the interprocess communication steps that ExIPC goes through when a client such as Microsoft Outlook accesses a mailbox on the Exchange server using the POP3 protocol.

IIS and Exchange Server shared memory architecture

The client logs on to the server and gives the command to check e-mail. A Request Mail Message 1 command is created on the IIS side. IIS allocates shared memory from the shared memory heap for the request. A corresponding handle is assigned to part of the shared memory. The handle, which functions as a placeholder or pointer to a referenced part of memory, is then placed in the circular memory queue (enqueued) in the direction of the Exchange information store. On the Exchange store side, the ExIPC.DLL for POP3 checks for incoming POP3 requests. The DLL receives the Request Mail Message and removes the handle from the circular memory queue. The Exchange store-side POP3 stub references the handle to the data in the shared memory heap.
If there are no failures or performance problems on the Exchange store side, the ExIPC process is complete and the data is successfully communicated from the IIS to the Exchange store. If a queue is full or the Exchange store has stopped, an error message is returned. A response (the mail message) is generated on the Exchange store side. The Exchange information store allocates the appropriate shared memory for the response from the shared memory heap. A corresponding handle is assigned to that shared memory. The handle is then enqueued in the direction of IIS. IIS removes the handle from the circular queue, references the shared memory, and binds them together. If there are no failures or performance problems on the IIS side, the response is complete and the data is successfully communicated from the Exchange store to IIS. Exchange 2003 is IMAP4 rev 1 compliant, in accordance with RFC 2060, RFC 2088 and RFC 1731. IMAP is comprised of over 30 commands, through which messages can be searched, fetched, and expunged from the Exchange server. IMAP is well suited for online and offline use. IMAP can connect to multiple mailboxes (if permissions are in place) and public folders and can be used for non e-mail purposes, such as news services. IMAP4 goes beyond the functionality available by using POP3. IMAP4 allows users to access any one of their folders, not just their Inbox. Because of this, it is more complex than POP3. However, it still adheres to the same standard of being a read-only protocol. Like POP3, IMAP4 also uses SMTP for sending e-mail. Exchange 2003 supports the IMAP4 commands listed in the following table. IMAP4 commands supported by Exchange Server 2003 Network News Transfer Protocol (NNTP) is a TCP/IP protocol based on text strings that are sent bi-directionally over seven-bit ASCII TCP channels. The IETF owns the NNTP protocol, which is defined in RFC 977. NNTP is commonly referred to as the Internet News Protocol, because it contains the rules for transporting news articles from one computer to another. NNTP is discussed here as a client/server protocol. It also encompasses server-to-server based news transfer. The NNTP service in Windows is designed to support a stand-alone newsgroup server that can easily create group discussions. When Exchange 2003 is installed, the NNTP service is enhanced with the ability to interface with other news servers through news feeds. The NNTP service can communicate with external NNTP servers to make popular USENET groups available to users. The standard storage location for the NNTP service is in one or more directories in the file system. With Exchange Server 2003, the NNTP service can also store newsgroups in the public folders of any available public folder tree. Internet Newsgroups folder is the default location for newsgroups. The NNTP service uses virtual directories to reference these locations. You can arrange multiple news servers in a master server/subordinate server layout. This enables clients to connect to a large group of servers and still maintain accurate views of newsgroup content. A bank or group of servers provides additional scalability for many clients and provides fault tolerance if a subordinate server goes offline. The Exchange Server 2003 implementation of NNTP provides the following additional features for this protocol: Content indexing provides search features for public folders. Full news feeds are accepted independent of back-end storage. 
MAPI or NNTP clients can read or post to newsgroups that are supported by the Exchange information store. Although Exchange integrates with IIS, as soon as Exchange 2003 is installed, protocol virtual servers are managed by Exchange System Manager, and not by Internet Services Manager. When you add, remove, or configure an item in Exchange System Manager, the configuration changes are first saved to the Microsoft Active Directory directory service and then replicated to the IIS metabase, on the appropriate Exchange 2003 server, by the Directory Service/Metabase Synchronization (DS2MB) function that runs in the System Attendant process. The IIS metabase is a hierarchical database that is used to store configuration values for IIS and Exchange 2003. The IIS metabase is both a storage mechanism and an application programming interface (API) set used to make changes to the configuration parameters. The function of the DS2MB process is to transfer configuration information from Active Directory to the Exchange server's local IIS metabase. For performance and scalability reasons, this configuration is stored in the local IIS metabase instead of in the registry. Paths in the metabase are named keys. Properties can be set at each key, and each property can have attributes that customize that property. All identifiers that are present in the directory service image of the subtree are required in the metabase, including identifiers such as KeyType. Additionally, the Relative Distinguished Name of the object in the directory is mapped directly to the key name in the metabase. DS2MB is a subprocess that is launched when System Attendant is started, and every 15 minutes thereafter. DS2MB copies all subtrees from Active Directory without changing the shape of the subtree. This is a one-way write from Active Directory to the metabase; the metabase never writes to Active Directory. It does not add or compute any attribute when copying. The operation of SMTP, POP3, IMAP4, and HTTP depends on the replication by DS2MB. Not all settings are synchronized from Active Directory, some are written to the metabase directly during the installation of Exchange. Upon instantiation, DS2MB registers with the configuration domain controller. The configuration domain controller notifies DS2MB, within 15 seconds, of any changes that are made to the Exchange configuration . As soon as the change is replicated to the configuration domain controller, it must be replicated to the metabase by DS2MB. High water marks are entries in the metabase that enable DS2MB to track changes that have been synchronized from Active Directory. High water mark entries are entered in the IIS metabase in the form of GUIDs. These GUIDs appear under the [/DS2MB/HighWaterMarks] node in the metabase, as illustrated below: Because DS2MB handles the entry and synchronization of high water marks in the metabase, there is usually no reason to adjust or manage this information. However, there are known scenarios in which the resolution includes deleting the high water mark entries from the metabase to reset them. A front-end server is a server running Exchange Server 2003 that does not host a database (except when also serving as an SMTP server), but instead forwards client requests to the back-end server for processing. The front-end server uses Lightweight Directory Access Protocol (LDAP) to query Active Directory to determine which back-end server hosts the user's mailbox. 
A back-end server is a server running Exchange Server 2003 that maintains at least one database. This architecture is available only for RPC over HTTP, HTTP/WebDAV, POP3, and IMAP4 clients. It is not intended for MAPI or NNTP clients. Clients that are supported connect to a front-end server that proxies client commands to the user's back-end server, which hosts an Exchange information store. This functional division between a front-end server and a back-end server provides several benefits. For example:

- Single Namespace. As multiple back-end servers are configured to handle additional mailboxes, it is best to identify all the servers with a single name. You can refer to a front-end server with a single name, and it can proxy user requests to the correct back-end server containing that user's mailbox. If multiple front-end servers are configured to manage a high number of requests, a single namespace for these servers is maintained by configuring the Domain Name System (DNS) with one name mapped to the IP address of the server. It is not important which front-end server the client connects to.

- Offload SSL. Encrypting and decrypting message traffic uses many CPU cycles. A front-end server can perform encryption work, giving the back-end server more cycles to manage the mailbox and public folder information stores.

- Public Folder Referrals for IMAP4 Clients. Many IMAP4 clients do not support referrals. With this architecture, the front-end server can retrieve public folders that exist on a server other than the user's e-mail server.

- Server Location. You can put the back-end servers that contain the databases behind a firewall for increased protection. You can configure the firewall to allow traffic only from the front-end server. Additionally, you can put a reverse proxy (such as ISA Server) in front of a front-end server and only publish the front-end server. It is not necessary to publish the back-end mailbox servers to the Internet. Therefore, you can configure your firewalls and reverse proxies to allow traffic only to the front-end server.

If a front-end server accepts SMTP mail from the Internet, you must start the Microsoft Exchange Information Store service and mount at least one mailbox store. In certain situations (most notably in the generation of non-delivery reports), the SMTP service requires a mailbox store to perform a conversion. If the mailbox store is not mounted, messages that must be converted are stuck in the local delivery queue. For security reasons, make sure that user mailboxes are not homed on the mailbox store of a front-end server.

If there are servers that are running Exchange Server 5.5 in the same site (routing group), you must configure the Microsoft Exchange MTA Stacks service to run on the front-end server. In this configuration, the MTAs can bind and transfer mail by using RPCs. If X.400 connectors or Exchange Development Kit (EDK) gateway connectors are homed on the front-end server, the MTA service must also run on the front-end server.

If you delete all public folder and mailbox stores, you cannot change the configuration by using Internet Services Manager. If you must change the configuration by using Internet Services Manager, for example when you change an SSL encryption configuration, make sure that you complete the procedures described in this guide.
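To relate this back to the WebDAV access described earlier, the following fragment sketches how a client application might issue a WebDAV PROPFIND request against a mailbox folder URL. It is an illustration only: the server name, mailbox path, and credentials are hypothetical, and what actually works in a given deployment depends on the virtual directory, authentication, and SSL settings configured on the front-end server.

using System;
using System.IO;
using System.Net;
using System.Text;

class WebDavInboxSample
{
    static void Main()
    {
        // Hypothetical Exchange virtual directory and mailbox folder URL.
        string inboxUrl = "http://mail.example.com/exchange/jsmith/Inbox/";

        // Minimal PROPFIND body asking only for the display name of each item.
        string body =
            "<?xml version=\"1.0\"?>" +
            "<a:propfind xmlns:a=\"DAV:\"><a:prop><a:displayname/></a:prop></a:propfind>";

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(inboxUrl);
        request.Method = "PROPFIND";                       // WebDAV verb
        request.Headers.Add("Depth", "1");                 // the folder plus its immediate children
        request.ContentType = "text/xml";
        request.Credentials = new NetworkCredential("jsmith", "password", "EXAMPLE");

        byte[] bytes = Encoding.UTF8.GetBytes(body);
        request.ContentLength = bytes.Length;
        using (Stream requestStream = request.GetRequestStream())
        {
            requestStream.Write(bytes, 0, bytes.Length);
        }

        // The response is an XML multistatus document describing the items in the folder.
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}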
https://technet.microsoft.com/en-us/library/bb125093(v=exchg.65).aspx
CC-MAIN-2017-43
en
refinedweb
The downside of Arduino… First of all, I really like the Arduino. There are lots of reasons: great community, relatively inexpensive, wide hardware availability and variety, and often a good “impedance” match to projects. But there are a few design choices (both hardware and software) that can be a nuisance, especially as you try to push the limits of what can be done on such inexpensive and low power hardware. First of all, there is the basic software philosophy. I love the fact that it is a single download: this simplifies installation, and whether you are an expert or a novice doesn’t matter, that’s simply more convenient. But using the program editor provided is simply not that desirable for experts who might be more used to Eclipse, or to an old fashioned kind of guy like myself for whom vi and Makefiles is more reasonable. With a little work you can find out where avr-gcc, avr-libc and avrdude are installed and add them to your search path, but you still have some work to build a Makefile which can compile your sketch from the command line. That’s a little annoying. But more annoying are the design choices of the libraries themselves. High among them are the heavy use of delay and the relative invisibility of interrupts. This may simplify short programs, but it provides no help to more sophisticated programs, and may in fact hinder them. Consider my experiments of last night (which fueled the rant of this morning). I merely wanted to implement a serial terminal using the TVout library and some serial communications library. The idea would be “read from serial, print to TVout”, voila, should be simple. TVout generates NTSC video on the fly, and it basically works great. To do so, it uses regular timer interrupts. and if you dig into the library itself, you might find out that it uses Timer1. At regular interval, the timer 1 counter overflows, and the library knows that it’s time to bitbang out the video using some carefully constructed code. It also appears to use Timer2 to generate audio tones. Hmmm. The AVR chip that underlies the normal Arduino has only three timers… is that going to be a problem? How can I tell if another library also uses those timers? You won’t find it in the documentation to any library. Arduino programming is supposed to be easy, and remove the need to understand how the underlying hardware works. Except that when you try to use two libraries together, and they happen to both use the same underlying timer resources, it doesn’t work right. There are no compile issues, it’s just one or both of the modules fail. Which it did of course last night. You’d think that a loop like: void loop() { TV.print((char)Serial.read()) ; } might have a chance of working, but it doesn’t. It drops and mangles characters horribly. While you might be able to poll some switches, apparently typing is something that is simply beyond the realm of expectation. Some of you are probably going to leap to TVout’s defense, claiming that “You’re not doing it right! There is an example sketch which shows you the right way to do Serial and TVout!” Let’s have a look at that little sketch, shall we? #include <TVout.h> #include <pollserial.h> #include <fontALL.h> TVout TV; pollserial pserial; void setup() { TV.begin(_NTSC,184,72); TV.select_font(font6x8); TV.println("Serial Terminal"); TV.println("-- Version 0.1 --"); TV.set_hbi_hook(pserial.begin(57600)); } void loop() { if (pserial.available()) { TV.print((char)pserial.read()); } } Okay, we have some new library, which we never heard of before. 
It’s apparently part of the TVout download. Documentation? No. It calls a function I haven’t seen before, “set_hbi_hook”. Is that documented? No. I am pretty sure it’s a hook which will get called at the end (or the beginning?) of a horizontal blanking interrupt (this isn’t my first rodeo) but what’s really going on here. The pserial.begin call must return a function… time to crack open the source code. And, it’s about what you expect. There is a hook function that gets called on every line, right before the line is rendered. My guess is that it’s bad if the timing of the hook function is in anyway indeterminate, because then the rows will start at odd intervals. But each render routine starts with a line which waits… for some length…. maybe there is some amount of time (undocumented, different for PAL/NTSC, maybe just the front part of each scanline, read more code) that if you don’t exceed, you’ll be fine. What does pollserial do? Well, it snoops at the serial data registers (polling!) to see if a new character has arrived. It then puts it in a ringbuffer, so that it can be made available to the read call later. Okay, I understand. But did I mention the reason I didn’t use this code in the first place? It’s that pollserial didn’t compile on my Arduino 1.0 setup (most libraries that aren’t part of the core don’t out of the box yet, in my experience). I could probably figure that out (has something to do with inheriting from the Print class and a prototype mismatch) but in my real, ultimate application, I wanted to read from a PS/2 keyboard. It will of course have the same (but differing in details) issues, and I’ll have to tweak their driver code to make it work with TVout too. Sigh. Ultimately, I guess I disagree with one part of the Arduino philosophy: that programming can ever really be simple. Writing a program to blink an led, or read a switch isn’t hard, no matter what language or processor you use. Simple programs are simple, but the virtue of computers is that they can do things which are complex. If your software environment doesn’t give you sufficient help in organizing and doing complex actions, then you are missing out a great deal of the purpose of computers. Addendum: While trying to fix the compile errors in pollserial, I found this fragment: int pollserial::read() { if (rxbuffer.head == rxbuffer.tail) return -1; else { uint8_t c = rxbuffer.buffer[rxbuffer.tail]; //tail = (tail + 1) & 63; if (rxbuffer.tail == BUFFER_SIZE) rxbuffer.tail = 0; else rxbuffer.tail++; return c; } } Can you spot the problem? Hints: BUFFER_SIZE is defined to 64, and rxbuffer.buffer is malloced to be BUFFER_SIZE bytes long. Comment from Mark VandeWettering Time 1/14/2012 at 1:17 pm Sure, I could solve this problem by using bigger, faster more expensive hardware. But it is inspiring or virtuous to do with more, what can be done with less? Part of the reason that Arduino and microcontrollers interest me is that they allow a kind of crafstmanship spawned from minimalism that other kinds of programming don’t display. At issue isn’t whether the underlying hardware can do what I am asking (it clearly can, since projects like the Tellymate do even more than I was asking) but whether the Arduino environment helps one get at this underlying power. It mostly does not, and in some cases actually hinders, by saddling the programmer with abstractions that do not help him in meeting his task. People do indeed blame their tools, but that’s not to say that bad tools don’t exist. 
You can tell the difference between a $5 soldering iron and a $200 one. Try to use the first, and it's a recipe for frustration. Tools matter.

Comment from Chris Johnson Time 1/14/2012 at 2:41 pm
The last paragraph of your post sums up my attitude to Arduino perfectly. Have you tried programming the Atmel chip in assembly? While I have only used PICs (rather than Atmels), I find assembly a very good way to get the 'craftsmanship spawned from minimalism' that you refer to. While one wouldn't want to use assembly for everything, there's a certain simplicity in dealing with the chip directly. I find the additional complexity of using assembly much less frustrating than the complexity of having to work around a high-level interface that doesn't do what I want it to…

Comment from Kenneth Finnegan Time 1/14/2012 at 4:11 pm
I'm kind of embarrassed how long it took me to find the problem in the addendum. I stopped using Arduino for anything big and dropped down to Vim, makefiles, and avr-gcc a long time ago. It's kind of a drag when finding libs takes more work/have to write them yourself, but once you build your toolbox, I find it much nicer than dealing with the Arduino environment.

Comment from Brent Jones Time 1/14/2012 at 4:50 pm
I have to agree Mark. There are some good things about the Arduino but they can also be a pain to work with. I built a Morse Code simulator with an Arduino and it was fairly simple just turn on/off the Morse code sounder at the right time. The Arduino is good for this job. But I a have built a Teleprinter driver which I used Avr Studio because to do it in Arduino would have mean't mixing Arduino Code with explicit C code as well. For example the Teleprinter uses 5 BIT Baudot code. The Arduino Serial library does not appear , as standard, to let you set the bit lengths. Just having one cable to program the arduino is handy. It can be used as a good proof of concept tool. I tried using a Duemilanove for the Teleprinter driver but found a problem when I wanted to use the serial port to drive the Teleprinter. If you had the serial out pin connected to what you wanted to drive eg transistor you couldn't program the board. I had to keep disconnecting my transistor. I could understand if my device was sending data but would have thought it should have had no effect when receiving except providing some wrong characters on the Teleprinter. The boot loader can be handy but for somethings the lag between switch on and your program starting can be too long. Is it documented anywhere what state the Ports are in while the bootloader is checking for serial input? The main thing going for the Arduino is it has got a lot more people using microcontrollers now then before it exsited.

Pingback from » I love my Arduino, but… Matt Quadros . com Time 1/18/2012 at 2:49 pm
[…] a decent article on the limitations of the Arduino. The argument is something I completely agree with – by […]

Pingback from Asciimation » Blog Archives » Taipan! on the Arduino. Part 2. Time 7/31/2012 at 3:51 am
[…] documented! I discovered it through Google searches and finding comments in other blogs, mainly this wonderful rant, which beautifully explains one of the annoying things about the Arduino – so much stuff is […]

Comment from mike Time 3/18/2016 at 12:48 pm
No no noduino..
Librarys makes C look like a very easy language to novices. C was preferred back in the days when C compiled into faster machinecode. Basic makes just as fast and small code as C today, and it’s much more structured. Learn the real way if you want to make a project that really is going to end up on a circuitboard! Get a AVR, a breadboard, usb-isp cable and download Bascom-avr.. C takes more time to master and nowdays doesn’t have any advatages over the much easier Basic language. Comment from Stephen Time 1/14/2012 at 10:56 am Perhaps if you are “pushing the limits” of hardware or software you should consider looking for more appropriate tools to continue your project unless you are trying to see how far you can go and then you should not blame your tools.
http://brainwagon.org/2012/01/14/the-downside-of-arduino/
CC-MAIN-2017-43
en
refinedweb
Details - Type: Bug - Status: Closed - Priority: Major - Resolution: Fixed - Affects Version/s: JiBX 1.2.1 - Fix Version/s: JiBX 1.2.2 - Component/s: None - Labels:None - Environment:JiBX 1.2.1, JDK 6 - Number of attachments : Description BindGen does not support generating binding definitions for class that contains java.lang.Character instance variables. public class LZPerson { private int id; private String firstName; private String lastName; private java.lang.Character netPassport; } Running BindGen causes: Exception in thread "main" java.lang.IllegalStateException: No way to handle type java.util.Character. The possible reason is the incorrect mapping from primitive types to XSD types and the incorrect definition of primitive types. They assume the class java.lang.Char, however the correct primitive type wrapper is java.lang.Character. The affected classes are: org.jibx.binding.BindingGenerator (s_objectPrimitiveSet) org.jibx.binding.SchemaGenerator (s_objectTypeMap) org.jibx.util.Types (s_objectTypeMap) Corrected the tables.
http://jira.codehaus.org/browse/JIBX-291
CC-MAIN-2014-52
en
refinedweb
My activity class: ...useless stuff... import android.opengl.GLSurfaceView; public class MyApp extends Activity { private GLSurfaceView mSurfaceView; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); requestWindowFeature(Window.FEATURE_NO_TITLE); //fullscreen getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN, WindowManager.LayoutParams.FLAG_FULLSCREEN); glSurfaceView = new GLSurfaceView(this); glSurfaceView.setRenderer(new MyRenderer()); setContentView(glSurfaceView); } } My Renderer class ...unimportant stuff... import android.opengl.GLSurfaceView.Renderer; public class MyRenderer implements Renderer { @Override public void onDrawFrame(GL10 gl) { ...iterate over draw objects and call .draw() on them... } } ...more unimportant stuff... 1) Is OpenGL ES double buffered by default? Right now, I've created a Renderer class that extends GLSurfaceView.Renderer. From that, I've got an override for onDrawFrame...like you can see above. Is this already double buffered? I've done a little bit of OpenGL coding for PC and I remember having to specifically tell it to swap buffers... 2) Given my current setup, is the renderer already on its own thread? Is that something handled by OpenGL ES (or the GLViewPort class)? 3) Right now I'm doing everything inside of the onDrawScene(GL10 gl) function. Is there a better way of creating a game loop? Seems like (assuming rendering is currently on its own thread), onDrawScene should just iterate of my drawable objects and draw them, but I should probably have a loop in another thread somewhere polling input, updating drawable object positions, etc. Any thoughts? Thanks, in advance, for the help! [Edit: spelling error] Edited by Holland, 30 April 2012 - 05:18 PM.
http://www.gamedev.net/topic/624267-android-opengl-esdouble-buffered-threaded-gameloop/?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024
CC-MAIN-2014-52
en
refinedweb
- data Server - serverThreadId :: Server -> ThreadId - serverMetricStore :: Server -> Store - forkServer :: ByteString -> Int -> IO Server - forkServerWith :: Store -> ByteString -> Int -> IO Server - getCounter :: Text -> Server -> IO Counter - getGauge :: Text -> Server -> IO Gauge - getLabel :: Text -> Server -> IO Label - getDistribution :: Text -> Server -> IO Distribution Required configuration To make full use out of. API is versioned to allow for API evolution. This document is for version 1. To ensure you're using this version, append ?v=1 to your resource URLs. Omitting the version number will give you the latest version of the API. The following resources (i.e. URLs) are available: - / - JSON object containing all metrics. Metrics are stored as nested objects, with one new object layer per "." in the metric name (see example below.) Content types: "text/html" (default), "application/json" - /<namespace>/<metric> - JSON object for a single metric. The metric name is created by converting all "/" to ".". Example: "/foo/bar" corresponds to the metric "foo.bar". Content types: "application/json" Each metric is returned as an object containing a type field. Available types are: - "c" - Counter - "g" - Gauge - "l" - Label - "d" - Distribution In addition to the type field, there are metric specific fields: - Counters, gauges, and labels: the valfield contains the actual value (i.e. an integer or a string). - Distributions: the mean, variance, count, sum, min, and maxfields contain their statistical equivalents. Example of a response containing the metrics "myapp.visitors" and "myapp.args": { "myapp": { "visitors": { "val": 10, "type": "c" }, "args": { "val": "--a-flag", "type": "l" } } } The monitoring server A handle that can be used to control the monitoring server. Created by forkServer. serverThreadId :: Server -> ThreadIdSource The thread ID of the server. You can kill the server by killing this thread (i.e. by throwing it an asynchronous exception.) serverMetricStore :: Server -> StoreSource The metric store associated with the server. If you want to add metric to the default store created by forkServer you need to use this function to retrieve it. Arguments Like forkServerWith, but creates a default metric store with some predefined metrics. The predefined metrics are those given in registerGcMetrics." and "text/html". Registers the following counter, used by the UI: ekg.server_time_ms - The server time when the sample was taken, in milliseconds. Note that this function, unlike forkServer, doesn't register any other predefined metrics. This allows other libraries to create and provide a metric store for use with this library. If the metric store isn't created by you and the creator doesn't register the metrics registered by forkServer, you might want to register them yourself. Defining metrics The monitoring server can store and serve integer-valued counters and gauges, string-valued labels, and statistical distributions. A counter is a monotonically increasing value (e.g. TCP connections established since program start.) A gauge is a variable value (e.g. the current number of concurrent connections.) A label is a free-form string value (e.g. exporting the command line arguments or host name.) A distribution is a statistic summary of events (e.g. processing time per request.) Each metric is associated with a name, which is used when it is displayed in the UI or returned in a JSON object. Metrics share the same namespace so it's not possible to create e.g. 
a counter and a gauge with the same. Attempting to do so will result in an error.. Similar for the other metric types. It's also possible to register metrics directly using the System.Metrics module in the ekg-core package. This gives you a bit more control over how metric values are retrieved. Arguments Return a new, zero-initialized counter associated with the given name and server. Multiple calls to getCounter with the same arguments will result in an error. Arguments Arguments Arguments Return a new distribution associated with the given name and server. Multiple calls to getDistribution with the same arguments will result in an error.
http://hackage.haskell.org/package/ekg-0.4.0.2/docs/System-Remote-Monitoring.html
CC-MAIN-2014-52
en
refinedweb
13 July 2011 17:18 [Source: ICIS news] LONDON (ICIS)--?xml:namespace> The activists destroyed fields for genetically modified potatoes near Stefan Marcinowski, head of VCI’s biotechnology portfolio, said the activists had wiped out many years of research work. They also threatened security personnel at the sites. Marcinowski, who is also a member of BASF’s executive board, said At the same time, Marcinowski called on politicians in He went on to add that biomass is becoming an important platform for chemical and biotechnological processes to produce chemicals, energy, animal feed and food products, as well as pharmaceuticals. Rising demand can no longer be met through conventional technologies alone, he concluded. “We cannot do without modern plant technology and cultivation,” Marcinowski said. “Without
http://www.icis.com/Articles/2011/07/13/9477274/german-chem-industry-condemns-attacks-on-biotech-plant-facilities.html
CC-MAIN-2014-52
en
refinedweb
28 November 2012 17:07 [Source: ICIS news] CAMPINAS, Brazil (ICIS)--Brazilian October chemical production fell by 2% month on month while domestic sales fell by 2.25% as a result of power outage, a trade group said on Wednesday. The outage affected ?xml:namespace> Incidents such as power outages increase the cost of the sector, according to Fatima Giovanna, Abiquim's economy and statistics technical director. "For the chemical industry, which operates most of its plants in a continuous process, the uncertainty regarding the safe supply of electricity is a very worrying factor now, because such stoppages have been resulting in a high cost for companies, especially when the furnaces are affected [and there's] a delay to get operations back to normal",
http://www.icis.com/Articles/2012/11/28/9619193/power-outages-cut-oct-sales-output-of-brazilian-chemicals.html
CC-MAIN-2014-52
en
refinedweb
Design and Implementation Guidelines for Web Clients November 2003 Applies to: Microsoft .NET Framework ASP.NET Summary: This chapter describes how to increase performance and responsiveness of the code in the presentation layer by using multithreading and asynchronous programming. Contents Using Asynchronous Operations In This Chapter This chapter describes how to use two closely related mechanisms to enable you to design scaleable and responsive presentation layers for ASP.NET Web applications. The two mechanisms are: - Multithreading - Asynchronous programming Performance and responsiveness are important factors in the success of your application. Users quickly tire of using even the most functional application if it is unresponsive or regularly appears to freeze when the user initiates an action. Even though it may be a back-end process or external service causing these problems, it is the user interface where the problems become evident. Multithreading and asynchronous programming techniques enable you to overcome these difficulties. The Microsoft .NET Framework class library makes these mechanisms easily accessible, but they are still inherently complex, and you must design your application with a full understanding of the benefits and consequences that these mechanisms bring. In particular, you must keep in mind the following points as you decide whether to use one of these threading techniques in your application: - More threads does not necessarily mean a faster application. In fact, the use of too many threads has an adverse effect on the performance of your application. For more information, see "Using the Thread Pool" later in this chapter. - Each time you create a thread, the system consumes memory to hold context information for the thread. Therefore, the number of threads that you can create is limited by the amount of memory available. - Implementation of threading techniques without sufficient design is likely to lead to overly complex code that is difficult to scale and extend. - You must be aware of what could happen when you destroy threads in your application, and make sure you handle these possible outcomes accordingly. - Threading-related bugs are generally intermittent and difficult to isolate, debug, and resolve. The following sections describe multithreading and asynchronous programming from the perspective of presentation layer design in ASP.NET Web applications. For information about how to use these mechanisms in Windows Forms-based applications, see "Multithreading and Asynchronous Programming in Windows Forms-Based Applications" in the appendix of this guide. Multithreading There are many situations where using additional threads to execute tasks allows you to provide your users with better performance and higher responsiveness in your application, including: - When there is background processing to perform, such as waiting for authorization from a credit-card company in an online retailing Web application - When you have a one-way operation, such as invoking a Web service to pass data entered by the user to a back-end system - When you have discrete work units that can be processed independently, such as calling several SQL stored procedures simultaneously to gather information that you require to build a Web response page Used appropriately, additional threads allow you to avoid your user interface from becoming unresponsive during long-running and computationally intensive tasks. 
Depending on the nature of your application, the use of additional threads can enable the user to continue with other tasks while an existing operation continues in the background. For example, an online retailing application can display a "Credit Card Authorization In Progress" page in the client's Web browser while a background thread at the Web server performs the authorization task. When the authorization task is complete, the background thread can return an appropriate "Success" or "Failure" page to the client. For an example of how to implement this scenario, see "How to: Execute a Long-Running Task in a Web Application" in Appendix B of this guide. Note Do not display visual indications of how long it will take for a long-running task to complete. Inaccurate time estimations confuse and annoy users. If you do not know the scope of an operation, distract the user by displaying some other kind of activity indictor, such as an animated GIF image, promotional advertisement, or similar page. Unfortunately, there is a run-time overhead associated with creating and destroying threads. In a large application that creates new threads frequently, this overhead can affect the overall application performance. Additionally, having too many threads running at the same time can drastically decrease the performance of a whole system as Windows tries to give each thread an opportunity to execute. Using the Thread Pool A common solution to the cost of excessive thread creation is to create a reusable pool of threads. When an application requires a new thread, instead of creating one, the application takes one from the thread pool. As the thread completes its task, instead of terminating, the thread returns to the pool until the next time the application requires another thread. Thread pools are a common requirement in the development of scaleable, high-performance applications. Because optimized thread pools are notoriously difficult to implement correctly, the .NET Framework provides a standard implementation in the System.Threading.ThreadPool class. The thread pool is created the first time you create an instance of the System.Threading.ThreadPool class. The runtime creates a single thread pool for each run-time process (multiple application domains can run in the same runtime process.) By default, this pool contains a maximum of 25 worker threads and 25 asynchronous I/O threads per processor (these sizes are set by the application hosting the common language runtime). Because the maximum number of threads in the pool is constrained, all the threads may be busy at some point. To overcome this problem, the thread pool provides a queue for tasks awaiting execution. As a thread finishes a task and returns to the pool, the pool takes the next work item from the queue and assigns it to the thread for execution. Benefits of Using the Thread Pool The runtime-managed thread pool is the easiest and most reliable approach to implement multithreaded applications. The thread pool offers the following benefits: - You do not have to worry about thread creation, scheduling, management, and termination. - Because the thread pool size is constrained by the runtime, the chance of too many threads being created and causing performance problems is avoided. - The thread pool code is well tested and is less likely to contain bugs than a new custom thread pool implementation. - You have to write less code, because the thread start and stop routines are managed internally by the .NET Framework. 
The following procedure describes how to use the thread pool to perform a background task in a separate thread. To use the thread pool to perform a background task - Write a method that has the same signature as the WaitCallback delegate. This delegate is located in the System.Threading namespace, and is defined as follows. [Serializable] public delegate void WaitCallback(object state); - Create a WaitCallback delegate instance, specifying your method as the callback. - Pass the delegate instance into the ThreadPool.QueueUserWorkItem method to add your task to the thread pool queue. The thread pool allocates a thread for your method from the thread pool and calls your method on that thread. In the following code, the AuthorizePayment method is executed in a thread allocated from the thread pool. using System.Threading; public class CreditCardAuthorizationManager { private void AuthorizePayment(object o) { // Do work here ... } public void BeginAuthorizePayment(int amount) { ThreadPool.QueueUserWorkItem(new WaitCallback(AuthorizePayment)); } } For a more detailed discussion of the thread pool, see "Programming the Thread Pool in the .NET Framework" on MSDN (). Limitations of Using the Thread Pool Unfortunately, the thread pool suffers limitations resulting from its shared nature that may prevent its use in some situations. In particular, these limitations are: - The .NET Framework also uses the thread pool for asynchronous processing, placing additional demands on the limited number of threads available. - Even though application domains provide robust application isolation boundaries, code in one application domain can affect code in other application domains in the same process if it consumes all the threads in the thread pool. - When you submit a work item to the thread pool, you do not know when a thread becomes available to process it. If the application makes particularly heavy use of the thread pool, it may be some time before the work item executes. - You have no control over the state and priority of a thread pool thread. - The thread pool is unsuitable for processing simultaneous sequential operations, such as two different execution pipelines where each pipeline must proceed from step to step in a deterministic fashion. - The thread pool is unsuitable when you need a stable identity associated with the thread, for example if you want to use a dedicated thread that you can discover by name, suspend, or abort. In situations where use of the thread pool is inappropriate, you can create new threads manually. Manual thread creation is significantly more complex than using the thread pool, and it requires you to have a deeper understanding of the thread lifecycle and thread management. A discussion of manual thread creation and management is beyond the scope of this guide. For more information, see "Threading" in the ".NET Framework Developer's Guide" on MSDN (). Synchronizing Threads If you use multiple threads in your applications, you must address the issue of thread synchronization. Consider the situation where you have one thread iterating over the contents of a hash table and another thread that tries to add or delete hash table items. The thread that is performing the iteration is having the hash table changed without its knowledge; this causes the iteration to fail. The ideal solution to this problem is to avoid shared data. In some situations, you can structure your application so that threads do not share data with other threads. 
This is generally possible only when you use threads to execute simple one-way tasks that do not have to interact or share results with the main application. The thread pool described earlier in this chapter is particularly suited to this model of execution. Synchronizing Threads by Using a Monitor It is not always feasible to isolate all the data a thread requires. To get thread synchronization, you can use a Monitor object to serialize access to shared resources by multiple threads. In the hash table example cited earlier, the iterating thread would obtain a lock on the Hashtable object using the Monitor.Enter method, signaling to other threads that it requires exclusive access to the Hashtable. Any other thread that tries to obtain a lock on the Hashtable waits until the first thread releases the lock using the Monitor.Exit method. The use of Monitor objects is common, and both Visual C# and Visual Basic .NET include language level support for obtaining and releasing locks: - In C#, the lock statement provides the mechanism through which you obtain the lock on an object as shown in the following example. lock (myHashtable) { // Exclusive access to myHashtable here... } - In Visual Basic .NET, the SyncLock and End SyncLock statements provide the mechanism through which you obtain the lock on an object as shown in the following example. SyncLock (myHashtable) ' Exclusive access to myHashtable here... End SyncLock When entering the lock (or SyncLock) block, the static (Shared in Visual Basic .NET) System.Monitor.Enter method is called on the specified expression. This method blocks until the thread of execution has an exclusive lock on the object returned by the expression. The lock (or SyncLock) block is implicitly contained by a try statement whose finally block calls the static (or Shared) System.Monitor.Exit method on the expression. This ensures the lock is freed even when an exception is thrown. As a result, it is invalid to branch into a lock (or SyncLock) block from outside of the block. For more information about the Monitor class, see "Monitor Class" in the ".NET Framework Class Library" on MSDN (). Using Alternative Thread Synchronization Mechanisms The .NET Framework provides several other mechanisms that enable you to synchronize the execution of threads. These mechanisms are all exposed through classes in the System.Threading namespace. The mechanisms relevant to the presentation layer are listed in Table 6.1. Table 6.1: Thread Synchronization Mechanisms With such a rich selection of synchronization mechanisms available to you, you must plan your thread synchronization design carefully and consider the following points: - It is a good idea for threads to hold locks for the shortest time possible. If threads hold locks for long periods of time, the resulting thread contention can become a major bottleneck, negating the benefits of using multiple threads in the first place. - Be careful about introducing deadlocks caused by threads waiting for locks held by other threads. For example, if one thread holds a lock on object A and waits for a lock on object B, while another thread holds a lock on object B, but waits to lock object A, both threads end up waiting forever. - If for some reason an object is never unlocked, all threads waiting for the lock end up waiting forever. The lock (C#) and SyncLock (Visual Basic .NET) statements make sure that a lock is always released even if an exception occurs. 
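In other words, a lock block like the one shown earlier is roughly equivalent to writing the Monitor calls by hand; the following sketch (reusing the same myHashtable variable) is illustrative only:

// Roughly what the C# lock block expands to (illustrative sketch)
Monitor.Enter(myHashtable);
try
{
    // Exclusive access to myHashtable here...
}
finally
{
    Monitor.Exit(myHashtable);
}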
If you use Monitor.Enter manually, you must make sure that your code calls Monitor.Exit. Using multiple threads can significantly enhance the performance of your presentation layer components, but you must make sure you pay close attention to thread synchronization issues to prevent locking problems. Troubleshooting The difficulties in identifying and resolving problems in multi-threaded applications occur because the CPU's scheduling of threads is non-deterministic; you cannot reproduce the exact same code execution sequence across multiple test runs. This means that a problem may occur one time you run the application, but it may not occur another time you run it. To make things worse, the steps you typically take to debug an application—such as using breakpoints, stepping through code, and logging—change the threading behavior of a multithreaded program and frequently mask thread-related problems. To resolve thread-related problems, you typically have to set up long-running test cycles that log sufficient debug information to allow you to understand the problem when it occurs. Note For more in-depth information about debugging, see "Production Debugging for .NET Framework Applications" on MSDN (). Using Asynchronous Operations Some operations take a long time to complete. These operations generally fall into two categories: - I/O bound operations such as calling SQL Server, calling a Web service, or calling a remote object using .NET Framework remoting - CPU-bound operations such as sorting collections, performing complex mathematical calculations, or converting large amounts of data The use of additional threads to execute long running tasks is a common way to maintain responsiveness in your application while the operation executes. Because threads are used so frequently to overcome the problem of long running processes, the .NET Framework provides a standardized mechanism for the invocation of asynchronous operations that saves you from working directly with threads. Typically, when you invoke a method, your application blocks until the method is complete; this is known as synchronous invocation. When you invoke a method asynchronously, control returns immediately to your application; your application continues to execute while the asynchronous operation executes independently. Your application either monitors the asynchronous operation or receives notification by way of a callback when the operation is complete; this is when your application can obtain and process the results. The fact that your application does not block while the asynchronous operation executes means the application can perform other processing. The approach you use to invoke the asynchronous operation (discussed in the next section) determines how much scope you have for processing other tasks while waiting for the operation to complete. Using the .NET Framework Asynchronous Execution Pattern The .NET Framework allows you to execute any method asynchronously using the asynchronous execution pattern. This pattern involves the use of a delegate and three methods named Invoke, BeginInvoke, and EndInvoke. The following example declares a delegate named AuthorizeDelegate. The delegate specifies the signature for methods that perform credit card authorization. public delegate int AuthorizeDelegate(string creditcardNumber, DateTime expiryDate, double amount); When you compile this code, the compiler generates Invoke, BeginInvoke, and EndInvoke methods for the delegate. 
Figure 6.1 shows how these methods appear in the IL Disassembler. The equivalent C# signatures for these methods are as follows. // Signature of compiler-generated BeginInvoke method public IAsyncResult BeginInvoke(string creditcardNumber, DateTime expiryDate, double amount, AsyncCallback callback, object asyncState); // Signature of compiler-generated EndInvoke method public int EndInvoke(IAsyncResult ar); // Signature of compiler-generated Invoke method public int Invoke(string creditcardNumber, DateTime expiryDate, double amount); The following sections describe the BeginInvoke, EndInvoke, and Invoke methods, and clarify their role in the asynchronous execution pattern. For full details on how to use the asynchronous execution pattern, see "Including Asynchronous Calls" in the ".NET Framework Developer's Guide" on MSDN (). Performing Synchronous Execution with the Invoke Method The Invoke method synchronously executes the method referenced by the delegate instance. If you call a method by using Invoke, your code blocks until the method returns. Using Invoke is similar to calling the referenced method directly, but there is one significant difference. The delegate simulates synchronous execution by calling BeginInvoke and EndInvoke internally. Therefore your method is executed in the context of a different thread to the calling code, even though the method appears to execute synchronously. For more information, see the description of BeginInvoke in the next section. Initiating Asynchronous Operations with the BeginInvoke Method The BeginInvoke method initiates the asynchronous execution of the method referenced by the delegate instance. Control returns to the calling code immediately, and the method referenced by the delegate executes independently in the context of a thread from the runtime's thread pool. The "Multithreading" section earlier in this chapter describes the thread pool in detail; however, it is worth highlighting the consequences of using a separate thread, and in particular one drawn from the thread pool: - The runtime manages the thread pool. You have no control over the scheduling of the thread, nor can you change the thread's priority. - The runtime's thread pool contains 25 threads per processor. If you invoke asynchronous operations too liberally, you can easily exhaust the pool causing the runtime to queue excess asynchronous operations until a thread becomes available. - The asynchronous method runs in the context of a different thread to the calling code. This causes problems when asynchronous operations try to update Windows Forms components. The signature of the BeginInvoke method includes the same arguments as those specified by the delegate signature. It also includes two additional arguments to support asynchronous completion: - callback argumentSpecifies an AsyncCallback delegate instance. If you specify a non-null value for this argument, the runtime calls the specified callback method when the asynchronous method completes. If this argument is a null reference, you must monitor the asynchronous operation to determine when it is complete. For more information, see "Managing Asynchronous Completion with the EndInvoke Method" later in this chapter. - asyncState argumentTakes a reference to any object. The asynchronous method does not use this object, but it is available to your code when the method completes; this allows you to associate useful state information with an asynchronous operation. 
For example, this object allows you to map results against initiated operations in situations where you initiate many asynchronous operations that use a common callback method to perform completion.

The IAsyncResult object returned by BeginInvoke provides a reference to the asynchronous operation. You can use the IAsyncResult object for the following purposes:
- Monitor the status of an asynchronous operation
- Block execution of the current thread until an asynchronous operation completes
- Obtain the results of an asynchronous operation using the EndInvoke method

The following procedure shows how to invoke a method asynchronously by using the BeginInvoke method.

To invoke a method asynchronously by using BeginInvoke
- Declare a delegate with a signature to match the method you want to execute.
- Create a delegate instance containing a reference to the method you want to execute.
- Execute the method asynchronously by calling the BeginInvoke method on the delegate instance you just created.

The following code fragment demonstrates the implementation of these steps. The example also shows how to register a callback method; this method is called automatically when the asynchronous method completes. For more information about defining callback methods and other possible techniques for dealing with asynchronous method completion, see "Managing Asynchronous Completion with the EndInvoke Method" later in this chapter.

public class CreditCardAuthorizationManager
{
    // Delegate, defines signature of method(s) you want to execute asynchronously
    public delegate int AuthorizeDelegate(string creditcardNumber, DateTime expiryDate, double amount);

    // Method to initiate the asynchronous operation
    public void StartAuthorize()
    {
        AuthorizeDelegate ad = new AuthorizeDelegate(AuthorizePayment);
        IAsyncResult ar = ad.BeginInvoke(creditcardNumber, expiryDate, amount,
                                         new AsyncCallback(AuthorizationComplete), null);
    }

    // Method to perform a time-consuming operation (this method executes
    // asynchronously on a thread from the thread pool)
    private int AuthorizePayment(string creditcardNumber, DateTime expiryDate, double amount)
    {
        int authorizationCode = 0;
        // Open connection to Credit Card Authorization Service ...
        // Authorize Credit Card (assigning the result to authorizationCode) ...
        // Close connection to Credit Card Authorization Service ...
        return authorizationCode;
    }

    // Method to handle completion of the asynchronous operation
    public void AuthorizationComplete(IAsyncResult ar)
    {
        // See "Managing Asynchronous Completion with the EndInvoke Method"
        // later in this chapter.
    }
}

The following section describes all the possible ways to manage asynchronous method completion.

Managing Asynchronous Completion with the EndInvoke Method

In most situations, you will want to obtain the return value of an asynchronous operation that you initiated. To obtain the result, you must know when the operation is complete. The asynchronous execution pattern provides the following mechanisms to determine whether an asynchronous operation is complete:
- Blocking: This is rarely used because it provides few advantages over synchronous execution. One use for blocking is to perform impersonation on a different thread. It is never used for parallelism.
- Polling: It is generally a good idea to not use this because it is inefficient; use waiting or callbacks instead.
- Waiting: This is typically used for displaying a progress or activity indicator during asynchronous operations.
- Callbacks: These provide the most flexibility; this allows you to execute other functionality while an asynchronous operation executes.

The process involved in obtaining the results of an asynchronous operation varies depending on the method of asynchronous completion you use. However, eventually you must call the EndInvoke method of the delegate. The EndInvoke method takes an IAsyncResult object that identifies the asynchronous operation to obtain the result from. The EndInvoke method returns the data that you would receive if you called the original method synchronously. The following sections explore each approach to asynchronous method completion in more detail.

Using the Blocking Approach

To use blocking, call EndInvoke on the delegate instance and pass the IAsyncResult object representing an incomplete asynchronous operation. The calling thread blocks until the asynchronous operation completes. If the operation is already complete, EndInvoke returns immediately. The following code sample shows how to invoke a method asynchronously, and then block until the method has completed.

// Block until the asynchronous operation is complete
int authorizationCode = ad.EndInvoke(ar);

The use of blocking might seem a strange approach to asynchronous completion, offering the same functionality as a synchronous method call. However, occasionally blocking is a useful approach because you can decide when your thread enters the blocked state, as opposed to synchronous execution; synchronous execution blocks immediately. Blocking can be useful if the user initiates an asynchronous operation after which there are a limited number of steps or operations they can perform before the application must have the result of the asynchronous operation.

Using the Polling Approach

To use polling, write a loop that repeatedly tests the completion state of an asynchronous operation using the IsCompleted property of the IAsyncResult object. The following code sample shows how to invoke a method asynchronously, and then poll until the method completes.

// Poll until the asynchronous operation completes
while (!ar.IsCompleted)
{
    // Do some other work...
}
// Get the result of the asynchronous operation
int authorizationCode = ad.EndInvoke(ar);

Polling is a simple but inefficient approach that imposes major limitations on what you can do while the asynchronous operation completes. Because your code is in a loop, the user's workflow is heavily restricted, providing few benefits over synchronous method invocation. Polling is really only suitable for displaying a progress indicator on smart client applications during short asynchronous operations. Generally, it is a good idea to avoid using polling and look instead to using waiting or callbacks.

Using the Waiting Approach

Waiting is similar to blocking, but you can also specify a timeout value after which the thread resumes execution if the asynchronous operation is still incomplete. Using waiting with timeouts in a loop provides functionality similar to polling, but it is more efficient because the runtime places the thread in a CPU-efficient sleep instead of using a code level loop. To use the waiting approach, you use the AsyncWaitHandle property of the IAsyncResult object. The AsyncWaitHandle property returns a WaitHandle object. Call the WaitOne method on this object to wait for a single asynchronous operation to complete. The following code sample shows how to invoke a method asynchronously, and then wait for a maximum of 2 seconds for the method to complete.
// Wait up to 2 seconds for the asynchronous operation to complete
WaitHandle waitHandle = ar.AsyncWaitHandle;
waitHandle.WaitOne(2000, false);

// If the asynchronous operation completed, get its result
if (ar.IsCompleted)
{
    // Get the result of the asynchronous operation
    int authorizationCode = ad.EndInvoke(ar);
    ...
}

Despite the advantages, waiting imposes the same limitations as polling—the functionality available to the user is restricted because you are in a loop, even though it is an efficient one. Waiting is useful if you want to show a progress or activity indicator when executing long-running processes that must complete before the user can proceed.

Another advantage of waiting is that you can use the static methods of the System.Threading.WaitHandle class to wait on a set of asynchronous operations. You can wait either for the first one to complete (using the WaitAny method) or for them all to complete (using the WaitAll method). This is very useful if you initiate a number of asynchronous operations at the same time and have to coordinate the execution of your application based on the completion of one or more of these operations.

Using Callbacks

When you specify an AsyncCallback delegate instance in the BeginInvoke method, you do not have to actively monitor the asynchronous operation for completion. Instead, when the operation completes, the runtime calls the method referenced by the AsyncCallback delegate and passes an IAsyncResult object identifying the completed operation. The runtime executes the callback method in the context of a thread from the runtime's thread pool. The following code sample shows how to invoke a method asynchronously, and specify a callback method that will be called on completion.

AuthorizeDelegate ad = new AuthorizeDelegate(AuthorizePayment);
IAsyncResult ar = ad.BeginInvoke(creditcardNumber, expiryDate, amount,
                                 new AsyncCallback(AuthorizationComplete), null);
...

// Method to handle completion of the asynchronous operation
public void AuthorizationComplete(IAsyncResult ar)
{
    // Retrieve the delegate that corresponds to the asynchronous method
    AuthorizeDelegate ad = (AuthorizeDelegate)((AsyncResult)ar).AsyncDelegate;

    // Get the result of the asynchronous method
    int authorizationCode = ad.EndInvoke(ar);
}

The great benefit of using callbacks is that your code is completely free to continue with other processes, and it does not constrain the workflow of the application user. However, because the callback method executes in the context of another thread, you face the same threading issues highlighted earlier in the discussion of the BeginInvoke method.

Using Built-In Asynchronous I/O Support

I/O is a situation where you frequently use asynchronous method calls. Because of this, many .NET Framework classes that provide access to I/O operations expose methods that implement the asynchronous execution pattern. This saves you from declaring and instantiating delegates to execute the I/O operations asynchronously.
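For instance, the FileStream class exposes BeginRead and EndRead methods that follow the same pattern. The fragment below is an illustrative sketch rather than part of the original guidance; the class name, file name, and buffer size are assumptions:

using System;
using System.IO;
using System.Text;

public class AsyncFileReader
{
    private byte[] buffer = new byte[4096];
    private FileStream stream;

    public void StartRead()
    {
        // Passing true for useAsync asks the runtime to open the file for asynchronous I/O
        stream = new FileStream("orders.xml", FileMode.Open, FileAccess.Read,
                                FileShare.Read, buffer.Length, true);

        // BeginRead returns immediately; ReadComplete runs when data is available
        stream.BeginRead(buffer, 0, buffer.Length, new AsyncCallback(ReadComplete), null);
    }

    private void ReadComplete(IAsyncResult ar)
    {
        // EndRead returns the number of bytes actually read
        int bytesRead = stream.EndRead(ar);
        string text = Encoding.UTF8.GetString(buffer, 0, bytesRead);
        stream.Close();
        // Process the data here; remember this runs on a thread-pool thread,
        // so do not update Windows Forms controls directly from this method.
    }
}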
The following list identifies the most common scenarios where you would use asynchronous I/O in your presentation layer and provides a link to a document where you can find implementation details:
- Consuming XML Web services
- Calling methods on remote objects using .NET Framework remoting
- File access
- Network communications
- Microsoft message queue

Using the built-in asynchronous capabilities of the .NET Framework makes the development of asynchronous solutions easier than it would be to explicitly create delegates to implement asynchronous operations.

Summary

Application performance and scalability can be greatly enhanced using multithreading and asynchronous operations. Wherever possible, try to use these techniques to increase the responsiveness of your presentation layer components.
http://msdn.microsoft.com/en-us/library/ff647332.aspx
CC-MAIN-2014-52
en
refinedweb
06 November 2012 15:56 [Source: ICIS news] LONDON (ICIS)--BioAmber is to supply biobased succinic acid to an automotive bioplastics development programme between Faurecia and Mitsubishi Chemical. “The objective of the joint Faurecia-Mitsubishi Chemical programme is to develop a polymer that can be used in mass-production for automotive interior parts," they said in a statement. “The joint development will start by modifying Mitsubishi Chemical’s patented biomass-derived poly-butylene succinate (PBS) and ultimately target to be produced from 100% bio sources.” BioAmber is a US-based renewable chemicals company, while Faurecia is listed on the NYSE Euronext Paris stock exchange. No financial details were disclosed.
http://www.icis.com/Articles/2012/11/06/9611523/bioamber-to-supply-succinic-acid-for-autos-bioplastics-project.html
CC-MAIN-2014-52
en
refinedweb
{-# LANGUAGE CPP, RankNTypes, GADTs, ViewPatterns #-}
{-# OPTIONS_GHC -fno-warn-unused-imports #-}
-- Copyright (C) 2009 Ganesh Sittampalam
module Darcs.Patch.Split
    ( Splitter(..), rawSplitter, noSplitter, primSplitter, reversePrimSplitter
    ) where

import Data.List ( intersperse )
import Darcs.Witnesses.Ordered
import Darcs.Witnesses.Sealed
import Darcs.Patch.FileHunk ( FileHunk(..), IsHunk(..) )
import Darcs.Patch.Patchy ( ReadPatch(..), showPatch, ShowPatch(..), Invert(..) )
import Darcs.Patch.Invert (invertFL)
import Darcs.Patch.Prim ( PrimPatch, canonize, canonizeFL, primFromHunk )

doPrimSplit :: PrimPatch prim => prim C(x y) -> Maybe (B.ByteString, B.ByteString -> Maybe (FL prim C(x y)))
doPrimSplit = doPrimSplit_ True explanation
  where explanation =" , "" ]

doPrimSplit_ edit_before_part helptext (isHunk -> Just (FileHunk fn
       return $ if edit_before_part
                   then hunk before before' +>+ hunk before' after' +>+ hunk after' after
                   else hunk before after' +>+ hunk after' after)
  where sep = BC.pack "=========================="
        hunk :: PrimPatch prim => [B.ByteString] -> [B.ByteString] -> FL prim C(a b)
        hunk b a = canonize (primFromHunk (FileHunk fn

primSplitter :: PrimPatch p => Splitter p
primSplitter = Splitter { applySplitter = doPrimSplit
                        , canonizeSplit = canonizeFL }

doReversePrimSplit :: PrimPatch prim => prim C(x y) -> Maybe (B.ByteString, B.ByteString -> Maybe (FL prim C(x y)))
doReversePrimSplit prim = do
    (text, parser) <- doPrimSplit_ False reverseExplanation (invert prim)
    let parser' p = do patch <- parser p
                       return . reverseRL $ invertFL patch
    return (text, parser')
  where reverseExplanation = map BC.pack
          [ "Interactive hunk edit:"
          , " - Edit the section marked 'AFTER' (representing the state to which you'll revert)"
          , " - Arbitrary editing is supported"
          , " - Your working copy will be returned to the 'AFTER' state"
          , " - Do not touch the 'BEFORE' section"
          , " - Hints:"
          , " - To revert only a part of a text addition, delete the part you want to get rid of"
          , " - To revert only a part of a removal, copy back the part you want to retain"
          , "" ]

reversePrimSplitter :: PrimPatch prim => Splitter prim
reversePrimSplitter = Splitter { applySplitter = doReversePrimSplit
                               , canonizeSplit = canonizeFL }
http://hackage.haskell.org/package/darcs-2.8.1/docs/src/Darcs-Patch-Split.html
CC-MAIN-2014-52
en
refinedweb
NAME
vm_fault_prefault - cluster page faults into a process’s address space

SYNOPSIS
#include <sys/param.h>
#include <vm/vm.h>
#include <vm/pmap.h>

void vm_fault_prefault(pmap_t pmap, vm_offset_t addra, vm_map_entry_t entry);

DESCRIPTION
The vm_fault_prefault() function provides a means of clustering page faults into a process’s address space. It operates upon the physical map pmap. The entry argument specifies the entry to be prefaulted; the addra argument specifies the beginning of the mapping in the process’s virtual address space. It is typically called by vm_fault() after the first page fault. It benefits the execve(2) system call by eliminating repetitive calls to vm_fault(), which would otherwise be made to bring the process’s executable pages into physical memory.

IMPLEMENTATION NOTES
This is a machine-independent function which calls the machine-dependent pmap_is_prefaultable(9) helper function to determine if a page may be prefaulted into physical memory.

SEE ALSO
execve(2), pmap_is_prefaultable(9)

AUTHORS
This manual page was written by Bruce M Simpson 〈bms@spc.org〉.
http://manpages.ubuntu.com/manpages/intrepid/man9/vm_fault_prefault.9freebsd.html
CC-MAIN-2014-52
en
refinedweb
SecurityAction Enumeration
.NET Framework 2.0

Specifies the security actions that can be performed using declarative security.

Namespace: System.Security.Permissions
Assembly: mscorlib (in mscorlib.dll)

This example shows how to tell the CLR that code in this assembly requires the IsolatedStoragePermission and also demonstrates how to write and read from isolated storage.

using System;
using System.Security.Permissions;
using System.IO.IsolatedStorage;
using System.IO;

// Notify the CLR:
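The code sample is truncated at this point in the excerpt. The following is a hedged sketch of how such an example typically continues; the attribute placement, quota value, and file name are illustrative and not taken from the original page:

// Notify the CLR that all code in this assembly requires, at minimum,
// permission to use isolated storage (RequestMinimum is one SecurityAction value).
[assembly: IsolatedStorageFilePermission(SecurityAction.RequestMinimum, UserQuota = 1048576)]

public class IsolatedStorageExample
{
    public static void Main()
    {
        // Write a small file into the user's isolated store for this assembly
        using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForAssembly())
        using (IsolatedStorageFileStream stream =
                   new IsolatedStorageFileStream("greeting.txt", FileMode.Create, store))
        using (StreamWriter writer = new StreamWriter(stream))
        {
            writer.WriteLine("Hello from isolated storage");
        }

        // Read the file back from the isolated store
        using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForAssembly())
        using (IsolatedStorageFileStream stream =
                   new IsolatedStorageFileStream("greeting.txt", FileMode.Open, store))
        using (StreamReader reader = new StreamReader(stream))
        {
            Console.WriteLine(reader.ReadLine());
        }
    }
}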
http://msdn.microsoft.com/en-US/library/system.security.permissions.securityaction(v=vs.80).aspx
CC-MAIN-2014-52
en
refinedweb
16 June 2009 18:49 [Source: ICIS news] WASHINGTON (ICIS news)--A top Obama administration security official on Tuesday joined industry leaders in opposing an inherently safer technology (IST) mandate and citizen lawsuits in new chemical facility antiterrorism legislation. Philip Reitinger, deputy undersecretary at the Department of Homeland Security (DHS), told a House hearing that his department has significant concerns that a provision allowing private right of action lawsuits could undermine national security measures at high-risk chemical facilities. He also indicated to the House Homeland Security Committee that the department favours existing voluntary use by industry of inherently safer technology “where appropriate”. While Reitinger did not address a provision in the pending bill that would give DHS authority to impose IST measures on chemical facilities, he indicated instead that the department supports industry's own voluntary implementation of safer technologies. The committee was holding the first full hearing on HR-2868, “The Chemical Facility Antiterrorism Act of 2009”, which was formally introduced in the House late on Monday. The proposed bill would revise, broaden and extend the Chemical Facility Anti-Terrorism Standards (CFATS) that have been in place under DHS enforcement since 2006 and are due to expire in early October this year unless renewed or replaced. The existing statute underlying CFATS does not include an IST mandate and does not allow private citizens to file suits to force tougher enforcement of the chemical site security programme. In his opening statement at Tuesday’s hearing, committee Chairman Bennie Thompson (Democrat-Mississippi) said the existing statute is deficient because it does not allow private right of action or contain an IST mandate. But Reitinger said in his testimony that the department “has significant concerns with the citizen suit provision” because of “the potential for disclosure of sensitive or classified information” in court proceedings. Marty Durbin, vice president for federal affairs at the American Chemistry Council (ACC), warned that private right of action lawsuits would undermine plant site security efforts. He urged the panel to give the department more funding and more staff to enforce site security law and regulations rather than “create a litigious environment” that would complicate security. Durbin also argued against an inherently safer technology mandate, contending that safer technology decisions should be left to the industry, which traditionally has worked to employ safer and less costly substances and processes. He also pointed out that there are no safer alternatives for some chemical feedstocks and processes. “In these instances, you cannot simply eliminate potential security risks, you must work to manage or mitigate them,” he said. The Society of Chemical Manufacturers & Affiliates (SOCMA) argued in testimony that an IST mandate “would remove decisions about risk from those at facilities who manage it every day to a government bureaucrat”. SOCMA said the private right of action provision is “misguided” and could expose chemical facilities to increased risk. The trade group, which represents some 300
CVI includes detailed information about plant vulnerabilities and defences that high-risk chemical facilities must submit to the department to comply with existing site security requirements.
Chemical industry officials worry that public disclosure of such information could provide terrorists with blueprints for attacks. “This is one area where citizen enforcement could actually work against, not support, the protective purpose of the law,” SOCMA said. The Homeland Security Committee is expected to complete work on the new site security legislation this week or next. The bill also must be reviewed by the House Energy and Commerce Committee, and a parallel measure has yet to be introduced in the US Senate. An eight-page summary of the pending HR-2868 is available from the Homeland Security Committee
http://www.icis.com/Articles/2009/06/16/9225451/us-official-joins-industry-in-opposing-safer-tech-mandate.html
CC-MAIN-2014-52
en
refinedweb
Design Details of the Windows Runtime

The Windows Runtime (WinRT) was created to provide a fluid and secure application experience on Windows. WinRT was influenced by .NET, C++ and JavaScript. WinRT does not replace the CLR or Win32, but rather provides unified support for applications written in different languages to run on Windows using the new Metro UI.

Microsoft started working on the Windows Runtime (WinRT) two years ago, driven by the desire to build a better development platform enabling the creation of fast, fluid and trustworthy applications using rich Intellisense and good debugging tools, while preserving freedom of choice in the language and libraries to use. The end result was an architecture and a set of APIs that can be called from .NET languages (C#, VB.NET, F#), C++, and HTML/JavaScript. All these languages had an influence on the design of WinRT. WinRT is not meant to replace all the functionality provided by .NET or Win32, but it is a common platform for applications written in different languages to run using the new Metro style interface. Hybrid C# applications will still be able to execute LINQ queries while relying on WinRT to create the Metro UI, as well as for storage, networking, the new application security model, and so on. The overall architecture of the runtime is depicted in the graphic below.

Language projection represents the view of the WinRT API that each supported language has. The recommended API for building Metro applications can be found under the "Windows" namespace in Visual Studio 11's Intellisense.

Martyn Lovell, Development Manager of the Windows Runtime, presented the design principles behind WinRT during the BUILD session entitled "Lap Around the Windows Runtime":
- Anything taking more than 50ms should be done through an asynchronous call in order to ensure a fluid and fast application experience. Because many developers tend to use synchronous API calls even when asynchronous equivalents exist, asynchrony was built deep into WinRT, forcing developers to make asynchronous calls.
- Better separation between applications, so that one application's performance does not affect another's, and for better security. Runtime objects belonging to one application cannot be exposed to another except by going through the standard OS communication channels mediated by Windows Contracts.
- Platform-based versioning ensuring that applications run well on different versions of Windows. Versioning is included in WinRT metadata, and Intellisense exposes functionality based on the version an application targets, so the developer knows which classes and methods are available for a certain version of Windows without consulting other documentation.

Regarding types, WinRT had to provide language-independent types: integers, enumerations, structures, arrays, interfaces, generic interfaces and runtime classes. A new string type called HSTRING was introduced, allowing the transfer of strings between an application and the runtime without copying the data. Each WinRT object is projected through a number of interfaces, two of which belong to every object: IUnknown, the familiar COM interface, and IInspectable, used to discover information about the object based on its included metadata. An object may provide other functionality via additional interfaces, but those interfaces are exposed collectively via a runtime class. For example, the FileInformation object has the interfaces IStorageItemInformation, IStorageItem and IStorageFile exposed through the FileInformation class.
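To make the projection concrete, here is a minimal C# sketch of calling a projected WinRT API from a Metro-style app. It is illustrative only: the file name and class are invented, and it assumes the await support for WinRT asynchronous operations shown with the Visual Studio 11 preview.

using Windows.Storage;

public class GreetingLoader
{
    public async void LoadGreeting()
    {
        // GetFileAsync and ReadTextAsync are asynchronous WinRT calls in the
        // Windows.Storage namespace; control returns to the UI thread while they run.
        StorageFile file = await KnownFolders.DocumentsLibrary.GetFileAsync("greeting.txt");
        string text = await FileIO.ReadTextAsync(file);
        // ... update the UI with 'text' here
    }
}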
WinRT objects are exposed to C++ applications at compile time, and they are bound to C#/VB.NET apps partially at compile time and partially at runtime. HTML/JavaScript applications see WinRT objects only at runtime, the metadata being generated dynamically. The METRO interface runs on a non-reentrant single thread but the rest of the application can use multiple threads automatically provided by the runtime from a thread pool. Harry Pierson, Windows Runtime Experience Team, and Jesse Kaplan, Common Language Runtime Team, presented some details for programming with .NET languages against WinRT in another BUILD session called “Using the Windows Runtime from C# and Visual Basic”. According to Pierson, .NET had a major influence on WinRT, many design guidelines being borrowed from it. For example, the WinRT library is augmented with metadata based on an updated version of .NET’s metadata format. Like Silverlight, WinRT uses a XAML framework for creating Metro applications. .NET applications will feel at home using WinRT since there is a direct mapping between the runtime and .NET: primitives, classes, interfaces, properties, methods, etc., the existing differences being hidden from the developer. Pierson also said that one can create Windows Runtime components in C# to be consumed by C++ or JavaScript WinRT applications by abiding to a series of rules: “Structs can only have public data fields, Inheritance can only be used for XAML controls, all other types must be sealed, Only supports system provided generic types.” Windows 8 and perhaps the following versions of Windows will provide a mixed environment where classic applications coexist with the new touch-friendly Metro ones. The future Windows applications based on Metro will benefit from a common infrastructure provided by the Windows Runtime, the developers having to program against a unique API that has slightly different views from different languages. This is Microsoft’s best attempt in maintaining compatibility with the past while providing new functionality for the future. COM-based runtime by arnaud m It seems that MS has still the same problem as apple with Obj-C... (arstechnica.com/staff/fatbits/2005/09/1372.ars) As for async API, it reminds me of the old days when I was programming on the classic MacOS, where the lack of threading forced to implement complex chain of callbacks. Well, maybe since modern languages have a better syntax to do that, this kind of API is easier to use now. Win32 is old by Java 陈 MS bets Metro is the future Windows appearance, ha, bad
http://www.infoq.com/news/2011/09/Design-Details-Windows-Runtime
CC-MAIN-2014-52
en
refinedweb
While JDK 1.1 certainly has streamlined event handling with the introduction of the delegation event model, it does not make it easy for developers to create their own event types. The basic procedure described here is actually rather straightforward. For the sake of simplicity, I will not discuss concepts of event enabling and event masks. Plus, you should know that events created using this procedure will not be posted to the event queue and will work only with registered listeners.

Currently, the Java core consists of 12 event types defined in java.awt.event:
- ActionEvent
- AdjustmentEvent
- ComponentEvent
- ContainerEvent
- FocusEvent
- InputEvent
- ItemEvent
- KeyEvent
- MouseEvent
- PaintEvent
- TextEvent
- WindowEvent

Creating a new event type involves the following tasks:
- Create an event listener
- Create a listener adapter
- Create an event class
- Modify the component
- Managing multiple listeners

We'll examine each of these tasks in turn and then put them all together.

Here is the event multicaster as implemented to handle WizardEvent:

import java.awt.AWTEventMulticaster;
import java.util.EventListener;

public class WizardEventMulticaster extends AWTEventMulticaster implements WizardListener {

    protected WizardEventMulticaster(EventListener a, EventListener b) {
        super(a, b);
    }

    public static WizardListener add(WizardListener a, WizardListener b) {
        return (WizardListener) addInternal(a, b);
    }

    public static WizardListener remove(WizardListener l, WizardListener oldl) {
        return (WizardListener) removeInternal(l, oldl);
    }

    public void nextSelected(WizardEvent e) {
        //casting exception will never occur in this case
        //casting _is_ needed because this multicaster may
        //handle more than just one listener
        if (a != null) ((WizardListener) a).nextSelected(e);
        if (b != null) ((WizardListener) b).nextSelected(e);
    }

    public void backSelected(WizardEvent e) {
        if (a != null) ((WizardListener) a).backSelected(e);
        if (b != null) ((WizardListener) b).backSelected(e);
    }

    public void cancelSelected(WizardEvent e) {
        if (a != null) ((WizardListener) a).cancelSelected(e);
        if (b != null) ((WizardListener) b).cancelSelected(e);
    }

    public void finishSelected(WizardEvent e) {
        if (a != null) ((WizardListener) a).finishSelected(e);
        if (b != null) ((WizardListener) b).finishSelected(e);
    }

    protected static EventListener addInternal(EventListener a, EventListener b) {
        if (a == null) return b;
        if (b == null) return a;
        return new WizardEventMulticaster(a, b);
    }

    protected EventListener remove(EventListener oldl) {
        if (oldl == a) return b;
        if (oldl == b) return a;
        EventListener a2 = removeInternal(a, oldl);
        EventListener b2 = removeInternal(b, oldl);
        if (a2 == a && b2 == b) return this;
        return addInternal(a2, b2);
    }
}

Methods in the multicaster class: A review

Let's review the methods that are part of the multicaster class above. The constructor is protected, and in order to obtain a new WizardEventMulticaster, a static add(WizardListener, WizardListener) method must be called. It takes two listeners as arguments that represent two pieces of a listener chain to be linked:
- To start a new chain, use null as the first argument.
- To add a new listener, use the existing listener as the first argument and a new listener as the second argument.

This, in fact, is what has been done in the code for class Wizard that we have already examined. Another static routine is remove(WizardListener, WizardListener).
The first argument is a listener (or listener multicaster), and the second is a listener to be removed. Four public, non-static methods were added to support event propagation through the event chain. For each WizardEvent case (that is, next, back, cancel, and finish selected) there is one method. These methods must be implemented since the WizardEventMulticaster implements WizardListener, which in turn requires the four methods to be present.

How it all works together

Let's now examine how the multicaster actually is used by the Wizard. Let's suppose a wizard object is constructed and three listeners are added, creating a listener chain. Initially, the private variable wizardListener of class Wizard is null. So when a call is made to WizardEventMulticaster.add(WizardListener, WizardListener), the first argument, wizardListener, is null and the second is not (it does not make sense to add a null listener). The add method, in turn, calls addInternal. Since one of the arguments is null, the return of addInternal is the non-null listener. The return propagates to the add method that returns the non-null listener to the addWizardListener method. There the wizardListener variable is set to the new listener being added. This is exactly what we expected: If there are no listeners and a new listener is added, assign it to the wizardListener variable. Note that at this point, wizardListener holds a reference to a WizardListener object that is not a multicaster (it is not necessary to use a multicaster if only one listener is registered).

When a second call is made to the addWizardListener method, both arguments passed to WizardEventMulticaster.add are not null. In order to hold two listeners, we need a multicaster, so an instance of WizardEventMulticaster is returned by the addInternal method and therefore by WizardEventMulticaster.add. The new multicaster object is assigned to the wizardListener variable, which now holds a chain of two listeners. When a third listener is added, the procedure is the same as for adding the second listener. At this point, if the NEXT button is pressed on the wizard panel, it is enough to invoke the nextSelected method on the WizardListener object referenced by the wizardListener variable to send a WizardEvent to all listeners in the chain (see sample code above). The removal of listeners is achieved by searching the listener chain in a recursive fashion: the event-specific remove calls removeInternal, and that may call the protected remove method.

Conclusions

I think developing custom event types in JDK 1.1 is a non-trivial process. It requires a lot of (mostly) simple coding. The interaction between different events and event support classes is often intricate and difficult to follow (although, I must say, quite ingenious). The good news is that you do not have to create a new multicaster for every new event type developed. Since one multicaster can extend several listener interfaces, it is enough just to add listener and event-specific methods to an existing multicaster to make it handle more event types. Please remember that we have not talked about support for event queuing on the system queue, nor have we mentioned event enabling. In the case of the Wizard class, such event queuing and enabling support would involve the addition of processEvent and processWizardEvent. Both of these deal with events delivered to a component from the queue.
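As a rough illustration (purely hypothetical, since the article does not implement this and it would require WizardEvent to extend java.awt.AWTEvent with its own event ID constants such as NEXT_SELECTED), the Wizard component could override processEvent along these lines:

// Hypothetical sketch of methods that would be added to the Wizard class;
// it assumes WizardEvent extends java.awt.AWTEvent and defines its own IDs.
protected void processEvent(AWTEvent e) {
    if (e instanceof WizardEvent) {
        processWizardEvent((WizardEvent) e);
    } else {
        super.processEvent(e);
    }
}

protected void processWizardEvent(WizardEvent e) {
    if (wizardListener == null) return;
    switch (e.getID()) {
        case WizardEvent.NEXT_SELECTED:   wizardListener.nextSelected(e);   break;
        case WizardEvent.BACK_SELECTED:   wizardListener.backSelected(e);   break;
        case WizardEvent.CANCEL_SELECTED: wizardListener.cancelSelected(e); break;
        case WizardEvent.FINISH_SELECTED: wizardListener.finishSelected(e); break;
    }
}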
Event masks also need to be investigated and assigned with caution to avoid potential conflicts with events that are already part of AWT. Finally, the way in which events are dispatched would need to be changed: Currently, a call is made to listeners right where the triggering action occurs (for example, from the actionPerformed method for the NEXT button). It appears that components in AWT first post an event to the queue. The event is then received by a processEvent method of the source component, which in turn calls the event-specific processor (in our case that would be processWizardEvent), which calls the appropriate listener method, thus triggering event delivery. Note: I would like to thank John Zukowski for encouraging me to write this article. He was also a great help in improving the original draft.
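For reference, here is a minimal sketch of the listener interface, event class, and registration methods that the Wizard class described above relies on but which are not reproduced in this excerpt. The exact signatures and the use of EventObject are assumptions inferred from the names used in the multicaster code:

import java.util.EventListener;
import java.util.EventObject;

// Listener interface implied by the multicaster implementation above
public interface WizardListener extends EventListener {
    void nextSelected(WizardEvent e);
    void backSelected(WizardEvent e);
    void cancelSelected(WizardEvent e);
    void finishSelected(WizardEvent e);
}

// Event class the listeners receive; the article creates it in an earlier section
public class WizardEvent extends EventObject {
    public WizardEvent(Object source) {
        super(source);
    }
}

// Registration methods as they would appear inside the Wizard class,
// delegating chain management to the multicaster
public synchronized void addWizardListener(WizardListener l) {
    wizardListener = WizardEventMulticaster.add(wizardListener, l);
}

public synchronized void removeWizardListener(WizardListener l) {
    wizardListener = WizardEventMulticaster.remove(wizardListener, l);
}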
http://www.javaworld.com/article/2077533/learn-java/java-tip-35--create-new-event-types-in-java.html
CC-MAIN-2014-52
en
refinedweb