Category Archives: Java


[Original] Adding dynamically compiled ParamResolver support to paoding-rose

Using javassist, I added the ability to dynamically compile parameter-resolving classes. A quick note on the goal and how it works:
Goal: dynamically parse the incoming request parameters and assemble them into a class.
The end result:

The main motivation was to capture the parameters sent by DataTables automatically:

Process: this started as a modification of wanghaomiao's blog post (items 2-6 below). The original used Java reflection; I added items 7-9 to dynamically compile the resolver class.

How it works: rose's ParamResolver interface provides a supports method and a resolve method; supports decides whether a given class can be resolved, and resolve performs the actual resolution. I introduced a WebBean annotation and use its presence to make that decision: if a class is annotated with WebBean, ResolverHelper dynamically compiles a resolver for it.

Core code of the dynamic compilation:

Attached: the main method of the dynamically generated resolver class (supports nesting and List; currently supports int, long, and boolean)
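
As a rough illustration of the javassist approach (a sketch only; the class and method names here are made up, and this is not the original rose/ResolverHelper code), a resolver for a WebBean-annotated class might be generated like this:

import java.beans.Introspector;
import java.lang.reflect.Method;

import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;

public class ResolverHelperSketch {

    // For a @WebBean-annotated class, generate at runtime a small resolver whose
    // resolve() method copies request parameters into a fresh instance of the bean.
    public static Class<?> buildResolverFor(Class<?> beanClass) throws Exception {
        ClassPool pool = ClassPool.getDefault();
        CtClass cc = pool.makeClass(beanClass.getName() + "$GeneratedResolver");

        // Build the Java source of resolve() as a string; javassist compiles it to
        // bytecode. Only int setters are handled here; the real thing would also
        // cover long/boolean, nested beans, and Lists.
        StringBuilder src = new StringBuilder();
        src.append("public Object resolve(java.util.Map params) {")
           .append(beanClass.getName()).append(" bean = new ")
           .append(beanClass.getName()).append("();");
        for (Method m : beanClass.getMethods()) {
            if (m.getName().startsWith("set") && m.getParameterTypes().length == 1
                    && m.getParameterTypes()[0] == int.class) {
                String prop = Introspector.decapitalize(m.getName().substring(3));
                src.append("bean.").append(m.getName())
                   .append("(Integer.parseInt((String) params.get(\"")
                   .append(prop).append("\")));");
            }
        }
        src.append("return bean; }");

        cc.addMethod(CtMethod.make(src.toString(), cc));
        return cc.toClass(); // the generated class can then be cached per bean type
    }
}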

[Original] Getting Spring's ApplicationContext in paoding-rose

I have just started using rose. Some of my services run in their own threads and need to call getBean.
I tried several approaches and could not get hold of Spring's ApplicationContext;
for example, implementing ApplicationContextAware, but the aware callback was never invoked.
One workaround is to pass every dependency a thread needs into it when you new it (this is also what the rose author recommends), but sometimes that is a hassle, for example when there are many, many dependencies and the parameter list grows long, or when you just want a quick test and end up writing a pile of @Autowired fields.
wangqiaowqo (one of the rose authors) also blogged about calling new RoseAppContext();, but in practice that effectively restarts Spring: every service gets re-initialized, which can cause serious problems.
Is there a way to get something like an App.getBean() method?
I gave it a try, and it is possible to obtain the ApplicationContext of the original Spring startup environment. The approach is as follows (a bit roundabout: since @Autowired cannot annotate static members, the App bean itself is fetched from the ApplicationContext via getBean):

The core of it is:
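
A minimal sketch of that idea (not the exact original code; the App class name and the use of @PostConstruct are my assumptions), which leaves you with a static App.getBean():

import javax.annotation.PostConstruct;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.stereotype.Component;

@Component
public class App {

    // @Autowired cannot target a static field, so the context is injected into an
    // instance field and copied into the static holder once the bean is initialized.
    private static ApplicationContext context;

    @Autowired
    private ApplicationContext applicationContext;

    @PostConstruct
    public void init() {
        App.context = this.applicationContext;
    }

    // getBean(Class) needs Spring 3.x; on older Spring, use context.getBean(String name).
    public static <T> T getBean(Class<T> clazz) {
        return context.getBean(clazz);
    }
}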

[Original] JarClassLoader failing to load native libraries on macOS

Using JarClassLoader together with maven to package all dependencies into a single jar is quite convenient, but on macOS it failed to load native libraries.
Tracing JarClassLoader at runtime showed the main cause: System.mapLibraryName returns suffix 'dylib' and not 'jnilib' using JDK 7/8 on OS X.

Fix:
1) keep a copy of the library with the .dylib extension, or
2) detect the platform and map the .dylib suffix back to .jnilib (see the sketch below)
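
A minimal sketch of option 2), assuming the library inside the jar is packaged with the .jnilib extension:

public class NativeNameFix {

    // System.mapLibraryName("foo") yields "libfoo.dylib" on OS X under JDK 7/8,
    // so map it back to the .jnilib name that the packaged library actually uses.
    public static String mapLibraryName(String name) {
        String mapped = System.mapLibraryName(name);
        String os = System.getProperty("os.name").toLowerCase();
        if (os.contains("mac") && mapped.endsWith(".dylib")) {
            mapped = mapped.substring(0, mapped.length() - ".dylib".length()) + ".jnilib";
        }
        return mapped;
    }
}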

[Original] Using Java 8's new lambda and stream features to convert a List to a Map

The requirement is this: take a list and turn it into a map.
Say the list is a List<User>; the map key is User.getId() and the value, obviously, is the User itself.
Before Java 8 you had to write something like this:
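
Something along these lines (a sketch, not the original snippet, assuming a minimal User class with a getId() accessor):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class User {
    private final Long id;
    User(Long id) { this.id = id; }
    Long getId() { return id; }
}

public class ListToMapLoop {
    public static void main(String[] args) {
        List<User> list = new ArrayList<User>();
        list.add(new User(1L));
        list.add(new User(2L));

        // Walk the list by hand and put each element into the map keyed by its id.
        Map<Long, User> map = new HashMap<Long, User>();
        for (User user : list) {
            map.put(user.getId(), user);
        }
        System.out.println(map.keySet());
    }
}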

With Java 8 lambdas and streams it comes down to this, which is quite convenient:
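
Again a sketch rather than the original snippet, reusing the hypothetical User above:

import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ListToMapStream {
    // Key each element by User::getId and keep the User itself as the value.
    static Map<Long, User> toMap(List<User> list) {
        return list.stream()
                   .collect(Collectors.toMap(User::getId, Function.identity()));
    }
}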

It looks a little more involved, but the payoff is that it supports generics. For example:
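
My reading of the generics point (an assumption about what the original example showed): the same collect call generalizes into a reusable helper for any element type and any key extractor:

import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class MapUtil {
    // Works for any element type V and any key extractor V -> K,
    // e.g. Map<Long, User> byId = MapUtil.toMap(users, User::getId);
    static <K, V> Map<K, V> toMap(List<V> list, Function<? super V, ? extends K> keyFn) {
        return list.stream().collect(Collectors.toMap(keyFn, Function.identity()));
    }
}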

[Original] A few bugs in netty 4.0.17.Final's epoll mode

Native epoll support on Linux is a new feature introduced in netty 4.0.17.Final, but testing turned up a few bugs. Notes so far:
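
For context, switching a server bootstrap to the native epoll transport looks roughly like this (a sketch using the io.netty.channel.epoll classes from 4.0.x):

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollServerSocketChannel;
import io.netty.channel.socket.SocketChannel;

public class EpollServer {
    public static void main(String[] args) throws Exception {
        // The epoll transport replaces NioEventLoopGroup / NioServerSocketChannel.
        EpollEventLoopGroup boss = new EpollEventLoopGroup(1);
        EpollEventLoopGroup worker = new EpollEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(boss, worker)
             .channel(EpollServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     // add your handlers here
                 }
             });
            b.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            worker.shutdownGracefully();
        }
    }
}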

[Resolved]
Channel close error when using epoll

Cause: a bug in Native.finishConnect()
See: https://github.com/netty/netty/issues/2280
Building a package from the latest source fixes it, including compiling the native C code:

[Unresolved]
localAddress() and remoteAddress() cannot be obtained from the channel when using epoll

Log:

There is a related issue on github: https://github.com/netty/netty/issues/2262
It is marked as resolved, but after building the latest 4.0 code (4.0.18.Final-SNAPSHOT) the error is still there.
Recording it here for now; I will dig into it tomorrow.

[Original] A directory-listing parse bug in ftp4j, and a fix

The ftp4j library I recommended last time throws an FTPListParseException when parsing the directory listings of some FTP sites (the FTP protocol is partly to blame for never standardizing the LIST format). Looking at the source, there are two problems:
1) File permissions are not limited to r, w and x; s and t may also appear (see http://en.wikipedia.org/wiki/File_system_permissions )
2) Some ftpd implementations seem to produce the listing straight from "ls -l", so the first line is "total xxx"
I have written to the author, so maybe the next release will carry a "Special Thanks to bianbian". Heh heh.
The fixed code (the unchanged part that follows is omitted):
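
Illustratively (a sketch of the two changes, not ftp4j's actual parser code), the fix amounts to accepting s/t in the permission field and skipping the "total" line:

import java.util.regex.Pattern;

public class UnixListParseSketch {

    // 1) The permission triads may contain s/S (setuid/setgid) and t/T (sticky),
    //    not just r, w and x.
    private static final Pattern LINE = Pattern.compile(
            "^([dlcbps-])([r-][w-][xsS-])([r-][w-][xsS-])([r-][w-][xtT-])\\s+.*$");

    public static boolean isParsable(String line) {
        // 2) Listings produced by "ls -l" start with a "total NNN" line; skip it.
        if (line.startsWith("total ")) {
            return false;
        }
        return LINE.matcher(line).matches();
    }
}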

[Original] Shibboleth 2.0 Identity Provider (IdP) LDAP authentication configuration guide

First, hats off to the folks abroad: 1) they can make simple things very complicated, and 2) they are very good at inventing standards and protocols.
Shibboleth, which I ran into this time, is exactly that kind of thing. After two days of reading English docs I am thoroughly fed up with man-made complexity and protocol invention. Here is a simple configuration guide as a reference for others, to save them some detours.

Notes:
1) The system time must be set correctly
2) Apache needs mod_ssl and mod_proxy_ajp; assume it is installed under /etc/httpd
3) tomcat-5.5.x+ is required; assume it is installed under /opt/apache-tomcat-5.5.26
4) If you need to change the install directory and reinstall, you must go all the way back to the unpacking step (otherwise a lot of directory-dependent code is not recompiled, which causes serious errors; that cost me an entire painful day!)

[Original] A guide to solr 1.3 multicore

Although solr 1.3 is still a dev build and has not been released yet, the many features it adds over 1.2 (especially multicore support) made me choose it (I used 1.2 for a while, and having to duplicate the whole deployment for each index was very tedious).
Usage is basically the same as solr 1.2, and switching over took almost no effort. The only addition is the multicore configuration:
Set solr/home to, say, /opt/solrs
Create multicore.xml under /opt/solrs
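
In the 1.3 dev snapshots the file looked roughly like the sketch below; this is an assumption from memory, so check the example shipped with your snapshot, since the exact element names changed during development:

<multicore adminPath="/admin/multicore" persistent="true">
  <!-- each core has its own instanceDir containing conf/schema.xml and conf/solrconfig.xml -->
  <core name="core0" instanceDir="core0" />
  <core name="core1" instanceDir="core1" />
</multicore>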


[Original] Spring event listening, Java reflection and IoC injection are really powerful

What I built over the past few days: when the DAO inserts a bean into the database, Java reflection generates the SQL automatically, and at the same time publishEvent fires a bean-update event; an event listener processes the bean according to a configuration file and submits it to Lucene (Solr) full-text search (essentially mapping bean properties to Solr fields) so the index is updated in near real time (with buffering on the bean side, of course). Going the other way, search results will hopefully be converted back into beans and handed to a user callback.
danny, that absolute beast, built this framework on top of Spring. Besides rapid-development features such as automatic controller Action binding, automatic ResultSet-to-bean conversion, automatic pagination, automatic View rendering and so on (Orz), it now supports automatic full-text search, and the whole thing is a seamless transition for the other developers: enabling full-text indexing does not require changing a single line of code. (Conceptually AOP fits better than event listening, a "hot-pluggable" full-text-search aspect, but the big advantage of event listening is that it is asynchronous; wrapping near-real-time index updates in a synchronous AOP layer would be completely impractical.) You just declare in the configuration file which tables' beans and which fields need full-text search, and that's it: whenever a record is added or fully updated, the index is updated in near real time, almost hot-pluggable.
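
A minimal sketch of the event-listening half (names are illustrative; this is not danny's framework code):

import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationListener;

// The event simply carries the bean that was just inserted or updated; the DAO
// publishes it right after executing the reflection-generated INSERT, e.g.
//   applicationContext.publishEvent(new BeanUpdatedEvent(bean));
class BeanUpdatedEvent extends ApplicationEvent {
    BeanUpdatedEvent(Object bean) {
        super(bean);
    }
}

// Maps the bean's properties to Solr fields according to the configuration file
// and submits it for (buffered) near-real-time indexing; dispatch can be made
// asynchronous by configuring an executor for the event multicaster.
class FullTextIndexListener implements ApplicationListener {
    public void onApplicationEvent(ApplicationEvent event) {
        if (event instanceof BeanUpdatedEvent) {
            Object bean = event.getSource();
            // map properties -> Solr document and add it to the index queue
        }
    }
}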

[Original] Strongly recommended: ftp4j, a pure-Java FTP client library

ftp4j is a very young open-source project, but after trying it out I found it very good and very capable. If you are looking for a pure-Java FTP library that supports socks4, socks4a, socks5 and HTTP proxies, this is the one!
It is much nicer than Apache's FTPClient (no proxy support) or the semi-commercial edtFTPj (the PRO edition supports proxies, but it costs money and the proxy is a system-property-level setting that cannot be specified per connection), it is LGPL-licensed, and the source code quality is high. (If you need FTPS or SFTP, though, ftp4j does not support them.)
The jar is only a little over 50 KB; the address is here: ftp4j
Code for using a proxy:
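
A sketch from memory of how the connectors are used (double-check the class names against the ftp4j documentation):

import it.sauronsoftware.ftp4j.FTPClient;
import it.sauronsoftware.ftp4j.connectors.HTTPTunnelConnector;
import it.sauronsoftware.ftp4j.connectors.SOCKS5Connector;

public class Ftp4jProxyExample {
    public static void main(String[] args) throws Exception {
        FTPClient client = new FTPClient();

        // Route the FTP connection through an HTTP proxy...
        client.setConnector(new HTTPTunnelConnector("proxy.example.com", 8080));
        // ...or through a SOCKS5 proxy instead:
        // client.setConnector(new SOCKS5Connector("proxy.example.com", 1080));

        client.connect("ftp.example.com");
        client.login("user", "password");
        // ... transfer files ...
        client.disconnect(true);
    }
}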

[Repost] How Hibernate object identifiers (OIDs) map to database primary keys

Source: http://www.blogjava.net/action/archive/2007/05/22/119134.html
Hibernate uses an object identifier, commonly called the OID, to establish the correspondence between an object and a row in a database table. The OID maps to the table's primary key, so the OID is very important and application code should not assign it. A database distinguishes different rows of the same table by the primary key, whose three essential properties are: it must not be null, it must not contain duplicate values, and it must never change. That is why we usually design the primary key as an automatically increasing number with no business meaning; how to set that up differs per database, but it is always simple, usually just one extra attribute.
Java, on the other hand, distinguishes different objects of the same class by memory address. There are two ways to compare whether two object references are equal: 1) the "==" operator compares memory addresses, and Object's default equals method also compares by memory address; 2) comparing whether two objects have the same value, which is what classes that override Object's equals method do, for example String, Date and the wrapper classes. So String.equals(String) compares the values of the two Strings, while Object's original equals compares addresses. This point is easy to mix up.
Usually, to guarantee the uniqueness and immutability of the Hibernate OID, it is more sensible to let Hibernate or the underlying database assign it. So when writing the persistent class, it is best to make the OID's setter private or protected, which prevents Java code from changing the OID casually, while the getter stays public so the OID is still easy to read. In the object-relational mapping file, the …. I like that book very much; it explains things very simply and clearly, and interested readers can go buy a copy (dangdang has it at 73% of the list price).
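
A small example of the private-setter convention just described (a sketch with a made-up entity name):

public class Customer {

    // The OID, mapped to the table's primary key; assigned by Hibernate or the database.
    private Long id;

    public Long getId() {
        return id;
    }

    // Private so application code cannot change the identifier;
    // Hibernate can still populate it via reflection / property access.
    private void setId(Long id) {
        this.id = id;
    }
}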

[Translation] What does the volatile keyword do?

While reading the LumaQQ source I came across a volatile (literally "changeable, unstable"), so I found an English write-up about it and translated it when I had some spare time. Please don't laugh at any mistranslations...

What does the volatile keyword do?
  It is probably easiest to explain by comparing the effects of volatile and synchronized. volatile is a field modifier, while synchronized applies to blocks of code and methods. Consider the following three getter methods:
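
A reconstruction of the three accessors from the description that follows (the exact original form may have differed):

class VolatileExample {
    int i1;
    volatile int i2;
    int i3;

    int geti1() { return i1; }               // plain field, plain accessor
    int geti2() { return i2; }               // volatile field
    synchronized int geti3() { return i3; }  // synchronized accessor
}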

  geti1() returns the value of i1 stored in the current thread. Multiple threads can hold multiple copies of i1, and those copies do not have to agree. In other words, another thread may have changed the i1 inside its own thread, and that value can differ from the i1 in the current thread. In fact, Java has the notion of a "main" memory area that holds the current "correct" value of each variable. Every thread may keep its own copy of a variable, and that copy can differ from what is stored in "main" memory. So it is actually possible for i1 to be 1 in "main" memory, 2 in thread 1 and 3 in thread 2; this happens when thread 1 and thread 2 have both changed their own i1 and the changes have not yet been propagated to "main" memory or to the other threads.
  geti2(), on the other hand, returns the value of i2 in "main" memory. A variable modified with volatile is not allowed to have a copy that differs from "main" memory. In other words, a volatile variable must be kept in sync across all threads: as soon as any thread changes its value, every other thread immediately sees the same value. Naturally, accessing a volatile variable costs a little more than accessing an ordinary one, because letting each thread keep its own copy is what makes ordinary variables efficient.
  If volatile already synchronizes data across threads, what do we still need synchronized for? Well, there are two differences. First, synchronized acquires and releases a monitor: if two threads use the same object lock, the monitor can force the code block to be executed by only one thread at a time. That much is well known. But synchronized also synchronizes memory: in fact, it synchronizes the whole of the thread's memory with the "main" memory area. So executing geti3() does the following:
1. The thread acquires the lock on the monitor for the this object (assuming it is not locked; otherwise the thread waits until the lock is released)
2. The thread's memory is flushed and its variables are read in from "main" memory (the JVM can optimize this step with dirty sets so that only "dirty" variables are reloaded; conceptually it is the same, see section 17.9 of the Java Language Specification)
3. The code block is executed
4. Any changes to variables can now be written safely back to "main" memory (although geti3() does not change any values)
5. The thread releases the lock on the monitor for the this object
  So volatile only synchronizes the value of one variable between thread memory and "main" memory, while synchronized synchronizes the values of all variables and, on top of that, locks and releases a monitor. Clearly synchronized consumes more resources than volatile.

Attached, the original English text:
What does volatile do?

This is probably best explained by comparing the effects that volatile and synchronized have on a method. volatile is a field modifier, while synchronized modifies code blocks and methods. So we can specify three variations of a simple accessor using those two keywords:

geti1() accesses the value currently stored in i1 in the current thread. Threads can have local copies of variables, and the data does not have to be the same as the data held in other threads. In particular, another thread may have updated i1 in its thread, but the value in the current thread could be different from that updated value. In fact Java has the idea of a “main” memory, and this is the memory that holds the current “correct” value for variables. Threads can have their own copy of data for variables, and the thread copy can be different from the “main” memory. So in fact, it is possible for the “main” memory to have a value of 1 for i1, for thread1 to have a value of 2 for i1 and for thread2 to have a value of 3 for i1 if thread1 and thread2 have both updated i1 but those updated values have not yet been propagated to “main” memory or other threads.

On the other hand, geti2() effectively accesses the value of i2 from “main” memory. A volatile variable is not allowed to have a local copy of a variable that is different from the value currently held in “main” memory. Effectively, a variable declared volatile must have its data synchronized across all threads, so that whenever you access or update the variable in any thread, all other threads immediately see the same value. Of course, it is likely that volatile variables have a higher access and update overhead than “plain” variables, since the reason threads can have their own copy of data is for better efficiency.

Well if volatile already synchronizes data across threads, what is synchronized for? Well there are two differences. Firstly synchronized obtains and releases locks on monitors which can force only one thread at a time to execute a code block, if both threads use the same monitor (effectively the same object lock). That’s the fairly well known aspect to synchronized. But synchronized also synchronizes memory. In fact synchronized synchronizes the whole of thread memory with “main” memory. So executing geti3() does the following:

1. The thread acquires the lock on the monitor for object this (assuming the monitor is unlocked, otherwise the thread waits until the monitor is unlocked).
2. The thread memory flushes all its variables, i.e. it has all of its variables effectively read from “main” memory (JVMs can use dirty sets to optimize this so that only “dirty” variables are flushed, but conceptually this is the same. See section 17.9 of the Java language specification).
3. The code block is executed (in this case setting the return value to the current value of i3, which may have just been reset from “main” memory).
4. (Any changes to variables would normally now be written out to “main” memory, but for geti3() we have no changes.)
5. The thread releases the lock on the monitor for object this.

So where volatile only synchronizes the value of one variable between thread memory and “main” memory, synchronized synchronizes the value of all variables between thread memory and “main” memory, and locks and releases a monitor to boot. Clearly synchronized is likely to have more overhead than volatile.

[Original] Getting match results with Java regular expressions

For regular expression syntax, please refer to the documentation: Document
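
For illustration, a small self-contained example of pulling matches and capture groups out of a string with Pattern and Matcher:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexMatchExample {
    public static void main(String[] args) {
        Pattern pattern = Pattern.compile("(\\w+)=(\\d+)");
        Matcher matcher = pattern.matcher("width=1024, height=768");

        // find() walks through every match; group(n) returns the captured text.
        while (matcher.find()) {
            System.out.println(matcher.group(0)); // whole match, e.g. "width=1024"
            System.out.println(matcher.group(1)); // first group,  e.g. "width"
            System.out.println(matcher.group(2)); // second group, e.g. "1024"
        }
    }
}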

[Original] Memory/logger leak with multiple VelocityEngine instances

I asked about this strange problem before; the log kept reporting:
log4j:ERROR Attempted to append to closed appender named [null].

It turns out to be a Velocity bug, present in every release so far (the Velocity site says it will be fixed in 1.5):
> Key: VELOCITY-193
> URL: http://issues.apache.org/jira/browse/VELOCITY-193
> Project: Velocity
> Type: Bug
When creating and then releasing to garbage collection multiple VelocityEngine instances, the
instances are apparently not closing out or otherwise letting go of their logger instances. As a
result, code that needs to create and destroy several VelocityEngine instances will eventually choke and die. This happens with either Avalon Logkit or Log4j, although the exact nature of the choking differs. This test program isolates the problem:
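
A sketch of the kind of loop the report describes (assumed; not the reporter's actual program):

import org.apache.velocity.app.VelocityEngine;

public class VelocityLeakTest {
    public static void main(String[] args) throws Exception {
        int repetitions = Integer.parseInt(args[0]);
        for (int i = 1; i <= repetitions; i++) {
            // Each iteration creates and initializes a fresh engine, then discards it;
            // every init() opens another logger/appender that is never closed.
            VelocityEngine engine = new VelocityEngine();
            engine.init();
            System.out.println("Test repetition " + i);
        }
    }
}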

Run the program with an integer command-line argument specifying the number of times to cycle through the loop, and make sure velocity-1.3.1.jar, commons-collections.jar, and either an Avalon Logkit or Log4j JAR are on your classpath. (I tested with logkit-1.0.1.jar and log4j-1.1.3.jar.) What *should* happen is that the program completes its specified number of loops, doing nothing but writing “Test repetition” over and over with an incrementing number. What *does* happen, at least on my machine, depends on which logging package is provided for Velocity.

Using Avalon Logkit 1.0.1, the program runs fine for 252 iterations; on the 253rd, it aborts with
the following message:

“PANIC : Error configuring AvalonLogSystem : java.io.FileNotFoundException: /Users/ibeatty/
Development/javaDev/VelocityBugIsolator/velocity.log (Too many open files)”

Using Log4j 1.1.3, the program runs fine for only one iteration; on the second and any subsequent iterations, it continues but prints out a whole mess of

“log4j:ERROR Attempted to append to closed appender named [null].
log4j:WARN Not allowed to write to a closed appender.”

That happens for as long as I care to let it run (95 iterations, with something over 800 lines of
such errors per iteration by the end).

To me, it sure looks like Velocity is leaving dangling loggers behind as VelocityEngine instances
are created and discarded, and that the two logging systems respond differently to this but both have problems.

Why, might you ask, should anyone care about making many VelocityEngine instances? I ran into it when developing a major web app using JUnit to build comprehensive test suites. To run
independently, every test has to start from scratch, which means getting its own VelocityEngine.
Many tests means many instances, and the logging problem kicks in. Running JUnit test suites
within IntelliJ IDEA and using Log4j, the ERROR/WARN messages were more than a nuisance;
eventually, I’d start getting out-of-memory errors, too. These went away when I changed the tests to use a shared VelocityEngine instance (which caused its own set of problems).

Using binary download of Velocity 1.3.1, which claims to have been created on 2003-04-01.

I find it hard to believe nobody else has tripped over this before, so maybe it’s sensitive to the OS or something. It happened whether I compiled the test code with Javac or Jikes. Using Java
1.4.1_01.